Re: Debian and our frenemies of containers and userland repos

2019-10-21 Thread Enrico Weigelt, metux IT consult
On 05.10.19 19:03, Bernd Zeimetz wrote:
> For that, developers also need or want the
> latest shiniest software - which is something a distribution can't provide.

It can, but that needs different workflows and a higher degree of
automation (and of course the result wouldn't be as well tested).

Actually, for the Python world I've already done something: fully
automatic import and debianization of PyPI packages. It's still
experimental and part of another tool (which I'm using for building
customer-specific backport and extra repos):

https://github.com/metux/deb-pkg/tree/wip/pypi

> I'm wondering if there is something Debian can do to be even more
> successful in the container world. 

You could use dck-buildpackage --create-baseimage to do that.
Feel free to create some target configs, and I'll be happy to add them.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Debian and our frenemies of containers and userland repos

2019-10-21 Thread Enrico Weigelt, metux IT consult
On 07.10.19 13:17, Shengjing Zhu wrote:

> Why not have a repository for it, like dockerhub. So this becomes
> "pull latest build env", which saves lots of time("re-bootstrap" is
> still slow nowadays).

No idea how sbuild works these days (I turned away from it aeons ago,
as I found it too complicated to set up), but dck-buildpackage can do
both. It can try to pull an existing image for the given target, but
will bootstrap one when there isn't any.

IMHO, the best idea is treating images as nothing but a cache, and
having the build machinery bootstrap automatically when needed.

One thing on my 2do list for dck-buildpackage is keeping cache images
for dependency sets (eg. if the same package is rebuilt many times
during development) - installing dependencies can eat up a lot of time.
(for now, this can be achieved manually, by configuring a target with
dependencies already installed - but I don't like manual things :o)

BTW: one important point w/ dck-buildpackage for me was being able
to specify what's in the image. I really prefer to keep it minimal.



--mtx


-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Debian and our frenemies of containers and userland repos

2019-10-21 Thread Enrico Weigelt, metux IT consult
On 05.10.19 18:25, Bernd Zeimetz wrote:

Hi,

> Having something that works with git-buildpackage would be really nice,

:)

> though. Even better if it would allow to use the k8s API to build things...

Patches are always welcomed :)

There are some problems to be solved for remote hosts (IMHO, k8s only on
a local node doesn't make so much sense ;-)):

dck-buildpackage currently mounts some host directories (eg. local apt
repo and reflink'ed copy of the source tree) into the container. While
one could put docker nodes into a shared filesystem, that probably
wouldn't be so nice w/ k8s ...


--mtx


-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Debian and our frenemies of containers and userland repos

2019-10-21 Thread Enrico Weigelt, metux IT consult
On 05.10.19 03:31, Paul Wise wrote:
> On Fri, Oct 4, 2019 at 10:49 PM Enrico Weigelt wrote:
>> On 24.07.19 08:17, Marc Haber wrote:
>>
>>> Do we have a build technology that uses containers instead of chroots
>>> yet?
>>
>> Something like docker-buildpackage ?
> 
> AFAICT, docker-buildpackage doesn't exist 

I'm pretty sure it does exist, since I wrote it :p

https://github.com/metux/docker-buildpackage


--mtx


-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Debian and our frenemies of containers and userland repos

2019-10-04 Thread Enrico Weigelt, metux IT consult
On 24.07.19 08:17, Marc Haber wrote:

> Do we have a build technology that uses containers instead of chroots
> yet?

Something like docker-buildpackage ?


--mtx


-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Debian and our frenemies of containers and userland repos

2019-07-22 Thread Enrico Weigelt, metux IT consult

On 11.07.19 17:25, Yao Wei wrote:

Hi,


> It can be a "solid base" of container images and barebone systems, but
> the days are numbered as operating systems as free and focused on its
> mission (like Google COOS, Yocto, Alpine etc.) is evolving steady.
>
> Could it be a disaster for us?  And more importantly, do users care?


I don't think so.

COOS:   just yet another special-purpose distro, in this case for
docker hosts. Neither the first, nor the last one to come.
Yocto:  just yet another compile-yourself distro, focused on embedded,
that happens to be hyped by certain corporations.
(for small/embedded devices, I'd really recommend ptxdist).
Alpine: yet another distro, optimized for running in small containers

BTW: the idea of building small payload/application-specific
containers/chroots is anything but new. I've done it somewhere in
the '90s. But nowadays, these so-called "small" containers tend to be
bigger than whole machines of the '90s.

Containerization is a valid approach for some kinds of workloads
(eg. specific inhouse applications) that can be easily isolated from
the rest. But it comes at the price of huge redundancy (depending
on how huge some application stacks are). And unless everybody wants
to go back to maintaining everything on their own, we still need distros.

If different applications need to deeply interact (eg. various plugin
stuff, applications calling each other, etc), containerization doesn't
help much. (eg: how can you have a pure texlive in one container and
extra things like fonts, document classes, etc, in separate ones ? :o)

The whole point about containerization isn't the packaging and
deployment of individual applications - instead it's about automating
the rollout of fully-configured installations.

One thing seems to be right: folks who always have been hostile towards
the whole concept of distros now have a better excuse.


--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Backports needed for Firefox/Thunderbird ESR 68 in Stretch

2019-07-08 Thread Enrico Weigelt, metux IT consult
On 02.07.19 22:45, Moritz Mühlenhoff wrote:

Hi,

> ESR 68 will require an updated Rust/Cargo toolchain and build dependencies not
> present in Stretch (nodejs 8, llvm-toolchain-7, cbindgen and maybe more).
> Stretch was already updated wrt Rust/Cargo for ESR 60, so there's at least no
> requirement to bootstrap stage0 builds this time.

A few days ago I tried a newer rust/cargo version from unstable on
stretch. Unfortunately it failed miserably, eg. certain libs were
missing in the source tree, and some of them couldn't even be found
in the upstream git repo anymore.

Seems that rust/cargo needs a lot more attention.

> If we want to continue to have Firefox/Thunderbird supported in 
> oldstable-security
> after October, someone needs to step up to take care of backports to a 
> Stretch point
> release before October 22nd (or in case of poor timing, we can also release 
> build
> dependency updates via stretch-security).

ACK. I haven't had a chance to take a deeper look at the rust/cargo
issue yet (currently too occupied with other things). If anybody could
come forward with a solution, I'd be really glad.


--mtx


-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: OpenCL / handling of hardware-specific packages

2019-07-02 Thread Enrico Weigelt, metux IT consult
On 01.07.19 23:59, Rebecca N. Palmer wrote:

Hi,

>> So, installing an opencl-based package pulls in *all* cl driver stacks ?
> 
> If we do the above, yes by default, but the user can prevent this by
> explicitly installing any one.

Ok, that's fine, as long as it doesn't cause the already mentioned problems.

>> Please don't do that. This IMHO clearly belongs into the operator's
>> hands
> 
> Do you mean "not as long as it would cause the above bugs" or "not
> ever"?  If the latter, is it because of wasted storage/bandwidth or
> something else?

Bandwidth and install time are one issue, storage is another (yes, some
folks actually care about storage, eg. w/ containers :p), and a third is
reducing code and therefore potential bugs and attack surface.
In general, I'd like to keep my systems as minimal as possible.

> Do you also believe that the existing hardware-related -all packages
> (printer-driver-all(-enforce), va-driver-all, vdpau-driver-all,
> xserver-xorg-input-all, xserver-xorg-video-all) should not exist /
> should not be installed by default?

I don't have a problem with those packages, but there shouldn't
be dependencies on them, so that suddenly hundreds of packages
come in when just one is needed.

Maybe there should be some selection mechanism (not sure how
that could be implemented, yet).

>> (or possibly some installer magic).
> 
> Do we have such a mechanism?  I agree this would be better if it existed.

Honestly, I don't know. Perhaps that deserves a more detailed
discussion. At that point it would also be nice if we could
configure things like which editor or mailer to install.

Maybe this could be done w/ some virtual package + equivs magic.
I'll have to think about this ...

> The AppStream metadata format includes a field for "hardware this works
> with", and beignet-opencl-icd has one, but I don't know if any existing
> tools use this field.

I don't like the idea of making driver packages depend
on some desktop stuff :o

IMHO, it should be handled on dpkg/apt level.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: ZFS in Buster

2019-07-01 Thread Enrico Weigelt, metux IT consult
On 01.07.19 21:06, Enrico Weigelt, metux IT consult wrote:

Hi,

> IIRC the whole thing is actually about crypto stuff. Why don't zfs
> folks just use the standard linux crypto api (potentially introduce a
> new algo if the existing ones aren't sufficient) ?

Addendum: I just had a quick scan through the code and found a
completely home-grown AES implementation.

Seriously ?

Completely redundant cipher implementations from a likely understaffed
project are something I wouldn't like to have on my machines, least of
all in the kernel. There are just too many pitfalls (especially w/
buggy x86 CPUs) in crypto programming - I wouldn't dare to implement
that myself and run it on production systems.

And for performance, many folks like to use hw acceleration. The Linux
crypto API already provides that.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: ZFS in Buster

2019-07-01 Thread Enrico Weigelt, metux IT consult
On 07.06.19 10:16, Philipp Kern wrote:

Hi,

> This would not be the case here. But crippling the performance would
> indeed be an option, even though this would make Debian much less
> relevant for ZFS deployments and people would just go and use Ubuntu
> instead.

Is it really necessary to have some competition w/ Ubuntu on who's
got the larger user base ? In which way is that relevant for the
progress of Debian ?

For me, personally, when working on FOSS, it never mattered how many
users are out there - I don't need to "sell" anything.

> I personally wonder why a kernel which provides a module interface does 
> not provide a way to save FPU state, but alas, they made their decision.

Because that's really low-level, arch-specific stuff. I don't even
recall any platform driver that ever cares about such things. From a
kernel hacker/maintainer pov, the idea of having an arch-specific
filesystem driver sounds really weird.

This function IIRC was just a workaround for kvm, which always had been
suboptimal and was replaced by a better solution. Since nobody had used
it anymore for quite some time, it got removed. And regarding LTS, I
don't recall that Greg ever made any commitment to not removing obsolete
and unused stuff (he's just reluctant to put too much extra work into
that for his lts trees).

Of course, as users of kernel-internal APIs, only the in-tree stuff
matters - this has always been the policy in Linux development
(at least as far as I remember).

IIRC the whole thing is actually about crypto stuff. Why don't zfs
folks just use the standard linux crypto api (potentially introduce a
new algo if the existing ones aren't sufficient) ?


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: AMDGPU+OpenCL with Debian?

2019-07-01 Thread Enrico Weigelt, metux IT consult
On 19.06.19 09:09, Rebecca N. Palmer wrote:

Hi,

> I proposed [0] fixing this by creating a metapackage for "all OpenCL
> drivers" (similar to the ones for graphics).  However, having unusable
> OpenCL drivers installed can trigger bugs: [1] in llvm, and some
> applications that treat "no hardware for this driver" as "fatal error"
> instead of "try the next driver".

So, installing an opencl-based package pulls in *all* cl driver stacks ?

Please don't do that. This IMHO clearly belongs into the operator's
hands (or possibly some installer magic).

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: scratch buildds

2019-07-01 Thread Enrico Weigelt, metux IT consult
On 15.06.19 11:20, Helmut Grohne wrote:

Hi,

> Unlike the ${dist}-proposed variant, the scratch distribution can be
> set up entirely outside Debian. It only needs someone doing the work
> with no involvement of DSA. Wait, this reminds me of something. Luca
> Falavigna put up debomatic-${arch}.debian.net. And it has piuparts
> and lintian!

As somebody who does backports stuff and project/client specific repos,
I've created something of my own, which can build whole stacks of
packages and create apt repos. It also allows fine control over what is
in the base image, extra repos, etc.

The bad thing for me is: I've only got limited computing power and
a very limited set of available archs (just x86 and some older arm).
So, having a CI that can build for all the debian-supported archs,
allows using extra repos and tailored base images, works on git directly
(fully debianized branch) and publishes the repos to the outside world,
would be a really cool thing for me. That should also cover this
'scratch builds' usecase.

I admit, I haven't checked whether gitlab-ci can already do that.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey: git packaging practices / repository format

2019-07-01 Thread Enrico Weigelt, metux IT consult
On 01.07.19 15:09, Andrey Rahmatullin wrote:
> On Mon, Jul 01, 2019 at 03:04:26PM +0200, Enrico Weigelt, metux IT consult 
> wrote:
>> On 29.05.19 17:41, Andrey Rahmatullin wrote:
>>
>>>> Perhaps we should update policy to say that the .orig tarball may (or
>>>> even "should") be generated from an upstream release tag where
>>>> applicable.
>>> This conflicts with shipping tarball signatures.
>>
>> Does that really need to be the upstream's tarballs ?
> The idea is checking the sig that the upstream made, with the key the
> upstream published.

Okay, but is that actually used (by somebody other than the maintainers) ?

>> If it's about validating the source integrity all along the path
>> from upstream to deb-src repo, we could do that by an auditable
>> process (eg. fully automatic, easily reproducible transformations)
> Sounds very complicated.

I don't think so, at least if we're considering the whole workflow.

In the end, it's just a matter of trust-chains:

* upstream should use signed tags - we can collect their pubkeys
  in some suitable place (which we should do anyway).
* if upstream doesn't sign, the maintainer has to trust them blindly,
  or needs to verify the code anyways. We could use some half-automated
  process for verifying the diff between the upstream tarball and the
  scm repo (we could add our own signatures here)
* finally the maintainer signs his final tree (the one that's used for
  actually building the final packages)

I believe that 99% can be done automatically, with a little bit of
tooling.
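
For illustration, a minimal sketch of both ends of that chain, assuming
upstream signs release tags and their pubkey is already in the keyring
(the tag and version names are made up):

  # check upstream's signed release tag against their published pubkey
  git verify-tag v1.2.3

  # ... review, debianization etc. happen here ...

  # the maintainer signs the final tree used for the actual build
  git tag -s debian/1.2.3-1 -m "debian release 1.2.3-1"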

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey results: git packaging practices / repository format

2019-07-01 Thread Enrico Weigelt, metux IT consult
On 28.06.19 23:42, Ian Jackson wrote:

>https://wiki.debian.org/GitPackagingSurvey
> 
> Unfortunately due to bugs in the Debian and Wiki CSS styles, it is
> displayed very suboptimally unless you log into the wiki and set your
> style to "Classic".  (I have filed bug reports.)

Very nice work.

For the next stage I think it would be nice to assign some kind of
canonical names for the individual workflows and try to create some
precise documentation.

Once we have that, we could add some machine-readable file to the d/
trees stating the exact workflow and possibly extra metadata
(eg. where to find certain repos/branches, etc).

> [1] This does *not* include the one response from a Debian downstream.

Talking about me ? ;-)

> The task of being a Debian downstream is rather different and it
> doesn't make sense to try to represent that in the same table.

I don't think it's so different in that regard. IMHO the only difference
is that I'm not an official debian maintainer and not using buildd.
But I believe the same approach could be used by any actual Debian
maintainer if he just also creates dsc's and pushes them onto buildd.

Maybe it would be better to just differentiate the statistics between
official debian and others (such as downstreams).


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey: git packaging practices / repository format

2019-07-01 Thread Enrico Weigelt, metux IT consult
On 29.05.19 23:39, David Bremner wrote:

>> Also, how do you move to a new upstream version ?
> 
> use git merge, typically from an upstream tag, or from a debian specific
> upstream branch with tarballs imported on top of upstream history.

Uh, that creates a pretty ugly, unreadable git repo and makes
interacting w/ upstream (eg. submitting patches) unnecessarily hard.

That's something I regularly see w/ crappy vendor kernels, which then
take lots of time to bring into a somewhat usable state :o


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey: git packaging practices / repository format

2019-07-01 Thread Enrico Weigelt, metux IT consult
On 29.05.19 17:41, Andrey Rahmatullin wrote:

>> Perhaps we should update policy to say that the .orig tarball may (or
>> even "should") be generated from an upstream release tag where
>> applicable.
> This conflicts with shipping tarball signatures.

Does that really need to be the upstream's tarballs ?
Why not just automatically generate the orig tarballs and fingerprint
*them* (not caring about the upstream's tarball at all) ?

If it's about validating the source integrity all along the path
from upstream to deb-src repo, we could do that by an auditable
process (eg. fully automatic, easily reproducible transformations)
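
For illustration, a minimal sketch of such a generation step, assuming
upstream uses git release tags (the package and tag names are made up):

  # deterministically generate the orig tarball from the release tag
  git archive --format=tar --prefix=foo-1.2.3/ v1.2.3 \
      | gzip -n > foo_1.2.3.orig.tar.gz

  # fingerprint the generated artifact itself
  sha256sum foo_1.2.3.orig.tar.gz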


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey: git packaging practices / repository format

2019-07-01 Thread Enrico Weigelt, metux IT consult
On 29.05.19 15:14, Ben Hutchings wrote:

> Perhaps we should update policy to say that the .orig tarball may (or
> even "should") be generated from an upstream release tag where
> applicable.

ACK. But there should also be some definition or at least a guideline on
what is considered "applicable" (or better: when it is okay not to do
that) and some rules on how the generation process shall be done.
(maybe having some script with a defined name ?)


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey: git packaging practices / repository format

2019-07-01 Thread Enrico Weigelt, metux IT consult
On 02.06.19 16:22, Sam Hartman wrote:
>>>>>> "Nikolaus" == Nikolaus Rath  writes:
> 
> 
> Yes, but the lack of similarity is the thing I find interesting.
> In git-pdm (and git-debrebase), you retain all the rebases and stitch
> them together with various pseudo-merges into a combined history.
> 
> If you could avoid that and have a more pure rebase workflow, I think it
> would be nice.
> As Ian points out, we don't know how to do that because we don't know
> how to figure out whether you have the latest rebase.

Could you give me some more details on the intended workflow ?
Why does one need that information at all ?


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey: git packaging practices / repository format

2019-07-01 Thread Enrico Weigelt, metux IT consult
On 02.06.19 00:57, Ian Jackson wrote:

Hi,

> The difficulty with this as a collaboration approach is that you can't
> tell whether a rebase is "the newest", at least without a lot of
> additional information.  That additional information is the "clutter"
> if you like which the "cleaner" history doesn't contain.

Depends on what you actually call "new".

I think for each supported upstream version there should be a separate
maintenance branch (we can track the maintenance status in some other
place, maybe some extra metadata branch). These individual branches are
never rebased, just created once on the corresponding upstream tag.
(well, there could be extra devel branches that indeed are rebased on
upstream's master, but that has to be explicitly communicated)

> Both git-debrebase and git-dpm use a special history structure to
> record what the most recent rebase is.  Obviously I prefer
> git-debrebase since I wrote it - using a different data model - even
> after I knew about git-dpm and its data model.  But maybe this isn't
> the thread for that advocacy conversation.

What exactly does this tool do ?


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey: git packaging practices / repository format

2019-07-01 Thread Enrico Weigelt, metux IT consult
On 29.05.19 14:47, Sam Hartman wrote:

Hi,

> I'm certainly going to look at dck-buildpackage now, because what he
> describes is a workflow I'd like to be using within Debian.

:)

Maybe you'd also find that useful: https://github.com/metux/deb-pkg

It's a little tool for automatically creating whole repos, eg.
automatically cloning repos (even w/ multiple remotes), building
whole dependency trees (deps are explicitly configured instead of
fetched from the packages, intentionally), etc.

Note: the "master" branch is pretty hackish, basically an example - the
idea is to branch off from "base" for each project and do the necessary
changes directly in git. (from time to time it's worth rebasing).

> For some projects I want to ignore orig tarballs as much as I can.  I'm
> happy with native packages, or 3.0 quilt with single-debian-patch.

single-patch isn't so nice for interacting w/ upstreams.

> I don't want merge artifacts from Debian packaging on my branches.
> I'm happy to need to give the system an upstream tag.

I'd prefer always having a debian branch on top of the upstream release
tag and doing all the debianization there, possibly per release or
dist release, flavour, etc.

> I'm happy for a dsc to fall out the bottom, and so long as it
> corresponds to my git tree I don't care how that happens.

ACK. I see dsc just as an autogenerated intermediate stage for certain
build systems (eg. buildd) or for providing src repos.

> I have a slight preference for 3.0 format over 1.0 format packages.  3.0
> makes it possible to deal with binaries, better compression and a couple
> of things like that.  The quilt bits are (in this workflow) an annoyance
> to be conquered, not a value.

ACK. That's why I do everything in git only.
I don't really care what the src packages look like, as long as I've got
an easy and fully automatic way of getting a clean git tree with all
the necessary changes already applied as readable (and documented)
git commits.

> The thing his approach really seems to have going for it is that he
> gives up on the debian history fast forwarding and instead rebases a lot
> for a cleaner history.

ACK. Personally, I don't see any actual value in a separate Debian
history, or even a history of the text-based patches.

git-rebase is one of my primary daily tools.

> If we could figure out a way to collaborate on something like that well,
> it might be a very interesting tool to have.

ACK.

I believe we should set some computable policies on how orig trees are
generated from actual upstream repos and patches are handled, so we can
do imports/transformations fully automatically.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey: git packaging practices / repository format

2019-07-01 Thread Enrico Weigelt, metux IT consult
On 29.05.19 13:50, Ian Jackson wrote:

Hi,

> Oh.  I think I have misunderstood.  I think you are describing a git
> workflow you use as a *downstream* of Debian, not as a maintainer
> *within* Debian.

Yes, something like that. I'm maintaining additional repos for certain
projects, usually backports or packages that aren't in Debian at all.
Pretty often it's just rebasing the debianization onto the latest
upstream releases.

A very annoying aspect for me is the way upstream sources and patches
are managed, eg.

* I need to reproduce how exactly orig trees are constructed from the
  actual upstream (git) trees. We often have autogenerated stuff in
  here (eg. autotools stuff), often files are missing, stuff moved
  around, etc, etc.
  --> here we should at least have some fully automatic transformation
  system within git. Probably not within the actual source packages
  themselves, possibly as a cross-package project.
* text-based patches are costly to reproduce into a git repository
  * many just don't apply via git-am --> at least we should fix them
    and add a policy that all patches need to be git-am compatible
    (no, quilt isn't so much help here, and I find it pretty complex
    to use - compared with git - I need rebase)
  * we don't have any clear (machine-readable) distinction between types
    of patches, eg. whether they're generic or really debian-specific
  * sometimes we even have patches against autogenerated stuff
  * many patches lack any clear description
* sometimes we even have weird transformations on the source tree
  (usually on the orig tree, but sometimes also within the rules file)

A few days ago, I tried to rebuild a recent rustc (which I need for
tbird), but got a lot of strange failures. It also lacked the source
code for some library where that specific version doesn't even exist in
the upstream git repo anymore. I know that rust is an ugly beast, but
those things just should not happen. The rust toolchain seems to be a
good candidate for a fully automatic git transformation (eg.
transforming submodules into merges, etc).

I'd like to propose some additions to the packaging policies:

* there shall be no source tree transformations in the build process,
  all necessary changes shall be done by patches
* the upstream build process shall be used as-is whenever possible,
  if necessary patch it. (eg. no manual build or install rules, etc)
* there shall be no conditional applying of patches - the queue shall
  always be applied as a whole. If certain code changes are only
  applicable under certain build conditions (eg. different flavours like
  with the kernel), proper build-time config flags shall be introduced.
* all patches shall be git-am compatible and have a clear description
  of what they're actually doing
* patches shall be written in a generic/upstreamable way if possible
  (eg. introduce new build-time config flags if necessary)
* patches shall be grouped into generic/upstreamable and distro-specific
  ones, the differentiation shall be easily machine-readable (eg.
  message headers), and the generic ones shall come first in the queue.
* no patching of autogenerated files
* autogenerated files shall be removed from the source and always
  regenerated within the build process
* the debian/ directory shall contain a machine-readable file
  identifying the exact upstream source tree (eg. canonical version, url
  to tarball and git repo, tag name + commit id, etc) - a sketch of such
  a file follows after this list
* the minimum required debhelper version shall be the one present in
  stable (or better oldstable) - unless there's really no other sane way
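
For illustration, such a metadata file might look like this (a purely
hypothetical sketch - neither the file name nor the field names are an
existing standard):

  # debian/upstream-source (hypothetical)
  Canonical-Version: 1.2.3
  Tarball-URL: https://example.org/foo/foo-1.2.3.tar.gz
  Git-URL: https://example.org/foo/foo.git
  Git-Tag: v1.2.3
  Git-Commit: 8f3c2d1ab0...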

> And I think what you are saying is that you don't use source packages
> (.dsc) except maybe in the innards somewhere of your machinery.
> I think that is a good way for a downstream to work.  Certainly when I
> modify anything locally I don't bothere with that .dsc stuff.

Right, I've always found that very complicated and hard to maintain.
Actually, I'd like to propose phasing out that old relic. With git
we just don't need it anymore.

> But my aim in this thread was to capture how people work *within*
> Debian, where a maintainer is still required to produce a .dsc.

I don't think that .dsc really makes the picture so different. It can
always be generated automatically. IMHO, it's only needed as an output
format for creating src repos and as an intermediate transport for
traditional tools like buildd.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: ZFS in Buster

2019-05-29 Thread Enrico Weigelt, metux IT consult
On 28.05.19 18:43, Dan wrote:

> Sadly the Linux Kernel has introduced a commit in kernel 4.19 and 5.0
> that prevents ZFS from using SIMD. The result is that ZFS won't be
> usable in Buster. See the following issue
> https://github.com/zfsonlinux/zfs/issues/8793

We recently had this discussion on lkml - yet another case of 3rd-party
folks who just don't follow the license rules.

It's not the kernel that broke zfs, it's zfs that broke itself. The
kernel is GPL, and they just have to follow the rules or go away.

OOT modules are conceptually messy in the first place. They're sometimes
okay as a temporary workaround, until things get mainlined. But
intentionally keeping things oot for a long time is just silly and
creates lots more problems than it solves.

And they're even using now *deeply* arch-internal functions directly.

> NixOS reverted that particular commit:
> https://www.phoronix.com/scan.php?page=news_item=NixOS-Linux-5.0-ZFS-FPU-Drop

Intentional license violation. Not funny.

> Debian is the "Universal Operating System" and gives the user the
> option to choose. It provides "vim and emacs", "Gnome and KDE",

If you wanna have something new included, you'll have to sit down and
do the actual work. At the end of the day, it's that simple.

> Would it be possible to provide an alternative patched linux kernel
> that works with ZFS?

You mean patching against the license ?

> The ZFS developers proposed the Linux developers to rewrite the whole
> ZFS code and use GPL, but surprisingly the linux developers didn't
> accept. See below:
> https://github.com/zfsonlinux/zfs/issues/8314

Wait, no. It's not that we refused anything (actually, I don't even
recall any decent discussion on that @lkml). There wasn't even anything
to accept or refuse - except the existing code, which is nowhere near
a quality where any maintainer would even want to take a closer look.

The major problem is that ZoL always has been oot on purpose, which is
the wrong approach to begin with. That also leads to bad code quality
(eg. lots of useless wrappers, horrible maintenance, ...)

What ZoL folks could do is rewrite it step by step to use mainline
functionality wherever technically feasible and work closely with
upstream to introduce missing functionality. Obviously, their current
proprietary userland interface can't be accepted for mainline - it
has to be reworked to conform w/ the standard uapi (eg. we already
have one for things like snapshots, deduplication, quotas, ...)

But it's up to ZoL developers to do the actual work and post patches
to lkml. There won't be anybody else doing that.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey: git packaging practices / repository format

2019-05-29 Thread Enrico Weigelt, metux IT consult
On 28.05.19 19:31, Simon McVittie wrote:

Hi,

> Debian Linux kernel
> ===
> 
> Tree contains: an incomplete debian/ directory, notably without d/control,
> and no upstream source
> Changes to upstream source are: d/patches only
> Baseline upstream: changelog version => .orig tarball
> Patches managed by: ???
> Special build tool: there is a pre-build step to generate d/control

I'm handling the kernel very differently (actually the official packages
never actually built at my site), similar to what I've described in
my other mails - layered branches:

* layer 0: upstream tag (linus or greg)
* layer 1: generic patches for making upstream's 'make deb-pkg' work
   with usual debian workflows (eg. not creating debian/rules
   from there anymore, but using a generic one instead)
* layer 2: dist and target specific customizations (changelogs, .config,
   etc ...)

The whole thing again is built via dck-buildpackage (dpkg-buildpackage
should also work, but I never call it manually anymore, since I wrote
dck-buildpackage).

Note that I don't even try to create some one-fits-all superpackage for
all archs, flavours, etc. - instead I'm using separate layer 2 branches
for that.

(for maintaining lots of kernel configs based on some meta config, I've
got a separate tool)


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey: git packaging practices / repository format

2019-05-29 Thread Enrico Weigelt, metux IT consult
On 29.05.19 01:39, Simon McVittie wrote:

Hi,

> You might reasonably assume that, but no, they are not. mesa (and probably
> other xorg-team packages) uses v1.0 dpkg-source format combined with
> dh --with quilt, so deliberate Debian changes can be either direct
> changes to the upstream source code, or quilt patches in d/patches,
> or a mixture. Additionally, mesa uses d/source/local-options to ignore
> files that only exist in the upstream git tag (which is what gets merged
> into the packaging git branch), but not in the upstream `make dist` output
> produced from that tag (which is used as the .orig tarball).

hmm, sounds quite complicated ... anyone here who could explain why
exactly they're doing it that way ?

by the way: that's IMHO an important piece of information we should also
collect: why exactly some particular workflow was picked

> My understanding is that this unusual difference between the .orig
> tarball and what's in git is an attempt to "square the circle" between
> two colliding design principles: "the .orig tarball should be upstream's
> official binary artifact" (in this case Automake `make dist` output,
> including generated files like Makefile.in but not non-critical source
> files like .gitignore) and "what's in git should match upstream's git
> repository" (including .gitignore but
> not usually Makefile.in).

Since we have git, I've completely given up on the orig tarball - I'm
just basing my work on the release tags. And, of course, there shouldn't
be anything autogenerated in the git repo - always recreate everything
(*especially* autotools-generated stuff). The orig tarball, IMHO, is a
long-obsolete ancient relic.

For upstreams that don't have a git repo yet, I set up an importer
first, and call that my upstream.
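
A minimal sketch of such an importer, assuming the upstream only
publishes release tarballs (the URL and names are made up):

  # import each upstream release tarball as a commit + tag
  git init foo-upstream && cd foo-upstream
  for v in 1.0 1.1 1.2; do
      # wipe everything except .git, so removed files disappear too
      find . -mindepth 1 -not -path './.git*' -delete
      wget -O - "https://example.org/foo/foo-$v.tar.gz" \
          | tar xz --strip-components=1
      git add -A
      git commit -m "import upstream release $v"
      git tag "v$v"
  done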


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey: git packaging practices / repository format

2019-05-29 Thread Enrico Weigelt, metux IT consult
On 28.05.19 22:08, Ian Jackson wrote:

Hi,

> Please can we leave aside discussion of the merits or otherwise of
> each of these formats/workflows.
> 
> Perhaps we can talk about that (again!) at some point, but it tends to
> derail any conversation about git packaging stuff and I don't want
> this thread derailed.

I understand your point, but I believe we really should discuss this.
(maybe based on some specific examples)

OTOH, I'll only participate in such discussions if I see that they're
really going forward ... I've already tried that several times in recent
years, but with no success, so I just gave up :(


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey: git packaging practices / repository format

2019-05-29 Thread Enrico Weigelt, metux IT consult
On 28.05.19 22:30, Ian Jackson wrote:
> Hi, thanks for replying.  You have an interesting workflow which I
> think I need to ask some questions about before I can document it
> fully.

I'd call it the 'git-only-workflow' ;-)

The main reasons behind are:
* i wanna be able to easily rebase onto upstream anytime
* i wanna keep generic changes separate from the distro-specific stuff
  (usually I try to make changes very generic, so they can go into
  mainline, eg. instead of directly patching things like paths, etc,
  I'm adding new build options, ...)
* i wanna easily bring generic changes upstream
* i don't ever like to cope w/ text-based patches anymore (all these
  apply/unapply cycles really suck :p) - git is much easier to handle,
  IMHO
* i wanna have exactly the build tree in my git repo
* i don't wanna version patches (reading diffs of diffs is not quite
  useful :o)

Actually, the workflow is a tiny bit more complex: I'm using layered
branches (regularly rebased):

layer 0: upstream releases
layer 1: per release maintenance branches w/ generic (hopefully
 upstreamable) fixes - based on the corresponding upstream
 release tags (or potentially their maint branches)
layer 2: per distro and release debianized branches
 (sometimes some layer 1.5 for really generic deb stuff)

Branches and tags have a canonical naming scheme - ref name prefixes,
canonical version numbers, ... (eg. anything for debian stretch is
prefixed 'stretch/' ...). An illustrative layout is sketched below.
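
For illustration, such a layout might look like this (only the
'stretch/' prefix is taken from the actual scheme; the other ref names
are invented):

  v1.2.3              layer 0: upstream release tag
  fixes/v1.2.3        layer 1: generic fixes on top of v1.2.3
  stretch/v1.2.3      layer 2: debianized for Debian stretch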

Years ago, I already tried to form layer 1 into a greater, cross-distro
community, where stabilization efforts are shared between many distros
(kinda like send-patches.org, but w/ a high degree of normalization
and automation). It was called the 'oss-qm' project (github org with the
same name). But the interest from dist maintainers was asymptotically
approaching zero, from below.

> Enrico Weigelt, metux IT consult writes ("Re: Survey: git packaging practices 
> / repository format"):
>> I'm always cloning the upstream repo, branch off at their release tag
>> and add all necessary chanages as individual git commits - first come
>> the generic (non-deb specific) patches, then the deb specific ones.
>> No text-based patches, or even magic rewriting within the build process.
>> The HEAD is exactly the debianized source tree,
> 
> What source format do you use ?  What is in debian/source/format, if
> anything ? 

Usually "3.0 (quilt)", but I actually don't really care so much. Just
picked that some time, as it just worked, and never really though about
it anymore :p

> Do you handle orig tarballs at all ?

No. I'm exclusively using docker-buildpackage, which directly operates
on the final source tree - no intermediate steps like unpacking,
patching, etc.

One of the fine things (besides simplicity) is that if anything goes
wrong, I can just jump into the container (it intentionally doesn't
clean up failing containers) and directly work from there (the git
repo is also there).

> When you go to a new upstream, you make a new git branch, then ?

git checkout <old-debianized-branch> -b <new-branch>
git rebase <new-upstream-tag>

And then see if it works, fixing things, etc.
Of course, I also care about self-consistent and understandable
commits - git history is documentation, not a rotating backup ;-)

> Do you publish this git branch anywhere ?

https://github.com/oss-qm

(from time to time I also send patches upstream)

>> which is then fed to dck-buildpackage.
> 
> What is that ?  

https://github.com/metux/docker-buildpackage

It's a little tool that sets up build containers (also creates base
images on-demand), including build tools, extra repos, etc, runs
the build in the container and finally pulls out the debs.

The main audience are folks that maintain extra repos (eg.
customizations, backports, etc) - that's one of the things I'm
regularily doing for my clients.

I've got another toolkit on top of that, which helps maintaining
whole repos, including managing git repos and their remotes,
dependency handling, etc. It's actually not a standalone tool,
but a foundation for easily setting up your own customized build
environment. I'm using it for all my customers who get apt repos,
but also for backports and depotterization.

(Note: the 'master' branch currently is crappy, more a playground, w/
lots of things that have to be cleaned up ... for production use
fork from the 'base' branch.)

> manpages.debian.org wasn't any help.

It's not in official Debian. I announced it long ago, but nobody
here really cared. I've tried to convince debian maintainers of
slightly less insane scm workflows (just look at the kernel :p), but
failed, so I don't waste my time anymore and instead just clean up the
mess for those packages that I actually need.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Survey: git packaging practices / repository format

2019-05-28 Thread Enrico Weigelt, metux IT consult
On 28.05.19 17:51, Ian Jackson wrote:
> While trying to write the dgit FAQ, and some of the relevant docs, it
> has become even more painfully obvious that we lack a good handle on
> what all the different ways are that people use git to do their Debian
> packaging, and what people call these formats/workflows, and what
> tools they use.
>
> Can you please look through the table below and see if I have covered
> everything you do ?



I'm always cloning the upstream repo, branching off at their release tag
and adding all necessary changes as individual git commits - first come
the generic (non-deb specific) patches, then the deb-specific ones.
No text-based patches, or even magic rewriting within the build process.
The HEAD is exactly the debianized source tree, which is then fed to
dck-buildpackage.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Introducting Debian Trends: historical graphs about Debian packaging practices, and "packages smells"

2019-04-16 Thread Enrico Weigelt, metux IT consult
On 13.04.19 10:20, Lucas Nussbaum wrote:
> TL;DR: see https://trends.debian.net and
> https://trends.debian.net/#smells
> 
> Hi,
> 
> Following this blog post[1] I did some work on setting up a proper
> framework to graph historical trends about Debian packaging practices.
> The result is now available at [2], and I'm confident that I will be
> able to update this on a regular basis (every few months).

Just a quick idea:

For packages using git, can you also trace how they're using it ?

There are several approaches in use: some only track the debian/
subtree, some track the source tree plus debian/, some w/ extra
text-based patches, some w/ patches already applied in git.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: PPAs (Re: [Idea] Debian User Repository? (Not simply mimicing AUR))

2019-04-16 Thread Enrico Weigelt, metux IT consult
On 11.04.19 09:44, Mo Zhou wrote:

> Different from that, duprkit's design don't hope to limit the user
> with any pre-defined "sequence", but enable the users to selectively
> call the functions they need. In other words, the user can define how
> to deal with the prepared source+debian directories, afterall the
> .durpkg header is a shell script. That said, I think some more helper
> functions would be nice: [1].

I'm still struggling to understand why simply using an old-fashioned
./debian/rules file (possibly w/o using debhelper+friends) isn't
sufficient here.

AFAIK, all that tools like buildd do for the actual build is call that
rules file w/ some specific target names (eg. 'binary'). You can put
anything you like in here - it's just a makefile. Theoretically, you
could also use a shell script instead.

If you drop the idea of having everything in a single file in favour
of debian trees (= something that has the 'debian' subdirectory with
a 'rules' file in it), the existing debian toolchains could be used.
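
For illustration, the classic minimal debhelper-based debian/rules - a
hand-written one just has to provide the same make targets (binary,
build, clean, ...):

  #!/usr/bin/make -f
  # let debhelper drive every target
  # (the recipe line must be indented with a tab)
  %:
  	dh $@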

> My blueprint includes a very light-weight/simple dependency tracking
> mechanism. And I assume the project don't need to handle complex dep
> trees like apt. Because:
>
> 1. If a package is common and useful enough, such that it has been
>    adopted by many other projects, why don't I package it for the
>    official Debian release? So, I expect that most packages that DUPR
>    will deal with, are actually leaf or near-leaf packages on the
>    dependency tree.

Okay, that's a different topic. We have three options here:

a) put it into the official debian repo. that would go the usual way,
   but takes pretty long until the next release is out and the desired
   audience actually uses it.

b) add it to backports repos. i'm not sure how the actual workflows
   and release timelines look here.

c) go the PPA route. here we'd need some repo-level dependency handling
   (not sure what tools exist here), and we'd have to coordinate between
   several PPAs

> 2. Some of my targeted upstreams do sourceless binary tarball release.
>    They seldom get into the dependency trouble...

When I have to touch such stuff, I basically always run into trouble.
Many subtle breakages that are extremely hard to resolve (even to track
down). That stuff I only run in containers. Binary-only stuff is
not trustworthy at all, so it really should be strictly confined.

Those vendors (eg. Microsoft/Skype) also like to mess w/ package manager
configuration, have implicit dependencies like silly Lennartware, etc.
I never ever run such crap outside a strictly confined container.

One of the worst things I've ever seen comes from National
Instruments (which doesn't support Debian anyway, just ancient RHEL).
Traditionally they only provided ridiculous installer programs
(just like they're used to from the dilettantic Windows world)
that do lots of really weird things, even messing w/ the kernel
(yeah, they still insist on binary-only kernel modules, which is
always broken-by-design).

Sometime last summer they learned what package repos are for -
well, just partially. They then messed w/ the repo configs and
installed a globally trusted package source with explicitly disabled
authentication and plain http. Boom - 0day !

Due to their long history of hostility, total bullshit and censorship in
their own "community", I posted that @full-disclosure (even government
institutions like the BSI called me for interviews on that matter -
their products also run in critical infrastructure like power plants).
Again it took several months for the issue to be mitigated by NI.

> 3. Inserting a DUPR package into the near-root part of the Debian
>    dependency tree is, generally speaking, a terrible idea. Only
>    those who aware of what they are doing will do this.

ACK. That stuff belongs in throwaway containers.

> The `bin/duprCollector` will collect meta information from a collection
> (and will form a dependency tree in the future). I have no plan to
> rethink about the "get-orig-source" target since there are ... lots
> of weird upstreams in my list...

Maybe we should talk about some of these cases, to get a better idea.

In general, we IMHO should rethink the workflows for creating the actual
buildable debian tree from upstream releases (in many packages that's
still pretty manual and hackish)


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: PPAs (Re: [Idea] Debian User Repository? (Not simply mimicing AUR))

2019-04-16 Thread Enrico Weigelt, metux IT consult
On 10.04.19 16:56, Helmut Grohne wrote:

Hi,

> I looked into this. Your reasons are sound and you are scratching
> your itch. This is great.

ACK. It's always good when people get their hands dirty and work on
solving actual problems. Even if the actual output (=code, etc) finally
doesn't get wide use or even gets thrown away completely, we still
learn a lot that way.

Looking back at my earlier years, I've written lots of things that
never got actually used, but I learned a lot that way.

> Your implementation goes straight from .durpkg -> .deb. I question
> this decision: We already have lots of tools to go from .dsc to .deb.
> Your implementation replicates part of that and I think, this is bad
> as it makes it harder to collaborate.

I made a similar decision w/ dck-buildpackage, because I came to the
conclusion that intermediate steps via dsc are just unnecessary
complexity and slowdown. But the motivation of dck-buildpackage was
getting rid of complicated and cumbersome things like pbuilder.

So, I can understand his decision - he probably doesn't need anything
from the dsc-based tools, as he's operating in a completely different
scope.

> Let me propose a rather intrusive interface change to duprkit. What
> if the result of building a .durpkg was a .dsc rather than a .deb?
> Then you could split duprkit into two tools:
>
>  * One tool to build source packages from .durpkg files on demand.
>  * One tool to build a specific binary package from a given deb-src
>    repository.

Let me propose an even more radical approach: let it operate even one
step earlier in the pipeline, by just generating a debianized source
tree. You could then use the tool of your choice to create a dsc from
that and put it in whatever kind of pipeline you prefer. My personal
choice here would be dck-buildpackage, and my infrastructure on top of
that.

By the way, this opens up another common topic: how do we get from an
upstream tree (in a git repo) to a debianized source tree w/ minimal
manual effort ?

> Now in principle, the latter is simply sbuild or pbuilder, but there
> is more to it:
>  * Given the binary package name, figure out which source package
>    must be built.

Yet another tricky issue. The primary data source for that is usually
the control files. But those are sometimes autogenerated, too.

Could we invent some metadata language for this, that also can handle
tricky cases like the kernel ?

>  * Given that source package's Build-Depends, figure out what other
>    binary packages need to be built.
>  * Recurse.
>  * Build them in a suitable order.

You're talking about building whole overlay repos ?
Then I might have something for you:

https://github.com/metux/deb-pkg

Note: it's still pretty hackish and needs some local per-project
customizations. I haven't had the time to make a general-purpose
standalone package of it.

I'm just using it for building private extra repos for my customers.

If anybody likes to join in and turn it into some general purpose
package, let's talk about that in a different thread. The first step
would be creating a decent command line interface (for now, the run-*
scripts are just project-specific dirty hacks to save me from typing
too much ;-)).

> (Some people will observe that this is the "bootstrap" problem. ;)

Not really a bootstrap problem, but a dependency problem. Easier to solve :p

> There is one major difficulty here (and duprkit doesn't presently
> solve that either): If you figure that some binary package is
> missing, you have no way of knowing which .durpkg file to build to
> get the relevant source package.

Yes, he'd have to reinvent the dependency handling. This is one of the
points that make me question the whole approach and favour completely
different approaches like classic containers.

> Now let's assume that you do want to allow complex dependencies in this
> user repository. In this case, it would make sense to trade .durpkg
> files for plain "debian" directories with an additional debian/rules
> target to obtain the source. (We removed "get-orig-source" from policy a
> while ago, but maybe this is what you want here.)

Sounds like a good idea.
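
For illustration, such a target might look like this (a hypothetical
sketch - the URL and package names are made up):

  # hypothetical debian/rules fragment
  # (the recipe lines must be indented with a tab)
  get-orig-source:
  	wget -O ../foo_1.2.orig.tar.gz \
  	    https://example.org/releases/foo-1.2-bin.tar.gz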

Maybe we should put this up for broader discussion, along w/ the control
file generation problem. My desired outcome would be a generic
way of fully automatically building everything from a debianized source
tree (eg. a git repo) within a minimal container/jail, w/o any extra
configuration outside that source tree - even for those cases where the
control file needs to be generated, which again needs some deps.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: [Idea] Debian User Repository? (Not simply mimicing AUR)

2019-04-16 Thread Enrico Weigelt, metux IT consult
On 10.04.19 03:53, Russ Allbery wrote:

Hi,

> Possibly my least favorite thing about RPMs is the spec files,
> because by smashing everything together into the same file, the
> syntax of that file is absurd.  This bit is a shell script!  This bit
> is a configuration file!  This bit is human-readable text!  This bit
> is a patch list!  This bit is a file manifest!  This bit is a
> structured changelog!  This bit is a bunch of preprocessor
> definitions!  Aie.
Same for me.

Certainly, deb and debian methods also have their downsides:

#1: text-based patches inside debian/ make everything unnecessarily
complex, as soon as you're working w/ a decent VCS (eg. git).
their historical purpose has been obsolete for over a decade
#2: for many common usecases, a full-blown makefile is too much
complexity, and even w/ debhelper, knowing which variables have
to be set in which exact way isn't entirely trivial.
some purely declarative rules file (eg. yaml) would make those very
common usecases much easier (see the sketch after this list).
#3: when you have to generate the control file on the fly, things easily
get messy - i'm currently fighting with this while packaging the kernel.
the problem is that this file contains the source package name and
source dependencies, which need to be known before the control file
can be generated. circular dependency.

I'm currently working around that by having a separate control file
(debian/control.bootstrap) which is used by my build machinery
(dck-buildpackage) in a separate preparation step, when the control file
is missing.
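
For illustration, such a declarative rules file might look like this
(purely hypothetical - no such format exists in Debian, and all field
names are invented):

  # debian/rules.yaml (hypothetical)
  build-system: autotools
  configure-flags:
    - --enable-foo
    - --without-bar
  install-extra:
    - src: contrib/foo.conf
      dst: /etc/foo/foo.conf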


But still, IMHO, the debian packaging toolstack is much superior to
anything else I've ever encountered.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



[prototype] Debian User Repository Toolkit 0.0a release

2019-04-16 Thread Enrico Weigelt, metux IT consult
On 09.04.19 05:32, Mo Zhou wrote:

Hi,

> I drafted a 0.0 alpha release[1] for the toolkit, and created a logo for
> the DUPR project. From now on I'll try to add more packaging scripts
> (maybe I should call them recipes) to the default collection[2].
> Packaing plans are tracked here[3], and maybe further discussion about
> the DUPR (DUR, whatever.) should be redirected to a dedicated issue[4].
> And, I hope someone could put forward a better name for these prototypes
> (naming issue tracked here: [6]).

It seems that you're trying to package crap software.

I, personally, only touch that stuff when a customer inserts lots of
coins for it. And in these cases, I make it absolutely clear to them
that we can't expect quality and stability - relying on binary-only crap
is always playing Russian roulette. The mentioned cuda stuff (remember
that Nvidia is a pretty hostile and fraudulent corporation) is just
the tip of the iceberg - so-called "professional" software in the
industrial world (eg. Xilinx studio, Sigasi, etc) is even more crappy.
(Xilinx is also criminal - eg. *deliberately* violating the GPL.)
That's the kind of software you seriously don't wanna install outside
a well-confined container anyway.

In some cases, I just write the usual debian/rules files, or go the
ansible way. Usually, the job is to provide whole container environments
for the customer's daily work - deb packaging is just one element here,
and not necessarily an economically efficient one (for that stuff, I've
already given up on the idea of quality; it's just about making the
customer happy enough so he inserts more coins :p).

A major challenge here is retrieving the original media in a *reliable*
and fully automatic way, even in the customer's often pretty weird
network setups. One just cannot rely on the original media remaining
online - expect them to vanish over night, w/o any single notice.
Therefore you also need your own local mirror, if that stuff shall
ever come anywhere near production use.

I would be open to collaborating on maintenance of install stuff for
such crapware (some of that I've already got @github), but there's a
lot more to do than just yet another way to build debs.

OTOH, I'm a bit reluctant to publish some fancy solution, as the vendors
of that crapware - as well as their customers, who even pay for that
crap - should feel the pain. That pain has to be increased a lot more,
before anybody there even considers learning the fundamental basics
of software build and deployment. (as long as they plague us with their
ridiculous installers, they haven't learned a single bit).

Maybe we should pick a different license here, which mandates that
customers squeeze the vendor's balls and make them feel a lot of pain ;-)
(eg. the customer could tell the vendor something like "this is the last
time we tolerate that; next purchases only if you provide properly
long-term maintained apt repos for the distros and arches we use").

Okay, it's just a dream, but a very nice one ;-)


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Conflict over /usr/bin/dune

2019-01-18 Thread Enrico Weigelt, metux IT consult
On 18.01.19 01:43, Andreas Beckmann wrote:
> On Tue, 18 Dec 2018 17:48:06 + Ian Jackson
>  wrote:
>> Ian Jackson writes ("Re: Conflict over /usr/bin/dune"):
>>>  https://www.google.com/search?q=dune+software
>>>  https://en.wikipedia.org/wiki/Dune_(software)
>>>  https://www.google.com/search?q=%2Fusr%2Fbin%2Fdune
>>>
>>> Under the circumstances it seems obvious that, at the very least, the
>>> ocaml build tool should not be allowed the name /usr/bin/dune.



By the way: there's also the game "Dune". AFAIK not in the official
Debian repo, but I've got it hanging around in some 3rd-party repo ...

--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Limiting the power of packages

2018-12-07 Thread Enrico Weigelt, metux IT consult
On 19.11.18 20:24, gregor herrmann wrote:
> On Mon, 19 Nov 2018 17:29:37 +0100, Enrico Weigelt, metux IT consult wrote:
> 
> (OT, but since I noticed it too:)
> 
>> Anyways, Skype doesn't work since 8.30 as it crashes directly on
>> startup.
> 
> Apparently it needs (e)logind since that version.

That didn't help either.

A few days ago, I tried their unofficial preview build, which doesn't
seem to crash anymore. Let's see what happens when the official release
comes.


--mtx


-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: git vs dfsg tarballs

2018-12-07 Thread Enrico Weigelt, metux IT consult
On 21.11.18 04:22, Paul Wise wrote:

> I don't think Andreas was talking about applying the DFSG but about
> files we don't have permission to distribute at all.

Have there been any cases where those files have been in the
upstream VCS ? I don't recall any such case.

For the case where certain parts shouldn't be built/shipped due to
policy, this can - and IMHO should - be handled with changes within
the VCS, instead of having tarballs lying around w/o any clear
history and no indication of how exactly they were created from upstream.

Actually, for about a decade now, I haven't been doing any code changes
outside git, and I'm building packages only directly from git. Frankly,
I don't see any reason why that can't be the standard case.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: git vs dfsg tarballs

2018-11-19 Thread Enrico Weigelt, metux IT consult
On 19.11.18 17:36, Ian Jackson wrote:

Hi,

> I am saying that for packages whose Debian maintainer follow those
> recommendations, much of what you want would be straightforward - or,
> anyway a lot easier.  So I was plugging my recommendations.

Unfortunately, the packages I'm dealing w/ don't really follow that.

Kodi is a really unpleasant example:

* unclear orig<->dfsg relationship (I'll have to analyze them one by
  one and adapt my import scripts)
* very non-linear history (eg. new upstream trees, sometimes even
  completely unrelated branches, directly merged down into the deb
  branch)
* lots of patches against non-existing files
* rules trying to touch missing source files/directories.

>> Here're some examples on how my deb branches look like:
>
> Not sure what you mean by `your deb branches',

Those are the ones that add the debian/* stuff, and possibly other
patches. In my model, they're always linear descendants of the
corresponding upstream release tag.

> but looking at what Debian gives you:
>
>> * canonical ref names
>
> dgit (dgit clone, dgit fetch) will give you this, regardless of the
> maintainer's behaviour.

hmm, looks like a good start. But it doesn't really look easy to clone
from different distros and specific or yet-unreleased versions.

and one of my main problems remains unresolved: linear history on top
of the upstream's release tag.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Limiting the power of packages

2018-11-19 Thread Enrico Weigelt, metux IT consult
On 07.10.18 21:20, Adrian Bunk wrote:

> For leaf software like Skype or Chrome, approaches like flatpak where
> software can be installed by non-root users and then runs confined
> have a more realistic chance of being able to becoming a good solution.

I'd rather put that non-trustworthy code into a minimal container
w/ fine-tuned minimal permissions.

Anyways, Skype doesn't work since 8.30 as it crashes directly on
startup.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: git vs dfsg tarballs

2018-11-19 Thread Enrico Weigelt, metux IT consult
On 19.11.18 13:52, Ian Jackson wrote:

> Clearly the transformation on the *tree* can't be reversible because
> in the usual case it is deleting things.  So you'll need the history.

It certainly can be, if you know the exact orig commit.
Maybe I wasn't really clear here: I wanna do a fully automatic import
into a git history (optimally, given just the package name and version).

> With most gitish workflows, the corresponding pre-dfsg upstream
> *commit* can be found with `git-merge-base', assuming you have some
> uploaded (or pushed) Debian commit and a suitable upstream branch.

It's not entirely trivial, if the maintainers are doing wild merges
(eg. w/ kodi). Even worse: reconstructing the change history on top
of some given upstream release is pretty complicated and manual.
Merging down from upstream into the packaging branch (instead of just
a simple rebase) turns out to be a bad idea here.

>> My preferred way (except for rare cases where upstream history is
>> extremely huge - like mozilla stuff) would be just branching at the
>> upstream's release tag and adding commits for removing the non-dfsg
>> files ontop of that. From that branching the debianized branch,
>> where all patches are directly applied in git.
> 
> I think that most of the workflows recommended in these manpages
>
>   https://manpages.debian.org/stretch-backports/dgit/dgit-maint-gbp.7.en.html
>   https://manpages.debian.org/stretch-backports/dgit/dgit-maint-merge.7.en.html
>   https://manpages.debian.org/stretch-backports/dgit/dgit-maint-debrebase.7.en.html

Still too complicated for me (especially regarding automation/CI).

Here're some examples on how my deb branches look like:

https://github.com/oss-qm/flatbuffers/commits/debian/maint-1.9.0
https://github.com/oss-qm/go/commits/debian/maint-1.11.1

* canonical ref names
* always based on the corresponding upstream's release tag
* changes directly as git commits - no text-based patches whatsoever
* generic changes below the deb-specific ones

While gbp can help a bit here and there, it's still far away from a
fully-automated process.

I'm currently helping myself w/ lots of mappings and import scripts,
but I'd like to get rid of maintaining all these little pieces.
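
A minimal sketch of what such an import step boils down to (version
numbers and tag names are just examples following my naming scheme
above):

  # forward the packaging branch from upstream 1.8.0 to 1.9.0:
  git checkout -b debian/maint-1.9.0 debian/maint-1.8.0
  git rebase --onto v1.9.0 v1.8.0 debian/maint-1.9.0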


--mtx
-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



git vs dfsg tarballs

2018-11-19 Thread Enrico Weigelt, metux IT consult
Hi folks,


I'm often seeing packagers directly putting dfsg'ed trees into their git
repos, w/o any indication of how the tree was actually created from the
original releases.

As I'm doing all patching exclusively via git (no text-based patches
anymore - adding my changes on top of the upstream release tag and then
rebasing for new releases), this (amongst other problems like
wild merges) is quite a challenge for efficient (heavily automated)
handling.

Can we agree on some automatically reproducible (and invertible)
transformation process from orig to dfsg tree ?

My preferred way (except for rare cases where upstream history is
extremely huge - like mozilla stuff) would be just branching at the
upstream's release tag and adding commits for removing the non-dfsg
files ontop of that. From that branching the debianized branch,
where all patches are directly applied in git.
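
As a sketch (tag name and file paths are made up for illustration):

  git checkout -b dfsg/1.2.3 v1.2.3          # branch at the upstream release tag
  git rm -r src/nonfree-blob docs/restricted.pdf
  git commit -m "remove non-dfsg files"
  git checkout -b debian/1.2.3 dfsg/1.2.3    # debianized branch on top of that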


--mtx


-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: A message from CMake upstream: announcing dh-cmake

2018-11-19 Thread Enrico Weigelt, metux IT consult
On 06.07.18 22:00, Colin Watson wrote:

> If the libraries in question are DFSG-free themselves, there's no DFSG
> issue and you don't need to remove them from the tarball (and we'd
> generally encourage not modifying the upstream tarball unnecessarily for
> upload to Debian).  The policy about bundling is separate from the DFSG.
> Of course it'd be incumbent on whoever's doing the Debian upload to
> actually check the licensing status.

Last time I packaged vtk, I removed them (at least those that either
had already been packaged or were easy to package), in order to make
sure that nothing in that really complex cmake machinery could even try
to build/use any piece of them.

The package was just meant for an in-house installation for my client,
so I didn't care much about policies and orig tarball handling - I
just patched directly in the git repo.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: You are not seriously considering dropping the support of sysVinit?!

2018-11-19 Thread Enrico Weigelt, metux IT consult
On 17.10.18 08:55, free...@tango.lu wrote:

> Dropping sysvinit would also put an enormous amount of work on the
> Devuan project (the only future for Debian) by making them fork more
> packages.

Well, in that case we can also completely drop other Lennartware
dependencies in the affected packages.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Limiting the power of packages

2018-10-04 Thread Enrico Weigelt, metux IT consult
On 04.10.2018 01:19, Carl-Valentin Schmitt wrote:
> It would be a possibility, for safety to create a new directory only for
> brandy 3rd-party-software like skype, Google Chrome, Swift, and else
> Software where huge companies are Sponsors.
>  
> This would then mean, to create a second sources list for 3rd-party-links.

We don't need to add anything to dpkg/apt for that - there's a simpler
solution:

Automatically fetch those packages from the vendor and collect them into
our own repo, but run a strict analysis before accepting anything.
Rules could strictly limit packages to certain filename patterns, file
modes (eg. forbid suid or limit to certain owners), no maintainer
scripts, etc, etc. We could either filter out anything suspicious or
reject the package completely (maybe even automatically filing
upstream bugs :p).

Yes, that would have to be customized per-package, but we're only
talking about a handful of packages, anyways.
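
A rough sketch of such a per-package check (the dpkg-deb options are
real, the policy itself is of course just an example):

  pkg=some-vendor-package.deb
  dpkg-deb -e "$pkg" ctrl                    # extract the control area
  for f in ctrl/preinst ctrl/postinst ctrl/prerm ctrl/postrm; do
      [ -e "$f" ] && { echo "REJECT: maintainer script $f"; exit 1; }
  done
  # flag suid/sgid files in the payload:
  dpkg-deb -c "$pkg" | awk '$1 ~ /s/ { print "REJECT: suid/sgid: " $0; rc=1 }
                            END { exit rc }'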


What's really important for me: don't add more complexity on the
target apt/deb side for these few cases, unless *absolutely* *necessary*.


By the way: we can put aside the whole Skype issue for the next few
months, as it's completely broken and unusable anyways - for several
months now. We could reconsider once the upstream (Microsoft) manages
to get it at least running w/o segfaulting.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: Limiting the power of packages

2018-10-04 Thread Enrico Weigelt, metux IT consult
On 03.10.2018 19:19, Lars Wirzenius wrote:

> Sometimes what they do is an unwelcome surprise to the user. For
> example, the Microsoft Skype .deb and the Google Chrome .deb add to
> the APT sources lists and APT accepted signing keys. Some users do not
> realise this, and are unpleasantly surprise.

https://seclists.org/fulldisclosure/2018/Sep/53

> (Note that I'm not saying Microsoft or Google are doing something
> nefarious here: 

But I do think that. If they really wanted to do that in a reasonably
secure and safe way (assuming they're not completely incompetent),
they'd split off the sources.list part from the actual package (there're
many good ways to do that) and add proper pinning to reduce the
attack surface.
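
For illustration, proper pinning could look something like this
(package and origin names are my guesses here, not taken from the
actual vendor package):

  # /etc/apt/preferences.d/skype -- shipped separately from the app itself;
  # the vendor repo may never upgrade anything but its own package:
  Package: *
  Pin: origin repo.skype.com
  Pin-Priority: 1

  Package: skypeforlinux
  Pin: origin repo.skype.com
  Pin-Priority: 500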

And they would have talked to the Distros about a proper process of
bringing Skype into Distro repos.

OTOH, considering the tons of other bugs and design flaws, I'm not
really sure whether they're nefarious or incompetent, maybe a mix of
both ...

> they're trying to make sure security updates for their
> packages will be deployed to user's system; this seems like a worthy
> goal. But it's a surprise to some users.)

The goal is nice, but that's what Distros are for. But it's been the
same for aeons: commercial vendors tend to work against the
distros.

> I don't think it's good enough to say the user shouldn't install
> third-party packages. 

Actually, I do think so (unless the user knows exactly what he's doing).
It's not about proprietary software in general - this can be (and is)
also handled by Distros. But the Distro (or some other neutral
project that provides an extra repo) is needed as a quality gate.

> It's not even good enough to say the user should
> use flatpaks or snaps instead: not everything can be packaged that
> way. Debian's own packages can have equally unwelcome surprises.

Haven't really looked deeper into flatpak, but I'm doing a lot with
docker and lxc containers.

As those proprietary vendors tend to be completely overwhelmed by
the whole concept of package management (no idea why, but I've seen
this a thousand times w/ my clients), it seems to be the most pragmatic
solution to put everything into strictly isolated containers.

Those packages are only a few special cases anyways. (for the average
end-user I don't see many more candidates besides Skype, but there are
still a lot of very special business applications, each having a pretty
tiny user base).

That way, the vendors could just pick some minimal base system (maybe
alpine or devuan based) and step by step learn how to use package
management, in their own confined microcosmos. At least they wouldn't
have to cope w/ many different distros, as long as they haven't
understood the whole concept behind it (if they had, it would be pretty
trivial for them, and we wouldn't need this discussion).

> Imagine a package that accidentally removes /var, but only under
> specific conditions. You'd hope that Debian's testing during a release
> cycle would catch that, but there's not guarantee it will. (That's a
> safety issue more than a security issue.)

Did this ever happen ? Why should anybody write such things into a
maintainer script in the first place ?

> A suggestion: we restrict where packages can install files and what
> maintainer scripts can do. The default should be as safe as we can
> make it, and packages that need to do things not allowed by the
> default should declare they that they intend to do that.

Rebuilding flatpak + friends ?

Point is: maintainer scripts can do anything they want.

What we can (and should) do is do most things in a purely declarative
way - at deployment time, instead of autogenerating maintainer scripts
or calling predefined functions - so we can concentrate on more careful
reviews of the remaining special cases.

By the way: a lot of the demand for maintainer scripts, IMHO, comes from
upstream's bad sw architecture (interestingly, GUI stuff again tends
to be the ugliest area). Usually, good packages should be fine with just
unpacking some files into proper places.

> This could be done, for example, by having each package labelled with
> an installation profile, which declares what the package intends to do
> upon installation, upgrade, or removal.

Who defines these labels ? The packager ? Is there any extra quality
gate (before the user) ?

> * default: install files in /usr only

That's bad enough, if the package is of bad quality or even malicious.

Finally, I'd really like to reduce complexity, not introduce even more.


--mtx

-- 
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Announce: docker-buildpackage

2018-05-01 Thread Enrico Weigelt, metux IT consult

Hi folks,


I've written a tool for isolated deb builds in docker containers.
It's a little bit like pbuilder, but using docker for isolation.

https://github.com/metux/docker-buildpackage

Everything written in shellscript, simple config as sh includes.
Not debianized yet, as it might require some local customizations.
(planned for future releases)
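
A target config is just a sourced shell snippet, roughly along these
lines (variable names invented here for illustration - see the repo
for the real ones):

  # conf/targets/debian-stretch.conf
  DISTRO=debian
  SUITE=stretch
  ARCH=amd64
  APT_MIRROR=http://deb.debian.org/debian
  BASE_IMAGE=dck-buildpackage/$DISTRO-$SUITE-$ARCH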

I'm also hacking on another tool which automatically clones repos
and calls dck-buildpackage for building whole pipelines - but that's
still experimental and hackish:

https://github.com/metux/deb-pkg


--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: FHS: Where to store user specific plugins / code

2018-03-09 Thread Enrico Weigelt, metux IT consult

On 09.03.2018 14:23, Georg Faerber wrote:

Hi,

> I guess we'll go with /usr/local/lib/schleuder then? Does this sound
> like a reasonable choice?

That would be my choice.

OTOH, it might be nice to have a helper that automatically creates
deb packages. (would also be nice for other applications, eg. moz).
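
A minimal sketch of such a helper, using plain dpkg-deb (names and
version are placeholders; note that official Debian packages must not
ship files under /usr/local - this is for purely local use):

  mkdir -p pkgroot/usr/local/lib/schleuder pkgroot/DEBIAN
  cp -r myplugin pkgroot/usr/local/lib/schleuder/
  printf '%s\n' \
      'Package: schleuder-plugin-myplugin' \
      'Version: 0.1' \
      'Architecture: all' \
      'Maintainer: Local Admin <root@localhost>' \
      'Description: locally built schleuder plugin' \
      > pkgroot/DEBIAN/control
  dpkg-deb --build pkgroot schleuder-plugin-myplugin_0.1_all.deb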


--mtx

--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



Re: MIA ? Miriam Ruiz

2017-10-23 Thread Enrico Weigelt, metux IT consult

On 24.10.2017 05:02, Norbert Preining wrote:


> I am trying to contact Miriam Ruiz (uid=miriam) but I haven't seen any
> sign of life/answer. All recent uploads of her packages are from other
> people, her own uploads are from 2015. Her last blog entry is also from
> 2015.


I'll try to talk to her.


--mtx
--
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287



substvars in *.install + friends

2017-05-04 Thread Enrico Weigelt, metux IT consult

Hi folks,


is it possible to use the substvars mechanism for the *.install and
similar files, just like w/ the control file ?

For multi-version installations, I'm keeping the whole package in a
prefix w/ the version number (see my other mail - nodejs). I don't want
to have to change lots of files with each version bump.
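
To illustrate: what I'd like to write is something like
"usr/lib/nodejs-${version} usr/lib/nodejs", but substvars don't seem to
be expanded there. One workaround that should work with recent debhelper
(compat >= 9 runs executable config files and uses their output):

  # debian/nodejs-7.9.install, made executable:
  #!/bin/sh
  v=7.9
  echo "usr/lib/nodejs-$v usr/lib/nodejs"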


thx

--mtx



Re: Packaging nodejs-7.9

2017-05-04 Thread Enrico Weigelt, metux IT consult
On 04.05.2017 09:26, Jérémy Lal wrote:

> At the moment, in debian, /usr/lib/nodejs is there to store all node
> modules installed from debian packages.

hmm, would that conflict w/ having certain "nodejs-$version" subdirs
w/ the actual engines (the whole tree - not yet split out into the
several FHS parts) there ?

Meanwhile I've also added some update-alternatives support (yet have
to add the version into the package name). But this will conflict w/
current versions, as they directly install /usr/bin/nodejs. Can we make
a minor update of 0.10.* for update-alternatives ?
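
For reference, the registration I have in mind is simply this (paths
follow the scheme from my other mail, the priority is arbitrary):

  update-alternatives --install /usr/bin/nodejs nodejs \
      /usr/lib/nodejs/nodejs-7.9/bin/node 70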

> Are you talking about installing modules depending on their
> compatibility with node engines (as found in package.json) ?

Actually, not sure whether that's really required. Are there any known
(already packaged) modules that break w/ newer nodejs ? If not, I guess
just adding depends on newer engines where needed should be enough.


--mtx



Packaging nodejs-7.9

2017-05-04 Thread Enrico Weigelt, metux IT consult
Hi folks,


I'm currently packaging nodejs-7.9 for various deb Distros.

I'll have to maintain some applications that use the fanciest
new features, and precompiled binaries from untrusted sources
(eg. nvm+friends) of course are not an option.

Before I go through all of this alone - is there anybody here who has
already done this ? Or anything I should consider ?

My current plan is:
* install in similar way as jvm (/usr/lib/nodejs/nodejs-$version)
* for now I'll just directly symlink - update-alternatives support
  comes in a later step (or maybe someone here likes to help ?)
* the actual nodejs package will be named "nodejs-$version", the
  symlinks in package "nodejs".

The tricky part will be a safe upgrade path from current 0.10
and npm's dependencies.


What do you folks think about that ?


--mtx



Re: Bug#857394: Debian Policy violation -- libgegl-dev contains duplicate copy of openCL library files

2017-04-14 Thread Enrico Weigelt, metux IT consult
On 14.04.2017 14:34, ian_br...@mail.ru wrote:

> I was right -- it IS a Debian Policy violation:
> 
> * 4.13 Convenience copies of code *


I've got a similar problem while packaging a recent webkit (latest surf
needs a newer one). Their git repo is >GB (!). No idea how much I'll
have to cut out here yet (still pulling) ...

By the way: is there any automatic way of creating the -dfsg trees out
of the upstream ? (I prefer working directly w/ git repos instead of
additional patching)


--mtx



Re: init system agnosticism [WAS: how to remove libsystemd0 from a live-running debian desktop system]

2017-04-14 Thread Enrico Weigelt, metux IT consult
On 13.04.2017 11:27, Vincent Danjean wrote:

>   For me, the first argument explain in the first mail is not this one.
> systemd is not portable on lots of system (hurd, kFreeBSD, ...), 

This is just one of many arguments for not making applications
depend on it. (and they shouldn't depend on any other init system
either).

Regarding service status reporting, the systemd folks indeed make a
good point. There is some demand for that, and they solved the problem
for their audience (unfortunately only for *their* audience), and moved
on to the next topic. For a prototype that's really fine, but not
for long-term maintenance over dozens of different platforms.

Now stating that everybody should just implement their interfaces is
just like asking everybody to implement things the windows way (NT came
up with its entirely own service management system, which works quite
well, as long as you're confined within the windows world).

> systemd is not interested in making its code portable, nor to stabilize
> its interfaces so that other system init can easily implement them,

Well, that's their choice, and I respect that. It's just not mine.
I don't wanna be forced into their ways (as I wouldn't ever try to
force them into mine). So, I'm looking for a *generic solution* to the
actual problem which those functions in libsystemd aim to solve,
so applications can just use it, w/o ever having to care which init
system might be installed (or whether there even is one at all).

> lots of applications are now using libsystemd to get 'classical'
> information (status, ...) because they do not want to have to deal
> with several init system

Exactly. They're just looking for some API for that stuff, not caring
what it actually does under the hood. And systemd just happens to
provide one. From the application developer's pov, systemd is filling
some gap, and they don't even wanna care about the consequences.

So, it's up to us to provide a better solution - just telling them how
bad systemd is isn't enough (from their perspective).

> and porters of platforms not supported by systemd have a really hard
> work to follow systemd developments and patch all things.

Exactly. For some arbitrary application developer (who usually doesn't
even know much about packaging, etc), it's hard to understand the
underlying problem - they just want something they can set their code
on top of (that's also the reason why all these strange proprietary
platforms can even exist). So, it's up to us, who know better, to give
them something they can work with, and that doesn't cause all the
trouble that Lennartware does.

>   From your mail, you seems to deny this issue ("everybody can be pleased
> with systemd" and/or "this is not a general problem, just a problem
> from people that dislike systemd"). For what I see, it seems a problem
> also for people that like systemd but cannot use it on their plate-form
> (Hurd, ...)

Right, it's basically the same old "shut up and go away" attitude.
Actually, many people already went away, and more will follow.

If it goes on that way, we'll end up w/ an own OS called systemd, which
is as far away from GNU/Linux as Android. Do you folks really want that,
or did you just run out of better ideas ?

>   I'm persuaded that ignoring this issue will lead to an unmaintanable
> Debian distribution on platforms that do not support systemd in the
> middle/long term. But, perhaps, it is what the project wants.

That, in turn, would lead to Debian step by step defeating its own
original goals. I'm pretty sure it won't take long before lots of
other things (beginning w/ other kernels) are dropped, just because
nobody is willing to keep them compatible w/ systemd.

>   Enrico is proposing something else. I'm not sure if his proposal is
> good and doable (ie with enough support from various parties and
> manpower).

If we get out of the ideological war (including the upstreams, too),
it wouldn't be such a big deal. A minimal implementation of the proposed
library is quite simple and small. We'd just have to touch a bunch of
applications and rewrite a few lines there - and once it works and is
included in a major distro, we have good chances of convincing
upstreams to take our patches in. And I'm sure the Devuan folks, who
had been driven out of Debian, will help here, too.

Yes, somebody needs to maintain the systemd version/branch - but as the
library interface will be stable (its scope is quite limited, so there
won't be much desire to add anything new), we at least have the
overhead of keeping up w/ systemd minimized and centralized in one
small lib. Maybe someday the systemd folks even have a moment of
insight and take that part into their own hands.


--mtx



init system agnosticism [WAS: how to remove libsystemd0 from a live-running debian desktop system]

2017-04-12 Thread Enrico Weigelt, metux IT consult
On 17.02.2015 18:49, The Wanderer wrote:

Hi folks,


just digging out an older thread that was still lying around in my
inbox - w/ about 2yrs distance, I hope that was enough cool-down time,
so we can discuss it more objectively.



> libsystemd0 is not a startup method, or an init system. It's a shared
> library which permits detection of whether systemd (and the
> functionality which it provides) is present.

From a sw architect's pov, I've got a fundamental problem w/ that
approach: we'll have lots of sw that somehow gets 'magical'
additional functionality if some other sw (in that case systemd)
happens to run.

The official description is: "The libsystemd0 library provides
interfaces to various systemd components." But what does that mean ?
Well, more or less a catchall for anything that somehow wants to
communicate w/ systemd. What this is actually for isn't clear at all
at that point - you'll have to read the code yourself to find out.
And new functionality can be added anytime, and sooner or later some
application will start using it. So, at least anybody who maintains
a systemd-free environment (eg. platforms that don't even have it)
needs to run behind them and keep up.

Certainly, systemd has a lot of fancy features that many people like,
but also many people dislike (even for exactly the same reasons).
The current approach adds a lot of extra load on the community and
causes unnecessary conflicts.

So, why don't we just ask, what kind of functionality do applications
really want (and what's the actual goal behind), and then define open
interfaces, that can be easily implemented anywhere ?

After looking at several applications, the most interesting part seems
to be service status reporting. Certainly an interesting issue that
deserves some standardization (across all unixoid OSes). There're lots
of ways to do that under the hood - even without having to talk to some
central daemon (eg. extending the classical pidfile approach to
statfiles, etc). All we still need is an init-system/service-monitor
agnostic API that can be easily implemented w/o extra hassle.
A simple reference implementation would probably just write some
statfiles and/or log to syslog; others could talk to some specific
service monitor.
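
Just to give an idea how thin the application-facing side could be,
here's an init-agnostic "ready" notification sketched in shell (the
statfile path/format is made up, and $SERVICE_NAME is assumed to be
set by the service):

  notify_ready() {
      if [ -n "$NOTIFY_SOCKET" ] && command -v systemd-notify >/dev/null 2>&1
      then
          systemd-notify --ready                       # systemd is listening
      else
          echo "READY $$" > "/run/stat/$SERVICE_NAME"  # plain statfile fallback
      fi
  }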

Having such an API (in its own library), we'd already have most of
the problems here out of the way. Each init system / service monitor
setup comes with some implementation of that API, and applications
just depend on the corresponding package - everything else can be
easily handled by the existing package management infrastructure.
No need for recompiles (perhaps even no need to opt out in all the
individual packages).

The same can be done for all the other features currently used from
libsystemd, step by step.

Maintenance of these APIs (specification and reference implementation)
should be settled in an open community (perhaps similar to
freedesktop.org for the DE's), not in an individual init system /
service monitor project.


I really wonder why people spend so much time on init system wars,
instead of thinking clearly about the actual root problem to solve.


--mtx



Re: What's a safe way to have extensions in chromium in Debian?

2017-04-11 Thread Enrico Weigelt, metux IT consult
On 11.04.2017 10:22, Andrey Rahmatullin wrote:
> On Tue, Apr 11, 2017 at 04:22:40AM +0200, Enrico Weigelt, metux IT consult 
> wrote:
>>>> could anyone please give me some insight, what the security problems
>>>> are here exactly ?
>>> Extension auto-updating is considered "phoning home".
>>
>> Isn't there a way to just disable that part ?
> Disabling extension auto-updating is wrong from several perspectives,
> including the security one.

hmm, I'd actually feel better w/ manual update (on user request) for the
unpackaged ones (the packaged ones of course go via apt).


--mtx


-- 

with kind regards
--
Enrico, Sohn von Wilfried, a.d.F. Weigelt,
metux IT consulting
+49-151-27565287



Re: What's a safe way to have extensions in chromium in Debian?

2017-04-10 Thread Enrico Weigelt, metux IT consult
On 09.04.2017 22:58, Andrey Rahmatullin wrote:
> On Sat, Apr 08, 2017 at 08:28:38AM +0200, Enrico Weigelt, metux IT consult 
> wrote:
>> could anyone please give me some insight, what the security problems
>> are here exactly ?
> Extension auto-updating is considered "phoning home".

Isn't there a way to just disable that part ?


--mtx



Re: What's a safe way to have extensions in chromium in Debian?

2017-04-08 Thread Enrico Weigelt, metux IT consult


could anyone please give me some insight, what the security problems
are here exactly ?

--mtx

-- 

with kind regards
--
Enrico, Sohn von Wilfried, a.d.F. Weigelt,
metux IT consulting
+49-151-27565287



dpkg packaging problems

2015-01-02 Thread Enrico Weigelt, metux IT consult
Hi folks,


I'm just packaging some library to various deb distros using
pbuilder + git-buildpackage.

Unfortunately, the .so's lose the +x flag in the package
(while a usual 'make install' is okay) - it seems that some of the
dh stuff drops that flag :(

maybe some of you guys might have an idea ?

See:

https://github.com/metux/fskit/tree/jessie/master
https://github.com/metux/fskit/tree/trusty/master

the build process is driven by:

https://github.com/metux/packaging


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





Re: dpkg packaging problems

2015-01-02 Thread Enrico Weigelt, metux IT consult
On 02.01.2015 17:08, Martin Pitt wrote:

Hi,

 Yes, man dh_fixperms. Shared libraries don't need to and should not be
 executable. 

Oh, I wasn't aware of that. I was just used to it, as gcc sets that
flag. Is it a bug in gcc, or are there platforms where +x is required ?
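
(For the record, in case one really did need to keep the flag on some
file: dh_fixperms supports excludes. Assuming the package uses the dh
sequencer, something like this in debian/rules should do - the filename
is just an example, and the recipe line must be tab-indented in a real
makefile:

  override_dh_fixperms:
          dh_fixperms -X libfskit.so

But as said above, the stripped permissions are the correct ones for
shared libraries.)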


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





Re: Technical committee acting in gross violation of the Debian constitution

2014-12-06 Thread Enrico Weigelt, metux IT consult
On 25.11.2014 16:29, Philip Hands wrote:

 How is it that Debian changing the default for something on some of

What about the enforced replacement on dist-upgrade, which at least
produces lots of extra work and can easily leave systems
unbootable ?


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





Re: Summary:Re: Bug#762194: Proposal for upgrades to jessie (lendows 1)

2014-12-05 Thread Enrico Weigelt, metux IT consult
On 29.11.2014 19:15, Svante Signell wrote:

 Since there is no interest in adding a debconf message on new installs,
 I wish for a menu entry in the advanced part of the installer to be able
 to install a new system with sysvinit-core or upstart!

+1



-- 

with kind regards
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





Re: Technical committee acting in gross violation of the Debian constitution

2014-12-04 Thread Enrico Weigelt, metux IT consult
On 29.11.2014 20:43, Svante Signell wrote:

 The best for kFreeBSD and Hurd would be to abandoning the Debian ship.
 It is sinking :( (just let the devuan people get things in order first)

Well, I'll also push my projects on getting rid of polkit in that
direction. Why ? Because I've got the impression that these guys
still value traditional unix concepts, like using the filesystem
for simple hierarchical data structures and access control, tiny
and easily composable servers and tools, etc.


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





Re: Technical committee acting in gross violation of the Debian constitution

2014-12-04 Thread Enrico Weigelt, metux IT consult
On 29.11.2014 20:45, Ivan Shmakov wrote:

   As for Systemd being the default (on Debian GNU/Linux,
   specifically), – I guess I shouldn’t bother.  GNOME is also the
   default, but I cannot readily recall ever having it running on
   my Debian installs.
 

By the way: didn't GNOME originally have the intention of being
crossplatform, not Linux-only ?


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





Re: Technical committee acting in gross violation of the Debian constitution

2014-12-04 Thread Enrico Weigelt, metux IT consult
On 28.11.2014 19:09, Christoph Anton Mitterer wrote:

 For many things, CGI is actually the only way to run them securely,
 since it's the only way to run foreign processes in a container
 environment (chroots, etc.) or with user privilege separation.

Not entirely true. About a decade ago, I wrote muxmpm, which ran
individual sites under their own uid/gid, chroot, etc. That made things
like cgiexec, php's safe_mode etc practically obsolete.

It was even shipped by several large distros, eg. suse (the original
one, not novell).


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





gnome depending on apache [WAS: Technical committee acting in gross violation of the Debian constitution]

2014-12-04 Thread Enrico Weigelt, metux IT consult
On 02.12.2014 06:01, Paul Wise wrote:

 gnome depends on apache ?
 
 gnome-user-share uses apache2 to share files on the local network via WebDAV.

Is this a purely optional program, or does gnome itself depend on it ?

 seriously ?
 
 Sharing files with other computers on the local network seems like
 perfectly reasonable and useful feature to me. 

Okay. But WebDAV would be one of the last protocols I'd consider
for that (maybe for the wide internet, but not for local networks).


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





mass hosting + cgi [WAS: Technical committee acting in gross violation of the Debian constitution]

2014-12-04 Thread Enrico Weigelt, metux IT consult
On 04.12.2014 22:23, Christoph Anton Mitterer wrote:

 Apart from that, when you speak of non-trivial quantities - I'd
 probably say that running gazillion websites from different entities on
 one host is generally a really bad idea.

No, it's not, and it's pretty cheap, if done right.

Several years ago, I was working for some large ISP (probably the
largest in Germany). Hosting more than 1000 sites per box, several
millions in total. (yes, most of them are pretty small and low
traffic).

IIRC at that time they were using cgiexec. I just don't recall
why they didn't use my muxmpm (maybe because apache upstream was
too lazy to pick it up, even though it had been shipped by several
large distros).

A few years earlier I had developed muxmpm for exactly that purpose:
a derivative of worker/perchild, running individual sites under their
own UID, spawning on-demand. This approach worked not just for CGI,
but also for builtin content processors like mod_php, mod_perl, etc.

 FastCGI is just a slightly more fancy way of doing this.
 FastCGI is another thing that almost nobody can afford when hosting 
 a significant number of web sites.
 Why not?

It adds additional complexity, especially when you're going to manage
a _large_ number (several k) of users per box. In such scenarios
you wanna be careful about system resources like sockets, fds, etc.

I'm not up to date on whether there's meanwhile an efficient solution
for fully on-demand startup (and auto-cleanup) of fcgi slaves
with arbitrary UIDs, or how much overhead copying between
processes (compared to socket-passing) produces on modern systems
(back when I wrote muxmpm, it was still quite significant).

OTOH, for high-volume scenarios, apache might not be the first choice.


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





Re: Technical committee acting in gross violation of the Debian constitution

2014-12-04 Thread Enrico Weigelt, metux IT consult
On 25.11.2014 18:30, Stephen Gran wrote:

 Excellent.  I'm sure that if they can create a deb, they can install
 sysvinit, or runit, or some BSD, or whatever else they want.  A default
 is only a default, after all.

Just curious about the term default:

Can I still install a system w/o systemd ever touching it - instead of
having to replace it later (eg. via some option in the installer) ?


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





Re: Technical committee acting in gross violation of the Debian constitution

2014-12-04 Thread Enrico Weigelt, metux IT consult
On 27.11.2014 00:29, Noel Torres wrote:

 manpower required to maintain a distribution with more than one init
 system widey installed, manpower to perform the required changes to
 support multiple init systems in Jessie, centered about the most
 important question: our users.

Just curious: how large is that overhead actually ?

For most packages, that IMHO should just mean continuing to write/update
init scripts in parallel to systemd service descriptors. I haven't had
the time for a deeper analysis (systemd specifications aren't entirely
precise and complete ;-o), but maybe we could even generate them from a
common primary source, at least for a large portion of the cases.

But there are other cases like GNOME (and IIRC KDE), which now seem
to rely on systemd. I haven't done a deeper analysis of what's exactly
the big deal about it, and why we now need a new init system (or parts
of it) for that. The most common argument I've heard from systemd folks
is the multi-seat issue.

Well, I'm maybe a bit old-fashioned, but such setups are anything but
new to me (actually, I did that 20 years ago), and I wonder what that
all has to do with the init system. The primary aspect here is a proper
Xserver configuration. We'll always have to support various unusual
setups, like multi-screen composition, multiple input devices, etc,
so just having multiple Xservers on separate screens seems a rather
simple sub-case. Hardcoded magic like systemd-logind's (eg. it
generates its own xserver configs on the fly) sounds like a pretty
bad idea to me. It might work for a large number of users, but it
also limits the whole stack to those rather simple scenarios.

The big question I'd ask the systemd and gnome folks is:

Why do these things all have to be so deeply interdependent ?
I would even question why each DE needs its own display manager.
What's so wrong with all the other DMs ?

Certain DEs (like GNOME and KDE) seem to be trying to build their own
operating system - I really fail to understand why.


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





Re: Technical committee acting in gross violation of the Debian constitution

2014-12-04 Thread Enrico Weigelt, metux IT consult
On 27.11.2014 11:18, Martin Steigerwald wrote:

 Desktops (not only GNOME) use a very tiny bit of systemd, interfaces
 that could be provided elsewhere. The real purpose of systemd is to
 provide a modern init system.
 
 I still wonder why they are provided within systemd then.

Same for me. If there really is some functionality which some DEs
really need, why not have an entirely separate tool for that ?

Anyways, I still don't understand why udev is bundled within systemd.


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





Re: Technical committee acting in gross violation of the Debian constitution

2014-12-04 Thread Enrico Weigelt, metux IT consult
On 27.11.2014 11:53, Matthias Urlichs wrote:

 Yes, the logind-related parte _could_ be provided elsewhere, but part of
 the features logind needs is already implemented in systemd. 

Can you understand that this method is exactly one of the major reasons
why many people don't like the systemd faction ?


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





Re: Technical committee acting in gross violation of the Debian constitution

2014-12-01 Thread Enrico Weigelt, metux IT consult
On 27.11.2014 02:18, Josh Triplett wrote:

 gnome Depends: gnome-core, which Depends: gnome-user-share, which
 Depends: apache2-bin (or apache2.2-bin in stable, which is a
 transitional package depending on apache2-bin in unstable).

gnome depends on apache ?
seriously ?


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287





Re: Technical committee acting in gross violation of the Debian constitution

2014-11-25 Thread Enrico Weigelt, metux IT consult
On 22.11.2014 02:13, Troy Benjegerdes wrote:

 Someone will find a hole in something, and there will be fire when sysadmins
 have to upgrade in the middle of the night and now are running systemd
 instead of what they are used to.

Well, in that case, I'd say a rain of fire isn't entirely what's going
to happen here ... it would be more like a rain of transphasic
torpedos ...

I think the latest decision was really bad. Not because I personally
don't like Lennartware, but because we should leave people the choice.
A lot of people have lots of reasons why they won't ever let
systemd on their machines, and would even switch whole datacenters
to Gentoo, LFS or BSD, before accepting systemd.

Most of the people I know personally (and that's quite a lot), many
of them traditional *nix operators, integrators, developers from
embedded to enterprise, people who're maintaining mission-critical
systems, large datacenters, etc, give a clear and absolute NO to
systemd. Can't tell how representative that is, but my guts tell me
Debian will immediately lose 30..50% of its user base if systemd becomes
mandatory (or even worse: is silently injected via an upgrade).

That would be disastrous, and directly lead to a fork (in fact,
the preparations for that are already under way).

I think it would be very wise having a fundamental decision, that:

a) individual (usual) packages do _not_ depend on a specific init
   system (eg. the systemd-specific stuff has to be optional)
b) we will continue to provide the existing alternatives, including
   fresh installation (selectable at installation time, or via
   separate installer images)
c) the init system will never be switched w/o an _explicit_ order
   by the operator
d) this decision stands until explicitly revoked


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287

