Re: multiarch vs. multilib

2023-06-11 Thread nick black
Simon McVittie left as an exercise for the reader:
> Sorry, I spend a lot of my work time immersed in this sort of thing and
> how it differs between distributions, so I tend to forget that most
> developers are able to stick to one distro and don't need to know this!

no offense at all, and thanks tremendously for the detailed
explanation! i've learned a lot in this thread.

-- 
nick black -=- https://www.nick-black.com
to make an apple pie from scratch,
you need first invent a universe.


signature.asc
Description: PGP signature


Re: multiarch vs. multilib

2023-06-09 Thread Simon McVittie
On Thu, 08 Jun 2023 at 19:22:02 -0400, nick black wrote:
> Simon McVittie left as an exercise for the reader:
> >   Debian-style multiarch or Fedora/Arch-style multilib is a much, much
> 
> this is at least the second time you've drawn this distinction
> in this thread. for anyone else who, like me, was uneasy with
> their understanding of the concept

Sorry, I spend a lot of my work time immersed in this sort of thing and
how it differs between distributions, so I tend to forget that most
developers are able to stick to one distro and don't need to know this!

I think this is important background knowledge for anyone who wants to
change how we handle architectures that are generally used as a foreign
architecture, particularly i386.

> https://wiki.debian.org/ToolChain/Cross#Multiarch_vs_Multilib

That's talking about the different gcc toolchains, but I actually meant
the difference in how libraries are packaged, which is a slightly separate
concept, although the two tend to go together.

Readers of this thread are hopefully all familiar with Debian
multiarch, where the linker (both runtime and compile-time) searches
/{usr/,}lib/x86_64-linux-gnu, /{usr/,}lib/i386-linux-gnu or equivalent
non-x86 paths, as appropriate for the architecture, and we install shared
libraries to those paths. The dynamic string token ${LIB} (see ld.so(8))
expands to lib/x86_64-linux-gnu or similar.

Fedora and its relatives (RHEL, CentOS and so on) use the directory
layout described by LSB/FHS, in which i386 libraries are installed into
/usr/lib and x86_64 libraries into /usr/lib64 (a "libQUAL" directory
using FHS terminology), and the runtime and compile-time linkers search
those paths as appropriate for the architecture. They install most
libraries for x86_64 only, from packages like glib2-*.x86_64.rpm, into
/usr/lib64. A subset of libraries (enough to support Wine and legacy i386
binaries) are also available for i386, from packages like glib2-*.i686.rpm,
into /usr/lib. The dynamic string token ${LIB} expands to lib64 or lib
as appropriate.

Arch Linux and its relatives (Manjaro and so on) use the reverse of
Fedora's layout: they mostly install x86_64 libraries from packages like
glib2-*-x86_64.pkg.tar.zst into /usr/lib, and a subset of libraries are
also available for i386, from packages like lib32-glib2-*-x86_64.pkg.tar.zst,
installing into /usr/lib32. ${LIB} expands to lib or lib32 as appropriate.
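The three layouts differ only in how the library directory is derived from the architecture. A minimal sketch of that mapping, reconstructed from this thread rather than from glibc source:

```python
# How the ${LIB} dynamic string token (see ld.so(8)) expands under the
# three layout families described above. The "family" names are labels
# for this sketch, not anything a real system reports.
def expand_lib(family: str, arch: str) -> str:
    is64 = (arch == "x86_64")
    if family == "debian":   # multiarch: per-tuple directories
        return "lib/x86_64-linux-gnu" if is64 else "lib/i386-linux-gnu"
    if family == "fedora":   # LSB/FHS libQUAL layout
        return "lib64" if is64 else "lib"
    if family == "arch":     # reverse of Fedora's layout
        return "lib" if is64 else "lib32"
    raise ValueError(f"unknown family: {family}")
```

Note that only the Debian scheme gives every architecture its own directory, which is what makes arbitrary pairs of architectures co-installable rather than just one 32/64-bit pair.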

Other distributions *usually* match one of those families (often the same
as Fedora, for example openSUSE and Gentoo seem to match that), but there
are exceptions, like Exherbo which sets the ${prefix} to /usr/GNU_TUPLE for
each architecture, resulting in libraries in /usr/GNU_TUPLE/lib.

Debian *mostly* uses multiarch, but we still have a small subset of
packages that install a library of a different architecture to the libQUAL
directories, for use with multilib compilers. On x86_64 they're named like
lib32stdc++6_*_amd64.deb and follow the Arch-like directory layout with a
lib32 directory, while on i386 they're named like lib64stdc++6_*_i386.deb
and follow the Fedora-like layout with a lib64 directory.

smcv



Re: multiarch coinstallability of libc6 / conflicting loader paths

2014-12-07 Thread Philipp Kern
On Sun, Dec 07, 2014 at 01:08:24AM +0100, Timo Weingärtner wrote:
> A short-term fix for the dpkg errors could be to express the conflicting
> loader paths with Conflicts: between the relevant libc6 packages.

Have multiarch conflicts/breaks been specified already? Sure, this would not be
the special case outlined in [1], but I don't see any in the archive yet...

Note that /lib64/ld-linux.so.2 on sparc64 is also a conflict because we symlink
/lib64 to /lib (which could not be easily avoided when building the port). Same
for s390x vs. ppc64.
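The conflict can be stated concretely: two ports whose glibc ELF interpreters resolve to the same path cannot have their libc packages unpacked together. A toy check, using the paths mentioned in this thread plus the standard x86 glibc interpreter paths:

```python
# ELF interpreter path per Debian architecture, limited to examples
# relevant to this thread (powerpc/mipsel sharing /lib/ld.so.1 is the
# dpkg failure reported above; the x86 entries are the usual glibc paths).
INTERP = {
    "powerpc": "/lib/ld.so.1",
    "mipsel": "/lib/ld.so.1",   # same path as powerpc: the reported conflict
    "i386": "/lib/ld-linux.so.2",
    "amd64": "/lib64/ld-linux-x86-64.so.2",
}

def loaders_conflict(arch_a: str, arch_b: str) -> bool:
    """True if the two ports ship their dynamic loader at the same path."""
    return INTERP[arch_a] == INTERP[arch_b]
```

With a /lib64 -> /lib symlink, as on sparc64, paths that differ textually can still collide on disk, which this sketch does not model.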

Kind regards
Philipp Kern

[1] https://wiki.ubuntu.com/MultiarchSpec#Architecture-specific_Conflicts.2BAC8-Replaces




Re: multiarch coinstallability of libc6 / conflicting loader paths

2014-12-06 Thread Paul Wise
On Sun, Dec 7, 2014 at 8:08 AM, Timo Weingärtner wrote:

> So I enabled architectures in dpkg, updated the package lists and tried
> installing libc6 packages for each architecture, but dpkg refused to unpack
> libc6:mipsel after libc6:powerpc had been installed, because both
> architectures use the same path for their dynamic loader: /lib/ld.so.1

FYI, some earlier discussion of this issue:

https://wiki.debian.org/Multiarch/LibraryPathOverview#ELF_interpreter-1
https://lists.linaro.org/pipermail/linaro-toolchain/2010-July/58.html
https://sourceware.org/glibc/wiki/ABIList
https://wiki.linaro.org/RikuVoipio/LdSoTable
https://gcc.gnu.org/ml/gcc-patches/2012-04/threads.html#00056

-- 
bye,
pabs

https://wiki.debian.org/PaulWise


--
To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: https://lists.debian.org/CAKTje6EoUvXcnPzY1GosQRYmD9V3x=9j8OazG45tZHvU9CM=i...@mail.gmail.com



Re: Multiarch capabilities for mingw crossbuilds too?

2014-08-09 Thread Stephen Kitt
On Fri, 8 Aug 2014 16:41:22 +0100, Wookey woo...@wookware.org wrote:
> +++ Joerg Desch [2014-08-08 05:38 +]:
> > Today I've read about Debian's Multiarch capabilities for the first time.
> > Is it possible to use this technique to build deb packages of libraries
> > for the mingw crosscompile toolchain too?
>
> In principle, yes. In practice right now, no. Stephen Kitt has looked
> into this and gave a comprehensive talk at this year's mini-debconf
> (But I can't find a URL as I'm not sure that Sylvestre has actually
> uploaded them anywhere). I think this pdf is the slides though:
> http://www.sk2.org/talks/crossporting-to-windows.pdf

That's it; the video isn't available (yet?).

> It has the potential to be exceedingly slick and remove a whole load
> of packaging cruft we currently have to make windows/mingw-ready versions of
> libraries.
>
> There is some discussion on this thread:
> https://lists.debian.org/debian-devel/2011/04/msg01243.html
>
> > I have to build Windows executables and therefore need some libraries.
> > For now, I build and install them locally. It would be fine to have a
> > way just to apt-get install them.
> >
> > Any chance?
>
> Yes, but it needs some work. I'm sure Stephen would love some help :-)
> I don't know if he's made any progress since feb.

I've been mostly working on the change requested by Guillem in
https://bugs.debian.org/606825 so that we can get Windows added as an
architecture in dpkg, which is the first step towards a multi-arched
MinGW-w64 toolchain. I need to write things up to follow up on the bug; the
short version is that it's very complicated, and I now understand very well
why upstream went for an imperfect triplet.

As Wookey says, I'd love some help, once there are things to help with. Keep
an eye on #606825, and once the architecture exists in dpkg we'll be able to
start fixing up packages (debian-ports style) and anyone interested will be
able to chip in!

> The trickiest part is that, having demonstrated that this works, we
> would have to change the definition of 'architecture independent' a
> little to include 'posix/non-posix' which mostly means moving a lot of
> libc stuff from arch-independent locations to arch-dependent
> locations, and that might be a hard sell in Debian. It _should_ only
> affect libc packaging, but work needs to be done to demonstrate
> that. Everything else is straightforward, and indeed a simplification of
> the current state for any package that produces win32/64 libs.

That's a very accurate summary, thanks! And the next big hurdle after getting
dpkg updated...

Regards,

Stephen




Re: Multiarch capabilities for mingw crossbuilds too?

2014-08-08 Thread Wookey
+++ Joerg Desch [2014-08-08 05:38 +]:
> Today I've read about Debian's Multiarch capabilities for the first time.
> Is it possible to use this technique to build deb packages of libraries
> for the mingw crosscompile toolchain too?

In principle, yes. In practice right now, no. Stephen Kitt has looked
into this and gave a comprehensive talk at this year's mini-debconf
(But I can't find a URL as I'm not sure that Sylvestre has actually
uploaded them anywhere). I think this pdf is the slides though:
http://www.sk2.org/talks/crossporting-to-windows.pdf

It has the potential to be exceedingly slick and remove a whole load
of packaging cruft we currently have to make windows/mingw-ready versions of
libraries.

There is some discussion on this thread:
https://lists.debian.org/debian-devel/2011/04/msg01243.html

> I have to build Windows executables and therefore need some libraries.
> For now, I build and install them locally. It would be fine to have a
> way just to apt-get install them.
>
> Any chance?

Yes, but it needs some work. I'm sure Stephen would love some help :-) 
I don't know if he's made any progress since feb.

The trickiest part is that, having demonstrated that this works, we
would have to change the definition of 'architecture independent' a
little to include 'posix/non-posix' which mostly means moving a lot of
libc stuff from arch-independent locations to arch-dependent
locations, and that might be a hard sell in Debian. It _should_ only
affect libc packaging, but work needs to be done to demonstrate
that. Everything else is straightforward, and indeed a simplification of the 
current state for any package that produces win32/64 libs.

Wookey
-- 
Principal hats:  Linaro, Emdebian, Wookware, Balloonboard, ARM
http://wookware.org/





Re: multiarch: arch dependent header file path choice

2014-07-03 Thread Matthias Klose
On 28.06.2014 19:44, Osamu Aoki wrote:
> Hi,
>
> The path for the arch dependent header file seems to have several options.
>
>  1) /usr/include/<multiarch>/*.h
>  2) /usr/include/<multiarch>/packagename/*.h
>  3) /usr/lib/<multiarch>/packagename/include/*.h
>
> I would like to know rationale for each choice, especially between 2 and 3.

1) has the advantage that it is found in the standard include path.

3) is a cheap workaround for packages installing everything
   into a common prefix and then either providing symlinks in
   other places or forcing users to rely on pkg-config or other
   methods to find files.

You are missing 1b) /usr/include/packagename/*.h

There should be almost no issues for packages putting header files directly in
/usr/include or /usr/include/<multiarch>. Yes, there are some dumb tests
checking for the existence of files at a certain path which will break.

Packages installing header files in /usr/include/packagename usually need
knowledge of the include path, and you'll break things just by moving all
header files, or a part of them, to /usr/include/<multiarch>/packagename,
because depending packages expect that a single include path is sufficient.
The solution here is to use a fake header in /usr/include/packagename which
includes the appropriate header in /usr/include/<multiarch>/packagename.
python2.7's and python3.4's pyconfig.h file is such an example.  But even
here you find configure checks that grep pyconfig.h for certain features,
which break with such a setup ... and of course it needs updates for new
architectures.
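The fake-header trick can be sketched as a generator for such a forwarding header. The package name "foo" and the macro-to-tuple table below are hypothetical, and the real pyconfig.h shim is more involved; this only shows the dispatch shape:

```python
# Emit a forwarding header that dispatches to the multiarch-specific
# copy of a real header, in the spirit of Debian's pyconfig.h shim.
# The macro names and paths are illustrative assumptions.
def forwarding_header(package: str, header: str, tuples: dict) -> str:
    lines = []
    for i, (macro, tup) in enumerate(tuples.items()):
        lines.append(("#if" if i == 0 else "#elif") + f" defined({macro})")
        lines.append(f"#include <{tup}/{package}/{header}>")
    lines.append("#else")
    lines.append(f'#error "unknown multiarch tuple for {package}"')
    lines.append("#endif")
    return "\n".join(lines)
```

As the message notes, anything that greps the forwarding header for feature macros will break, because the macros now live in the per-architecture copy.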

> I am sure they are all functioning choices, but it is intriguing to see choice 3.
>
> (I was looking for the typedef of gsize in gobject header files in
> /usr/include/glib-2.0.  It took me time to find it in
> /usr/lib/x86_64-linux-gnu/glib-2.0/include/glibconfig.h)

this doesn't seem to conform to the FHS, and I assume an FHS-conformant
packaging wouldn't make things worse.

> As I look around on my system, I observe the following.
>
> For 1, *.h are:
> expat_config.h
>
> ffi.h
> ffitarget.h

one of them can live in /usr/include; however, I personally prefer to install
header files in one location.

> fpu_control.h
> gmp.h

this is an example where the upstream header was patched to co-install on 32/64
bit systems not knowing about multiarch.

> ieee754.h
> lua5.1-deb-multiarch.h
> lua5.2-deb-multiarch.h
> zconf.h
>
> For 2, packagename are:
> python3.3m

see above for the explanation for python.

> openssl
> ruby-2.0.0
> c++

this is in the standard c++ include path, so it doesn't need any special 
configury.

> ...
>
> For 3, packagename are:
> glib-2.0
> gtk-2.0
> gtk-3.0
> dbus-1.0 (dbus/dbus-arch-deps.h as *.h)

what a surprise ... I assume many upstream developers are working for distros
not using multiarch.

  Matthias





Re: multiarch: arch dependent header file path choice

2014-06-29 Thread Simon McVittie
On 28/06/14 18:44, Osamu Aoki wrote:
> The path for the arch dependent header file seems to have several options.
>
>  1) /usr/include/<multiarch>/*.h
>  2) /usr/include/<multiarch>/packagename/*.h
>  3) /usr/lib/<multiarch>/packagename/include/*.h

The correct answer depends:

(a) what users are meant to #include (e.g. <foo.h> vs. <foo/foo.h>)

(b) whether users are meant to be able to #include the header from the
default include path, or whether they are meant to have to use
pkg-config or similar to select one of potentially several incompatible
versions (http://ometer.com/parallel.html)

(1) is correct if you are meant to include <foo.h> without pkg-config.

(2) is correct if you are meant to include <foo/foo.h> without
pkg-config, or <foo.h> with -I.../packagename provided by pkg-config.

(3) is correct if your upstream cares about lib64-style bi-arch
systems, and is something of a workaround for Debian-style multiarch
being relatively new. If lib64 distributions like Fedora had always
had Debian-style multiarch, perhaps it would be
/usr/include/<multiarch>/glib-2.0/glibconfig.h; but Debian and its
derivatives are the only distros where /usr/include/<multiarch> exists,
and upstreams want a solution that works everywhere.

I conjecture that there are no significant upstreams that care about
multiarch on Debian, but not biarch on Fedora. :-)

> (I was looking for the typedef of gsize in gobject header files in
> /usr/include/glib-2.0.  It took me time to find it in
> /usr/lib/x86_64-linux-gnu/glib-2.0/include/glibconfig.h)

FYI, GLib's build-time API is that you use the glib-2.0 pkg-config
module, which adds /usr/lib/<multiarch>/glib-2.0/include to the search
path; its upstream maintainers would consider anything involving
#include .../glib-2.0/... to be incorrect.

gsize is always meant to be the same size as size_t (as checked by
configure). GLib has a "no system headers included in our headers"
policy to help its portability to systems with non-ISO or non-POSIX
headers, so it needs configure checks for gint32 (= int32_t) etc.
anyway. size_t doesn't need a header on ISO C systems, but I think GLib
might support (or once have supported) environments where size_t doesn't
exist and sizeof() returns unsigned or something.

S





Re: multiarch: arch dependent header file path choice

2014-06-28 Thread brian m. carlson
On Sun, Jun 29, 2014 at 02:44:21AM +0900, Osamu Aoki wrote:
> Hi,
>
> The path for the arch dependent header file seems to have several options.
>
>  1) /usr/include/<multiarch>/*.h
>  2) /usr/include/<multiarch>/packagename/*.h
>  3) /usr/lib/<multiarch>/packagename/include/*.h
>
> I would like to know rationale for each choice, especially between 2 and 3.

Choice 1 is just the default location for most headers.

Choice 2 is useful if you have multiple versions of the same library.
For example, you might want to have the headers for Ruby 2.0 and Ruby
2.1 installed at the same time.  They're going to ship mostly the same
headers, so putting them in different directories allows them to be
co-installable.  Some upstreams prefer this location.

Choice 3 is for private header files.  Most of GLib's headers are of
style 2, but it ships one file, which is autogenerated, in this location
because it's only intended to be included from other GLib header files.

This method is used because on distributions like CentOS that don't have
multiarch, these private header files often contain arch-dependent
configuration, so they are placed in /usr/lib or /usr/lib64 as
appropriate so that the 32-bit and 64-bit packages are co-installable.

-- 
brian m. carlson / brian with sandals: Houston, Texas, US
+1 832 623 2791 | http://www.crustytoothpaste.net/~bmc | My opinion only
OpenPGP: RSA v4 4096b: 88AC E9B2 9196 305B A994 7552 F1BA 225C 0223 B187




Re: OCaml and running shipped binaries (was Re: multiarch and interpreters/runtimes)

2013-05-05 Thread Mehdi Dogguy

On 2013-05-05 03:50, Adam Borowski wrote:
> On Sun, May 05, 2013 at 12:08:06AM +0200, Stéphane Glondu wrote:
> > As far as bootstrapping is concerned, the OCaml sources include
> > precompiled (bytecode) executables that are used in a first stage of the
> > build process (i.e. ocaml doesn't build-depend on itself). So no need
> > for cross-compilation there. OCaml has very few build-dependencies
> > (there are Tcl/Tk/libX11, but they are optional) and should always be
> > buildable natively.
>
> Wait, wait, wait... so OCaml ships precompiled binaries and runs them during
> the build?  It seems so, as it FTBFSes if you remove the binaries from boot/.

If you do it without adapting the packaging… of course, it will FTBFS!

/me o_O

> That's RC, I think.

It is not. (and it is not the only package doing that).

--
Mehdi





Re: multiarch and interpreters/runtimes

2013-05-04 Thread Guillem Jover
On Sat, 2013-05-04 at 06:36:31 +0200, Matthias Klose wrote:
> On 19.04.2013 00:33, Guillem Jover wrote:
> > I think the full-multiarch support for python in
> > experimental should really be reverted.
>
> No.  This is backward, and the wrong way to go forward.

Sorry, but the way to go forward is not to subvert the dependency
system with metadata that can allow breaking installed systems.

> I do acknowledge that
> there are issues with the current state of dpkg, but I'm not seeing how
> you are planning to address these for jessie, and if this can be used for
> the jessie release, and not only in jessie+1.  There are other ways to
> address the issue you did raise, but they could be considered as ugly, but
> in the end I would prefer doing with an ugly plan B, if we don't have a
> plan A.

I'd be interested to hear about those ugly solutions; from where I
sit, though, this is simply not a matter of ugly, but wrong.

> You always can remove the Multi-Arch attribute from packages, but developers
> should continue to prepare interpreters for a multiarch environment.

Switching interpreters that allow them to be fully multiarch should be
done and played with, uploading those that can break the system to
Debian unstable should not be considered acceptable.

Also I did say, “full-multiarch support” should be reverted, which
implies partial multiarch support could still be deployed w/o subverting
the dependency system.

Regards,
Guillem





Re: multiarch and interpreters/runtimes

2013-05-04 Thread Stéphane Glondu
On 18/04/2013 16:41, Matthias Klose wrote:
> So what is the status for some runtimes/interpreters (would like to see some
> follow-up/corrections from package maintainers)?
> [...]
>  - Lua, Ocaml, Haskell, Guile, ... ?

First, let me explain a few notions that will be useful to grasp the
situation of OCaml wrt multiarch. There are actually two compilers.

One of them is the bytecode one and uses its own binary format for
compiled objects. For pure OCaml code, this bytecode is portable (pretty
much like Java). There is no shared library mechanism and all
executables are statically linked. There is a FFI mechanism that seems
to work like Perl, Python and Java (if there is something fundamentally
different between these three, please tell me as I've probably
misunderstood something). I expect the same issues here wrt multiarch,
but I don't understand them fully yet. Foreign code (from the language
point of view, not dpkg's one!) is usually called bindings in OCaml
community. The bytecode compiler (/usr/bin/ocamlc) and runtime
(/usr/bin/ocamlrun, written in C) are equally supported on all Debian
architectures.

The other compiler is the native one (/usr/bin/ocamlopt), which
compiles to asm and uses the regular toolchain (as, ld...). There is no
shared library mechanism for OCaml code (and the runtime is always
statically linked), but the resulting executables can still use shared
libraries because of bindings. The native compiler only exists on a
handful of architectures. I expect no problems here wrt multiarch (or,
let's say no more that with any C program).

Note that, though portable, bytecode is usually recompiled on all
architectures because of native code (we don't bother to put bytecode
and native code in different binary packages).

Additionally, there is what is called the "plugin" mechanism with both
compilers, which can be thought of as an equivalent of dlopen (but not mere
FFI).

It turns out that most of the files needed for development (static
libraries, compiled headers, metadata files) and runtime (bindings for
bytecode, libraries in plugin form) are all installed in `ocamlc
-where`, which is currently /usr/lib/ocaml, but could be changed easily.
This is basically forced by the upstream toolchain (and our policy in
Debian).

However, there is no real standard for plugins, which can be in `ocamlc
-where` or not, depending on the application using them. But AFAIK,
there are very few packages that use the plugin mechanism with plugins
outside of `ocamlc -where`.

So my take would be to move `ocamlc -where` to
/usr/lib/$DEB_HOST_MULTIARCH/ocaml (it can be done during the next
transition). Most library packages (runtime AND dev) ship files only
there (and in /usr/share). After that, I don't know exactly what needs
to be done.

Concerning cross-compilation... well, there is no out-of-the-box
upstream support for cross-compilation, but I guess it can be hacked
(see mingw-ocaml... but Romain Beauxis is more knowledgeable on this).

As far as bootstrapping is concerned, the OCaml sources include
precompiled (bytecode) executables that are used in a first stage of the
build process (i.e. ocaml doesn't build-depend on itself). So no need
for cross-compilation there. OCaml has very few build-dependencies
(there are Tcl/Tk/libX11, but they are optional) and should always be
buildable natively.

Concerning embedding the runtime in another application... well, that
doesn't seem to happen often. The only case I'm aware of is an Apache
module, and I don't know exactly how multiarch would interact with that.
From what I've seen, when there is a need for OCaml scripting
capabilities, the main application is already written in OCaml, and the
regular toolchain is used to compile the script into a plugin and load
it on the fly.


Cheers,

-- 
Stéphane





OCaml and running shipped binaries (was Re: multiarch and interpreters/runtimes)

2013-05-04 Thread Adam Borowski
On Sun, May 05, 2013 at 12:08:06AM +0200, Stéphane Glondu wrote:
> As far as bootstrapping is concerned, the OCaml sources include
> precompiled (bytecode) executables that are used in a first stage of the
> build process (i.e. ocaml doesn't build-depend on itself). So no need
> for cross-compilation there. OCaml has very few build-dependencies
> (there are Tcl/Tk/libX11, but they are optional) and should always be
> buildable natively.

Wait, wait, wait... so OCaml ships precompiled binaries and runs them during
the build?  It seems so, as it FTBFSes if you remove the binaries from boot/.

That's RC, I think.

-- 
ᛊᚨᚾᛁᛏᚣ᛫ᛁᛊ᛫ᚠᛟᚱ᛫ᚦᛖ᛫ᚹᛖᚨᚲ





Re: OCaml and running shipped binaries (was Re: multiarch and interpreters/runtimes)

2013-05-04 Thread Stéphane Glondu
On 05/05/2013 03:50, Adam Borowski wrote:
> On Sun, May 05, 2013 at 12:08:06AM +0200, Stéphane Glondu wrote:
> > As far as bootstrapping is concerned, the OCaml sources include
> > precompiled (bytecode) executables that are used in a first stage of the
> > build process (i.e. ocaml doesn't build-depend on itself). So no need
> > for cross-compilation there. OCaml has very few build-dependencies
> > (there are Tcl/Tk/libX11, but they are optional) and should always be
> > buildable natively.
>
> Wait, wait, wait... so OCaml ships precompiled binaries and runs them during
> the build?  It seems so, as it FTBFSes if you remove the binaries from boot/.

They are used only for the first phase of bootstrapping. Then,
everything is recompiled using the newly compiled binaries. And again.
Until there is a fixpoint. Everything that eventually ends up in binary
packages has been compiled from source, and using binaries that have
been compiled from source.
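The rebuild-until-stable scheme described above can be sketched abstractly. Here build() is a stand-in for the real compile step, and comparing values stands in for comparing build outputs:

```python
# Bootstrap-to-fixpoint: rebuild the compiler with itself until the
# output stops changing, so everything shipped was built from source
# by a compiler that was itself built from source.
def bootstrap(seed, build, max_rounds=10):
    current = seed                   # the shipped precompiled binary
    for _ in range(max_rounds):
        rebuilt = build(current)
        if rebuilt == current:       # fixpoint: self-reproducing compiler
            return rebuilt
        current = rebuilt
    raise RuntimeError("no fixpoint reached")
```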

> That's RC, I think.

Given the process I explained above, I disagree.

Also, there is no guarantee that one version of ocaml can compile the
next one: a new version of the compiler is free to use the new features
it introduces. Upstream only supports compiling one version with itself.


Cheers,

-- 
Stéphane





Re: multiarch and interpreters/runtimes

2013-05-03 Thread Matthias Klose
On 19.04.2013 00:33, Guillem Jover wrote:
> I think the full-multiarch support for python in
> experimental should really be reverted.

No.  This is backward, and the wrong way to go forward.  I do acknowledge that
there are issues with the current state of dpkg, but I'm not seeing how you are
planning to address these for jessie, and if this can be used for the jessie
release, and not only in jessie+1.  There are other ways to address the issue
you did raise, but they could be considered as ugly, but in the end I would
prefer doing with an ugly plan B, if we don't have a plan A.

You always can remove the Multi-Arch attribute from packages, but developers
should continue to prepare interpreters for a multiarch environment.

  Matthias





Re: multiarch and interpreters/runtimes

2013-04-21 Thread Helmut Grohne
On Sun, Apr 21, 2013 at 02:42:32AM +0300, Uoti Urpala wrote:
> Should that set of "running architectures" be just "architecture"?

No. Some packages can have multiple running architectures. The most
obvious case is M-A:same packages, where you can install the same package
for multiple architectures. Similarly, Arch:all packages can have
multiple running architectures if all their depended-upon packages
are M-A:foreign or have multiple running architectures themselves.

> Have you tried to somehow count the affected packages? Where did you get
> the "small number" from? There are over 2500 packages with a dependency
> relationship on python alone that are not named python-* (to exclude
> python module packages). Is the proportion of those with Python scripts
> in addition to other code really that low?

I imagined that number. No, I don't have any data, and I have to admit
that your preliminary data collection hints at a problem here.

> Would something like apt-file be split into 3 - apt-file,
> apt-file-perl-scripts, apt-file-python-scripts?

In the case of apt-file, the split would likely consist of moving rapt-file
to its own package and recommending it from apt-file. Still, yes, this
split would be required by my proposal. On the other hand, this split
would actually be beneficial: you only have a rapt-file executable if
you can actually execute it.

I have to agree that this can result in thousands of package splits. Some
of the cases you pointed out will just require python without further
modules; there you can use python:any. Many of those splits are likely
the ones adding optional functionality, and some of them would actually
be beneficial, as apt-file's would. Still, in my opinion, splitting a
thousand packages appears to be less involved than changing the
dependency syntax.

What I have more doubts about is the actual implementation. There must
have been a reason for why this was not built into dpkg right away. A
quick check of the source indicates that dpkg really does not check
recursive dependencies at the moment. So basically the only way to
implement this proposal would be to write virtual and running
architectures to the status file. Even then this is not as easy as one
might imagine. When installing a M-A:same package suddenly attributes of
any number of other packages may change. The nice isolation that
currently exists by having discrete states and desired states would be
broken somewhat. Possibly we would have to consider reconfiguring
packages that change running architectures due to the installation or
removal of other packages. Worse, checking the reverse dependencies of a
to-be-removed package might simply be impossible to do without a
recursive algorithm.

Helmut





Re: multiarch and interpreters/runtimes

2013-04-20 Thread Helmut Grohne
On Sat, Apr 20, 2013 at 04:44:08AM +0300, Uoti Urpala wrote:
> It seems correct at first glance, but not enough to solve all the issues
> mentioned. Currently existing package relationships lack information
> that is necessary to do the right thing in all cases. Consider different
> kinds of dependencies a package P can have on package Q:

Let me attempt to draw these.

> 1) P contains code for arch that links with code from Q.
>    Q needs to work for arch.

P (any) -> Q (any)

> 2) P executes a binary provided by Q.
>    Any arch for Q is OK.

P (any) -> Q (any)

Most likely Q is M-A:foreign here. Alternatively M-A:allowed with a
different dependency:

P (any) -> Q:any (any)

Another variation comes if P is Arch:all, but those cases work similarly
to the first one.

> 3) P runs a script using system interpreter X, and depends on the
>    interpreter environment supporting functionality provided by Q.
>    Q needs to work for the arch matching installed version of X.

P (all) +-- X (any)
        `-- Q (any)

The interpreter will most likely be M-A:allowed. So all P has to do here
is not add :any to its dependencies. Then everything should work out
here. X and Q would usually have singleton sets for their virtual
architectures. (Only Arch:all and M-A:something packages can have
non-singleton sets here.) The dependencies of P are considered satisfied
only if those singleton sets are equal.

> 4) P runs a script in an embedded interpreter in its arch code, and
>    depends on the interpreter environment supporting functionality
>    provided by Q.
>    Q needs to work for arch.

Since P uses an interpreter X, it has to depend on it as well. So the
resulting dependency graph should look exactly like the one above,
except that P is arch:any now.

> In the most common case dependencies on a package Q are either all of
> type 1 or all of type 2, as long as Q only exposes one kind of
> interface; in the current multiarch spec Q indicates this by
> "Multi-Arch: same" for 1 and "foreign" for 2. However, dependency types
> 3 and 4 require adding more information in the depending package to
> allow determining what arch needs to be supported for Q.

The new dependency resolution is designed to catch precisely the cases
for 3 and 4 with no further annotations, because those annotations are
not necessary. We just assume that a package needs all of its
dependencies in the same architecture unless there are declarations that
lift this requirement. Currently there are two declarations that can do
this.

1. M-A:foreign tells the resolver that the rdep should not care about
   this particular package's architecture.
2. A package:any annotation (for M-A:allowed dependencies) says that the
   architecture of this particular dependency should be ignored.
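[Editor's note: the effect of those two declarations can be sketched as a small helper. This is hypothetical illustration code, not dpkg/apt internals; the function name and parameters are invented.]

```python
# Sketch of how the two declarations relax the same-architecture rule.
# Hypothetical helper, not dpkg/apt code; known_arches stands in for the
# architectures configured via dpkg --add-architecture.

def satisfying_arches(host_arch, q_multi_arch, dep_has_any, known_arches):
    """Which architectures of dependency Q satisfy a dependency coming
    from a package whose running architecture is host_arch?"""
    if q_multi_arch == "foreign":
        # Declaration 1: rdeps should not care about Q's architecture.
        return set(known_arches)
    if dep_has_any and q_multi_arch == "allowed":
        # Declaration 2: the dependency was written as "q:any", so the
        # architecture of this particular dependency is ignored.
        return set(known_arches)
    # Default: Q must run as the same architecture as the depender.
    return {host_arch}
```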

In the light of the above I do not quite understand what is missing to
support your use cases yet (besides an implementation). Can you explain
them in more detail? Examples would be helpful.

Helmut


Archive: http://lists.debian.org/20130420122744.ga17...@alf.mars



Re: multiarch and interpreters/runtimes

2013-04-20 Thread Uoti Urpala
Helmut Grohne wrote:
> On Sat, Apr 20, 2013 at 04:44:08AM +0300, Uoti Urpala wrote:
> > 3) P runs a script using system interpreter X, and depends on the
> >    interpreter environment supporting functionality provided by Q.
> >    Q needs to work for the arch matching installed version of X.
>
> P (all) +-- X (any)
>         `-- Q (any)
>
> The interpreter will most likely be M-A:allowed. So all P has to do here
> is not add :any to its dependencies. Then everything should work out
> here.

But that 'not add :any' is completely impractical. The default system
interpreter can only have one architecture - what #!/usr/bin/python3
executes. Multiple versions of that can not be coinstallable, and so
it's completely unreasonable for a foreign package containing Python
scripts to demand that you change your _default_ Python interpreter to
another architecture. It would immediately lead to conflicts. In a sane
system scripts written in pure Python must work with the default system
interpreter, whatever architecture that is.


> In the light of the above I do not quite understand what is missing to
> support your use cases yet (besides an implementation). Can you explain
> them in more detail? Examples would be helpful.

Consider a package that contains a Python script (#!/usr/bin/python)
doing image manipulation using python-imaging (Depends: python,
python-imaging) and an i686 binary using embedded Python (Depends:
libpython2.7, python-levenshtein). As above, installing this package
must not require changing your default system python to i686. So the
effective dependencies are: python:any, python-imaging:whatever python
is, libpython2.7:i686, python-levenshtein:i686.



Archive: 
http://lists.debian.org/1366468972.1964.44.camel@glyph.nonexistent.invalid



Re: multiarch and interpreters/runtimes

2013-04-20 Thread Helmut Grohne
On Sat, Apr 20, 2013 at 05:42:52PM +0300, Uoti Urpala wrote:
> Helmut Grohne wrote:
> > On Sat, Apr 20, 2013 at 04:44:08AM +0300, Uoti Urpala wrote:
> > > 3) P runs a script using system interpreter X, and depends on the
> > >    interpreter environment supporting functionality provided by Q.
> > >    Q needs to work for the arch matching installed version of X.
> >
> > P (all) +-- X (any)
> >         `-- Q (any)
> >
> > The interpreter will most likely be M-A:allowed. So all P has to do here
> > is not add :any to its dependencies. Then everything should work out
> > here.
>
> But that 'not add :any' is completely impractical. The default system
> interpreter can only have one architecture - what #!/usr/bin/python3
> executes. Multiple versions of that can not be coinstallable, and so
> it's completely unreasonable for a foreign package containing Python
> scripts to demand that you change your _default_ Python interpreter to
> another architecture. It would immediately lead to conflicts. In a sane
> system scripts written in pure Python must work with the default system
> interpreter, whatever architecture that is.

You point out a limitation that I'd consider to be a feature. My
proposal requires that every package has a single set of running
architectures that has to apply to all code contained. I did think about
lifting this requirement, but let me first point out how to work around
it. The workaround is not pretty, but doable in all cases.

What you have here is that your script should run in a different
architecture than the rest of the package. You need two sets of running
architectures and that means two packages. You move the script to a
separate Arch:all package with M-A:foreign and depend on it from P.

P (any) --- P-script (all, M-A:foreign) +-- X (any)
 `-- Q (any)

That way P-script will have the same running architecture as the
interpreter, but P can have a different running architecture. (Note that
I am using "running architecture" to refer to the single element of the
set of running architectures.)

> Consider a package that contains a Python script (#!/usr/bin/python)
> doing image manipulation using python-imaging (Depends: python,
> python-imaging) and an i686 binary using embedded Python (Depends:
> libpython2.7, python-levenshtein). As above, installing this package
> must not require changing your default system python to i686. So the
> effective dependencies are: python:any, python-imaging:whatever python
> is, libpython2.7:i686, python-levenshtein:i686.

After splitting your package, you have the following graph.

         ,-- P-script (all, M-A:foreign) +-- python
P (i686) +-- python-2.7:i686             `-- python-imaging
         `-- python-levenshtein:i686

So why am I proposing this limitation of a single set of running
architectures per package?

Not to do so would mean to annotate the dependencies with whatever
$otherpackage is. In my opinion this would clutter the dependency
syntax and the field itself. We already have/had problems to find a
syntax for the bootstrap stage annotations. A syntax change is very hard
to implement as it has to touch a large number of tools. In addition I
believe the solution outlined to be expressive enough to cover all cases
albeit introducing intermediate packages. Indeed the cases that have
popped up requiring these additional capabilities are a minority.
Having to split a small number of packages to achieve true multiarch
seems like a good trade off to complicating the dependency syntax to me.

Helmut


Archive: http://lists.debian.org/20130420193753.ga13...@alf.mars



Re: multiarch and interpreters/runtimes

2013-04-20 Thread Uoti Urpala
Helmut Grohne wrote:
> You point out a limitation that I'd consider to be a feature. My
> proposal requires that every package has a single set of running
> architectures that has to apply to all code contained.

Should that "set of running architectures" be just "architecture"?

I think that after adding such splitting of packages the system could
represent the required constraints, but I'm not sure if such splitting
is a practical way.


> Having to split a small number of packages to achieve true multiarch
> seems like a good trade off to complicating the dependency syntax to me.

Have you tried to somehow count the affected packages? Where did you get
the small number from? There are over 2500 packages with a dependency
relationship on python alone that are not named python-* (to exclude
python module packages). Is the proportion of those with Python scripts
in addition to other code really that low?

Would something like apt-file be split into 3 - apt-file,
apt-file-perl-scripts, apt-file-python-scripts?



Archive: 
http://lists.debian.org/1366501352.1964.55.camel@glyph.nonexistent.invalid



Re: multiarch and interpreters/runtimes

2013-04-19 Thread Steve Langasek
On Thu, Apr 18, 2013 at 04:18:38PM +0100, Neil Williams wrote:
> >  - Third party modules for interpreters should be cross-buildable.
> >    Many build systems for interpreter languages are written in the
> >    interpreter language itself. So you do require the interpreter
> >    for the build, and the development files for the host.
>
> To me, that is a traditional cross-build relationship, it doesn't
> require the installation of runtime files for the interpreter for the
> foreign architecture, only the native runtime and the foreign
> development files to support those third party modules which are
> architecture-dependent or have architecture-dependent build
> dependencies.
>
> I don't see a need to have the perl:i386 interpreter installed on amd64
> in order to build third party i386 perl modules, the amd64 interpreter
> should be fine, just as it is when cross building third party armel perl
> modules.

But you need the foreign-architecture libperl installable, for the perl
modules to be linked against.  This is the bit that's unimplemented today
(for most such interpreters, anyway).

-- 
Steve Langasek   Give me a lever long enough and a Free OS
Debian Developer   to set it on, and I can move the world.
Ubuntu Developerhttp://www.debian.org/
slanga...@ubuntu.com vor...@debian.org




Re: multiarch and interpreters/runtimes

2013-04-19 Thread Niko Tyni
On Thu, Apr 18, 2013 at 04:41:35PM +0200, Matthias Klose wrote:

> So what is the status for some runtimes/interpreters (would like to see some
> follow-up/corrections from package maintainers)?
>
>  - Perl: Afaik, Neil did prepare the interpreter to cross-build, and
>    to co-install the runtime and development files. What about
>    cross-building third party modules?

I haven't looked at the cross-building part much myself, so I'll let
Neil or Wookey answer that. I'll just point out that I have worked on
'full multiarch' support in perl [1] with a co-installable libperl and
the standard library, but that turned up the concerns expressed here by
Guillem about shared library 'embedded' interpreters and arch-dep modules.

There's also some status information by Wookey at
 http://wiki.debian.org/Multiarch/Perl

[1] 
http://anonscm.debian.org/gitweb/?p=perl/perl.git;a=shortlog;h=refs/heads/ntyni/multiarch-5.16

http://anonscm.debian.org/gitweb/?p=perl/perl.git;a=shortlog;h=refs/heads/ntyni/multiarch-5.14

-- 
Niko Tyni   nt...@debian.org


Archive: http://lists.debian.org/20130419100213.GC4918@madeleine.local.invalid



Re: multiarch and interpreters/runtimes

2013-04-19 Thread Niko Tyni
On Thu, Apr 18, 2013 at 11:01:17PM -0700, Steve Langasek wrote:
> On Thu, Apr 18, 2013 at 04:18:38PM +0100, Neil Williams wrote:
>
> > I don't see a need to have the perl:i386 interpreter installed on amd64
> > in order to build third party i386 perl modules, the amd64 interpreter
> > should be fine, just as it is when cross building third party armel perl
> > modules.
>
> But you need the foreign-architecture libperl installable, for the perl
> modules to be linked against.

I don't think you do. The modules aren't linked against libperl, it's
the other way around: libperl loads them at run time with dlopen(3).
They are effectively plugins in a private directory.

Obviously a few header files specific to the target architecture
are needed, but that should be enough AFAICS.
-- 
Niko Tyni   nt...@debian.org


Archive: http://lists.debian.org/20130419142641.GA4236@madeleine.local.invalid



Re: multiarch and interpreters/runtimes

2013-04-19 Thread Steve Langasek
On Fri, Apr 19, 2013 at 05:26:41PM +0300, Niko Tyni wrote:
> On Thu, Apr 18, 2013 at 11:01:17PM -0700, Steve Langasek wrote:
> > On Thu, Apr 18, 2013 at 04:18:38PM +0100, Neil Williams wrote:
>
> > > I don't see a need to have the perl:i386 interpreter installed on amd64
> > > in order to build third party i386 perl modules, the amd64 interpreter
> > > should be fine, just as it is when cross building third party armel perl
> > > modules.
>
> > But you need the foreign-architecture libperl installable, for the perl
> > modules to be linked against.
>
> I don't think you do. The modules aren't linked against libperl, it's
> the other way around: libperl loads them at run time with dlopen(3).
> They are effectively plugins in a private directory.

Hmm, well, it seems that's true for the case of libperl; but that means
there's suboptimal behavior when using libperl as an embedded interpreter
(which I assume is still supported?) because of likely use of RTLD_LOCAL by
the calling application: extensions for embedded interpreters that exist as
plugins with undefined symbols ('ldd -d -r') may be unable to resolve their
symbols at load time because the interpreter isn't in the global namespace.

So perhaps this is all a bit theoretical in practice, and people are
muddling through with such suboptimal behavior.

-- 
Steve Langasek   Give me a lever long enough and a Free OS
Debian Developer   to set it on, and I can move the world.
Ubuntu Developerhttp://www.debian.org/
slanga...@ubuntu.com vor...@debian.org




Re: multiarch and interpreters/runtimes

2013-04-19 Thread Niko Tyni
On Fri, Apr 19, 2013 at 08:01:40AM -0700, Steve Langasek wrote:
> On Fri, Apr 19, 2013 at 05:26:41PM +0300, Niko Tyni wrote:
>
> > The modules aren't linked against libperl, it's
> > the other way around: libperl loads them at run time with dlopen(3).
> > They are effectively plugins in a private directory.
>
> Hmm, well, it seems that's true for the case of libperl; but that means
> there's suboptimal behavior when using libperl as an embedded interpreter
> (which I assume is still supported?) because of likely use of RTLD_LOCAL by
> the calling application: extensions for embedded interpreters that exist as
> plugins with undefined symbols ('ldd -d -r') may be unable to resolve their
> symbols at load time because the interpreter isn't in the global namespace.
>
> So perhaps this is all a bit theoretical in practice, and people are
> muddling through with such suboptimal behavior.

Well, that's the way it's always been, and Python seems to be doing
the same thing.

Yes, using libperl as an embedded interpreter is supported and is the
main (and possibly the only) use case for linking against libperl in
the first place.

For reference, instances of the RTLD_LOCAL related problems would
be #416266 (freeradius) and #327585 (openldap).

How does this fit in with statically linked interpreters? /usr/bin/perl is
statically linked against libperl on i386, possibly mostly for historical
reasons, and I see so is /usr/bin/python2.7 (against libpython, obviously)
on at least amd64. If the modules were linked against libperl / libpython,
wouldn't the common case of the statically linked binary loading such a
module lead to two copies of the library functions in the same namespace?

(I expect it wouldn't as that seems impossible; maybe the dlopen(3)
 call would open the shared library but not bind any functions from there
 because they'd already be resolved by the versions linked into the binary?
 Would that still imply a cost in memory footprint?)
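[Editor's note: the RTLD_LOCAL vs. RTLD_GLOBAL distinction discussed in this exchange can be demonstrated from Python with ctypes. This is an illustration only, using libm as a stand-in for libperl/libpython, and it assumes a glibc-based Linux system.]

```python
import ctypes
import ctypes.util
import os

# libm stands in for an interpreter library such as libperl or libpython.
libname = ctypes.util.find_library("m") or "libm.so.6"

# Default dlopen mode is RTLD_LOCAL: the library's symbols resolve for
# this handle but are NOT entered into the global namespace, so a plugin
# dlopen()ed later with undefined Perl_*/Py* symbols could not bind them.
lib_local = ctypes.CDLL(libname)

# RTLD_GLOBAL publishes the symbols to subsequently loaded objects, which
# is what an embedding application needs for interpreter extensions.
lib_global = ctypes.CDLL(libname, mode=os.RTLD_GLOBAL)

lib_global.cos.restype = ctypes.c_double
lib_global.cos.argtypes = [ctypes.c_double]
```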
-- 
Niko Tyni   nt...@debian.org


Archive: http://lists.debian.org/20130419161645.GA6448@madeleine.local.invalid



Re: multiarch and interpreters/runtimes

2013-04-19 Thread Helmut Grohne
On Fri, Apr 19, 2013 at 12:33:07AM +0200, Guillem Jover wrote:
> As I pointed out on the debian-perl mailing list, after having
> discussed about multiarch support for perl, I don't think a fully
> multiarchified perl (nor at least python) should be uploaded to sid,
> as going full multiarch on the combination of a program frontend to an
> interpreter in shared library form, plus architecture dependent modules
> makes for situations that can bypass the dependency system and get
> into broken installations. Please see [0] for more details, although
> the solution I proposed is bogus and would not solve much as that
> does not help with arch-dep modules being depended by arch-indep ones
> and being run through the shared library interpreter, because the
> dependency to match is against the running interpreter, not just the
> program one.
>
>   [0] https://lists.debian.org/debian-perl/2012/12/msg0.html
>
> I've not checked the situation with other interpreters, but if they
> are similar to perl/python then doing full-multiarch might be a bad
> idea too for those. I think the full-multiarch support for python in
> experimental should really be reverted.

Incidentally I discussed this with wookey on #multiarch recently. I
asked him to review a proposed solution before exposing it to a wider
public, but apparently he was unable to allocate enough time yet, so
you, dear -devel readers, will have to do that. What follows is my
thoughts from that discussion.

The background is nicely summarized by Guillem Jover at
http://lists.debian.org/debian-perl/2012/12/msg0.html. I'd like to
revisit this little footnote from the multiarch spec:
https://wiki.ubuntu.com/MultiarchSpec#fnref-b110b386874a9b24eaecd36fa24653a467ee27c4
It essentially claims that a particular way of resolving dependencies
(explained below) is impossible to do with dpkg. Let us assume dpkg not
to be a limitation here and see what would happen.

Let us assume that in addition to an Architecture each binary package
gains two sets of /virtual architectures/ and /running architectures/.
These are not fields in the binary package. They are sets of regular
architectures. Both are to be computed dynamically by the resolver. In
addition we change the rules for dependency resolution. This affects
especially dependency resolution involving Architecture:all packages.

Rules for dependencies and computation of virtual architectures and
running architectures:

 * "all" is never a member of virtual or running architectures.
 - Running architectures:
   * Start with the set of architectures known to dpkg.
   * If the package has a real architecture (i.e. not all), intersect
     with the singleton set of that architecture.
   * For each dependency which is not marked :any compute the set of
     virtual architectures and intersect.
   * The resulting set is considered the running architectures. If it
     is empty the dependencies are considered unsatisfied.
 - Virtual architectures:
   * By default equal to the running architectures.
   * If the package is Multi-Arch:same, use the union of all running
     architecture sets of packages with the same name installed.
   * If the package is Multi-Arch:foreign, use the set of architectures
     known to dpkg.
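[Editor's note: the rules above can be sketched as a small mutually recursive computation. This is hypothetical illustration code, not dpkg; it is deliberately simplified: no versions, one installed instance per package name (so the M-A:same union degenerates), and a naive cycle guard.]

```python
# Sketch of the proposed resolver rules. Running and virtual
# architecture sets are computed on demand, never stored.

KNOWN_ARCHES = {"amd64", "i386"}   # architectures known to dpkg

class Pkg:
    def __init__(self, name, arch, multi_arch=None, depends=()):
        self.name = name
        self.arch = arch                 # "amd64", "i386" or "all"
        self.multi_arch = multi_arch     # None, "same", "foreign", "allowed"
        self.depends = list(depends)     # dependency names, optionally ":any"

def running_arches(pkg, index, seen=frozenset()):
    if pkg.name in seen:                 # naive guard against dependency cycles
        return set(KNOWN_ARCHES)
    arches = set(KNOWN_ARCHES)
    if pkg.arch != "all":                # real architecture: singleton set
        arches &= {pkg.arch}
    for dep in pkg.depends:
        if dep.endswith(":any"):         # this dependency's arch is ignored
            continue
        arches &= virtual_arches(index[dep], index, seen | {pkg.name})
    return arches                        # empty set => unsatisfiable

def virtual_arches(pkg, index, seen=frozenset()):
    if pkg.multi_arch == "foreign":      # rdeps need not care about the arch
        return set(KNOWN_ARCHES)
    # M-A:same would union running sets over co-installed instances; with
    # one instance per name it degenerates to the running set.
    return running_arches(pkg, index, seen)
```

On the first perl example below (prog-link:amd64 pulling in perl-xs-module-b:i386 through an Arch:all module), this computation yields an empty running set for prog-link, i.e. unsatisfied dependencies, as intended.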

Usually non-data packages would have exactly one running architecture. I
had a hard time coming up with an example where this is not the case. It
is fakechroot.

Let us visit some examples from Guillem's mail to show how this could
fix some of the problems:

                ,-- libperl:amd64
prog-link:amd64 +-- perl-xs-module-a:amd64
                `-- perl-pm-module-a:all -- perl-xs-module-b:i386

We would ensure that perl-pm-module-a has no M-A:foreign set. Then
perl-pm-module-a would have running and virtual architecture i386. The
set of running architectures of prog-link is a subset of the
intersection of its own architecture (amd64) and perl-pm-module-a's
virtual architecture (i386) and thus empty. Its dependencies would not
be considered satisfied.

                ,-- perl-xs-module-a:i386 (M-A:same)
perl-script:all +-- perl-xs-module-b:amd64 (M-A:same)
                `-- perl:amd64 (M-A:allowed)

The set of running architectures for perl-script would be empty, since
it consists of the intersection of i386 and amd64 (from the respective
perl modules).

                ,-- xs-pm-wrapper-a:all -- xs-so-module-b:i386 (M-A:same)
perl-script:all +-- perl-xs-module-b:amd64 (M-A:same)
                `-- perl:amd64 (M-A:allowed)

We would ensure that xs-pm-wrapper-a does not have M-A:foreign set. The
virtual architectures for xs-pm-wrapper-a would only contain i386. The
running architectures for perl-script would be the intersection of amd64
(from perl-xs-module-b) and i386 (from xs-pm-wrapper-a), i.e. empty.

Barring any mistakes this appears like a possible solution to the
problem. Did you spot anything obviously wrong? Any example where you
don't see how to work it out?

So let's talk 

Re: multiarch and interpreters/runtimes

2013-04-19 Thread Antonio Terceiro
On Thu, Apr 18, 2013 at 04:41:35PM +0200, Matthias Klose wrote:
>  - Ruby: Afaik, not yet started on multiarch.

Ruby 2.0 has multiarch support upstream. The Debian packaging is not
finished yet, but it will have multiarch.

I do not plan to multiarch 1.9.

-- 
Antonio Terceiro terce...@debian.org




Re: multiarch and interpreters/runtimes

2013-04-19 Thread Uoti Urpala
Helmut Grohne wrote:
> Barring any mistakes this appears like a possible solution to the
> problem. Did you spot anything obviously wrong? Any example where you
> don't see how to work it out?

It seems correct at first glance, but not enough to solve all the issues
mentioned. Currently existing package relationships lack information
that is necessary to do the right thing in all cases. Consider different
kinds of dependencies a package P can have on package Q:

1) P contains code for arch that links with code from Q.
   Q needs to work for arch.
2) P executes a binary provided by Q.
   Any arch for Q is OK.
3) P runs a script using system interpreter X, and depends on the
   interpreter environment supporting functionality provided by Q.
   Q needs to work for the arch matching installed version of X.
4) P runs a script in an embedded interpreter in its arch code, and
   depends on the interpreter environment supporting functionality
   provided by Q.
   Q needs to work for arch.

In the most common case dependencies on a package Q are either all of
type 1 or all of type 2, as long as Q only exposes one kind of
interface; in the current multiarch spec Q indicates this by
"Multi-Arch: same" for 1 and "foreign" for 2. However, dependency types
3 and 4 require adding more information in the depending package to
allow determining what arch needs to be supported for Q.



Archive: 
http://lists.debian.org/1366422248.1964.26.camel@glyph.nonexistent.invalid



Re: multiarch and interpreters/runtimes

2013-04-18 Thread Andrew Shadura
Hello,

On 18 April 2013 16:41, Matthias Klose <d...@debian.org> wrote:
>  - Tcl/Tk: Wookey and Dimitrij did start on that in Ubuntu, patches
>    are available in Debian bug reports.
>    Currently the shared libraries are split out into separate packages,
>    and are co-installable. Not yet tested if this is enough to run an
>    embedded interpreter.

Could I please have more info? :)

-- 
WBR, Andrew


Archive: 
http://lists.debian.org/CAMb-mAxnn=_vPXzPL2qvsKqM4kDMUOEZukF=xjg6c5kmms_...@mail.gmail.com



Re: multiarch and interpreters/runtimes

2013-04-18 Thread Goswin von Brederlow
On Thu, Apr 18, 2013 at 04:41:35PM +0200, Matthias Klose wrote:
> There are maybe not many use cases where you do want to install an interpreter
> like python or perl for a foreign architecture, but there are some use cases
> where such a setup makes sense.  For now I see this limited to architecture
> pairs like amd64/i386, armel/armhf, ia64/i386, i.e. for architectures where the
> foreign interpreter can be run (without qemu).  Use cases are
>
>  - running the OpenJDK hotspot JIT for i386 on ia64, an implementation
>    which is much faster than the OpenJDK Zero interpreter on ia64.
>    This can be done today in wheezy. The OpenJDK packages do use
>    different prefixes and are co-installable.  While the packages come
>    with binaries, these are all installed using alternatives, so the
>    packages can even be installed for more than one architecture (no,
>    I don't like this alternatives schema much).

ia64 no longer supports i386 (and has not for a while now). The support
was removed from the hardware years ago, and from the kernel too. You
have to use qemu there.

Is that still faster than the ia64 interpreter?
 
> So what is the status for some runtimes/interpreters (would like to see some
> follow-up/corrections from package maintainers)?
>
>  - OpenJDK: runtime and development files are co-installable, the
>    package itself is not cross-buildable, and afaik nobody did try
>    to cross build any bindings.
>
>  - Perl: Afaik, Neil did prepare the interpreter to cross-build, and
>    to co-install the runtime and development files. What about
>    cross-building third party modules?
>
>  - Python: co-installable runtime and development files, cross-buildability
>    upstreamed for 2.7.4 and 3.3.1. There is a way to cross-build third
>    party modules using distutils/setuptools. Packages are available in
>    experimental, but because I didn't backport this to 2.6 and 3.2, not
>    very useful. Install an Ubuntu raring (13.04) chroot to experiment
>    with these. Details at http://wiki.debian.org/Python/MultiArch
>
>  - Ruby: Afaik, not yet started on multiarch.
>
>  - Tcl/Tk: Wookey and Dimitrij did start on that in Ubuntu, patches
>    are available in Debian bug reports.
>    Currently the shared libraries are split out into separate packages,
>    and are co-installable. Not yet tested if this is enough to run an
>    embedded interpreter.
>
>  - Lua, Ocaml, Haskell, Guile, ... ?
>
> Matthias

The support for this comes with Multi-Arch: allowed, which will only be
allowed after wheezy. The interpreter declares itself Multi-Arch:
allowed and then depending packages can specify whether they must match
the ABI (loadable binary plugins) or can work with any arch (binary
packages that also contain some perl scripts).

Co-installability of interpreters is generally not planned and would
have to be made as custom solutions, i.e. place the interpreter in
/usr/lib/x86_64-linux-gnu/perl/ and provide /usr/bin/perl as an
alternative.

Also cross-building is not relevant for this issue. Normally you would
build it natively (e.g. in an i386 chroot) and then install it on some
other arch (amd64).

But I think multiarch support for interpreters will be slow. There
aren't many use cases so not a lot of pressure to implement it.

Anyway, all of this has to wait till after wheezy. So get that out
first.

MfG
Goswin


Archive: http://lists.debian.org/20130418145645.GF21076@frosties



Re: multiarch and interpreters/runtimes

2013-04-18 Thread Dmitrijs Ledkovs
On 18 April 2013 15:55, Andrew Shadura <bugzi...@tut.by> wrote:
> Hello,
>
> On 18 April 2013 16:41, Matthias Klose <d...@debian.org> wrote:
> >  - Tcl/Tk: Wookey and Dimitrij did start on that in Ubuntu, patches
> >    are available in Debian bug reports.
> >    Currently the shared libraries are split out into separate packages,
> >    and are co-installable. Not yet tested if this is enough to run an
> >    embedded interpreter.
>
> Could I please have more info? :)


Well there are patches to move .so libraries from /usr/lib/tk8.*/ to
/usr/lib/$(MULTIARCH)/tk8.*, same for tcl and matching tcltk-defaults
package to have similar symlinks everywhere.
And basically mark that package with .so's as multiarch:same. The
interpreter packages are still not marked multi-arch anything. And as
doko said, there wasn't anything else done e.g. test embedded
interpreter use-case.

Personally, I'm not yet convinced about this interpreter
multiarchification, but hey Debian is a Universal OS ;-) and I don't
see any reason to not do this.

Regards,

Dmitrijs.


Archive: 
http://lists.debian.org/CANBHLUg8ibjrY0p+A=VXk_mS=ne26i1ignypdxbcx6cuwcz...@mail.gmail.com



Re: multiarch and interpreters/runtimes

2013-04-18 Thread Neil Williams
On Thu, 18 Apr 2013 16:41:35 +0200
Matthias Klose <d...@debian.org> wrote:

>  - running a gdb:amd64 on i386 to debug 64bit binaries. This is the
>    reason for our current gdb64 package on i386, but it is missing the
>    support for the python based pretty printer.
>    Installing gdb:amd64 on i386 in wheezy will remove the whole i386
>    python stack and then install the amd64 python stack. For this use
>    case the needed amd64 python stuff should just be installed without
>    removing the i386 packages.
>
>  - install vim for a foreign architecture. Ok, not really a good use case,
>    but it comes with an insane amount of embedded interpreters. It
>    should be installable without removing the native interpreter
>    packages, and the embedded interpreters should still be usable.

Hmm, is this really a strong use case? vim-tiny can be a pain
sometimes, I know, but are those embedded interpreters in vim-full all
that useful? (Or am I missing something?)

>  - Third party modules for interpreters should be cross-buildable.
>    Many build systems for interpreter languages are written in the
>    interpreter language itself. So you do require the interpreter
>    for the build, and the development files for the host.

To me, that is a traditional cross-build relationship, it doesn't
require the installation of runtime files for the interpreter for the
foreign architecture, only the native runtime and the foreign
development files to support those third party modules which are
architecture-dependent or have architecture-dependent build
dependencies.

I don't see a need to have the perl:i386 interpreter installed on amd64
in order to build third party i386 perl modules, the amd64 interpreter
should be fine, just as it is when cross building third party armel perl
modules.

> When trying to build a base system, you meet build dependencies
> on interpreters very early.  You can either work around these
> by staged builds, or provide an interpreter which supports cross
> builds for third party packages. However most interpreters require
> some work to cross-build, and you have to figure out how to cross
> build the extensions.

Staged builds will still be needed for bootstrapping to sort out
build-dependencies on things like documentation-build tools used by
packages which need to be built quite early in the process. However, if
the need for staged builds can be reduced somewhat by better support
for interpreters and the tools which use them, that's good. I'm just
not sure it provides that for the typical cross build use cases.

I don't see the link between cross-building and this
special-architecture pair relationship. After all, it is rare that
someone needs to cross build packages for armhf on an armel machine
and the reverse is just a chroot. Building i386 on amd64 is similar,
just use debootstrap. I've seen just an individual enquiry about
building amd64 on i386 but that is getting very specialised. Much more
likely to be an amd64 or i386 buildd preparing packages for armhf where
this support won't matter. The typical use case for cross-building is
to speed up the build or build where no tools yet exist and will
therefore remain with the installation of development files for the
foreign architecture rather than runtime files.

 So the gdb/vim use cases would require co-installation of the runtime files 
 for
 more than one architecture (including in most cases the shared library, some
 standard libraries, or subset of these), and keeping the interpreter for the
 default architecture.

I agree that this is limited to the gdb/vim use cases.
 
 The cross build case would require in addition co-installed development
 packages, provided that these can be used for a cross-build.

The cross-build use case would require the co-installed development
packages but I don't see a need for foreign runtime files. Interpreters
(except special cases inside their own builds) should isolate the third
party modules from the architecture of the interpreter itself. Bugs in
that support need to be fixed in the interpreter, not the distro or
toolchain.

 So what is the status for some runtimes/interpreters (would like to see some
 follow-up/corrections from package maintainers)?
 
  - OpenJDK: runtime and development files are co-installable, the
package itself is not cross-buildable, and afaik nobody did try
to cross build any bindings.
 
  - Perl: Afaik, Neil did prepare the interpreter to cross-build, and
to co-install the runtime and development files. What about
cross-building third party modules?

As far as bootstrapping is concerned, the only third party perl modules
which are an issue are those which have architecture-dependent
build-dependencies (bindings for arch-dependent libraries etc.) but
there aren't that many of those which are also essential to getting a
new port to the point where there is sufficient perl support to start
building packages natively. Yes, there are a number of those inside the

Re: multiarch and interpreters/runtimes

2013-04-18 Thread Neil Williams
On Thu, 18 Apr 2013 16:41:35 +0200
Matthias Klose d...@debian.org wrote:

  - running a gdb:amd64 on i386 to debug 64bit binaries. This is the
reason for our current gdb64 package on i386, but it is missing the
support for the python based pretty printer.
Installing gdb:amd64 on i386 in wheezy will remove the whole i386
python stack and then install the amd64 python stack. For this use
case the needed amd64 python stuff should just be installed without
removing the i386 packages.
 
  - install vim for a foreign architecture. Ok, not really a good use case,
but it comes with an insane amount of embedded interpreters. It
should be installable without removing the native interpreter
packages, and the embedded interpreters should still be usable.

Hmm, is this really a strong use case? vim-tiny can be a pain
sometimes, I know, but are those embedded interpreters in vim-full all
that useful? (Or am I missing something?)

  - Third party modules for interpreters should be cross-buildable.
Many build systems for interpreter languages are written in the
interpreter language itself. So you do require the interpreter
for the build, and the development files for the host.

To me, that is a traditional cross-build relationship; it doesn't
require the installation of runtime files for the interpreter for the
foreign architecture, only the native runtime and the foreign
development files to support those third party modules which are
architecture-dependent or have architecture-dependent build
dependencies.

I don't see a need to have the perl:i386 interpreter installed on amd64
in order to build third party i386 perl modules, the amd64 interpreter
should be fine, just as it is when cross building third party armel perl
modules.

When trying to build a base system, you meet build dependencies
on interpreters very early.  You can either work around these
by staged builds, or provide an interpreter which supports cross
builds for third party packages. However most interpreters require
some work to cross-build, and you have to figure out how to cross
build the extensions.

Staged builds will still be needed for bootstrapping to sort out
build-dependencies on things like documentation-build tools used by
packages which need to be built quite early in the process. However, if
the need for staged builds can be reduced somewhat by better support
for interpreters and the tools which use them, that's good. I'm just
not sure it provides that for the typical cross build use cases.

I don't see the link between cross-building and this
special-architecture pair relationship. After all, it is rare that
someone needs to cross build packages for armhf on an armel machine
and the reverse is just a chroot. Building i386 on amd64 is similar,
just use debootstrap. I've seen just an individual enquiry about
building amd64 on i386 but that is getting very specialised. Much more
likely to be an amd64 or i386 buildd preparing packages for armhf where
this support won't matter. The typical use case for cross-building is
to speed up the build or build where no tools yet exist and will
therefore remain with the installation of development files for the
foreign architecture rather than runtime files.

 So the gdb/vim use cases would require co-installation of the runtime files for
 more than one architecture (including in most cases the shared library, some
 standard libraries, or subset of these), and keeping the interpreter for the
 default architecture.

I agree that this is limited to the gdb/vim use cases.
 
 The cross build case would require in addition co-installed development
 packages, provided that these can be used for a cross-build.

The cross-build use case would require the co-installed development
packages but I don't see a need for foreign runtime files. Interpreters
(except special cases inside their own builds) should isolate the third
party modules from the architecture of the interpreter itself. Bugs in
that support need to be fixed in the interpreter, not the distro or
toolchain.

 So what is the status for some runtimes/interpreters (would like to see some
 follow-up/corrections from package maintainers)?
 
  - OpenJDK: runtime and development files are co-installable, the
package itself is not cross-buildable, and afaik nobody did try
to cross build any bindings.
 
  - Perl: Afaik, Neil did prepare the interpreter to cross-build, and
to co-install the runtime and development files. What about
cross-building third party modules?

As far as bootstrapping is concerned, the only third party perl modules
which are an issue are those which have architecture-dependent
build-dependencies (bindings for arch-dependent libraries etc.) but
there aren't that many of those which are also essential to getting a
new port to the point where there is sufficient perl support to start
building packages natively. Yes, there are a number of those inside the

Re: multiarch and interpreters/runtimes

2013-04-18 Thread Josselin Mouette
On Thursday, 18 April 2013 at 16:41 +0200, Matthias Klose wrote: 
 - Python: co-installable runtime and development files, cross-buildability
upstreamed for 2.7.4 and 3.3.1. There is a way to cross-build third
party modules using distutils/setuptools. Packages are available in
experimental, but because I didn't backport this to 2.6 and 3.2, they are
not very useful. Install an Ubuntu raring (13.04) chroot to experiment
with these. Details at http://wiki.debian.org/Python/MultiArch

Will there be a way to co-install modules too?

As for GObject introspection modules (which replace native Python
modules for the whole of GNOME), there is nothing stopping a generic
multiarch implementation, it just needs to be done (and will probably be
soon).

 - Lua, Ocaml, Haskell, Guile, ... ?

GJS / Seed: I don’t think there is any use case but it could be done
once GI modules are multiarch capable.

-- 
 .''`.  Josselin Mouette
: :' :
`. `'
  `-


-- 
To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1366301198.14131.774.camel@pi0307572



Re: multiarch and interpreters/runtimes

2013-04-18 Thread Ian Jackson
Goswin von Brederlow writes (Re: multiarch and interpreters/runtimes):
 Co-installability of interpreters is generally not planned and would
 have to be made as custom solutions, i.e. place the interpreter in
 /usr/lib/x86_64-linux-gnu/perl/ and provide /usr/bin/perl as
 alternative.

I think it's important to distinguish between (a) coinstallability of
interpreter executables for use in #!, or explicit invocation and
(b) coinstallability of the interpreter code as a library which can be
embedded in other applications.

So for example, you are saying that coinstalling i386 and amd64
versions of tclsh (which is normally found in /usr/bin) is not
generally planned.  But coinstalling i386 and amd64 versions of
libtcl.so _is_ intended and supported by multiarch, and presumably
also of tcl extensions.

Please correct me if I'm wrong.

 Anyway, all of this has to wait till after wheezy. So get that out
 first.

Right.  Thanks for the explanation, anyway.

Ian.





Re: multiarch and interpreters/runtimes

2013-04-18 Thread Andrew Shadura
Hello,

On Thu, 18 Apr 2013 16:07:44 +0100
Dmitrijs Ledkovs x...@debian.org wrote:

  On 18 April 2013 16:41, Matthias Klose d...@debian.org wrote:
   - Tcl/Tk: Wookey and Dimitrij did start on that in Ubuntu, patches
 are available in Debian bug reports.
 Currently the shared libraries are split out into separate
  packages, and are co-installable. Not yet tested whether this is enough to
  run an embedded interpreter.

  Could I please have more info? :)

 Well there are patches to move .so libraries from /usr/lib/tk8.*/ to
 /usr/lib/$(MULTIARCH)/tk8.*, same for tcl and matching tcltk-defaults
 package to have similar symlinks everywhere.
 And basically mark that package with .so's as multiarch:same. The
 interpreter packages are still not marked multi-arch anything. And as
 doko said, there wasn't anything else done e.g. test embedded
 interpreter use-case.

By the way, have you contacted Sergei on this?

 Personally, I'm not yet convinced about this interpreter
 multiarchification, but hey Debian is a Universal OS ;-) and I don't
 see any reason to not do this.

Well, it may make sense, but really there will not be many people
running foreign interpreters at all, in my opinion.

Is there a wiki page on Tcl/Tk multiarchification?

To Sergei (added to Cc): I'd like to join the effort in packaging Tcl/Tk
and stuff, as I said before; but as you've been the most active person
on the team for quite some time I'm a bit hesitant about interrupting
the process by committing things :) I guess, we need some
co-ordination; also, in my opinion, the mailing list needs revival.

-- 
WBR, Andrew




Re: multiarch and interpreters/runtimes

2013-04-18 Thread Sergei Golovan
Hi!

On Thu, Apr 18, 2013 at 9:56 PM, Andrew Shadura bugzi...@tut.by wrote:

 Hello,


 By the way, have you contacted Sergei on this?

I saw the bugreports and I'm planning to start working on them after
wheezy release.


  Personally, I'm not yet convinced about this interpreter
  multiarchification, but hey Debian is a Universal OS ;-) and I don't
  see any reason to not do this.

 Well, it may make sense, but really there will not be many people
 running foreign interpreters at all, in my opinion.

 Is there a wiki page on Tcl/Tk multiarchification?

Not yet.


 To Sergei (added to Cc): I'd like to join the effort in packaging Tcl/Tk
 and stuff, as I said before; but as you've been the most active person
 on the team for quite some time I'm a bit hesitant about interrupting
 the process by committing things :) I guess, we need some
 co-ordination; also, in my opinion, the mailing list needs revival.

There's pkg-tcltk-de...@lists.alioth.debian.org mailing list for that.

Cheers!
--
Sergei Golovan





Re: multiarch and interpreters/runtimes

2013-04-18 Thread Andrew Shadura
Hello,

On Thu, 18 Apr 2013 22:13:04 +0400
Sergei Golovan sgolo...@debian.org wrote:

  To Sergei (added to Cc): I'd like to join the effort in packaging
  Tcl/Tk and stuff, as I said before; but as you've been the most
  active person on the team for quite some time I'm a bit hesitant
  about interrupting the process by committing things :) I guess, we
  need some co-ordination; also, in my opinion, the mailing list
  needs revival.

 There's pkg-tcltk-de...@lists.alioth.debian.org mailing list for that.

I know that, which is exactly why I said the above. The list seems
to be overspammed, and there's very little communication going on
there, unfortunately.

-- 
WBR, Andrew




Re: multiarch and interpreters/runtimes

2013-04-18 Thread Goswin von Brederlow
On Thu, Apr 18, 2013 at 06:15:26PM +0100, Ian Jackson wrote:
 Goswin von Brederlow writes (Re: multiarch and interpreters/runtimes):
  Co-installability of interpreters is generally not planned and would
  have to be made as custom solutions, i.e. place the interpreter in
  /usr/lib/x86_64-linux-gnu/perl/ and provide /usr/bin/perl as
  alternative.
 
 I think it's important to distinguish between (a) coinstallability of
 interpreter executables for use in #!, or explicit invocation and
 (b) coinstallability of the interpreter code as a library which can be
 embedded in other applications.
 
 So for example, you are saying that coinstalling i386 and amd64
 versions of tclsh (which is normally found in /usr/bin) is not
 generally planned.  But coinstalling i386 and amd64 versions of
 libtcl.so _is_ intended and supported by multiarch, and presumably
 also of tcl extensions.
 
 Please correct me if I'm wrong.

You are fully correct there. Having dynamic libraries in /usr/lib/
coinstallable is a simple matter of moving them to the multiarch dir.
The problem is indeed only the binary and its #! invocation.

I'm not sure what the situation is for plugins with such interpreter
libs. If the plugin is a libPlugin.so that is linked against
libInterpreter.so, then this should just work automatically like any
library dependency with a simple Multi-Arch: same. If it is not linked,
a simple Depends should do. But don't nail me down on it. A specific
example might turn out to be more complex and need M-A: allowed.

So on second thought, a suboptimal solution looks like this:

Package: interpreter
Architecture: any
Depends: libinterpreter
Multi-Arch: foreign

Package: libinterpreter
Architecture: any
Multi-Arch: same

Package: plugin (metapackage)
Architecture: any
Depends: interpreter, libplugin
Multi-Arch: foreign

Package: libplugin
Architecture: any
Depends: libinterpreter
Multi-Arch: same

Package: app-with-lib
Architecture: any
Depends: libinterpreter, libplugin
Multi-Arch: foreign (optional)

Package: app-with-script
Architecture: any/all
Depends: interpreter, plugin 
Multi-Arch: foreign (optional)

Package: script
Architecture: all
Depends: interpreter
Multi-Arch: foreign (optional)


Note: This uses an extra package (plugin) to work around Multi-Arch:
allowed not being allowed.
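The co-installability rules this kind of control-file layout relies on can be modelled in a toy checker. This is a rough illustration, not dpkg's actual resolver: "Multi-Arch: same" packages of the same name are co-installable across architectures only at identical versions, while a package without that marking cannot be installed for more than one architecture.

```python
# Toy model of two Multi-Arch co-installability rules (illustration
# only, not dpkg's real dependency resolver).
def coinstallable(pkg_a, pkg_b):
    """pkg_* are dicts with keys: name, arch, version, multiarch."""
    if pkg_a["name"] != pkg_b["name"]:
        return True   # different packages: no same-name conflict here
    if pkg_a["arch"] == pkg_b["arch"]:
        return False  # one name+arch instance only
    if pkg_a["multiarch"] == pkg_b["multiarch"] == "same":
        # M-A: same co-installs only at exactly the same version
        return pkg_a["version"] == pkg_b["version"]
    return False      # foreign/none packages don't co-install per-arch

lib_amd64 = {"name": "libinterpreter", "arch": "amd64",
             "version": "1.0-1", "multiarch": "same"}
lib_i386 = dict(lib_amd64, arch="i386")
print(coinstallable(lib_amd64, lib_i386))                            # True
print(coinstallable(lib_amd64, dict(lib_i386, version="1.0-2")))     # False
```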

  Anyway, all of this has to wait till after wheezy. So get that out
  first.
 
 Right.  Thanks for the explanation, anyway.
 
 Ian.

Regards,
Goswin





Re: multiarch and interpreters/runtimes

2013-04-18 Thread Russ Allbery
Matthias Klose d...@debian.org writes:

 There are maybe not many use cases where you do want to install an
 interpreter like python or perl for a foreign architecture, but there
 are some use case where such a setup makes sense.

One additional use case: I want to be able to do this in order to
cross-grade (take an existing i386 system and convert it to amd64 without
having to reinstall).

-- 
Russ Allbery (r...@debian.org)   http://www.eyrie.org/~eagle/





Re: multiarch and interpreters/runtimes

2013-04-18 Thread Guillem Jover
Hi!

[ I had pending warning about this on debian-devel before the release,
  so this is a good way to do that. :) ]

On Thu, 2013-04-18 at 16:41:35 +0200, Matthias Klose wrote:
 There are maybe not many use cases where you do want to install an interpreter
 like python or perl for a foreign architecture, but there are some use cases
 where such a setup makes sense.  For now I see this limited to architecture
 pairs like amd64/i386, armel/armhf, ia64/i386, i.e. for architectures where the
 foreign interpreter can be run (without qemu).  Use cases are

I agree this would be desirable, but unfortunately I don't think it's
currently possible (w/o compromising the dependency system).

 So what is the status for some runtimes/interpreters (would like to see some
 follow-up/corrections from package maintainers)?

  - Perl: Afaik, Neil did prepare the interpreter to cross-build, and
to co-install the runtime and development files. What about
cross-building third party modules?
 
  - Python: co-installable runtime and development files, cross-buildability
upstreamed for 2.7.4 and 3.3.1. There is a way to cross-build third
party modules using distutils/setuptools. Packages are available in
experimental, but because I didn't backport this to 2.6 and 3.2, they are
not very useful. Install an Ubuntu raring (13.04) chroot to experiment
with these. Details at http://wiki.debian.org/Python/MultiArch

As I pointed out on the debian-perl mailing list, after having
discussed multiarch support for perl, I don't think a fully
multiarchified perl (nor, at least, python) should be uploaded to sid:
going full multiarch on the combination of a program frontend to an
interpreter in shared-library form, plus architecture-dependent
modules, makes for situations that can bypass the dependency system
and get into broken installations. Please see [0] for more details,
although the solution I proposed there is bogus and would not solve
much, as it does not help with arch-dependent modules being depended
on by arch-independent ones and being run through the shared-library
interpreter: the dependency to match is against the running
interpreter, not just the program one.

  [0] https://lists.debian.org/debian-perl/2012/12/msg0.html

I've not checked the situation with other interpreters, but if they
are similar to perl/python then doing full-multiarch might be a bad
idea too for those. I think the full-multiarch support for python in
experimental should really be reverted.

Thanks,
Guillem
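The failure mode described in the message above can be sketched as two checks that disagree. This is a hypothetical illustration (the function and architecture names are made up, not any real packaging tool's API): the declared dependency is resolved against the program interpreter's architecture, while what actually loads at run time is constrained by the architecture of the application embedding the shared-library interpreter.

```python
# Illustration of how a satisfied dependency can fail to constrain
# what actually runs (hypothetical model, not a real tool's API).
def dependency_checks_out(module_dep_arch, program_interp_arch):
    """The resolver checks the declared dependency against the
    /usr/bin interpreter's architecture."""
    return module_dep_arch in ("all", program_interp_arch)

def actually_loads(module_ext_arch, embedding_app_arch):
    """But an embedded interpreter can only load extensions native
    to the embedding application's architecture."""
    return module_ext_arch == embedding_app_arch

# An arch-indep module's dependency is satisfied via the amd64
# /usr/bin interpreter...
print(dependency_checks_out("all", "amd64"))   # True
# ...yet an i386 app embedding the interpreter library cannot load
# the amd64 extension the module pulled in:
print(actually_loads("amd64", "i386"))         # False
```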





Re: multiarch and interpreters/runtimes

2013-04-18 Thread Dmitrijs Ledkovs
On 18 April 2013 19:13, Sergei Golovan sgolo...@debian.org wrote:
 Hi!

 On Thu, Apr 18, 2013 at 9:56 PM, Andrew Shadura bugzi...@tut.by wrote:

 Hello,


 By the way, have you contacted Sergei on this?

 I saw the bugreports and I'm planning to start working on them after
 wheezy release.


Yeah, there is no rush really. We pushed the patches to Ubuntu,
realised that it's silly to have tcl multiarched without tk, then did
a full archive rebuild, noticed loads of things failing, and hence
tweaked tcltk-defaults to provide compat Config.sh scripts (basically,
call dpkg-architecture and source the correct default Config.sh from
the multiarch location). And even after that a few things still
required manual patching.

The package history in launchpad has all the debdiffs as usual, and
debian pts should have links to those as well by now ;-)

I'm sure a few things are still missing for the initial
multiarchification, but it did help cross-building a few
reverse-(build)-depends already (which was Wookey's original
driving intent).

If there are any questions, or more cleanedup / rebased debdiffs
required I am more than happy to help out. But somehow I am expecting
loads of transitions to start early in jessie and a big havoc for a
few months =)



  Personally, I'm not yet convinced about this interpreter
  multiarchification, but hey Debian is a Universal OS ;-) and I don't
  see any reason to not do this.

 Well, it may make sense, but really there will not be many people
 running foreign interpreters at all, in my opinion.

 Is there a wiki page on Tcl/Tk multiarchification?

 Not yet.


I'm not sure we need one. Transition tracker setup in ben, might help
though to track the transition to the multi-arched lib packages.


 To Sergei (added to Cc): I'd like to join the effort in packaging Tcl/Tk
 and stuff, as I said before; but as you've been the most active person
 on the team for quite some time I'm a bit hesitant about interrupting
 the process by committing things :) I guess, we need some
 co-ordination; also, in my opinion, the mailing list needs revival.

 There's pkg-tcltk-de...@lists.alioth.debian.org mailing list for that.


FYI, i'm not subscribed to that one.

 Cheers!
 --
 Sergei Golovan





Re: multiarch and interpreters/runtimes

2013-04-18 Thread James McCoy
On Thu, Apr 18, 2013 at 04:18:20PM +0100, Neil Williams wrote:
 On Thu, 18 Apr 2013 16:41:35 +0200
 Matthias Klose d...@debian.org wrote:
 
   - running a gdb:amd64 on i386 to debug 64bit binaries. This is the
 reason for our current gdb64 package on i386, but it is missing the
 support for the python based pretty printer.
 Installing gdb:amd64 on i386 in wheezy will remove the whole i386
 python stack and then install the amd64 python stack. For this use
 case the needed amd64 python stuff should just be installed without
 removing the i386 packages.
  
   - install vim for a foreign architecture. Ok, not really a good use case,
 but it comes with an insane amount of embedded interpreters. It
 should be installable without removing the native interpreter
 packages, and the embedded interpreters should still be usable.
 
 Hmm, is this really a strong use case? vim-tiny can be a pain
 sometimes, I know, but are those embedded interpreters in vim-full all
 that useful? (Or am I missing something?)

They're useful if you want to use or develop plugins for Vim that use
those language bindings instead of basic vim script.

Cheers,
-- 
James
GPG Key: 4096R/331BA3DB 2011-12-05 James McCoy james...@debian.org




Re: multiarch dependency hell. build amd64, can't install without also building i386

2013-02-10 Thread Dmitrijs Ledkovs
On 24 January 2013 04:56, Paul Johnson pauljoh...@gmail.com wrote:
 This is a multiarch issue I had not considered before. Have you seen
 it? I never wanted to be a cross compiler, I really only want to
 build amd64.  But I have some i386 libraries for a particular program
 (acroread).


I recently had to build packages that only build on i386, while having
an amd64 host:
$ mk-sbuild --arch i386 sid
$ sbuild -d sid --arch i386 foo*.dsc

Since amd64 CPUs can execute i386 code, this is not a cross
compilation but a native build.
Yes, it's a second build, but it's fairly trivial to do.

Regards,

Dmitrijs.





Re: multiarch dependency hell. build amd64, can't install without also building i386

2013-01-25 Thread Thorsten Glaser
Paul Johnson pauljohn32 at gmail.com writes:

 I've just learned that, if I build amd64 packages, I can't install
 them for testing because I've not also built the i386 packages.

Just test in cowbuilder --login.

 That's really inconvenient! I don't understand why there has to be a
 linkage between the shared library versions on amd64 and i386. Aren't
 they separate?

No. Not in Multi-Arch, which is about sharing those things that aren’t
separate across architectures in packages of precisely the same version.

HTH  HAND,
//mirabilos





Re: multiarch dependency hell. build amd64, can't install without also building i386

2013-01-23 Thread Chow Loong Jin
On 24/01/2013 12:56, Paul Johnson wrote:
 [...]
 I've just learned that, if I build amd64 packages, I can't install
 them for testing because I've not also built the i386 packages.
 [...]
 That's really inconvenient! I don't understand why there has to be a
 linkage between the shared library versions on amd64 and i386. Aren't
 they separate?

Simply put, when foo:amd64 and foo:i386 are installed, all common files that are
shared between them must be bit-for-bit identical (though in practice this isn't
so, e.g. the gzipped files in /usr/share/doc/$pkg/). This is a reasonable
expectation when foo:amd64 and foo:i386 are of the same version, but it's
probably going to fail very miserably when they're of different versions, so
that's how it is.

See
https://wiki.ubuntu.com/MultiarchSpec#Architecture-independent_files_in_multiarch_packages
for more information.
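The bit-for-bit rule above can be sketched as a small check. This is an illustration under stated assumptions: the file contents below are stand-ins, not real package data, and shared_files_identical is a hypothetical helper, not part of dpkg.

```python
# Sketch: when foo:amd64 and foo:i386 are co-installed, any path
# shipped by both packages must be bit-for-bit identical; paths in
# the per-arch multiarch dirs don't overlap, so they are exempt.
import hashlib

def shared_files_identical(files_a, files_b):
    """files_* map path -> contents (bytes). True iff every path
    present in both packages has identical contents."""
    common = files_a.keys() & files_b.keys()
    return all(
        hashlib.sha256(files_a[p]).digest() == hashlib.sha256(files_b[p]).digest()
        for p in common
    )

foo_amd64 = {"/usr/share/doc/foo/copyright": b"GPL-2+\n",
             "/usr/lib/x86_64-linux-gnu/libfoo.so.1": b"\x7fELF64..."}
foo_i386 = {"/usr/share/doc/foo/copyright": b"GPL-2+\n",
            "/usr/lib/i386-linux-gnu/libfoo.so.1": b"\x7fELF32..."}
print(shared_files_identical(foo_amd64, foo_i386))
# True: only the doc file is shared, and it is identical
```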

 You ask why cairo?  I am curious to know if a new libcairo2 fixes a
 little bug in Evince (invisible vertical quotes). So I worked through
 the packaging for cairo-1.12.10. dpkg-buildpackage -rfakeroot gives me
 the goods (after fiddling some patch fuzz):
 [...]

In your case, you could very well just install the packages, leave the
dependencies unresolved, and run Evince as-is to test.

 I expect your answer will be yes, it really is that hard, you have to
 learn how to compile for i386 too. I'm trying
 (http://wiki.debian.org/Multiarch/HOWTO), but not making progress. I'm
 like a collection of monkeys trying to type the Bible at random, I'm
 afraid.
 [...]

Just use sbuild/pbuilder, which basically compile things inside a chroot so you
don't have to deal with the cross-compilation mess. As long as the architecture
of the chroot you're compiling for is supported by the kernel you're running,
it'll work.

-- 
Kind regards,
Loong Jin





Processed: Re: Multiarch breaks support for non-multiarch toolchain

2012-07-22 Thread Debian Bug Tracking System
Processing commands for cont...@bugs.debian.org:

 affects 637232 + release-notes
Bug #637232 [general] general: Multiarch breaks support for non-multiarch toolchain
Bug #639214 [general] eglibc: changes to paths concerning crt1.o, crti.o and crtn.o breaks building LLVM Trunk
Bug #644986 [general] i386: Compiling gcc-snapshots from upstream with multiarch-toolchain?
Bug #648889 [general] /usr/include/features.h(323): catastrophic error: could not open source file bits/predefs.h
Added indication that 637232 affects release-notes
Added indication that 639214 affects release-notes
Added indication that 644986 affects release-notes
Added indication that 648889 affects release-notes
 quit
Stopping processing here.

Please contact me if you need assistance.
-- 
637232: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=637232
639214: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=639214
644986: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=644986
648889: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=648889
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems





Re: multiarch, required packages, and multiarch-support

2012-06-15 Thread Ted Ts'o
On Thu, Jun 14, 2012 at 09:22:43PM -0700, Russ Allbery wrote:
 Theodore Ts'o ty...@mit.edu writes:
 
  If a required package (such as e2fslibs, which is required by e2fsprogs)
  provides multiarch support, then Lintian requires that the package have
  a dependency on the package multiarch-support[1].
 
  However, this causes debcheck to complain because you now have a
  required package depending on a package, multiarch-support, which is
  only at standard priority[2] 
 
 multiarch-support should be priority: required.  It's already a dependency
 of several other priority: required packages, such as libselinux1 and
 zlib1g.
 
 That implies that in the interim you should ignore debcheck.

Thanks, I've filed a bug against multiarch-support to this effect.


  - Ted





Re: multiarch, required packages, and multiarch-support

2012-06-14 Thread Russ Allbery
Theodore Ts'o ty...@mit.edu writes:

 If a required package (such as e2fslibs, which is required by e2fsprogs)
 provides multiarch support, then Lintian requires that the package have
 a dependency on the package multiarch-support[1].

 However, this causes debcheck to complain because you now have a
 required package depending on a package, multiarch-support, which is
 only at standard priority[2] 

 [1] 
 http://lintian.debian.org/tags/missing-pre-dependency-on-multiarch-support.html
 [2] http://qa.debian.org/debcheck.php?dist=unstablepackage=e2fsprogs

 What is the right thing to do to resolve this mutually irreconcilable
 set of complaints from either Lintian or debcheck?

multiarch-support should be priority: required.  It's already a dependency
of several other priority: required packages, such as libselinux1 and
zlib1g.

That implies that in the interim you should ignore debcheck.

-- 
Russ Allbery (r...@debian.org)   http://www.eyrie.org/~eagle/





Processed: Re: multiarch tuples are not documented/defined

2012-04-26 Thread Debian Bug Tracking System
Processing commands for cont...@bugs.debian.org:

 reassign 664257 debian-policy 3.9.3.1
Bug #664257 [general] multiarch tuples are not in the FHS
Bug reassigned from package 'general' to 'debian-policy'.
Ignoring request to alter found versions of bug #664257 to the same values 
previously set
Ignoring request to alter fixed versions of bug #664257 to the same values 
previously set
Bug #664257 [debian-policy] multiarch tuples are not in the FHS
Marked as found in versions debian-policy/3.9.3.1.
 affects 664257 =
Bug #664257 [debian-policy] multiarch tuples are not in the FHS
Removed indication that 664257 affects debian-policy
 tags 664257 = upstream
Bug #664257 [debian-policy] multiarch tuples are not in the FHS
Removed tag(s) sid and wheezy.
 quit
Stopping processing here.

Please contact me if you need assistance.
-- 
664257: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=664257
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems





Processed: Re: multiarch tuples are not documented/defined

2012-04-18 Thread Debian Bug Tracking System
Processing commands for cont...@bugs.debian.org:

 retitle 664257 multiarch tuples are not in the FHS
Bug #664257 [general] multiarch tuples are not documented/defined
Changed Bug title to 'multiarch tuples are not in the FHS' from 'multiarch 
tuples are not documented/defined'
 tags 664257 + upstream
Bug #664257 [general] multiarch tuples are not in the FHS
Added tag(s) upstream.
 affects 664257 + debian-policy
Bug #664257 [general] multiarch tuples are not in the FHS
Added indication that 664257 affects debian-policy

End of message, stopping processing here.

Please contact me if you need assistance.
-- 
664257: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=664257
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems





Re: multiarch conversion for packages with lib32* packages

2012-03-24 Thread Sven Joachim
On 2012-03-24 04:04 +0100, Jay Berkenbilt wrote:

 I would like to do multiarch conversion for the icu packages.  I
 understand the concept and the implementation, and I have looked at
 http://wiki.debian.org/Multiarch/Implementation.  One issue not covered
 is what to do if your package already builds 32-bit libraries on a
 64-bit system by building 32-bit explicitly and packaging as
 lib32whatever.

Those packages aren't really affected by the switch to the multiarch
paths.  Just continue to build them as before.

 The ICU source package creates lib32icu-dev and lib32icu48 on amd64,
 ppc64, and kfreebsd-amd64.  Do I just stop doing this and let packages
 that build depend on lib32icu-dev just stop doing it, or is there some
 kind of transitional package that I should create?

No, you should not do either of those.

 Anyway it would be nice if the wiki page were explicit about this.  I
 could certainly just stop doing it and let packages that have this build
 dependency FTBFS until they do whatever changes they need to do, but I'm
 not sure whether a precedent or convention has been established.

The lib32* packages need to be built as long as they have reverse (build)
dependencies.  I suppose most of them should go away in the long term,
but this is not going to happen in wheezy.

Cheers,
   Sven





Re: multiarch conversion for packages with lib32* packages

2012-03-24 Thread Adam Borowski
On Sat, Mar 24, 2012 at 09:48:58AM +0100, Sven Joachim wrote:
 On 2012-03-24 04:04 +0100, Jay Berkenbilt wrote:
  One issue not covered is what to do if your package already builds
  32-bit libraries on a 64-bit system by building 32-bit explicitly and
  packaging as lib32whatever.
 
 The lib32* packages need to be built as long as they have reverse (build)
 dependencies.  I suppose most of them should go away in the long term,
 but this is not going to happen in wheezy.

All of lib32icu's rdepends seem to have already dropped *32* builds, so
it can be removed right now.

-- 
// If you believe in so-called intellectual property, please immediately
// cease using counterfeit alphabets.  Instead, contact the nearest temple
// of Amon, whose priests will provide you with scribal services for all
// your writing needs, for Reasonable and Non-Discriminatory prices.





Re: multiarch conversion for packages with lib32* packages

2012-03-24 Thread Sven Joachim
On 2012-03-24 10:50 +0100, Adam Borowski wrote:

 On Sat, Mar 24, 2012 at 09:48:58AM +0100, Sven Joachim wrote:
 On 2012-03-24 04:04 +0100, Jay Berkenbilt wrote:
  One issue not covered is what to do if your package already builds
  32-bit libraries on a 64-bit system by building 32-bit explicitly and
  packaging as lib32whatever.
 
 The lib32* packages need to be built as long as they have reverse (build)
 dependencies.  I suppose most of them should go away in the long term,
 but this is not going to happen in wheezy.

 All of lib32icu's rdepends seem to have already dropped *32* builds, so
 it can be removed right now.

Oh, indeed.  Somehow I assumed that ia32-libs would build-depend on
lib32icu-dev, but it doesn't.

Cheers,
   Sven





Re: multiarch conversion for packages with lib32* packages

2012-03-24 Thread Goswin von Brederlow
Sven Joachim svenj...@gmx.de writes:

 On 2012-03-24 10:50 +0100, Adam Borowski wrote:

 On Sat, Mar 24, 2012 at 09:48:58AM +0100, Sven Joachim wrote:
 On 2012-03-24 04:04 +0100, Jay Berkenbilt wrote:
  One issue not covered is what to do if your package already builds
  32-bit libraries on a 64-bit system by building 32-bit explicitly and
  packaging as lib32whatever.
 
 The lib32* packages need to be built as long as they have reverse (build)
 dependencies.  I suppose most of them should go away in the long term,
 but this is not going to happen in wheezy.

 All of lib32icu's rdepends seem to have already dropped *32* builds, so
 it can be removed right now.

 Oh, indeed.  Somehow I assumed that ia32-libs would build-depend on
 lib32icu-dev, but it doesn't.

 Cheers,
Sven

Ia32-libs does not compile/link anything so it has no build-depends on
the -dev packages. And the plan is still to not have ia32-libs in
wheezy.


FYI: Since I have received no objections from ftp-master nor the release
team the plan for ia32-libs now looks as follows:

Ia32-libs becomes a transitional package depending on ia32-libs-i386.
ia32-libs-i386 is a new transitional package with architecture i386 that
depends on all the 32bit libs that used to be in ia32-libs.

Package: ia32-libs
Architecture: amd64 ia64
Depends: ia32-libs-i386
Description: ia32-libs transitional package

Package: ia32-libs-i386
Architecture: i386
Depends: libacl1 (= ver), libaio1 (= ver), libartsc0 (= ver), ...
Conflicts: ia32-libs (<< ver)
Description: ia32-libs-i386 transitional package

That means that ia32-libs on amd64 alone will be uninstallable and users
will have to enable multiarch before they can install or upgrade
ia32-libs. But other than that the upgrade should be smooth.


It makes sense to provide this upgrade mechanism for ia32-libs as it has
reverse depends that aren't in Debian. It probably doesn't make sense
for individual lib32 packages to do the same. Lib32 packages can normally
be dropped when they have no more reverse depends or build-depends.

MfG
Goswin





Re: multiarch conversion for packages with lib32* packages

2012-03-24 Thread Ben Hutchings
On Sat, 2012-03-24 at 14:56 +0100, Goswin von Brederlow wrote:
[...]
 FYI: Since I have received no objections from ftp-master nor the release
 team the plan for ia32-libs now looks as follows:
 
 Ia32-libs becomes a transitional package depending on ia32-libs-i386.
 ia32-libs-i386 is a new transitional package with architecture i386 that
 depends on all the 32bit libs that used to be in ia32-libs.
 
 Package: ia32-libs
 Architecture: amd64 ia64
 Depends: ia32-libs-i386
 Description: ia32-libs transitional package
[...]

Why is this still built for ia64?  AFAIK we no longer have any support
for IA-32 emulation: hardware emulation was removed (since Montecito),
the kernel support bitrotted (prior to 2.6.32) and has been removed
(2.6.34), and the software emulator is non-free.

Ben.

-- 
Ben Hutchings
I'm always amazed by the number of people who take up solipsism because
they heard someone else explain it. - E*Borg on alt.fan.pratchett




Re: multiarch conversion for packages with lib32* packages

2012-03-24 Thread Goswin von Brederlow
Ben Hutchings b...@decadent.org.uk writes:

 On Sat, 2012-03-24 at 14:56 +0100, Goswin von Brederlow wrote:
 [...]
 FYI: Since I have received no objections from ftp-master nor the release
 team the plan for ia32-libs now looks as follows:
 
 Ia32-libs becomes a transitional package depending on ia32-libs-i386.
 ia32-libs-i386 is a new transitional package with architecture i386 that
 depends on all the 32bit libs that used to be in ia32-libs.
 
 Package: ia32-libs
 Architecture: amd64 ia64
 Depends: ia32-libs-i386
 Description: ia32-libs transitional package
 [...]

 Why is this still built for ia64?  AFAIK we no longer have any support
 for IA-32 emulation: hardware emulation was removed (since Montecito),
 the kernel support bitrotted (prior to 2.6.32) and has been removed
 (2.6.34), and the software emulator is non-free.

 Ben.

Because nobody told the ia32-libs team about that.

MfG
Goswin





Re: multiarch conversion for packages with lib32* packages

2012-03-24 Thread Ben Hutchings
On Sat, 2012-03-24 at 18:37 +0100, Goswin von Brederlow wrote:
 Ben Hutchings b...@decadent.org.uk writes:
 
  On Sat, 2012-03-24 at 14:56 +0100, Goswin von Brederlow wrote:
  [...]
  FYI: Since I have received no objections from ftp-master nor the release
  team the plan for ia32-libs now looks as follows:
  
  Ia32-libs becomes a transitional package depending on ia32-libs-i386.
  ia32-libs-i386 is a new transitional package with architecture i386 that
  depends on all the 32bit libs that used to be in ia32-libs.
  
  Package: ia32-libs
  Architecture: amd64 ia64
  Depends: ia32-libs-i386
  Description: ia32-libs transitional package
  [...]
 
  Why is this still built for ia64?  AFAIK we no longer have any support
  for IA-32 emulation: hardware emulation was removed (since Montecito),
  the kernel support bitrotted (prior to 2.6.32) and has been removed
  (2.6.34), and the software emulator is non-free.
 
  Ben.
 
 Because nobody told the ia32-libs team about that.

Well now you know.  Note, this wasn't a kernel team decision, it's an
upstream change that I wasn't aware of until recently.

Ben.

-- 
Ben Hutchings
I'm always amazed by the number of people who take up solipsism because
they heard someone else explain it. - E*Borg on alt.fan.pratchett




Re: multiarch conversion for packages with lib32* packages

2012-03-24 Thread Jay Berkenbilt
Goswin von Brederlow goswin-...@web.de wrote:

 Sven Joachim svenj...@gmx.de writes:

 On 2012-03-24 10:50 +0100, Adam Borowski wrote:

 On Sat, Mar 24, 2012 at 09:48:58AM +0100, Sven Joachim wrote:
 On 2012-03-24 04:04 +0100, Jay Berkenbilt wrote:
  One issue not covered is what to do if your package already builds
  32-bit libraries on a 64-bit system by building 32-bit explicitly and
  packaging as lib32whatever.
 
 The lib32* packages need to be built as long as they have reverse (build)
 dependencies.  I suppose most of them should go away in the long term,
 but this is not going to happen in wheezy.

 All of lib32icu's rdepends seem to have already dropped *32* builds, so
 it can be removed right now.

 Oh, indeed.  Somehow I assumed that ia32-libs would build-depend on
 lib32icu-dev, but it doesn't.

Thanks for all the replies.  I'll convert icu as soon as I have time
(hopefully in the next couple of weeks) and drop the lib32 packages.
This will greatly simplify icu's build.

-- 
Jay Berkenbilt q...@debian.org





Re: Multiarch file overlap summary and proposal

2012-03-04 Thread Marco d'Itri
On Mar 04, Goswin von Brederlow goswin-...@web.de wrote:

  Also, why does refcounting have to be perfect?
  What would break if it did not actually check that the two files 
  provided by the same package for different architectures are identical?
 Everything that can go wrong when splitting packages. You would lose
 the stability advantage.
Yes, but with much less work needed by maintainers. So it still looks 
like a better option to me.

-- 
ciao,
Marco




Re: Multiarch file overlap summary and proposal

2012-03-03 Thread Goswin von Brederlow
m...@linux.it (Marco d'Itri) writes:

 On Mar 01, Russ Allbery r...@debian.org wrote:

 The situation with refcounting seems much less fragile than the situation
 without refcounting to me.
 I totally agree.

 Also, why does refcounting have to be perfect?
 What would break if it did not actually check that the two files 
 provided by the same package for different architectures are identical?

Everything that can go wrong when splitting packages. You would lose
the stability advantage.

MfG
Goswin





Re: Multiarch file overlap summary and proposal

2012-03-03 Thread Goswin von Brederlow
Guillem Jover guil...@debian.org writes:

 On Wed, 2012-02-15 at 16:32:38 -0800, Russ Allbery wrote:
 Guillem Jover guil...@debian.org writes:
  If packages have to be split anyway to cope with the other cases, then
  the number of new packages which might not be needed otherwise will be
  even smaller than the predicted amount, at which point it makes even
  less sense to support refcnt'ing.
 
 I don't think the package count is really the interesting metric here,
 unless the number of introduced packages is very large (see below about
 -dev packages).  I'm more concerned with maintainer time and with
 dependency complexity, and with the known problems that we introduce
 whenever we take tightly-coupled files and separate them into independent
 packages.

 Well, people have been using the amount of packages as a metric, I've
 just been trying to counter it. It also in a way represents the amount
 of work needed.

 About tightly-coupled files, they can cause serious issues also with
 refcounting, consider that there's always going to be a point when
 unpacking one of the new instances will have a completely different
 version than the other already unpacked instance(s). So packages could
 stop working for a long time if say unpacked libfoo0:i386 1.0 has
 file-format-0, but new libfoo0:amd64 4.0 has file-format-2, and the
 file didn't change name (arguably this could be considered an upstream
 problem, depending on the situation), this would be particularly
 problematic for pseudo-essential packages.

That is not an argument for or against refcounting. If at all it would
be marginally for refcounting:

The same situation would occur with libfoo0:i386 1.0, libfoo0:amd64 4.0
and libfoo0-common:all 2.0. But now even worse because you have 3
versions that can be out-of-sync.

Actually if the file is shipped in the package then ref counting would
automatically detect the difference in contents and fail to install the
new libfoo0:amd64 4.0. And if the file is not shipped in the package
then ref counting has no effect on it. Again ref counting comes out
better.

Ref counting would catch some of those cases but not all and it never
makes it worse. What solves this problem is the same version requirement
or simply adding Breaks: libfoo0 (<< 4.0~) to libfoo0:* 4.0. The only
point you've made is that ref counting isn't a magic bullet that brings
us world peace.
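
The behaviour Goswin describes — refusing to unpack a shared file whose
contents differ from the copy another architecture already owns — can be
sketched roughly as follows. This is an illustrative model only, not dpkg's
actual implementation; the database shape and function name are made up:

```python
import hashlib

def can_unpack(shared_db, path, arch, content):
    """Model of refcounted installs: shared_db maps a shared path to
    (content digest, set of architectures currently owning it)."""
    digest = hashlib.sha256(content).hexdigest()
    if path not in shared_db:
        shared_db[path] = (digest, {arch})   # first owner: refcount = 1
        return True
    existing_digest, arches = shared_db[path]
    if existing_digest != digest:
        return False           # contents differ: refuse to unpack
    arches.add(arch)           # identical file: bump the refcount
    return True
```

On this model, unpacking a new libfoo0:amd64 over an older unpacked
libfoo0:i386 fails exactly when a still-shared file changed contents,
which is the detection referred to above.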

MfG
Goswin





Re: Multiarch file overlap summary and proposal

2012-03-01 Thread Marco d'Itri
On Mar 01, Russ Allbery r...@debian.org wrote:

 The situation with refcounting seems much less fragile than the situation
 without refcounting to me.
I totally agree.

Also, why does refcounting have to be perfect?
What would break if it did not actually check that the two files 
provided by the same package for different architectures are identical?

-- 
ciao,
Marco




Re: Multiarch file overlap summary and proposal

2012-03-01 Thread Russ Allbery
m...@linux.it (Marco d'Itri) writes:
 On Mar 01, Russ Allbery r...@debian.org wrote:

 The situation with refcounting seems much less fragile than the situation
 without refcounting to me.

 I totally agree.

 Also, why does refcounting have to be perfect?
 What would break if it did not actually check that the two files 
 provided by the same package for different architectures are identical?

Well, it would break most of the things that make it less fragile.  :)

-- 
Russ Allbery (r...@debian.org)   http://www.eyrie.org/~eagle/





Re: Multiarch file overlap summary and proposal (was: Summary: dpkg shared / reference counted files and version match)

2012-02-29 Thread Guillem Jover
On Wed, 2012-02-15 at 16:41:21 +, Ian Jackson wrote:
 Guillem Jover writes (Re: Multiarch file overlap summary and proposal (was: 
 Summary: dpkg shared / reference counted files and version match)):
   [...]  But trying to workaround this by coming
  up with stacks of hacked up solutions  [...]
 
 I disagree with your tendentious phrasing.  The refcnt feature is not
 a hacked up solution (nor a stack of them).  It is entirely normal
 in Debian core tools (as in any substantial piece of software serving
 a lot of diverse needs) to have extra code to make it easier to deploy
 or use in common cases simpler.

All along this thread, when referring to the additional complexity and
the additional hacks, I've not been talking about the refcnt'ing at
all, but to all the other fixes needed to make it a workable solution.

regards,
guillem





Re: Multiarch file overlap summary and proposal

2012-02-29 Thread Guillem Jover
On Wed, 2012-02-15 at 19:31:10 -0800, Russ Allbery wrote:
 I agree that it's asymmetric.  apt-get install libfoo means libfoo:native,
 but apt-get remove libfoo means libfoo:*.  And asymmetric is bad, all
 things being equal.  But I think this may be one place where asymmetric is
 still the right thing to do; I would argue that it means you're
 implementing the most common operation in both cases.  apt-get install
 libfoo generally means give me a native libfoo since non-native libfoo
 is going to be an unusual case, and apt-get remove libfoo generally means
 I have no more interest in libfoo, make it go away.  I think that people
 who want to get rid of one architecture of libfoo but keep the other are
 already going to be thinking about architectures, and it's natural to ask
 them to qualify their request.
 
 If removing the non-native architecture has cascading effects, apt is
 obviously going to warn them about that already and they'll realize what's
 going on.

This was already contemplated at least as part of one of the threads David
referenced:

  http://lists.debian.org/debian-dpkg/2011/12/msg00068.html

 David Kalnischkies kalnischk...@gmail.com writes:
  (Note though that e.g. APT is not able to handle installed architectures
  as an 'attribute'. It not only has to handle them as 'different'
  packages (and more specific different versions) to keep
  backward-compatibility, also different dependencies on different
  architectures would make it pretty messy in practice. But double-think
  is a requirement for APT development anyway. ;) )
 
 Yes, definitely the internals of our package management software can't
 fully compress the packages together; at the least, the dependencies are
 going to be different between architectures and have to be stored
 separately. [...]

 But I think what we should be telling the *user*, regardless of our
 internals, is don't think of libfoo:i386 and libfoo:amd64 as two separate
 packages that you can maintain independently; think of libfoo as being
 installed for one or more architectures.

The thing is, in practice they cannot share much at all, because even
if they might end up being at the same version, they need to go
through different versions in between. For dpkg, only the package name
and the reverse dependencies are shared, assuming any other field is
equal will only come down to lost metadata.

And while I have initially actually been working with the mental model
of pkgname with multiple arch instances as an internal detail, the fact
is that this abstraction just leaks everywhere, and trying to shield
the users and maintainers from that reality will only cause pain. It's
just a nice illusion coming from the fact that those packages share the
package name. But considering pkgname == pkgname:* means for example that
all query commands have to output information for multiple instances, so
packages/scripts/etc have to be adapted anyway to handle those, and
while I don't consider that a problem, just another side of the changes
needed for multiarch, it shows how the interface can only possibly be
transparent on one side of the interface, if at all.

Finally, the thing is, those packages are really independent, they just
happen to share a name and a common source ancestor, but they can contain
different files per arch instance, different metadata, even different
maintainer script behaviour per arch, etc. And only packages from the
same arch can depend on them.

  Mhh. The current spec just forbids binNMU for M-A:same packages -
  the 'sync' happens on the exact binary version.
  Somewhere else in this multiarch-discussion was hinted that we could
  sync on the version in (optional) Source tag instead to allow binNMU.
 
 I think that the best long-term way to handle binNMUs may be to move the
 build number into a different piece of package metadata from the version.
 So a binNMU of a package with version 1.4-1 would still have version 1.4-1
 but would have a build number of 2 instead of 1.  I think this would be
 way cleaner in the long run, and not just for multiarch.

That means then we cannot state a relationship based on the binNMU
version. And while that might be desirable most of the times, it makes
it impossible when it might be desirable. Without considering this
deeper, it also reminds me of when Revision was a distinct field. In
any case how to handle binNMUs is something that should be carefully
considered and not be rushed out now, just because suddenly they cannot
be used...
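
As background for the binNMU-relationship point: dpkg can express
relationships against binNMU versions today because the +bN suffix takes
part in ordinary version comparison and sorts after the base version. A
simplified re-implementation of dpkg's comparison loop (a sketch that
ignores epoch/revision splitting and assumes ASCII input) illustrates the
ordering:

```python
def _order(c):
    # dpkg's character weight: '~' sorts before everything (even
    # end-of-string), letters before other non-digit characters.
    if c.isdigit():
        return 0
    if c.isalpha():
        return ord(c)
    if c == '~':
        return -1
    return ord(c) + 256

def verrevcmp(a, b):
    """Simplified dpkg version comparison: <0, 0 or >0 like strcmp."""
    i = j = 0
    while i < len(a) or j < len(b):
        # Compare the non-digit runs character by character.
        while (i < len(a) and not a[i].isdigit()) or \
              (j < len(b) and not b[j].isdigit()):
            ac = _order(a[i]) if i < len(a) else 0
            bc = _order(b[j]) if j < len(b) else 0
            if ac != bc:
                return ac - bc
            i += 1
            j += 1
        # Compare the numeric runs, ignoring leading zeroes.
        while i < len(a) and a[i] == '0':
            i += 1
        while j < len(b) and b[j] == '0':
            j += 1
        first_diff = 0
        while i < len(a) and j < len(b) and a[i].isdigit() and b[j].isdigit():
            if not first_diff:
                first_diff = ord(a[i]) - ord(b[j])
            i += 1
            j += 1
        if i < len(a) and a[i].isdigit():
            return 1          # a has the longer (hence larger) number
        if j < len(b) and b[j].isdigit():
            return -1
        if first_diff:
            return first_diff
    return 0
```

Under this ordering 1.4-1 sorts before 1.4-1+b1, which sorts before
1.4-2, and a ~ suffix sorts before the bare version; that is why a
versioned relationship can target a binNMU today, and why moving the
build number out of the version string, as discussed above, would
forfeit that.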

regards,
guillem





Re: Multiarch file overlap summary and proposal

2012-02-29 Thread Russ Allbery
Guillem Jover guil...@debian.org writes:
 On Wed, 2012-02-15 at 19:31:10 -0800, Russ Allbery wrote:

 I think that the best long-term way to handle binNMUs may be to move
 the build number into a different piece of package metadata from the
 version.  So a binNMU of a package with version 1.4-1 would still have
 version 1.4-1 but would have a build number of 2 instead of 1.  I think
 this would be way cleaner in the long run, and not just for multiarch.

 That means then we cannot state a relationship based on the binNMU
 version. And while that might be desirable most of the times, it makes
 it impossible when it might be desirable.

Good point.

 Without considering this deeper, it also reminds me of when Revision was
 a distinct field. In any case how to handle binNMUs is something that
 should be carefully considered and not be rushed out now, just because
 suddenly they cannot be used...

I agree with this sentiment.  Personally, I'm fine with moving forward
with a multiarch approach that doesn't allow for binNMUs on a subset of
arches as the first cut, and then go back and figure out what we're doing
with binNMUs later.

-- 
Russ Allbery (r...@debian.org)   http://www.eyrie.org/~eagle/





Re: Multiarch file overlap summary and proposal

2012-02-29 Thread Guillem Jover
On Thu, 2012-02-16 at 10:43:53 -0800, Russ Allbery wrote:
 I was thinking more about this, and I was finally able to put a finger on
 why I don't like package splitting as a solution.
 
 We know from prior experience with splitting packages for large
 arch-independent data that one of the more common mistakes that we'll make
 is to move the wrong files: to put into the arch-independent package a
 file that's actually arch-dependent.

This was brought up by Steve in the thread, my reply:

  http://lists.debian.org/debian-devel/2012/02/msg00497.html

regards,
guillem





Re: Multiarch file overlap summary and proposal

2012-02-29 Thread Russ Allbery
Guillem Jover guil...@debian.org writes:
 On Thu, 2012-02-16 at 10:43:53 -0800, Russ Allbery wrote:

 I was thinking more about this, and I was finally able to put a finger
 on why I don't like package splitting as a solution.

 We know from prior experience with splitting packages for large
 arch-independent data that one of the more common mistakes that we'll
 make is to move the wrong files: to put into the arch-independent
 package a file that's actually arch-dependent.

 This was brought up by Steve in the thread, my reply:

   http://lists.debian.org/debian-devel/2012/02/msg00497.html

Thanks for the pointer, Guillem, but I'm afraid I don't think this reply
addresses my concerns.  See the specific enumeration of things that we
would have to split, and the ways in which they can break.  I think the
issue with C headers is particularly severe.

I don't think this mirrors an existing problem.  The sorts of things we
split into arch: all packages are nowhere near as intrusive or as tightly
coupled as the things we're talking about splitting to avoid refcounting;
for example, right now, splitting out C headers into arch: all packages is
very rare.  The sort of package splitting that we would do to avoid
refcounting would run a serious risk of introducing substantial new
problems that we don't currently have.

The situation with refcounting seems much less fragile than the situation
without refcounting to me.  Refcounting puts the chance of error in the
right place (on people who want to use the new feature, since the
situation will not change for users who continue using packages the way
they do today), provides a clear error message rather than silent
corruption, and fails safely (on any inconsistency) rather than appearing
to succeed in situations that are not consistent.  Those are all good
design principles to have.

I think the principle of not changing things for people who are not using
multiarch is particularly important, and is inconsistent with either
package splitting or with moving files into arch-qualified paths.  We
should attempt to adopt new features in a way that puts most of the risk
on the people who are making use of the new features, and tries to be as
safe as possible for existing users.  I agree that we should not pursue
that to an extreme that leads to an unmaintainable system, but I don't
believe refcounting has that problem.

-- 
Russ Allbery (r...@debian.org)   http://www.eyrie.org/~eagle/





Re: Multiarch file overlap summary and proposal

2012-02-29 Thread Guillem Jover
On Wed, 2012-02-15 at 16:32:38 -0800, Russ Allbery wrote:
 Guillem Jover guil...@debian.org writes:
  If packages have to be split anyway to cope with the other cases, then
  the number of new packages which might not be needed otherwise will be
  even smaller than the predicted amount, at which point it makes even
  less sense to support refcnt'ing.
 
 I don't think the package count is really the interesting metric here,
 unless the number of introduced packages is very large (see below about
 -dev packages).  I'm more concerned with maintainer time and with
 dependency complexity, and with the known problems that we introduce
 whenever we take tightly-coupled files and separate them into independent
 packages.

Well, people have been using the amount of packages as a metric, I've
just been trying to counter it. It also in a way represents the amount
of work needed.

About tightly-coupled files, they can cause serious issues also with
refcounting, consider that there's always going to be a point when
unpacking one of the new instances will have a completely different
 version than the other already unpacked instance(s). So packages could
stop working for a long time if say unpacked libfoo0:i386 1.0 has
file-format-0, but new libfoo0:amd64 4.0 has file-format-2, and the
file didn't change name (arguably this could be considered an upstream
problem, depending on the situation), this would be particularly
problematic for pseudo-essential packages.

 I just posted separately about version lockstep: I think this is a
 feature, not a bug, in our multiarch implementation.  I think this is the
 direction we *should* go, because it reduces the overall complexity of the
 system. [...]

I've replied to that separately, in any case I think the best compromise
would be to add version lockstep to dpkg, but not refcounting. Because
the first is a restriction that can always be lifted if it's confirmed
to cause issues (which I think it will), and the second can always be
added later because it's something that allows things not permitted
previously.

But at this point it seems I'm alone in thinking that refcounting has
more negative implications than positive ones, and I cannot get myself
to care enough any longer to push for this. So some weeks ago I added
back both those things to my local repo.

regards,
guillem


-- 
To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20120301030201.gb8...@gaara.hadrons.org



Re: Multiarch file overlap summary and proposal

2012-02-29 Thread Russ Allbery
Guillem Jover guil...@debian.org writes:

 About tightly-coupled files, they can cause serious issues also with
 refcounting, consider that there's always going to be a point when
 unpacking one of the new instances will have a completely different
 version than the other already unpacked instance(s). So packages could
 stop working for a long time if say unpacked libfoo0:i386 1.0 has
 file-format-0, but new libfoo0:amd64 4.0 has file-format-2, and the file
 didn't change name (arguably this could be considered an upstream
 problem, depending on the situation), this would be particularly
 problematic for pseudo-essential packages.

Yes, I agree.  Refcounting does complicate the upgrade situation, since
you really want to upgrade all installed architectures in lockstep to
ensure that we maintain as many of the guarantees of file consistency as
we do now with single-arch upgrades.

 I've replied to that separately, in any case I think the best compromise
 would be to add version lockstep to dpkg, but not refcounting. Because
 the first is a restriction that can always be lifted if it's confirmed
 to cause issues (which I think it will), and the second can always be
 added later because it's something that allows things not permitted
 previously.

I definitely understand where you're coming from, and I would be lying if
I said that introducing refcounting doesn't make me nervous.  You're
right, it's something that's going to be very difficult to back out of if
we decide it's a mistake.

I do think it's the best solution to a complex set of issues, but we're
going to have to use it in conjunction with pretty tight version lockstep
to avoid problems with file inconsistency.

 But at this point it seems I'm alone in thinking that refcounting has
 more negative implications than positive ones, and I cannot get myself
 to care enough any longer to push for this. So some weeks ago I added
 back both those things to my local repo.

Well... no one likes to win an argument under those terms.  I'd much
rather have us all agree.  But I do want to wholeheartedly second
Christian's thanks for all your work on dpkg in the middle of a really
difficult situation, and your willingness to make compromises like this
even when you think they're the wrong technical decision.  That's really
hard to do, and I think it's also very admirable.

If this all turns out to be a horrible mistake, I for one will try to help
us back out of it as needed, to put my resources where my advocacy has
been.

-- 
Russ Allbery (r...@debian.org)   http://www.eyrie.org/~eagle/


-- 
Archive: http://lists.debian.org/87sjhtuidr@windlord.stanford.edu



Re: Multiarch file overlap summary and proposal

2012-02-23 Thread Goswin von Brederlow
Russ Allbery r...@debian.org writes:

 If this is comprehensive, then I propose the following path forward, which
 is a mix of the various solutions that have been discussed:

 * dpkg re-adds the refcounting implementation for multiarch, but along
   with a Policy requirement that packages that are multiarch must only
   contain files in classes 1 and 2 above.

 * All packages that want to be multiarch: same have to move all generated
   documentation into a separate package unless the maintainer has very
   carefully checked that the generated documentation will be byte-for-byte
   identical even across minor updates of the documentation generation
   tools and when run at different times.

 * Lintian should recognize arch-qualified override files, and multiarch:
   same packages must arch-qualify their override files.  debhelper
   assistance is desired for this.

I think that, provided the files are byte-for-byte identical across
architectures, they need not be arch qualified. So they should be
refcounted and having non-identical files across archs should be
forbidden by policy. The maintainer must then resolve this by 1) making
the file identical across archs or 2) arch qualifying the name.

So lintian should support arch qualified names but policy should not
needlessly require them.
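The byte-for-byte requirement described above can be checked mechanically. A minimal sketch, with two temporary files standing in for the copies of a shared path as shipped by libfoo0:amd64 and libfoo0:i386 (the package and file names are hypothetical):

```shell
# Sketch: a shared path may only be refcounted if the copies from the
# two architectures are byte-for-byte identical.  Two temporary files
# stand in for the file shipped by libfoo0:amd64 and libfoo0:i386.
tmpdir=$(mktemp -d)
printf 'same content\n' > "$tmpdir/changelog.amd64"
printf 'same content\n' > "$tmpdir/changelog.i386"

if cmp -s "$tmpdir/changelog.amd64" "$tmpdir/changelog.i386"; then
    verdict="identical: safe to refcount"
else
    verdict="differs: dpkg would report a file conflict"
fi
echo "$verdict"
rm -rf "$tmpdir"
```

In the real dpkg case the comparison would of course be done against the files in the unpacked .deb archives, not against temporary stand-ins.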

 * Policy prohibits arch-varying data files in multiarch: same packages
   except in arch-qualified paths.

 * The binNMU process is changed to add the binNMU changelog entry to an
   arch-qualified file (changelog.Debian.arch, probably).  We need to
   figure out what this means if the package being binNMU'd has a
   /usr/share/doc/package symlink to another package, though; it's not
   obvious what to do here.

 Please note that this is a bunch of work.  I think the Lintian work is a
 good idea regardless, and it can start independently.  I think the same is
 true of the binNMU changelog work, since this will address some
 long-standing issues with changelog handling in some situations, including
 resolving just how we're supposed to handle /usr/share/doc symlinks.  But
 even with those aside, this is a lot of stuff that we need to agree on,
 and in some cases implement, in a fairly short timeline if this is going
 to make wheezy.

In case /usr/share/doc/pkg is a symlink the binNMU changelog should be
stored in the destination of the symlink. For this to work the (binNMU)
changelog should be both arch and package qualified
(changelog.Debian.bin-pkg.arch). This would allow libfoo:any,
foo-bin:any + foo-common:all to be binNMUed. Without the package
qualifier the libfoo and foo-bin package would both contain
/usr/share/doc/foo-common/changelog.Debian.arch and produce a file
overwrite error.

After a binNMU (on amd64) the following files would exist:

foo-bin:/usr/share/doc/foo-bin -> foo-common
foo-bin:/usr/share/doc/foo-common/changelog.Debian.foo-bin.amd64
foo-common: /usr/share/doc/foo-common/changelog.Debian
libfoo: /usr/share/doc/foo-common/changelog.Debian.libfoo.amd64
libfoo: /usr/share/doc/libfoo -> foo-common

The binNMU changelogs would be identical wasting a little disk and
mirror space and bandwidth but that is probably unavoidable.

One way to reduce the overhead would be to split the binNMU entry into a
separate changelog:

foo-bin:/usr/share/doc/foo-bin -> foo-common
foo-bin:/usr/share/doc/foo-common/changelog.binNMU.foo-bin.amd64
foo-common: /usr/share/doc/foo-common/changelog.Debian
libfoo: /usr/share/doc/foo-common/changelog.binNMU.libfoo.amd64
libfoo: /usr/share/doc/libfoo -> foo-common

The binNMU changelogs would be just one entry each, the reason for the
binNMU.

MfG
Goswin


-- 
Archive: http://lists.debian.org/87d395zn6m.fsf@frosties.localnet



Re: Multiarch file overlap summary and proposal

2012-02-23 Thread Goswin von Brederlow
Russ Allbery r...@debian.org writes:

 Carsten Hey cars...@debian.org writes:
 * Russ Allbery [2012-02-16 14:55 -0800]:

 Every file that differs has to be fixed in the current multi-arch plan.
 Documentation that contains its build date is going to need to be split
 out into a separate -docs package.

 I doubt that ftpmaster would be happy about -doc packages that contain
 just a few small man pages.

 I think they'll be okay with it when it's the cost of reasonable
 multiarch.

 I feel fairly strongly that it isn't sane to have a file overlap when the
 file doesn't match.  You then lose the error detection when there are real
 problems, and I don't trust any of us, myself included, to correctly tag
 files where it doesn't matter.

 On this front, I agree with Guillem: some amount of package splitting is
 fine, and package splitting instead of additional complexity, like tagging
 files that are allowed to vary, looks like a good tradeoff to me.  The
 splitting that I'm worried about is where the files are tightly coupled,
 which is not the case for development man pages that are now in -dev
 packages.

+1.

I find the argument about past experience with splitting packages and
upstream later changing files so they become arch dependent convincing
in this regard. Refcounting and no exception for file differences seems
to be the best way.

 debianutils uses a special make target 'prebuild' in debian/rules to
 update build system related files and PO files before the actual source
 package is built.

 This basic idea also could be used to build problematic documentation
 files on the maintainers computer before he/she builds the package.  The
 other targets would then install the prebuilt documentation into the
 package without the need to build it first.  A proper dependency on
 debian/$prebuilt_doc could ensure that maintainers do not forget to run
 debian/rules prebuild.

 If maintainers choose to use such a target, suggesting a common name for
 it in the developers reference could be reasonable.

 That's an interesting idea.  That's very similar to what I already do as
 upstream (I build POD-generated man pages from my autogen script, and in
 Debian packaging don't bother to regenerate them).

Indeed. Another +1.

You are probably not the only one doing something like this. Who else
does this? What automatism do you have in your debian/rules to help?
Lets see if we can get a good set of examples to work out some
recommendations.

MfG
Goswin


-- 
Archive: http://lists.debian.org/878vjtzmjj.fsf@frosties.localnet



Re: Multiarch file overlap summary and proposal

2012-02-23 Thread Goswin von Brederlow
Josselin Mouette j...@debian.org writes:

 Le lundi 13 février 2012 à 22:43 -0800, Russ Allbery a écrit : 
 There's been a lot of discussion of this, but it seems to have been fairly
 inconclusive.  We need to decide what we're doing, if anything, for wheezy
 fairly soon, so I think we need to try to drive this discussion to some
 concrete conclusions.

 Thank you very much for your constructive work.

 3. Generated documentation.  Here's where I think refcounting starts
failing.

 So we need to move a lot of documentation generated with gtk-doc or
 doxygen from -dev packages to -doc packages. But it really seems an
 acceptable tradeoff between the amount of work required and the
 cleanness of the solution.

 Does this seem comprehensive to everyone?  Am I missing any cases?

 Are there any cases of configuration files in /etc that vary across
 architectures? Think of stuff like ld.so.conf, where some plugins or
 library path is coded in a configuration file.

Generally conffiles in library packages is a violation of policy
8.2. They would create a file overwrite conflict if the SONAME
changes. Putting the conffile into the -common package is a good
solution.

There are some exceptions where the conffiles are version qualified.
E.g.: libgtk2.0-0: /etc/gtk-2.0/im-multipress.conf

For conffiles that vary across architectures the path or name must
include the multiarch triplet.
E.g: libc6: /etc/ld.so.conf.d/x86_64-linux-gnu.conf

(Note: this is actually a violation of policy 8.2 and needs to be fixed
when we get a libc7).
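The multiarch triplet used in such arch-qualified paths can be obtained from dpkg-architecture. A sketch (the fallback value is only there so the snippet also runs on non-Debian systems, and is illustrative):

```shell
# Print the conffile path a library might use for per-architecture
# configuration, built from the multiarch triplet.
# The fallback value is illustrative, for systems without dpkg.
triplet=$(dpkg-architecture -qDEB_HOST_MULTIARCH 2>/dev/null \
          || echo x86_64-linux-gnu)
conffile="/etc/ld.so.conf.d/${triplet}.conf"
echo "$conffile"
```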

MfG
Goswin


-- 
Archive: http://lists.debian.org/874nuhzm2m.fsf@frosties.localnet



Re: Multiarch file overlap summary and proposal

2012-02-23 Thread Goswin von Brederlow
Joey Hess jo...@debian.org writes:

 Goswin von Brederlow wrote:
 pkg:arch will still be unique and the dpkg/apt output will use the
 architecture where required for uniqueness. So I think that after some
 getting used to it it will be clear enough again.

 Here are a few examples of the problems I worry about. I have not
 verified any of them, and they're clearly biased toward code I am
 familiar with, which suggests there are many other similar problems.

 * Puppet not only installs packages, it may remove them. A puppet config
   that does dpkg --purge foo will fail if multiarch is enabled, now
   it needs to find and remove foo:*

 * dpkg-repack pkg:arch will create a package with that literal name (or fail)

 * dpkg-reconfigure probably can't be used with M-A same packages.
   debconf probably generally needs porting to multiarch.

 * tasksel uses dpkg --query to work out if a task's dependencies are
   installed. In the event that a library is directly part of a task,
   this will fail when multiarch is enabled.

 * Every piece of documentation that gives commands lines manipulating
   library packages is potentially broken.

 Seems like we need a release where multiarch is classed as an
 experimental feature, which when enabled can break the system. But the
 sort of problems above are the easy to anticipate ones; my real worry is
 the unanticipated classes of problems. Especially if we find intractable
 problems or levels of complexity introduced by dropping the unique
 package name invariant.

 My nightmare scenario is that we release with multiarch, discover that
 it's a net negative for our users (proprietary software on amd64 aside,
 nearly all the benefits are to developers AFAICS), and are stuck with it.

The specs were initially written in such a way that single arch systems
would not change, that multiarch packages would keep functioning with a
mono-arch apt/dpkg and I think this was preserved so far.

If all interface changes follow that idea then worst case tools will not
work in a multiarch configuration but still work in a monoarch
configuration. So let multiarch be experimental and only for developers
and risk takers. That is already a huge number of people.

MfG
Goswin


-- 
Archive: http://lists.debian.org/87zkc9y6p4.fsf@frosties.localnet



Re: Multiarch file overlap summary and proposal

2012-02-23 Thread Goswin von Brederlow
Russ Allbery r...@debian.org writes:

 I think it would be better to have a world in which all the architectures
 of the foo package on the system have to have the same version, because
 then you don't have to treat foo:i386 and foo:amd64 like they're separate
 packages.  The list of installed architectures is an *attribute* of the
 package.  A package is still either installed or not installed, but when
 it's installed, it can be installed for one or more architectures.  But if
 it's installed for multiple architectures, those architectures are always
 upgraded together and always remain consistent.  That avoids all weirdness
 of having a package work differently because the version varies depending
 on the ABI, and it significantly simplifies the mental model behind the
 package.

In such a world the architecture 'all' could also be considered just
another architecture. And then foo:i386, foo:amd64 and foo:all could be
coinstallable. That would mean that files shared between architectures
could be moved into foo:all and foo:any could implicitly depend on
foo:all. The benefit of this over foo-common would be that apt-cache
search, apt-cache policy, aptitude, dpkg --remove, ... would only have
one package (foo) instead of 2 (foo + foo-common).

This has been previously suggested too but has been dropped because it
would be incompatible with existing systems (i.e. monoarch dpkg couldn't
install packages from such a world).

MfG
Goswin


-- 
Archive: http://lists.debian.org/87vcmxy6dk.fsf@frosties.localnet



Re: Multiarch file overlap summary and proposal

2012-02-23 Thread Goswin von Brederlow
David Kalnischkies kalnischk...@gmail.com writes:

 On Thu, Feb 16, 2012 at 23:10, Carsten Hey cars...@debian.org wrote:
 * David Kalnischkies [2012-02-16 03:59 +0100]:
 On Thu, Feb 16, 2012 at 00:39, Russ Allbery r...@debian.org wrote:
 (the only problem i see is that i don't have ${source:Version} available
  currently in the version structure, but we haven't even tried pushing
  apt's abibreak to sid specifically as i feared last-minute changes…)

 I'm not sure if you meant this with Source tag, anyway, the 'Packages'
 files miss the source version too, but this could be added as optional
 field that would be used if it differs from the 'Version:' field.

 It's already in for quite some time ('current' sid amd64, first hit):
 Package: 3depict
 Source: 3depict (0.0.9-1)
 Version: 0.0.9-1+b1
 […]

 It's used in other places in APT, e.g. 'apt-get source', which just looks
 at the Packages file stanza. That's fine as this isn't a speed critical
 operation - but if we want it for the lock-step operation apt needs that
 piece of information in its internal structures for fast access to it and
 adding new fields in these structures will require an abibreak.
 That's the intended meaning of the quoted sentence.

Except that doesn't have to work (sorry for the ubuntu part):

Package: gcc
Source: gcc-defaults (1.93ubuntu1)
Version: 4:4.4.3-1ubuntu1

What would the version be for a binNMU of gcc-defaults? I think it would
be

Package: gcc
Source: gcc-defaults (1.93ubuntu1)
Version: 4:4.4.3-1ubuntu1+b1

What we want is for apt/dpkg to consider this to be compatible with
4:4.4.3-1ubuntu1.

MfG
Goswin


-- 
Archive: http://lists.debian.org/87r4xly64e.fsf@frosties.localnet



Re: Multiarch file overlap summary and proposal

2012-02-23 Thread Raphael Hertzog
On Thu, 23 Feb 2012, Goswin von Brederlow wrote:
  Package: 3depict
  Source: 3depict (0.0.9-1)
  Version: 0.0.9-1+b1
 
 Except that doesn't have to work (sorry for the ubuntu part):
 
 Package: gcc
 Source: gcc-defaults (1.93ubuntu1)
 Version: 4:4.4.3-1ubuntu1
 
 What would the version be for a binNMU of gcc-defaults?

What about trying it?

$ head -n 1 debian/changelog 
gcc-defaults (1.112+b1) unstable; urgency=low
$ debuild -us -uc
[...]
$ dpkg -I ../gcc_4.6.2-4+b1_i386.deb 
[...]
 Package: gcc
 Source: gcc-defaults (1.112)
 Version: 4:4.6.2-4+b1

In any case, the fact that the source version is unrelated to the binary
version doesn't change anything to the requirement. We just want to ensure
that all (M-A: same) co-installable packages are synchronized at the same
version (either the source version, or the binary version but stripped
from its bin-NMU suffix).
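Stripping the bin-NMU suffix before comparing could be sketched as follows (the versions are taken from the gcc-defaults example above; a real implementation would compare with dpkg --compare-versions rather than a plain string test):

```shell
# Sketch: normalise binary versions by stripping a "+bN" bin-NMU
# suffix, then compare -- two architectures count as synchronized
# if the stripped versions match.
strip_binnmu () {
    printf '%s\n' "$1" | sed 's/+b[0-9][0-9]*$//'
}

v1=$(strip_binnmu "4:4.6.2-4+b1")   # binNMU'd on one architecture
v2=$(strip_binnmu "4:4.6.2-4")      # plain build on another
[ "$v1" = "$v2" ] && echo "in sync" || echo "out of sync"
```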

Cheers,
-- 
Raphaël Hertzog ◈ Debian Developer

Pre-order a copy of the Debian Administrator's Handbook and help
liberate it: http://debian-handbook.info/liberation/


-- 
Archive: http://lists.debian.org/20120223145305.gd5...@rivendell.home.ouaza.com



Re: Multiarch file overlap summary and proposal

2012-02-22 Thread Goswin von Brederlow
Jonathan Nieder jrnie...@gmail.com writes:

 Jonathan Nieder wrote:
 David Kalnischkies wrote:

 Why would it be intuitive to add a specific value for the arch attribute 
 with
 apt-get install foo   # arch |= native
 but remove all values of the attribute with
 apt-get remove foo    # arch &= ~all-architectures
 ?
 [...]
 But I really think this is something anyone can get used to.  In the
 examples you listed above:

  apt-get install foo;# install foo with default arch-list (native)
  apt-get remove foo; # remove foo

 If foo is installed for no architectures, that does not mean it is
 installed with an empty architecture list.  It means it is simply not
 installed.

 Ok, now I think I figured out the inconsistency you are pointing to.
 If i386 is the native architecture, what would you expect the
 following sequence of commands to do?

   apt-get install linux-image-3.2.0-1-amd64:amd64

   ... wait a few weeks ...

   apt-get install linux-image-3.2.0-1-amd64

 I would expect it to install the kernel with 'Architecture: amd64' and
 then to upgrade it.

 So the proposed semantics are not quite 'arch |= native'.  They are
 more like 'arch defaults to native for non-installed packages'.

 Jonathan

Assuming linux-image-3.2.0-1-amd64:i386 still exists I would expect apt
to install that if it has an equal or greater version than the installed
linux-image-3.2.0-1-amd64:amd64.

Current apt behaviour is a bit strange there though:

mrvn@frosties:~% apt-cache policy acl
acl:
  Installed: 2.2.51-4
  Candidate: 2.2.51-5
  Version table:
 2.2.51-5 0
500 http://ftp.de.debian.org/debian/ sid/main amd64 Packages
 *** 2.2.51-4 0
100 /var/lib/dpkg/status

mrvn@frosties:~% apt-cache policy acl:i386
acl:i386:
  Installed: (none)
  Candidate: 2.2.51-5
  Version table:
 2.2.51-5 0
500 http://ftp.de.debian.org/debian/ sid/main i386 Packages

I would expect something like:

mrvn@frosties:~% apt-cache policy acl
acl:
  Installed: 2.2.51-4 amd64
  Candidate: 2.2.51-5 amd64
  Version table:
 2.2.51-5 0
500 http://ftp.de.debian.org/debian/ sid/main amd64 Packages
499 http://ftp.de.debian.org/debian/ sid/main i386 Packages
 *** 2.2.51-4 0
100 /var/lib/dpkg/status amd64


But it seems my patch to reduce the pin of non-native architectures is
not in current apt and policy doesn't list all archs for M-A:foreign
packages.

MfG
Goswin


-- 
Archive: http://lists.debian.org/87r4xn4171.fsf@frosties.localnet



Re: Multiarch file overlap summary and proposal

2012-02-17 Thread Carsten Hey
* Russ Allbery [2012-02-16 14:55 -0800]:
 Carsten Hey cars...@debian.org writes:
  There are still files that differ that do not need to be fixed, for
  example documentation that contains its build date.

 Every file that differs has to be fixed in the current multi-arch plan.
 Documentation that contains its build date is going to need to be split
 out into a separate -docs package.

I doubt that ftpmaster would be happy about -doc packages that contain
just a few small man pages.

 I'm fine with splitting documentation; that has far fewer problems than
 splitting other types of files, since documentation isn't tightly coupled
 at a level that breaks software.

debianutils uses a special make target 'prebuild' in debian/rules to
update build system related files and PO files before the actual source
package is built.

This basic idea also could be used to build problematic documentation
files on the maintainers computer before he/she builds the package.  The
other targets would then install the prebuilt documentation into the
package without the need to build it first.  A proper dependency on
debian/$prebuilt_doc could ensure that maintainers do not forget to run
debian/rules prebuild.

If maintainers choose to use such a target, suggesting a common name for
it in the developers reference could be reasonable.
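Such a target might look like the following debian/rules fragment (a sketch only: the target name, the stamp file, and the doxygen invocation are illustrative, not an established convention):

```make
# debian/rules fragment (sketch).  "prebuild" regenerates the
# problematic documentation on the maintainer's machine; the stamp
# file debian/prebuilt-doc records that this has happened and is
# shipped in the source package, so the buildds install the prebuilt
# files instead of regenerating (and thereby varying) them.
prebuild: debian/prebuilt-doc

debian/prebuilt-doc:
	doxygen doc/Doxyfile
	touch $@

override_dh_auto_install: debian/prebuilt-doc
	dh_auto_install
```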


Regards
Carsten


-- 
Archive: http://lists.debian.org/20120217082256.ga19...@furrball.stateful.de



Re: Multiarch file overlap summary and proposal

2012-02-17 Thread Russ Allbery
Carsten Hey cars...@debian.org writes:
 * Russ Allbery [2012-02-16 14:55 -0800]:

 Every file that differs has to be fixed in the current multi-arch plan.
 Documentation that contains its build date is going to need to be split
 out into a separate -docs package.

 I doubt that ftpmaster would be happy about -doc packages that contain
 just a few small man pages.

I think they'll be okay with it when it's the cost of reasonable
multiarch.

I feel fairly strongly that it isn't sane to have a file overlap when the
file doesn't match.  You then lose the error detection when there are real
problems, and I don't trust any of us, myself included, to correctly tag
files where it doesn't matter.

On this front, I agree with Guillem: some amount of package splitting is
fine, and package splitting instead of additional complexity, like tagging
files that are allowed to vary, looks like a good tradeoff to me.  The
splitting that I'm worried about is where the files are tightly coupled,
which is not the case for development man pages that are now in -dev
packages.

 debianutils uses a special make target 'prebuild' in debian/rules to
 update build system related files and PO files before the actual source
 package is built.

 This basic idea also could be used to build problematic documentation
 files on the maintainers computer before he/she builds the package.  The
 other targets would then install the prebuilt documentation into the
 package without the need to build it first.  A proper dependency on
 debian/$prebuilt_doc could ensure that maintainers do not forget to run
 debian/rules prebuild.

 If maintainers choose to use such a target, suggesting a common name for
 it in the developers reference could be reasonable.

That's an interesting idea.  That's very similar to what I already do as
upstream (I build POD-generated man pages from my autogen script, and in
Debian packaging don't bother to regenerate them).

-- 
Russ Allbery (r...@debian.org)   http://www.eyrie.org/~eagle/


-- 
Archive: http://lists.debian.org/87vcn5hml3@windlord.stanford.edu



Re: Multiarch file overlap summary and proposal

2012-02-17 Thread David Kalnischkies
On Thu, Feb 16, 2012 at 23:10, Carsten Hey cars...@debian.org wrote:
 * David Kalnischkies [2012-02-16 03:59 +0100]:
 On Thu, Feb 16, 2012 at 00:39, Russ Allbery r...@debian.org wrote:
    it needs to find and remove foo:*

 foo:all (or foo:any) instead of foo:* would save the need to quote it.

:all is already an architecture and currently it seems like dpkg accepts
only that while APT will accept :amd64 (or whatever is native), too,
partly for internal reasons, but also because the difference for an apt-get
user is not really important. Either way, overloading wouldn't be nice.

:any has a special meaning in build-dependencies already meaning - surprise
surprise - give me any package. Overloading would be uncool, beside that
it's the wrong word for it anyway if you want all, and not just any package.


  Actually, why would that be the behavior?  Why would dpkg --purge foo not
  just remove foo for all architectures for which it's installed, and
  require that if you want to remove only a specific architecture you then
  use the expanded syntax?

 We (as in APT team and dpkg team) had a lot of discussions about that,
 see for starters (there are probably more in between the 10 months…)
 [0] http://lists.debian.org/debian-dpkg/2011/01/msg00046.html
 [1] http://lists.debian.org/debian-dpkg/2011/12/msg5.html

 In short, i think the biggest counter is that it feels unintuitive to
 install a library (in native arch) with e.g. apt-get install libfoo
 while you have to be specific at removal to avoid nuking 'unrelated' packages
 with apt-get remove libfoo.

 I would expect this (especially if the package foo is not a library, but
 I would also expect this for libraries):

You generously left out the paragraph describing how APT should
detect that the package foo is in fact a library and not, say, a
plugin, a dev-package, a dbg-package or a future-coinstallable binary.
And the foo:* default would be okay and intuitive for all of those?

You also skipped the part of backward-compatibility with tools which
expect a single line/stanza of response and commands like
dpkg --{get,set}-selections which by definition work only with a single
package.

And backward-compatibility means in this context also to support
a dist-upgrade from squeeze to wheezy. If a new version of dpkg
doesn't accept the 'old' commands APT uses the upgrade will
be a pain.


The two threads i mentioned contain a lot more of these
considerations, so it might be in order to read these before
coming up with 'new' ideas.


  * apt-get remove foo removes all installed foo packages (on all
   architectures).

More of a theoretical nitpick, but this was never the case.
apt-get pre-multi-arch handled only packages from native arch (and :all),
so if you had installed foreign packages with dpkg --force-architecture
apt-get would have ignored it, not removed it.


 This summarises how apt without multi-arch handles this, the above would
 make apt with multi-arch also behave so:

       apt-get install foo
   ----------------------->
 foo is not installed       foo is installed
       apt-get remove foo
   <-----------------------

Is 'apt-get remove foo+' then going to install all foo's or just one?

The current implementation of always foo == foo:native doesn't fail
your diagram either, so what is this going to show us?


 (the only problem i see is that i don't have ${source:Version} available
  currently in the version structure, but we haven't even tried pushing
  apt's abibreak to sid specifically as i feared last-minute changes…)

 I'm not sure if you meant this with Source tag, anyway, the 'Packages'
 files miss the source version too, but this could be added as optional
 field that would be used if it differs from the 'Version:' field.

It's already in for quite some time ('current' sid amd64, first hit):
Package: 3depict
Source: 3depict (0.0.9-1)
Version: 0.0.9-1+b1
[…]

It's used in other places in APT, e.g. 'apt-get source', which just looks
at the Packages file stanza. That's fine as this isn't a speed critical
operation - but if we want it for the lock-step operation apt needs that
piece of information in its internal structures for fast access to it and
adding new fields in these structures will require an abibreak.
That's the intended meaning of the quoted sentence.


Best regards

David Kalnischkies


--
Archive: http://lists.debian.org/CAAZ6_fB3CeUcFuue1CjPbkqoHNaSvaKt8Q=imgfx4uqmw_m...@mail.gmail.com



Re: Multiarch file overlap summary and proposal

2012-02-17 Thread Jonathan Nieder
David Kalnischkies wrote:

 You generously left out the paragraph describing how APT should
 detect that the package foo is in fact a library and not, say, a
 plugin, a dev-package, a dbg-package or a future-coinstallable binary.
 And the foo:* default would be okay and intuitive for all of those?

Yes, the foo:native default for installation and foo:* default for
removal would be intuitive for all of those.

See [1] for a mental model.

Hope that helps,
Jonathan

[1] http://thread.gmane.org/gmane.linux.debian.devel.dpkg.general/14028/focus=14031


-- 
Archive: http://lists.debian.org/20120217144631.GA5039@burratino



Re: Multiarch file overlap summary and proposal

2012-02-17 Thread David Kalnischkies
On Fri, Feb 17, 2012 at 15:46, Jonathan Nieder jrnie...@gmail.com wrote:
 David Kalnischkies wrote:

 You generously left out the paragraph describing how APT should
 detect that the package foo is in fact a library and not, say, a
 plugin, a dev-package, a dbg-package or a future-coinstallable binary.
 And the foo:* default would be okay and intuitive for all of those?

 Yes, the foo:native default for installation and foo:* default for
 removal would be intuitive for all of those.

 See [1] for a mental model.

 Hope that helps,
 Jonathan

 [1] 
 http://thread.gmane.org/gmane.linux.debian.devel.dpkg.general/14028/focus=14031

Why would it be intuitive to add a specific value for the arch attribute with
apt-get install foo   # arch |= native
but remove all values of the attribute with
apt-get remove foo    # arch = ~all-architectures
?

Isn't it more intuitive to have it this way:
apt-get remove foo    # arch = ~native
?

Maybe we just have different definitions of intuitive.
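The two readings differ only in which architecture set 'remove' clears; a toy model of the question (pure illustration, nothing like APT's real data structures):

```python
NATIVE = "amd64"

installed = {}  # package name -> set of installed architectures

def install(pkg, arch=None):
    # 'apt-get install foo' defaults to the native architecture
    installed.setdefault(pkg, set()).add(arch or NATIVE)

def remove_all(pkg):
    # Jonathan's reading: 'apt-get remove foo' means foo:*
    installed.pop(pkg, None)

def remove_native(pkg):
    # David's alternative: only drop the native instance
    archs = installed.get(pkg, set())
    archs.discard(NATIVE)
    if not archs:
        installed.pop(pkg, None)

install("libfoo1")            # libfoo1:amd64
install("libfoo1", "i386")    # libfoo1:i386
remove_native("libfoo1")
print(installed)              # {'libfoo1': {'i386'}}
```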


Best regards

David Kalnischkies


-- 
Archive: 
http://lists.debian.org/caaz6_fbq6z7tj1slohgs0kwh+xocsa_xjcvq+zs2omg3_y1...@mail.gmail.com



Re: Multiarch file overlap summary and proposal

2012-02-17 Thread Jonathan Nieder
David Kalnischkies wrote:

 Why would it be intuitive to add a specific value for the arch attribute with
 apt-get install foo   # arch |= native
 but remove all values of the attribute with
 apt-get remove foo# arch = ~all-architectures
 ?

 Isn't it more intuitive to have it this way:
 apt-get remove foo# arch = ~native
 ?

 Maybe we just have different definitions of intuitive.

Intuitions vary from person to person; that's definitely not news.

But I really think this is something anyone can get used to.  In the
examples you listed above:

 apt-get install foo;   # install foo with default arch-list (native)
 apt-get remove foo;# remove foo

If foo is installed for no architectures, that does not mean it is
installed with an empty architecture list.  It means it is simply not
installed.

In practice, that would match what I want to do, too.

 * There is a web browser I would like to use.  I don't care which
   arch --- that's an implementation detail.

apt-get install iceweasel

 * Oops, never mind --- not interested in using that web browser any
   more.

apt-get --purge remove iceweasel

 * I've never heard of this multiarch stuff, but the unpackaged
   software I am trying to install is giving complaints about missing
   libfoo.so.1

apt-get install libfoo1

 * Ok, now I've learned about multiarch, and I want to install libfoo
   to satisfy a dependency for a binary on a foreign architecture.

apt-get install libfoo1:amd64

 * I don't want libfoo any more --- remove it completely from the
   system.

apt-get --purge remove libfoo1

"Wait!" you might protest.  Isn't that last command too aggressive?
After all, I did not specify which architecture motivated the removal
of libfoo1.  Maybe I was removing libfoo1 for the sake of my
unpackaged i386 software but I still need it for unpackaged amd64
software, and apt could help me out by picking the architecture I
intended and not removing it elsewhere, right?

But no, that would not be helpful at all.  It's true that libfoo1
might be installed for more than one reason and I might have forgotten
about some and therefore remove it when that is not warranted, but
that's true whether multiarch is involved or not.  This safety feature
does not add any real consistent safety.

I can think of only one advantage to making apt-get remove libfoo1
remove libfoo1:native, though it's a good one.  That's to support
muscle memory and scripts that rely on the "libfoo1 always means
libfoo1:native" semantics that have been present in Ubuntu for a
little while.  I think it's worth losing that, since as we've seen,
most scripts dealing with cases like this are going to need changes to
work with multiarch:same packages anyway (and humans can grow to
appreciate the simple mental model Russ suggested).

Jonathan


-- 
Archive: http://lists.debian.org/20120217165937.GA9360@burratino



Re: Multiarch file overlap summary and proposal

2012-02-17 Thread Jonathan Nieder
Jonathan Nieder wrote:
 David Kalnischkies wrote:

 Why would it be intuitive to add a specific value for the arch attribute with
 apt-get install foo   # arch |= native
 but remove all values of the attribute with
 apt-get remove foo# arch = ~all-architectures
 ?
[...]
 But I really think this is something anyone can get used to.  In the
 examples you listed above:

  apt-get install foo; # install foo with default arch-list (native)
  apt-get remove foo;  # remove foo

 If foo is installed for no architectures, that does not mean it is
 installed with an empty architecture list.  It means it is simply not
 installed.

Ok, now I think I figured out the inconsistency you are pointing to.
If i386 is the native architecture, what would you expect the
following sequence of commands to do?

apt-get install linux-image-3.2.0-1-amd64:amd64

... wait a few weeks ...

apt-get install linux-image-3.2.0-1-amd64

I would expect it to install the kernel with 'Architecture: amd64' and
then to upgrade it.

So the proposed semantics are not quite 'arch |= native'.  They are
more like 'arch defaults to native for non-installed packages'.
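The "arch defaults to native for non-installed packages" rule can be made concrete with a small resolution sketch (hypothetical helper, not APT code):

```python
NATIVE = "i386"

def resolve(request, installed):
    """Map an unqualified package name to the instance(s) to act on:
    prefer already-installed instances, default to native otherwise."""
    if ":" in request:                      # explicit, e.g. "pkg:amd64"
        return [request]
    hits = [p for p in installed if p.split(":")[0] == request]
    return hits or [f"{request}:{NATIVE}"]

installed = {"linux-image-3.2.0-1-amd64:amd64"}
print(resolve("linux-image-3.2.0-1-amd64", installed))
# ['linux-image-3.2.0-1-amd64:amd64'] : upgrades the installed amd64 kernel
print(resolve("bash", installed))
# ['bash:i386'] : not installed, so the native default applies
```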

Jonathan


-- 
Archive: http://lists.debian.org/20120217171734.GB9360@burratino



Re: Multiarch file overlap summary and proposal

2012-02-17 Thread Carsten Hey
* David Kalnischkies [2012-02-17 14:15 +0100]:
 You generously left out the paragraph describing how APT should
 detect that the package foo is in fact a library ...

My impression was that you think very library centric.  All I wrote was
(in other words), that we should consider non-library packages as much
as library packages, and I did not write nor implied that libraries
should be handled in a different way.


 Is 'apt-get remove foo+' then going to install all foo's or just one?

'apt-get install g+++' is a weird syntax.


 The current implementation of always foo == foo:native doesn't fail
 your diagram, too, so what is this going to show us?

It depends on how one reads it; anyway, examples I consider to be
inconsistent are more helpful than a diagram without clear semantics.


  # dpkg --print-architecture
  amd64

  # perl -00 -lne 'print if /^Package: (clang|tendra)$/m &&
  /^Status: install ok installed$/m' /var/lib/dpkg/status \
  | awk '/^Package:/ {printf "%s:", $2} /^Architecture:/ {print $2}'
  tendra:i386
  clang:i386

I was not able to find a command that shows this information.
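The same information can be pulled out by parsing the status-file stanzas directly; a sketch with the data inlined instead of reading /var/lib/dpkg/status (stanza format as in the excerpt above):

```python
def installed_with_arch(status_text):
    """Yield 'package:architecture' for every installed stanza."""
    for stanza in status_text.split("\n\n"):
        fields = dict(
            line.split(": ", 1)
            for line in stanza.splitlines()
            if ": " in line
        )
        if fields.get("Status") == "install ok installed":
            yield f"{fields['Package']}:{fields['Architecture']}"

sample = """\
Package: clang
Status: install ok installed
Architecture: i386

Package: tendra
Status: deinstall ok config-files
Architecture: i386"""
print(list(installed_with_arch(sample)))  # ['clang:i386']
```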


  # apt-cache policy tendra | sed -n 1p
  tendra:i386:

  # apt-cache policy clang | sed -n 1p
  clang:

  # apt-get remove tendra
  The following packages will be REMOVED:
tendra:i386

  # apt-get remove clang
  Package clang is not installed, so not removed

The above shows that whether apt-get remove foo removes foo:foreign
seems to depend on the availability of foo:native.


  # dpkg -l | awk '$2=="clang" {print}'
  ii  clang   3.0-5 Low-Level ...

  # dpkg -S bin/clang
  clang: /usr/bin/clang
  clang: /usr/bin/clang++

  # dpkg -r clang
  dpkg: warning: there's no installed package matching clang

  # apt-get remove clang
  Package clang is not installed, so not removed

According to dpkg's command line interface, the file /usr/bin/clang is
in the package clang and dpkg -l shows it as installed, but it cannot
be removed using this name, neither by apt nor by dpkg.


  # dpkg -l | grep libzookeeper-st2
  ii  libzookeeper-st2:amd64  3.3.4+dfsg1-3 Single ...
  ii  libzookeeper-st2:i386   3.3.4+dfsg1-3 Single ...

Unlike the above dpkg -l output showing the foreign clang package,
libzookeeper-st2 is shown with the architecture appended.


Regards
Carsten


-- 
Archive: http://lists.debian.org/20120217185319.ga29...@furrball.stateful.de



Re: Multiarch file overlap summary and proposal

2012-02-17 Thread Carsten Hey
* David Kalnischkies [2012-02-17 17:20 +0100]:
 Why would it be intuitive to add a specific value for the arch attribute with
 apt-get install foo   # arch |= native
 but remove all values of the attribute with
 apt-get remove foo# arch = ~all-architectures
 ?

We had a similar discussion years ago.

Package: foo
Depends: a, b, c
Conflicts: x, y, z

To be able to install foo, you need to:

  * ensure that a && b && c is true.
The command to do so is apt-get install a b c

  * ensure that x || y || z is false.
The command to do so is (or rather should in my opinion be)
apt-get remove x y z


To satisfy the dependency line, *all* packages in it need to be installed.

To satisfy a conflicts field (that is, there is a conflict), *any* of
the packages in it needs to be installed.
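The asymmetry Carsten describes — a Depends line is a conjunction, a Conflicts line a disjunction — in one line each (toy model):

```python
def depends_satisfied(depends, installed):
    # a dependency line holds only if *all* listed packages are installed
    return all(p in installed for p in depends)

def conflict_present(conflicts, installed):
    # a conflict exists if *any* listed package is installed
    return any(p in installed for p in conflicts)

installed = {"a", "b", "c", "x"}
print(depends_satisfied(["a", "b", "c"], installed))  # True
print(conflict_present(["x", "y", "z"], installed))   # True: 'x' must be removed
```

So "apt-get install a b c" acts on every member of the list, while "apt-get remove x y z" only has to falsify the disjunction, which is why removal naturally targets every installed member.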


Carsten


-- 
Archive: http://lists.debian.org/20120217191404.gb29...@furrball.stateful.de



Re: Multiarch file overlap summary and proposal

2012-02-17 Thread David Kalnischkies
On Fri, Feb 17, 2012 at 19:53, Carsten Hey cars...@debian.org wrote:
 * David Kalnischkies [2012-02-17 14:15 +0100]:
 You generously left out the paragraph describing how APT should
 detect that the package foo is in fact a library ...

 My impression was that you think very library centric.  All I wrote was
 (in other words), that we should consider non-library packages as much
 as library packages, and I did not write nor implied that libraries
 should be handled in a different way.


 Is 'apt-get remove foo+' then going to install all foo's or just one?

 apt-get install g+++ is a weird syntax.

But it's a syntax we have supported basically forever, and it comes in
handy if you want to tell APT not to choose a specific alternative
(or disable specific recommends, …) without holds.
aptitude supports a few more of these modifiers (e.g. ) btw.


 The above shows that it seems to depend on the availability of
 foo:native if apt-get remove foo removes foo:foreign.

It's the availability of the native (or for that matter: most preferred arch)
which 'changes' this behavior. As tendra is not available for amd64, it's
a pretty fair guess that i386 was meant and - as the result would be an
error otherwise - to install it.
It's removed with the same command s/install/remove/.

A similar guess isn't done for removes in case the native is not installed
(but available and foreigns are installed) as it is a destructive command
(besides that, it would fail the s/remove/install/ test).
See also the arguments against the
"foo == foo:whatever, provided that whatever is unique"
in Raphael's mail in thread [0] mentioned above.


And as i said, its not only about apt-get install/remove.
It would be nice to have an approach usable for the various
commands of apt-mark and apt-cache, too.
(bonus points if it doesn't break usage with dpkg completely)


  # dpkg -l | awk '$2=="clang" {print}'
  ii  clang                           3.0-5                 Low-Level ...

  # dpkg -S bin/clang
  clang: /usr/bin/clang
  clang: /usr/bin/clang++

  # dpkg -r clang
  dpkg: warning: there's no installed package matching clang

The last one is an inconsistency in what dpkg should do.
If i understood the outcome of thread [1] above right, dpkg
doesn't want to arch-qualify packages for which only one package
can be meant. I personally think this is unfortunate, but so be it.
APT can't do this as while you might have only one installed at a
time you can still juggle different archs in one apt command,
so we need to differentiate here.


Long story short, 'apt-get remove clang' fails, yes, but you have
it installed with 'apt-get install clang:i386' so we are at least
consistent (see foo == foo:whatever).
I could have a look at printing a notice though, to be nice…


  # dpkg -l | grep libzookeeper-st2
  ii  libzookeeper-st2:amd64          3.3.4+dfsg1-3         Single ...
  ii  libzookeeper-st2:i386           3.3.4+dfsg1-3         Single ...

 Unlike the above dpkg -l output showing the foreign clang package,
 libzookeeper-st2 is shown with the architecture appended.

dpkg prefers users to be specific if they refer to an M-A:same package.
It allows omitting the arch as shorthand for the 'installed' arch, but
mostly only for backward compatibility, as apt/squeeze obviously doesn't
know about that new requirement.


Best regards

David Kalnischkies


--
Archive: 
http://lists.debian.org/caaz6_fdgs4zegelof9sb8-r-nfur2zo5bcjy0_8zpqfrfyz...@mail.gmail.com



Re: Multiarch file overlap summary and proposal

2012-02-16 Thread Goswin von Brederlow
Russ Allbery r...@debian.org writes:

 David Kalnischkies kalnischk...@gmail.com writes:
 On Thu, Feb 16, 2012 at 00:39, Russ Allbery r...@debian.org wrote:

 Actually, why would that be the behavior?  Why would dpkg --purge foo
 not just remove foo for all architectures for which it's installed, and
 require that if you want to remove only a specific architecture you
 then use the expanded syntax?

 We (as in APT team and dpkg team) had a lot of discussions about that,
 see for starters (there are probably more in between the 10 months…)
 [0] http://lists.debian.org/debian-dpkg/2011/01/msg00046.html
 [1] http://lists.debian.org/debian-dpkg/2011/12/msg5.html

 In short, i think the biggest counter is that it feels unintuitive to
 install a library (in native arch) with e.g. apt-get install libfoo
 while you have to be specific at removal to avoid nuking 'unrelated'
 packages with apt-get remove libfoo.

 Ah, hm... I suppose that's a good point, although honestly I wouldn't mind
 having apt-get remove libfoo remove all instances of libfoo that are
 installed.  I think that would be quite reasonable behavior, and don't
 find it particularly unintuitive.

 I agree that it's asymmetric.  apt-get install libfoo means libfoo:native,
 but apt-get remove libfoo means libfoo:*.  And asymmetric is bad, all
 things being equal.  But I think this may be one place where asymmetric is
 still the right thing to do; I would argue that it means you're
 implementing the most common operation in both cases.  apt-get install
 libfoo generally means give me a native libfoo since non-native libfoo
 is going to be an unusual case, and apt-get remove libfoo generally means
 I have no more interest in libfoo, make it go away.  I think that people
 who want to get rid of one architecture of libfoo but keep the other are
 already going to be thinking about architectures, and it's natural to ask
 them to qualify their request.

In another thread we discussed the problem with plugins (e.g. input
methods for chinese/japanese) and LD_PRELOAD (e.g. fakeroot) using
stuff. For those packages it would be great if

apt-get install plugin

would install all architectures of the package (for various values of
all :). This would add asymmetry in that apt-get install libfoo would
sometimes mean libfoo:native and sometimes libfoo:*. Having apt-get
install libfoo:* for anything M-A:same would make it more symmetric in
that case.

apt-get install libfoo generally means please upgrade libfoo to the
latest version. That should be apt-get upgrade libfoo, which doesn't
yet exist. Libraries should be pulled in by binaries and not
installed manually, so I wouldn't give that case much weight.

Instead concentrate on the more useful cases:

apt-get install plugin binary libfoo-dev bindings-for-some-interpreter

Plugins will be M-A:same and depend on something M-A:same. They will
have some other indication (to be implemented) that they are
plugins. Libfoo-dev will be M-A:same. Binaries will be M-A:foreign.
Bindings will be M-A:same but depend on something M-A:allowed.

Now think about what would be most useful for those cases.

Best regards
Goswin


-- 
Archive: http://lists.debian.org/8739abnpz4.fsf@frosties.localnet



Re: Multiarch file overlap summary and proposal

2012-02-16 Thread David Kalnischkies
On Thu, Feb 16, 2012 at 09:26, Goswin von Brederlow goswin-...@web.de wrote:
 Russ Allbery r...@debian.org writes:
 David Kalnischkies kalnischk...@gmail.com writes:
 On Thu, Feb 16, 2012 at 00:39, Russ Allbery r...@debian.org wrote:

 Actually, why would that be the behavior?  Why would dpkg --purge foo
 not just remove foo for all architectures for which it's installed, and
 require that if you want to remove only a specific architecture you
 then use the expanded syntax?

 We (as in APT team and dpkg team) had a lot of discussions about that,
 see for starters (there are probably more in between the 10 months…)
 [0] http://lists.debian.org/debian-dpkg/2011/01/msg00046.html
 [1] http://lists.debian.org/debian-dpkg/2011/12/msg5.html

 In short, i think the biggest counter is that it feels unintuitive to
 install a library (in native arch) with e.g. apt-get install libfoo
 while you have to be specific at removal to avoid nuking 'unrelated'
 packages with apt-get remove libfoo.

 Ah, hm... I suppose that's a good point, although honestly I wouldn't mind
 having apt-get remove libfoo remove all instances of libfoo that are
 installed.  I think that would be quite reasonable behavior, and don't
 find it particularly unintuitive.

 I agree that it's asymmetric.  apt-get install libfoo means libfoo:native,
 but apt-get remove libfoo means libfoo:*.  And asymmetric is bad, all
 things being equal.  But I think this may be one place where asymmetric is
 still the right thing to do; I would argue that it means you're
 implementing the most common operation in both cases.  apt-get install
 libfoo generally means give me a native libfoo since non-native libfoo
 is going to be an unusual case, and apt-get remove libfoo generally means
 I have no more interest in libfoo, make it go away.  I think that people
 who want to get rid of one architecture of libfoo but keep the other are
 already going to be thinking about architectures, and it's natural to ask
 them to qualify their request.

 In another thread we discussed the problem with plugins (e.g. input
 methods for chinese/japanese) and LD_PRELOAD (e.g. fakeroot) using
 stuff. For those packages it would be great if

    apt-get install plugin

 would install all architectures of the package (for various values of
 all :). This would add asymmetry in that apt-get install libfoo would
 sometimes mean libfoo:native and sometimes libfoo:*. Having apt-get
 install libfoo:* for anything M-A:same would make it more symmetric in
 that case.

 apt-get install libfoo generally means please upgrade libfoo to the
 latest version. That should be apt-get upgrade libfoo, which doesn't
 yet exist. Libraries should be pulled in by binaries and not
 installed manually, so I wouldn't give that case much weight.

But M-A:same will end up on dev-packages as well, and these are quite
likely to be installed manually. And in the end, libraries are more often
installed by hand than they are removed - think e.g. of the binary
distribution of certain applications (aka mostly games).
I need libfoo for my new amd64 game so i install it. Later i remove the
game and remember to remove libfoo with it also. I just forgot that i have
an i386 game i play from time to time which requires libfoo:i386, which is
killed by that, too. That i haven't packaged my games is unfortunate, but
we are talking about real-world usage here…

Also, in some distant future we might be able to co-install binaries.
It's easy to think of M-A:same as just libraries, but i personally think that
this is an unnecessary mental limitation which just exists because it is
currently the (mostly imaginative) case.

And it seems like you assume apt-get and co are only used by humans.
In fact i think it is at least equally commonly used in scripts, usually with -y
to e.g. remove obsolete packages. I can't wait for the resulting shitstorm…

(btw, you know that 'apt-get purge foo+' is possible, right?
 Which behavior would you expect? The same as 'apt-get install foo' ?)


The same-as thing in the plugin thread just smells like poor-man's
conditional dependency - and it's trying to solve something which isn't
solvable on that level: Just because i have armel packages installed on my
system doesn't mean that i am going to execute an armel binary.
Cross-building for example will install libc6:armel for me, but i am still
not even close to being interested in libfakeroot:armel.
To get libfakeroot:armel onto the system is the responsibility of whatever tool
helps the administrator to set up a foreign armel system on his host,
which is his brain if (s)he chooses to set it up by hand with apt-get.

It's comparable with the dependency grouping for xul-applications:
The user has a variety of usecases to choose from but all these usecases
include the same apt-get command. Which usecase is the most popular
is not really measurable, and even if it were, it changes over time, but the
behavior of the apt-* tools is expected to stay stable at the same time.

I was 

Re: Multiarch file overlap summary and proposal

2012-02-16 Thread Russ Allbery
I was thinking more about this, and I was finally able to put a finger on
why I don't like package splitting as a solution.

We know from prior experience with splitting packages for large
arch-independent data that one of the more common mistakes that we'll make
is to move the wrong files: to put into the arch-independent package a
file that's actually arch-dependent.

Look at the failure mode when that happens with the sort of package that
we're talking about splitting out of m-a:same packages:

* The arch-independent package gets arch-dependent content that happens to
  match the architecture of the maintainer's build machine, since that's
  the only place the arch-independent package is built.  The maintainer
  will by definition not notice, since the content is right for their
  system.

* The maintainer is probably using a popular system type (usually either
  i386 or amd64), and everyone else on that system type will also not
  notice, so the bug can be latent for some time.

* Systems with the wrong architecture will get data files that have the
  wrong format or the wrong information.  This is usually not a case that
  the software is designed to detect, so the result is normally random
  segfaults or similar sorts of major bugs.  The failure case for header
  files is *particularly* bad: C software will generally compile fine with
  the wrong-sized data types and then, at *runtime*, happily pass the
  wrong data into the library, resulting in random segfaults and possibly
  even data corruption.  This won't happen until runtime, so could go
  undetected for long periods of time.

This is a particularly nasty failure mode due to how long it can stay
undetected and how much havoc it causes.

Now, compare to the failure mode with refcounting if the maintainer
doesn't realize that an arch-specific file can't be shared:

* Each arch-specific package will continue to get the appropriate files
  for that architecture.  Each package will still be usable and consistent
  independently, so users who don't care about multiarch won't ever see a
  problem.

* Users who want to co-install separate architectures will immediately
  encounter a dpkg error saying that the files aren't consistent.  This
  means they won't be able to co-install the packages, but dpkg will
  prevent any actual harm from happening.  The user will then report a bug
  and the maintainer will realize what happened and be able to find some
  way to fix it.

* Even better, we can automatically detect this error case by scanning the
  archive for architecture pairs that have non-matching overlapping files
  and deal with it proactively.
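The proactive archive scan in the last bullet could be approximated like this (hypothetical input: a mapping from (package, architecture) to path→content digest):

```python
from collections import defaultdict

def conflicting_overlaps(archive):
    """Find files shipped at the same path by several architectures of the
    same package with differing content digests (would fail refcounting)."""
    by_path = defaultdict(dict)   # (package, path) -> {arch: digest}
    for (pkg, arch), files in archive.items():
        for path, digest in files.items():
            by_path[(pkg, path)][arch] = digest
    return {
        key for key, digests in by_path.items()
        if len(set(digests.values())) > 1
    }

archive = {
    ("libfoo-dev", "amd64"): {"/usr/include/foo.h": "aaa"},
    ("libfoo-dev", "i386"):  {"/usr/include/foo.h": "bbb"},  # arch-specific header!
    ("libbar-dev", "amd64"): {"/usr/include/bar.h": "ccc"},
    ("libbar-dev", "i386"):  {"/usr/include/bar.h": "ccc"},  # identical, shareable
}
print(conflicting_overlaps(archive))  # {('libfoo-dev', '/usr/include/foo.h')}
```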

The refcounting failure mode behavior is just completely superior here.
And this *is* a mistake that we're going to make frequently; we know that
from past experience with splitting packages.  Note that this problem
often happens because, when the maintainer originally split the package,
there was nothing arch-specific in the file, but upstream made it
arch-specific later on and the maintainer didn't notice.  (It's very easy
to miss.)  This is particularly common with header files.

Note that arch-qualifying all of the files does not have the problems of
package splitting, but it's also a much more intrusive fix.

-- 
Russ Allbery (r...@debian.org)   http://www.eyrie.org/~eagle/


-- 
Archive: http://lists.debian.org/87pqdeobyu@windlord.stanford.edu



Re: Multiarch file overlap summary and proposal

2012-02-16 Thread Carsten Hey
* David Kalnischkies [2012-02-16 03:59 +0100]:
 On Thu, Feb 16, 2012 at 00:39, Russ Allbery r...@debian.org wrote:
    it needs to find and remove foo:*

foo:all (or foo:any) instead of foo:* would save the need to quote it.
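The quoting concern comes from shell globbing: an unquoted foo:* can silently expand if a matching file exists in the current directory. A quick demonstration (bash default settings, scratch directory):

```shell
cd "$(mktemp -d)"
echo foo:*        # prints: foo:*      (no match, glob passed through literally)
touch foo:amd64
echo foo:*        # prints: foo:amd64  (glob expanded by the shell)
echo 'foo:*'      # prints: foo:*      (quoting always preserves it)
```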

  Actually, why would that be the behavior?  Why would dpkg --purge foo not
  just remove foo for all architectures for which it's installed, and
  require that if you want to remove only a specific architecture you then
  use the expanded syntax?

 We (as in APT team and dpkg team) had a lot of discussions about that,
 see for starters (there are probably more in between the 10 months…)
 [0] http://lists.debian.org/debian-dpkg/2011/01/msg00046.html
 [1] http://lists.debian.org/debian-dpkg/2011/12/msg5.html

 In short, i think the biggest counter is that it feels unintuitive to
 install a library (in native arch) with e.g. apt-get install libfoo
 while you have to be specific at removal to avoid nuking 'unrelated' packages
 with apt-get remove libfoo.

I would expect this (especially if the package foo is not a library, but
I would also expect this for libraries):

 * apt-get install foo tries to install foo:native if possible, if it is
   not possible, it tries to install the package foo from an other
   architecture but ask before proceeding (as if additional dependencies
   are required to install a package).
 * apt-get remove foo removes all installed foo packages (on all
   architectures).


This summarises how apt without multi-arch handles this, the above would
make apt with multi-arch also behave so:

                   apt-get install foo
                 ---------------------->
foo is not installed                      foo is installed
                 <----------------------
                   apt-get remove foo

  Note that this obviously requires that a binNMU not be considered a
  different version of the package for this purpose.  But I think that too
  makes sense.  A binNMU *isn't* a full new version of the package; it's a
  new build of the same version.  We've historically been a bit sloppy about
  this distinction, but I think it's a real distinction and a meaningful
  one.

 Mhh. The current spec just forbids binNMUs for M-A:same packages -
 the 'sync' happens on the exact binary version.
 Somewhere else in this multiarch discussion it was hinted that we could
 sync on the version in the (optional) Source tag instead to allow binNMUs.
 It's a bit too late (in my timezone) for me to do serious predictions on
 the difficulty of changing this in APT, but i guess it's relatively easy.

 (the only problem i see is that i don't have ${source:Version} available
  currently in the version structure, but we haven't even tried pushing
  apt's ABI break to sid specifically as i feared last-minute changes…)

I'm not sure if you meant this with the Source tag; anyway, the 'Packages'
files miss the source version too, but this could be added as an optional
field that would be used if it differs from the 'Version:' field.


Regards
Carsten


-- 
Archive: http://lists.debian.org/20120216221059.ga8...@furrball.stateful.de



Re: Multiarch file overlap summary and proposal

2012-02-16 Thread Carsten Hey
* Russ Allbery [2012-02-16 10:43 -0800]:
 * Users who want to co-install separate architectures will immediately
   encounter a dpkg error saying that the files aren't consistent.  This
   means they won't be able to co-install the packages, but dpkg will
   prevent any actual harm from happening.  The user will then report a bug
   and the maintainer will realize what happened and be able to find some
   way to fix it.

 * Even better, we can automatically detect this error case by scanning the
   archive for architecture pairs that have non-matching overlapping files
   and deal with it proactively.

There are still files that differ that do not need to be fixed, for
example documentation that contains its build date.

One way to address this is to use a new dpkg control file (placed in
/var/lib/dpkg/info) that lists files that dpkg considers to be equal
even if they differ.
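Such a control file would turn the refcount comparison into something like this (toy sketch; the "declared equal" control file is hypothetical, as is the example path):

```python
def refcount_ok(path, digest_a, digest_b, declared_equal):
    """dpkg-style check for a shared path: contents must match unless the
    package declares the file equal despite differences (e.g. build dates)."""
    return digest_a == digest_b or path in declared_equal

declared_equal = {"/usr/share/doc/foo/manual.html"}  # contains its build date
print(refcount_ok("/usr/share/doc/foo/manual.html", "aaa", "bbb", declared_equal))  # True
print(refcount_ok("/usr/include/foo.h", "aaa", "bbb", declared_equal))              # False
```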


Carsten


-- 
Archive: http://lists.debian.org/20120216224340.gb8...@furrball.stateful.de



Re: Multiarch file overlap summary and proposal

2012-02-16 Thread Russ Allbery
Carsten Hey cars...@debian.org writes:
 * Russ Allbery [2012-02-16 10:43 -0800]:

 * Users who want to co-install separate architectures will immediately
   encounter a dpkg error saying that the files aren't consistent.  This
   means they won't be able to co-install the packages, but dpkg will
   prevent any actual harm from happening.  The user will then report a bug
   and the maintainer will realize what happened and be able to find some
   way to fix it.

 * Even better, we can automatically detect this error case by scanning the
   archive for architecture pairs that have non-matching overlapping files
   and deal with it proactively.

 There are still files that differ that do not need to be fixed, for
 example documentation that contains its build date.

Every file that differs has to be fixed in the current multi-arch plan.
Documentation that contains its build date is going to need to be split
out into a separate -docs package.

I'm fine with splitting documentation; that has far fewer problems than
splitting other types of files, since documentation isn't tightly coupled
at a level that breaks software.

 One way to address this is to use a new dpkg control file (placed in
 /var/lib/dpkg/info) that lists files that dpkg considers to be equal
 even if they differ.

I don't think this is a good idea.  I don't think we should allow this
sort of inconsistency depending on what package is installed first.

-- 
Russ Allbery (r...@debian.org)   http://www.eyrie.org/~eagle/


-- 
Archive: http://lists.debian.org/87d39eie26@windlord.stanford.edu



Re: Multiarch file overlap summary and proposal

2012-02-15 Thread Goswin von Brederlow
Ian Jackson ijack...@chiark.greenend.org.uk writes:

 Russ Allbery writes (Multiarch file overlap summary and proposal (was: 
 Summary: dpkg shared / reference counted files and version match)):
 5. Data files that vary by architecture.  This includes big-endian
vs. little-endian issues.  These are simply incompatible with
multiarch as currently designed, and incompatible with the obvious
variations that I can think of, and will have to either be moved
into arch-qualified directories (with corresponding patches to the
paths from which the libraries load the data) or these packages
can't be made multiarch.

 Yes.  Of these, arch-qualifying the path seem to be to be obviously
 the right answer.  Of course eg if the data files just come in big-
 and little-endian, you can qualify the path with only the endianness
 and use refcounting to allow the equal-endianness packages to share.

 Ian.

Preferably -data-be:all and -data-le:all packages, if they can be built
irrespective of the buildd's endianness.
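
As a small illustration (mine, not part of the original mail), a library
loading such endianness-split data could select the matching directory at
runtime. The base path and the data-be/data-le naming below are purely
hypothetical:

```python
import sys

def data_dir(base="/usr/share/foo"):
    # Pick the endianness-qualified data directory, matching the
    # -data-be:all / -data-le:all package split suggested above.
    # The path layout here is illustrative only; a real package
    # would choose its own scheme.
    suffix = "be" if sys.byteorder == "big" else "le"
    return f"{base}/data-{suffix}"
```

On a little-endian buildd such as amd64 this would resolve to the
data-le directory.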

MfG
Goswin


-- 
Archive: http://lists.debian.org/87zkckrwnl.fsf@frosties.localnet



Re: Multiarch file overlap summary and proposal

2012-02-15 Thread Goswin von Brederlow
Ian Jackson ijack...@chiark.greenend.org.uk writes:

 Guillem Jover writes (Re: Multiarch file overlap summary and proposal (was: 
 Summary: dpkg shared / reference counted files and version match)):
 This still does not solve the other issues I listed, namely binNMUs
 have to be performed in lock-step, more complicated transitions /
 upgrades.

 I don't think I see where this is coming from.  Are you talking about
 variation in gzip output ?  Given the evidence we've seen here, in
 practice I think that is not going to be a problem.  Certainly it
 won't demand that binNMUs be performed in lock-step.

Note that splitting files (specifically the changelog) into a -common
package would require an explicit versioned dependency on the -common
package and produce the same (or similar) lock-step problem for upgrades
and binNMUs. Arch-qualifying the files, on the other hand, would avoid that.

Splitting data files into -common packages will also often need a tight
versioned dependency, forcing packages into lock-step. But probably not
so tight that binNMUs would have to be lock-stepped.

Overall I think the lock-step required for reference-counted files
won't have as large an effect as you might think.



I think splitting the binNMU changelog into an extra file is a great
idea, as that would allow putting the changelog into -common:all with a
dependency on the source version, and then having the binNMU changelog in
the foo:any package in the symlinked directory. For this to work the
binNMU changelog should be arch- and pkg-qualified, e.g.

/usr/share/doc/foo-common/Changelog
/usr/share/doc/foo-common/Changelog.binNMU-foo-amd64
/usr/share/doc/foo-common/Changelog.binNMU-bar-i386
/usr/share/doc/foo -> foo-common
/usr/share/doc/bar -> foo-common
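
A throwaway sketch (mine, not Goswin's) of the naming scheme above; the
-common directory layout and the helper name are hypothetical:

```python
def binnmu_changelog(common, pkg, arch, docdir="/usr/share/doc"):
    # Arch- and package-qualified binNMU changelog path inside the
    # shared <common>-common doc directory, following the naming
    # sketched in the listing above (names are illustrative only).
    return f"{docdir}/{common}-common/Changelog.binNMU-{pkg}-{arch}"
```

So both foo:amd64 and bar:i386 can drop their binNMU changelogs into
foo-common without colliding.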

MfG
Goswin


-- 
Archive: http://lists.debian.org/87vcn8rw1z.fsf@frosties.localnet



Re: Multiarch file overlap summary and proposal

2012-02-15 Thread Goswin von Brederlow
Russ Allbery r...@debian.org writes:

 Joey Hess jo...@debian.org writes:

 Anyway, my worry about the refcounting approach (or perhaps M-A: same in
 general) is not the details of the implementation in dpkg, but the added
 mental complexity of dpkg now being able to have multiple distinct
 packages installed under the same name. I had a brief exposure to rpm,
 which can install multiple versions of the same package, and that was
 the main cause of much confusing behavior in rpm. While dpkg's invariant
 that all co-installable package names be unique (and have unique files)
 has certainly led to lots of ugly package names, it's kept the users'
 and developers' mental models quite simple.

 I worry that we have barely begun to scratch the surface of the added
 complexity of losing this invariant.

 This does seem to be more M-A: same in general, to me, since whether we
 have file overlaps or not we still have multiple packages with the same
 name.  Which will force changes in everything that deals with packages,
 like Puppet, to be able to specify packages with particular architectures.

 I definitely agree on the complexity this adds.  But I don't think there's
 an alternative to that complexity without using something like --sysroot
 or mini-chroots, and I don't think those are satisfying solutions to the
 set of problems we're trying to solve.

pkg:arch will still be unique, and the dpkg/apt output will use the
architecture where required for uniqueness. So I think that, after some
getting used to it, it will be clear enough again.

MfG
Goswin


-- 
Archive: http://lists.debian.org/87r4xwrvw2.fsf@frosties.localnet



Re: Multiarch file overlap summary and proposal

2012-02-15 Thread Joey Hess
Goswin von Brederlow wrote:
 pkg:arch will still be unique and the dpkg/apt output will use the
 architecture where required for uniqueness. So I think that after some
 getting used to it it will be clear enough again.

Here are a few examples of the problems I worry about. I have not
verified any of them, and they're clearly biased toward code I am
familiar with, which suggests there are many other similar problems.

* Puppet not only installs packages, it may remove them. A puppet config
  that does dpkg --purge foo will fail if multiarch is enabled; it now
  needs to find and remove foo:*

* dpkg-repack pkg:arch will create a package with that literal name (or fail)

* dpkg-reconfigure probably can't be used with M-A same packages.
  debconf probably generally needs porting to multiarch.

* tasksel uses dpkg --query to work out if a task's dependencies are
  installed. In the event that a library is directly part of a task,
  this will fail when multiarch is enabled.

* Every piece of documentation that gives commands lines manipulating
  library packages is potentially broken.
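
To make the foo:* problem concrete, here is a minimal, hypothetical
sketch (mine, not from any of the tools named above) of the kind of
pkg:arch parsing that Puppet, tasksel and friends would have to grow:

```python
def split_pkg_spec(spec):
    # Split an arch-qualified package name like "libfoo:i386" into
    # (name, arch); arch is None for a bare, unqualified name.
    # Any tool that matches packages purely by name would need
    # handling along these lines once several architectures of one
    # package can be co-installed.
    name, sep, arch = spec.partition(":")
    return (name, arch if sep else None)
```

A removal path would then have to iterate over every installed
architecture of the name rather than assume a single match.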

Seems like we need a release where multiarch is classed as an
experimental feature, which when enabled can break the system. But the
sort of problems above are the easy to anticipate ones; my real worry is
the unanticipated classes of problems. Especially if we find intractable
problems or levels of complexity introduced by dropping the unique
package name invariant.

My nightmare scenario is that we release with multiarch, discover that
it's a net negative for our users (proprietary software on amd64 aside,
nearly all the benefits are to developers AFAICS), and are stuck with it.

-- 
see shy jo




Re: Multiarch file overlap summary and proposal (was: Summary: dpkg shared / reference counted files and version match)

2012-02-15 Thread Ian Jackson
Guillem Jover writes (Re: Multiarch file overlap summary and proposal (was: 
Summary: dpkg shared / reference counted files and version match)):
 On Tue, 2012-02-14 at 14:28:58 +, Ian Jackson wrote:
  I think the refcounting approach is very worthwhile because it
  eliminates unnecessary work (by human maintainers) in many simple
  cases.
 
 Aside from what I said on my other reply, I just wanted to note that
 this seems to be a recurring point of tension in the project when it
 comes to archive wide source package changes, where supposed short
 term convenience (with its usually long term harmful effects) appears
 to initially seduce people over what seems to be the cleaner although
 slightly a bit more laborious solution.

The refcnt doesn't just eliminate unnecessary multiarch
conversion work.  It also eliminates unnecessary maintenance effort:
maintaining a split package will be more work than maintaining an
unsplit one.

I think that over the lifetime of the multiarch deployment this extra
packaging work will far outweigh the extra maintenance and
documentation burden of the refcnt feature.

  [...]  But trying to workaround this by coming
 up with stacks of hacked up solutions  [...]

I disagree with your tendentious phrasing.  The refcnt feature is not
a hacked up solution (nor a stack of them).  It is entirely normal
in Debian core tools (as in any substantial piece of software serving
a lot of diverse needs) to have extra code that makes common cases
simpler to deploy or use.

Ian.


-- 
Archive: http://lists.debian.org/20283.57393.237949.649...@chiark.greenend.org.uk


