Bug#741573: Two menu systems

2014-12-27 Thread Wouter Verhelst
On Mon, 22 Dec 2014 14:29:44 + Ian Jackson 
ijack...@chiark.greenend.org.uk wrote:
 The traditional Debian menu system (mostly done by Bill Allombert) has
 been providing menu entries for bc and dc and everything for years.
 That is what its users expect.  It is what users like Matthew Vernon
 want:
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=741573#20
 
 What you are suggesting above is that the Debian menu will simply be
 abolished.

This seems correct.

 No-one will be allowed[1] to provide a comprehensive menu in Debian.

This doesn't.

The trad menu system and the desktop menu system are both, in essence,
just a bunch of metadata. What that metadata represents is how you
start a particular bit of software. To that extent, they are the
same.

The actual *contents* of the trad menu system and the desktop menu
system are vastly different. I suspect that the opposition to losing the
trad menu system is not so much about the metadata *format* as it is
about the *contents* of those menu systems; about the actual menus that
result from interpreting the metadata.

But I don't see why that would need to be a problem, or indeed be part
of this question.

There is no reason why we couldn't, in theory, build a menu system
whose contents are semantically similar to those of the trad menu
system (perhaps differing in minor details, such as categories), but
which uses desktop metadata rather than trad metadata.
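To make the comparison concrete, here is roughly what the two metadata
formats look like for the same program. The bc entry below is
illustrative: the trad line follows the menu(5) syntax and the desktop
stanza follows the Desktop Entry specification, but the exact section
and category strings are assumptions, not what Debian actually ships:

```text
# Trad menu metadata, e.g. /usr/share/menu/bc:
?package(bc): needs="text" section="Applications/Science/Mathematics" \
  title="bc" command="/usr/bin/bc"

# The same entry as desktop metadata, e.g. /usr/share/applications/bc.desktop:
[Desktop Entry]
Type=Application
Name=bc
Exec=bc
Terminal=true
Categories=Science;Math;
```

Both say "here is how you start this bit of software"; only the
encoding differs.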

There is no reason why moving to desktop files as the supported menu
format must imply losing most or all of the contents that the trad
menu currently has. It could, yes, and maybe it would make sense
if some of the more... unusual menu entries (such as those for bash
or python) were removed from the menu system. However, that is a
wholly different question from the question of which metadata format we
decide to go with, long-term.

I submit that the TC, for the purpose of answering this question before
it, should at first simply decide on a preferred metadata format. The
contents of the resulting menus are something they can then decide on as
a separate question (or ignore altogether if they decide it is not
appropriate for them to make that decision).

I will add that the Debian menu is an all-or-nothing approach; TTBOMK it
is not possible to create an entry in the Debian menu saying something
along the lines of "this should not be shown by default" or "this should
not be shown by default in environment X". This might be one reason why
some of our DE maintainers decided not to show the Debian menu anymore.

The same is not true for the desktop metadata format.
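By contrast, the desktop entry format has per-entry visibility keys
(NoDisplay, OnlyShowIn, NotShowIn). The entry contents below are
illustrative; the keys themselves are from the spec:

```text
[Desktop Entry]
Type=Application
Name=dc
Exec=dc
Terminal=true
# Hide from all menus by default:
NoDisplay=true
# ...or, alternatively, hide only in specific environments:
NotShowIn=GNOME;KDE;
```

So an environment rendering desktop metadata can show a comprehensive
menu or a curated one, per entry, without an all-or-nothing switch.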

-- 
It is easy to love a country that is famous for chocolate and beer

  -- Barack Obama, speaking in Brussels, Belgium, 2014-03-26


-- 
Archive: https://lists.debian.org/20141227161131.ga2...@grep.be



Bug#771070: requirements for cross toolchain packages in the distribution

2014-12-27 Thread Wookey
+++ Ben Longbons [2014-12-18 12:23 -0800]:
 On Thu, Dec 18, 2014 at 8:36 AM, Wookey woo...@wookware.org wrote:
  MA-built vs in-arch
  -------------------
  I guess an interesting question is 'what does the cross-compiler
  actually _use_ the foreign arch libc for'? Does it need its own
  independent copy? What happens when the compiler libc-$arch-cross and
  the system libc:$arch get out of sync. Does it matter?
 
 The thought of this gives me nightmares. Glibc is very good at
 backwards-compatibility as far as packages go, but does not even
 attempt forwards-compatibility.

Indeed. This seems like a bad thing but, in fairness, Emdebian and
Ubuntu have been shipping cross-compilers whose versions do not
necessarily match for quite some time, and it hasn't caused obvious
practical problems.

So maybe this doesn't matter, but I'd really like to understand
exactly what's going on so we can see what might break, and thus
judge how important it is.

In practice I presume that all these libraries are dynamically linked,
so what actually matters is which versions are available at runtime,
and how tight the dependencies are.
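The asymmetry Ben describes is easy to model: a dynamically linked
binary records the glibc symbol versions it needs, and the loader
refuses to start it unless the runtime libc provides all of them. A toy
sketch of that check (the version-set contents are made up; real
binaries record per-symbol versions like GLIBC_2.17):

```python
# Toy model of glibc symbol versioning: a binary loads only if every
# symbol version it requires is provided by the runtime libc.
def loadable(required_versions, provided_versions):
    """True if the runtime libc satisfies the binary's requirements."""
    return set(required_versions) <= set(provided_versions)

# Hypothetical version nodes: libc 2.999 provides everything 2.998
# does, plus one new node.
libc_2_998 = {"GLIBC_2.0", "GLIBC_2.17", "GLIBC_2.998"}
libc_2_999 = libc_2_998 | {"GLIBC_2.999"}

# A program built against libc-$arch-cross 2.999 may pick up the new node:
built_against_cross = {"GLIBC_2.17", "GLIBC_2.999"}

# Backwards compatibility: old binary, newer runtime libc -- fine.
print(loadable({"GLIBC_2.17"}, libc_2_999))       # True
# No forwards compatibility: new binary, older runtime libc -- fails.
print(loadable(built_against_cross, libc_2_998))  # False
```

This is why skew is harmless in one direction (libc:$arch newer than
libc-$arch-cross) and fatal in the other.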

 If anything (I'm still confused if this means only libstdc++, or every
 user program - but for those of us writing C++ I suppose it makes no
 difference) gets built against libc-$arch-cross (= 2.999) but at
 runtime there is only libc:$arch (= 2.998), then programs will almost
 certainly fail to load because of missing symbol versions, and
 possibly even fail to link.

In general, even if the compiler is built against libc-$arch-cross
(which is thus installed along with the toolchain), programs built
with that compiler should link against libc:$arch. I have seen cases
where they don't, and end up linking against the libc-$arch-cross copy
instead. I don't have a good handle on how common this is.

But clearly, if there isn't a libc-$arch-cross 'internal' copy present
then this problem can't arise. On the other hand, having the two
'flavours' of libc lets you install the toolchain 'within-arch', and
avoids uninstallability-due-to-multiarch-skew in unstable. 

  multilib vs multiarch
  ---------------------
  Native compilers are not yet co-installable so have to use multilib to
  target more than one ABI.
 
 Can we please fix this? I'm tired of having to special-case my
 buildbot scripts for arches only available through multilib (i586 and
 gnux32 on an amd64 host; this also prevents ever running my buildbot
 on anything other than amd64). For all other arches (native or
 multiarch cross) the script is trivial.

This is the relevant page:
https://wiki.debian.org/CoinstallableToolchains

which has a link to the bug (#666743) at the bottom. Helmut has the
best handle on the changes needed for this, and so far as I know Doko
did not object to the moving around of symlinks which makes this
possible (it was discussed at the bootstrap sprint:
https://wiki.debian.org/Sprints/2014/BootstrapSprint/Results, section 3.11).

Up-to-date patches would be good, to get this moving.

 It does not work to just make my own ~/bin/i586-linux-gnu-gcc etc
 scripts that call x86_64-linux-gnu-gcc -m32, because I also have to
 worry about the libs being in the wrong place ({/usr,}/lib32 instead
 of {/usr,}/lib/i586-linux-gnu, except that it's really
 {/usr,}/lib/i386-linux-gnu ... why can't the numbers be consistent?
 It's even i486-linux-gnu in places too ... meh, unimportant to this
 discussion, except I suppose I'll need to special-case the library
 locations anyway, or else lie and call my binary packages i386 even
 though they're really i586 ... it's not like everyone else doesn't
 anyway).

Yeah, that's all a pain. Blame the x86 people for using three different
triplets for the same ABI.
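For what it's worth, a wrapper of that shape can be made to work if it
also redirects the library search path. A hypothetical shim follows:
the -L paths and the i586/i386 mapping are assumptions for
illustration, and setting REALGCC=echo is just a dry-run trick so the
sketch needs no toolchain installed:

```shell
# Hypothetical wrapper: i586-linux-gnu-gcc delegating to the native
# compiler's multilib mode, with the i386 multiarch and legacy lib32
# directories added to the library search path.
mkdir -p /tmp/xbin
cat > /tmp/xbin/i586-linux-gnu-gcc <<'EOF'
#!/bin/sh
# i586 shares the i386-linux-gnu ABI, so search its multiarch dir too.
exec "${REALGCC:-x86_64-linux-gnu-gcc}" -m32 \
    -L/usr/lib32 -L/usr/lib/i386-linux-gnu "$@"
EOF
chmod +x /tmp/xbin/i586-linux-gnu-gcc

# Dry run: substitute echo for the real compiler to see the final
# command line without invoking gcc.
REALGCC=echo /tmp/xbin/i586-linux-gnu-gcc -O2 hello.c -o hello
# prints: -m32 -L/usr/lib32 -L/usr/lib/i386-linux-gnu -O2 hello.c -o hello
```

It remains a special case, of course; coinstallable toolchains would
make the shim unnecessary.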

  Multilibs do make much better sense for arches that are not in the
  archive (x32/mipsn32) and are possibly the easier way to support
  those, but even there, all the difficulty is in getting the right
  libc:$arch or libc-$arch-cross packages. Once you have those you can
  build per-arch toolchains or multilib toolchains.
 
 The most correct solution to me seems to be a libs only archive even
 for unsupported arches. This would be a huge win even for supported
 arches, because 'apt-get update' with N architectures enabled is
 really slow already.

Right. This is the logical outcome of using multiarch for library
dependencies. If you don't do this, you have to treat supported and
unsupported arches differently.

 Conceptually it would be split into 3 areas that I can think of:
 1. anything that ships architecture-specific binaries (not usable in
 cross situations in general, but there are still useful packages like
 wine32)
 2. anything that ships architecture-specific libraries but not
 binaries (useful for cross)
 3. anything that ships no architecture-specific binaries or libraries
 
 2 is Multi-Arch: same and 3 is Architecture: all, so we already have
 an easy way to identify these sets of packages. That said, I can't
 think of any
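The three-way split above can be computed mechanically from a package's
control fields. A sketch, with illustrative stanzas rather than a real
Packages excerpt:

```python
# Classify packages into the three areas from their control fields.
# Area 1: arch-specific binaries; area 2: arch-specific libraries but
# not binaries (Multi-Arch: same); area 3: nothing arch-specific
# (Architecture: all).
def classify(fields):
    if fields.get("Architecture") == "all":
        return 3
    if fields.get("Multi-Arch") == "same":
        return 2
    return 1

# Illustrative stanzas, not taken from a real Packages file:
packages = {
    "wine32": {"Architecture": "i386"},
    "libc6": {"Architecture": "amd64", "Multi-Arch": "same"},
    "python3-doc": {"Architecture": "all"},
}
for name, fields in packages.items():
    print(name, "-> area", classify(fields))
```

A libs-only archive per arch would then carry just area 2 (area 3 need
not be duplicated per architecture at all).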