Re: RFC: pam: dropping support for NIS/NIS+?

2022-04-21 Thread Gabor Gombas
Hi,

On Wed, Apr 20, 2022 at 04:26:02PM -0400, Boyuan Yang wrote:
 
> Before any discussion takes place, I would like to point out a previous
> attempt of Fedora trying to get rid of NIS/NIS+ back in 2021. Please check out
> the LWN article at https://lwn.net/Articles/874174/ , which would definitely
> be helpful for the condition in Debian.

That discussion seems to be about removing NIS/NIS+ support from the
entire distribution. This thread is about removing NIS support from PAM.
That's an important distinction, because in practice, NIS/NIS+ support
mostly means the NSS modules, and the tools/servers in the case of NIS.

Dropping NIS support from PAM would mean losing only the ability to
change the passwords of users coming from NIS. It would not affect user
lookups, and password changes would still be possible using yppasswd.
There does not even seem to be any NIS+ support in PAM - nothing seems
to include the NIS+ headers.

Personally, I think bundling the NIS password-changing capability into
pam_unix was a design mistake. It should always have been a distinct
module.

Regards,
Gabor



Re: RFC: pam: dropping support for NIS/NIS+?

2022-04-20 Thread Gabor Gombas
Hi,

On Wed, Apr 20, 2022 at 10:57:58AM -0700, Steve Langasek wrote:

> So I'd like to take a step back and challenge an underlying assumption by
> asking: do any of our users actually *need* this functionality?  The RPC
> functionality is only used for NIS and NIS+.  NIS is historically quite
> insecure, and I'm not aware of any efforts to improve its security (AFAIK
> the linkage of the crypto libraries doesn't fix the fundamentally insecure
> interfaces of NIS).  NIS+ is intended to be a more secure version of NIS,
> but to my knowledge there has never been a free implementation in the
> archive; this was a Sun-specific technology, which Sun deprecated two
> decades ago[1].
> 
> If we dropped support for NIS and NIS+ in the next Debian release, would
> anybody miss it?  Or has everyone moved on to LDAP / AD by now?

NIS still has uses in small, closed environments where setting up LDAP
would be overkill, or where you have to interface with some ancient
systems. NIS+ was a nice idea in its own time, and it allowed making NFS
more secure before RPCSEC_GSS took over. However, the strength of the
crypto used by NIS+ is probably not worth much today, so I'd be
surprised if anyone still used it on Linux.

Doing a quick check, PAM only seems to rely on the RPC libraries for
changing NIS passwords. Personally, I think losing that would not be a
big deal. While I can still see NIS being useful in some corners of the
world, I cannot imagine such an environment wanting to enforce password
expiration. And if you don't expire passwords, then you don't need PAM
to be able to change passwords - running yppasswd should be fine for
voluntary password changes.

Regards,
Gabor



Re: merged-/usr transition: debconf or not?

2021-11-20 Thread Gabor Gombas
Hi,

On Sat, Nov 20, 2021 at 09:22:27AM -0800, Russ Allbery wrote:

>   The drawback here is that dpkg is going to rewrite all paths like /lib64
>   to /usr/lib64, which would naively *also* apply to the base-files
>   package when it looks at that package, but that can't be allowed because
>   now we're back to the situation where dpkg's state database is
>   inconsistent with the file system and dpkg thinks that base-files
>   contains some nonsensical /usr/lib64 to /usr/lib64 symlink.

If you replace "rewrite /lib64 to /usr/lib64" with "rewrite /lib64/* to
/usr/lib64/*", then this can easily be avoided.

>   I think in this approach there would need to be some special-case code
>   directly in dpkg that recognizes the usrmerge symlinks [...]

Talking about "special casing" in dpkg is bothering me for a while. And
there is a relatively simple way to avoid any kind of special casing:
move the information out to a configuration file (which would _not_ be a
conffile) - and now the code has no special casing, just
configuration-driven logic.

This new configuration file could be shipped by base-files itself, to
ensure it does not get out of sync with the filesystem structure shipped
in base-files. Then base-files' postinst would more or less need to
include the current usrmerge package, plus whatever is needed to convert
dpkg's database.

This new configuration file would not be consumed by dpkg directly when
installing packages, but only when the database conversion is invoked,
and dpkg would keep an internal list of the rewriting rules that are
active. Doing so would allow enforcing that rewriting rules can only be
added, never removed, which would avoid potential issues if base-files
gets downgraded.
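
As a purely hypothetical illustration (neither the file name nor the
format exists in dpkg today), such a rule file might look like this:

# /etc/dpkg/path-rewrites.d/usrmerge - hypothetical, shipped by base-files
# Any path starting with the left-hand prefix gets rewritten to the
# right-hand prefix when the database conversion runs.
/bin/     /usr/bin/
/sbin/    /usr/sbin/
/lib/     /usr/lib/
/lib32/   /usr/lib32/
/lib64/   /usr/lib64/
/libx32/  /usr/libx32/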

Gabor



Re: What are desired semantics for /etc/shells?

2021-06-24 Thread Gabor Gombas
On Thu, Jun 24, 2021 at 06:12:05PM +0200, Felix C. Stegerman wrote:

> * Helmut Grohne  [2021-06-24 08:10]:
> > Felix C. Stegerman cautioned that the contents of /etc/shells depends on
> > whether the underlying system is /usr-merged.
> 
> It also means that on /usr-merged systems e.g. /bin/screen is not a
> "valid" shell, but /usr/bin/screen is (even though they are the same
> file), which may be fine in practice but seems counter-intuitive to
> me.

That will be a problem in environments with a central user database.
Since password entries will be the same on all hosts, the contents of
/etc/shells should also be the same - otherwise, users may not be able to
log in. So /etc/shells changing depending on usrmerge is not just
counter-intuitive; it has the potential to break things.

Gabor



Re: Unifying logging by default

2019-02-21 Thread Gabor Gombas
On Wed, Feb 20, 2019 at 02:44:37PM -0800, Josh Triplett wrote:

> Both syslog and journald support multi-line log messages; I'd *love* to
> see /var/log/aptitude and /var/log/apt/history.log end up in syslog or
> journald.

Both journald and syslog have problems with retention policies, or
rather the lack of them - and journald is especially bad. In a production
environment, I want to keep package upgrade history going back several
months or even years - but I want to purge cron job execution history
after a week. journald simply cannot do that; syslog can if you log
everything to distinct files - which gets us back to where we are now,
just with lots of extra complexity.
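
To illustrate the per-file granularity that the syslog approach relies
on, a hypothetical logrotate policy (the file names are just examples)
would look something like:

/var/log/dpkg.log {
    # keep two years of package history
    monthly
    rotate 24
}
/var/log/cron.log {
    # purge cron noise after a week
    daily
    rotate 7
}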

Gabor



Re: Can we kill net-tools, please?

2016-12-29 Thread Gabor Gombas
On Thu, Dec 29, 2016 at 09:01:51PM +0500, Andrey Rahmatullin wrote:

> > OK, you can remove the last half, but keep in mind there are plenty of
> > people who aren't using the exotic features provided by iproute2
> ... like two IPs on one iface.

Actually, that is only a problem if you re-use labels (including empty
ones). If you give distinct labels to all IPs, then ifconfig has no
trouble displaying/manipulating the addresses.
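
For example (a minimal sketch, with made-up addresses), the classic
alias syntax gives each address its own label, and iproute2 can set the
same labels explicitly:

# net-tools style: the second address gets the label eth0:1
ifconfig eth0 192.168.1.1 netmask 255.255.255.0
ifconfig eth0:1 192.168.1.2 netmask 255.255.255.0

# iproute2 equivalent, with the labels spelled out
ip addr add 192.168.1.1/24 dev eth0 label eth0
ip addr add 192.168.1.2/24 dev eth0 label eth0:1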

The issue is really at a lower level - ioctl() vs. netlink (*). Anything
which uses the SIOCGIF*/SIOCSIF* ioctl()s will have the same limitations
as ifconfig.  So if you _really_ care about having multiple IPs sharing
the same label, then replacing the ifconfig command with ip is just the
beginning - you'll need to find all applications and libraries which use
the ioctl() interface, and replace/re-write them to use netlink instead.
The problem is, using netlink is more complicated than using ioctl(), so
that may not be a simple task.

If you don't have any legacy applications which would use ioctl()s, then
you're lucky. Otherwise, assigning unique labels to IP addresses could
turn out to be simpler.

Gabor

(*) This is all about IPv4, IPv6 is different



Re: TMPDIR - Do we also need a drive backed TPMDIR ?

2016-07-21 Thread Gabor Gombas
On Thu, Jul 21, 2016 at 04:06:26PM +0300, Dmitry Bogatov wrote:

> For example, I, as happy owner of 8GB RAM, is perfectly fine with 3GB
> in /tmp, because I actually use less then 1Gb. On other hand, would I
> start 50 instances of Firefox, Gimp and other stuff, I would object
> putting 3GB in my /tmp, since it would make system swap.

I don't quite buy your argument. If you want to write 3GB to disk, then
you need to write 3GB to disk - it's the same amount of data, whether
you call it "filesystem" or "swap". But if you use a real filesystem to
store that data, then you will also have to deal with metadata, which
needs to be protected from crashes, so you want to flush caches from
time to time, which is expensive. tmpfs does not have such requirements, so it
should give you better performance. I guess someone should measure it...

Gabor



Re: Sourcing init-functions

2010-01-24 Thread Gabor Gombas
On Sat, Jan 16, 2010 at 02:36:48AM -0800, Steve Langasek wrote:

 Looks like nonsense to me.  I think you should file a bug.  For one thing,
 any init script that needs lsb-base (>= 3.0-6) *should depend on lsb-base
 (>= 3.0-6)*, not throw an error if it's not installed.

Dependencies do not help if the package gets uninstalled but the init
script is not purged.

Gabor




Re: Xen support on Squeeze

2010-01-03 Thread Gabor Gombas
On Sun, Jan 03, 2010 at 06:31:20PM +0200, Pasi Kärkkäinen wrote:

 So the change has happened, although it took painfully long to get the
 upstream Linux pv_ops framework in shape and all that.. and obviously
 the pv_ops dom0 patches still need to get merged upstream.

That was opposed quite strongly by the kernel folks last time it was
attempted. Were there any fundamental changes in the Xen dom0 patches
since then?

Gabor




Re: where is /etc/hosts supposed to come from?

2009-12-30 Thread Gabor Gombas
On Tue, Dec 29, 2009 at 10:31:25PM +0100, Vincent Lefevre wrote:

 Well, the node name is unique. From that, you'll obtain the FQDN with
 either the obsolete function gethostbyname or the new POSIX function
 getaddrinfo (by using the AI_CANONNAME flag). POSIX says:
 
   If the AI_CANONNAME flag is specified and the nodename argument is
   not null, the function shall attempt to determine the canonical name
   corresponding to nodename (for example, if nodename is an alias or
   shorthand notation for a complete name).

Read what you have written: _attempt_. It does not say that you can
expect it to succeed even in common situations.

 And here's what the getaddrinfo(3) man page says under Debian:
 
   If hints.ai_flags includes the AI_CANONNAME flag, then the ai_canonname
   field of the first of the addrinfo structures in the returned  list  is
   set to point to the official name of the host.
 
 Then you need to configure your machine according to the spec, i.e.
 you need a single FQDN / canonical name / official name of the host.

If getaddrinfo(AI_CANONNAME) fails, that is fully conformant with the
spec you have quoted.

  Example: there is a router box called gw which has about a dozen
  addresses that resolve to gw.domain for just as many domains. Some
  addresses even share the same NIC. Which FQDN should hostname -f
  display?
 
 This doesn't really matter. The FQDN may also be another name, i.e.
 the nodename may be something more meaningful than gw.

But it is not. This is a real world example. Reality does not match your
dream world.

  Why that one, and not some other?
 
 You should ask this question to those who configured such routers
 (but this would be more a practical matter, as you may have plenty
 of choices).

_I_ did configure it. I _know_ that none of the addresses is more
important than the other.

And you know, if you do not pretend silly things such as that a host
should have just a single FQDN, or that hostname -f should return
anything meaningful, then the above configuration works flawlessly. Only
when you start to pretend things that are simply not true do you start
having problems.

 A FQDN is not associated with an IP address, but with a host. You
 cannot call them FQDN, which already has a well-established meaning.

Now this is bullshit. FQDN is a term related to DNS. An FQDN resolves to
a set of resource records, which may be IPv4 or IPv6 addresses and a
couple of other things, but definitely _NOT_ hosts, as that term has no
meaning for the DNS.

If the FQDN resolves to multiple IP addresses, then the very same FQDN
can belong to multiple hosts simultaneously. Similarly, if a host has
multiple IP addresses, then multiple FQDNs may point to it. You can even
mix these:

- host1 has addresses 192.168.1.1 and 192.168.2.1
- host2 has addresses 192.168.1.2 and 192.168.2.2
- the DNS has the following records:

service1.domain.    IN  A   192.168.1.1
                    IN  A   192.168.1.2
service2.domain.    IN  A   192.168.2.1
                    IN  A   192.168.2.2

Now both hosts have two FQDNs, and both FQDNs point to two hosts; neither
host1 nor host2 is resolvable by name. And it all works just fine if you
do not make invalid assumptions about what FQDNs are and how they are
used.

 If I understand correctly, you do a reverse DNS lookup. Now, I'm
 wondering... Can a hostname obtained by reverse DNS lookup resolve
 to different IP addresses?

Of course it can. And it is common to refuse connections from such hosts
using the PARANOID option of TCP wrappers (which was first released more
than 18 years ago, so don't pretend it is some new thing).

 hostname -f just follows the POSIX notion of canonical name (a.k.a.
 FQDN). So, I doubt it will die.

Please quote the exact text from POSIX that says that

- there MUST be a canonical name,
- and that name MUST be an FQDN.

Gabor




Re: where is /etc/hosts supposed to come from?

2009-12-30 Thread Gabor Gombas
On Wed, Dec 30, 2009 at 02:36:12AM +0100, Vincent Lefevre wrote:

 BTW, Debian defines /etc/mailname as containing the FQDN. So,
 this notion is explicitly defined on Debian, and one should
 expect hostname -f to return the same name (according to its
 documentation).

What makes you think that /etc/mailname should have any resemblance to
the host name? Have you never administered a host that used a dedicated
IP address for sending/receiving mail, and did all other communication on
different addresses? Leaking the real host name in such a situation can
be considered a serious security issue...

Gabor




Re: where is /etc/hosts supposed to come from?

2009-12-30 Thread Gabor Gombas
On Wed, Dec 30, 2009 at 08:37:21AM +0100, Vincent Bernat wrote:

 If this is a real question, put:
 127.0.1.1 fqdn nodename
 
 This seems a  very acceptable way to give a FQDN  to your laptop without
 relying  on network.  hostname -f  and  programs using  a similar  inner
 working will be able to get the right result.

Adding meaningless configuration to work around programs that are broken
by design does not seem like a good solution.

Gabor




Re: where is /etc/hosts supposed to come from?

2009-12-29 Thread Gabor Gombas
On Tue, Dec 29, 2009 at 08:46:09AM +0100, Vincent Bernat wrote:

 And BTW, this is exactly what hostname -f does. It does not read 
 /etc/hostname.

Nothing should read /etc/hostname except /etc/init.d/hostname.sh during
boot. Everything else should use either uname(2) or gethostname(3)
(which in turn calls uname() internally).
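
A minimal sketch of the gethostname() route (plain C, no file or
resolver access involved):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char name[256];     /* HOST_NAME_MAX is 64 on Linux, so 256 is plenty */

    if (gethostname(name, sizeof(name)) != 0) {
        perror("gethostname");
        return 1;
    }
    name[sizeof(name) - 1] = '\0';      /* be safe against truncation */
    printf("%s\n", name);
    return 0;
}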

For example, on NFSROOT setups /etc/hostname usually does not exist
to prevent the host name received from DHCP being overwritten.

Gabor




Re: where is /etc/hosts supposed to come from?

2009-12-29 Thread Gabor Gombas
On Tue, Dec 29, 2009 at 02:52:44PM +0100, Vincent Lefevre wrote:

 When the machine is correctly configured (i.e. really has a FQDN),
 hostname -f is reliable.

No, it is not. hostname -f can return one value only, while a host may
have dozens or hundreds of valid FQDNs.

Example: there is a router box called gw which has about a dozen
addresses that resolve to gw.domain for just as many domains. Some
addresses even share the same NIC. Which FQDN should hostname -f
display? Why that one, and not some other?

I've submitted a patch for hostname (#562830) to add two new options:
one that displays all IP addresses of the host, while the other displays
all the FQDNs for those addresses. Neither relies on the value returned
by gethostname(), so the "the hostname must be an FQDN" misbelief,
together with any usage of hostname -f, can die a silent death.

Gabor




Re: where is /etc/hosts supposed to come from?

2009-12-28 Thread Gabor Gombas
On Sun, Dec 27, 2009 at 10:38:58PM -0800, John H. Robinson, IV wrote:

 This is one place where Solaris has gotten this right: /etc/nodename
 refers to the system itself, while each interface has its own (cf:
 /etc/hostname.hme0).

That is still no good for Linux, since a single interface can have many
addresses that all map to different names in the DNS.

Gabor




Re: where is /etc/hosts supposed to come from?

2009-12-28 Thread Gabor Gombas
On Mon, Dec 28, 2009 at 03:52:44AM +0100, Marco d'Itri wrote:

 Considering that any non-trivial server needs to send email out, having
 a working FQDN configured is not obsolete.

Anything mail related must use /etc/mailname if it needs something that
can be translated to an IP address.

 Your solution to #562780 is broken anyway, /etc/hostname can (and
 actually should) be a FQDN.

No. /etc/hostname has _nothing_ to do with networking. People have
historically been too lazy to do the proper interface/address
enumeration(*) and have instead pretended that /etc/hostname is something
resolvable, but that is simply not true. It may be made to work in some
really simple configurations (read: the host has just a single static IP
address), but it cannot work in any serious server configuration with
multiple interfaces and multiple addresses on every interface.

Anything that uses hostname -f today should really do the following:

L := empty list
loop I for all configured interfaces
    loop P for all supported network protocols
        loop A for all addresses on I of protocol P
            append getnameinfo(A) to L
remove duplicates from L
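
A minimal C sketch of that loop on Linux, using getifaddrs() and
getnameinfo() (IPv4 and IPv6 only, duplicate removal left out):

#include <ifaddrs.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    struct ifaddrs *ifa, *cur;
    char host[NI_MAXHOST];

    if (getifaddrs(&ifa) != 0) {
        perror("getifaddrs");
        return 1;
    }
    for (cur = ifa; cur; cur = cur->ifa_next) {
        if (!cur->ifa_addr)
            continue;
        int family = cur->ifa_addr->sa_family;
        if (family != AF_INET && family != AF_INET6)
            continue;
        socklen_t len = (family == AF_INET) ? sizeof(struct sockaddr_in)
                                            : sizeof(struct sockaddr_in6);
        /* reverse-resolve each configured address to a name */
        if (getnameinfo(cur->ifa_addr, len, host, sizeof(host),
                        NULL, 0, NI_NAMEREQD) == 0)
            printf("%s: %s\n", cur->ifa_name, host);
    }
    freeifaddrs(ifa);
    return 0;
}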

Gabor

(*) mostly because doing this enumeration in a portable way is a PITA




Re: where is /etc/hosts supposed to come from?

2009-12-28 Thread Gabor Gombas
On Mon, Dec 28, 2009 at 12:58:03AM -0800, John H. Robinson, IV wrote:

 Hence /etc/hostname.hme0:1 and the like. Already solved.

# ip addr add 192.168.1.1 dev eth0
# ip addr add 192.168.1.2 dev eth0
# cat /etc/hosts
192.168.1.1 www.foo.com
192.168.1.2 smtp.bar.org

Now what should /etc/hostname.eth0 contain? smtp or www?

You know, we have that kind of infrastructure already. It's called doing
a getnameinfo() on the address returned by getsockname() on the file
descriptor returned by accept() - if you do not explicitly bind() to a
specific address in the first place.
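
A minimal sketch of that call chain inside a server's accept loop (error
handling mostly omitted):

#include <netdb.h>
#include <stdio.h>
#include <sys/socket.h>

/* 'listener' is a socket already bound to INADDR_ANY and listening */
void handle_one(int listener)
{
    struct sockaddr_storage local;
    socklen_t len = sizeof(local);
    char host[NI_MAXHOST];

    int conn = accept(listener, NULL, NULL);
    if (conn < 0)
        return;
    /* which of our addresses did the client actually connect to? */
    if (getsockname(conn, (struct sockaddr *)&local, &len) == 0 &&
        getnameinfo((struct sockaddr *)&local, len, host, sizeof(host),
                    NULL, 0, 0) == 0)
        printf("client reached us as %s\n", host);
    /* ... serve the connection, then close(conn) ... */
}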

 http://www.spitzner.net/interfaces.html

Oh yeah, I remember those times. However it is a bit outdated regarding
what Linux can do today...

Gabor




Re: GDM, getty and VTs

2009-11-16 Thread Gabor Gombas
On Mon, Nov 16, 2009 at 11:07:52AM +0100, Josselin Mouette wrote:

 We remove entirely the getty respawning from /etc/inittab. Instead, a
 new daemon is started by a regular init script. This daemon does the
 following: 
   * Opens all /dev/tty1 to tty6 and display a d-i-like “press enter
 to activate this console” in them. 
   * Provide a very simple interface to reserve a VT, that can be
 queried by the display manager. 
   * Whenever you press enter on a VT, reserve it and start a getty
 process. 
   * When almost all ttys are allocated, start opening tty7+ and so
 on. 
   * If no display manager is started, always run a getty process in
 tty1.

If you always send a service name in a VT reservation request, then the
daemon can implement policy based on the service name, like always
assigning VTs starting from 7 for gdm. That would allow people to continue
using their existing VT allocation scheme (I'm running the X server on
vt24 for example), while the default config can be debated separately.

Gabor




Re: GDM, getty and VTs

2009-11-15 Thread Gabor Gombas
On Sat, Nov 14, 2009 at 03:45:11PM +0100, Josselin Mouette wrote:

   * For desktop machines, the display manager starts on tty7, which
 means there is a tty switch to display it. This causes a small
 latency and can also create some bugs when you’re using a
 graphical boot splash.

Is that still true with KMS and framebuffer console? IMHO if the boot
splash and gdm both use the same mode, then KMS can avoid re-programming
the card on a VT switch, thus the latency goes away.

Gabor




Re: GR proposal: the AGPL does not meet the DFSG (take 2)

2009-11-12 Thread Gabor Gombas
On Wed, Nov 11, 2009 at 01:07:24PM -0800, Rodrigo Gallardo wrote:
 On Wed, Nov 11, 2009 at 09:41:31PM +0100, Luk Claes wrote:
  -- The code is modified to interact with the user using a network
  protocol that does not allow to display a prominent offer.
  
  Any example of this?
 
 One could add an IMAP interface to a blog management system, allowing
 one to post from an email client.

RFC 3501, 7.1. Server Responses - Status Responses

ALERT

 The human-readable text contains a special alert that MUST be
 presented to the user in a fashion that calls the
 user's attention to the message.

That's how warnings like "your mailbox is near quota" work today.

Gabor




Re: segmentation fault with libcrypto.so (but not libcrypto.a)

2009-11-03 Thread Gabor Gombas
On Tue, Nov 03, 2009 at 12:29:46PM -0500, N N wrote:

 #include <openssl/hmac.h>
 #include <stdio.h>
 
 int main(int argc, char** argv) {
   unsigned char foo[10] = "boo";
   unsigned char* res = malloc(20);
   unsigned char* res2 = res;
   res = SHA1(foo, 3, 0);
   //res = SHA1(foo, 3, res);
 
   int i;
   for(i = 0; i < 20; i++)
     printf("%x ", res[i]);
   printf("\n");
   free(res2);
 }

Try to compile it. Read and understand the gcc warnings, fix the bugs,
and the segfault is gone.
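
For the record, a sketch of a cleaned-up version (assuming OpenSSL's
SHA1() as declared in <openssl/sha.h>; at minimum the original lacks
<stdlib.h>, so malloc() is implicitly declared, and the buffer it
allocates is never actually used):

#include <openssl/sha.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned char foo[10] = "boo";
    unsigned char *res = malloc(SHA_DIGEST_LENGTH);
    int i;

    if (res == NULL)
        return 1;
    SHA1(foo, 3, res);                 /* fill the caller-supplied buffer */
    for (i = 0; i < SHA_DIGEST_LENGTH; i++)
        printf("%02x ", res[i]);
    printf("\n");
    free(res);
    return 0;
}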

Gabor




Re: /var/www is depracated, which directory to use?

2009-11-02 Thread Gabor Gombas
On Sat, Oct 31, 2009 at 05:51:26PM +0200, Holger Levsen wrote:

 /var/lib/munin/www is wrong (FHS says: "Users must never need to modify
 files in /var/lib to configure a package's operation." since users might
 want to modify the css files)

IMHO that's no different from some user wanting to modify the sources
of any random package. If you really think that editing the css file is
an every-day operation, then it should be moved to /etc and marked as a
conffile. Otherwise, don't bother.

Gabor




Re: Switch on compiler hardening defaults

2009-11-01 Thread Gabor Gombas
On Sun, Nov 01, 2009 at 08:10:44PM +0100, Samuel Thibault wrote:

  In general you cannot rely on checking errno because it is not defined
  whether a successful operation clears it.
 
 But you can clear it by hand before calling them.

That's only true in some special cases; for example, SUSv3 says you
should manually clear errno before calling functions in math.h, and you
should check errno afterwards to see if the function was successful. But
otherwise, even a successful operation may have called other operations
internally that failed and thus modified errno.
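
A minimal example of that math.h pattern (log(0) raises a pole error,
which glibc reports via errno):

#include <errno.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double r;

    errno = 0;              /* clear by hand before the call */
    r = log(0.0);
    if (errno != 0)         /* meaningful only because we cleared it above */
        perror("log");
    printf("%f\n", r);
    return 0;
}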

Gabor




Re: Switch on compiler hardening defaults

2009-10-26 Thread Gabor Gombas
On Mon, Oct 26, 2009 at 11:14:25AM +0100, Bastian Blank wrote:
 On Sun, Oct 25, 2009 at 11:55:25AM -0700, Kees Cook wrote:
  I would like to propose enabling[1] the GCC hardening patches that Ubuntu
  uses[2].
 
 How do they work? Do they also change the free-standing compiler or only
 the hosted one? There is a lot of software, which (I would say) missuse
 the hosted compiler to build non-userspace-code, including the Linux
 kernel.

It seems the kernel will not be happy if the stack protector is switched
on unconditionally:

http://osdir.com/ml/linux-kernel/2009-10/msg07064.html

Gabor




Re: Is it time to remove sun-java6?

2009-10-09 Thread Gabor Gombas
On Thu, Oct 08, 2009 at 11:44:21AM -0400, Barry deFreese wrote:

 There has also been some similar discussions in Ubuntu with some
 users reporting that some web sites and packages don't work with
 openjdk but I have not seen a lot of concrete proof.

I have tried icedtea6-plugin a couple of times (last time about 2 weeks
ago) but always went back to sun-java6-plugin because icedtea simply
didn't work. I thought that openjdk is still an experimental thingy so I
did not bother reporting it. May I suggest that you

- start advertising that openjdk/icedtea is now supposed to be usable,
- convince people to install icedtea6-plugin instead of
  sun-java6-plugin,
- and when it is in fact working, _then_ start to talk about sun-java6
  removal.

Gabor




Re: Is it time to remove sun-java6?

2009-10-09 Thread Gabor Gombas
Hi,

On Fri, Oct 09, 2009 at 08:33:48PM +, Florian Weimer wrote:

 * Gabor Gombas:
 
  - start advertising that openjdk/icedtea is now supposed to be usable,
 
 Note that the non-applet stuff has been quite usable for a while.
 Even the openjdk-6 in lenny is not too bad (it's certainly possible to
 run various production loads on it).

I don't use Java much, so I have not followed openjdk development.

 I'm surprised by the interest in this type of technology.  For the
 security team, the web plugin will be somewhat nightmarish.  There are
 also software freedom issues surrounding it (due to the University of
 California's recent lawsuits).

I'm not a big fan of Java applets. However sometimes I have to access
web sites using applets to get some work done, and if those applets do
not work with icedtea6-plugin then I have no choice other than to
install sun-java6-plugin instead.

Missing security support is a good argument for removing the sun-java6-*
packages. Claiming that there are no user complaints about
icedtea6-plugin however is not, because the reason why people don't
complain may be that they still use sun-java6-plugin.

Gabor




Re: /var/www is depracated, which directory to use?

2009-09-28 Thread Gabor Gombas
On Mon, Sep 28, 2009 at 10:19:22PM +0800, Holger Levsen wrote:

  As I read it, putting stuff there is absolutely not fine. 
 
 Where do you read this? 
 
 http://www.pathname.com/fhs/pub/fhs-2.3.html#FTN.AEN1192 explicitly
 says: "This is particularly important as these areas will often contain
 both files initially installed by the distributor, and those added by the
 administrator." which to me very much sounds like the distributor (=Debian
 here) can place directories there...

The problem is that people already put a lot of things under /srv and
therefore it is really hard to make sure you do not overwrite anything.
What do you do e.g. if the name of the directory you want to create
already exists as a file?

IMHO the only safe way to populate /srv is inside the Debian Installer
(and even then there can be issues when the user selects to mount a
pre-existing file system over /srv).

Gabor




Re: Re: For the grub maintainers II

2009-09-08 Thread Gabor Gombas
On Tue, Sep 08, 2009 at 04:35:42PM +0200, Fabian Greffrath wrote:

 With the namespace issue fixed and a blacklist to avoid mounting
 partitions in a virtualization environment, would it make sense to
 make grub-pc recommend (or even depend on) os-prober again?

The problem is not just virtualization but also exporting the block
device over the network. E.g. vblade does not open the device with
O_EXCL, so it is possible to mount it locally while some remote client
also has it mounted, resulting in data corruption.

Gabor




Re: The future of the boot system in Debian

2009-09-07 Thread Gabor Gombas
Hi,

On Sun, Sep 06, 2009 at 06:21:33PM -0500, Manoj Srivastava wrote:

 Right. I did not copy the upstream. I also think that we have
  invested a lot of effort in Debian in order to make Squeeze SELinux
  compliant, and make it so that turning on SELinux is fairly easy. I
  have asked the release managers to consider making Squeeze have SELinux
  working out of the box a release goal, and so far there has been no
  denial; and I consider the patch consistent with the choices we have
  made as a project in the past.
 
 I am also saying that I would be willing to maintain the SELinux
  patches  in sysvinit/upstart, if it comes to that, and the burden of
  keeping the patch around would be fairly small (it is a small patch,
  and  fairly self contained).

The original announcement said that Fedora is already using upstart.
AFAIK Fedora is also committed to using SELinux. Do they use a similar
patch? Can they help convince upstream?

Gabor




Re: shared library in binary-package?

2009-09-07 Thread Gabor Gombas
On Mon, Sep 07, 2009 at 02:12:52PM +0800, Paul Wise wrote:

 Sounds like upstream should be persuaded to move the shared library
 code into the daemon since there is no reason for it to be in a
 library.

That won't work if upstream wants to support OSes other than Linux. My
memory is getting hazy but I had to use the same technique in the past
because not all OSes are capable of letting plugins resolve symbols from
the main binary, at least not without extra complications. So there are
two options:

- design a complicated plugin interface that contains function pointers
  for every function in the main program you want to use in a plugin, or

- implement the service as a library, and let the daemon and the plugins
  all link to this library.

The second method results in much cleaner code. The first method is
preferred if you expect to have 3rd party modules since (when done
right) it makes it easier to maintain module ABI stability.
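
A minimal sketch of what the first option tends to look like in C (all
names here are hypothetical):

/* Table of entry points the daemon hands to each plugin, instead of
 * letting the plugin resolve symbols from the main binary. */
struct daemon_api {
    int abi_version;        /* bumped whenever the table layout changes */
    void (*log_message)(int level, const char *fmt, ...);
    int  (*register_timer)(unsigned int seconds,
                           void (*callback)(void *), void *user_data);
    /* ...one pointer per daemon function a plugin is allowed to call... */
};

/* The single well-known entry point every plugin must export. */
int plugin_init(const struct daemon_api *api);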

Gabor




Re: shared library in binary-package?

2009-09-07 Thread Gabor Gombas
On Mon, Sep 07, 2009 at 03:19:19PM +0200, Stéphane Glondu wrote:

 Do you have an example of such OS that is likely to be supported by
 freesmartphone.org ?

I know nothing about freesmartphone.org so I have no idea what they want
to support.

Gabor




Re: DeviceKit and /usr

2009-09-07 Thread Gabor Gombas
On Mon, Sep 07, 2009 at 04:36:53PM +0200, Josselin Mouette wrote:

 Case 1:
 char *foo;
 if (asprintf(&foo, "%s equals %i", somestring, someint) < 0) {
 fprintf(stderr, "Failed to allocate memory");
 abort();
 }
 
 Case 2:
 char *foo = g_strdup_printf ("%s equals %i", somestring,
 someint);

That shows exactly why glib cannot be used for low-level stuff and
daemons: it aborts unconditionally if a memory allocation fails. It's
rather sad, otherwise I love to use glib.

Gabor




Re: DeviceKit and /usr

2009-09-05 Thread Gabor Gombas
On Fri, Sep 04, 2009 at 06:43:35PM +0200, Michael Biebl wrote:

 For your proposal to work, you'd need some kind of replay mechanism, which
 allows udev to replay the add/remove events when /usr is available the 
 extended
 ruleset is activated.

You mean udevadm trigger?

Gabor




Re: For the grub maintainers II

2009-09-05 Thread Gabor Gombas
On Sat, Sep 05, 2009 at 03:03:40PM +0200, Felix Zielcke wrote:

 Robert filed already after the upload of grub-legacy a RC bug so it
 doestn't migrate after the usual 10 days to testing.
 
 Note that we only Suggests: os-prober and not Recommend: it like Ubuntu
 does because of 491872
 So if anyone want to help that we Recommend it again, then help the
 os-prober maintainers to fix that bug.

I don't know the specifics, but wouldn't it be possible for os-prober to
create its own private mount name space (see clone(2), CLONE_NEWNS),
and do the probing inside that name space? That way the desktop
environments would not be able to intercept it.
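
A minimal sketch of that idea using unshare(2), which takes the same
CLONE_NEWNS flag (needs root, which os-prober has anyway):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mount.h>

static int enter_private_mount_ns(void)
{
    if (unshare(CLONE_NEWNS) != 0) {
        perror("unshare(CLONE_NEWNS)");
        return -1;
    }
    /* keep mounts made here from propagating back to the parent namespace */
    if (mount("none", "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
        perror("mount(MS_PRIVATE)");
        return -1;
    }
    return 0;
}

/* ...then mount the candidate partitions somewhere temporary and probe... */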

Gabor




Re: udev and /usr

2009-09-04 Thread Gabor Gombas
On Fri, Sep 04, 2009 at 11:41:41AM +0200, Marco d'Itri wrote:

 So you believe that the upstream maintainers are incompetent and
 released something which is unreliable by design?

Incompetent, no. Careless, yes. Just think about the udev-related
breakages in the past. And speaking about design, udev was originally
praised because it can do everything in user space. Now, the authors of
udev are proposing devtmpfs, because as it turned out, it's not _so_
rosy doing everything in userspace.

Gabor




Re: DeviceKit and /usr

2009-09-04 Thread Gabor Gombas
On Fri, Sep 04, 2009 at 04:36:52PM +0200, Michael Biebl wrote:

 I'd like to add here, that devicekit-disks will install udev helpers
 /lib/udev/devkit-disks-* which are called in
 /lib/udev/rules.d/95-devkit-disks.rules.
 
 devkit-disks-part-id and devkit-disks-probe-ata-smart both link against
 libraries which are (currently) in /usr/lib, i.e.
 devkit-disks-part-id links against libglib-2.0 (784K)
 devkit-disks-probe-ata-smart links against (48K)

IMHO this looks more and more like the udev rules have to be split into
at least two categories:

- a basic set that is used during boot and early system setup. Services
  in rcS.d are allowed to rely on these rules only, and these rules must
  not rely on anything outside the root file system.

- an extended set that is activated at the beginning of rc[2345].d and
  contains all the bells and whistles GNOME & co. want.

The separation can be done by depending on an environment variable that
only gets set using udevadm when transitioning to the default initlevel.
Or even better, let's export the current initlevel as an env. variable
in udevd, and let rules in the extended set test for that.
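
A rough sketch of what a rule in the extended set could look like (the
property name is made up, and the mechanism that sets it when entering
the default initlevel is deliberately left out):

# /lib/udev/rules.d/95-extended-example.rules (hypothetical)
ENV{EXTENDED_RULES_ACTIVE}!="1", GOTO="extended_end"

# bells-and-whistles rules that may need /usr go here, e.g.:
SUBSYSTEM=="block", RUN+="/usr/lib/devkit-disks/some-helper %k"

LABEL="extended_end"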

Gabor




Re: RFC: Moving gpg to /bin?

2009-09-03 Thread Gabor Gombas
On Thu, Sep 03, 2009 at 04:06:53PM +0200, Daniel Leidert wrote:

  I'm thinking about moving gpg to /bin to solve bugs #386980 and #477671.

That may be a workaround, but IMHO this is really a bug/limitation in
the way the current init scripts are set up.

There is already the _netdev flag in fstab to defer mounting some
filesystems until after the network has been initialized. There could be a
similar _cryptdev tag for encrypted devices. Then the boot process
would look like:

- do the equivalent of "mount -a -O no_netdev,no_cryptdev". /usr
  should be mounted by this step, since it should not contain sensitive
  information, therefore it should not be encrypted, or at least not
  using gpg.
- configure the network
- mount -a -O _netdev,no_cryptdev
- unlock encrypted devices (incl. encrypted iSCSI/AoE/etc. devices)
- mount -a -O _netdev,_cryptdev
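
For illustration, hypothetical fstab entries using the proposed tag
(device names are made up):

# encrypted device on top of a network block device - only mountable in
# the very last step
/dev/mapper/secret  /srv/secret  ext3  defaults,_netdev,_cryptdev  0  2
# plain network filesystem - mountable as soon as the network is up
fileserver:/export  /data        nfs   defaults,_netdev            0  0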

Now the question is when/how to run fsck, but it is already a problem if
you want to have a file system on an LVM device where one of the PVs is
an AoE device, as I've found out the other day...

Gabor




Re: udev and /usr

2009-09-01 Thread Gabor Gombas
On Tue, Sep 01, 2009 at 11:18:47AM +0200, Giacomo A. Catenazzi wrote:
 Josselin Mouette wrote:
 Le mardi 01 septembre 2009 à 10:32 +0200, Petter Reinholdtsen a
 écrit :
 In Debian, /usr/ is allowed to be on NFS.
 
 So is /.
 
 I was thinking the same, but #441291 (root over nfs) is still open.

Reading #441291, it's not about root over NFS but rather installing
over NFS. Root over NFS works fine here. I had to hack around a bit
because / not only comes from NFS but it is also R/O, but that's a
different can of worms to open.

Gabor




Re: udev and /usr

2009-09-01 Thread Gabor Gombas
On Tue, Sep 01, 2009 at 01:45:23PM +0200, Marco d'Itri wrote:

  How will usb-id and pci-id behave, if the ids files are not accessible?
 Print an error on stderr and exit with rc=1.
 The more interesting question is which packages care about this
 information and how they will behave when it is missing.

$ ls -l /usr/share/misc/usb.ids
lrwxrwxrwx 1 root root 25 2009-08-24 12:01 /usr/share/misc/usb.ids ->
/var/lib/usbutils/usb.ids

So the title is not correct, it's not just /usr, but /var also needs to
be mounted.

How about re-running the rules after all the filesystems have been
mounted?

Gabor




Re: Re: Introduction to multiarch: What maintainers must do

2009-07-30 Thread Gabor Gombas
On Thu, Jul 30, 2009 at 11:04:46AM +0200, Samuel Thibault wrote:

  Yes, but however pkg-config won't yet find things in
  /usr/lib/x86_64-linux-gnu/pkgconfig, so take care of putting .pc files
  in /usr/lib/pkgconfig.

$ pkg-config --list-all --debug
[...]
Cannot open directory '/usr/local/lib/pkgconfig' in package search path: No 
such file or directory
Cannot open directory '/usr/local/lib/pkgconfig/x86_64-linux-gnu' in package 
search path: No such file or directory
Scanning directory '/usr/lib/pkgconfig'
[...]
Cannot open directory '/usr/lib/pkgconfig/x86_64-linux-gnu' in package search 
path: No such file or directory
[...]

So pkg-config already has some support for multi-arch, it just uses
different directories than the current proposal. That can be fixed with
either a single symlink, or by modifying the --with-pc-path=... argument
in pkgconfig's debian/rules. Neither will allow building stuff for a
different architecture, but as you note below that's not a requirement
for now.
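
As a sketch of the single-symlink variant (assuming the proposed
/usr/lib/<triplet>/pkgconfig location; the directory pkg-config already
scans is visible in the debug output above):

# let pkg-config's existing /usr/lib/pkgconfig/x86_64-linux-gnu search
# entry find .pc files installed in the proposed multiarch location
ln -s ../x86_64-linux-gnu/pkgconfig /usr/lib/pkgconfig/x86_64-linux-gnu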

Later pkg-config should be extended to have an --arch command-line
option (or env. variable) that is substituted into the default search
path at run time rather than at build time, but that can wait.

  Please don't as those files can be different on different  
  architectures.
 
 Yes, but for now multi-arch support for -dev packages won't be done so
 it's not a problem.

But if it can be fixed with a simple change to pkg-config, then why not
move the .pc files now? Esp. if that means less work for the library
package maintainers.

Gabor




Re: Switching /bin/sh to dash (part two)

2009-07-25 Thread Gabor Gombas
On Fri, Jul 24, 2009 at 06:17:30PM +0200, Goswin von Brederlow wrote:

  Or we need to set explicitly use #!/bin/dash in umountall?
  [not so flexible solution, but IMHO enough good]
 
 If it needs dash then yes, set #!/bin/dash and Pre-Depend on dash.
 
 But in this case that really needs to be fixed to work with any posix
 shell.

It's not the usual "umountall uses features not in POSIX" case, but
rather "bash does lots of unnecessary things that in turn break
umountall". umountall wants a shell that does not do any NSS lookups
internally when it is not asked to. So it's bash that needs to be fixed
to do a lot less when it is invoked as /bin/sh.

Gabor




Re: Switching /bin/sh to dash (part two)

2009-07-25 Thread Gabor Gombas
On Fri, Jul 24, 2009 at 06:39:53PM +0200, Giacomo Catenazzi wrote:

 BTW it seems that all previous tries to remove the bug in bash failed.

Actually it's not a bug in bash at all. The bug is the combined effect
of how bash behaves and how the NSS functionality is implemented inside
glibc.

AFAIR fixing bash would make it incompatible with existing user
scripts. So it would have (almost) all of the drawbacks of the
bash-dash transition with none of the benefits.

Gabor




Re: Switching /bin/sh to dash without dash essential

2009-07-25 Thread Gabor Gombas
On Fri, Jul 24, 2009 at 06:31:59PM +0200, Goswin von Brederlow wrote:

 Why would you think the one transition would be helpfull in the second
 or that there would be less breakage in the second if we do the first
 one first? I would rather say you are doubling the problems and
 breakages as the two are completly different mechanisms.

Making the shell selectable means more code than hardcoding a single
string. More code means more bugs. Since a bug in this case can result
in an unbootable system, doing things one step at a time so you only
have to look for bugs in one component at a time makes perfect sense
IMHO.

Gabor




Re: Switching /bin/sh to dash without dash essential

2009-07-24 Thread Gabor Gombas
Hi,

On Fri, Jul 24, 2009 at 09:31:04AM -0500, Manoj Srivastava wrote:

 I think you are not going far enough. Why should I have dash on
  the system when my default shell is posh? or (gasp) zsh?

posh (or strict POSIX in general) is simply not practical, and zsh is
even more bloated than bash. But this was discussed to death...
 
 I think one of the objections here is that we ought to have a
  more generic approach that allows shells other than dash/bash to be the
  default shell, and that the vendor not make the choice.

And a possible response to such an objection is that the bash-dash
transition is difficult enough. Do this specific transition first, and
revisit the generalization only after the lessons from the bash-dash
transition have been learned.

Gabor




Re: Bug#529624: netbase: networking should not be stopped on reboot or halt

2009-05-22 Thread Gabor Gombas
On Wed, May 20, 2009 at 05:56:16PM +0200, Marco d'Itri wrote:

 Does anybody see any downsides to this?

If there are any pre-down/post-down commands in /etc/network/interfaces,
then this can cause surprises. The same holds for custom scripts in
/etc/network/{if-down.d,if-post-down.d}.

The justification in the bug report is completely bogus. If the clients
did not shut down by the time the UPS master goes down then you have
bigger problems. Making the UPS daemon available for half a second more
does not make sense.

Gabor




Re: deprecating /usr as a standalone filesystem?

2009-05-14 Thread Gabor Gombas
On Wed, May 13, 2009 at 12:38:45PM -0500, Manoj Srivastava wrote:

 it is the principle of the thing. /root is the home directory
  for the  root user.  Home directories are mutable, programs may store
  configuration files there, as may the user, by themselves. The root
  user should not be more constrained than other users on the machine are;
 making working as root irritating, less customizable, and harder does
  not help the end user admin any.
 
 Ideally, we should map /root somewhere persistent, writable, and
  also a location available in single user mode; and there are few
  pleasing solutions that meet that criteria; though less than perfect
  solutions exist.

I fail to see how root is different from any other random user in this
regard. If you want / to be read-only, then you should ensure that /home
points to something writable. The same thing holds for /root. You can
make /home and /root separate filesystems, or bind mounts, or symlinks
pointing to a writable location. If you can handle /home today, then you
can also handle /root exactly the same way.
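
For example, a hedged fstab sketch keeping / read-only while giving root
a writable home (device names are made up):

/dev/vg0/home   /home   ext3   defaults   0   2
/home/root      /root   none   bind       0   0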

So the only thing to do is ensure that whatever code/documentation talks
about /home should also talk about/handle /root as well. In fact, if /
is supposed to be read-only, then I see absolutely no reason to use
/root instead of /home/root. Maybe we need an option in the installer to
set root's HOME directory to /home/root instead of /root?

Gabor




Re: deprecating /usr as a standalone filesystem?

2009-05-14 Thread Gabor Gombas
On Thu, May 14, 2009 at 03:53:23PM +0200, Giacomo A. Catenazzi wrote:

 No, /root cannot be a separate filesystem.
 /root is part of very basic system, and it is required for super user
 when he/she is restoring the systems or doing some kind of administration
 (e.g. moving filesystems, etc.).

Obviously not. If fscking / fails then / _will_ be read-only and you
_must_ be able to fix it without being able to write under /root, so any
system restoration task must work without /root being writeable.

If you want to write to /root, then _make_ it writable! That's why you
are the system administrator after all. If you want / to be read-only,
then move /root to some other filesystem. If you want /root to be on the
same filesystem as /, then do not make / read-only. Really, this is a
"Doctor, it hurts if I shoot myself in the foot" - "Don't do it, then"
kind of situation...

Gabor




Re: deprecating /usr as a standalone filesystem?

2009-05-14 Thread Gabor Gombas
On Thu, May 14, 2009 at 04:21:53PM +0200, Giacomo A. Catenazzi wrote:

 I totally agree that / (thus /root) could be read-only.

 I pointed out to you that /root is required to be in the same
 filesystem as / (FHS) and I gave you the rationale.

What the FHS says is a little different:

/root : Home directory for the root user (optional)

Purpose

The root account's home directory may be determined by developer
or local preference, but this is the recommended default
location.

So the presence of /root is not required and root's home directory can
be set to /home/root by the installer if a read-only / is wanted.

Gabor




Re: deprecating /usr as a standalone filesystem?

2009-05-14 Thread Gabor Gombas
On Fri, May 15, 2009 at 07:12:59AM +0200, Goswin von Brederlow wrote:

 There is absolutely no reason why you can not mount a filesystem over
 /root later in the boot process. I agree that /root should/must exist
 at all time so one can login when for example fsck fails.

No, you must be able to log in even if /root has ended up in
/lost+found. Anything that relies on the existence of /root is bogus
and should be fixed. Note 17 in the FHS (which Giacomo already quoted)
specifies how the system should handle the case when root's home cannot
be located - it must just work.

Gabor




Re: Environment variables, debian/rules and dpkg-buildpackage

2009-05-11 Thread Gabor Gombas
On Mon, May 11, 2009 at 03:43:41PM +0200, Giacomo A. Catenazzi wrote:

 You are a very special case: a developer since very long time, with a
 enormous knowledge of debian policy (and dpkg internal).
 But I really think that most people outside DD use dpkg-buildpackage
 because it is the easier way (without need to remember a lot of
 details).  I think also that most of DDs use dpkg-buildpackage.

My experience is quite the contrary: people who are not deeply
involved with Debian tend to run debian/rules directly, and the only
Debian-specific command they know is "dpkg -i --force-depends"...

Gabor




Re: deprecating /usr as a standalone filesystem?

2009-05-06 Thread Gabor Gombas
On Wed, May 06, 2009 at 12:30:14AM +0200, Stefano Zacchiroli wrote:

 Of course the problem is that if you update on the NFS server, then
 related /etc and /var files [1] will not get updated on the NFS client
 machines and you need to propagate changes there.

One thing to remember is that when you export /usr (or /) over NFS,
you usually do not expect to install new software often (maybe once or
twice a year), and security updates rarely bring big changes under /etc
or /var.

/etc can be managed with a couple of scripts; if you have a non-trivial
amount of machines you already have the scripts to populate and
customize it for a new machine. After an update, you just re-run that
script for all the clients and you're done.

/var is not an issue either. You can mount it read-only just like /usr
and then you can mount some tmpfs instances over the locations where
write access is really needed. /etc/fstab fragment:

tmpfs   /tmptmpfs   size=100m,mode=1777 0   0
/tmp/var/tmpbindbind0   0
tmpfs   /var/logtmpfs   size=10m0   0
tmpfs   /var/lib/gdmtmpfs   size=10m0   0
tmpfs   /var/lib/xkbtmpfs   size=10m0   0
tmpfs   /var/lib/nfstmpfs   size=10m0   0
tmpfs   /var/cache/hald tmpfs   size=10m0   0
tmpfs   /media  tmpfs   size=128k   0   0

You of course need a couple of mkdir/chown commands in an init script to
create some required subdirectories.
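
Something along these lines (a hedged sketch - the exact directories and
owners depend on what is actually installed on the clients):

# run from an init script once the tmpfs mounts are in place
for d in /var/log/news /var/lib/gdm /var/lib/xkb /var/lib/nfs/sm; do
    mkdir -p "$d"
done
chown gdm:gdm /var/lib/gdm 2>/dev/null || true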

If you need persistence, then you mount a writable FS somewhere else,
and you do something like

mount --bind /home/terem/boinc-client/$HOSTNAME /var/lib/boinc-client

(that's from a running cluster setup).

If I take a look of what is actually under /var on that cluster, then I
get:

nfs-server# du -s .
147300  .
nfs-server# du -s cache lib/apt lib/aptitude lib/dpkg log
[...]
135616  total

So even if you want a local /var on every machine, you can ignore over
92% of the data when you synchronize with say rsync (you can actually
ignore even more, but then the above du -s line would have been too
long).
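
For completeness, one way such a sync could look when run on a client
(sketch only; "nfs-server" is a placeholder hostname, and the exclude list
mirrors the du line above):

rsync -a --delete \
    --exclude=/cache --exclude=/lib/apt --exclude=/lib/aptitude \
    --exclude=/lib/dpkg --exclude=/log \
    nfs-server:/var/ /var/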

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: deprecating /usr as a standalone filesystem?

2009-05-06 Thread Gabor Gombas
On Wed, May 06, 2009 at 03:31:23PM +0200, Stefano Zacchiroli wrote:

 Anyhow, *you* don't understand the problem and you are probably the
 only one thinking I'm selling vapor. From other people's replies I
 conclude that the problem is quite clear and my vapor was so concrete
 that others hinted at technical solutions.  But let me spell the
 problem out for you, as you are raising the tone of the discussion
 with exclamation marks (which was not my intention).
 
 The problem is that our package manager (dpkg) assumes it is in charge
 of files which reside on different top-level FHS directories: /usr,
 /var, /boot, /bin, /sbin, /lib, /lib64, ...
 
 In a scenario where /usr is remotely exported for NFS mounting, if you
 use dpkg on the exporting machine, client machines will get out of
 sync. Some files need to be copied over statically and, more
 interestingly, maintainer scripts will need to be re-run on client
 machines to deliver their side effects to all machines. Also the
 status of the dpkg database need to be synced with clients.
 
 
 My argument is mainly that we should not ask our user to do the above
 sync by hand, still claiming we support it.

But _NOBODY_ said to support the sync part in Debian. Just leave things
as-is, i.e. keep it possible to have /usr as a separate filesystem. We
can do the rest, thank you very much. The fact that clients can get out
of sync is perfectly understood and handled when needed. There is
nothing new here; mounting /usr over NFS on Solaris boxes a decade ago
had exactly the same basic issues.

Don't ask users to do the sync by hand. Just _let_ them do it if they
wish.

Mounting /usr over NFS is an old technique. I wouldn't recommend it
to anyone today but it exists and deliberately breaking it just because
you do not like it is stupid.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: AVR32 port - config.{sub,guess} bug filing

2009-04-24 Thread Gabor Gombas
On Fri, Apr 24, 2009 at 01:37:01PM +0800, Paul Wise wrote:

 cur_v=`echo "$timestamp" | sed 's/-//g'`
 
 for path in \
   "$HOME/.config/automake" \
   /usr/local/share/automake \
   /usr/local/share/misc \
   /usr/share/automake \
   /usr/share/misc \
 ; do
 
 if test -x "$path/config.sub" ; then
   v=`"$path/config.sub" --time-stamp | sed 's/-//g'`
   if test "$v" -gt "$cur_v" ; then
 "$path/config.sub" "$@"
 exit $?
   fi
 fi
 done

Hmm, wouldn't it be better to look for the newest version instead of the
first one that's newer? The above would work on a buildd where there is
nothing in $HOME or under /usr/local, but could break on users' machines
if they have a script in say /usr/local/share/automake that's newer than
what the package has, but still older than what would be needed to build
correctly.
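
A rough sketch of that "pick the newest" variant, reusing $cur_v and the
path list from the snippet above (untested, just to show the idea):

best=$cur_v
best_path=
for path in \
  "$HOME/.config/automake" \
  /usr/local/share/automake \
  /usr/local/share/misc \
  /usr/share/automake \
  /usr/share/misc \
; do
  if test -x "$path/config.sub" ; then
    v=`"$path/config.sub" --time-stamp | sed 's/-//g'`
    if test "$v" -gt "$best" ; then
      best=$v
      best_path=$path
    fi
  fi
done
if test -n "$best_path" ; then
  exec "$best_path/config.sub" "$@"
fi
# otherwise fall through to the copy shipped with the package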

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: RFA: acpi-support -- glue layer for translating laptop buttons, plus legacy suspend support

2009-04-16 Thread Gabor Gombas
On Wed, Apr 15, 2009 at 10:25:36AM +0200, Bjørn Mork wrote:

 I still haven't got a clue how to really fix this, but have resorted to
 this for now:
 
 <?xml version="1.0" encoding="UTF-8"?>
 <deviceinfo version="0.2">
   <device>
     <match key="info.capabilities" contains="input.keyboard">
       <merge key="input.xkb.model" type="string">pc105</merge>
       <merge key="input.xkb.layout" type="string">no</merge>
     </match>
   </device>
 </deviceinfo>

...except that with the latest hal/X.org/whatever it also stopped working.
The latest X.org pulled in console-setup, and now the settings under
/etc/hal/fdi/policy get ignored. What a mess.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: cgroup mount point

2009-02-03 Thread Gabor Gombas
On Tue, Feb 03, 2009 at 10:24:16AM +, Daniel P. Berrange wrote:

 Putting new mount points in / is not really acceptable, so that rules
 out the first two. /opt is just totally wrong, since that is intended
 for add on software packages. /dev/ feels a little odd, since it is
 not really device nodes, but perhaps that doesn't matter. So my pref
 would be something in /dev/cgroups or /sys/cgroups

How about /var/lib/cgroup? There's already a precedent
(/var/lib/nfs/rpc_pipefs) for mounting virtual file systems under
/var/lib.
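
For example, mounting there would just be (assuming a kernel with the
cgroup filesystem enabled; the controller choice is illustrative):

mkdir -p /var/lib/cgroup
mount -t cgroup -o cpu cgroup /var/lib/cgroup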

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: Override changes standard - optional

2009-01-07 Thread Gabor Gombas
Hi!

On Tue, Jan 06, 2009 at 10:38:26AM +, Tzafrir Cohen wrote:

  I disagree. I use strace a lot and it is very handy to verify that a
  service really uses the config/data files it is supposed to use or does
  it react to a network packet or not even if it does not log anything
  etc. OTOH gdb is only useful if you really know the internals of the
  thing and you at least have debugging symbols installed (which is still
  not available for the majority of packages). If you only have a binary
  compiled with -O2 and no debugging symbols, and you do not have deep
  understanding of the internal working of the program, then gdb is
  practically useless.
 
 I don't think gdb is considered, due to its size.

I didn't say gdb should be standard; I wanted to say that "strace should
not be standard because gdb isn't" is completely bogus reasoning.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: Override changes standard - optional

2009-01-06 Thread Gabor Gombas
On Fri, Jan 02, 2009 at 06:05:35PM +0100, Petter Reinholdtsen wrote:

 It reduces the load on the LDAP server when using LDAP for PAM/NSS,
 and has proven to be required to avoid overloading the server and
 prompt response on the clients.  The new nss-ldapd package help, but
 caching LDAP results is needed too.

One who knows how to configure LDAP also knows how to install nscd. nscd
problems (like crashing or eating hundreds of megabytes of memory) are
way too frequent and the symptoms are hard to understand for a novice.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: Override changes standard - optional

2009-01-06 Thread Gabor Gombas
On Tue, Dec 30, 2008 at 02:00:51PM -0600, Steve Langasek wrote:

 I think we ought to even consider adding gdb in addition to strace, size
 allowing, since these two tools are rather complementary in their use; but
 certainly, I'd prefer having strace over not having either.

I disagree. I use strace a lot and it is very handy to verify that a
service really uses the config/data files it is supposed to use, or
whether it reacts to a network packet at all, even if it does not log
anything. OTOH gdb is only useful if you really know the internals of
the thing and you at least have debugging symbols installed (which is
still not available for the majority of packages). If you only have a
binary compiled with -O2 and no debugging symbols, and you do not have a
deep understanding of the internal workings of the program, then gdb is
practically useless.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: [Pkg-xen-devel] Xen status in lenny?

2008-09-18 Thread Gabor Gombas
On Wed, Sep 17, 2008 at 11:03:13AM -0300, Gustavo Noronha Silva wrote:

 I upgraded a dom0 I maintain to Lenny, the kernel got upgraded and I had
 of course a boot failure when trying to boot Xen 3.2 and linux 2.6.26.
 I'm not really sure about the reason since it is a remotely hosted box,
 but I had to go back to 2.6.18.

Something is fishy with your boot configuration. I've multiple dom0s
running lenny; kernel 2.6.26 is installed of course, but grub always
puts the Xen-enabled 2.6.18 before 2.6.26, so the new kernel won't be
booted unless you select it manually.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Intel Atom Processor

2008-07-22 Thread Gabor Gombas
On Tue, Jul 22, 2008 at 03:03:27PM +0200, Goswin von Brederlow wrote:

 I might be wrong but I thought the atom CPU would add 64bit
 support. As such the Debian amd64 port should work as well.

According to Wikipedia and the linked Intel sheets, only the desktop
version has 64-bit support.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: RFC: Removal of user/groups

2008-07-14 Thread Gabor Gombas
On Sat, Jul 12, 2008 at 04:56:03PM +0200, Carl Fürstenberg wrote:

 I was thinking of the reusability problem, and came up with the following:
 When an user/group is removed, it's placed in quarantine. That ID
 isn't used unless the same user/group is recreated, or that all other
 possible ID:s is exhausted. For most of the time, that would prevent
 an ID to be used for an other user/group.

Not removing the user/group gives you most of the above quarantine
effect, except automatic re-use. OTOH automatic re-use would present
_exactly_ the same issues as normal account removal. So the quarantine
you describe would only help to mask the symptoms but does not provide
any real solution.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Considerations for lilo removal

2008-06-17 Thread Gabor Gombas
On Mon, Jun 16, 2008 at 11:55:49PM +0300, Riku Voipio wrote:

 Having one well working tool is better than having multiple mediocre,
 buggy tools to choose from.

The problem is that we do not have one well working tool. Grub certainly
does not qualify as such and there is no hope it ever will. So until
grub2 reaches perfection or somebody writes a brand new bootloader,
we're better with two imperfect bootloaders that have a different set of
bugs.

 Grub and Lilo do a simple well defined task.

Simple? Well defined?!? Gimme' the crack you're smoking...

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: How to build XEN dom0 and domU kernels based on 2.6.25?

2008-05-07 Thread Gabor Gombas
On Tue, May 06, 2008 at 11:37:44PM +0200, [EMAIL PROTECTED] wrote:

 Can someone please help me?

See http://fedoraproject.org/wiki/Features/XenPvops and
http://fedoraproject.org/wiki/Features/XenPvopsDom0 .

 Do I need to install some other kernel-patch?

No, you need to wait, wait, wait... Or use some other virtualization
product.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: RFH: Chrony goes into endless loop on x86_64

2008-04-23 Thread Gabor Gombas
On Tue, Apr 22, 2008 at 01:50:43PM -0500, John Hasler wrote:
 See bug #474294.
 
 If you have an x86_64 system you can help by
 
  a) installing chrony-1.21 from Stable or Unstable and confirming the bug
 
 or
 
  b) installing chrony-1.23 from Experimental and determining if the new
 upstream release has fixed it.

That will be difficult since sometimes the bug does not hit for weeks
and then suddenly chrony starts to loop all the time. So I'd say go
ahead and upload the new version to unstable, and if there are no new
occurrences of the bug for 1-2 months then you can close it.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Should -dev packages providing .pc files depend on pkg-config?

2008-04-17 Thread Gabor Gombas
On Thu, Apr 17, 2008 at 07:58:44AM +0200, Tollef Fog Heen wrote:

 Yes, in the simple case, you can just do this.  In the more complex
 case (which upstream might want to cater for), you need to use
 pkg-config.

No. Even in this case, I _don't_ need to use pkg-config. I just need to
be able to parse the .pc file, and I can do that with my own tools
without calling pkg-config at all. It's the information inside the .pc
file that matters, not the choice of the tool that parses the .pc
file.
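
As a very rough sketch of what "my own tools" can mean here (it ignores
${variable} expansion and Requires:, so it only covers simple .pc files;
foo.pc is a hypothetical example):

pc=/usr/lib/pkgconfig/foo.pc
FOO_CFLAGS=`sed -n 's/^Cflags: *//p' "$pc"`
FOO_LIBS=`sed -n 's/^Libs: *//p' "$pc"`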

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Should -dev packages providing .pc files depend on pkg-config?

2008-04-17 Thread Gabor Gombas
On Thu, Apr 17, 2008 at 12:02:20PM +0200, Bas Wijnen wrote:

 How is this different with _any_ dependency on the system?  Do you
 suggest that iceweasel should drop its libgtk dependency, because users
 might want to use their own compiled version of it?

iceweasel _uses_ libgtk. A -dev package that ships a .pc file does _not_
use pkg-config - it just provides a data file that pkg-config (or some
other similar tool) can use. If you want an analogy, think about -data
packages:

$ apt-cache show console-data | grep ^Depends:
Depends: debconf | debconf-2.0, debconf (>= 0.5) | debconf-2.0

Notice it does not depend on console-tools or kbd, even though you need
one of them to make use of the data. The same thing holds for .pc files
- they are just data files that some other tool can process.

Gabor

[Of course, as has been already mentioned in this thread, if the -dev
package also ships a script that calls pkg-config, then it should depend
on pkg-config, but for a different reason.]

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Should -dev packages providing .pc files depend on pkg-config?

2008-04-16 Thread Gabor Gombas
On Wed, Apr 16, 2008 at 11:23:51AM +0100, Neil Williams wrote:

 What about these clauses as a Policy amendment?
 
 1. If a library *only supports the retrieval of FOO_LIBS and / or
 FOO_CFLAGS by the use of pkg-config*, pkg-config becomes part of the API
 of that library and the -dev package of that library must depend on
 pkg-config. The mere presence of a .pc file in the -dev package of the
 library does *not* mean that only pkg-config is supported. e.g. where a
 library requires the use of an m4 macro that involves calling
 pkg-config, this would require the -dev package to depend on pkg-config
 but if a library provides a .pc file but also supports alternative
 method(s), the -dev package does not need to depend on pkg-config.
 
 2. If a source package uses libraries that package a .pc but where all
 the libraries also support other methods of obtaining the relevant data,
 and the source package requires the use of pkg-config despite those
 other methods being available, then that choice by the source package
 upstream must result in a Build-Depends on pkg-config in the source
 package.
 
 Is that suitable as a Policy clause? (probably needs a few tweaks for
 clarity and examples in clause 1).

Wow, that's awfully complicated. This is much more straightforward:

If a package wants to call /usr/bin/foo during build and fails
to build properly if /usr/bin/foo is not present, then the
package MUST Build-Depend: on some other package providing
/usr/bin/foo.

And by this definition, it is the package _invoking_ pkg-config that
should Build-Depend on it, not the package that happens to ship a .pc
file.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -



Re: Should -dev packages providing .pc files depend on pkg-config?

2008-04-16 Thread Gabor Gombas
On Wed, Apr 16, 2008 at 07:15:53PM +0200, Goswin von Brederlow wrote:

 You are missing the point.
 
 What if the library says You must call /usr/bin/foo during build?

But the library can't say "foo must come from a Debian package". What if
I have my local replacement? Why should I be forced to install a package
that is now useless for me (and installing it would only cause confusion,
as there would be two different tools with the same name present in
$PATH)?

 The libarry does not use foo, only the user, so no depends?

Of course no dependency is needed. If the library is not used by anyone
(think about an NFS server that just exports the library), then a
missing foo would not hurt anyone. And if someone _does_ use the
library, then that user must depend on foo, and everything is fine.

 Or idoes forcing users to use foo make foo part of the API and hence
 the library should depend on it?

You can't _force_ anyone to use foo. At most you can say "I'm not going
to give you support if I somehow find out you didn't use foo", but that's
it. I should be able to write my own tools and use the library in
whatever way I want - or the library must go into non-free.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: exim, local resolver, host name lookups and IPv6

2008-04-14 Thread Gabor Gombas
On Sat, Apr 12, 2008 at 10:41:54AM +0200, Marc Haber wrote:

 Where can I obtain the FQDN of the system instead?

_Which_ FQDN? A machine may have several IP addresses, in the DNS there
may be multiple A records for every IP address (and the reverse PTR
records may be completely meaningless placeholders).

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: exim, local resolver, host name lookups and IPv6

2008-04-11 Thread Gabor Gombas
On Fri, Apr 11, 2008 at 04:50:17PM +0200, Mike Hommey wrote:

 Host name can be returned by gethostname(2), for example, and you can
 add the result from getdomainname(2) for an FQDN.

Those syscalls have _nothing_ to do with DNS, so they cannot be used to
form FQDNs. gethostname() is sadly often misused and that usually causes
trouble on multi-homed machines. getdomainname() returns the NIS domain,
which does not necessarily match _anything_ in DNS.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Should -dev packages providing .pc files depend on pkg-config?

2008-04-08 Thread Gabor Gombas
On Mon, Apr 07, 2008 at 06:49:24PM -0500, Manoj Srivastava wrote:

 In this case, again, if my dev package requires a tool not in
  build depends now, I should declare it, for the same reason -- the next
  upload of the dev package might have different tools, or eliminate
  tools -- and putting that build dependency in all the packages that
  use my dev package is hard -- especially when we consider the cases
  when the scenarios where these dependencies might change over time.

But it's not the -dev package that uses the tool. It's the user of the
-dev package that uses the tool so it should depend on it. For example,
calls of pkg-config are hard-coded in the user of the -dev package, not
in the -dev package itself.

If a new -dev package requires different tools, then all users of the
-dev package must be updated since they know nothing about the change
and they will happily continue to call the old tool.

Also, if it's the -dev package that depends on the tool and the tool
changes, then the users will get worse error messages. Instead of a
message like:

Package foo was not found in the pkg-config search path.
Perhaps you should add the directory containing `foo.pc'

you'll get:

pkg-config: No such file or directory

OTOH if it's the user of the -dev package that depends on pkg-config,
then you will always get a meaningful error message even if libfoo-dev
stops providing a .pc file.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Should -dev packages providing .pc files depend on pkg-config?

2008-04-08 Thread Gabor Gombas
On Tue, Apr 08, 2008 at 08:47:38AM +0100, Neil Williams wrote:

 It's also a lot of packages - does such a dependency ever become
 inferred by other packages? It probably shouldn't, for your reasons
 above, so this would appear to be a case for a lintian check.
 If ./configure exists and calls pkg-config or configure.in|ac calls
 pkg-config or uses an m4 macro that calls pkg-config, the package should
 build-depend on pkg-config ? (We don't seem to have many lintian checks
 on Build-Depends.)

Unfortunately that's not that easy, since an autoconf-using package may
build just fine if pkg-config is missing. pkg-config may only be needed
for optional components that may not even be part of Debian. Only the
maintainer can tell whether pkg-config is required or not.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -



Re: conditional dependency?

2008-02-26 Thread Gabor Gombas
On Tue, Feb 26, 2008 at 11:26:30PM +1100, Hamish Moffatt wrote:

 What are the constrained environments where you think static linking
 would be useful? I'm developing embedded systems and I prefer shared
 libraries - unless you have only one application using a particular
 library then you will save space.

Desktop grid applications that will be running on an unknown version of
an unknown distribution, but I want to be able to build them on Debian.
These applications won't ever use NSS, PAM or anything else that relies
on some system-level configuration because you can't assume anything
about the local system.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: conditional dependency?

2008-02-26 Thread Gabor Gombas
On Mon, Feb 25, 2008 at 11:07:10PM +, Roger Leigh wrote:

 I stopped providing static libraries in all my library packages quite a
 while back.  No one used them, and they were just needless bloat.  I
 can't say I would be upset if we dropped all the static libraries from
 the entire archive--is there actually a real world requirement for them
 in 2008?

Yes, desktop grid applications.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Faster shutdown and the ubuntu multiuser update-rc.d extention

2008-01-07 Thread Gabor Gombas
On Sun, Jan 06, 2008 at 10:10:03AM +0100, Tollef Fog Heen wrote:

 Taking this argument a bit further, do you think that the sshd init
 script should wait until all users have saved their work and logged
 out before it gives control back to the init sequence?

On a multi-user system that would be a trivial DoS so no. Just think
what happens if someone opens a file in an editor, leaves it open and
then goes on vacation...

It's the responsibility of the sysadmin to use the '-t' option of
shutdown properly and to check that users have really logged out.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Faster shutdown and the ubuntu multiuser update-rc.d extention

2008-01-04 Thread Gabor Gombas
On Thu, Jan 03, 2008 at 09:10:10PM -0800, Steve Langasek wrote:

 In that case, why would we not just migrate toward upstart as an init with
 service supervisor capabilities? :)

In the long run that may be desirable, but IMHO it won't happen in the
near future. Or do you already know something about lenny+1? :-)

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Faster shutdown and the ubuntu multiuser update-rc.d extention

2008-01-03 Thread Gabor Gombas
On Thu, Jan 03, 2008 at 02:45:40AM +, Colin Watson wrote:

 If this is a real problem for a given service, surely its init script
 should actually wait for the process to shut down cleanly? If so, it
 wouldn't be a candidate for this refactoring.

IMHO there can be many init scripts that currently do not wait for the
process to stop but they should if you want to do this refactoring. Some
random checks:

- samba: there is a sleep in stop but that may not be enough if
  there is heavy I/O. An explicit wait for process termination should be
  added.
- bind9: there is a sleep in restart but not in stop. Killing
  named in the middle of say a zone transfer may not be nice, so waiting
  should be added.
- heimdal-kdc: waits in restart but not in stop. Killing the KDC in
  the middle of a database update is not nice, so waiting should be added.
- squid: waits properly.
- acpid: there is a sleep in restart but not in stop and IMHO it
  does not need one - it can go without an explicit stop.
- chrony: there is a sleep in restart but not in stop. AFAIK chrony
  writes the RTC status file on exit so it must not be killed before
  that's done.
- apache2: there is a (rather long) sleep in restart but not in
  stop. Waiting for process termination should be added.
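
For the scripts above, the fix does not have to be complicated; a stop
target can wait (and still keep a SIGKILL safety net) with something like
this sketch - the pidfile path is made up and the timeouts are arbitrary:

# sends TERM, waits up to 30 seconds for the process to exit,
# then sends KILL and waits up to another 5 seconds
start-stop-daemon --stop --quiet --oknodo --retry TERM/30/KILL/5 \
    --pidfile /var/run/somedaemon.pid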

Also, in the proposed scheme sync() must be called _before_ sending the
final TERM signal, since sync() may take longer than 5 seconds and
therefore heavy I/O may prevent even simple processes from shutting down
properly between the final SIGTERM and SIGKILL.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Faster shutdown and the ubuntu multiuser update-rc.d extention

2008-01-03 Thread Gabor Gombas
On Thu, Jan 03, 2008 at 12:14:15PM -0500, Joey Hess wrote:

 If a package only shuts down cleanly because the rest of the shutdown
 process is slow, it is already buggy. Especially on systems where the
 shutdown is much faster, either due to their being fewer shutdown
 scripts than usual or the cpu being a lot faster than usual, or due to
 its shutdown script being reordered to run later than usual.

That's what I want to say. Such bugs are really hard to trigger, and if
something gets corrupted during say a reboot then most people will blame
the HW or the kernel before thinking about the shutdown script.

I'm wondering if init could be modified to warn if it really has to kill
something with SIGKILL, but of course syslog is long dead by then, so
unless you have a serial console you'll likely miss that warning.

  - apache2: there is a (rather long) sleep in restart but not in
stop. Waiting for process termination should be added.
 
 It's fairly common to add a sleep in restart to (try to) deal with
 issues such as reopening a socket.

But if the listening socket is still open then some apache module may
still be doing disk I/O/database access/etc. as well, which means stop
should wait till apache really quits.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Faster shutdown and the ubuntu multiuser update-rc.d extention

2008-01-03 Thread Gabor Gombas
On Thu, Jan 03, 2008 at 10:40:32AM -0800, Steve Langasek wrote:

 It's also, as commented already in the init script, recognized as a bug in
 the associated daemon.  Fixing that bug would drop the need for the sleep,
 though if there's a possibility of SIGKILL coming before the daemon is done
 shutting down then you still don't have a guaranteed cleanup, and there's no
 good wait for process termination facility that we can use from init
 scripts.

Yep, waiting for an unrelated process to exit is surprisingly hard to
do correctly. I wonder if the process connector support in recent
kernels could be used to create a kill_and_wait utility:

- start listening on netlink for process-related events
- send the signal to the process
- wait until we receive a notification that the process has died (or a
  timeout has occurred).
- from time to time do a kill(pid, 0) just to be sure we did not lose
  netlink messages

Non-linux ports could fall back to sending kill(pid, 0) in a loop.
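
The netlink part would need a small C program, but the portable fallback
mentioned above could look roughly like this (sketch only; it polls once
per second and does not guard against PID reuse):

#!/bin/sh
# usage: kill_and_wait <pid> [signal] [timeout-in-seconds]
pid=$1; sig=${2:-TERM}; timeout=${3:-30}
kill -"$sig" "$pid" 2>/dev/null || exit 0   # already gone
i=0
while kill -0 "$pid" 2>/dev/null; do
    i=$((i + 1))
    test "$i" -ge "$timeout" && exit 1      # still alive after the timeout
    sleep 1
done
exit 0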

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Faster shutdown and the ubuntu multiuser update-rc.d extention

2008-01-03 Thread Gabor Gombas
On Thu, Jan 03, 2008 at 09:24:59PM +0100, Petter Reinholdtsen wrote:

 Btw, if the 5 second wait isn't long enough for sendsigs, we can
 extend it.  There is code there to make sure sendsigs terminates as
 soon as the last process it tries to kill is dead, so we could
 increase the timeout without affecting the normal shutdown times.  It
 will wait from 0 to 5 seconds at the moment, depending on how long it
 take for the processes to die.  It would not be a problem to let it
 wait from say 0 to 10 seconds, or 0 to 30 seconds.

That may be a good safety measure. I think it is really hard to hit the
5 second limit but when that happens it is very hard to diagnose later
what went wrong. So if we can increase the max. timeout without imposing
a real delay in the common case (i.e. when everything shuts down
properly) that's good.

Also, how about doing a sync before sending the signals? That way I/O
generated by the services that _do_ have a proper shutdown script won't
interfere with killing the trivial services. Sure, that sync can take
time, but then the final sync will be that much shorter.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Faster shutdown and the ubuntu multiuser update-rc.d extention

2008-01-02 Thread Gabor Gombas
On Wed, Jan 02, 2008 at 12:47:12PM -0800, Russ Allbery wrote:

 Right.  The only case where a shutdown script makes sense to me is if it's
 doing something other than sending signals or if it's waiting
 (intelligently, not just blindly for five seconds) for the process to shut
 down cleanly.

So the only question is how many scripts _should_ wait but currently get
away without waiting because the shutdown sequence takes so long. IMHO
if a daemon does not write anything to disk except maybe log messages
then it should be fine without a shutdown script, but everything else
should have one.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Bug#457353: gdome2-xslt: should not be a Debian-native package

2007-12-28 Thread Gabor Gombas
On Thu, Dec 27, 2007 at 10:18:38AM -0800, Russ Allbery wrote:

 I don't think this is horribly relevant to what we're discussing, namely
 how to go about packaging software for inclusion in Debian.  Generating
 upstream-provided packages that don't meet Debian Policy and therefore
 won't be included in Debian but which are useful for some users is
 certainly of interest to some, but it seems rather off-topic for
 debian-devel.  We're focused on including software in Debian rather than
 creating problems like one sees in the Red Hat world where there are
 random RPMs scattered hither and yon all over the net that may or may not
 work together.

Well, this was in response to the "having debian/ in the upstream release
is harmful" opinions. You're right that the best thing would be to have
everything packaged officially, but in reality sometimes that just does
not work out, for various reasons:

- Having to work with unreleased development snapshots because the
  official release does not yet have some critical (for me) feature
- Maintainer not uploading new upstream versions for a long time
- Official package lacking some feature due to legal reasons that may be
  important for Debian as an organization but not for me as an
  individual

In these cases upstream help for creating a Debian package is really
nice as it saves me time. Of course it can be expected that the
upstream-provided packaging does not play nicely with official Debian
packages, but as long as having to install unofficial packages is the
exception rather than the norm, I'm willing to pay that price.

  There is also the method e.g. nut upstream uses that can be viewed as a
  compromise: they put the upstream-provided packaging info into a
  subdirectory (packaging/debian), so it does not conflict with the
  distro-provided packaging.
 
 This, of course, is ideal from a Debian packaging perspective.  It would
 be nice if more upstreams who feel like they *really* want to provide
 packaging files for Debian would use a strategy like this.

Maybe it should be described somewhere on www.debian.org why this is the
preferred method. I suspect most upstreams providing their own packaging
are simply not even aware that it may cause problems for distro makers.

 My experience, though, with maintainer-provided Debian packaging files
 except for the special case where the Debian and upstream maintainer are
 the same person is very poor.  The Debian packaging often hasn't been
 updated, doesn't reflect the current version of the package, may be
 written for some ancient release of Debian and sometimes won't work with
 unstable, and often has dependencies that reflect whatever the last person
 to touch it had sitting around on their system.  They maintain their
 Debian packaging about as well as they maintain their RPM spec files, but
 Debian puts more effort into integration and transitions and sloppy
 packaging is far more apparent than it is in the messy RPM universe.

In general I agree with you. However, in my experience fixing the
upstream-provided packaging to the point where I'm able to build an
installable .deb takes just 1-2 minutes, much less than having to create
debian/ from scratch. Yes, it is a hack, it may not be perfect (or it
may even be completely buggy from a packaging POV), but it saves _me_
time and that's what counts.

Maybe the webpage proposed above should also mention that binary
packages built using the upstream-provided packaging scripts should not
be put on the web, so it is less likely that people unaware of the
possible risks download and use them.

(Btw. I'm quite aware of the RPM hell problem. We're running
Scientific Linux on our grid nodes, and every gLite upgrade - even just
to update the CA certificates - tends to break the system in new and
exciting ways...)

 In most cases, the Debian packaging files end up just confusing users and
 the upstream maintainer would be better off deleting them and letting the
 Debian packager do their job.

In an ideal world, maybe. But until then they are useful.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Bug#457353: gdome2-xslt: should not be a Debian-native package

2007-12-27 Thread Gabor Gombas
On Wed, Dec 26, 2007 at 04:32:39PM +, Neil Williams wrote:

In general, you seem to rant about a lot of things that may make sense
on their own, but they do not seem to have _anything_ to do with a
package being Debian-native or not. More specifically, you try to imply
that a package being versioned Debian native _must_ mean that it is
unportable and buggy.

 That's not the point. It hinders sharing the code. I'm not unaware of
 the issues here, I'm upstream for many of my Debian packages and only a
 few are native. I make double uploads because it helps people package my
 upstream code for Fedora and SuSE and I believe that free software
 should never hinder the sharing of code. Releasing code with the debian/
 directory intact, just complicates the work for other distros.

Why? Is it really a problem to just ignore the debian/ directory when
writing a .spec file? Lots of upstream packages ship directories that do
not contain anything related to the build, why would the debian/
directory be any different?

 How does that look to other upstream teams developing on Fedora? Oh,
 we'll hack together some rubbish in debian/ since they put useless .spec
 files in their upstream code.

So you think a Debian-native package should contain a useless .spec
file? Otherwise I do not understand what you want to say.

   How are they to know whether the latest native version is Debian
   specific or contains useful upstream improvements?
  
  By reading debian/changelog -- that's what it's for!
 
 So they have to download the new *debian* version just to see what has
 changed when if it was an SF project they could see that the Debian
 release is of no interest to them as they have the .orig.tar.gz. Why
 should people wanting to use your code have to watch (and understand)
 Debian practices to package your POSIX code for a different
 distribution? What about forcing others to make repeated (useless)
 downloads and wasting their time reading Debian webpages / changelogs,
 trying to pick out what they want from the Debian stuff they don't? The
 package can be used outside Debian - why should someone outside Debian
 need to read debian/changelog in the first place!

That already happens when an upstream release contains a fix for
AIX/Solaris/HP-UX/(horribile dictu) Windows. You _do_ have to download the
upstream source or check the upstream website to see if the change is
relevant for Linux or not. How is this any different?

 If the code is not dependent on Debian itself, why should someone from
 another distribution even need to know about how Debian works just
 because upstream happens to be a DD?

Why would they? _You_ insist that they should know about the Debian
packaging, but they can just completely ignore it and write a .spec file
(or whatever) from scratch, just as if the debian/ directory did not even
exist.

 Write portable code in the first place and help others. What's wrong
 with that?

What has portability to do with a package being versioned Debian-native
or not?

 Some native packages even 'make install' directly into debian/tmp/ - how
 unfriendly is that?!

You continuously mix normal software/packaging bugs with being versioned
Debian native or not. In my experience some software (esp. software that
does _not_ use autoconf but tries to invent its own build system) is a
real PITA to install in the way I want, even if its authors have never
even seen a Debian machine. So I don't buy your argument that this
attitude has _any_ relation to being Debian-native or not.

 It's about reuse of code, it is about sharing code and about not
 thinking of Fedora et al as competition or a burden but as
 colleagues, even friends - people who help us from time to time and who
 should get some help in return.

Er, how does an "rm -rf debian/" command in an (according to you)
distinct upstream release improve code sharing or reuse?!? If anything it
makes the life of other packagers harder because they can't peek at it
for hints about how Debian handles things...

 Would you read the rpm webpage logs to try to work out whether you need
 to package a Fedora update?

AFAIR some packages regularly took patches from RedHat/Fedora when
their upstream went missing and the RedHat/Fedora release became the de
facto upstream. So yes, this has happened and undoubtedly will happen in
the future too.

And of course it can also happen in the other direction too when some
other distro decides to import some hunks from Debian's .diff.gz - the
package does not need to be Debian-native in this case either, yet that
other distro would have to follow every new upload in Debian.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Bug#457353: gdome2-xslt: should not be a Debian-native package

2007-12-27 Thread Gabor Gombas
On Wed, Dec 26, 2007 at 03:16:36PM +0100, Vincent Danjean wrote:

   I can tell you that this is not a easy way to cleanly package these
 softwares. I did not talk to upstream yet because I would like to present
 them new clean packages. Nevertheless, for now, I need to recreate a
 X.Y.Z+debian.1.orig.tar.gz without the debian/ directory so it is more
 difficult for a user to check that the orig.tar.gz has the same software
 as in the upstream site.

Why repack? The .diff.gz ought to be enough to describe the changes
under the debian/ directory. Even if you re-do the packaging from
scratch so looking at the diff itself is not very useful, it would still
accurately represent the changes made to the original sources.

Admittedly I've never tried to re-package something that already comes
with a debian/ directory, so if some tool barfs when the .orig.tar.gz
already contains a debian/ directory, then that tool should be fixed
instead of requiring the .orig.tar.gz to be recreated.

   More generally, having a tar.gz without debian/ makes easier to create
 the debian package. Some people say they are also the debian maintainer. But
 are they also the Ubuntu maintainer ? the knoppix maintainer ? the backport
 maintainer ? ...

You seem to make the mistake of thinking that the debian/ directory
provided by upstream is there to help the distro maintainer. I think
this is not true. The upstream-provided debian/ directory is often for
_users_ who just want to download the latest-and-greatest version or
CVS/SVN/... snapshot and install it The Debian Way. There are a lot
of packages where official Debian uploads take a lot of time due to
either technical or political reasons, or just due to lack of time on
the official maintainer's part, so I think it's quite nice if upstream
wants to make users' lives easier.

If you want to improve the Debian packaging included in the upstream
sources, that's great, go ahead and submit patches. But insisting on
removing the debian/ directory upstream IMHO goes against the Social
Contract that rates users' interests higher than maintainer convenience.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Bug#457353: gdome2-xslt: should not be a Debian-native package

2007-12-27 Thread Gabor Gombas
On Thu, Dec 27, 2007 at 05:04:17PM +0100, Vincent Danjean wrote:

 Gabor Gombas wrote:
  You seem to make the mistake to think that the debian/ directory
  provided by upstream is there to help the distro maintainer.
 [false assumptions]

Huh? I, as a user, routinely use upstream-provided debian/ directory to
create packages for some software (most frequently mplayer). So those are
not assumptions but facts.

And as a user, I can say that if e.g. the debian mplayer maintainer
considers the upstream-provided debian/ directory inconvenient, I
couldn't care less. If the upstream-provided packaging is not perfect or
does not fully follow Debian guidelines - I do not care, as long as it
_works_.

   I remove the upstream debian/ directory because the program that
 create the diff.gz (dpkg-deb ?) does not record *removal* of files [1].
 It only record changes. And I need to remove some files...
   A workaround can be to add 'rm debian/' in the 'clean' and 'configure'
 target of debian/rules but I think it is a lot clearer with a new
 upstream tarball without debian/ directory.

Then why not just do the unpacking of the upstream tarball from
debian/rules? That way you can use the unaltered upstream tarball yet
the .diff.gz will contain just your version of debian/.

There is also the method e.g. nut upstream uses that can be viewed as a
compromise: they put the upstream-provided packaging info into a
subdirectory (packaging/debian), so it does not conflict with the
distro-provided packaging.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences,
 Laboratory of Parallel and Distributed Systems
 Address   : H-1132 Budapest Victor Hugo u. 18-22. Hungary
 Phone/Fax : +36 1 329-78-64 (secretary)
 W3: http://www.lpds.sztaki.hu
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Heimdal changes

2007-12-05 Thread Gabor Gombas
On Fri, Nov 30, 2007 at 03:48:19PM -0800, C.J. Adams-Collier wrote:

  I modified my krb5.conf file so that heimdal stores its principals in an
  LDAP data store.  A peculiarity of this configuration is that kadmind
  expects the access control file to be named after the LDAP dn of the
  principal container node and to be in the current directory:EOF
 
  [EMAIL PROTECTED]:~$ ls -l /etc/heimdal-kdc/*.acl
  -rw-r--r-- 1 root root 156 Nov  8 18:00 /etc/heimdal-kdc/kadmind.acl
  lrwxrwxrwx 1 root root  28 Nov  8 17:31
  /etc/heimdal-kdc/ldap:ou=KerberosPrincipals,dc=cls2,dc=colliertech,dc=
  org.acl - /etc/heimdal-kdc/kadmind.acl
  EOF
 
  In order to set $PWD to the correct value, inetd would need to call a
  wrapper which does the chdir(2) and exec(3)

Is this still the case with Heimdal 1.0? If it is, I'd suggest reporting it
upstream. No daemon should depend on the current directory.

  At this point, it seems that kadmind and kpasswd should be de-coupled
  from kdc and moved into their own packages.  I can imagine many use
  cases where administrators wouldn't feel comfortable opening a TCP port
  for remote administration of kerberos principals, and slave servers
  should not run kpasswd.  Allowing sysadmins to choose not to install the
  daemons seems a useful feature.

Quite the contrary. You want everything installed on the slaves in case
the master dies and you have to do a failover. Maybe add a low-priority
debconf question whether this will be the master or a slave and
configure the kadmind/kpasswdd daemons appropriately.

  Currently, all heimdal servers run as root.  Yipe.  We should probably
  create a system user and group as well as an SELinux MAC policy.
  This /is/ a network authentication infrastructure, after all.

IMHO if the KDC is compromised then whether the intruder gets root or
not is the least of your worries. You must assume that the complete
database was stolen and they're happily cracking the master key so you
must re-key every principal. Re-installing the KDC from scratch is about
half an hour (you do not run _anything_ else on it, right?),
re-generating and re-distributing the host keys and issuing new
passwords for users may take days or weeks depending on the number of
principals.

  While going through all of this, I considered requirements to ease the
  process of modifying the heimdal packages to configure the system to use
  alternative principal sources.

What do you mean by that?

  Is it possible to have one package query
  another's entries in the database?  How about making modifications to
  another's configuration?  In order to store to OpenLDAP, heimdal would
  benefit from being able to discover some system settings:
 
  * the configured LDAP base DN

Which one? A single KDC instance can serve multiple realms using
multiple databases (well, you need some small patches for 0.7; they
should be included in 1.0 - I'll find out once I upgrade). Or different
kdc/kadmind/kpasswdd instances may run on the same machine bound to
different IP addresses.

  * LDAP bind dn and password capable of creating the KerberosPrincipals
  ou and a bind dn for the heimdal daemons' access to principals

Huh? You want to discover a _password_?!? If that password is stored
anywhere else than the kdc's configuration that would be quite a serious
misconfiguration...

  Additionally, the package would need to cause some modifications
  to /etc/default/slapd and /etc/ldap/slapd.conf, which are owned by the
  slapd package.

That would be a very _BAD_ thing to do. Mucking with other packages'
config files only increases the possibility that something will go wrong
if the admin already did non-trivial modifications to those files.

  If heimdal is configured to store principals to LDAP, removal of slapd
  would break the system's kerberos settings, unless principals were
  dumped to heimdal's native database and the /etc/krb5.conf were updated
  to reflect the change.

Well, the UNIX philosophy is to let sysadmins shoot themselves in the
foot if they want to. Do not employ clueless sysadmins.

  I would like to see a simple set of regression tests run after
  (re)configuration of slapd and heimdal packages.  This would ensure that
  the heimdal user is able to access and modify principals.  There should
  also be a rollback mechanism in case the regression tests fail.  I'd
  hate to see an automated update cause a kerberos outage until a human
  was able to fix the problem.

Automated update? On a KDC?!?! No way! An update can _ALWAYS_ go wrong
and can _ALWAYS_ cause breakage, so

- no sane man wants to do it on the master before being absolutely sure
  that it worked on a replica
- no sane man wants to do it at a time when there are no sysadmins near
  the console

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 

Re: Long-term mass bug filing for crossbuild support

2007-11-07 Thread Gabor Gombas
On Wed, Nov 07, 2007 at 06:54:04PM +, Neil Williams wrote:

 So it is preferable for me to add --build to native builds even in a
 patch that is meant to only affect the cross build? I'm sure some
 maintainers will query why I'm setting --build outside the crossbuild.

Provide two distinct patches: one that adds support for --build, and a
second one that does whatever modifications are necessary for
cross-compiling.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: crossbuild support: help2man considered harmful

2007-11-07 Thread Gabor Gombas
On Wed, Nov 07, 2007 at 05:08:43PM +, Neil Williams wrote:

 I've come across one issue: help2man
 
 If packages (like coreutils) use 'help2man' during the build, help2man
 tries to execute the compiled binary to get the error output to make
 into the manpage. Needless to say executing the compiled binary in a
 cross-build causes a build failure.

IMHO this is a bug in automake, so discuss it with automake upstream.
Automake should just treat help2man as if it were unavailable when
cross-compiling.

 Just a heads-up to anyone thinking of using help2man in their build -
 if the package is likely to be used in an embedded environment, you can
 expect a bug report from me. :-)

If you get automake fixed then that bug report becomes "please run
autoreconf".

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Mandatory support for -nocheck in DEB_BUILD_OPTIONS

2007-11-06 Thread Gabor Gombas
On Tue, Nov 06, 2007 at 08:08:03PM +, Neil Williams wrote:

 What about:
 
 Packages that run a test suite during the default build must support
 omitting the tests either upon detecting cross-compiling using
 dpkg-architecture or when -nocheck is specified in DEB_BUILD_OPTIONS.

If a package can handle cross-compilation properly then it will skip
tests in the testsuite that would require running compiled binaries. So
cross-compiling in itself should not be a reason to omit the test suite.

Packages should just check for -nocheck; adding -nocheck to
DEB_BUILD_OPTIONS should be the job of the cross-compiling environment.
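
A minimal sketch of that check, as it could appear in the shell part of a
debian/rules target (inside an actual make recipe every $ below would have
to be doubled):

case "$DEB_BUILD_OPTIONS" in
    *nocheck*) echo "nocheck set in DEB_BUILD_OPTIONS, skipping test suite" ;;
    *)         make check ;;
esac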

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: RFC: changes to default password strength checks in pam_unix

2007-09-05 Thread Gabor Gombas
On Tue, Sep 04, 2007 at 08:26:41PM +, Oleg Verych: gmane reading wrote:

 I.e *i don't care* about entering passwords on middle ground, without
 knowing, WTF this installer may do with them, not having comfortable
 environment for that _important_ action.
 
 Thus i have silly, empty passwords after installation. Then, i get my
 imagination and compose really super-druper passwords for root and users
 (that i create myself by script with, IDs i want/have on filesystems, not
 by installation process).

Well, if you give public network access to any machine before the
initial configuration is complete, that's _your_ problem. The UNIX way
was always "if the admin wants to shoot himself in the foot, let him".
Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: conflicting gssapi libraries

2007-08-12 Thread Gabor Gombas
On Sat, Aug 11, 2007 at 07:13:01PM -0700, Russ Allbery wrote:

 Why?  Could you explain what the UMich indirection library practically
 adds for our users?  Why would we want to continue using it rather than
 linking directly against an appropriate GSSAPI implementation?

GSSAPI was created to allow the use of multiple authentication
mechanisms. If you do not want to allow that, then you should just get
rid of GSSAPI completely and use the Kerberos APIs directly, as in this
case GSSAPI just adds a lot of unnecessary complexity.

Apart from the library naming issue, the UMich library is doing the
Right Thing wrt. the original intentions of the GSSAPI. Applications
should just depend on the _interface_. The actual implementation
selection should be a system-local policy and should not be hard-coded
in dependencies.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: adding desktop files to misc packages

2007-07-25 Thread Gabor Gombas
On Wed, Jul 25, 2007 at 06:57:54PM +0200, Josselin Mouette wrote:

 If the users installs the distribution with default settings or starts a
 session on a multi-user setup, he should find a usable menu, not a menu
 with all possible applications he never wanted to install.

So the menu system should know whether an application was explicitly
selected by the user during installation or pulled in due to some
strange dependency? And in the latter case, the menu system should know
that the user _wanted_ the application to be installed regardless (and
therefore he/she expects to see a menu entry by default)?

 Why shouldn't we attempt to make menus usable?

IMHO the best would be a union of the two viewpoints: show everything by
default, and gradually hide entries that were not used for some time. Or
did Microsoft patent that?

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -





Re: Mass bug filing: Dependency/file list predictability

2007-07-20 Thread Gabor Gombas
On Thu, Jul 19, 2007 at 11:02:39AM -0700, Russ Allbery wrote:

 Now, I'm willing to lead the way for Kerberos packages going forward, I
 guess, if I can figure out a good way to do that, but I don't know how
 that configure logic would even work or what those --with flags would look
 like.  The problem would be avoided if I required krb5-config be
 available, but I don't really want to do that, both because older versions
 of Kerberos don't have it and because krb5-config adds a bunch of needless
 shared library dependencies that create unnecessary interpackage
 dependencies in Debian.

Ideally everything should use libgssapi2 (or some other mechglue
implementation) and nothing should build directly against the
Kerberos-specific GSSAPI implementations. That would allow users to
select at run-time between different Kerberos versions or use other
GSSAPI implementations. It would also solve the dependency problems you
mentioned.

The only question is how hard it would be to make Heimdal work with
libgssapi2. I know there was some mechglue-related work in Heimdal but I
did not follow that.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -





Re: (glibc's opinion on malloc) Re: Bug#430140: ITP: hoard -- Fast, scalable, and efficient replacement memory allocator

2007-06-25 Thread Gabor Gombas
On Mon, Jun 25, 2007 at 08:07:55AM -0700, Russ Allbery wrote:

 The problem with this theory (basically, that glibc is taking a
 performance penalty by giving memory back to the system and hence being
 more space efficient) is that not only is Hoard significantly faster than
 glibc for OpenLDAP, it's also more space-efficient and allocates less
 total memory as soon as there are multiple clients querying the server at
 the same time.

Looking at the links, there is no mention of what "memory size" means
here. Is it the amount of address space mapped, or the amount of memory
dirtied? Mapping more memory is less important (unless you're running
out of address space, of course). Dirtying more memory is certainly much
more telling.
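
As a rough illustration, for a running server one can at least compare
the mapped size with the resident set (only a proxy for what was
actually dirtied, but it usually shows the difference clearly):

  # VmSize is the total mapped address space, VmRSS is what is actually
  # resident in RAM; "slapd" is just the example daemon discussed above.
  grep -E 'VmSize|VmRSS' /proc/$(pidof -s slapd)/status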

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -





Re: [EMAIL PROTECTED] Package

2007-06-24 Thread Gabor Gombas
On Sun, Jun 24, 2007 at 05:51:29PM +0200, Frank Küster wrote:

 As I understood it, the idea was more to keep information *comparable*,
 which wouldn't be the case if someone improved the script by using a
 faster minimizer, linking against an improved libfoo or whatever.  You
 simply cannot publish a work based on input which clients sent to us
 that are somehow more or less the same as what we describe in the
 methods section, it needs to match exactly what's in the methods
 section.  It's not a means against evil attackers, but against
 goodwilling helpers.

Josselin is right here: being closed source does not protect against
this kind of problem _AT ALL_. We're running a BOINC project that runs
a closed-source application, but that did not prevent a guy with some
free time from disassembling the code and producing a binary patch to
speed up the program in order to gain more credits.

There is ongoing research about how to make public distributed computing
more reliable and tampering more detectable, but being closed source does
not help at all.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -





Re: Using standardized SI prefixes

2007-06-14 Thread Gabor Gombas
On Wed, Jun 13, 2007 at 08:45:13PM +0200, Josselin Mouette wrote:

 The meaning of 1 TB is approximate only for approximate people. I'd
 expect more rigor from people working in computer science (if we can
 call it a science).

... and since most Debian users are not computer scientists, Scott is
right.

Yesterday my colleague asked me how much storage a server we had bought
with some project money has, because he had to write a report. When I
told him 931 GiB, he said "No, I want a number like 1T or 2T" (and he
has an IT degree, although not CS in the strict sense). That's how
people think, and if you do not acknowledge that, you're living outside
of reality.
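
(The number is just the usual decimal-vs-binary gap: a disk sold as 1 TB
holds 10^12 bytes, which is roughly 931 GiB once divided by 2^30. A
quick bash check, purely for illustration:)

  echo $(( 10**12 / 2**30 ))   # prints 931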

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -





Re: Two proposals for a better Lenny (testing related).

2007-06-13 Thread Gabor Gombas
On Wed, Jun 13, 2007 at 12:42:34AM +0100, Luis Matos wrote:

  - Smooth passages are not always smooth (who had a working xorg after
 the upgrade for 7, please raise their hands)

AFAIR, apart from having to edit a few config files, it was quite painless
(I upgraded while Xorg was still in experimental).

OTOH the current xserver-xorg-video-ati snapshot in experimental is not
suitable for everyday use (the crash in DPMS is a blocker for me), so I'd
be quite annoyed if it were uploaded to unstable; but being able to
easily test new versions to see if the bugs are still there is very
useful.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -





Re: Two proposals for a better Lenny (testing related).

2007-06-13 Thread Gabor Gombas
On Tue, Jun 12, 2007 at 05:40:29PM -0300, Gustavo Franco wrote:

 I disagree, that's what we've with experimental today mainly due to
 the fact that there's just a few packages there. Consider everybody
 uploading every package for unstable instead.

Experimental can and does contain packages that are _known_ to be broken
and unusable. Uploading these to unstable would mean that no one would
test unstable any more (right now you can _decide_ if you want to risk
installing known-broken packages from experimental; removing
experimental also removes that choice).

And if no one tests unstable because it's just too broken, then bugs
will not be found before packages migrate to testing (the method of
migration, whether manual or automatic, does not matter here at ALL),
meaning the quality of testing would drop significantly.

I don't see that as an improvement...

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -





Re: Best practices for cron jobs?

2007-06-13 Thread Gabor Gombas
On Wed, Jun 13, 2007 at 12:28:09AM -0400, Duncan Findlay wrote:

 What can I do to satisfy those with and without anacron, and to avoid
 hammering the sa-update servers at a specific time?

Idea:

- Generate a random minute number in the postinst
- Set up an entry in cron.d that runs every hour at the minute chosen
  above and calls a helper script that checks whether the last
  invocation of sa-update was more than 23 hours ago (using a timestamp
  file) and, if so, calls sa-update (see the sketch below)
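
Something along these lines, purely as a sketch -- the helper path, the
stamp file location and the cron user are made up for illustration:

  #!/bin/sh
  # Hypothetical helper, called from an /etc/cron.d entry such as
  # (with 42 replaced by the random minute picked in the postinst):
  #   42 * * * *  root  /usr/share/spamassassin/sa-update-cron
  STAMP=/var/lib/spamassassin/sa-update.stamp
  # Touch the stamp first so the "more than 23 hours ago" check does not
  # depend on sa-update's exit status, then run the update.
  if [ -z "$(find "$STAMP" -mmin -1380 2>/dev/null)" ]; then
      touch "$STAMP"
      sa-update
  fi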

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -



Re: Two proposals for a better Lenny (testing related).

2007-06-13 Thread Gabor Gombas
On Wed, Jun 13, 2007 at 11:28:52AM +0100, Luis Matos wrote:

 kernel upgrades from 2.6.50 to 2.6.51 ... nvidia packages don't build in
 time (they are not free, right?) ... kernel passes to testing ...
 automatically, the nvidia-module-2.6.50 uses 2.6.50 and not *.51, so ...
 after a reboot, my xorg server will not run... when it used to.

Then create an empty nvidia-module metapackage that depends on the latest
nvidia-module-X.Y.Z package and conflicts with linux-image-$ARCH versions
newer than X.Y.Z (see the sketch below).
Just because you're using non-free kernel modules does not mean that
everyone else _not_ using those modules should be penalized.
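
Roughly, the metapackage stanza could look like the following -- package
names, versions and the exact relations are illustrative only; the point
is the Depends/Conflicts pair:

  Package: nvidia-module
  Architecture: i386
  Depends: nvidia-module-2.6.50
  Conflicts: linux-image-2.6-686 (>> 2.6.50)
  Description: dummy package keeping the kernel and the nvidia module in sync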

Or alternatively, just reboot with the old kernel just like you'd do
when you found out that any random driver you happen depend on stops
working in the new kernel version.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -




