Re: KVM: GPU passthrough

2021-04-27 Thread Christian Seiler

Hi there,

On 2021-04-09 00:37, Gokan Atmaca wrote:

error:
pci,host=0000:01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio
0000:01:00.0: group 1 is not viable
Please ensure all devices within the iommu_group are bound to their
vfio bus driver.


This is a known issue with PCIe passthrough: depending on your
mainboard and CPU, some PCIe devices will be grouped together,
and you will either be able to forward _all_ devices in the
group to the VM or none at all.

(If you have a "server" GPU that supports SR-IOV you'd have
additional options, but that doesn't appear to be the case.)

This will highly depend on the PCIe slot the card is in, as well
as potentially some BIOS/UEFI settings on PCIe lane distribution.

First let's find out what devices are in the same IOMMU group.
From your kernel log:

[0.592011] pci 0000:00:01.0: Adding to iommu group 1
[0.594091] pci 0000:01:00.0: Adding to iommu group 1
[0.594096] pci 0000:01:00.1: Adding to iommu group 1

Could you check with "lspci" what these devices are in your case?
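
In case it helps, a small shell loop along these lines (a sketch
that just walks sysfs, where the kernel exposes the grouping) prints
each IOMMU group together with the lspci description of its members:

for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group="${dev#/sys/kernel/iommu_groups/}"; group="${group%%/*}"
    printf 'IOMMU group %s: ' "$group"
    lspci -nns "${dev##*/}"   # -nn also shows the vendor/device IDs
done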

If you are comfortable forwarding the other two devices into the
VM as well, just add them to the list of passthrough devices, and
then this should work.

If you need the other two devices on the host, then you need to
either put the GPU into a different PCIe slot, put the other
devices into a different PCIe slot, or find some BIOS/UEFI setting
for PCIe lane management that separates the devices in question
into different IOMMU groups implicitly. (BIOS/UEFI settings will
typically not mention IOMMU groups at all, so look for "lane
management" or "lane distribution" or something along those
lines. You might need to drop some PCIe lanes from other devices
and give them directly to the GPU you want to pass through in
order for this to work, or vice-versa, depending on the specific
situation.)

Note: the GUI tool "lstopo" from the package "hwloc" is _very_
useful to identify how the PCIe devices are organized in your
system and may give you a clue as to why your system is grouped
together in the way it is.

Hope that helps.

Regards,
Christian



Re: going beyond a ch341 uart-usb convertor

2021-04-12 Thread Christian Seiler

Hi there,

On 2021-04-12 05:55, Gene Heskett wrote:

Building a design/builder for a 3d printer, which when a std usb to
printer cable is connected between the computer and the 3d printer,
identifies as a ch341 convertor cable once it is plugged into the
printer.

[...]

What would be the next thing to try to discover why it's not working?


Do you actually have permissions for the device? Typically serial
devices have permissions such that only root and the group 'dialout'
can access them. You can check that via:

ls -l /dev/ttyUSB0

That will typically look something like

crw-rw---- 1 root dialout 188, 0 Apr 12 08:54 /dev/ttyUSB0

You can use id to determine if you are in the 'dialout' group. If
that doesn't appear (but your user and other groups do appear), then
you may add your user to that group via

gpasswd -a USERNAME dialout

(Replace USERNAME by your username and run this command as root; the
new group membership only takes effect after logging out and back in.)

(Note that I've used ttyUSB0 here, because CH341 devices typically
appear as such, but other USB to serial converters may also appear
as ttyACM0 instead of ttyUSB0.)

Additionally, even if the permissions are OK, if you have ModemManager
installed (which is typically the case on desktop systems), for the
first 30 to 60 seconds it will try to detect if the device in question
is a modem, and only once that fails will it release the device. So it
could be that the device is "stolen" by ModemManager directly after
plugging it in and/or powering it up, and you have to wait a bit until
ModemManager releases it.

Furthermore, if you have software installed that supports devices
that help people with vision impairments, that software could
interfere with USB serial converters, as some such devices also
use a USB serial interface.

You can check if another software (ModemManager, or something else) is
currently using the serial device by calling:

lsof /dev/ttyUSB0

(Run the command as root.)
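
If the permissions are fine and nothing is holding the device, a
quick sanity check is to attach a terminal program to the port, e.g.
from the 'screen' package (the baud rate here is just an example; it
depends on the printer's firmware):

screen /dev/ttyUSB0 115200

(Quit screen with Ctrl-A followed by k.)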

Hope that helps!

Regards,
Christian



Re: OT: Question about 10/100 switch on a LAN with a faster router

2019-12-31 Thread Christian Seiler

Hi there,

On 2019-12-31 14:03, rhkra...@gmail.com wrote:
I'm about to recommend that he get a 10/100 5 port Ethernet switch to
connect to the two cameras and then a short cat5 (or better) Ethernet
cable to connect from the switch to the router.

I'm about 99.9% sure that using such a switch will not slow down any
other parts of his network, but I don't want to mislead him.


I assume you want to do the following?

   +-- Other device (GBit)
   |
   |       (100 MBit)
 Router ---------------- Switch
   |                    /   |   \
   |                   /    |    \
   |            Camera A    |     (potentially more in the future)
   |                        |
   |                     Camera B
   |
   +-- Other device (GBit)


How much peak bandwidth are the cameras going to use simultaneously?

If both cameras won't ever use more than 100MBit/s _combined_
(either because they only use the bandwidth at different times OR
they only actually use 50MBit/s or less anyway), then this
configuration will be fine. Otherwise I wouldn't recommend this
setup. (Also consider the future-proofing of this setup, even if
you only add more 100MBit/s devices, because once you connect all
4 switch ports, all of these devices combined will share only a
single 100MBit/s link to the router.)
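
As a rough back-of-the-envelope check (the numbers here are made up
for illustration; substitute the cameras' real bitrates): two cameras
streaming at 8 MBit/s each only need 16 MBit/s combined and are fine
behind a 100 MBit/s uplink, while four cameras at 30 MBit/s each
would need 120 MBit/s and would saturate it.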


Am I missing anything?


Do you (or he) still have a 100MBit/s switch lying around so it
doesn't cost you anything? If so this will be fine. Otherwise I don't
see the point in buying a 100MBit/s switch -- I don't know about the
US, but here in Germany I can get a 5 port Gigabit switch for the
equivalent of ~ 20$, and that includes a VAT that is more than twice
that of the typical sales tax in the US. Heck, I can get an 8-port
Gigabit switch for the equivalent of ~ 25$. Sure, I can get a
100MBit/s switch for ~ 10$, but unless I want to deploy 100s of
these, I don't see the point in saving this small amount of money;
especially since a Gigabit switch will likely still be something
useful in 10 years once your brother completely changes his current
setup, but a 100MBit/s switch might not be.

Regards,
Christian



Re: RCA Cable to USB Video input device

2019-12-30 Thread Christian Seiler

Hi there,

On 2019-12-14 07:45, Marc Shapiro wrote:

I want to copy some videos from VCR and DVD to my computer for editing
(simple stuff, like removing commercials).  I found this device on
Amazon:


https://www.amazon.com/Digital-Converter-Capture-Support-Android/dp/B06X42H9VZ/


It says in the title that it works on Linux, and at least one of the
reviews says it works on Debian.


My father has one that looks just like it. I don't really know
whether it works on Linux (because my father uses Windows), but
when he upgraded his Laptop to Windows 10, he asked me to help
him find drivers for this thing, and the main problem we ran
into was that different chips are sold in the same housing - so
just from looking at it from the outside it is unclear what
chip is actually used there. There appears to be Linux support
for some of the common chips used in this kind of device, but
there's no guarantee.
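
The USB IDs are a more reliable way to identify the actual chip than
the case. With the usbutils package (normally installed) you can
check (the ID shown is the one commonly reported for UTV007-based
sticks; yours may differ):

$ lsusb
...
Bus 001 Device 005: ID 1b71:3002 Fushicai USBTV007 Video Grabber [EasyCAP]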

From the listing you posted, the device you have appears to use
a UTV007 chipset, and you can find some documentation on how to
make that work on Linux here:

https://linuxtv.org/wiki/index.php/Easycap#Making_it_work_4

As for recording software: after searching for quite a while,
the best software I could recommend to my father was OBS Studio,
which makes recording very easy (though it requires some initial
setup) - recording captures technically isn't what OBS was
designed for, but you can use it for that purpose nonetheless.

As for editing, you might want to take a look at kdenlive or
avidemux. I don't have much experience myself with this though,
so YMMV here.

Hope that helps!

Regards,
Christian



Re: how to enable trim for an external encrypted SSD?

2017-12-18 Thread Christian Seiler
Hi again,

A quick follow-up, because cryptsetup 2.0.0 was recently released:

On 11/06/2017 02:28 PM, Christian Seiler wrote:
> And while you might be able to reconfigure udisks to pass the discard
> option to cryptsetup (though I'm also doubtful about that), that
> configuration would have to happen on each individual computer, and
> can't be put onto the external drive.

The new LUKS2 header format (which requires cryptsetup 2.0.0) does
support this; the release announcement specifically mentions discards
as a feature option:

Quoting
<http://www.saout.de/pipermail/dm-crypt/2017-December/005771.html>:

  * Persistent flags
The activation flags (like allow-discards) can be stored in
metadata and used automatically by all later activations (even
without using crypttab).

To store activation flags permanently, use activation command
with required flags and add --persistent option.

For example, to mark device to always activate with TRIM
enabled, use (for LUKS2 type):

 $ cryptsetup open <device> <name> --allow-discards --persistent

You can check persistent flags in dump command output:

$ cryptsetup luksDump <device>

Doesn't help you directly because of the problems you've described
with your USB adapter not forwarding TRIM commands properly, and
this feature will not be part of Debian before the release of
Debian 10/Buster (I hope at least ;-)), but I still wanted to
mention this in case anyone stumbles over this thread in the
mailing list archives.
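
For the archives as well, the complete sequence on a LUKS2 volume
would look roughly like this (a sketch; the device and mapping name
are placeholders):

$ cryptsetup open /dev/sdX2 extssd --allow-discards --persistent
$ cryptsetup luksDump /dev/sdX2 | grep -i flags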

Regards,
Christian



Re: Embarrassing security bug in systemd

2017-12-15 Thread Christian Seiler

On 2017-12-08 21:31, Gene Heskett wrote:

On Friday 08 December 2017 14:26:41 Jonathan Dowland wrote:

No objection there, and I agree that the release notes should probably
have covered the policy changes. That ship has now sailed
unfortunately.


So now, no effort will ever be made to fix the man pages. Hell of a way
to run a train.


If you want to update the release notes, please file a bug against the
'release-notes' pseudo-package, ideally (but not necessarily) with the
proposed wording change. (But please also check if someone else has
already done so, so you don't file a duplicate.)

It _is_ possible to update the release notes after the fact; see e.g.
https://bugs.debian.org/865632 for something that was added after the
release of Stretch. (And there are other examples, see
https://anonscm.debian.org/viewvc/ddp/manuals/trunk/release-notes/en/
for a full history of what has been changed in the release notes of
Stretch so far, both before and after the release.)

Regards,
Christian



Re: Missing pyuic5, pyrcc5 and pylupdate5 from python3-pyqt5 (was Re: Missing qt5-designer in Stretch)

2017-11-15 Thread Christian Seiler

Hi there,

On 2017-11-15 06:12, Marc Shapiro wrote:

Now that QtDesigner is running I can look at the rest of the build
chain and I find that the programs listed in the subject (pyuic5,
pyrcc5 and pylupdate5) are missing from python3-pyqt5. According to
the docs on Sourceforge these should all be included in the upstream
package.  Has Debian split them off somewhere and not listed it as a
Depends, or Recommends?


$ apt-file search pyuic5
pyqt5-dev-tools: /usr/bin/pyuic5
pyqt5-dev-tools: /usr/share/man/man1/pyuic5.1.gz
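
So installing that package should be all that's needed (the same
apt-file search will confirm that pyrcc5 and pylupdate5 live there
as well; run as root):

apt-get install pyqt5-dev-tools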

To see what binary packages are built from the same source package
as python3-pyqt5:

1. Get the source package name:

$ apt-cache show python3-pyqt5 | grep ^Source:
Source: pyqt5

2. Get all binary packages associated with it (the "tr" is for better
readability):

$ apt-cache showsrc pyqt5 | grep Binary: | tr ',' '\n'

The reason Debian doesn't ship all of this in a single binary package
is that you don't actually need these tools to _use_ PyQt5, only to
develop with it. (Similar to C/C++ development, where there are also
separate development packages.) This way you can select what you want
to install.

Regards,
Christian



Re: Missing qt5-designer in Stretch

2017-11-14 Thread Christian Seiler

On 2017-11-14 08:05, Marc Shapiro wrote:
Am I missing something, somewhere? Or is qt5-designer not packaged
for Debian?


Designer for Qt5 can be found in the qttools5-dev-tools package
in Stretch.

Regards,
Christian



Re: how to enable trim for an external encrypted SSD?

2017-11-11 Thread Christian Seiler
On 11/11/2017 02:31 PM, Joerg Desch wrote:
> On Sat, 11 Nov 2017 14:13:12 +0100, Christian Seiler wrote:
> 
>> Also, did you mount the device with -o discard or set that option with
>> tune2fs (_before_ mounting it)?
> 
> No, not now. I want to use fstrim in favour of the discard option. But
> I've tested it with an Arch Linux installation with a fresher kernel.
> There both variants (fstrim and the discard option in fstab) fail too.
> In case of the discard option, the error is written to the logs.
> 
>> What does the command
>> findmnt | grep /media/jd/TEST
> 
> $ findmnt | grep /media/jd/TEST
> ├─/media/jd/TEST  /dev/sdg1  ext4
> rw,nosuid,nodev,relatime,data=ordered
> 
> $ sudo lsblk -D
> [sudo] password for jd: 
> NAME     DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
> sda             0      512B       2G         0
> ├─sda1          0      512B       2G         0
> └─sda2          0      512B       2G         0
> sdb             0        0B       0B         0
> sdg             0        0B       0B         0
> └─sdg1          0        0B       0B         0
> 
> 
> In this case, sda is an internal SSD (Samsung EVO) with working TRIM
> and sdb is an HDD.

Well, then it appears that the chip (or the kernel driver for it)
doesn't support passing through the TRIM command to the underlying
SSD.

If it's the kernel driver, there's a chance that'll be fixed in a
future kernel version.

If it's the chip itself, you're stuck without TRIM on the external
SSD.

Unfortunately I can't say much more about that; I've never had an
external SSD.

Regards,
Christian



Re: how to enable trim for an external encrypted SSD?

2017-11-11 Thread Christian Seiler
On 11/11/2017 02:05 PM, Joerg Desch wrote:
> On Mon, 06 Nov 2017 14:28:32 +0100, Christian Seiler wrote:
> 
>> I don't think what you want is possible. When you plug in an external drive
>> (that is _not_ in /etc/fstab etc.) the process of how it gets mounted is
>> different:
> ...
>> tune2fs -o +discard /dev/external_disk_device
> 
> My drive has arrived and even an ext4 without LUKS fails to (fs)trim
> the mount! Any idea why?
> 
> 
> $ sudo hdparm -I /dev/sdg1
> ...
>  *Data Set Management TRIM supported (limit 8 blocks)
>  *Deterministic read data after TRIM
> $ sudo fstrim /media/jd/TEST
> fstrim: /media/jd/TEST: Verwerfungsvorgang wird nicht unterstützt.
>   [German: "the discard operation is not supported"]
> 

Maybe the external controller (or at least the driver for the external
controller) that interfaces the SSD with USB (or whatever you're using
to connect it externally to your system) doesn't understand TRIM
properly?

Also, did you mount the device with -o discard or set that option with
tune2fs (_before_ mounting it)?

What does the command

findmnt | grep /media/jd/TEST

say in terms of active filesystem options?

Regards,
Christian



Re: Ethernet card locking up when acting as virtual bridge

2017-11-10 Thread Christian Seiler
Hi there,

On 11/10/2017 07:24 PM, Andrew W wrote:
> That turned out to be the Cisco switch on the other end, which is an
> ESW500 series Small Business Switch (i.e. web gui only, no IOS CLI).
> On there, there is a 'Smartports Wizard' which allows you to set a
> 'role' for each port, and unless it is set to 'Access Point' I got
> the strange behaviour.
> 
> As the virtual bridge and wifi access point are pretty much the same
> thing in principle, I tried changing the port role to 'Access Point'
> as well and it seems to have solved the issue. I've no idea what that
> is changing in the switch config, and when I asked on the Cisco forum
> I couldn't get an answer, but it seems to be the cause of the trouble.

I presume that if a port is not set to 'Access Point' mode the switch
will assume that packets coming from that port will always come from
the same MAC address, and since the VMs you have all have differing
addresses this causes the switch to silently drop packets from a
different MAC address. And depending on how they actually implemented
that in detail the behavior may be quite weird.

Regards,
Christian



Re: Ethernet card locking up when acting as virtual bridge

2017-11-08 Thread Christian Seiler

On 2017-11-08 11:54, Andrew Wood wrote:

My configuration is below. Initially it worked fine, except that once
in a while the card would seemingly 'lock up', i.e. no VMs could get
network access, but unplugging and replugging the Cat 5 cable fixed it.

Recently however the issue has been occurring more and more; sometimes
some VMs can get network access and not others. I've tried swapping the
card for an identical model but it's made no difference. I'm not sure
if it's something to do with my config (maybe the made up MAC
addresses?) or if it's a bug in the card's driver or firmware.

Config is below; eth0 is the host's Realtek port, enp3s0 is the 3Com
used for the VMs.

iface br1 inet dhcp


No idea if this is the problem or not, but you are using DHCP for the
bridge interface. Maybe the DHCP client's management of the bridge
interface interferes with this? (DHCP clients like to 'take over' an
interface and may set flags that don't work properly.) I don't think
I've ever used a DHCP client on a bridge before other than for short
testing purposes.

Is it possible for you to try a static IP on this interface and see
if that solves your problem?

If that doesn't help:

 - The Linux bridge interface will always use the numerically lowest
   (in a Big-Endian sense) MAC address that's configured as the MAC
   address for the bridge. If you're doing virtualization, you
   should stick to MAC addresses that are very high for this reason,
   so that the network card's own address always wins out. Many
   virtualization tools ensure that the MAC addresses they generate
   start with 0xfe for this reason. (This might also confuse DHCP
   clients btw. if the MAC address of the interface changes behind
   their back.)

 - When the problem occurs: do the VMs at least see each other? If
   that doesn't even work, your problem isn't your network card,
   but something else.

 - Anything in the kernel log?
   If not, is there maybe a debug option for your specific network
   driver you could enable so that something is added to the logs?

 - Try running 'ip l' and 'ip a' both when the problem occurs and
   when it doesn't occur and see if there's any difference in the
   output of either of these.

 - You said that unplugging and replugging the cable made it work
   again - maybe the link sensing between your network card and the
   switch / whatever you have on the other side is broken for some
   reason? Could you try to force the right speed / duplex settings
   via ethtool and see if that helps?

 - Maybe your network card supports various offloading features
   (such as TCP checksums) but when used as a switch they don't
   always work properly - you could try disabling them (also via
   ethtool, see the manpage and the example below this list).

 - You said that you've tried replacing the network card - have you
   tried replacing the cable? I've had some weird experiences with
   broken network cables. (For example, very strange intermittent
   packet loss in some instances, which cause very weird effects in
   higher-level protocols.)
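
As a concrete illustration of the two ethtool suggestions above (the
interface name and values are examples only; check "man ethtool" for
what your driver supports):

# force speed/duplex in case link autonegotiation is misbehaving
ethtool -s enp3s0 speed 1000 duplex full autoneg off

# disable offloading features that can interact badly with bridging
ethtool -K enp3s0 tso off gso off gro off tx off rx off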

Regards,
Christian



Re: Anyone using stretch/buster/sid on ARMv4t ?

2017-11-07 Thread Christian Seiler

Hi,

On 2017-11-07 11:49, Rick Thomas wrote:

How do I know if a machine is ARMv4t?  I have a sheevaplug and a
couple of openrd machines (one “client”, the other “ultimate”) that
are still doing useful work.  Are they v4t?


cat /proc/cpuinfo should do the trick. It might not show the 't'
after the 4, but it should definitely show whether it's an ARMv4
or not. (And Debian's armel doesn't support any non-'t' ARMv4
CPUs, so if it's ARMv4 and running Debian's armel port, it's
ARMv4t.)
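
For example, to filter out just the relevant line:

grep 'CPU architecture' /proc/cpuinfo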

Regards,
Christian



Re: how to enable trim for an external encrypted SSD?

2017-11-06 Thread Christian Seiler

On 2017-11-06 13:09, Joerg Desch wrote:

Now I have bought a SanDisk Extreme Portable SSD. My goal is to add a
LUKS encrypted partition without an explicit fstab entry. I've done
this with some USB thumbdrives before, but not with TRIM support. The
drive should be plugged into any Linux device without the need to
change the configuration.

What is the correct way to add TRIM support to an external SSD with a
LUKS encrypted partition?


I don't think what you want is possible. When you plug in an external
drive (that is _not_ in /etc/fstab etc.) the process of how it gets
mounted is different:

 - For configured drives (/etc/fstab etc.) the init system (e.g.
   systemd, but also initscripts when you are using sysvinit) will
   read /etc/crypttab and /etc/fstab and apply those options to
   the LUKS container and the filesystem.

 - For non-configured drives the 'udisks' helper program will
   enable the user to decrypt & mount those devices.

Now, at least the ext4 filesystem allows you to set default mount
options, such as 'discard':

   tune2fs -o +discard /dev/external_disk_device
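
You can verify what is stored afterwards (the device name is a
placeholder, as above):

   tune2fs -l /dev/external_disk_device | grep 'Default mount options'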

btrfs on the other hand tries to auto-detect SSDs and enable
discard automatically if on an SSD - but I have no idea whether that
works under LUKS or not.

But all that doesn't really help you if the underlying LUKS container
isn't opened with the discard option set. And as far as I know there
is no possibility of tagging a LUKS container with that option, you
must always supply that option to the cryptsetup command (which is
done implicitly via /etc/crypttab).

And while you might be able to reconfigure udisks to pass the discard
option to cryptsetup (though I'm also doubtful about that), that
configuration would have to happen on each individual computer, and
can't be put onto the external drive.

In summary:

 - If your filesystem is not encrypted, btrfs out of the box, and
   ext4 with the proper option set, should make it possible to
   automatically enable mounting with the correct option.

 - But I don't know of any method to tag a LUKS container so that it
   is opened with TRIM support by default.

Hence I don't think what you want is possible with the current state
of affairs, unfortunately. Sorry.

You could ask the LUKS developers to include an additional flag in
their headers that allows you to specify that this volume should be
opened with discards allowed by default - to maybe solve this in the
very long term. No idea how amenable they'd be towards that though,
as they do discourage the usage of TRIM in LUKS because it weakens
possible security guarantees somewhat.

Regards,
Christian



Re: NFSv4 without Kerberos and permissions

2017-10-16 Thread Christian Seiler
On 10/16/2017 07:57 PM, John Ratliff wrote:
> On 10/15/2017 3:38 AM, Christian Seiler wrote:
>> Furthermore, the MANAGED_GIDS setting is only for NFSv2/3 and only
>> for supplementary groups, not the primary group. It is not a security
>> setting, it really is just for bypassing the 16 group limit of older
>> NFS protocols, and is useless with NFSv4.
> 
> I'm not sure this is entirely the case. Turning off managed GIDs and
> using an NFSv4 mount (i.e. on client: mount -t nfs4 blahblahblah),
> this let me use secondary groups on the server. Before, I could not.

Sorry, you're right. rpc.mountd is typically only used for NFSv2/3,
but with NFSv4 it does provide some corner-case functionality (I
rechecked the code just now) and MANAGED_GIDS is one of those cases.

>> Basically, if you're running a sec=sys NFS server (or an NFSv2/3
>> server) you're implicitly trusting all the clients that are allowed
>> to connect to you and all the network components (such as routers)
>> in the middle. Anything that is not owned by the root user will be
>> able to be read or written to by any NFS client if the client wants
>> to: they can just read out the user ID of the file and send that
>> to the server together with the read/write request. Only the root
>> user is a bit more protected due to root_squash, and you can make
>> the entire export read-only - but that's it when it comes to the
>> protections NFS without Kerberos gives you.
> 
> Thanks. This has been quite helpful. I will adjust the configurations
> on my actual servers to make use of ID mapping there as well.

Note that idmapping is not the same as Kerberos. idmapping just
means that you map between the user id and a string before any
ID is sent over the network. (Client translates user id to string,
sends string over network, server receives string, translates
it back to a user ID and uses that - and vice versa.)

NFSv4 without Kerberos _always_ has the same security properties,
irrespective of whether you enable idmapping or not: none.

If you want authentication you need sec=krb5. If you also want
data integrity and tamper-resistance, you need sec=krb5i. And
if you also want encryption you need sec=krb5p. In all of these
cases you need a Kerberos setup, which is not trivial.
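
To give an idea of what the configuration side looks like once
Kerberos is in place (a sketch; hostnames and paths are made up):

# server, /etc/exports:
/srv/nfs  *.example.org(rw,sec=krb5p,root_squash)

# client:
mount -t nfs4 -o sec=krb5p server.example.org:/srv/nfs /mnt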

Regards,
Christian



Re: NFSv4 without Kerberos and permissions

2017-10-15 Thread Christian Seiler
On 10/15/2017 03:55 AM, John Ratliff wrote:
> In my case, the user on the client I was testing was UID 1003, which
> on the server he was UID 1000. So they both had the group, but UID
> 1003 on the server did not have the group, because that user did not
> exist. Therefore, permission denied.

Then you are not idmapping correctly.

NFSv4 has two modes of operation when it comes to users:

 1) Use raw UIDs/GIDs like NFSv2/3 did. This is available since Linux
3.2 or 3.5 (I don't remember which) and only possible if sec=sys
(i.e. no Kerberos) is used. In that case the user IDs are simply
sent over the wire directly.

This requires the UIDs and GIDs to be identical on client and
server.

 2) Use names for users / groups. Each user and group is translated
into a string on the client, and the server translates the string
back to a user id. This is done via the idmapping mechanism
(rpc.idmapd on the server side, and nfsidmap in combination with
request-key on the client side, both configured via
/etc/idmapd.conf; see the example below). Here the UIDs/GIDs don't
need to be identical on client and server.
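
A minimal /etc/idmapd.conf could look like this (illustrative; the
important part is that the Domain value matches on client and
server):

[General]
Domain = example.org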

In your case, since you are not using Kerberos, current Linux
versions will default to not using the ID mapping mechanism when a
non-Kerberos setup is in place, and will use raw UIDs/GIDs instead.
But you _do_ want idmapping, since the UIDs and GIDs on the client
and/or server don't match up.

To make this work, you can tell the server to never accept raw UIDs
and GIDs:

echo 0 > /sys/module/nfsd/parameters/nfs4_disable_idmapping

The setting is a bit of a misnomer: while it is on by default, the
setting does not actually disable idmapping on the server; it being
on will just also enable raw UIDs and GIDs. If a client that
connects doesn't support raw UIDs/GIDs (such as older Linux versions,
e.g. 2.6.x, or other operating systems) the server will still happily
do idmapping with the setting on. But disabling this setting will
have the NFS server fall back to the mode where it will only accept
idmapped strings and never accept raw UIDs/GIDs. This will cause
some log messages to be shown in recent enough Linux clients (the
message being "v4 server [...] does not accept raw uid/gids.
Reenabling the idmapper"), but that's actually the behavior you want.

To make the setting permanent, you may add it to your modprobe configuration:

echo "options nfsd nfs4_disable_idmapping=0" > /etc/modprobe.d/nfsd.conf



There is also a setting for the clients here, found in
/sys/module/nfs/parameters/nfs4_disable_idmapping (note the missing
'd' in the module name) that works just the same, but only on the
client side.

I am not aware of any way to change this setting on a per-mount
basis.

It is up to you whether you'll want to add this setting on the server,
the clients, or both.


In an unrelated note:

> Although it's not the best solution from a security standpoint, I'm
> going to disable the manage-gids option for now and limit access by
> hosts.allow and the firewall.

NFSv4 without Kerberos does not have any security at all. The server
will implicitly trust the UID the client sends to the server, so a
compromised client may impersonate any (!) user on the server except
root (unless no_root_squash is set in /etc/exports, which I don't
recommend). It may also impersonate any group.

Furthermore, the MANAGED_GIDS setting is only for NFSv2/3 and only
for supplementary groups, not the primary group. It is not a security
setting, it really is just for bypassing the 16 group limit of older
NFS protocols, and is useless with NFSv4.



Basically, if you're running a sec=sys NFS server (or an NFSv2/3
server) you're implicitly trusting all the clients that are allowed
to connect to you and all the network components (such as routers)
in the middle. Anything that is not owned by the root user will be
able to be read or written to by any NFS client if the client wants
to: they can just read out the user ID of the file and send that
to the server together with the read/write request. Only the root
user is a bit more protected due to root_squash, and you can make 
the entire export read-only - but that's it when it comes to the
protections NFS without Kerberos gives you.

Regards,
Christian



Re: Required help on local Debain mirror

2017-08-30 Thread Christian Seiler

Hi there,

On 2017-08-29 11:57, Kala Techies wrote:

I am using (Debian GNU/Linux 6.0.10 (squeeze)) in my environment and I
want to update all systems using one local mirror.


I don't think it's a good idea to set up a real local mirror,
as that means you'll download the entire archive, which is
likely going to be a _lot_ more stuff (especially if you
download all available architectures) than upgrading each
machine individually.

What you'll rather want is to set up a local proxy server
that'll cache the packages. This way you'll only download
what you actually need, but you'll also only download it
once.

I can recommend the apt-cacher-ng package for that.
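
The clients then only need an APT proxy snippet pointing at the
machine running the cache (the hostname and file name here are
examples; 3142 is apt-cacher-ng's default port):

echo 'Acquire::http::Proxy "http://cachehost:3142";' \
    > /etc/apt/apt.conf.d/01proxy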

Regards,
Christian



Re: [Multiarch] armhf on arm64 is not working

2017-08-29 Thread Christian Seiler
Hi there,

On 08/29/2017 06:07 PM, Adam Cecile wrote:
> Could be an alternative indeed, but what about the speed compared to
> my quad-core i5 with qemu ?

I haven't actually tried that specific comparison, but from my
experience a Pi tends to be a tiny bit faster in pure CPU
performance than qemu on Intel. (But not by much.) The RPi3 is
also a quad core, so that is similar. YMMV depending on the
precise workload though.

That said:

 - It has only 1 GiB of RAM. That might be a problem.
 - It doesn't have as much cache as an Intel Core CPU, so if
   you have workloads that require a lot of memory access,
   that'll probably offset any small advantages in pure CPU
   performance.
 - I/O is quite slow. If you compile large things my guess is
   that just because of I/O it'll take longer on the Pi than
   with qemu.

OTOH, it's cheap, so even if it's not the right thing in the
end you're not going to waste a ton of money. You could also
first buy just the board and power supply and only buy a case
and other accessories once you've verified that it's sufficient
for your use case.

Then again, there are also other ARM boards in a similar price
range out there, which might suit your use case better. But I
really am not an expert here, I've just played around with the
Pi a bit in the past...

Regards,
Christian



Re: [Multiarch] armhf on arm64 is not working

2017-08-29 Thread Christian Seiler
Hi,

On 29 August 2017 17:04:29 CEST, Adam Cecile wrote:
>I was not aware of this optional 32 bit compatibility. That kinda
>sucks.

Yeah, especially since you had the misfortune of getting the one chip that is 
sold that doesn't support it.

>Actually, I'm already using qemu with cowbuilder (I mean, a lot) but my
>biggest problem is not the slowness but the broken thread 
>implementation.

Yeah, that is a problem indeed. And one that will take a long time to fix, 
because threading is already hard enough when you don't need to juggle hardware 
architectures with different memory coherency models...

>If you have any hint about online cheap arm server, let me know...

Does it have to be a server? The raspberry pi 3 has an ARMv8 chip with 32bit 
compat mode. Granted, it's not the fastest, doesn't have that much RAM, and if 
you boot it in 64bit mode some peripherals don't work yet, but for a pure 
compile/build box...? Especially if you're kind of OK with the speed of qemu?

Regards,
Christian



Re: [Multiarch] armhf on arm64 is not working

2017-08-29 Thread Christian Seiler

Hi,

32bit ARM compatibility is optional according to the specification,
and if your CPU doesn't support it, you won't be able to natively
run armhf executables. This is in contrast to x86, where all[*]
64bit x86 CPUs also support running old 32bit programs.

From what I've read it appears to be that the vast majority of chips
currently sold do have 32bit ARM compatibility - except for one, the
Cavium Thunder X. And if you look at your /proc/cpuinfo output:

On 2017-08-29 08:49, Adam Cecile wrote:

CPU implementer: 0x43
CPU architecture: 8
CPU variant: 0x1
CPU part: 0x0a1
CPU revision: 1


Well, yeah, that's the one.

See also:
https://askubuntu.com/questions/928249/how-to-run-armhf-executables-on-an-arm64-system

So it appears you're out of luck on your hardware: your CPU
simply doesn't support running armhf executables. You could run them
emulated in qemu-user-static, but that's probably not what you want,
because that's going to be _really_ slow.
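
(For completeness, the emulated route looks roughly like this; the
binary name is a placeholder, and dynamically linked binaries
additionally need the armhf libraries, e.g. in a chroot:)

apt-get install qemu-user-static
qemu-arm-static ./some-armhf-binary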

Regards,
Christian

[*] There were very, very few exceptions, and nothing marketed as a
general purpose CPU.



Re: No ifconfig

2017-08-22 Thread Christian Seiler

On 2017-08-22 17:11, Sven Hartge wrote:

Christian Seiler <christ...@iwakd.de> wrote:


auto eth0
iface eth0 inet static
  address 192.168.0.1/24
  address 192.168.0.42/24
  address 10.5.6.7/8



This will work, and it will assign all IPs to the interface (the first
one being the primary and the source IP of outgoing packets where the
program doesn't explicitly bind anything).


No, this does not work in Stretch. Only the first address is added. To
get additional addresses, it has to look like this:

,
| auto eth0
| iface eth0 inet static
|   address 192.168.0.1/24
|
| iface eth0 inet static
|   address 192.168.0.42/24
|
| iface eth0 inet static
|   address 10.5.6.7/8
`


Oh yeah, sorry, I wrote that from memory and misremembered a bit.
Thanks for the clarification!

Regards,
Christian



Re: Relocated Header Directories

2017-08-22 Thread Christian Seiler

On 2017-08-22 16:47, Mario Castelán Castro wrote:
What about the ELF shared objects that *are* under “/usr/lib”? Are
these programs that do not have support for multi-arch?


Not programs, but packages, yes. Not all library packages in Debian
have been updated to use the Multi-Arch scheme yet (in some cases
other aspects of the package may make this difficult, even if it
is easy to put the .so file into the new location), though the
number of packages that are still in /usr/lib directly has decreased
with every Debian release since Wheezy (the first with Multi-Arch).

Regards,
Christian



Re: Relocated Header Directories

2017-08-21 Thread Christian Seiler
On 08/21/2017 09:12 PM, Dutch Ingraham wrote:
> For example, Fedora (and Gentoo, etc.) also installs glibc for both
> 32- and 64-bit on the same machine, but they have not relocated these
> header files. So are you saying this was just Debian's method of
> solving the multi-arch issue, and other distributions solved it in
> some other way?

Other distributions' support for multiple architectures is limited to
32bit and 64bit of the same fundamental CPU - for example 32bit and
64bit Intel/AMD CPUs. In these cases the header files that are
installed have traditionally been the same (just with #ifdef in them
for things that differ). What you can't do on those distributions is
install packages from foreign architectures on your local system;
for example you can't install a library for ARM on a system with
Intel/AMD CPUs on those systems.

Debian and Ubuntu decided to go a more thorough route here: instead
of just supporting the parallel installation of 32bit and 64bit
packages they decided to fully support the installation of packages
from arbitrary architectures. This means that instead of having
/usr/lib32 and /usr/lib64, you have /usr/lib/ARCH (where ARCH is
the triplet that specifies the architecture, for example
x86_64-linux-gnu or arm-linux-gnueabihf) and /usr/include/ARCH.
Libraries themselves should not be installed in /usr/lib directly
anymore (and most libraries have already moved to /usr/lib/ARCH,
but some packages haven't yet), while /usr/include (without ARCH)
does still have a purpose: for those headers where the architecture
doesn't play a role at all. Architecture-dependent headers should
go into /usr/include/ARCH instead though. The compiler and linker
are configured in such a way that they'll look in _both_ directories
by default (with a preference for the ARCH directory) when searching
for header files and libraries.
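
The package-management side of this is what makes it usable in
practice: any foreign architecture can be enabled explicitly. A
sketch of the mechanics, using armhf as an example (run as root):

dpkg --add-architecture armhf
apt-get update
apt-get install libc6:armhf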

Now it might seem strange to do this, as you can't natively run an
ARM application on an Intel/AMD CPU (you'd need an emulator for
that, though there are such things for binaries alone, take a look
at qemu-user for that), but there are other benefits to this scheme:

 - You can use this scheme to cross-compile to other architectures.
   Want to create an arm64 Debian package? Install the build
   dependent libraries in their arm64 variants on your system and
   you can do so (if the package supports cross-compiling, which
   not all do).

 - Since there's no arbitrary restriction of 32bit and 64bit that
   may be installed in parallel, there are cases when there are
   two different 32bit architectures you may want to install in
   parallel. For example, there are two 32bit ARM variants
   supported by Debian at the moment: armel and armhf. Both use
   the ARM EABI binary interface, but armhf assumes that ARMv7
   floating point instructions are present in the CPU (along with
   the corresponding registers) while armel uses software emulation
   for floating point instructions (to support ARM CPUs that don't
   have them). Now say you have an ARM application (from a 3rd-
   party repository for example) that was compiled for armel, but
   your system is armhf. In Debian you can just install the
   libraries required for that program in their armel variant onto
   your otherwise armhf system and you can run that program.
   Whereas other distributions would put the libraries for both
   variants in /lib32 (or just /lib) in those cases, so you could
   not install the same libraries for both architectures - which
   also includes the system C library.

 - Code compiled for alternative C libraries (e.g. musl instead
   of glibc) can't be linked against code compiled for the standard
   C library. In Debian you have a trivial way of installing code
   that was compiled for both: alternative C libraries use a
   different architecture specifier, in the case of musl for
   example x86_64-linux-musl (vs. x86_64-linux-gnu) and you can
   also install libraries compiled for both variants in parallel.

The only disadvantage in my eyes seems to be that Debian and its
derivatives (Ubuntu, etc.) are the only ones that do this, and all
the other distributions seem to favor the /lib32 vs /lib64 variant
to do this.

Regards,
Christian



Re: No ifconfig

2017-08-21 Thread Christian Seiler
On 08/21/2017 07:40 PM, Gene Heskett wrote:
> I'll have to study up on this "binding" and how its done.

Note that that's something a program can do if it wants to, but not
something you can generically configure (though individual programs
might offer you configuration options for this), and most programs
that make outgoing connections don't bind the outgoing socket
because they don't care about which IP their packets originate from
and are happy to use the OS's defaults.

In case you want a pointer on how this works from a programming
perspective, I can always recommend Richard Stevens's book UNIX
Network Programming (_the_ book about this topic), and the manpage
of the bind syscall (section 2; man 2 bind) is also a possible
starting point.

Regards,
Christian



Re: No ifconfig

2017-08-21 Thread Christian Seiler
On 08/21/2017 07:07 PM, Gene Heskett wrote:
> On Monday 21 August 2017 12:11:38 Christian Seiler wrote:
> 
>> On 08/21/2017 05:03 PM, Gene Heskett wrote:
>> iface eth0 inet static
>>   address 192.168.0.1/24
>>   address 192.168.0.42/24
>>   address 10.5.6.7/8
>>
>> This will work, and it will assign all IPs to the interface (the first
>> one being the primary and the source IP of outgoing packets where the
>> program doesn't explicitly bind anything). And "ip a" will show all
>> three addresses, but "ifconfig -a" will only show the first.
>>
> Ok, but then how do you differentiate between the addresses without 
> the :1 [:2 etc] notation?

I don't understand the question? Where do you want to specify an
address? When removing the address you just say "remove address XYZ
from interface ABC".

> It doesn't seem right that it would bang all the assigned addresses
> with duplicate data.

I don't get what you mean here. What is duplicate? If you open an
outgoing connection by default the primary (first) IP that matches
the outgoing subnet will be used as the source IP for that
connection - but a program can override that by binding the socket
to any of the other IPs of that interface.

In the above example: any connection to 192.168.0.23 will by default
carry the source IP 192.168.0.1, and any connection to 10.1.1.1 will
by default carry the source IP 10.5.6.7. An application can create
an outgoing connection with source IP 192.168.0.42 by explicitly
binding the socket to that IP before making the connection.
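
You can observe the same effect with command line tools that let you
pick the source address explicitly, for example (using the addresses
from the example above):

ping -I 192.168.0.42 192.168.0.23
curl --interface 192.168.0.42 http://192.168.0.23/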

Which is kind of similar to alias interfaces: with alias interfaces
the route metric of the alias interfaces relative to each other
defines what IP will be used by default, but again it is possible
for an application to override that by binding the socket to a
specific IP address.

And incoming connections are trivial anyway in these setups.

From the point of view of applications that just use the socket layer
(and don't care about network interface names) the system will react
in the same way whether you use multiple addresses per interface or
whether you use alias interfaces. The main differences are in how it
is configured and how the kernel code works.

Regards,
Christian



Re: No ifconfig

2017-08-21 Thread Christian Seiler
On 08/21/2017 05:03 PM, Gene Heskett wrote:
> On Monday 21 August 2017 09:08:11 Christian Seiler wrote:
>> 2. Can't add multiple IP addresses to the same interface and
>> (worse) even if multiple IP addresses are assigned to the
>> same interfaces it only shows the primary address
> 
> I don't know as to how ifconfig sets it up, but its a piece of cake to 
> edit /etc/network/interfaces to do that. If I bring in a new router, I 
> uncomment this stanza in the interfaces file:
> 
> #auto eth0:1
> 
> # to access reset to 192.168.0.1 routers/switches on the 2nd cat5 port
> #iface eth0:1 inet static
> #address 192.168.0.3
> #netmask 255.255.255.0
> ==
> giving me an address I can use to talk to and configure the new router.

Yeah, that's the old way of doing this via an alias interface. I was
talking about the new-style way of doing so though.

For example:

auto eth0

iface eth0 inet static
  address 192.168.0.1/24
  address 192.168.0.42/24
  address 10.5.6.7/8

This will work, and it will assign all IPs to the interface (the first
one being the primary and the source IP of outgoing packets where the
program doesn't explicitly bind anything). And "ip a" will show all
three addresses, but "ifconfig -a" will only show the first.

Alias interfaces are kind of legacy, and while they still work, they
do have a couple of drawbacks: they aren't really an interface of
their own because they share options with the interface they are
based on (which can be confusing if you want to change interface
options); there is no way to automatically add a new IP to a given
interface without probing first which aliases have already been
"used up"; and the alias namespace is limited by both the max length
of an interface name and the limitation of the alias part itself.

But don't get me wrong: if it works for you with alias interfaces, I'm
certainly not going to tell you to change that - because those also do
work with the "ip" utility. The major issue "ifconfig" has here is
that it doesn't see the additional IP addresses of interfaces added by
other tools - so that when you rely on ifconfig you _don't_ see the
actual entire network configuration of the system, but only a part of
it. So it's actually counter-productive when you're troubleshooting a
system.

>> (2) is really bad, especially the part where it does not show
>> all of the IPs that were assigned by other tools, for example
> 
> Huh? ifconfig doesn't even need a -a option to show me eth0:1 if it's
> configured and up.

Yes, for alias interfaces it does. For the additional IPs added to the
interface itself it doesn't.

>> NetworkManager, or Debian's own ifupdown via
>> /etc/network/interfaces.
> 
> Please don't equate those two.

I was talking about how these configure multiple IPs when you use
them. Both use the newer kernel interface that allows you to specify
multiple IPs on the same interface, while ifconfig uses the old
interface that assumes a single IP per interface. And I just used
the most prominent programs in Debian as examples for this, but all
other management tools I know of (connman, systemd-networkd, ...)
also use the newer interface.

I really didn't want to discuss the merits or problems of each
individual software package.

Regards,
Christian



Re: No ifconfig

2017-08-21 Thread Christian Seiler

On 2017-08-21 14:50, Greg Wooledge wrote:

[missing features in ifconfig]
(Like Gene, I don't even know what those features *are*.)


From my personal experience, the following two things are
features I'm actually using regularly and that don't work
with it:

1. IPv6 doesn't really work properly (as explained elsewhere
   by other people in this thread)
2. Can't add multiple IP addresses to the same interface and
   (worse) even if multiple IP addresses are assigned to the
   same interfaces it only shows the primary address

(2) is really bad, especially the part where it does not show
all of the IPs that were assigned by other tools, for example
NetworkManager, or Debian's own ifupdown via
/etc/network/interfaces.

Regards,
Christian



Re: Systemd: Error when replacing postfix LSB init with postfix.service on Debian 8 (jessie)

2017-08-21 Thread Christian Seiler

On 2017-08-21 11:52, Tom Browder wrote:

On Mon, Aug 21, 2017 at 02:36 Sven Hartge  wrote:

Question: Why do you want to manually replace the init-script from
postfix in Jessie with a systemd.unit? What do you want to
accomplish by
doing so (other than creating a possible broken system)?


I thought I needed to be able to create service files since the init.d
system is going away.


Maybe in 20 years or so, but not in the foreseeable future. And
especially not during the lifetime of a Debian release. There
will be no point release or security update for Jessie that will
drop support for init scripts. Same goes for Stretch. So even
_if_ Debian should decide to drop init script support in Debian
10 (Buster) - which won't happen, not even systemd upstream has
dropped init script support yet, and they're much less
conservative than Debian when it comes to these things  - you'd
still be able to use Stretch for 5 years before support runs
out. And as I said: support is not going to go away anytime
soon.

Now, that doesn't mean that you should still write _new_ init
scripts for custom services if you're going to use systemd
anyway. There it will be a good idea to learn how to do that
with native systemd service units.

But I don't think it's a productive use of your time to go
around and start replacing all init scripts that are currently
present on your system by systemd services. Those that are
included with Debian are going to be taken care of by the
maintainers in Debian in subsequent releases. And any old
custom scripts that you still use I'd transition whenever you
need to change something in them anyway.


Postfix seems simple enough that its service file would also be
simple.


Well, other than the fact that postfix isn't simple (as you've
noticed), the main issue here is that Debian 9 comes with Postfix
3, while Debian 8 comes with Postfix 2. Furthermore the Postfix
packaging in Debian 9 has started to make use of advanced systemd
features for their own units, so it's not just a port of a simple
init script, but something rather more complicated. You had the
misfortune of picking one of the worst examples here.

Regards,
Christian



Re: customizing systemd config

2017-08-11 Thread Christian Seiler
Hi there,

On 08/11/2017 04:42 AM, Gregory Seidman wrote:
> I'm trying to recreate under systemd something I had previously cobbled
> together with shell scripts and init levels under sysvinit.
> 
> Only a few services ran under init 2, the default set in /etc/inittab,
> including privoxy and ssh; the rest of the services I wanted running, such
> as fetchmail, exim4, courier-imap, apache2, etc. would be started at init
> level 3. Those services required an encrypted volume (actually a RAID that
> was an encrypted LVM PV for a VG with several volumes) to be configured and
> mounted before they could be started.

I've blogged about this very scenario a while back:
https://blog.iwakd.de/headless-luks-decryption-via-ssh

Note that I wrote that mainly to explain some details about
systemd using a specific example, I personally am not actually
using that kind of setup. For a headless server of mine I use
full disk encryption (LUKS) for everything except /boot and
unlock the entire system in the initramfs. I also mention that
approach in my blog post, but wanted to stress it here again
because I think that the initramfs-based decryption is the
better way to do this. For that alternative take a look at:
https://projectgus.com/2013/05/encrypted-rootfs-over-ssh-with-debian-wheezy/

Regards,
Christian



Re: Btrs vs ext4. Which one is more reliable?

2017-08-11 Thread Christian Seiler
Hi there,

On 08/11/2017 06:29 PM, Dejan Jocic wrote:
> On 11-08-17, Christian Seiler wrote:
>> You can also set DefaultTimeoutStopSec= in /etc/systemd/system.conf
>> to alter the default for all units (though individual settings for
>> units will still override that).
>>
> Thank you for suggestion. I did find that solution, some time ago, can't
> remember exactly where. But it was followed by warning that it is bad
> idea, can't remember exactly why. Do you have any hint of why it could
> be bad idea to limit timeout, or I've just misunderstood whatever I've
> read about it?

Well, there's a reason the default is 90s. And for some services even
that might be too short. Take for example a database server where the
regular stop script might take 10 minutes to shut down properly (when
no error occurs).

On the other hand for other services you can easily get away with a
lot less of a timeout. For example, I have apt-cacher-ng running on
my system (to cache stuff for sbuild), and I think it's perfectly
reasonable to set the stop timeout for that service to 10s or even
lower because that's just a stupid proxy. On the other hand I've
never experienced apt-cacher-ng taking longer than 1s or so to stop,
so I haven't bothered.

The right timeout is always a balancing act - and systemd's default
is a compromise to provide something that won't break most use cases
but still cause the system to shut down after a finite time.

It's up to you to decide what the best option here is. I wouldn't
set the default to anything lower than 30s myself, but that's just
a gut feeling, and I don't actually have any hard data to back that
number up.

> As for more reliable during shutdown part, not in
> my experience, at least on Stretch.

I don't recall ever running into the timeout on shutdown since Stretch
has been released as stable. And I am running a couple of Stretch
systems myself, both at home and at work.

> It was on Jessie though, where that
> feature was hitting me not more than once in every 15-20
> shutdowns/reboots. 

Even every 15-20 shutdowns is too much. I never experience those
unless something's wrong. And then I debug that problem to see what
is causing it and get rid of the root problem so that it doesn't
occur again.

Regards,
Christian



Re: Btrs vs ext4. Which one is more reliable?

2017-08-11 Thread Christian Seiler

On 2017-08-10 16:02, Dejan Jocic wrote:

> On 10-08-17, David Wright wrote:
>> On Thu 10 Aug 2017 at 07:04:09 (-0400), Dan Ritter wrote:
>>> On Wed, Aug 09, 2017 at 09:46:09PM -0400, David Niklas wrote:
>>>> On Sat, 29 Jul 2017 04:59:40 +0000
>>>> Andy Smith wrote:
>>>>
>>>> Also, my use case is at home where the power can and *does* fail. I
>>>> also find myself using the latest kernel and oftentimes an
>>>> experimental driver for my AMD graphics card, hence my need for a
>>>> *very* stable fs over sudden unmount.
>>>
>>> Buy a cheap UPS with a USB or serial connection to your
>>> computer. Even if it only supplies power for 2 minutes, that's
>>> enough time for the computer to receive the power outage signal
>>> and do an orderly shutdown.
>>
>> Two minutes barely covers the timeouts that can often occur when
>> shutting down systemd; the commonest timeout period here seems
>> to be 90 seconds. I wouldn't mind reducing them if that's possible.
>> Processes got just a few seconds with sysvinit before they were
>> killed.
>
> Yes, those 90 sec waiting for nothing is one of the most annoying
> "features" of systemd that I would love to get rid of.


You can set TimeoutStopSec= for some units explicitly, for example
via a drop-in. Example:

mkdir -p /etc/systemd/system/XYZ.service.d
cat > /etc/systemd/system/XYZ.service.d/stop-timeout.conf <<EOF
[Service]
TimeoutStopSec=30s
EOF
systemctl daemon-reload

> And most annoying aspect of it is that the problem is rarely
> constant. It can exist in one release of systemd, vanish in another,
> and then come back again in the next release. And it can occur once
> in every 10 shutdowns/reboots, or not occur once in every 10
> shutdowns/reboots.


That is an indication that you have a race condition during
shutdown.

The "90s" thing is basically just systemd saying: yeah, I've tried
to shut down a specific unit and it's still active, now I'm going
to wait for the timeout before I send a hard SIGKILL. You can't
really compare that to sysvinit, because sysvinit doesn't actually
track processes properly, so what most often would happen is that
the init script would send a TERM signal to a process, the better
ones maybe also a KILL signal after some time, before they'd just
consider the service stopped. But if other processes had been
started by the service, sysvinit wouldn't care about them, and
only kill those in the final "let's kill all that's still left
over" killing spree. systemd by contrast actually tracks what's
happening with a service and kills the remaining processes.

That said: what could happen here is that the systemd unit created
for a given service has a bug. For example it could not be ordered
correctly and hence systemd tries to stop it too early while other
services still depend on it.

Or the stop command that is called by systemd hangs because it
tries to do something that it shouldn't do during shutdown (for
example start another service).

See the following page for information on how to debug shutdown
issues with systemd (and keep in mind that Debian has systemd stuff
installed  in /lib and not /usr/lib):
https://freedesktop.org/wiki/Software/systemd/Debugging/#index2h1

I've found systemd to be far more reliable during shutdown (even
if you have to wait for a timeout if something's gone wrong),
because at least there is a timeout. With sysvinit I've sometimes
had the problem that a shutdown script would hang and then nothing
further would happen and the computer would never properly shut
down. This was especially frustrating with headless machines. What
systemd does do is make it much more apparent if there's a
misconfiguration somewhere.

Regards,
Christian



Re: howto restart a service in postinst script (Stretch and newer)

2017-08-06 Thread Christian Seiler
Hi there,

On 08/04/2017 12:30 PM, Harald Dunkel wrote:
> What is the right way to restart a service from the postinst
> script for Stretch and newer?

The same way as before: if it has both an init script and a
systemd service, just call

invoke-rc.d script restart

or

invoke-rc.d script restart || :

depending on whether you want errors to be fatal or not.

You could also take a look at what debhelper generates for
you if you use dh_installinit:

http://sources.debian.net/src/debhelper/10.2.5/autoscripts/postinst-init/

(#ERROR_HANDLER# is "exit $?" - without the quotes - by default.)
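
The generated fragment is roughly of this shape (a sketch; 'foo'
stands in for the init script name):

if [ -x "/etc/init.d/foo" ]; then
    invoke-rc.d foo restart || exit $?
fi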

> Reason for asking is: opensmtpd died once too often when it got
> restarted via invoke-rc.d from a postinst script on my desktop 
> PC.

I just looked at the opensmtpd package: it uses debhelper compat
9, so it defaults to the following behavior on upgrades:

 - prerm of the old package: stops the service
 - dpkg unpacks the new binaries
 - postinst of the new package: starts the service again

See https://wiki.debian.org/MaintainerScripts#Upgrading for a
detailed graph on the order in which the maintainer scripts are
executed in on upgrade.

If restarting opensmtpd fails in your case, this is either

 - a bug in your configuration that leads to opensmtp failing
   to start again

or 

 - a bug in the package (either in how upgrades are handled
   or in how the package works)

But without further details (i.e. what error message was given,
both in the APT output and in the system logs) I don't think
this can be diagnosed further. I can only tell you that from
what I can see the postinst script does the right thing and
that if there's a bug in the package, it's not there but in
some other place.

Regards,
Christian



Re: howto restart a service in postinst script (Stretch and newer)

2017-08-06 Thread Christian Seiler
On 08/06/2017 05:28 AM, Richard Hector wrote:
> On 06/08/17 04:43, Sven Hartge wrote:
>> Harald Dunkel  wrote:
>>> On Sat, 5 Aug 2017 11:56:07 +0900 Mark Fletcher  wrote:
 On Fri, Aug 04, 2017 at 12:30:25PM +0200, Harald Dunkel wrote:
>>
> What is the right way to restart a service from the postinst
> script for Stretch and newer?
>>
 I may be misunderstanding your question but on a system that has 
 migrated to systemd, you can restart a service with: 

 systemctl restart <service>
>>
>>> I think you missed the point. To run it from a postinst script we need
>>> a universal(!) way to restart a service, regardless whether systemd or
>>> sysvinit-core or whatever is installed.
>>
>> invoke-rc.d does just that and is included in postinst by
>> dh_installinit for both SysV-init *and* systemd.
> 
> I've only looked through it briefly, but it looks like it invokes the
> initscript regardless of whether systemd is in use

No. You should only use it for things that also have an init script
so that it doesn't fail on sysvinit systems, but it will invoke
systemd directly if the system is currently running systemd.

http://sources.debian.net/src/init-system-helpers/1.48/script/invoke-rc.d/#L542-L592

For things that are only available on systemd (for example if you
have split the service additionally for systemd, while sysvinit is
still just a single script) you should use the code that is
generated from dh_systemd in postinst, which you can see e.g. here:

http://sources.debian.net/src/debhelper/10.2.5/autoscripts/postinst-systemd-restart/
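
That generated code is roughly of this shape (a sketch;
'foo.service' stands in for the unit name):

if [ -d /run/systemd/system ]; then
    systemctl --system daemon-reload >/dev/null || true
    deb-systemd-invoke restart foo.service >/dev/null || true
fi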

Note that it is important _not_ to call the init script or
systemctl directly from any maintainer scripts, as policy dictates
that the administrator should be able to use a custom script or
program in /usr/sbin/policy-rc.d to influence whether maintainer
scripts actually perform any actions. (For example, in chroots
you can use that to completely disable services from being started
from maintainer scripts.) Both invoke-rc.d and deb-systemd-invoke
will take care of that.
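
For example, a minimal policy-rc.d that forbids maintainer scripts
from starting or stopping anything (101 is the "action forbidden by
policy" exit code), which is handy in chroots:

printf '#!/bin/sh\nexit 101\n' > /usr/sbin/policy-rc.d
chmod +x /usr/sbin/policy-rc.d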

Regards,
Christian



Re: Diskless Debian stretch ISCSI boot (uefi) and shutdown hanging problem

2017-07-19 Thread Christian Seiler
Hi,

(I'm one of the maintainers of the open-iscsi package in Debian.)

On 07/19/2017 07:40 AM, Franz Angeli wrote:
> i have one diskless server able to boot with ISCSI, uefi is configures
> to reach iscsi target and volume correctly;
> 
> i installed Debian 9 with debian installer ad all works fine, at the
> end of installation process i remount root filesystem with (chroot
> /target) and edit initaramfs.conf with:
> 
> IP=10.10.200.150::10.10.200.1:255.255.255.0:ti**1.mk***.it:eno1
> 
> and after i update initramfs with:
> 
> update-initramfs -u
> 
> system boot correctly and works fine.
> 
> Problem is during shutdown, system hanging with:
> 
> a stop job is running for ifup for eno1
> 
> a stop job is running for Raise network interfaces
> 
> and i have to reset the server...
> 
> I know i can do the same with:
> 
> "ISCSI_AUTO=true" on /etc/iscsi/iscsi.initramfs
> 
> but i need a static IP configured as i do.

Do you perhaps also have something in /etc/network/interfaces or
/etc/network/interfaces.d, perhaps even DHCP configured? Because
if that takes over IP configuration and systemd kills the DHCP
client (which in turn removes the IP of the interface at
shutdown), then you'll see a hang because the network is gone
even though you still need it.

(ifupdown _should_ detect rootfs on iSCSI and not try to down
the interface by script, but the SIGTERM from systemd might
cause the dhcp client to drop the IP anyway.)

(Also note that you'd need to reboot twice after changing this
to test if that works.)

Regards,
Christian



Re: Stretch generates SLAAC IPv6 address even with /etc/network/interfaces set to manual static address

2017-07-05 Thread Christian Seiler
On 07/05/2017 08:09 PM, Eike Lantzsch wrote:
> On a Stretch client I'd like to have manually set static IPv6 addresses.
> But what I get are SLAAC addresses with $prefix + MAC-derived
> according to the IEEE-Tutorial EUI-64.

First of all: you can turn off the automatic addresses by configuring
the interface to not accept router advertisements. Just set the
accept_ra flag to 0:

iface enp3s0 inet6 static
    [other options]
    accept_ra 0

(See man 5 interfaces for details.)

That doesn't help you with the problem you have, but once that's fixed
this is obviously what you'd want as a setting.

> I can cope with a SLAAC address AND an additional manual static
> address on the same interface - no problem - but what is going on
> here? Why is the manual address in the file
> "/etc/network/interfaces" ignored?

That shouldn't be the case.

> excerpt of my /etc/network/interfaces:
> [...]
> iface enp3s0 inet6 static
>   address 2001:470:7075:e2::21
>   netmask 64
>   gateway 2001:470:7075:e2::

That looks fine.

> Anyway I get ip addr:

What did you do after changing /etc/network/interfaces? Did you
reboot?

> Or would
> iface enp3s0 inet6 static
>   address 2001:470:7075:e2::21/64
>   gateway 2001:470:7075:e2::
> be any better?

That should be equivalent and make no difference.

So I just tested the precise IPv6 configuration of yours (albeit without
a router on the network that sends advertisements, and a different IPv4
configuration so I could still access it) in a Stretch VM and it worked
just fine after a reboot (or at least after a restart of the networking
init script). Worked in the sense that the address and default route
got assigned, not that it worked in the sense of connectivity here.

To debug this further:

 - try a clean reboot
 - if it still doesn't work, if you're running systemd, try (as root):
  journalctl -u networking
 - try the command ifup enp3s0 and see if that works temporarily
   (ifup enp3s0 is what should happen at boot automatically)

Regards,
Christian



Re: Replace systemd

2017-07-03 Thread Christian Seiler
Hi,

On 07/04/2017 02:06 AM, Jason Wittlin-Cohen wrote:
> I assume this will work fine for a server system, but will it work on
> a desktop system using GNOME? From what I've read, GNOME has several
> systemd dependencies, but it's not clear to me whether this requires
> systemd to be used as init, or merely that systemd's packages must be
> installed.

For both Jessie and Stretch, the following holds true:

 - GNOME requires systemd-logind's interfaces to work. (Or any
   alternative that implements the same DBus interface, but none
   exists in Debian at the moment.)

 - systemd-logind is part of the 'systemd' package, which must be
   installed.

 - systemd-logind requires DBus methods of systemd, so you will
   either need systemd running as the init system (the 'systemd-sysv'
   package) _OR_ an alternative implementation of these interfaces
   to make logind work on non-systemd systems

 - the 'systemd-shim' package provides an alternative
   implementation of the interfaces required by systemd-logind
   so that it may be used on non-systemd systems

 - this means that you can indeed run GNOME without systemd as
   the init system (i.e. without the 'systemd-sysv' package) on
   Jessie and Stretch, if you have both the 'systemd' and
   'systemd-shim' packages installed (a sketch follows this list)

 - however, there will be some slight degradation in some corner
   cases of functionality
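
In practice that boils down to something like the following (a
sketch - double-check the package list before trying it on a real
system):

apt-get install systemd systemd-shim sysvinit-core

Installing sysvinit-core replaces systemd-sysv as the init system
while keeping the 'systemd' package (and thus logind) installed.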

For the future (Buster and onwards), note that this all hinges
on systemd-shim continuing to implement the required interfaces
to make systemd-logind work _or_ someone writing and packaging
an alternative to systemd-logind that provides the same DBus
interfaces. It is currently not completely clear whether either
of these is going to happen: there is no alternative to logind
packaged (I know some people have been working on an alternative
that implements the same DBus interfaces, but I don't know the
status of that) and systemd-shim is currently an orphaned
package (both upstream and in Debian), so it's unclear how well
supported this is going to remain. (Of course, if there are no
significant changes between how systemd and logind talk to each
other, this might not be an issue at all, because stuff that
currently works will continue working.)

Regards,
Christian



Re: Clarifying what 'systemd' actually means

2017-07-02 Thread Christian Seiler
On 07/02/2017 01:37 PM, Alessandro Vesely wrote:
> On Sun 02/Jul/2017 12:37:33 +0200 Christian Seiler wrote:
>> This bug has nothing to do with systemd as the init system, it's in an
>> optional component that's disabled by default on Debian. In principle,
>> I suspect that resolved could also be used on sysvinit, if you really
>> wanted to, though I haven't tried it.
>>
>> Furthermore, the systemd versions of Wheezy and Jessie are too old to
>> already include systemd-resolved, so they are not affected at all.
> 
> Yet, there's a man page:
> https://manpages.debian.org/jessie/systemd/systemd-resolved.service.8.en.html

Oh, my bad, it's shipped in Jessie, but in a very early state of
development. I suspect that the functionality that has the bug
wasn't yet implemented in the Jessie version. I'm not
completely sure about resolved, but networkd in Jessie is only
shipped as a technical preview (see Jessie's release notes), by
the way, so people shouldn't be using Jessie's networkd for
production purposes anyway.

> I'd be curious on why tools which don't even require that systemd be PID1 go
> under the systemd umbrella.

The technical reason (which has been mentioned multiple times by
the systemd developers in various places) is that these utilities
share code with systemd when it comes to basic utility functions,
such as configuration file parsing, or wrappers around common
tasks that are quite cumbersome to do with just the standard C
library. And since they don't want to provide a stable API and
ABI for C utility helper functions, these programs are put
together in the same repository.

The reasoning is very similar to how the Linux kernel works,
albeit on a much smaller scale. Sure, some things are definitely
things that do belong in the core kernel, but there are a _ton_
of drivers and other functionality that technically could be
managed outside of the kernel tree. But for the very same reason
(the kernel developers don't want to have to guarantee a stable
ABI for modules) people are encouraged to only have out of tree
kernel modules during the initial development, and that these
should be merged into the mainline kernel at some point.

How somewhat related projects should or shouldn't be bundled in
the same repository has always been something that doesn't have
a clear right or wrong answer, and a lot of that will depend on
the personal tastes of the people working on this.

For example: why are most standard shell utilities all bundled
together in the 'coreutils' package, but e.g. find is in the
separate 'findutils' package? And 'ps' in the 'procps' package?
Wouldn't it also make sense to either unbundle all utilities
into their own tiny packages, or bundle them all into a large
global package? And the answer to that is simply: if you had to
develop all of these tools from the ground up again, then you
might choose one of the other schemes, or split it up differently
altogether. But because of historic reasons, and because of
how the developers of the respective utilities feel, the split
is the way it is at the moment. There simply is no right or wrong
answer here.

>  Doesn't that contribute to make systemd appear
> like some kind of conspiracy?

I don't know about the systemd developers, but I personally
don't think it's fruitful to make decisions based on the
irrational beliefs of other people. If a person came up to me
in the supermarket and said I shouldn't buy a particular brand of
milk because the company is affiliated with the Illuminati, I'm
not going to base my decision whether I'm going to buy that brand
or not on that statement.

(To clarify: I'm not saying that people who don't like systemd
can't be rational, but I do think that anyone who claims to see
a conspiracy here is not taking a rational position.)

> BTW, is resolved one of them or does it require systemd?

I suspect it doesn't need systemd as PID1, but I'm not sure and
I haven't tried it. I'm pretty sure it doesn't come with an init
script though, so that you'd have to write yourself if you
wanted to use it without systemd.

Regards,
Christian



Clarifying what 'systemd' actually means (was: Re: Remotely exploitable bug in systemd (CVE-2017-9445))

2017-07-02 Thread Christian Seiler
On 07/02/2017 11:24 AM, Michael Fothergill wrote:
> ​Could this be exploited to force people to use sysvinit instead of systemd ?

This bug has nothing to do with systemd as the init system, it's in an
optional component that's disabled by default on Debian. In principle,
I suspect that resolved could also be used on sysvinit, if you really
wanted to, though I haven't tried it.

Furthermore, the systemd versions of Wheezy and Jessie are too old to
already include systemd-resolved, so they are not affected at all.

In general, I think it's helpful for everyone to take a mental note
that 'systemd' can mean two things:

 1. The init binary itself. (PID 1)

 2. A project that implements various things in userspace
that includes the init binary, but also an assortment
of other tools.

In fact, it might be very helpful to draw the following diagram:

systemd project
|
+- init system
|  |
|  +- systemd binary (PID 1)
|  +- generators (for supporting /etc/fstab, etc.)
|  +- journald
|  +- helpers (e.g. tmpfiles or sd-modules-load)
|  +- user tools (systemctl, systemd-analyze, ...)
|
+- other tools (some require that systemd be PID 1, others don't;
|  |    these are all optional when using systemd as the init
|  |    system, and there are other projects providing similar
|  |    functionality)
|  |
|  +- resolved
|  +- nspawn
|  +- sysusers
|  +- networkd
|  +- ...
|
+- logind (it depends on the definitions where to put it: it
|       requires systemd's interfaces to run, but there is an
|       alternative implementation (systemd-shim) that mimics them
|       so it can be used on sysvinit systems; this - or rather,
|       its interfaces - is what's mainly required by GNOME and
|       others)
|
+- udev (doesn't require systemd as the init system, but systemd
        requires it, except when run in a container)

In Debian most of the stuff in the "other tools" part is not enabled
by default, so unless you've explicitly chosen to enable it, it's very
likely that your system is NOT going to be affected by any bug in
there.

Regards,
Christian



Re: [Stretch]Typing ""clear" on terminal restricts the scrollback to only a screenful

2017-06-21 Thread Christian Seiler

Hi,

Am 2017-06-21 11:57, schrieb Avinash Sonawane:
On Wed, Jun 21, 2017 at 1:36 PM, Christian Seiler <christ...@iwakd.de> 
wrote:

So what happened is that 'tput clear' in Stretch now behaves the
same as 'clear', while the version in Jessie of 'tput clear'
wouldn't clear the scrollback buffer.


This suggests that the version of 'clear' in Jessie used to clear the
scrollback buffer.
But that's untrue. I have always used 'clear' to clear the screen in
Jessie. And it never cleared the scrollback buffer. (I'm not sure
about 'tput clear' though. Haven't used that much)


I see two possible explanations for the behavior you're seeing:

 a. The terminal program you're using doesn't like the additional ';'
    character in the escape sequence that 'clear' on Jessie was
    sending, so it ignored the escape sequence entirely. In that case
    it only interpreted '\033[H\033[2J'.

    Reminder: 'clear' on Jessie would send:   \033[3;J\033[H\033[2J
              'clear' on Stretch would send:  \033[3J\033[H\033[2J
              On the Linux text console \033[3J and \033[3;J both
              clear the scrollback buffer completely, at least with
              kernel 4.9.

 b. The terminal program you're using changed behavior between
    Jessie and Stretch. For example, KDE's 'konsole' terminal program
    will _not_ clear the scrollback buffer with either one of these
    commands:

        printf '\033[3J\033[H\033[2J'
        tput clear
        clear

    It may be that your terminal program used to ignore the sequence
    as well in Jessie, but implemented it in Stretch.

    (Funnily enough, just found out, KDE's 'konsole' will clear it
    if you just issue \033[3J - but once you issue a \033[2J again,
    it will restore the buffer once more. Really weird.)

You can test which one of these is the case by using the following
commands:

printf '\033[3;J\033[H\033[2J'

printf '\033[3J\033[H\033[2J'

If the first one doesn't erase the scrollback buffer, but the second
one does, then you know that your program doesn't support the escape
sequence that Jessie's clear sent, but does support the one that
Stretch sent. (Case a.) If in both cases the scrollback buffer is
erased, then it's likely that the behavior of your terminal program
changed between Jessie and Stretch. (Case b.)

Regards,
Christian



Re: [Stretch]Typing ""clear" on terminal restricts the scrollback to only a screenful

2017-06-21 Thread Christian Seiler

Am 2017-06-21 10:06, schrieb Christian Seiler:

'clear' and 'tput clear' are identical in Stretch, both are from
ncurses-bin. This was not true in Jessie though, ncurses did
change behavior between Jessie and Stretch.


By the way, in case anyone was wondering:

Upstream changelog about how clear and tput clear are now
identical:
http://invisible-island.net/ncurses/NEWS.html#t20161022

Clearing the scrollback buffer is actually desired behavior,
see also:

http://invisible-island.net/ncurses/NEWS.html#t20130622
https://bugzilla.redhat.com/show_bug.cgi?id=815790
https://unix.stackexchange.com/questions/87469/clearing-the-old-scrollback-buffer

And this is apparently documented now:

http://invisible-island.net/ncurses/NEWS.html#t20161119
https://manpages.debian.org/stretch/ncurses-bin/clear.1.en.html

Regards,
Christian



Re: [Stretch]Typing ""clear" on terminal restricts the scrollback to only a screenful

2017-06-21 Thread Christian Seiler

Am 2017-06-21 05:57, schrieb Avinash Sonawane:
On Wed, Jun 21, 2017 at 3:26 AM, Larry Dighera  
wrote:



Try: tput clear


Right. My bad! I already did that. But I have always considered
`clear` and `tput clear` to be the same so I didn't mention it before.
Anyways I got exactly same behavior with `tput clear`.


'clear' and 'tput clear' are identical in Stretch, both are from
ncurses-bin. This was not true in Jessie though, ncurses did
change behavior between Jessie and Stretch.

The following escape sequences are printed by clear / tput clear:

            |  Jessie                 |  Stretch
------------+-------------------------+----------------------
clear       |  \033[3;J\033[H\033[2J  |  \033[3J\033[H\033[2J
tput clear  |  \033[H\033[2J          |  \033[3J\033[H\033[2J


To decode them (man 4 console_codes):

 \033[3J or \033[3;J
   clear the entire scrollback buffer
 \033[H
   go to position on the screen,
   default: top left corner
 \033[2J
   clear entire screen

So what happened is that 'tput clear' in Stretch now behaves the
same as 'clear', while the version in Jessie of 'tput clear'
wouldn't clear the scrollback buffer.

Furthermore, whether the scrollback buffer is actually cleared is
also up to the program managing the buffer; the Linux text console
respects that, but e.g. 'Konsole' from KDE doesn't, there \033[3J
has the same effect as \033[2J - the scrollback buffer remains
intact regardless. (At least in the default settings, haven't
checked if that behavior can be changed.) It could also be that
the terminal emulator you're using has also changed its behavior.

In any case: if you want 'clear' to act the same way 'tput clear'
acted in Jessie, you can simply add

alias clear="printf '\033[H\033[2J'"

to your .bashrc (or similar if you're using a different shell),
that should get you the behavior you had beforehand. [1]

Regards,
Christian

[1] Technically this alias is not 100% correct, as both 'clear'
and 'tput clear' will interpret the TERM environment variable and
make sure that the sequences they send out are actually understood
by the terminal. However, in practice any terminal you're going to
use this on is going to support this specific sequence, so you
don't need to worry about these details. In case you're accessing
from ancient terminal hardware (think 70s mainframes or similar)
this alias might not work though.



Re: Stretch--no network interfaces

2017-06-18 Thread Christian Seiler
On 06/18/2017 09:22 PM, Brian wrote:
> On Sun 18 Jun 2017 at 20:54:46 +0200, Christian Seiler wrote:
> 
>> On 06/18/2017 08:25 PM, pplaw wrote:
>>> The network I'm on at the moment hands out DHCP addresses.  But, sometimes,
>>> I'll hard-code the IP address for the computer (with ifconfig:  ifconfig 
>>> (eth0--but in this case) enx687f74158a8a 10.x.x.x netmask 255.255.255.0;
>>> route add default gw 10.x.x.x).  Since this is a new install of Stretch,
>>> I haven't been able to download the ifconfig package; and if I type ifup
>>> enx687f74158a8a (or for my wireless card, wlp1s0), I get:  "unknown in-
>>> terface.
>>
>> In the Debian release notes there's a section about the fact that
>> ifconfig has been deprecated for well over a decade now, and is not
>> included in new installs anymore starting with Stretch:
>>
>> https://www.debian.org/releases/stretch/amd64/release-notes/ch-information.en.html#iproute2
> 
> ifconfig isn't the problem. The OP installed Stretch; stretch doesn't
> have ifconfig.

Stretch does have ifconfig, just not by default. And I mentioned that
because the OP mentioned it and I wanted to clarify this. Please read
again the part I quoted from the OP. ;-)

>> If you want to temporarily add an IP to a given interface, you can
>> use the 'ip' utility (this also works in older Debian versions):
> 
> [Good advice snipped]
> 
> Do you not find it disturbing that someone can install Debian and not
> end up with a network connection?

Well, that depends on further details here: was the network configured
at all during installation? If you use one of the DVDs, for example,
you don't need to configure the network in the installer to install
Debian onto the hard disk - but if you didn't configure it in the
installer, you'll have to configure it manually in the running system.

But sure, if you can show that network properly configured in
the installer fails to lead to a system with configured networking,
then please report a bug with details and steps to reproduce, so that
this can be fixed in 9.1.

Regards,
Christian



Re: Stretch--no network interfaces

2017-06-18 Thread Christian Seiler
On 06/18/2017 08:25 PM, pplaw wrote:
> The network I'm on at the moment hands out DHCP addresses.  But, sometimes,
> I'll hard-code the IP address for the computer (with ifconfig:  ifconfig 
> (eth0--but in this case) enx687f74158a8a 10.x.x.x netmask 255.255.255.0;
> route add default gw 10.x.x.x).  Since this is a new install of Stretch,
> I haven't been able to download the ifconfig package; and if I type ifup
> enx687f74158a8a (or for my wireless card, wlp1s0), I get:  "unknown in-
> terface.

In the Debian release notes there's a section about the fact that
ifconfig has been deprecated for well over a decade now, and is not
included in new installs anymore starting with Stretch:

https://www.debian.org/releases/stretch/amd64/release-notes/ch-information.en.html#iproute2

If you want to temporarily add an IP to a given interface, you can
use the 'ip' utility (this also works in older Debian versions):

ip link set $DEVICE up
ip addr add 10.x.x.x/24 dev $DEVICE
ip route add default via 10.x.x.y

The question why 'ifup' doesn't work in your case: 'ifup' is only
a tool that is used in conjunction with /etc/network/interfaces
(or /etc/network/interfaces.d/*). So in order for ifup to work,
you need to create an entry in /etc/network/interfaces for your
network interface, for example:

auto enx687f74158a8a
iface enx687f74158a8a inet static
   address 10.x.x.x/24
   gateway 10.x.x.y

And then you can do 'ifup enx687f74158a8a'. (And in that case
the interface will also be configured when the system is rebooted.)

(The 'ifup' part is also the same in older Debian versions.)

Regards,
Christian



Re: shared objects vs. executables in Stretch

2017-04-13 Thread Christian Seiler
Hi there,

On 04/13/2017 09:16 PM, Neoklis Kyriazis wrote:
> Having just completed installation of development tools in my
> first installation of Debian (stretch), I tried to compile the
> sources of some of my own apps but I end up with a shared library
> object instead of an ELF executable! My apps are on my web site
> in the signature below, and they compile well in Void Linux. Its
> a puzzle for me but I am a complete beginner in Debian so it may
> have something to do with my installation's setup.

Debian activates PIE by default starting from Stretch in the
compiler. This allows ASLR (address space layout randomization)
to work not only for external library code, but also for the
executable code itself.

The thing is: internally, PIE executables don't exist in ELF.
There are only executables (with fixed addresses for their
code and no dynamic relocations) and shared objects (which are
relocated by the dynamic linker).
generate a shared library when PIE is enabled. Shared libraries
can be executable (try running /lib/x86_64-linux-gnu/libc.so.6
on your system, which is the standard C library - that will
print some version information), so that's how PIE executables
work.

Try doing:

file /bin/ls

and that will also tell you it's a shared object. Some binaries
in Debian have not been compiled with PIE (for various reasons),
so they will still appear as executables.

See also:
http://stackoverflow.com/questions/34519521/gcc-creates-a-shared-object-instead-of-an-executable-binary

You can link your code with -no-pie to get a regular ELF
executable again - but that won't benefit from ASLR for the
executable's own code.
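
For example (a sketch, with a hypothetical hello.c):

gcc -o hello hello.c
file hello        # reports "... shared object ..." (PIE default)
gcc -no-pie -o hello hello.c
file hello        # reports "... executable ..."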

Regards,
Christian



Re: Cannot enable Emulate3Buttons in stretch

2017-04-13 Thread Christian Seiler
On 04/13/2017 06:36 PM, Christian Seiler wrote:
> Debian Stretch uses libinput for handling input by default, even
> on Xorg, instead of the default evdev driver that was used
> previously. The option for the middle mouse button emulation
> is disabled.

Err, I meant "renamed", not "disabled", sorry. The advice for
fixing it still stands though. :)

Since this change in the default driver selection for Xorg is
something that many people might not expect, I've also reported
a bug against Debian's release notes so that when Stretch is
released people don't face the same surprise you did:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=860259

Regards,
Christian



Re: Cannot enable Emulate3Buttons in stretch

2017-04-13 Thread Christian Seiler
On 04/13/2017 05:56 PM, Neoklis Kyriazis wrote:
> I have installed (for the first time) Debian Testing (stretch) on my
> laptop and tried to enable Emulate3Buttons for my 2-button Kensington
> trackball. I tried a lot of solutions I found by searching but none
> seems to work. I have the same problem on my desktop computer, after
> upgrading my existing installation of Void Linux. The emulation used
> to work before the upgrade, which I believe installed the latest
> version of X.org server. 
> 
> 
> I could not get any help on the Void Linux forums and this is why I 
> 
> installed Debian on the laptop - but no luck. Its obvious that the new
> version of Xorg either does not recognize the relevant options for
> setting up Emulate3Buttons or perhaps support for this is disabled by
> default.

Debian Stretch uses libinput for handling input by default, even
on Xorg, instead of the default evdev driver that was used
previously. The option for the middle mouse button emulation
is disabled.

Create a file /etc/X11/xorg.conf.d/41-middle-emulation.conf with
the following contents:

Section "InputClass"
Identifier "mouse"
MatchIsPointer "on"
Driver "libinput"
Option "MiddleEmulation" "on"
EndSection

Restart your X server (i.e. logout and login again) and it
should work.

Regards,
Christian



Re: A minimal relational database in Debian?

2017-02-27 Thread Christian Seiler
On 02/27/2017 02:17 PM, Richard Owlett wrote:
> The last time I needed a relational database my employer was using
> dBaseII on a MS-DOS machine. What is a functional equivalent in the
> Debian repository?

Well, I've never used DBaseII, but if you want a small relational
database, take a look at SQLite. There are bindings into most
languages and there's a command line client. The storage is simple
(from an end-user perspective): each database is just a different
file. There's also a GUI called 'sqlitebrowser' which is in Debian,
but I've never used that. There are also other GUIs out there not
part of Debian, which you can install manually.
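
To give you a flavor, a minimal session with the command line client
might look like this (a sketch; file and table names are made up):

$ sqlite3 addresses.db
sqlite> CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
sqlite> INSERT INTO people (name, city) VALUES ('Alice', 'Berlin');
sqlite> SELECT * FROM people;
1|Alice|Berlin
sqlite> .quit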

> I looked at at LibreOffice Base. It was unusable as its "help" system
> provided no intrinsic way to increase fonts to a legible size.

No idea about LibreOffice Base, and I don't remember ever opening
the help window in LibreOffice, but often [Ctrl] + [+] will help
increase font sizes.

Regards,
Christian



Re: how to compute predictable network interface names?

2017-02-24 Thread Christian Seiler
On 02/24/2017 10:10 AM, Harald Dunkel wrote:
> On 02/23/2017 04:25 PM, Christian Seiler wrote:
>>
>> There's a policy which are going to be preferred. man 5 systemd.link
>> tells you what the options are and /lib/systemd/network/99-default.link
>> tells you what the default setting is (the first successful one is
>> used).
> 
> Of course I stumbled over this one:
> 
> % man 5 systemd.link
> No manual entry for systemd.link in section 5
> 
> Now I got confused: Who is responsible for renaming the NIC names?
> Is this a systemd feature, is this the job of udev, or are the NICs
> renamed by the kernel very early at boot time? Shouldn't I get the
> same predictable name for eth0, no matter what?

udev is responsible.

The kernel drivers give the NIC a name initially, and they can mark
that name as being "persistent" or not. The vast majority of drivers
do not mark the name as being persistent.

The names the kernel returns will typically be something like eth0,
etc.

udev will then rename the device once it encounters it.

In newer udev versions, it will use some (but not all) settings from
systemd.link files. The other settings are interpreted by
systemd.networkd. (And if you don't use that or don't have that
installed, they will be ignored.) That's also the reasnon why those
files are in a systemd directory, even though udev interprets some
parts of them - since the matching of interfaces is identical for
the purposes of udev and systemd-networkd, systemd developers decided
it would be simpler to have just one configuration file that is read
by both. (udev and systemd are both in the same repository, even
though you can use udev without systemd.)

>> On my Stretch system that is:
>>
>> NamePolicy=kernel database onboard slot path
>>
> 
> AFAIU
>   NamePolicy=kernel
> 
> makes sure that net.ifnames=0 given on the kernel command line
> works. Is this correct?

No. net.ifnames=0 will have udev completely ignore NamePolicy here
and not rename the interface at all.

NamePolicy=kernel will most likely never trigger on any system you
have (it has never triggered on any system I have), it's for the
case where the kernel driver says "yeah, the name I chose is already
a name that's going to be persistent". This will never be interfaces
like eth0, etc. I suspect this is mostly for SoC-type devices which
have one onboard network interface that's fixed and thus will always
have the same name.

>> 'kernel' and 'database' are likely going to fail in most cases (kernel
>> means the kernel indicates that the name used so far is already
>> predictable, which it only does for very special hardware, probably
>> embedded or similar, and database means that there's an entry in the
>> udev hardware database, which you'd have to do manually, because I
>> don't know of any upstream rules), so basically it's the following
>> logic:
>>
>>  - first try ID_NET_NAME_ONBOARD
>>  - if that doesn't exist, try ID_NET_NAME_SLOT 
>>  - if that doesn't exist, try ID_NET_NAME_PATH
>>
> Not to forget
> 
> - if that doesn't exist, use INTERFACE

From the code logic, sure, but you'll be hard-pressed to find an
interface that doesn't have ID_NET_NAME_PATH set. ;-)

Regards,
Christian



Re: how to compute predictable network interface names?

2017-02-23 Thread Christian Seiler
On 02/23/2017 04:16 PM, Harald Dunkel wrote:
> On 02/16/2017 12:47 PM, Christian Seiler wrote:
>>
>> On a system with predictable names running? Or on a system
>> pre-upgrade?
>>
> 
> Its more "pre-installation". I boot a USB stick and run
> my own installer (using debootstrap or creating a clone).
> The NIC name is needed to setup /etc/network/interfaces.
> I know how the interfaces are named using the old scheme,
> but the predictable names are hard to guess.
> 
>> Because if you have a system that's being upgraded at the
>> moment, the following command _might_ work _after_ you've
>> upgraded udev and _before_ you've rebooted the system.
>>
>> udevadm info /sys/class/net/eth4
>>
>> Look at ID_NET_NAME there.
>>
> 
> I found 3 for eth0 on my desktop PC:
> 
>   E: ID_NET_NAME_MAC=enx54bef70930bd
>   E: ID_NET_NAME_ONBOARD=eno1
>   E: ID_NET_NAME_PATH=enp0s25
> 
> For a server with 6 NICs I got for eth4
> 
>   E: ID_NET_NAME_MAC=enx0cc47a860566
>   E: ID_NET_NAME_PATH=enp4s0f2
>   E: ID_NET_NAME_SLOT=ens261f2
> 
> A wild guess would be it is "ID_NET_NAME_PATH" unless there is
> a "ID_NET_NAME_ONBOARD" ? I understand that this is the fragile
> part.

There's a policy which are going to be preferred. man 5 systemd.link
tells you what the options are and /lib/systemd/network/99-default.link
tells you what the default setting is (the first successful one is
used). On my Stretch system that is:

NamePolicy=kernel database onboard slot path

'kernel' and 'database' are likely going to fail in most cases (kernel
means the kernel indicates that the name used so far is already
predictable, which it only does for very special hardware, probably
embedded or similar, and database means that there's an entry in the
udev hardware database, which you'd have to do manually, because I
don't know of any upstream rules), so basically it's the following
logic:

 - first try ID_NET_NAME_ONBOARD
 - if that doesn't exist, try ID_NET_NAME_SLOT 
 - if that doesn't exist, try ID_NET_NAME_PATH

Regards,
Christian



Re: how to compute predictable network interface names?

2017-02-16 Thread Christian Seiler
On 02/16/2017 12:24 PM, Harald Dunkel wrote:
> I understand that the predictable nic names can be turned off
> using
> 
>   net.ifnames=0
> 
> on the kernel command line, but I wonder if there is a shell
> script to actually predict the "enpYsZ" from the old style
> "ethX" initially assigned by the kernel? Something like
> 
>   % predict_nic eth4
>   enp12s1

On a system with predictable names running? Or on a system
pre-upgrade?

Because if you have a system that's being upgraded at the
moment, the following command _might_ work _after_ you've
upgraded udev and _before_ you've rebooted the system.

udevadm info /sys/class/net/eth4

Look at ID_NET_NAME there.
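
Another option might be to ask the udev builtin that computes these
names directly (a sketch, equally untested here):

udevadm test-builtin net_id /sys/class/net/eth4

That prints the ID_NET_NAME_* properties the builtin would assign.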

Can't really test that though, since I don't have a setup
with the old scheme that I still need to upgrade, so this
might not work at all.

If your system has already switched to the new scheme,
then 'eth4' is meaningless once the devices have been
renamed, so you don't really know.

Also, the reason why the new scheme was introduced is that
on some systems ethX is not reproducible: the order in
which the devices are found by the kernel can change
(especially if you have USB devices, but also in other
cases), so 'eth4' is often ambiguous anyway.

If you have some other information on identifying the
device (such as the device's MAC address), you can identify
it that way by looking for the interface with that MAC and
name it something yourself.

For example, you could create a file [1]
/etc/systemd/network/10-uplink.link
with the following contents:

[Match]
MACAddress=xx:yy:zz:gg:hh:ii

[Link]
Name=uplink

This way the network interface with the specified MAC
address is now called "uplink" instead of "enp12s1" or
whatever it was previously. You can also match by PCI
paths (udevadm info's ID_PATH attribute) and matching
supports wildcards. See the manpage for systemd.link
for further details.

Regards,
Christian

[1] This works even on non-systemd systems with udev because
*.link files are interpreted by a udev builtin and _not_
by systemd, so technically the location is a bit
misleading.



Re: Kernel Update on Stretch

2017-02-14 Thread Christian Seiler
Hi,

On 02/14/2017 12:58 AM, Daniel Bareiro wrote:
> Some time ago I read that Linux 4.x incorporates the feature to be
> updated without requiring a restart of the operating system.

They incorporated parts of that. There are still some unsolved issues.

See for example this article from last November about the topic:
https://lwn.net/Articles/706327/

So there's no complete upstream support for this yet; there are
several distributions that roll their own variants.

> Since stretch incorporates a kernel of the 4.x series, this would imply
> that we can update the kernel package and avoid reboots?

No. There are two components to this:

 1. The kernel must support loading live patches

This is partially true for the kernel that will come with
Stretch (CONFIG_LIVEPATCH=y), but (see the LWN article I linked)
it doesn't actually work safely yet.

 2. Someone needs to prepare the live patches. Currently nobody in
Debian is doing that.

You could do it yourself with the right tooling (look at kpatch
and kgraft), but preparing these kinds of patches is very
complicated. (And that still doesn't solve the problem that
the current patch loading support is unsafe, see 1.)

Further reading:

https://lists.debian.org/1460472961.25201.200.ca...@decadent.org.uk

Depending on whether there is movement in the upstream kernel there
is a chance this might be a thing in Buster, but it definitely
won't work out of the box in Stretch. You'll still need to reboot.

Regards,
Christian



Re: Where are WiFi passwords (WPA keys) stored?

2016-12-06 Thread Christian Seiler
On 12/06/2016 09:26 PM, Brian wrote:
> On Tue 06 Dec 2016 at 11:14:56 +0100, Christian Seiler wrote:
> 
>> Note that when using NetworkManager, it configures its own
>> instance of wpa_supplicant, so you should never touch a
>> configuration file for wpa_supplicant yourself in this kind of
>> setup.
>>
>> (You could of course stop using NetworkManager and configure
>> wpa_supplicant manually, but I really wouldn't recommend that;
>> I don't think wpa_supplicant is designed in a way that makes
>> direct end-user usage easy - there's a reason why NetworkManager
>> exists instead of desktop environments communicating directly
>> with wpa_supplicant.)
> 
> Direct interaction with the supplicant is not easy?

If you want to dynamically connect to a network that's not in
your wpa_supplicant.conf, then yes, that's not easy to do via
wpa_cli. (It's doable, just not easy or user-friendly.) If
you then want to combine a dynamically-added configuration
with something like DHCP, then it's even worse.

Of course, if you edit the wpa_supplicant.conf every time you
want to connect to a new network, and tear down and restart
the entire wifi interface, sure, that'll work, but it doesn't
fit well into the WiFi model.

That all said: I'm not a huge fan of NetworkManager, I think
some aspects of it are not well enough thought out to my
taste - but it does its job in the case of WiFi, and it does
it well, better than the alternatives I've seen so far.

> However, it is worth acknowledging that Debian has the most complete
> integration of ifupdown with wpa_supplicant you will find. It also has
> excellent documentation to help with explaining this integration. There
> are some things Debian does so well that they are unsurpassable.

Yes, and the primary use case I see for this are headless
servers or similar that are connected via WiFi, where the
connection rarely changes. I would not want to use that on a
laptop though, because you never know when you'll want to
connect to a different network.

> Just in case you think you
> cannot point and click when you have direct enduser control over the
> supplicant, think again. There is wpa-gui.

Last time I tried wpa_gui troubleshooting with it was a huge
mess, and I had to resort to wpa_cli to actually get some
sensible information about what was going on. Maybe that has
improved since (it's been a couple of years), but my
experiences with it have been bad.

Regards,
Christian



Re: Where are WiFi passwords (WPA keys) stored?

2016-12-06 Thread Christian Seiler
On 12/06/2016 09:04 AM, Robert Latest wrote:
> Not in /etc/wpa_supplicant/wpa_supplicant.conf, despite suggestions in
> every bit of documentation that I got my hands on. In fact, that file
> doesn't even exist on my jessie system. Nevertheless, when I
> configured the WiFi network using some GUI tool in the XFCE desktop,
> it worked.

Disclaimer: I'm not a user of XFCE, so if that does something
really weird, this may not apply.

However, most graphical tools interface with NetworkManager, and
that stores its configuration in /etc/NetworkManager.

You'll likely find your password stored in
/etc/NetworkManager/system-connections/$SSID
(file only readable/writable as root; also please don't modify it
while NetworkManager is running, it will overwrite it without
warning; modifying it when NetworkManager is stopped is fine
though)

where you replace $SSID with the SSID of your WiFi.
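
Such a file is a simple ini-style "keyfile"; for a WPA-PSK network
the relevant part looks roughly like this (a sketch - group names
can vary between NetworkManager versions, and the values here are
examples):

[802-11-wireless]
ssid=MyNetwork

[802-11-wireless-security]
key-mgmt=wpa-psk
psk=the-actual-passphrase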

On some desktops (e.g. GNOME) the password can be stored in the
user's personal keyring/wallet/password manager instead, but
then you need to be logged in for NetworkManager to have access
to the password - which is not true in your case because you
mentioned:

> Even after a reboot, with no desktop running, I could ssh
> into the system via WiFi.

So that means that NetworkManager has the password stored
directly.

Note that when using NetworkManager, it configures its own
instance of wpa_supplicant, so you should never touch a
configuration file for wpa_supplicant yourself in this kind of
setup.

(You could of course stop using NetworkManager and configure
wpa_supplicant manually, but I really wouldn't recommend that;
I don't think wpa_supplicant is designed in a way that makes
direct end-user usage easy - there's a reason why NetworkManager
exists instead of desktop environments communicating directly
with wpa_supplicant.)

> BTW, I did find a wpa_supplicant.conf file in some deep subdir of
> /etc/dbus-1/...

That's just the DBus policy, that doesn't configure how
wpa_supplicant reacts, but only how the DBus daemon handles
the access policy for wpa_supplicant. (DBus is a communication
bus used on Linux and other systems; most desktop environments,
including XFCE, use it internally for some things.) Unless you
know what you're doing, I wouldn't touch that, otherwise you
could end up stopping NetworkManager from communicating with
wpa_supplicant and then your WiFi could stop working altogether.

Regards,
Christian



Re: libvirt-bin on Stretch

2016-11-12 Thread Christian Seiler
On 11/12/2016 11:15 PM, Van Nelle wrote:
> I am trying to follow this https://wiki.debian.org/KVM tutorial but i cant
> find libvirt-bin on Stretch.
> 
> Is there any replacement of this package

The package was split into two parts:

libvirt-daemon-system
libvirt-clients

In most cases you probably want both of them at the same time.
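
So instead of installing libvirt-bin you would run:

apt-get install libvirt-daemon-system libvirt-clients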

While the Debian packaging was changed, the configuration
and administration of libvirt is still very similar to that
of previous versions, so the rest of the tutorial is probably
going to still work on Stretch.

Regards,
Christian



Re: dd - proper use or more suitable program

2016-11-11 Thread Christian Seiler
On 11/11/2016 10:38 PM, Richard Owlett wrote:
> I was wondering about that. 
> https://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html is
> not first time user friendly. Will re-read after a good night's
> sleep. Will also look for appropriate tutorials. Suggestions?

Well, I would suggest not dumping the result on another partition
but rather into an image file. In that case you'd have the old
drive (not mounted), let's call it /dev/sda, and the new drive
with a partition on it and mounted on /mnt with sufficient free
disk space there.

In the simplest case you'd do:

ddrescue /dev/sda /mnt/defective_drive.img /mnt/defective_drive.log

If the drive is farther gone and has quite a few defective
sectors you may get better results if you use direct disk access
for the phase where you try to read defective sectors. In that case
you'd copy the bulk of the data while disabling the scraping phase
manually:

ddrescue -n /dev/sda /mnt/defective_drive.img /mnt/defective_drive.log

And then use direct access to scrape the rest:

ddrescue -d /dev/sda /mnt/defective_drive.img /mnt/defective_drive.log

Note that ddrescue assumes a default sector size of 512 bytes,
which is likely to be correct for older hard drives smaller than
2 TiB. If you have a newer hard drive, especially if it's rather
large, the physical sector size could be 4096 bytes instead.
(You can check the physical sector size of your disk by running
hdparm -I /dev/sda - that will tell you a lot of things about
your drive, among them the physical sector size in bytes. Note
that the physical sector size is relevant here, not the logical
one.)

In addition, you may also want to specify a number of retries
when trying to read defective sectors. (Default is 0.) You can
do that with the -r flag, e.g. -r3 for three retries per sector.

To recap:

 - Simplest way of calling the program is

   ddrescue /dev/sda output_image_file log_file

 - If the condition of your hard drive is a bit worse, you might
   achieve better results with:

   ddrescue -n /dev/sda output_image_file log_file
   ddrescue -d -r2 /dev/sda output_image_file log_file

   (-r2 is for 2 retries per defective sector, you can change
   that number; you can also call the second ddrescue command
   again in case you want to do more retries.)

 - If you have a new disk with 4096 bytes / sector (not likely)
   please also add a -b4096 to the command line.

 - If your target is not an image file, but a disk drive itself,
   also specify the -f option. (I would really recommend using
   image files though, gives you more flexibility.)

 - I wouldn't really bother with the other options, the default
   behavior is very sensible.

 - Finally, get some coffee or similar, this may take a while.

Regards,
Christian



Re: dd - proper use or more suitable program

2016-11-11 Thread Christian Seiler
Hi,

Am 11. November 2016 17:57:27 MEZ, schrieb Andy Smith :
>Hi Richard,
>
>On Fri, Nov 11, 2016 at 10:49:37AM -0600, Richard Owlett wrote:
>> I was considering using dd to copy the entire drive to a *SINGLE*
>> partition of a 1 TB drive with the intention making a "byte perfect"
>> of of the defective drive to a new 300 GB drive at a later time to
>> then attempt "data rescue". Partitions other than the first are
>> evidently readable.
>> 
>> Suggestions/comments please.
>
>You are better off using GNU ddrescue for taking images of
>possibly-failing devices.

Full ACK: GNU ddrescue has saved my data multiple times in the past, I can 
really recommend it. (The "log file" is very helpful with resuming at a later 
point in time if you had to cancel it.)

Just don't confuse it with dd_rescue, which I don't recommend unless you are an 
expert and have a very special case.

Regards,
Christian



Re: Trivial script will NOT execute

2016-11-04 Thread Christian Seiler
On 11/05/2016 01:51 AM, Richard Owlett wrote:
> Today I've been having weird problems executing scripts.
> As I have no valuable data on the partition containing Debian, I
> wiped it and did a fresh install of Debian Jessie (8.6.0) MATE
> desktop environment from a purchased set of DVDs. Earlier today I had
> had reason to create an *,iso of DVD1 of 13 using xorriso. The ISO
> had a MD5SUM matching the one at debian.org .
> 
> More than a half-century of trouble shooting *screams* 'operator error' ;[
> But what? [Caja reports the execute bit is set ;]
> 
> Cut-n-paste from MATE terminal:
> root@full-jessier:~# #!/bin/bash -x
> root@full-jessier:~# cd /media/root/myrepo
> root@full-jessier:/media/root/myrepo# RCO
> bash: RCO: command not found

By default for security reasons the current directory is not in the
PATH environment variable on Linux. Perhaps in your previous install
you had manually added it to your environment, but in a fresh
installation with an empty home directory (or at the very least
without restoring dotfiles in your home directory) it will not be
present.

You can add it to PATH via:

export PATH=$PATH:.

in the current shell. You can also add that line to your ~/.bashrc
to make that permanent. (Note that you appear to be running this as
root, so ~ means the home directory of the root user here, typically
/root.)

Please be aware of the security implications of this though; while
adding it to the end of PATH (as my line above does) is not quite as
bad as adding it in the front, this could lead you to potentially
running programs from untrusted sources. (Example scenario: you have
a command line open in a directory which contains an executable or a
script with the name of something you want to execute, you
accidentally removed the command a month ago during a system update;
in that case typing in that command will execute the binary/script
from the current directory - and if the current directory comes from
an untrusted source, because it's on an external pendrive that you
don't trust, for example, then it could lead you to executing
malicious code.)

Alternatively, what most people do is not add the current directory
to PATH explicitly. Because there's another way to call a script or
binary from the current directory, by explicitly telling the shell
what you want - in this case by prepending './'. In your case, you
can do

./RCO

and that will execute the script "RCO" in the current directory. It
will also be explicit that you are executing something from the
current directory and not a system command - which is why I prefer
to do it this way instead of tinkering with PATH here.

As a side note: your script RCO doesn't appear to start with a
shebang line. In that case the script will be executed via /bin/sh,
so it will work regardless, but I would suggest to make that
explicit by having the script start with #!/bin/sh. (Or #!/bin/bash
if you need bash features in the script.)

Regards,
Christian



Re: Strange crontab message regarding execle

2016-10-31 Thread Christian Seiler
On 10/31/2016 10:06 AM, Johann Spies wrote:
> I use testing/sid and do regular dist-upgrades.  Sometime last week these
> messages started to appear regarding one of my crontab-entries:
> 
> Subject: Cron  /usr/bin/fetchmail -L ~/.procmail/log >
> /dev/null 2>&1
> 
> /bin/sh# Edit this file to introduce tasks to be run by cron.#: execle: No
> such file or directory

It looks like cron is trying to execute a comment line - and of course a
binary with the name "#Edit this file to  " doesn't exist on your
system. (execle is one of the system calls used for executing programs.)

Could you paste the full contents of the crontab file that contains this
entry?

Also, what implementation of cron are you using in what version? You can
get that via:
dpkg -l cronie bcron-run systemd-cron cron
(Those are the four providers of "cron-daemon" in testing at the moment.)

Regards,
Christian



PID files of services vs. systemd (was: Re: systemd)

2016-10-11 Thread Christian Seiler
On 10/11/2016 08:04 PM, Pol Hallen wrote:
> Hi all, after upgraded oldstable to stable it happens that systemd
> doesn't create PID file of these packages:
> 
> openvpn
> smartmontools

There's a general misconception here: systemd never creates pid
files for daemons.

The PIDFile= setting in a unit file is a setting that tells
systemd where to look for PID files that it reads in to find
out the main PID of a forking process.

A bit of background: systemd has a concept called "main pid" of
a process. If you use KillMode=process or something along the
lines of ExecReload=/bin/kill -HUP $MAINPID, systemd will
automatically use the main pid of the service for these
operations. Also, a non-main process exiting in a service is
not considered a problem from a systemd perspective, but the
main process exiting is. In a very simple case, the main pid of
a service is trivial: if there's only one pid in a service,
that's the main pid.

However, there are other services that start multiple processes,
and there it's not necessarily easy to determine what the main
process of that service is. In these situations, systemd has
the ability to read pid files (that were written by the service
_itself_ after startup) to determine the main pid. The sequence
would be:

 - systemd starts the program
 - the program forks
 - the fork initializes
 - the program writes the pid file
 - the original process exits
 - systemd notices that, considers the forking service
   initialized, and reads in the pid file

If a forking service doesn't write a pid file, systemd will try
to guess the main process (see the GuessMainPID= setting)
instead. If there's only a single process in the service, that's
going to work reliably, but if there are multiple processes it
might not.
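
As a sketch, the relevant part of such a forking unit might look
like this (daemon name and paths hypothetical):

[Service]
Type=forking
PIDFile=/run/mydaemon.pid
ExecStart=/usr/sbin/mydaemon

systemd starts /usr/sbin/mydaemon, waits for the initial process to
exit, and then reads /run/mydaemon.pid to learn the main PID.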

Also, if you don't use Type=forking but other service types,
then PID files are irrelevant from a systemd perspective - and
the PIDFile= setting is ignored. For example, if you have a
Type=notify unit, then the process spawned by systemd will not
exit (startup completion notification is done via the sd-notify
protocol instead) during regular operations, so that's going to
be the main pid of the unit.


Now for your units: I don't have openvpn installed, so I'd have
to check (and am too lazy right now to do so), but for
smartmontools the service is of Type=simple, so nothing forks
there. (smartd is called with -n, which leaves it in the
foreground.) There's simply no need for a PID file here, as the
main pid of the process is trivially known to systemd.

Now if you want to know the main PID of any given running
service for yourself, outside of systemd, then you don't need
a pid file anymore, you can query systemd dynamically,
regardless of service type:

systemctl show -p MainPID smartd.service

for use in scripting you can also do:

PID=$(systemctl show -p MainPID smartd.service 2>/dev/null | cut -d= -f2)
  (will be empty on error, e.g. service not running)

Hope that helps.

Regards,
Christian



Re: Problems with upgrade from Wheezy to Jessie

2016-10-03 Thread Christian Seiler
> [ kernel, dist-upgrade ]

All looks fine, you seem to have an up to date Jessie system and the
only thing that was installed was the security update DSA-3684-1 of
this morning.

On 10/03/2016 08:42 PM, Hans Kraus wrote:
> Oct 02 17:24:37 robbe /etc/gdm3/Xsession[2351]: Xlib:  extension "GLX" 
> missing on display ":0".
> Oct 02 17:24:37 robbe /etc/gdm3/Xsession[2351]: gnome-session-is-accelerated: 
> No hardware 3D support.

There's your problem: as far as I know, GNOME under Jessie only works
if you have 3D OpenGL hardware acceleration support. And apparently
your GPU is either not supported (maybe not anymore) or you don't have
the right driver loaded.

Could you post the contents of /var/log/Xorg.0.log? (It may repeat
itself, every time the X server is started, only the last one of
these repeats is sufficient.) Also, what graphics card do you have?
(You can find out via "lspci -v" as root and looking for a device
with "VGA" in the device type.) Do you have any special drivers
installed? (For example, if you have an NVIDIA card, do you have the
proprietary nvidia drivers installed?)

Regards,
Christian



Re: Problems with upgrade from Wheezy to Jessie

2016-10-03 Thread Christian Seiler
On 10/03/2016 06:50 PM, Hans Kraus wrote:
> With "gui stopped" I mean the following:
> After the boot process, instead of the graphical login screen where
> one can select the user, enter the password and do some selections,
> the following screen appears:
> 
> A pic of a sad computer with the text (the first line is in bold):
> ===
>Oh no! Something has gone wrong.
> A problem occurred and the system can't recover.
> Please log out and try again.
>---
>| Log Out |
>---
> ===
> I get the same screen when I choose another shell (via Ctrl-Alt-F2),
> log in as root and enter: startx.

Ok, this appears to be a GNOME-specific error message.

Could you restart the computer, try to log in, wait until that
message pops up, don't close the message, switch to another console
(Ctrl+Alt+F2) and run the following command as root?

journalctl -b1 -n40 _UID=1000

(Replace 1000 with your user id, you can look it up by running id in
the console as your normal user; 1000 is the default for the first
user created by the Debian installer.)

See if there's anything in that output that might be relevant here.

> The /etc/apt/sources.list:
> ===
> root@robbe:~# cat /etc/apt/sources.list
> # deb http://ftp.at.debian.org/debian/ wheezy main
> 
> # deb http://backports.debian.org/debian-backports wheezy-backports non-free
> deb http://ftp.at.debian.org/debian/ jessie-backports main contrib non-free
> 
> deb http://ftp.at.debian.org/debian/ jessie main contrib non-free
> deb-src http://ftp.at.debian.org/debian/ jessie main contrib non-free
> 
> deb http://security.debian.org/ jessie/updates main contrib non-free
> deb-src http://security.debian.org/ jessie/updates main contrib non-free
> 
> # wheezy-updates, previously known as 'volatile'
> deb http://ftp.at.debian.org/debian/ jessie-updates main contrib non-free
> deb-src http://ftp.at.debian.org/debian/ jessie-updates main contrib non-free
> 
> deb http://ftp.de.debian.org/debian/ jessie-backports main contrib non-free
> ===

Side note: you have jessie-backports in there twice (first and last
uncommented line), but that's completely harmless. Otherwise, looks
fine.

Could you run

apt-get update
apt-get dist-upgrade

and see if it still wants to upgrade additional software?

Also, what kernel version are you running? (Find that out via the
command "uname -a".)

Regards,
Christian



Re: Problems with upgrade from Wheezy to Jessie

2016-10-02 Thread Christian Seiler
On 10/02/2016 05:59 PM, Hans Kraus wrote:
> I upgraded my Debian server about a week ago from Wheezy to Jessie.
> After that the GUI stopped,

This is a bit vague, so a better explanation would be good. What
exactly do you mean "gui stopped"? Did that happen during the
update? Or at boot? Does the login screen show up? Does this
message show up after login?

> I see only  the grey screen with the sad
> computer telling me " Oh no, Something is Wrong" and the only option is
> to log out.

Could you transcribe the precise error message?

> I installed the package again with:
> apt-get install task-gnome-desktop
> I did not get any error message, but that didn't cure my problem.
> 
> Afterwards I tried: "dpkg --configure -a". Again, I didn't get any
> error message but  that didn't cure my problem.

That means your packages are in a state that dpkg assumes to be
consistent. However, since you upgraded recently, and apparently
still have access to your shell, could you tell us what the
contents of /etc/apt/sources.list is?

Regards,
Christian



Re: Maximal volume size for the client with a NFS v3 mounting

2016-09-28 Thread Christian Seiler
On 09/28/2016 07:18 PM, Jean-Paul Bouchet wrote:
> On a Jessie 8.5 system I mount a partition on a NAS server with NFSv3
> protocol using options "nfs rw,soft" in /etc/fstab.
> 
> The size of the volume on the NAS server side has been extended to 20
> To, but for my Debian system, this extension appears to be limited to
> 16 To :
> - The df command gives 16106127360 blocks of 1K
> - processes trying to write beyond this limit end with error "no
>   space left on device"
> 
> Is the volume size mounted with NFSv3 limited to 16 To on the client
> side ?

RFC 1813 (the NFSv3 standard) defines the field for the free
space in the response of a NFSv3 server to be a 64bit (probably
unsigned) integer, so that should easily hold more than 16 TiB.

In your case, the 16 TiB limit you experience looks exactly as if
the size were measured somewhere in a 32bit field in units of 4 KiB
blocks (2^32 * 4 KiB = 16 TiB). From reading the source code,
Linux 3.16 (that comes
with Jessie) only uses 64bit fields in both the NFS client code
and the general kernel structures, so I believe you're running
into a limitation of the NFS server your NAS provides.

I am not sure though - and I don't have any storage with more
than 16 TiB lying around to actually test it - so take my
response as an educated guess based on my read of the kernel
source code.

> Could the version 4 of NFS solve this problem ?

Possibly; it could be that the NFS4 implementation of your NAS
is not limited in that way - it could also be that it is. It
could also be that the NFS server in your NAS doesn't support
volumes > 16 TiB at all.

Another possibility could be that your NAS's NFS server supports
more than 16 TiB just fine, but the underlying filesystem used
on the NAS only supports up to 16 TiB. (For example, if the NAS
were to use the old Linux filesystem ReiserFS, that only supports
volumes up to 16 TiB.) Does the NAS actually show that there's
that much space free on the filesystem it exports? Or does it
only see the 20 TiB on the partitioning level? What NAS are you
using and what software are you running there?

Regards,
Christian



Re: WARNING! New Perl/Perl-base upgrade removes 141 Sid/Unstable packages

2016-09-24 Thread Christian Seiler
On 09/24/2016 06:02 PM, Glenn English wrote:
>> On Sep 24, 2016, at 7:44 AM, rhkra...@gmail.com wrote:
>>
>> For my own education, I'm not sure what you mean by "backup your current 
>> state
>> before upgrading"--does that mean a full backup of your system, or is there 
>> a 
>> way to somehow save the current "state" of the package list on your system 
>> so 
>> that you can easily request those packages be restored if something goes 
>> wrong?
> 
> I think that saving the contents of /var/cache/apt/archive(s?) will
> save copies of all the .debs installed on your machine. OTOH, it also
> gets .debs that *used* to be on your machine. If you're running sid,
> bring a big thumbdrive...

Note that the new apt wrapper removes .debs after installation by
default. (apt-get and aptitude will keep the .deb files though.)

So if you use

apt install foo

or

apt upgrade

then the packages that were downloaded won't be there after the
installation completed successfully. (They will be in the cache
directory after download and before the installation is done.)

You can change that behavior via:

echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' \
  > /etc/apt/apt.conf.d/01keep-debs

(See also /usr/share/doc/apt/NEWS.Debian.gz for details.)

Regarding the original problem: I'd recommend to anyone running
sid to also have testing in their sources.list - so they can
force the installation of an older package version while a
transition is still ongoing. Also, it's IMHO a good idea to
subscribe to debian-release for anyone running pure sid, so they
can have an overview over currently active transitions.
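
As a sketch (the mirror URL is just an example), such a
sources.list could contain both suites:

deb http://httpredir.debian.org/debian unstable main
deb http://httpredir.debian.org/debian testing main

and then, during a problematic transition, the testing version of a
package can be selected explicitly with something like:

apt-get install foo/testing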

Regards,
Christian



Re: Typing Cyrillic script with a UK keyboard in an en-gb setting

2016-09-24 Thread Christian Seiler
On 09/24/2016 05:07 PM, david...@freevolt.org wrote:
> On Sat, 24 Sep 2016, Lisi Reisz wrote:
> 
>> My husband has just asked to do this.  His system is vanilla from this point
>> of view.  (Mine is in a mess, with a messed-up scim and no foreign
>> fonts "working", but that is another story.)
>>
>> Advice please on the best way to achieve this for him.  I.e., what do those 
>> of
>> you doing this or similar find works comfortably.
> 
> This is what I use in my /etc/default/keyboard file:
> 
>$ grep '^[^#]' /etc/default/keyboard
>XKBMODEL="pc101"
>XKBLAYOUT="us,ru,sy"
>XKBVARIANT=""
>XKBOPTIONS="grp:caps_toggle,compose:menu"
>BACKSPACE="guess"
> 
> The "ru" portion of the XKBLAYOUT value, and the "grp:caps_toggle"
> setting in XKBOPTIONS are the relevant parts for your purposes.
> 
> It makes capslock a toggle between en_US, russian, and syrian arabic
> keyboard layout.

That's a valid way of doing this (most desktop environments allow you
to define multiple keyboard layouts + a shortcut to switch without you
having to fiddle with Xkb btw.), I personally find it really hard
though to write Cyrillic with a Russian keyboard layout, if the
characters aren't printed on the keyboard. Of course, I grew up with
the Latin layout, and I don't have any real muscle memory for the
Cyrillic layout. (Typing then would be a LOT of trial and error for
me.)

If you indeed grew up on a Russian keyboard, your suggestion of just
switching the layout is probably the easier solution. It's just not
something I'd personally be able to use in any efficient manner.

Regards,
Christian



Re: Typing Cyrillic script with a UK keyboard in an en-gb setting

2016-09-24 Thread Christian Seiler
On 09/24/2016 04:10 PM, Lisi Reisz wrote:
> My husband has just asked to do this.  His system is vanilla from this point 
> of view.  (Mine is in a mess, with a messed-up scim and no foreign 
> fonts "working", but that is another story.)  
> 
> Advice please on the best way to achieve this for him.  I.e., what do those 
> of 
> you doing this or similar find works comfortably.

There's ibus and the ibus-table-translit package, so all GUI applications
that support ibus [1] will allow you to enter cyrillic characters via
typing latin equivalents. (ibus is a generic input framework originally
designed for complex scripts, such as Chinese, but now supports a lot of
things, just search for packages with ibus-table- in their name. Btw.
there's also a package for traditional Russian, if you're interested in
really old writings. For any modern Russian I would recommend just using
the aforementioned ibus-table-translit though.)

For example, typing "b" will give you a б, typing "v" will give you a в,
typing "ya" will give you a я, typing "yo" will give you a ё, and so on.

ibus is not completely trivial to setup, but it's not terribly difficult
either. It does take a bit of getting used to, but if you switch the
method back to "English", the keyboard will behave as you'd normally
expect.

Alternatively, there's something similar but as a website, called
http://translit.net/
which does the same in the browser (JavaScript required) - most Russian
people I know use that regularly.

Also, in case this is archived and people find this: for Mac OS X user
there appears to be
https://github.com/archagon/cyrillic-transliterator
which I believe does the same thing that ibus-table-translit does.
(Never tried it though.)

Hope that helps.

Regards,
Christian

[1] See https://wiki.debian.org/I18n/ibus for details. Also, ibus-setup
is your friend.



Re: update-alternatives failure to set/config x-www-browser

2016-09-05 Thread Christian Seiler
Hi,

Am 5. September 2016 14:31:45 MESZ, schrieb Tony Baldwin :
>I have done
>sudo update-alternatives --config x-www-browser about 50 times in this 
>past week, and chosen chromium-browser as my goto/default, but links 
>from icedove keep opening in iceweasel, no matter what.
>???

Most GUI applications don't use x-www-browser (which is what 
update-alternatives controls) but either the xdg mime association system or 
mailcap. For icedove it's the former.

In your case, running

xdg-mime default chromium.desktop x-scheme-handler/http \
    x-scheme-handler/https text/html

as the user for which this should be set should do the trick.
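
To verify that the association was changed, you can afterwards run
(as the same user):

xdg-mime query default x-scheme-handler/http

which should then print chromium.desktop.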

Regards,
Christian



Re: How to unpack an repack a deb package?

2016-08-26 Thread Christian Seiler
On 08/26/2016 02:03 PM, Hans wrote:
> I need to unpack and repack a debian package. Reason: I want to change the 
> dependencies in that package. 
> 
> How can I do that? I imagine, to unpack the *.deb, then edit my control file, 
> after that pack it again. 

Well, the easiest way to do so, if you have Debian's packaging tools
(notably dpkg-deb and fakeroot) installed:

($DIR/ shouldn't exist prior to this and could be the package name)
dpkg-deb -x package.deb $DIR/
dpkg-deb -e package.deb $DIR/DEBIAN
$EDITOR $DIR/DEBIAN/control
fakeroot dpkg-deb -b $DIR package_repacked.deb
rm -r $DIR

Example:

apt-get download sed
dpkg-deb -x sed_4.2.2-7.1_amd64.deb sed
dpkg-deb -e sed_4.2.2-7.1_amd64.deb sed/DEBIAN
$EDITOR sed/DEBIAN/control
   (I added Recommends: gawk to the control file and incremented
   the version by adding +local0 at the end)
fakeroot dpkg-deb -b sed sed_4.2.2-7.1+local0_amd64.deb
rm -r sed

If you don't have dpkg-deb installed, you can manually use ar/tar:

ar x sed_4.2.2-7.1_amd64.deb
mkdir sed sed/DEBIAN
tar -C sed -xJf data.tar.xz
tar -C sed/DEBIAN -xzf control.tar.gz
$EDITOR sed/DEBIAN/control
tar -C sed --exclude=DEBIAN --owner=root --group=root -cJf data.tar.xz .
tar -C sed/DEBIAN --owner=root --group=root -czf control.tar.gz .
ar cr sed_4.2.2-7.1+local0_amd64.deb debian-binary control.tar.gz data.tar.xz
(debian-binary should be a file created by the unpacking
with ar x, should contain a single line "2.0")

(Please change the compression algorithms for tar where appropriate,
depending on whether gzip, xz and/or bzip2 are used. My example shows
both gzip and xz for different files.)

There are corner cases when either method won't work properly out of
the box unless you are root or at the very least run the entire shell
you're in under fakeroot. (Notably when files have setuid permissions.)

> Is it that easy? I googled, but ar -x paket.deb /tmp/paket did not work.

ar x, not ar -x (ar is funny that way)

Big Warning: if you do not ALSO change the version of the package
slightly (e.g. by adding +local0 at the end), then this can have all
sorts of unpredictable results if you have package sources with _both_
Debian's version of the package and your modified version. The same
version of a package should _not_ be used twice in Debian for different
packages.

Regards,
Christian



Re: iscsistart: TargetName not set.

2016-08-19 Thread Christian Seiler
On 08/19/2016 02:11 PM, Fredrik Nilsson wrote:
> The contents of initiatornam.iscsi is one line:
> 
> GenerateName=yes

Ok, there's the problem. The message from iscsistart is misleading:
it's not the target name that's invalid, but the initiator name.
(Probably because of the way the options are passed, iscsistart
reports it that way.)

The reason for that is actually a bug in open-iscsi: while the
installer does copy the proper file into the root filesystem at the
end of the installation, it doesn't re-generate the initramfs, so
the initramfs still only contains the default file.

This was fixed in Stretch anyway because we now generate the
initiator name on install (in postinst), but not upon first start
of the daemon anymore.

I've now reported this bug, to be able to track it.
https://bugs.debian.org/834830
(I've reproduced the problem in a VM on my system.)

I'll try to get a fix for that into the next point release - no
promises though.

If you want to boot your system in the mean time, there is a trick
you can do, once you're in the initramfs shell:

echo InitiatorName=iqn.test:test > /etc/initiatorname.iscsi
/scripts/local-top/iscsi
exit

That should boot your system. Once you're in the system, you need
to run (as root):

update-initramfs -k all -u

And then your system should boot normally.

> Looking at the kernel parameters there is no mentioning of any ip settings.
> I have tried supplying static ip settings, initiatorname and target name as
> kernel parameters but to me it looked liked those settings where ignored.

Yeah, so I checked again, and setting kernel parameters is actually
not necessary. (And won't change anything.)

> I have also tried to set the target name via the dhcp root-path option,
> which is then written out on the console at boot time, but the result is
> still the same error message and a drop to the initrams shell.

No, that doesn't help at all. In Jessie, open-iscsi is configured via
the files /etc/iscsi/iscsi.initramfs and /etc/iscsi/initiatorname.iscsi,
and there's no way to override things. In Stretch, there will be some
support for root=iscsi:... though. The DHCP parameters are completely
ignored even there though. (I'm not sure it'd be easy to support that
reliably across different types of configurations.)

Regards,
Christian



Re: iscsistart: TargetName not set.

2016-08-19 Thread Christian Seiler
Hi,

(writing this from my phone, so please pardon my bottom quote)

For reference: I co-maintain open-iscsi in Debian.

Could you provide the contents of the initiatorname.iscsi and iscsi.initramfs 
files in the initramfs verbatim? (Anonymizing users/passwords is OK of course.)

Also: did you specify the target as a host name or IP address? IIRC only IP 
addresses work for rootfs on iSCSI. (Not sure though and I am not in front of a 
computer to check.)

Furthermore: does the initramfs run its own DHCP client? Do you have ip=dhcp 
(or an equivalent static config) in your kernel command line args? (The 
installer should set this.)

Regards,
Christian

Am 19. August 2016 08:27:00 MESZ, schrieb Fredrik Nilsson :
>Hi,
>
>I have installed Debian Jessie (8.5) on an iscsi disk but I am having
>difficulty booting the system afterwards. (The installation itself went
>smoothly although a bit slow.)
>
>The Supermicro server I have at my disposal mounts the iscsci target
>fine
>via bios/nic firmware and grub loads then up nicely. But the problem
>starts
>after grub when the the ip-address has been retrieved via dhcp.
>
>iscsistart then tries to mount the iscsi device but fails with the
>following line
>
>iscsistart: TargetName not set. Exiting iscsistart
>
>It then drops me into the initramfs shell. From there I have been able
>to
>find /etc/iscsi.initrams and /etc/initiatorname.iscsi which seems to
>contain the correct information to mount the target.
>
>Any additional parameters I add to the kernel command line via grub
>seems
>to be ignored.
>
>When I googled this error message I found an old error that was caused
>by
>the network not bein available at the time iscsistart was started, but
>it
>was several years that particular issue was resolved.
>
>Any help on how I can debug and fix this is much appreciated.
>
>Thanks!
>
>/Fredrik


Re: gcc-doc in stretch

2016-08-06 Thread Christian Seiler
On 08/06/2016 04:08 PM, Steven Tan wrote:
> https://packages.debian.org/search?keywords=gcc-doc&searchon=names&suite=all&section=all
> It looks like the package gcc-doc is not provided in stretch, not even in
> contrib or non-free, but the package is provided in jessie and sid.
> 
> Is this a bug or intended?

Well, it's a bug, but in gcc-doc. It currently fails to build
from source, and the maintainer hasn't fixed it yet, so it
was autoremoved from testing a month or so ago:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=825320
https://tracker.debian.org/news/782354

Once that bug is fixed, it can reenter testing. (Assuming no
other RC bug is filed in the mean time.)

Regards,
Christian



Re: Indexing Unix command

2016-08-01 Thread Christian Seiler
On 08/01/2016 02:52 PM, Rodolfo Medina wrote:
> with the following simple script:
> 
> #!/bin/sh
> 
> cat index.idx | sort > index.ind
> 
> I sort the contents of a file and write it in another file.  Now, I want that 
> a
> small vertical space, i.e. an empty line or two, were inserted before all the
> words that start with a new letter.

If the first letters of your words are only ASCII characters
(or you are using an 8 bit character encoding), then the
following should do the trick:

sort < index.idx | awk 'BEGIN {
  c = -1;
}
{
  if (substr($0, 1, 1) != c && c != -1)
printf("\n\n");
  c = substr($0, 1, 1);
  print;
}' > index.ind

See also:
https://en.wikibooks.org/wiki/AWK

Regards,
Christian



Re: What Linux distribution to use?

2016-07-26 Thread Christian Seiler
On 07/26/2016 11:35 AM, Johann Klammer wrote:
> Unfortunately, Debian does not work for me anymore.
> 
> I have special needs: 
> binary packages. 
> i386 code without SSE stuff or other surprises. 

Well, if you don't tell us why Debian doesn't work for you
any more, then it will probably not be easy to give a
recommendation.

Especially because you need non-SSE I think your options
are limited when it comes to binary distros. Most that still
fall in that category would probably be Debian-based - and
if Debian doesn't work for you anymore, then they might also
have the same issue.

The only one I can think of off the top of my head that isn't
based on Debian that still fulfills your criteria would be
Slackware, though I have never tried it:

http://www.slackware.com/

Regards,
Christian



Re: Linux startup, Wheezy -- a required script won't run on startup, but can run manually without any trouble

2016-06-10 Thread Christian Seiler
On 06/10/2016 07:54 AM, Andrew McGlashan wrote:
> I want the script to run once only at bootup, before exim4 and also
> before dovecot, it isn't a service; but I've moulded the script to
> appear like one in order to achieve the desired result.  And there is no
> need to run it ever again after startup and there is nothing to do at
> shutdown.

Well, then your script is far too complicated in a sense.

You could easily do the following:

---
#!/bin/sh
# LSB header stuff (what you have now)
case "$1" in
  start) ;;
  stop|status|restart|force-reload) exit 0 ;;
  *) echo "Usage: $SCRIPTNAME start" >&2; exit 3 ;;
esac

set -x

# now your actual script, without all the other stuff
---

The log_daemon_msg is just to show a pretty message at boot (if the
verbosity level is set correctly), you don't _need_ that for an init
script, you can also just echo stuff in there (which you appear to be
doing anyway). And you can drop the other stuff from the init script
template.

The template just makes sure that the init script fits nicely into
Debian's typical scheme for services (with the messages having a
uniform look and feel at boot), but the absolute minimum actual
requirements for an init script (in Debian) are:

 - have an LSB header
 - properly treat the arguments start, stop, status, restart and
   force-reload

So as long as your script does just that, it will work, once you
install it via "update-rc.d NAME defaults".

> I couldn't use /etc/rc.local as that would act after everything was up
> and running normally.  Probably I really should have the script
> somewhere else, but I'm not sure where exactly would be best.  Hence why
> it ended up in the initscript area.  Perhaps people have some other
> suggestions?  I would be happy to hear them here.

Well, I would maybe put it in /usr/local/sbin, and have the init script
call the script itself. Then the init script is just the glue to make
it work (see my above example, where after set -x you could just call
the script in /usr/local/sbin) and rather easy to understand (because
it's short), whereas the actual logic lives in its own script that
you can test separately.

That way, you separate out program logic (what you want to do) and
configuration (when you want to do it). I do the same for cron jobs: I
create a script in /usr/local/sbin, and have the cron job call that
script instead of writing it directly into the crontab.
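
For example (the script name is made up), the crontab entry then
reduces to a single line such as:

17 3 * * * /usr/local/sbin/nightly-cleanup

while all the actual logic lives in /usr/local/sbin/nightly-cleanup,
where it can be run and tested by hand.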

> On 10/06/2016 7:29 AM, Christian Seiler wrote:
>> The main problem with this scheme alone is that the numbers are
>> actually really arbitrary, so it's not immediately clear which ones to
>> use when writing an init script.
> 
> Not arbitrary... I numbered them according to the desired sequence,
> knowing myself which processes needed to be started before others. 

Well sure, but whether you should give exim4 the number 20 or 25 is
arbitrary, as long as the number is higher than e.g. your syslog
implementation's number, etc.

So from the point of view of someone writing a service that is supposed
to work on multiple systems (e.g. Debian), the number is arbitrary to a
large extent - until you have a new service that creates an additional
dependency between two previously unrelated services.

> So, by design of the chosen numbers and script names, I was previously
> able to run scripts in the order that I knew was required by my own
> resolve and dependencies were not complex enough to require /special/
> processing outside my own resolve.

Sure, if you want to create all the symlinks in the correct ordering on
your very own, that will work. Especially if you have to modify the
default because the Debian ordering doesn't suit your needs - and
update a lot of symlinks. But I actually don't care about any specific
numbers in front of stuff, I care about the real order stuff is
executed in.

And for me at least (and for very many other people, which is why
Debian moved to dep-based booting with Squeeze), it's _much_ more
logical to declare dependencies and have the system then decide for
itself about ordering. As I said in my earlier mail: anything that
doesn't properly support dependencies isn't worth my time in looking
at it, because they make life _so_ much easier.

Btw. if you want to override the ordering of scripts that come with
Debian without modifying the scripts themselves, you can actually place
JUST the entire LSB header of a script in /etc/insserv/overrides, for
example /etc/insserv/overrides/exim4. Then, insserv will use that
header INSTEAD of the one found in the init script to calculate
dependencies. This is great if you need to introduce a new ordering
constraint between services that Debian doesn't know about. [1]

> I still have low numbers, but done the correct way via insserv.

So I fig

Re: Linux startup, Wheezy -- a required script won't run on startup, but can run manually without any trouble

2016-06-09 Thread Christian Seiler
On 06/09/2016 10:10 PM, Andrew McGlashan wrote:
> What I have now is that with some extra "smarts" that stops the original
> concept from working as intended.  The smarts is meant to allow for
> faster startup and to tie in dependancies; to me, it is trying to be too
> smart and that is where the problem lies.

Faster startup was never the initial goal of dependency-based boot,
that came afterwards. The main goal was always to describe startup
properly, because you want ordering to reflect what services actually
mean relative to another. And just pulling some arbitrary numbers out
of thin air was NEVER a good idea in my eyes.

> I see within the /etc/init.d/ scripts that there is all this extra junk
> at the top and there are .depend.* files in that directory too.

The .depend.* is for Makefile-based boot (startpar uses that
internally). You can still disable that and go back to a fully
serialized version with sysvinit.

> I am thinking that these extras are the reason why it isn't running the
> script at startup as expected.

No, the reason is that you appear to have pulled the number 02 out of
thin air and expect it to work, without giving a thought about what
you want to actually order it against. (See my other email.)

> Those extras weren't not part of the
> more original sysv init setup; and, it may be why lots of Debian and
> other people decided that the sysvinit was broken (due to the extras)...

No, on the contrary. When I first saw Gentoo's system in the mid 2000s,
which was based exclusively on dependencies (but still used scripts on
top of sysvinit), I thought: wow, this is SO much better than all the
other distros at that time.

To me, anything that doesn't allow me to have dependencies is not worth
my consideration. I've often had to write own services that hook into
the system startup at certain points. And being able to specify
dependencies is something absolutely essential here. Because then I
actually semantically describe why I want a service in a given position
in the boot sequence. Doing it in any other way is madness to me.

There's a reason why _every_ modern init system supports dependencies
(systemd, Solaris's SMF, nosh, OpenRC, ...), because in the modern
world, where so many things need to be taken care of at boot, it's
absolutely essential to be able to express the relations betwen all
the services that need to be started explicitly in form of
dependencies, otherwise you'd never be able to really tackle the
complexity.

> and hence why we ended up with systemd.

You're right and you're wrong here.

You're right in that the way dependency-based boot is handled in
sysvinit+initscripts-based systems is not really nice, because
dependencies are actually kind of implemented on top of an older model,
instead of being treated as a first-class citizen. (And it's not
complete, because the dependencies are only considered when booting,
not when manually starting/stopping services. [1])

You're wrong in the sense that nobody on the systemd side of the
argument wants to go back to non-dependency-based boot. So if you think
that had dependency-based boot never been added to the init script
logic, systemd wouldn't have been born or at least not have gained any
traction - it would be the complete opposite, some people would have
wanted something like systemd even moreso.

Regards,
Christian

[1] Gentoo's set of scripts actually already did that 10 years ago,
with the caveat that it didn't have proper state tracking, only an
emulation of that, which is why the 'zap' action existed (exists?)
there.






Re: Linux startup, Wheezy -- a required script won't run on startup, but can run manually without any trouble

2016-06-09 Thread Christian Seiler
On 06/09/2016 07:46 PM, Andrew McGlashan wrote:
> The order of the scripts alone allowed for everything to be very, very
> simple and no script relied upon any other; they were self dependent.
> If you wanted something to be available before your script, you made
> sure your numeric number after the S in the script name (or rather the
> symlink name back to the /etc/init.d directory file) was higher.  It was
> simple, it worked perfectly,

(In the following, for the most part I'm only going to talk about
sysvinit, ignoring any other init system.)

I think you are suffering from quite a bit of confusion. You need to
separate a few concepts apart:

 - the S**/K** symlinks
 - how they are generated
 - startup parallelization

Since very old versions of Debian (I don't remember which), you could
create symbolic links for init scripts like this:

 - /etc/rcX.d/SYYname -> /etc/init.d/name
 - /etc/rcX.d/KYYname -> /etc/init.d/name

   YY being a number between 00 and 99 here.

When changing a runlevel, first all the K** links (in order) of the
_new_ runlevel are run and then all the S** links (in order), also of
the _new_ runlevel are run. [1]

The symlinks would be generated by calling update-rc.d, e.g. via:

  update-rc.d NAME start 42 2 3 4 5 . stop 75 0 1 6 .

This would generate /etc/rc[2345].d/S42NAME, /etc/rc[016].d/K75NAME.

The main problem with this scheme alone is that the numbers are
actually really arbitrary, so it's not immediately clear which ones to
use when writing an init script.

This led to multiple problems. Most importantly: suppose you had two
otherwise unrelated services A and B with no dependency on each
other, so they shared the same number, e.g. 20. If a service C then
comes along that needs to be started before B but after A, A and B
suddenly need different numbers after all. But the numbers
of these services are fixed in the Debian package scripts, so the
maintainer of the package containing service C needed to convince the
maintainers of services A and B to change their number (and if they in
turn depend on other scripts, those have to be adapted, too). And this
doesn't even leave any room for modifications by the admin, who might
need this for local scripts that will never be part of Debian: even if
they could convince the maintainers of the packages they'd need to
squeeze their own script in between, they'd still have to wait for the
next Debian release or do some extensive local modifications.

Which is why people had been working on a replacement for a number of
years (the Debian wiki claims since 2002, but the link doesn't work).
In 2008 an alternative was implemented that was designed to work across
distributions, and the LSB standard for init scripts was born. [2]
(This was way before systemd btw.)

The integration into Debian took a bit longer, and Squeeze was the
first Debian version to fully incorporate that. (Although you could
still choose to use the old system in Squeeze IIRC, support for which was
dropped in Wheezy.) Instead of having the numbers fixed, they would be
calculated when services were enabled.

Now, each service has to declare its dependencies relative to other
services in the form of the so-called LSB header. Then, when services
are enabled/disabled, these dependencies are taken into account and
the numbers
are generated accordingly. (Which is why they rarely exceed 30 now,
unless you have really many services.)

This now has the huge advantage that if you squeeze in a service
between others, the numbers will automatically get recalculated.

The following LSB headers are understood in Debian:

 * Provides:
 Alternative names for the service for dependency resolution. For
 example, /etc/init.d/networking has the values 'networking' and
 'ifupdown' in there; so anything that orders against either of
 them will order against networking.

 * Required-Start:
 Anything that must be started before this script. insserv and
 update-rc.d will fail if the required script doesn't exist or is
 not enabled. (It will not enable that script automatically though,
 it will just complain.)

 * Required-Stop:
 Same as Required-Start, but just that these services have to be
 kept around during shutdown. Commonly the same as Required-Start,
 but not necessarily.

 * Should-Start/Should-Stop:
 Same as the Required- version, but if the other script is not
 enabled or not installed, don't consider that to be an error.

 * X-Start-After:/X-Stop-Before:
 The inverse dependency, meaning that
A: X-Start-After: B
 is equivalent to
B: Should-Start: A

  * Default-Start:
 List of runlevels where the service should be started in
 by default. Typically 2 3 4 5

  * Default-Stop:
 Typically 0 1 6
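
Putting these together, a complete LSB header for a hypothetical
local script could look like this:

### BEGIN INIT INFO
# Provides:          mylocalscript
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Should-Start:      exim4
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example local service
### END INIT INFO

($remote_fs and $syslog are virtual facilities provided by the
system; the other names are made up for illustration.)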

To enable a service initially, you'd call

  update-rc.d NAME defaults

And to remove the links:

  update-rc.d NAME remove

Important: only the options defaults, remove, enable and disable are
supported by update-rc.d these days; the old explicit start/stop
arguments are deprecated.

Re: aarch64 qemu workaround

2016-06-05 Thread Christian Seiler
On 06/05/2016 09:43 AM, Mike wrote:
> https://gmplib.org/~tege/qemu.html
> 
> Scrolling down to the aarch64 section describes the situation.

Is what is described there for aarch64 actually that bad? Having to
modify /etc/initramfs-tools/modules seems to be not too bad of a
thing to me... You could probably even preseed the installer with
a custom command, see [1] for details.

On the other hand - what do you want to use emulated aarch64 for?
Do you really need to have a full emulated system, to test the
kernel etc.? Or would a chroot be sufficient?

Because what works _really_ well with aarch64 is qemu-user, i.e.
running aarch64 binaries directly on your local system. There's a
nice tool called qemu-debootstrap with which you can automatically
create a chroot with a different architecture. Some are supported
better than others [2], but for aarch64 I've only had good
experiences with it. For example,

qemu-debootstrap --arch=arm64 sid aarch64-chroot

works out of the box. (jessie as well as sid.)

You can then just chroot into that,

chroot aarch64-chroot /bin/bash

and install stuff you like (just like any other regular chroot).
(Probably want to edit /etc/apt/sources.list first and copy
/etc/resolv.conf from your host system though.) The same caveats
that apply for normal chroots also apply here - and as with
regular chroots, I'd recommend using schroot to manage them once
they're set up. (Or use them as basis for pbuilder, if you want
to build Debian packages. Or both.)
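
As an illustration, a minimal schroot configuration for such a
chroot might look like this (path and user name are assumptions),
e.g. in /etc/schroot/chroot.d/aarch64-chroot.conf:

[aarch64-chroot]
description=Debian sid arm64 chroot
type=directory
directory=/srv/chroots/aarch64-chroot
users=youruser
root-users=youruser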

Obviously anything that is related to hardware access won't work
in there (because no full system is emulated), but it's great
for e.g. compiling software - and it's a bit faster than using
the system emulator, because it just emulates the program's CPU
instructions and translates syscalls, it still uses your host's
kernel.

Regards,
Christian

[1] https://www.debian.org/releases/stable/arm64/apbs05.html.en#preseed-hooks
[2] Regarding only official ports: for ppc64el, you need to
export QEMU_CPU=POWER8; on powerpc (32bit) some floating
point instructions aren't properly emulated (it causes
trouble with software compiled with -mpowerpc-gpopt); and
the posix_fadvise() syscall is not mapped correctly on
some 32bit platforms (armel, armhf, powerpc and mips; on
mipsel it works though, funnily enough), but the aarch64
qemu-user port is in very good shape, from my experience.



Re: cross install 64bit target from 32bit host

2016-05-28 Thread Christian Seiler
On 05/29/2016 01:34 AM, Haines Brown wrote:
> This is an extension of my initial question, for I'm not sure my initial
> conclusion that it is impossible to chroot a 64bit system from a 32bit
> system is correct.

With a 32bit kernel you need qemu-user-static for this to work - but
expect it to be _at least_ a factor of 10 or so slower than your
normal system when using qemu. (Especially if your kernel is 32bit
x86, which is register-starved, so emulating other platforms is
likely going to be really slow.)

> Or can chroot be run on a 64 bit system mounted on /mnt/64bit/:
> 
>   # dpkg --add-architecture amd64
>   # apt-get update
>   # apt-get install libc6-amd64
>   # LANG=C.UTF-8 chroot /mnt/64bit /bin/bash

dpkg --add-architecture (and installing libc6-amd64) is never
useful for chroots: either you can execute 64bit binaries (directly
via a 64bit kernel or indirectly via qemu-user-static), and then
chroot will just work (without any 64bit software on the host), or
you can't execute them, and then it won't help either.

What you can do is:

apt-get install qemu-user-static

Then you need to setup the x86_64 binfmt manually, because the
Debian package doesn't do that automatically anymore [1]:

/usr/sbin/update-binfmts --install qemu-x86_64 \
    /usr/bin/qemu-x86_64-static \
    --magic '\x7f\x45\x4c\x46\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00' \
    --mask '\xff\xff\xff\xff\xff\xfe\xfe\xfc\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff' \
    --offset 0 --credential yes

Then copy the /usr/bin/qemu-x86_64-static binary into the chroot:

cp /usr/bin/qemu-x86_64-static /mnt/64bit/usr/bin/

And then you can just use chroot directly:

LANG=C.UTF-8 chroot /mnt/64bit /bin/bash

(Note that the normal rules for chroots also apply, i.e. that
you might need to handle the /proc, /sys and /dev{,/pts,/shm}
and /run file systems specially, depending on what you want to
do inside the chroot.)

But as I said above: with qemu-user-static it's going to be very
slow. Especially since your hardware does support 64bit code
directly, I would *really* recommend you to just install a 64bit
kernel [2] (you can leave the main packages 32bit) and reboot,
then you can chroot into either 64bit or 32bit environments
without having to resort to emulation. So unless you're just
using this as a temporary measure, I really wouldn't recommend
it.

Regards,
Christian

[1] See bug #604712, because i386 can be used with 64bit kernels,
which is much more common than your use case, and there you
don't want to have emulation.
[2] Will probably boil down to something like:
dpkg --add-architecture amd64
apt-get update
apt-get install linux-image-amd64





Re: QEMU MIPS Debian8.4 run issue

2016-05-27 Thread Christian Seiler
On 05/27/2016 06:23 AM, 飞颜 wrote:
>   QEMU start command below:
>   qemu-system-mips -M mips -kernel vmlinux-3.16.0-4-4kc-malta -initrd
> initrd.gz -hda hda.img  -append "root=/dev/ram console=ttyS0" -nographice
> 
> Only show message below, can not run.
> qemu: Warning, could not load MIPS bios 'mips_bios.bin'

This looks a lot like:
https://www.linux-mips.org/wiki/QEMU#MIPS_BIOS_not_found_on_startup

Since you start the whole thing with a -kernel command line, the
firmware is actually irrelevant - so you could just create a
dummy file for the MIPS BIOS:

(as root)
dd if=/dev/zero of=/usr/share/qemu/mips_bios.bin bs=1024 count=128

Then the Qemu command should work. (Not tested, though, I've only
ever used qemu-user with MIPS, not qemu-system.)

Hope that helps.

Regards,
Christian



Re: boot confuses 'fuse' as local file system

2016-05-24 Thread Christian Seiler
On 05/24/2016 04:09 PM, dummy user wrote:
>   it seems that boot script which runs 'mount' doesn't recognize fuse as 
> remote file system.

Yes, because fuse can also be used for some local file systems, so
systemd can't know based on the type that this is indeed a remote
file system.

> Could you please tell me the way to mend it ?

Add the _netdev mount flag, then systemd will recognize the file
system as remote, regardless of the type.
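
For example, a corresponding fstab line could look like this
(source and mount point are placeholders):

curlftpfs#ftp.example.org /mnt/ftp fuse rw,allow_other,_netdev 0 0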

Note that instead of having

SUBTYPE#SOURCE DEST fuse OPTIONS 0 0

you can also write

SOURCE DEST fuse.SUBTYPE OPTIONS 0 0

Then systemd has a chance of detecting the file system. That said,
systemd doesn't detect curlftpfs to be a remote file system anyway
(it would with e.g. sshfs though), so in your case that wouldn't
help - but a thing to keep in mind generally speaking.

Regards,
Christian





Re: C program launched by start-stop-daemon fails to catch signals

2016-05-07 Thread Christian Seiler
On 05/07/2016 06:39 AM, CN wrote:
> The following compilable C++ program catches signals as expected if it
> runs directly from shell /tmp/a.out.
> 
> However, this program fails to catch any signal and silently terminates
> if it is fired by Debian's start-stop-daemon.

No, it does catch the signal without a problem. But start-stop-daemon
closes the standard filedescriptors and replaces them with /dev/null.

Try it:

/tmp/t.sh start
ps ax | grep a.out
lsof -p $PID
-> look for FD = 0u, 1u and 2u, those are all /dev/null

That means that the error output is dropped, because it goes to
/dev/null.

If you alter your test code to add #include <fstream> and replace
the following function:

void signal_handler(int signal_number)
{
std::ofstream error_out("/tmp/daemon-error.out");
error_out << "Caught signal# " << signal_number << std::endl;
caught_signal=signal_number;
}

Then you will see that /tmp/daemon-error.out is generated and that
there is the text "Caught signal# 15" inside, if you terminate the
daemon via the init script.

> (My real life multiple
> threaded program does not silently terminates. Instead, segmentation
> fault occurs from pthread libray.)

Well, then your problem is likely to be a bit more complicated. :(

Regards,
Christian





Re: cross-debootstrap error

2016-05-01 Thread Christian Seiler
Hi,

On 04/30/2016 10:20 PM, Diddier Hilarion wrote:
> I have the following problem when I try to do crossdebootstrap of arm64
> as specified in the following page of the wiki
> 
> https://wiki.debian.org/Arm64Qemu
> 
> I got stuck in the fourth step, after doing the specified i get the
> following error message
> 
> Unpacking base-files (8+deb8u4) ...
> dpkg: error processing archive
> /var/cache/apt/archives/base-files_8+deb8u4_arm64.deb (--install):
>  symbolic link '/etc/os-release' size has changed from 50 to 21
> Errors were encountered while processing:
>  /var/cache/apt/archives/base-files_8+deb8u4_arm64.deb
> 
> It seems to be an integrity problem with base_files but after deleting
> it from the cache the problem persists.

This is really weird, especially since /etc/os-release is owned by
base-files, so it should only be created when the package is installed,
and it also shouldn't be a symbolic link but rather a regular file.

I have a setup here where I can generate qemu-based chroots for building
packages on multiple architectures, and that uses qemu-debootstrap
internally. I just tried to build one for jessie/arm64, which succeeded.
Internally, it ran the following command:

/usr/sbin/qemu-debootstrap --include=apt --arch arm64 --variant=buildd \
   --force-check-gpg jessie directory \
   http://mirror/

(Replace directory and http://mirror/ accordingly.)

If that doesn't work for you, you could try a different mirror, although
I'm not sure whether that can indeed help, because debootstrap tries to
verify that packages are correct when downloading them, so any
transmission error would have been picked up on.

Other than that:

 - What filesystem are you using? Did you run a filesystem check (fsck)
   recently?
 - Is the target directory empty (or non-existent) before you run
   qemu-debootstrap?
 - Are there other processes running that might do stuff in the
   directory where you bootstrap into?
 - Does a normal debootstrap of your native architecture work?
 - What is your host OS?
 - Which versions of qemu-user-static and debootstrap?

Regards,
Christian





Re: on-demand mounting of filesystems via Systemd (e.g. /backup)

2016-04-24 Thread Christian Seiler
On 04/24/2016 08:11 PM, Andrew McGlashan wrote:
> systemd continues to cause much more trouble than it is worth for so
> many people -- I really wish it wasn't so, but it truly is so. :(

You don't seem to understand the criticism that has been levelled at
your conduct here. Ansgar didn't ask you to spout your vitriol
elsewhere because you dislike systemd, or because you criticized it;
he asked you because the way you behave here is to the detriment of
the atmosphere on this list.

Christian





Re: open-iscsi & multipath-tools

2016-04-22 Thread Christian Seiler

Hello,

(CC'ing the bug report I created, dropping debian-user in reply-to.)

Am 2016-04-22 16:10, schrieb BASSAGET Cédric:

I'm unable to reproduce for about 1 hour... Now, everyhting works fine
after a reboot, but... the only thing I've done is to remove the vg /
pv i created on multipath device... weird.


Maybe that's the issue? Could you recreate the LVM stuff? That should
also be supported, so maybe this only occurs if LVM is used on top.

But even if it appears to be fixed, could you still copy the output
of the systemctl and journalctl commands I asked for?

systemctl show -p Before,After,WantedBy,Wants,RequiredBy,Requires \
    multipath-tools.service
systemctl show -p Before,After,WantedBy,Wants,RequiredBy,Requires \
    open-iscsi.service

journalctl -u open-iscsi.service -u multipath-tools.service

Regards,
Christian



Re: open-iscsi & multipath-tools

2016-04-22 Thread Christian Seiler

Package: open-iscsi
Version: 2.0.873+git0.3b4b4500-8+deb8u1
Severity: normal
Owner: !
Tags: jessie moreinfo

Hi there,

FYI: I'm co-maintainer of open-iscsi in Debian, but not
multipath-tools. CC'ing the bugtracker, assigning to open-iscsi for
now, will reassign to multipath-tools later if necessary.

Am 2016-04-22 14:45, schrieb Cédric Bassaget:

After a reboot, iscsi targets are OK, but multipath does not show any
volume. I have to restart it by hand to bring the multipath volume up.


Gah. During the freeze of Jessie I encountered some bugs related to the
boot process and I thought we had fixed them all before Jessie was
released. Obviously not... :-(


I guess it's because on system startup, multipath-tools is launched
befors open-iscsi. open-iscsi seems to be systemd compliant, but not
multipath-tools.


For current versions (starting with Jessie) of multipath-tools, this
is correct, as the daemon is supposed to be started and then pick up
all of the devices as they appear dynamically.

OTOH, what you're seeing in dmesg is just the modules that are loaded,
which might be due to /etc/modules, /etc/modprobe.d or similar, so
they don't necessarily indicate which service is started first.


root@virtm6:/etc# find rc?.d -name 'S*multipath-tools'
rc2.d/S02multipath-tools
rc3.d/S02multipath-tools
rc4.d/S02multipath-tools
rc5.d/S02multipath-tools


multipath-tools is still late-boot? That seems wrong to me. May be part
of the problem you're seeing.

Could you give me the output of the following on your system?

systemctl show -p Before,After,WantedBy,Wants,RequiredBy,Requires \
    multipath-tools.service
systemctl show -p Before,After,WantedBy,Wants,RequiredBy,Requires \
    open-iscsi.service


Also, what does the following command tell you? (After booting, when
the problem appears, but before restarting multipath to fix it.)

journalctl -u open-iscsi.service -u multipath-tools.service


What would be the best way to fox this problem ?


Well, there's probably still some bug in the integration between
open-iscsi and multipath-tools. The output of the commands I requested
will help me narrow down the problem, which will then hopefully give
me enough information to tell you how to fix it on your local system,
and hopefully this can be fixed in 8.5.

Regards,
Christian



Re: Failure to install request-tracker4 in Jessie Newest

2016-04-07 Thread Christian Seiler
On 04/07/2016 10:41 PM, John T. Haggerty wrote:
> That was able to work, however at the moment I've run into an issue that I
> think I had years before namely the inability of the base installation
> (even with the questions that the system asks during configuration of
> request-tracker4) failing to give anything but a 404 error when hitting up
> localhost/rt.

RT is a complicated piece of software (it interacts both with the web and
mail server) and there is no sane way to make it work out of the box
without some configuration. Please read, after installing request-tracker4:

/usr/share/doc/request-tracker4/README.Debian.gz
/usr/share/doc/request-tracker4/NOTES.Debian.gz

These will explain how request-tracker can be installed and configured on
Debian.

Generally speaking: Debian-specific installation notes can often be found
in /usr/share/doc/PACKAGENAME/README.Debian.gz.

Regards,
Christian





Re: Failure to install request-tracker4 in Jessie Newest

2016-04-06 Thread Christian Seiler
Hi,

On 04/07/2016 12:14 AM, John T. Haggerty wrote:
> deb cdrom:[Debian GNU/Linux 8.3.0 _Jessie_ - Official amd64 DVD Binary-1
> 20160123-19:03]/ jessie contrib main
> 
> deb cdrom:[Debian GNU/Linux 8.3.0 _Jessie_ - Official amd64 DVD Binary-2
> 20160123-19:03]/ jessie contrib main
> 
> deb cdrom:[Debian GNU/Linux 8.3.0 _Jessie_ - Official amd64 DVD Binary-3
> 20160123-19:03]/ jessie contrib main

So here you still have the DVDs as your primary archive source, and no
network mirror. This is possible to do, but if for any reason you (or
something you ran where you didn't necessarily know the side effects
of) deleted your /var/lib/apt/lists/ at some point, apt-get update will
not automatically restore the package lists from the CDs.

You have two options:

A. Switch over to use a network mirror for installations. In that case,
remove the cdrom lines (but _only_ the cdrom lines) and add something
like the following to your sources.list:
deb http://httpredir.debian.org/debian jessie main
Then run apt-get update again.

B. Continue using the DVDs, but have APT re-read the lists of packages.
For that, also remove the cdrom lines, and then run the following
command:
apt-cdrom add 
It will prompt you to insert the DVD. After it has copied the list
from the DVD and you get the command line back, run it again and repeat
the process for all 3 DVDs. Then run apt-get update again.

After either of these procedures, you should be able to install the
package you wanted to install.

IMPORTANT:

There's a subtle difference between both of the methods: the network
mirrors carry only the _latest_  Jessie point release, which is now
8.4. So if you add a network mirror, there will be a few upgrades
available and you'll upgrade to that next point release. If you stick
with the DVDs, you'll remain on 8.3 with the exception of security
updates, which you have enabled.

> # jessie-updates, previously known as 'volatile'
> # A network mirror was not selected during install.  The following entries
> # are provided as examples, but you should amend them as appropriate
> # for your mirror of choice.
> #
> # deb http://ftp.debian.org/debian/ jessie-updates main contrib
> # deb-src http://ftp.debian.org/debian/ jessie-updates main contrib
> # wheezy-backports
> deb http://ftp.debian.org/debian/ wheezy-backports main

This has nothing to do with your problem, but I would not recommend
using wheezy-backports in combination with Jessie. (It shouldn't
hurt, as all packages in wheezy-backports should also be in jessie
in basically the same version, but it's not what you should have
there.) If you need backports _for_ jessie, replace that with
jessie-backports. See http://backports.debian.org/ and
http://backports.debian.org/Instructions/ for details.

Regards,
Christian





Re: Failure to install request-tracker4 in Jessie Newest

2016-04-06 Thread Christian Seiler
On 04/06/2016 10:23 PM, John T. Haggerty wrote:
> There is a /etc/apt/sources.list.d/ but only have a chrome.txt or something
> like that in there.

That's good to know, but that doesn't answer my other questions: what
is the contents of your /etc/apt/sources.list (without .d) and what
happens when you do the following as root?

apt-get update

Without that information, we won't be able to help you.

Regards,
Christian





Re: Failure to install request-tracker4 in Jessie Newest

2016-04-06 Thread Christian Seiler
On 04/06/2016 10:12 PM, John T. Haggerty wrote:
> I would like to get request tracker working but the main package fails to
> install. I am getting the following errors: [...]
> 
>  request-tracker4 : Depends: libhtml-mason-perl (>= 1:1.43) which is a
> virtual package.

libhtml-mason-perl is not actually a virtual package - and the only
way APT would think that is if you don't have an APT source that
includes it, but you do have another APT source that references it.

Since the current version of request-tracker4 is available in both
the main Debian archive as well as the security repository (because
of a security update from last August, see DSA 3335-1), my suspicion
is that you _only_ have the security archive enabled in your
sources.list _or_ the download of the main sources.list failed for
some reason.

Therefore, answers to the following questions will allow us to figure
out what is wrong on your system:

 - what is your /etc/apt/sources.list?
   (Also, are there files in /etc/apt/sources.list.d?)
 - what happens if you do "apt-get update" or "apt update" or
   "aptitude update"?

Regards,
Christian





Re: x86_64 vs i386

2016-03-20 Thread Christian Seiler
On 03/20/2016 06:45 PM, Gene Heskett wrote:
> One of the problems I have is architecture related, synaptic thinks for 
> some unfathomable to me reason, that this is an i386 machine.  But its 
> not, currently running kernel 3.16.0-0.bpo.4-amd64, and no currently 
> installed 32 bit application has a problem.
> 
> But now all the browser coders have thrown i386 machines under the bus, 
> and I'm apparently stuck with the broken i386 stuff left behind.
> 
> How can I convince the package managers to search for x86_64 stuff in the 
> repos and install it.

Since you are using a backports 3.16 kernel, I assume you are using
Wheezy, which already understands Multi-Arch. In that case, just do

dpkg --add-architecture amd64

That will add 'amd64' as a secondary architecture to your system and
you can install packages from there (at least those that are
co-installable with your current set of packages).

Note that you need to do an "apt-get update" after this change.
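
Putting it all together (the package name in the last line is purely
illustrative; the :amd64 suffix selects the foreign architecture):

dpkg --add-architecture amd64
apt-get update
apt-get install somepackage:amd64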

Regards,
Christian





Re: Verify packages?

2016-02-26 Thread Christian Seiler
On 02/26/2016 03:05 PM, Hans wrote:
>> Please try (don't need to be root):
>> [...]
> great! This helped. It was tvbrowser and fakturama (both Debian/Ubuntu 
> packages and not from the repo) which interfered.
> 
> I moved teh md5sums out of the way during the test.

I would like to note two things:

 - You should try to find out _why_ those programs were causing
   problems: even third-party packages should not misbehave in such
   a way, and this might be an indication for further problems.

 - Irrespective of any of your troubles: note that dpkg --verify and
   debsums are not safe if you want to check against sophisticated
   rootkits. For example, if an attacker modifies the md5sums files
   themselves in addition to some binary (which is what debsums and
   dpkg --verify use), then these tools don't help (and there are
   other possible attacks). Of course, less sophisticated rootkits
   can be detected like that.

   The only truly secure way is to use a boot medium (CD, DVD or USB
   stick) that you've gotten from a trusted source, and then check
   your file system from there. Unfortunately, I don't know of any
   _easy_ way to do so, because while debsums has some options that
   facilitate this, I don't know of any utility that downloads the
   configured APT lists of a given installation, downloads the
   packages that are installed and then checks the installed system
   against those. (You can of course do all that manually to some
   extent, but it gets complicated; see the sketch after this list.)

   For known rootkits you can use the chkrootkit tool (available
   also as a Debian package), but that also has its limitations.
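
As a sketch of that manual route, reduced to a single package (the
package name is just an example; this assumes you booted the trusted
medium and mounted the suspect system at /mnt):

# fetch a known-good copy of the package from your configured mirror
apt-get download coreutils

# unpack its file system contents into a scratch directory
mkdir /tmp/ref
dpkg-deb --fsys-tarfile coreutils_*.deb | tar -C /tmp/ref -x

# compare the reference files against the suspect system
# ("Only in" lines for files from other packages are expected noise)
diff -r /tmp/ref/bin /mnt/bin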

Regards,
Christian





Re: Verify packages?

2016-02-26 Thread Christian Seiler
On 02/26/2016 12:01 PM, Hans wrote:
>> 'sudo dpkg --verify' will tell you what files have been altered from
>> the ones installed by packages.
> thanks for the advice. But it looks like dpkg has a bug. I get:
> 
> dpkg: error: control file 'md5sums' missing value separator
> 
> However, maybe a broken package causes this message.

Works fine on my system, so it will likely be a problem with one
of your packages. Maybe the system crashed while installing a
package at some point?

Please try (don't need to be root):

for fn in /var/lib/dpkg/info/*.md5sums ; do
  if grep -qvE '^[a-f0-9]{32}[ ][ ][A-Za-z0-9.].*' "$fn" ; then
    echo "$fn contains error"
  fi
done

(Explanation: grep -qvE tries to find lines in that file that don't
match the format "MD5 hash, 2 spaces, file name", which is how that
file is supposed to look. MD5 hashes are stored as 32 hex digits,
the [a-f0-9]{32} part; then [ ][ ] are the two spaces, put in
brackets because email clients love to swallow multiple adjacent
spaces when copying; and [A-Za-z0-9.].* matches any file name that
starts with a letter, number or dot. The -q option tells grep to be
silent and just indicate via its exit code whether it found
something, the -v option inverts the match, so it only looks for
lines that _don't_ match the pattern, and the -E option enables the
extended regular expression syntax I've used here.)

This will tell you which file (from which you can determine the
package name) contains an error. (What you do from there will depend
on what exactly is wrong.)
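
If it turns out to be a half-written md5sums file (e.g. from a crash
during package installation), reinstalling the affected package
usually regenerates it; with a hypothetical package name:

apt-get install --reinstall somepackage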

Hope that helps.

Regards,
Christian





Re: Warning Linux Mint Website Hacked and ISOs replaced with Backdoored Operating System

2016-02-25 Thread Christian Seiler
On 02/25/2016 03:07 PM, Stefan Monnier wrote:
>> MD5 alone can be somewhat dangerous even in benevolent environments: if the
>> data sets are large enough or you are just unlucky, you are going to hit a
>> colision and corrupt-or-lose-data-on-dedup sooner or later.
> 
> [G]it doesn't seem worried about this.  Admittedly, they use sha1 rather
> than md5, so they have 160bit instead of 128bit, with a correspondingly
> lower probability of collisions, but I'd be interested to know about
> cases where md5 lead to accidental collisions.

Well, I wouldn't necessarily use that as a benchmark: git could have used
SHA2-256 from the start - it's not like SHA2 is something brand new, it
was already 4 years old when git was developed.

I haven't heard of any _accidental_ collision of either MD5 or SHA1 so
far, but I might be mistaken. (There are of course famous _intentional_
collisions in MD5.)

From a mathematical standpoint: if we assume that the values a hash may
produce are uniformly distributed and cover the entire range of possible
outputs, then due to the birthday paradox an accidental collision is
expected after roughly 2^(bitsize/2) inputs; for MD5 that would be 2^64
inputs, for SHA1 it would be 2^80.
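
In concrete numbers (a quick check with bc):

echo '2^64' | bc   # MD5 birthday bound:  18446744073709551616 (~1.8e19)
echo '2^80' | bc   # SHA1 birthday bound: 1208925819614629174706176 (~1.2e24)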

Whether you can live with that is up to you.

Regards,
Christian





Re: Warning Linux Mint Website Hacked and ISOs replaced with Backdoored Operating System

2016-02-24 Thread Christian Seiler
On 02/24/2016 01:48 PM, Nicolas George wrote:
> On sextidi, 6 Ventôse, year CCXXIV, Christian Seiler wrote:
>> Yes, I know what an HMAC is. But an HMAC is _utterly_ useless for a
>> digital signature.
> 
> Please stop commenting the finger when I try to show you the moon.

The problem is that you were being extremely vague. And if you are
not actually discussing the merits of the issue itself, but talking
about different use cases with different threat models, then you
can't come back and complain that you don't get your points across.

> You want an actual example of attack that a proper signing protocol
> prevents?

Yes, precisely. That is what I was asking for in the first place.

> 1. Alice generates harmless.iso and harmful.iso with a hash collision.
> 2. Bob generates harmless.iso.sha and signs it as harmless.iso.sha.sign.
> 3. Alice replaces harmless.iso by harmful.iso.
> 4. Eve checks the signature, the signature is valid.
> 
> Compare to:
> 
> 2. Bob signs harmless.iso as harmless.iso.sign.
> 3. Alice replaces harmless.iso by harmful.iso.
> 4. Eve checks the signature, the signature is invalid, the attack is foiled.
> 
> The principle is that a proper signing protocol needs to include in the
> hashed message parts that the attacker can not control.

Thank you. Why didn't you say so from the beginning?

Yes, under that threat model (attacker controls what is signed)
using a simple hash is problematic.

And yes, preimage attacks are harder than collision attacks (as
we've seen with MD5).

Still, I don't think that's reason enough to discourage people from
signing hash lists, because:

 1. In practical terms there's no known _feasible_ collision attack
for the SHA-2 family out there.

 2. Current software doesn't actually do what you are suggesting.
For example, the only data added before hashing by GnuPG that's
_slightly_ difficult to predict is the timestamp. (OpenPGP
doesn't actually specify a subpacket type for something like a
salt.) The rest of the stuff added to the hash function is
completely predictable. (And it's appended, so as per your
comment about MD5, for at least that hash and possibly others,
it's also useless against such an attack.)

Yes, doing it differently (e.g. also signing each individual file)
would improve the security properties, but I don't think relying on
collision attack resistance for a decent hash function is _that_
problematic. (And it's not like there weren't warnings against MD5
long before the known attacks were published; that people were still
using MD5 on such a widespread basis, even after it was expected to
be broken in the not too distant future, is a different problem.)

But to be more constructive:

Let's say one creates a tool akin to sha256sum that does the
following (let's call it ssha256sum):

 - Generate two sufficiently large random numbers S1, S2
   (Let's say 128bit each.)
 - Output first line: those random numbers
 - For each file generate output similar to sha256sum, but do
   SHA256(S1 || filecontents || S2) instead

If one just signs the resulting file, in your opinion, would that
offer the same guarantees for the contents of all of the files as
signing the individual files themselves?

Regards,
Christian





Re: Warning Linux Mint Website Hacked and ISOs replaced with Backdoored Operating System

2016-02-24 Thread Christian Seiler
>> So a valid way to construct an OpenPGP v4 signature would be to
>> use 
>>
>> H(contents || 0x04 0x00 0x01 0x08 0x00 0x00)
>>
>> as the input for the RSA algorithm (and then pack that up in a
>> nice OpenPGP packet).
> 
> I did not have the reference of what OpenPGP does near at hand, I was more
> referring to the way hashes are used to build HMACs:
> H(K^opad || H((K^ipad) || message))
> 
> As you can see, choosing the message does not leave you enough freedom to
> exploit a collision.

Yes, I know what an HMAC is. But an HMAC is _utterly_ useless for a
digital signature. An HMAC relies on a shared _secret_ (in your
notation K) between parties to authenticate data. And I fully agree:
an HMAC is much less susceptible to hash function weaknesses than
other uses of hash functions.

The problem is: a digital signature in asymmetric cryptography cannot
(by definition) rely on a shared secret, because it must be verifiable
with publicly available information.
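
For contrast, this is what using an HMAC looks like in practice -
note that both sides need the same secret key, which is exactly what
an offline signature cannot assume (key and file name here are
placeholders):

openssl dgst -sha256 -hmac 'shared-secret-key' somefile.iso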

>> Basically: if there's a preimage attack against a hash, you are
>> able to forge OpenPGP signatures on files using that hash function
>> in the same way as you are able to break a simple hash.
> 
> Between a hash that has no known attacks and a full working preimage attack,
> there is a lot of space for limited collision attacks. Even if it does not
> apply to the simplified example I gave, a well crafted protocol will be able
> to mitigate some of these attacks. Not all, not perfectly, but still some,
> and that is better than nothing.

If you look at current cryptosystems:

 - TLS certificates (the only part of TLS that _is_ offline signed)
   use plain hashes over the data to be signed. (See RFC 3447; it
   then does more complicated things _with_ that hash, but if you
   have a collision, then you can forge a signature.)

 - OpenPGP uses a hash of the data to be signed plus a trailer
   (see my previous email), etc.

(RFC 3447 even says: "For the signature schemes in this document, a
collision attack is easily translated into a signature forgery.")

For the integrity of an _encrypted_ message (where there is a secret)
OpenPGP does use something akin to an HMAC (they call it MDC), which
does not require the same level of resistance to collisions. But that
doesn't apply to pure signatures.

[ Original thing I responded to ]
>> If the SHA512SUMS.sign
> Stop right there. Signing a bunch of hashes is a beginner's mistake, I 
> have
> already emphasized that in this thread. It is rather sad that Debian made
> that mistake.

And I still haven't seen any reasoning why that is the case. All
you've done is claim that a digital signature of the contents of a
file is somehow more secure than the signature of a hash. When pressed
for explanations, you simply claimed:

>>> The protocol used for cryptographic signatures builds and encrypts the hash
>>> in a way that protects from most attacks against the hash algorithm itself.

When asked for specifics, you mentioned unrelated examples such as
scrambling passwords and HMACs, both of which do not apply to digital
signatures such as made by Debian. Don't get me wrong, I agree that
simply using a hash is sometimes the wrong thing to do, and your
reasoning applies to the examples you provided, but not the use case
you were initially referring to.

But again: you claimed that Debian is making a mistake with the way
APT handles signatures, because it's similar to SHA256SUMS.asc. I
still haven't heard any explanation as to why that is a mistake - and
all you've brought up are examples as to why for _other_ use cases
there are good protocol designs that don't rely so heavily on hash
function properties.

What good design is there for offline asymmetric digital signatures?
A keyword would suffice, as long as it's specific enough.

But even if I grant you that there are algorithms for signatures out
there that don't suffer from hash collisions: it's one thing to point
out that using SHA256SUMS.asc relies on the collision resistance of
SHA256, it's another to say that using a scheme like that is a mistake.
At the moment SHA256 is considered to be collision resistant for
crypto purposes by the broader community, and while it's objectively
better to have something that doesn't rely on collision resistance (if
all else is equal), it does not mean that signing a list of hashes is
inherently problematic at the moment, as long as the hash function
you're using is sufficiently strong. So instead of using
derisive language such as "beginner's mistake" (which is laughable,
because I wouldn't call the people designing modern crypto we all rely
on beginners), you could be much more specific and say "here, look,
there's a much better solution out there called X and it will improve
what you're doing". That would be far more constructive. Just
saying...

Regards,
Christian





Re: Warning Linux Mint Website Hacked and ISOs replaced with Backdoored Operating System

2016-02-23 Thread Christian Seiler
On 02/23/2016 07:52 PM, Nicolas George wrote:
> What you quote is about signing a summary of files at once versus signing
> each file individually. This is not what I was talking about. What I was
> talking about was signing the file contents itself versus signing the hash
> of the file.

But if you say what Debian is doing is a mistake, then this _is_ what
you are talking about.

> The protocol used for cryptographic signatures builds and encrypts the hash
> in a way that protects from most attacks against the hash algorithm itself.
> For example, even though we know how to craft MD5 collisions easily, these
> collisions can not be used to make fake signatures even with protocols that
> use MD5, because these protocols make sure that variable octets are inserted
> in front of the payload.

This is decidedly not true when we are talking about signing files. If
you look at RFC 4880 [1], if you sign a normal file ("binary document",
type 0x00), what is signed with the asymmetric algorithm is [2]:

asym_input = H(contents || trailer)

In OpenPGP v4, the trailer is given by [3]

trailer = 0x04 || sigtype || pkalg || hashalg || hsplen || hspdata
sigtype = 0x00 (binary document)
pkalg   = public key algorithm, e.g. 0x01 (RSA)
hashalg = hash algorithm, e.g. 0x08 (SHA256)

hsplen: two octets denoting the length of hspdata (may be 0)
hspdata: subpackets, containing e.g. signature date and time

So a valid way to construct an OpenPGP v4 signature would be to
use 

H(contents || 0x04 0x00 0x01 0x08 0x00 0x00)

as the input for the RSA algorithm (and then pack that up in a
nice OpenPGP packet).

Furthermore, what you are saying doesn't make much sense: even if
you add some random nonce (maybe in a subpacket, which at least
GnuPG doesn't appear to do), what does that protect against?

If you have a signature by a given key, and you have a feasible
preimage attack against the hash algorithm used in that specific
signature, you only need to find a preimage collision against your
chosen file contents concatenated with the trailer used in that
specific signature. Since _any_ preimage attack needs to deal with
prefixes and suffixes anyway (file formats that are processed have
certain things that need to exist in the right places for programs
processing these files to accept them), I don't see how that makes
this any more difficult.

Basically: if there's a preimage attack against a hash, you are
able to forge OpenPGP signatures on files using that hash function
in the same way as you are able to break a simple hash.

The _only_ way I can see that a signed simple hash is less secure
than a direct signature is that you can play around with padding,
which will reduce the effort for preimage attacks. For this reason
Debian also encodes (and checks) the length of the files in the
archive summary files.

> The same goes for other uses of crypto primitives. Consider scrambled
> passwords for example? Do you agree that just hashing a password is a
> beginner's mistake? A correct protocol requires a random salt: that way,
> password can be checked individually, but not brute-forced collectively; if
> attackers gets hold of a database of millions of scrambled passowrds, with a
> proper salt they have to build a dictionary attack for each password,
> without a salt they can attack the whole database at once.

Actually, scrambling passwords is more complicated than what you
describe, and simply salting them is still not sufficient: you need
to actually make brute-forcing hard (because the entropy of a
password is typically low), so you want something that is
computationally expensive and not easily parallelized on GPUs and
such, e.g. scrypt or bcrypt.
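
For instance, bcrypt is readily available via htpasswd from the
apache2-utils package (user name and password are placeholders; -B
selects bcrypt, -n prints the result instead of writing a file):

htpasswd -nbB alice 'correct horse battery staple'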

> All this is quite orthogonal to the issues you quoted, which were about
> signing not only the files' individual contents but also the files' metadata
> and the global information about what files are in the collection. Signing
> the hash file protects against that but is wide open for collision attacks.
> A proper protocol can protect both.

OpenPGPv4 can't protect against hash collisions (see my reasoning
above) - and I seriously doubt you could construct a mechanism that
works offline which does protect against hash collisions.

There are _other_ use cases where collision resistance of a hash is
not that important; preimage attacks are still quite expensive, so
attacking HMACs with session keys for a live TLS session (as long
as rekeying happens often enough) is probably not feasible even for
weaker hash functions. But you were saying that Debian's use of
hash lists is a mistake, and I still don't see how that is the case.

Regards,
Christian

[1] http://tools.ietf.org/html/rfc4880
[2] http://tools.ietf.org/html/rfc4880#section-5.2.4
"For binary document signatures (type 0x00), the document data
is hashed directly." [...] "Once the data body is hashed, then
a trailer is hashed."
[3] http://tools.ietf.org/html/rfc4880#section-5.2.4
"A V4 signature hashes the packet body starting from its first
field, the version number, through the end of the hashed
subpacket data."

Re: Warning Linux Mint Website Hacked and ISOs replaced with Backdoored Operating System

2016-02-23 Thread Christian Seiler
On 02/23/2016 04:49 PM, Nicolas George wrote:
> On quintidi, 5 Ventôse, year CCXXIV, Thomas Schmitt wrote:
>> If the SHA512SUMS.sign
> 
> Stop right there. Signing a bunch of hashes is a beginner's mistake, I have
> already emphasized that in this thread.

You have _emphasized_ it, but you haven't _explained_ it, nor provided
any search term one could use to look up an explanation for it.

> It is rather sad that Debian made that mistake.

Why is what Debian does a mistake? Debian stores both the hash value
and the file size in the Packages, Sources and Release files. (Packages
references e.g. the .deb packages, Release references the "Packages"
file and Release itself is signed.) Assuming that there's no feasible
preimage attack against the hash function, and the file containing the
hashes + sizes is signed via GnuPG, how is that problematic, as long as
you check everything along the way?
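
Spelled out by hand, that chain looks roughly like this (jessie-era
archive layout with a detached Release.gpg, run in a directory
containing the two files):

# 1. verify the signature over the Release file
gpg --verify Release.gpg Release

# 2. Release lists hash + size for each Packages file; show a few:
grep -A 3 '^SHA256:' Release

# 3. each Packages file in turn lists hash + size for every .deb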

Also note that the Tor project (and I believe they do know something
about security) uses hash lists for reproducibility:
https://www.torproject.org/docs/verifying-signatures.html.en
(To be fair, they also sign each file individually, but the
instructions to verify builds w.r.t. reproducibility specifically
talk about hash lists.)

Also, note:
http://crypto.stackexchange.com/questions/24224/signing-files-vs-signing-file-hashes
The person writing the top answer to that question has his own blog
about cryptography: https://www.chosenplaintext.ca/

Since what you are talking about is apparently non-obvious to people
who do crypto for a living, your characterization of _beginner's_
mistake is definitely wrong. If it's a mistake, it's apparently a
highly non-trivial one.

Therefore, could you please provide some reasoning for your claim
that what Debian does is a mistake?

Regards,
Christian





Re: pam_smbpass.so

2016-02-22 Thread Christian Seiler
On 02/18/2016 02:49 AM, Joe Pfeiffer wrote:
> Christian Seiler <christ...@iwakd.de> writes:
>> Just a hunch: do you run dovecot chroot'ed? If so, then it is most
>> likely the case that the specific PAM module is not available within
>> the chroot and that's why it produces that message.
> 
> No, it isn't chrooted -- if it were, I'd expect the other pam modules to
> give the same issues (for that matter, I'd expect it to not be able to
> find pam.d!).

So I just looked a bit at the PAM source code and found the following:

1. the message you see is generated from libpam/pam_handlers.c [1] from
   within the function _pam_load_module, using the mod_path argument
   passed to that function (which is not modified)

2. the function _pam_load_module is only called from _pam_add_handler,
   which calls it in two cases [2]:

a. module name starts with a /, then it uses that directly
b. module name doesn't start with a /, then it prepends
   DEFAULT_MODULE_PATH

   In Debian, DEFAULT_MODULE_PATH is /lib/<multiarch triplet>/security
   (set via debian/rules --libdir=/lib/<multiarch triplet> for
   dh_auto_configure [3], then used by configure.in as the default
   argument for --enable-securedir if that's not specified [4], which
   it isn't in debian/rules, and then used by Makefile.am to specify
   the variable to the C source [5]).

[1] http://sources.debian.net/src/pam/1.1.8-3.2/libpam/pam_handlers.c/#L705
[2] http://sources.debian.net/src/pam/1.1.8-3.2/libpam/pam_handlers.c/#L760
[3] http://sources.debian.net/src/pam/1.1.8-3.2/debian/rules/#L30
[4] http://sources.debian.net/src/pam/1.1.8-3.2/configure.in/#L274
[5] http://sources.debian.net/src/pam/1.1.8-3.2/libpam/Makefile.am/#L5

If I look at your configuration file, we clearly have 

> # and here are more per-package modules (the "Additional" block)
> auth    optional    pam_mount.so
> auth    optional    pam_smbpass.so migrate

so pam_smbpass.so is specified with a relative path, which means code
path 2(b) should be taken and the error you see shouldn't appear.

This is _really_ weird, especially since (as you said) the other
modules should also be affected...

I'm drawing a blank, sorry. Other than stracing the dovecot auth
process and hoping you find something in the (presumably huge) log
of that, I don't have any idea on how to debug this. Sorry.
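
If you do go the strace route, something along these lines should cut
the log down to the interesting part (the PID is a placeholder for
the dovecot auth process):

strace -f -e trace=open,openat -p 12345 2>&1 | grep -i pam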

Regards,
Christian





Re: dovecot -- Require different setting for mail_location for each of POP3S and IMAPS protocols

2016-02-22 Thread Christian Seiler
On 02/22/2016 06:00 PM, Andrew McGlashan wrote:
> I've tried getting this answered on dovecot mailing list, but not
> having success so far; so I'm trying here too now (considering it is a
> Debian system that was upgraded from squeeze-lts to wheezy).

Not tested, but you could try the following (10-mail.conf): set
location = Maildir in the "namespace private", but set
mail_location = mbox globally. Since namespaces are an IMAP feature,
it might be the case that the POP3 server doesn't evaluate the
namespace stuff at all, and then you'd have two separate settings.

No idea if that will actually work.
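
In config form, that first suggestion would look something like this
in 10-mail.conf (untested, and the paths are placeholders):

mail_location = mbox:~/mail:INBOX=/var/mail/%u

namespace private {
  inbox = yes
  location = maildir:~/Maildir
}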

Alternatively, if that doesn't work out, the 'mail' field in userdb
always overwrites mail_location. And dovecot does replace '%s' with
the service that's accessing the userdb, so what you could do is
use the sqlite driver of dovecot, set the connection path to a non-
existent file (or an empty sqlite database) and use

user_query = SELECT CASE WHEN 'pop3' == '%s' THEN ('mbox:.../' || '%u') \
  ELSE ('Maildir:.../' || '%n') END AS mail, '%n' AS uid ;

Since userdb and passdb are separate, you should be able to get
away with that.

(Unfortunately, using sqlite is the closest I could find to having
generic scripting support for this kind of thing.)

Also not tested, also no idea if that will actually work.

Regards,
Christian





Re: pam_smbpass.so

2016-02-17 Thread Christian Seiler
Hi,

On 02/17/2016 05:11 PM, Joe Pfeiffer wrote:
> Christian Seiler writes:
>> [Suggesting journalctl -o verbose to debug this]
> I'm running a current Debian testing installation, and journal is
> enabled.
> 
> It turns out it's only coming from /usr/lib/dovecot/auth.  What's
> weird is in /etc/pam.d/, the only files using the module are
> common-auth and common-password, so I'd expect to see the error coming
> either every time someone authenticates through anything, or any time
> someone changes their password, and I'm not seeing either of those
> cases -- just dovecot.

Just a hunch: do you run dovecot chroot'ed? If so, then it is most
likely the case that the specific PAM module is not available within
the chroot and that's why it produces that message.
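
A quick way to check (the chroot path is a placeholder):

# does the module exist inside the chroot?
ls -l /path/to/chroot/lib/*/security/pam_smbpass.so

# compare with the module on the host
ls -l /lib/*/security/pam_smbpass.so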

If that's not the case: what's the contents of /etc/pam.d/dovecot?
And /etc/pam.d/common-auth?

Regards,
Christian




