Re: [systemd-devel] [survey] BTRFS_IOC_DEVICES_READY return status

2015-06-15 Thread Lennart Poettering
On Sat, 13.06.15 17:35, Anand Jain (anand.j...@oracle.com) wrote:

 Are there any other users?
 
- If the device in the argument is already mounted,
  can it straightaway return 0 (ready)? (As of now it would
  again independently read the SB to determine total_devices
  and check against num_devices.)
 
 
 I think yes; the obvious use case is btrfs mounted in the initrd and later
 coldplug. There is no point in waiting for anything, as the filesystem is
 obviously there.
 
 
  There is little difference if the device is already mounted
  and there are two device paths for the same device, PA and PB.
  The path last given to either 'btrfs dev scan' (BTRFS_IOC_SCAN_DEV)
  or 'btrfs device ready' (BTRFS_IOC_DEVICES_READY) will be shown
  in the 'btrfs filesystem show' or /proc/self/mounts output.
  It does not mean that the btrfs kernel will close the first device path
  and reopen the 2nd given device path; it just updates the device path
  in the kernel.

The device paths shown in /proc/self/mountinfo are also weird in other
cases: if people boot up without an initrd, and use a btrfs fs as root,
then it will always carry the string /dev/root in there, which is
completely useless, since such a device never exists in userspace or
/sys, and hence one cannot make sense of it. Moreover, if one then asks
the kernel for the devices backing the btrfs fs via the ioctl it will
also return /dev/root for it, which is really useless.

I think in general I'd prefer if btrfs would stop returning the device
paths it got from userspace or the kernel, and would always return
sanitized ones that use the official kernel names for the devices in
them. Specifically, the member devices ioctl should always return
names like /dev/sda5, even if I mount something using root= on the
kernel cmdline, or if I mount /dev/disk/by-uuid/ via a symlink
instead of the real kernel name of the device.

Then, I think it would be a good idea to always update the device
string shown in /proc/self/mountinfo to be a concatenated version of
the list of device names reported by the ioctl. So that a btrfs RAID
would show /dev/sda5:/dev/sdb6:/dev/sdc5 or so. And if I remove or
add backing devices the string really should be updated.

The btrfs client side tools could then use udev to get a list of the
device node symlinks for each device, to help the user identify
which backing devices belong to a btrfs pool.

Lennart

-- 
Lennart Poettering, Red Hat
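
For context, the kernel-reported member device list Lennart refers to is
reachable from userspace via BTRFS_IOC_FS_INFO plus BTRFS_IOC_DEV_INFO. A
minimal C sketch (a hypothetical standalone tool, assuming only the standard
linux/btrfs.h uapi header) that prints whatever path string the kernel
currently holds, e.g. /dev/root in the initrd-less case described above:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/btrfs.h>

    int main(int argc, char *argv[]) {
            struct btrfs_ioctl_fs_info_args fi = {};
            int fd;

            if (argc != 2)
                    return 1;

            /* argv[1] is the mount point of the btrfs filesystem */
            fd = open(argv[1], O_RDONLY|O_DIRECTORY);
            if (fd < 0 || ioctl(fd, BTRFS_IOC_FS_INFO, &fi) < 0)
                    return 1;

            for (uint64_t id = 0; id <= fi.max_id; id++) {
                    struct btrfs_ioctl_dev_info_args di = { .devid = id };

                    if (ioctl(fd, BTRFS_IOC_DEV_INFO, &di) < 0) {
                            if (errno == ENODEV)
                                    continue; /* hole in the devid space */
                            return 1;
                    }
                    /* di.path is whatever string the kernel has, not
                     * necessarily a sanitized kernel device name */
                    printf("devid %llu: %s\n",
                           (unsigned long long) di.devid, (char *) di.path);
            }
            close(fd);
            return 0;
    }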


Re: [systemd-devel] [survey] BTRFS_IOC_DEVICES_READY return status

2015-06-15 Thread Lennart Poettering
On Fri, 12.06.15 21:16, Anand Jain (anand.j...@oracle.com) wrote:

 
 
 BTRFS_IOC_DEVICES_READY is to check if all the required devices
 are known by the btrfs kernel, so that admin/system-application
 could mount the FS. It is checked against a device in the argument.
 
 However the actual implementation is a bit more than just that,
 in the way that it would also scan and register the device
 provided in the argument (same as the btrfs device scan subcommand
 or BTRFS_IOC_SCAN_DEV ioctl).
 
 So the BTRFS_IOC_DEVICES_READY ioctl isn't a read/view-only ioctl,
 but a write command as well.
 
 Next, in the kernel we only check whether total_devices
 (read from the SB) is equal to num_devices (counted in the list)
 to report the status as 0 (ready) or 1 (not ready). But this
 does not work in the rest of the device pool states like missing,
 seeding, and replacing, since total_devices is actually not equal
 to num_devices in those states even though the device pool is ready
 for the mount; that's a bug, which is not part of this discussion.
 
 
 Questions:
 
  - Do we want the BTRFS_IOC_DEVICES_READY ioctl to also scan and
register the device provided (same as the btrfs device scan
command or the BTRFS_IOC_SCAN_DEV ioctl),
OR can BTRFS_IOC_DEVICES_READY be a read-only ioctl interface
to check the state of the device pool?

I am pretty sure the kernel should not change API on this now. Hence:
stick to the current behaviour, please.

  - If the device in the argument is already mounted,
can it straightaway return 0 (ready)? (As of now it would
again independently read the SB to determine total_devices
and check against num_devices.)

Yeah, I figure that might make sense to do.

  - What should be the expected return when the FS is mounted
and there is a missing device.

An error, as it already does.

I am pretty sure that mounting degraded file systems should be an
exceptional operation, and not the common scheme. If it should happen
automatically at all, then it should be triggered by some daemon or
so, but not by udev/systemd.

Lennart

-- 
Lennart Poettering, Red Hat
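
For context, the caller side of the ioctl under discussion is tiny. A minimal
C sketch (a hypothetical standalone tool, assuming only the linux/btrfs.h
uapi header) of how udev-style code asks "are all member devices ready?" -
which, as discussed above, also registers the device as a side effect:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/btrfs.h>

    int main(int argc, char *argv[]) {
            struct btrfs_ioctl_vol_args args = {};
            int fd, r;

            if (argc != 2)
                    return 1;

            fd = open("/dev/btrfs-control", O_RDWR|O_CLOEXEC);
            if (fd < 0)
                    return 1;

            strncpy(args.name, argv[1], sizeof(args.name) - 1);

            /* 0 = all member devices known (ready), 1 = not ready yet.
             * Note: this also *registers* argv[1], it is not read-only. */
            r = ioctl(fd, BTRFS_IOC_DEVICES_READY, &args);
            printf("%s: %s\n", argv[1], r == 0 ? "ready" : "not ready");

            close(fd);
            return r != 0;
    }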


Re: [systemd-devel] [survey] BTRFS_IOC_DEVICES_READY return status

2015-06-15 Thread Lennart Poettering
On Sat, 13.06.15 17:09, Goffredo Baroncelli (kreij...@libero.it) wrote:

  Further, the problem will be more intense in this e.g.: if you use dd
  to copy device A to device B, then after you mount device A, just
  providing device B in the above two commands lets the kernel
  update the device path. All the IO (since the device is mounted)
  still goes to device A (not B), but /proc/self/mounts and
  'btrfs fi show' show it as device B (not A).
  
  It's a bug, and very tricky to fix.
 
 In the past [*] I proposed a mount.btrfs helper. I tried to move the logic
 outside the kernel.
 I think that the problem is that we try to manage all these cases
 from a device point of view: when a device appears, we register the
 device and we try to mount the filesystem... This works very well
 when there is a 1-volume filesystem. For the other cases there is a
 mess between the different layers:

 - kernel
 - udev/systemd
 - initrd logic
 
 My attempt followed a different idea: the mount helper waits for the
 devices if needed, or, if called for, it mounts the filesystem in
 degraded mode. All devices are passed as mount arguments
 (--device=/dev/sdX), and there is no device registration: this avoids
 all these problems.

Hmm, no. /bin/mount should not block for devices. That's generally
incompatible with how the tool is used, and in particular from
systemd. We would not make use of such a scheme in
systemd. /bin/mount should always be short-running.

I am pretty sure that if such automatic degraded mounting should be
supported, then this should be done with some background storage
daemon that alters the effect of the READY ioctl somehow after the
timeout, and then retriggers the devices so that systemd takes
note. (Or, alternatively, such a scheme could even be implemented all
in the kernel, based on some configurable kernel setting...)

Lennart

-- 
Lennart Poettering, Red Hat


[systemd-devel] Can kdbus send signal to the source connection?

2015-06-15 Thread eshark
Hi, All,
   If I posted this to the wrong mailing list, please tell me. Thank you.

   Many JS applications now implement the client and the service in the same
thread, so they share the same connection too.

However, when the client or the service wants to send a signal to the other,
the receiver cannot get the signal, because the kdbus driver won't broadcast
the signal back to the source connection.

  I've tried simply allowing the kdbus driver to send signals to all
connections, including the source, but it does not seem to work correctly.

I wonder how I can make kdbus send a signal to the source connection, or
whether this is impossible?

Thanks a lot!


Best Regards,

Li Cheng 


[systemd-devel] systemd is trying to break mount ordering

2015-06-15 Thread Jan Synáček

I have the following setup on a freshly updated Fedora Rawhide machine
with systemd-220-9.fc23.x86_64.

# cat /etc/fstab
[comments left out]
UUID=d5ac823b-d0bd-4f7f-bf4b-5cc82d585a92 /     btrfs   subvol=root 0 0
UUID=ec79f233-055c-40fa-98e5-e2d77314913a /boot ext4    defaults    1 2
UUID=d5ac823b-d0bd-4f7f-bf4b-5cc82d585a92 /home btrfs   subvol=home 0 0
192.168.122.1:/srv/nfs /mnt/nfs nfs defaults 0 0
/var/tmp/test.iso /mnt/nfs/content iso9660 loop,ro 0 0

Notice the last two lines. There is an NFS mount mounted to /mnt/nfs and
an ISO filesystem mounted into /mnt/nfs/content, which makes it
dependent on the NFS mount.

After booting the machine, there are the following lines in the journal:

[snip...]

Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on local-fs.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on mnt-nfs-content.mount/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on mnt-nfs.mount/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on network.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on firewalld.service/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Breaking ordering cycle by deleting job sockets.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: sockets.target: Job sockets.target/start deleted to break ordering cycle starting with firewalld.service/start
Jun 15 10:37:55 rawhide-virt systemd[1]: network.target: Found ordering cycle on network.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: network.target: Found dependency on systemd-networkd.service/start
Jun 15 10:37:55 rawhide-virt systemd[1]: network.target: Found dependency on dbus.service/start
Jun 15 10:37:55 rawhide-virt systemd[1]: network.target: Found dependency on dbus.socket/start
Jun 15 10:37:55 rawhide-virt systemd[1]: network.target: Found dependency on sysinit.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: network.target: Found dependency on fedora-autorelabel-mark.service/start
Jun 15 10:37:55 rawhide-virt systemd[1]: network.target: Found dependency on local-fs.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: network.target: Found dependency on mnt-nfs-content.mount/start
Jun 15 10:37:55 rawhide-virt systemd[1]: network.target: Found dependency on mnt-nfs.mount/start
Jun 15 10:37:55 rawhide-virt systemd[1]: network.target: Found dependency on network.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: network.target: Breaking ordering cycle by deleting job systemd-networkd.service/start
Jun 15 10:37:55 rawhide-virt systemd[1]: systemd-networkd.service: Job systemd-networkd.service/start deleted to break ordering cycle starting with network.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found ordering cycle on firewalld.service/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on basic.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on dnf-makecache.timer/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on sysinit.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on fedora-autorelabel-mark.service/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on local-fs.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on mnt-nfs-content.mount/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on mnt-nfs.mount/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on network.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on firewalld.service/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Breaking ordering cycle by deleting job dnf-makecache.timer/start
Jun 15 10:37:55 rawhide-virt systemd[1]: dnf-makecache.timer: Job dnf-makecache.timer/start deleted to break ordering cycle starting with firewalld.service/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found ordering cycle on firewalld.service/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on basic.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on sysinit.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on fedora-autorelabel-mark.service/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on local-fs.target/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on mnt-nfs-content.mount/start
Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on mnt-nfs.mount/start
Jun 

Re: [systemd-devel] Improve boot-time of systemd-based device, revisited

2015-06-15 Thread Harald Hoyer
On 14.06.2015 15:17, cee1 wrote:
 Hi all,
 
 I've recently got another chance to improve the boot-time of a
 systemd-based device. I'd like to share the experience here, and some
 thoughts and questions.
 
 The first time I tried to improve the boot-time of systemd:
 http://lists.freedesktop.org/archives/systemd-devel/2011-March/001707.html,
 after that, we have systemd-bootchart and systemd-analyze, which help
 a lot.
 
 It seems the biggest challenge in reducing the boot-time of the ARM board
 at hand is taking care of the poor I/O performance:
 * A single fgets() call may randomly block for 200-300ms
 * A (big) service may spend 2-3s completing its .so loading - only
 ~100ms of that is spent on CPU.
 
 I tried to first delay services which are less important, to save
 I/O bandwidth in the early stage, and to raise the priority of important
 services to SCHED_RR/IOPRIO_CLASS_RT:
 1. I need to find the top I/O-hungry processes (and then delay them
 if not important), but it's not straightforward to figure this out in
 bootchart, so adding an *** iotop feature *** to bootchart seems very
 useful.
 
 2. I think raising CPU scheduling priority works because it reduces
 the chances of other processes issuing I/O requests. Some thoughts:
 * The priority feature of the I/O scheduler (CFQ) seems not to work very
 well - IDLE I/O can still slow down Normal/RT I/O [1]
 * I don't know the details of CFQ, but I wonder whether a rate limit
 would help - it may reduce the latency between issuing an I/O command and
 fulfilling it?
 
 Lastly, I tried some readahead (ureadahead), but it did not do the magic;
 I guess that is because I/O is busy in the early stage, so there is simply
 no chance to read ahead.
 What helps readahead, IMHO, is a snapshot of the disk blocks accessed
 during boot-up, in the order they are requested. A linear readahead
 against that snapshot will always read ahead of the actually requested
 blocks.
 
 BTW, systemd-bootchart has an option to chart entropy; how is
 entropy involved in the boot-up procedure?

Well, if daemons need bytes from /dev/random (think sshd key generation), I
guess they will have to wait for enough entropy, and so does the boot process
in the end.

 
 
 
 ---
 1. 
 http://linux-kernel.vger.kernel.narkive.com/0FC8rduf/ioprio-set-idle-class-doesn-t-work-as-its-name-suggests
 
 
 Regards,
 
 - cee1
 

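
On the SCHED_RR/IOPRIO_CLASS_RT approach quoted above: with systemd these
priorities can be set declaratively per service instead of patching the
daemon. A sketch of a drop-in (hypothetical unit name and priority values;
the directives themselves are documented in systemd.exec(5)):

    # /etc/systemd/system/important.service.d/priority.conf
    [Service]
    # SCHED_RR with a mid-range realtime priority
    CPUSchedulingPolicy=rr
    CPUSchedulingPriority=50
    # IOPRIO_CLASS_RT, highest level within the class
    IOSchedulingClass=realtime
    IOSchedulingPriority=0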


Re: [systemd-devel] systemd is trying to break mount ordering

2015-06-15 Thread Uoti Urpala
On Mon, 2015-06-15 at 13:24 +0200, Jan Synáček wrote:
 
 192.168.122.1:/srv/nfs /mnt/nfs nfs defaults 0 0
 /var/tmp/test.iso /mnt/nfs/content iso9660 loop,ro 0 0
 
 Notice the last two lines. There is an NFS mount mounted to /mnt/nfs 
 and
 an ISO filesystem mounted into /mnt/nfs/content, which makes it
 dependent on the NFS mount.
 


 Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on local-fs.target/start
 Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on mnt-nfs-content.mount/start
 Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on mnt-nfs.mount/start
 Jun 15 10:37:55 rawhide-virt systemd[1]: firewalld.service: Found dependency on network.target/start


 Isn't systemd trying to delete too many jobs while resolving the 
 cycles?

I don't think the cycle breaking is to blame. It's simple and only
considers one cycle at a time, but in this case I doubt there is any
good solution to be found. The cycle breaking only
potentially breaks non-mandatory dependencies (Wants). local-fs.target
dependencies on mounts and (probably, didn't check) dependencies
between mounts are Requires, so the dependency that's arguably wrong
here cannot be broken. Once local-fs.target gets a hard dependency on
the network, the situation is already pretty bad, and you probably shouldn't
expect it to recover gracefully from that.




Re: [systemd-devel] Can kdbus send signal to the source connection?

2015-06-15 Thread David Herrmann
Hi

On Mon, Jun 15, 2015 at 4:43 PM, Simon McVittie
simon.mcvit...@collabora.co.uk wrote:
 On 15/06/15 15:32, Lennart Poettering wrote:
 Did I get this right, you have one bus connection per thread, but
 possibly both a kdbus client and its service run from the server, and
 you want broadcast msgs sent out from one to then also be matchable by
 the other?

  If this is indeed what eshark means, then "talking to yourself" like
  this is something that has always worked with traditional D-Bus (as
  long as you make sure never to block waiting for a reply!), so it's a
  regression if it doesn't work with kdbus.

 In traditional D-Bus, broadcasts go to any connection that has
 registered its interest in the broadcast via AddMatch. dbus-daemon does
 not discriminate between the sender of a message, and other connections
 - in particular, it will send a copy of a broadcast back to its sender,
 if that's what the sender asked for.

 Various projects' regression tests work like this: they run the
 client-side and service-side code in the same GLib main loop and do
 everything asynchronously, and it works. Ideally, the only processes
 involved are the test and the dbus-daemon (and under kdbus the
 dbus-daemon would not be needed either).

I didn't know traditional D-Bus allows this. The kdbus fix should be as
simple as removing the condition in kdbus_bus_broadcast(). Deadlock
detection for src==dst is already in place, as we allow unicasts to
oneself.

I'll look into this.

Thanks
David


[systemd-devel] Fedora 21 and systemd-nspawn

2015-06-15 Thread Matthew Karas
I'm trying to use systemd-nspawn, but when I launch it and try to log in
as root, it still asks for a password and I can't seem to set one.
The docs for Fedora mentioned turning off auditing - which I've done.

My cmd line says audit=0 at the end.

$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.19.7-200.fc21.x86_64
root=/dev/mapper/fedora_localhost-root ro
rd.lvm.lv=fedora_localhost/swap rd.lvm.lv=fedora_localhost/root rhgb
audit=0 quiet


(This is fedora 21) Using these docs
https://fedoraproject.org/wiki/Features/SystemdLightweightContainers

When I try to change the password it tells me I have an auth token
manipulation error.

$ sudo systemd-nspawn -D /srv/eq1
Spawning container eq1 on /srv/eq1.
Press ^] three times within 1s to kill container.
-bash-4.3# passwd
Changing password for user root.
New password:
Retype new password:
passwd: Authentication token manipulation error
-bash-4.3#


Re: [systemd-devel] Can kdbus send signal to the source connection?

2015-06-15 Thread Lennart Poettering
On Mon, 15.06.15 19:05, eshark (eshar...@163.com) wrote:

 Hi, All,
    If I posted this to the wrong mailing list, please tell me. Thank you.
 
 Many JS applications now implement the client and the service in the same
 thread, so they share the same connection too.
 
 However, when the client or the service wants to send a signal to the other,
 the receiver cannot get the signal, because the kdbus driver won't broadcast
 the signal back to the source connection.
 
 I've tried simply allowing the kdbus driver to send signals to all
 connections, including the source, but it does not seem to work correctly.
 
 I wonder how I can make kdbus send a signal to the source connection, or
 whether this is impossible?

I am not sure I follow. Are you developing a native kdbus client
library for JS?

Did I get this right, you have one bus connection per thread, but
possibly both a kdbus client and its service run from the server, and
you want broadcast msgs sent out from one to then also be matchable by
the other?

Can't you dispatch that locally? I.e. in addition to passing the msg
to kdbus, also enqueue it locally along the kdbus fd, or so?

But I am not sure I understand the problem fully...

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Fedora 21 and systemd-nspawn

2015-06-15 Thread Lennart Poettering
On Mon, 15.06.15 11:30, Matthew Karas (mkarasc...@gmail.com) wrote:

 I'm trying to use systemd-nspawn, but when I launch it and try to log in
 as root, it still asks for a password and I can't seem to set one.
 The docs for Fedora mentioned turning off auditing - which I've done.
 
 My cmd line says audit=0 at the end.
 
 $ cat /proc/cmdline
 BOOT_IMAGE=/vmlinuz-3.19.7-200.fc21.x86_64
 root=/dev/mapper/fedora_localhost-root ro
 rd.lvm.lv=fedora_localhost/swap rd.lvm.lv=fedora_localhost/root rhgb
 audit=0 quiet
 
 
 (This is fedora 21) Using these docs
 https://fedoraproject.org/wiki/Features/SystemdLightweightContainers
 
 When I try to change the password it tells me I have an auth token
 manipulation error.
 
 $ sudo systemd-nspawn -D /srv/eq1
 Spawning container eq1 on /srv/eq1.
 Press ^] three times within 1s to kill container.
 -bash-4.3# passwd
 Changing password for user root.
 New password:
 Retype new password:
 passwd: Authentication token manipulation error
 -bash-4.3#

Hmm, this is weird. This should just work if audit=0 is set on the
kernel cmdline. Is this f21 both inside and on the host?

If you strace what passwd is doing there, do you see anything
interesting? If in doubt, paste the output on some pastebin and link
it here.

Lennart

-- 
Lennart Poettering, Red Hat
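
One way to capture the trace Lennart asks for, from inside the container
shell (hypothetical output path shown):

    # run passwd under strace, following forks, saving the log for a pastebin
    strace -f -o /tmp/passwd.trace passwd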


Re: [systemd-devel] systemd is trying to break mount ordering

2015-06-15 Thread Lennart Poettering
On Mon, 15.06.15 13:24, Jan Synáček (jsyna...@redhat.com) wrote:

 
 I have the following setup on a freshly updated Fedora Rawhide machine
 with systemd-220-9.fc23.x86_64.
 
 # cat /etc/fstab
 [comments left out]
 UUID=d5ac823b-d0bd-4f7f-bf4b-5cc82d585a92 /     btrfs   subvol=root 0 0
 UUID=ec79f233-055c-40fa-98e5-e2d77314913a /boot ext4    defaults    1 2
 UUID=d5ac823b-d0bd-4f7f-bf4b-5cc82d585a92 /home btrfs   subvol=home 0 0
 192.168.122.1:/srv/nfs /mnt/nfs nfs defaults 0 0
 /var/tmp/test.iso /mnt/nfs/content iso9660 loop,ro 0 0
 
 Notice the last two lines. There is an NFS mount mounted to /mnt/nfs and
 an ISO filesystem mounted into /mnt/nfs/content, which makes it
 dependent on the NFS mount.

Please add _netdev to the mount options of the ISO mount, to let
systemd know that you need the network for that.

Otherwise systemd assumes the ISO mount is in fact a local mount
(which is hence ordered before local-fs.target, which in turn is before
basic.target), while correctly detecting that the nfs mount is a
remote mount (which is hence ordered before remote-fs.target, which in
turn is usually assumed to be started much later than
local-fs.target). Since the local mount is then ordered after the
remote mounts, you get a cyclic dep loop.
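
Concretely, that means changing the last fstab line from above to:

    /var/tmp/test.iso /mnt/nfs/content iso9660 loop,ro,_netdev 0 0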

 Isn't systemd trying to delete too many jobs while resolving the cycles?

Well, systemd removes jobs effectively randomly, since for the cycle
breaking logic all units are the same. Of course, you might consider
some jobs more important than others, but systemd doesn't know which
ones those would be.

There have been prior requests for a better cycle breaking strategy,
but so far I am not aware of any proposal that could really work and
substantially improve things.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [survey] BTRFS_IOC_DEVICES_READY return status

2015-06-15 Thread David Sterba
On Fri, Jun 12, 2015 at 09:16:30PM +0800, Anand Jain wrote:
 BTRFS_IOC_DEVICES_READY is to check if all the required devices
 are known by the btrfs kernel, so that admin/system-application
 could mount the FS. It is checked against a device in the argument.
 
 However the actual implementation is a bit more than just that,
 in the way that it would also scan and register the device
 provided in the argument (same as the btrfs device scan subcommand
 or BTRFS_IOC_SCAN_DEV ioctl).
 
 So the BTRFS_IOC_DEVICES_READY ioctl isn't a read/view-only ioctl,
 but a write command as well.

The implemented DEVICES_READY behaviour is intentional, but not a good
example of ioctl interface design. I asked for a more generic interface
for querying devices when this patch was submitted, but to no avail.

 Next, in the kernel we only check whether total_devices
 (read from the SB) is equal to num_devices (counted in the list)
 to report the status as 0 (ready) or 1 (not ready). But this
 does not work in the rest of the device pool states like missing,
 seeding, and replacing, since total_devices is actually not equal
 to num_devices in those states even though the device pool is ready
 for the mount; that's a bug, which is not part of this discussion.

That's an example of why the single-shot ioctl is bad - it relies on some
internal state that's otherwise nontrivial to get at.

 Questions:
 
   - Do we want the BTRFS_IOC_DEVICES_READY ioctl to also scan and
 register the device provided (same as the btrfs device scan
 command or the BTRFS_IOC_SCAN_DEV ioctl),
 OR can BTRFS_IOC_DEVICES_READY be a read-only ioctl interface
 to check the state of the device pool?

As has been mentioned in the thread, we cannot change the ioctl that
way. Extensions are possible as long as they stay backward compatible
without changes to the existing users.

   - If the device in the argument is already mounted,
 can it straightaway return 0 (ready)? (As of now it would
 again independently read the SB to determine total_devices
 and check against num_devices.)

We can do that, looks like a safe optimization.

   - What should be the expected return when the FS is mounted
 and there is a missing device.

I think the current ioctl cannot give a good answer to that, similar to
the seeding or dev-replace case. We'd need an improved ioctl or do it
via sysfs which is my preference at the moment.


Re: [systemd-devel] [PATCH v2] Add support for transient presets, applied on every boot.

2015-06-15 Thread Dimitri John Ledkov
On 22 April 2015 at 19:30, Lennart Poettering lenn...@poettering.net wrote:
 On Sat, 21.02.15 02:38, Dimitri John Ledkov (dimitri.j.led...@intel.com) 
 wrote:

 Sorry for the late review!


Hello, blast from the past =)

 Can you please add a commit description to this, explaining the
 precise rationale for this?


Right, I will work on that. But let me comment below to close up the
discussion here, before moving the next edition of the patchset to
github.


 ---
  src/core/main.c  | 27 +++
  src/core/unit.c  |  2 +-
  src/shared/install.c | 25 -
  src/shared/install.h |  2 +-
  4 files changed, 49 insertions(+), 7 deletions(-)

 diff --git a/src/core/main.c b/src/core/main.c
 index 08f46f5..2656779 100644
 --- a/src/core/main.c
 +++ b/src/core/main.c
 @@ -1207,6 +1207,23 @@ static int write_container_id(void) {
          return write_string_file("/run/systemd/container", c);
  }

 +static int transient_presets(void) {
 +        struct stat st;
 +
 +        if (lstat("/usr/lib/systemd/system-preset-transient", &st) == 0)
 +                return !!S_ISDIR(st.st_mode);

 Please use is_dir() for this, it's slightly nicer to read.


ok.

 +#ifdef HAVE_SPLIT_USR
 +        if (lstat("/lib/systemd/system-preset-transient", &st) == 0)
 +                return !!S_ISDIR(st.st_mode);
 +#endif
 +
 +        if (lstat("/etc/systemd/system-preset-transient", &st) == 0)
 +                return !!S_ISDIR(st.st_mode);
 +
 +        return 0;
 +}

 Also, the function should probably return a proper bool instead of
 an int. We use C99 bool heavily.

 That said, maybe we shouldn't have this function at all, see below.


Well, I think we need this. Either here, or in unit_file_preset_all.

 +
  int main(int argc, char *argv[]) {
  Manager *m = NULL;
  int r, retval = EXIT_FAILURE;
 @@ -1619,6 +1636,16 @@ int main(int argc, char *argv[]) {
  if (arg_running_as == SYSTEMD_SYSTEM) {
  bump_rlimit_nofile(saved_rlimit_nofile);

 +                // NB! transient presets must be applied before normal

 We try to stick to /* comments */ instead of // comments


Ok. I grew up with // comments =)

 +                if (transient_presets()) {
 +                        r = unit_file_preset_all(UNIT_FILE_SYSTEM, true, NULL, UNIT_FILE_PRESET_ENABLE_ONLY, false, NULL, 0);
 +                        if (r < 0)
 +                                log_warning_errno(r, "Failed to populate transient preset unit settings, ignoring: %m");
 +                        else
 +                                log_info("Populated transient preset unit settings.");
 +                }

 Hmm, do we actually need the explicit check with transient_presets()
 at all? I mean, it replicates the search path logic, and
 unit_file_preset_all() should notice on its own that there are no
 preset files in those dirs...


Well, it does notice there are no presets, and hence defaults to
enabling all units. At the moment the chain of calls is like so:

main.c decides to call unit_file_preset_all at an appropriate time:

1) the list of all unit paths is constructed, and iterated
2) for each valid unit in a unit path, presets are queried
3) a list of preset paths is constructed
4) each valid preset file is iterated to check whether it says
anything about the unit in question
5) ... and if nothing is found in the preset files, default to enable.

Thus the current logic does a lot of iteration and, without any
.preset files or folders, defaults to "enable *".

Now, remember we discussed ways to optimise the preset application
logic (parse and cache the policy first, and then for each unit do
lookups in the parsed policy). Reopening that discussion, would it be
acceptable to change the behaviour slightly:

1) if there are no .preset policy files defined at all, bail out;
nothing is enabled.
2) if there are /usr/lib/systemd/preset/*.preset files, parse them and
create the policy cache, with the fall-through to "enable *" as before.
3) do similarly for the transient presets.

In general one would not use both types. And imho it would be useful
to skip "enable *" if no policies exist at all, mostly because
without any policies installed (even the default one that systemd
ships) it is harmful to enable everything. E.g. clearlinux, debian and
ubuntu do not use persistent presets and end up plugging the preset
application on first boot with hacks (e.g. making sure machine-id is
always available pre-first-boot, shipping a "disable *" persistent
policy, or in the clearlinux case even patching out persistent policy
application entirely, since it takes time to pointlessly iterate all
preset dirs for each unit file).

The changes above would optimise policy application, and there would
be no change in behaviour for existing users who have at least one
.preset file on disk.

(If .preset file presence is not enough, we could do a journald-style
presence check for any /preset/ folders instead.)



 diff --git a/src/shared/install.c b/src/shared/install.c
 index 
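
For readers unfamiliar with the preset policy format under discussion, a
persistent preset file looks like this (example unit names; the format is
documented in systemd.preset(5)):

    # /usr/lib/systemd/system-preset/90-example.preset
    enable sshd.service
    enable systemd-networkd.service
    disable *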

Re: [systemd-devel] Can kdbus send signal to the source connection?

2015-06-15 Thread Simon McVittie
On 15/06/15 15:32, Lennart Poettering wrote:
 Did I get this right, you have one bus connection per thread, but
 possibly both a kdbus client and its service run from the server, and
 you want broadcast msgs sent out from one to then also be matchable by
 the other?

If this is indeed what eshark means, then "talking to yourself" like
this is something that has always worked with traditional D-Bus (as
long as you make sure never to block waiting for a reply!), so it's a
regression if it doesn't work with kdbus.

In traditional D-Bus, broadcasts go to any connection that has
registered its interest in the broadcast via AddMatch. dbus-daemon does
not discriminate between the sender of a message, and other connections
- in particular, it will send a copy of a broadcast back to its sender,
if that's what the sender asked for.

Various projects' regression tests work like this: they run the
client-side and service-side code in the same GLib main loop and do
everything asynchronously, and it works. Ideally, the only processes
involved are the test and the dbus-daemon (and under kdbus the
dbus-daemon would not be needed either).

  Can't you dispatch that locally? I.e. in addition to passing the msg
  to kdbus, also enqueue it locally along the kdbus fd, or so?

That would mean re-ordering the broadcast messages (taking them out of
their correct sequence relative to other messages), which is one of the
reasons why traditional D-Bus implementations don't optimize messages
to yourself in this way. One of dbus-daemon's roles is to impose a
total ordering on messages - it's the component that makes the arbitrary
decision on how the individual message streams get interleaved.

-- 
Simon McVittie
Collabora Ltd. http://www.collabora.com/

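
For illustration, a minimal sd-bus sketch of this talking-to-yourself
pattern (hypothetical interface and path names; assumes the sd-bus client
API that is becoming public in the v221 cycle mentioned elsewhere in this
digest): one connection installs a match, emits a broadcast, and dispatches
asynchronously, never blocking on a reply to itself.

    #include <stdint.h>
    #include <stdio.h>
    #include <systemd/sd-bus.h>

    static int done = 0;

    static int on_ping(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
            printf("got my own broadcast: %s\n", sd_bus_message_get_member(m));
            done = 1;
            return 0;
    }

    int main(void) {
            sd_bus *bus = NULL;
            int r;

            if (sd_bus_open_user(&bus) < 0)
                    return 1;

            /* register interest in the broadcast, including from ourselves */
            r = sd_bus_add_match(bus, NULL,
                                 "type='signal',interface='org.example.Test',member='Ping'",
                                 on_ping, NULL);
            if (r < 0)
                    return 1;

            /* emit the broadcast from the very same connection */
            r = sd_bus_emit_signal(bus, "/org/example/test",
                                   "org.example.Test", "Ping", "");
            if (r < 0)
                    return 1;

            /* dispatch asynchronously until our own signal comes back */
            while (!done) {
                    r = sd_bus_process(bus, NULL);
                    if (r < 0)
                            return 1;
                    if (r == 0)
                            sd_bus_wait(bus, (uint64_t) -1);
            }
            sd_bus_unref(bus);
            return 0;
    }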


Re: [systemd-devel] Why we need to read/save random seed?

2015-06-15 Thread cee1
Hi,

I may have got confused.

First, systemd-random-seed.service will save a seed from
/dev/urandom at shutdown, and load that seed into /dev/urandom at the
next boot.

My questions are:
1. Can we not save a seed, but instead load a seed read from **
/dev/random ** into ** /dev/urandom **?
2. If we save a seed on disk and someone reads its content later,
will this make urandom predictable?

Talking about /dev/random: it consumes an internal entropy pool, and some
system events (disk reads, page faults, etc.) enlarge this pool, am I
right?

And writing to /dev/random will mix the input data into the pool, but
not enlarge it, right? What benefit is there in only mixing data
without enlarging the entropy pool?

3.16+ will mix in data from a HWRNG; does it also enlarge the entropy pool?


2015-06-15 8:40 GMT+08:00 Dax Kelson dkel...@gurulabs.com:

 On Jun 14, 2015 10:11 AM, Cristian Rodríguez
 cristian.rodrig...@opensuse.org wrote:

 On Sun, Jun 14, 2015 at 1:43 PM, Greg KH gre...@linuxfoundation.org
 wrote:
  On Sun, Jun 14, 2015 at 12:49:55PM -0300, Cristian Rodríguez wrote:


 Last time I checked, it required this userspace help even when the
 machine has rdrand/rdseed, or when a virtual machine is fed from the
 host using the virtio-rng driver (it may take up to 60 seconds to
 report "random: nonblocking pool is initialized"). Any other possible
 solution that I imagined involves either blocking and/or changes in the
 behaviour visible to userspace, and that is probably unacceptable.

 I added the following text to Wikipedia's /dev/random page.

 With Linux kernel 3.16 and newer, the kernel itself mixes data from
 hardware random number generators into /dev/random on a sliding scale based
 on the definable entropy estimation quality of the HWRNG. This means that no
 userspace daemon, such as rngd from rng-tools, is needed to do that job. With
 Linux kernel 3.17+, the VirtIO RNG was modified to have a default quality
 defined above 0, and as such is currently the only HWRNG mixed into
 /dev/random by default.






-- 
Regards,

- cee1
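
For orientation, a rough shell equivalent of what systemd-random-seed.service
does (an illustrative sketch only; the real implementation sizes the seed to
the kernel pool and handles errors properly):

    # at boot: feed the saved seed back into the pool
    # (this mixes it in, but does not credit any entropy)
    cat /var/lib/systemd/random-seed > /dev/urandom

    # at shutdown: save a fresh seed, readable by root only
    umask 077
    dd if=/dev/urandom of=/var/lib/systemd/random-seed bs=512 count=1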


[systemd-devel] [HEADSUP] Intend to release 221 by the end of the week

2015-06-15 Thread Lennart Poettering
Heya,

People asked for a heads-up on this: I intend to prepare v221 by the
end of this week. 

It's a good time to start testing what's currently in git!

If you take this as hint to start your auto-builder however, then
that's wrong: you should run your auto-builder CI-style all the time
anyway, not just now shortly before the release!

We are now labelling release-critical issues in github with the
"release-critical" label. To help us with the release, patches for
those issues would be particularly well received!

https://github.com/systemd/systemd/labels/release-critical

Thanks!

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Fedora 21 and systemd-nspawn

2015-06-15 Thread Chris Morgan
On Monday, June 15, 2015, Lennart Poettering lenn...@poettering.net wrote:

 On Mon, 15.06.15 13:22, Matthew Karas (mkarasc...@gmail.com) wrote:

  Yes - that seems to have let me set the password.  Now I can get
  started learning about this.
 
  Thanks a lot!
 
  Though it does return an error about selinux when I start the shell to
  set the password
 
  $ sudo systemd-nspawn -bD /srv/srv1
  Spawning container srv1 on /srv/srv1.
  Press ^] three times within 1s to kill container.
  Failed to create directory /srv/srv1//sys/fs/selinux: Read-only file system
  Failed to create directory /srv/srv1//sys/fs/selinux: Read-only file system

 Hmm, weird. Is /srv/srv1 read-only or so?

 Lennart

 --
 Lennart Poettering, Red Hat



On a somewhat related topic, are many people making use of nspawn
containers in production or test environments? I was a little surprised by
the issues I had when trying them out with f21. f22 seems smoother, but it
still required audit=0, and I think I had to disable SELinux to set the
password, though I was trying for a while with a blank password, so...

But yeah, I was wondering if there are known users of nspawn containers who
have discussed their use cases.

Chris


Re: [systemd-devel] Why we need to read/save random seed?

2015-06-15 Thread Lennart Poettering
On Mon, 15.06.15 23:33, cee1 (fykc...@gmail.com) wrote:

 Hi,
 
 I may have got confused.
 
 First, systemd-random-seed.service will save a seed from
 /dev/urandom at shutdown, and load that seed into /dev/urandom at the
 next boot.
 
 My questions are:
 1. Can we not save a seed, but instead load a seed read from **
 /dev/random ** into ** /dev/urandom **?

The seed is used for both. Then you'd feed the stuff you got from the
RNG back into the RNG, which is a pointless exercise.

 2. If we save a seed on disk and someone reads its content later,
 will this make urandom predictable?

Well, it will always be mixed with whatever else is there, so
hopefully not. Also, the seed file is not readable by non-root, so it
should hopefully not leak.

The seed stuff can never make things worse, it can only make things
better.

 Talking about /dev/random: it consumes an internal entropy pool, and some
 system events (disk reads, page faults, etc.) enlarge this pool, am I
 right?

Well, yeah, but if you want to know more, systemd is probably not the
right source of information for that. LWN has had a couple of stories
about this, however.

 And writing to /dev/random will mix the input data into the pool, but
 not enlarge it, right? What benefit is there in only mixing data
 without enlarging the entropy pool?

Well, it's one thing to hand out randomness, it's another thing to
claim it was any good. Even though the seeding doesn't make the kernel
pretend it was good, it is still added to the randomness, hence should
generally be better than what was before, and in the worst case as
good, but never worse.

 3.16+ will mix in data from a HWRNG; does it also enlarge the entropy pool?

That's probably something to ask in some kernel forum...

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Fedora 21 and systemd-nspawn

2015-06-15 Thread Matthew Karas
Here is my output

https://gist.github.com/mkcybi/eae6a2a67c5dc864

-- Forwarded message --
From: Lennart Poettering lenn...@poettering.net
Date: Mon, Jun 15, 2015 at 11:32 AM
Subject: Re: [systemd-devel] Fedora 21 and systemd-nspawn
To: Matthew Karas mkarasc...@gmail.com
Cc: systemd-devel@lists.freedesktop.org


On Mon, 15.06.15 11:30, Matthew Karas (mkarasc...@gmail.com) wrote:

 I'm trying to use systemd-nspawn, but when I launch it and try to log in
 as root, it still asks for a password and I can't seem to set one.
 The docs for Fedora mentioned turning off auditing - which I've done.

 My cmd line says audit=0 at the end.

 $ cat /proc/cmdline
 BOOT_IMAGE=/vmlinuz-3.19.7-200.fc21.x86_64
 root=/dev/mapper/fedora_localhost-root ro
 rd.lvm.lv=fedora_localhost/swap rd.lvm.lv=fedora_localhost/root rhgb
 audit=0 quiet


 (This is fedora 21) Using these docs
 https://fedoraproject.org/wiki/Features/SystemdLightweightContainers

 When I try to change the password it tells me I have an auth token
 manipulation error.

 $ sudo systemd-nspawn -D /srv/eq1
 Spawning container eq1 on /srv/eq1.
 Press ^] three times within 1s to kill container.
 -bash-4.3# passwd
 Changing password for user root.
 New password:
 Retype new password:
 passwd: Authentication token manipulation error
 -bash-4.3#

Hmm, this is weird. This should just work if audit=0 is set on the
kernel cmdline. Is this f21 both inside and on the host?

If you strace what passwd is doing there, do you see anything
interesting? If in doubt, paste the output on some pastebin and link
it here.

Lennart

--
Lennart Poettering, Red Hat


Re: [systemd-devel] Fedora 21 and systemd-nspawn

2015-06-15 Thread Matthew Karas
Yes - that seems to have let me set the password.  Now I can get
started learning about this.

Thanks a lot!

Though it does return an error about selinux when I start the shell to
set the password

$ sudo systemd-nspawn -bD /srv/srv1
Spawning container srv1 on /srv/srv1.
Press ^] three times within 1s to kill container.
Failed to create directory /srv/srv1//sys/fs/selinux: Read-only file system
Failed to create directory /srv/srv1//sys/fs/selinux: Read-only file system

On Mon, Jun 15, 2015 at 12:24 PM, Lennart Poettering
lenn...@poettering.net wrote:
 On Mon, 15.06.15 12:21, Matthew Karas (mkarasc...@gmail.com) wrote:

 Here is my output

 https://gist.github.com/mkcybi/eae6a2a67c5dc864

 This line is probably the error:

 rename("/etc/nshadow", "/etc/shadow") = -1 EACCES (Permission denied)

 For some reason the container cannot replace /etc/shadow in it.

 Maybe an SELinux problem? Have you tried turning it off?

 Lennart

 --
 Lennart Poettering, Red Hat


Re: [systemd-devel] [survey] BTRFS_IOC_DEVICES_READY return status

2015-06-15 Thread Lennart Poettering
On Mon, 15.06.15 19:23, Goffredo Baroncelli (kreij...@inwind.it) wrote:

 On 2015-06-15 12:46, Lennart Poettering wrote:
  On Sat, 13.06.15 17:09, Goffredo Baroncelli (kreij...@libero.it) wrote:
  
  Further, the problem will be more intense in this e.g.: if you use dd
  to copy device A to device B, then after you mount device A, just
  providing device B in the above two commands lets the kernel
  update the device path. All the IO (since the device is mounted)
  still goes to device A (not B), but /proc/self/mounts and
  'btrfs fi show' show it as device B (not A).
 
  It's a bug, and very tricky to fix.
 
  In the past [*] I proposed a mount.btrfs helper. I tried to move the
  logic outside the kernel.
  I think that the problem is that we try to manage all these cases
  from a device point of view: when a device appears, we register the
  device and we try to mount the filesystem... This works very well
  when there is a 1-volume filesystem. For the other cases there is a
  mess between the different layers:
  
  - kernel
  - udev/systemd
  - initrd logic
 
  My attempt followed a different idea: the mount helper waits for the
  devices if needed, or, if called for, it mounts the filesystem in
  degraded mode. All devices are passed as mount arguments
  (--device=/dev/sdX), and there is no device registration: this avoids
  all these problems.
  
   Hmm, no. /bin/mount should not block for devices. That's generally
   incompatible with how the tool is used, and in particular from
   systemd. We would not make use of such a scheme in
   systemd. /bin/mount should always be short-running.
 
 Apart from systemd, what are these incompatibilities?

Well, /bin/mount is not a daemon, and it should not be one.

   I am pretty sure that if such automatic degraded mounting should be
   supported, then this should be done with some background storage
   daemon that alters the effect of the READY ioctl somehow after the
   timeout, and then retriggers the devices so that systemd takes
   note. (Or, alternatively, such a scheme could even be implemented all
   in the kernel, based on some configurable kernel setting...)
 
 I recognize that this solution provides the maximum compatibility
 with the current implementation. However, it seems too complex to
 me. Re-triggering a device seems to me more a workaround than a
 solution.

Well, it's not really ugly. I mean, if the state or properties of a
device change, then udev should update its information about it, and
that's done via a retrigger. We do that all the time already, for
example when an existing loopback device gets a backing file assigned
or removed. I am pretty sure that the loopback case is very close to what
you want to do here, hence retriggering (either from the kernel side,
or from userspace, as sketched below) appears like an OK thing to do.
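
From userspace such a retrigger is just a synthetic uevent; for example
(hypothetical device name sdb):

    # write a synthetic "change" event for the device directly...
    echo change > /sys/block/sdb/uevent
    # ...or ask udev to do the same
    udevadm trigger --action=change --sysname-match=sdb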

 Could a generator do this job? I.e. this generator (or storage
 daemon) waits until all (or enough) devices have appeared, then
 creates a .mount unit: do you think that is doable?

systemd generators are a way to extend the systemd unit dep tree with
units. They are very short-running, and are executed only very very
early at boot. They cannot wait for anything, they don't have access
to devices, and they are not run when devices appear.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Fedora 21 and systemd-nspawn

2015-06-15 Thread Lennart Poettering
On Mon, 15.06.15 13:22, Matthew Karas (mkarasc...@gmail.com) wrote:

 Yes - that seems to have let me set the password.  Now I can get
 started learning about this.
 
 Thanks a lot!
 
 Though it does return an error about selinux when I start the shell to
 set the password
 
 $ sudo systemd-nspawn -bD /srv/srv1
 Spawning container srv1 on /srv/srv1.
 Press ^] three times within 1s to kill container.
 Failed to create directory /srv/srv1//sys/fs/selinux: Read-only file system
 Failed to create directory /srv/srv1//sys/fs/selinux: Read-only file system

Hmm, weird. Is /srv/srv1 read-only or so?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Understanding DHCP, DNS and IPMasquerade

2015-06-15 Thread Johannes Ernst

 On Jun 15, 2015, at 11:32, Lennart Poettering lenn...@poettering.net wrote:
 
 On Mon, 15.06.15 10:39, Johannes Ernst (johannes.er...@gmail.com) wrote:
 
 
 On Jun 15, 2015, at 10:33, Lennart Poettering lenn...@poettering.net 
 wrote:
 
  On Mon, 15.06.15 10:32, Johannes Ernst (johannes.er...@gmail.com) wrote:
 
 
 On Jun 14, 2015, at 15:27, Lennart Poettering lenn...@poettering.net 
 wrote:
 
 On Fri, 12.06.15 17:32, Johannes Ernst (johannes.er...@gmail.com) wrote:
 
 * host and container can ping test (if test is the name of the
 * container machine per machinectl): FAILS, neither can
 
 Do you have nss-mymachines enabled in /etc/nsswitch.conf?
 
 Yes:
 
 Does pinging via the IP addresses work? 
 
 Yes. Both container-host and host-container.
 
 On host:
 machinectl
 MACHINE CLASS SERVICE
 foo container nspawn 
 
 1 machines listed.
 ping foo
 ping: unknown host foo
 cat /etc/nsswitch.conf 
 hosts: nss-mymachines files mdns_minimal [NOTFOUND=return] dns
 myhostname
 
  Ah, heh, try "mymachines" instead of "nss-mymachines"... Also see the
  nss-mymachines(8) man page. That should fix your issue.

Magic! It’s working! Thank you.


 
 Lennart
 
 -- 
 Lennart Poettering, Red Hat

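
Concretely, the fix from this exchange is to use the nsswitch module name
rather than the library name in /etc/nsswitch.conf, i.e.:

    hosts: mymachines files mdns_minimal [NOTFOUND=return] dns myhostname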


Re: [systemd-devel] Understanding DHCP, DNS and IPMasquerade

2015-06-15 Thread Lennart Poettering
On Mon, 15.06.15 10:39, Johannes Ernst (johannes.er...@gmail.com) wrote:

 
  On Jun 15, 2015, at 10:33, Lennart Poettering lenn...@poettering.net 
  wrote:
  
   On Mon, 15.06.15 10:32, Johannes Ernst (johannes.er...@gmail.com) wrote:
  
  
  On Jun 14, 2015, at 15:27, Lennart Poettering lenn...@poettering.net 
  wrote:
  
  On Fri, 12.06.15 17:32, Johannes Ernst (johannes.er...@gmail.com) wrote:
  
  * host and container can ping test (if test is the name of the
  * container machine per machinectl): FAILS, neither can
  
  Do you have nss-mymachines enabled in /etc/nsswitch.conf?
  
  Yes:
  
  Does pinging via the IP addresses work? 
  
  Yes. Both container-host and host-container.
  
  On host:
  machinectl
  MACHINE CLASS SERVICE
  foo container nspawn 
  
  1 machines listed.
  ping foo
  ping: unknown host foo
  cat /etc/nsswitch.conf 
  hosts: nss-mymachines files mdns_minimal [NOTFOUND=return] dns
 myhostname

Ah, heh, try "mymachines" instead of "nss-mymachines"... Also see the
nss-mymachines(8) man page. That should fix your issue.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Understanding DHCP, DNS and IPMasquerade

2015-06-15 Thread Johannes Ernst

 On Jun 14, 2015, at 15:27, Lennart Poettering lenn...@poettering.net wrote:
 
 On Fri, 12.06.15 17:32, Johannes Ernst (johannes.er...@gmail.com) wrote:
 
 * host and container can ping test (if test is the name of the
 * container machine per machinectl): FAILS, neither can
 
 Do you have nss-mymachines enabled in /etc/nsswitch.conf?

Yes:

 Does pinging via the IP addresses work? 

Yes. Both container-host and host-container.

On host:
 machinectl
MACHINE CLASS SERVICE
foo container nspawn 

1 machines listed.
 ping foo
ping: unknown host foo
 cat /etc/nsswitch.conf 
hosts: nss-mymachines files mdns_minimal [NOTFOUND=return] dns myhostname

(or with just nss-mymachines)

Thanks for your help,


Johannes.



Re: [systemd-devel] [PATCH] watchdog: Don't require WDIOC_SETOPTIONS/WDIOS_ENABLECARD

2015-06-15 Thread Lennart Poettering
On Mon, 15.06.15 18:14, Jean Delvare (jdelv...@suse.de) wrote:

 Not all watchdog drivers implement WDIOC_SETOPTIONS. Drivers which do
 not implement it have their device always enabled. So it's fine to
 report an error if WDIOS_DISABLECARD is passed and the ioctl is not
 implemented, however failing when WDIOS_ENABLECARD is passed and the
 ioctl is not implemented is not good: if the device was already
 enabled then WDIOS_ENABLECARD was a no-op and wasn't needed in the
 first place. So we can just ignore the error and continue.

Isn't this something that should be fixed in the drivers?

 ---
  src/shared/watchdog.c |3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)
 
 --- a/src/shared/watchdog.c
 +++ b/src/shared/watchdog.c
 @@ -64,7 +64,8 @@ static int update_timeout(void) {
  
          flags = WDIOS_ENABLECARD;
          r = ioctl(watchdog_fd, WDIOC_SETOPTIONS, &flags);
 -        if (r < 0) {
 +        /* ENOTTY means the watchdog is always enabled so we're fine */
 +        if (r < 0 && errno != ENOTTY) {
                  log_warning("Failed to enable hardware watchdog: %m");
                  return -errno;

If this is something to fix in systemd, rather than in the
drivers: I am pretty sure that we should log in all cases, but change
the log level to LOG_DEBUG if it's ENOTTY, i.e. use log_full(errno ==
ENOTTY ? LOG_DEBUG : LOG_WARNING, ...).

Lennart

-- 
Lennart Poettering, Red Hat
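
A sketch of what that suggestion would look like in update_timeout() (not
the actual committed change):

    flags = WDIOS_ENABLECARD;
    r = ioctl(watchdog_fd, WDIOC_SETOPTIONS, &flags);
    if (r < 0) {
            /* ENOTTY: the driver has no WDIOC_SETOPTIONS, i.e. the
             * device is always enabled; log quietly and carry on. */
            log_full(errno == ENOTTY ? LOG_DEBUG : LOG_WARNING,
                     "Failed to enable hardware watchdog: %m");
            if (errno != ENOTTY)
                    return -errno;
    }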


Re: [systemd-devel] Understanding DHCP, DNS and IPMasquerade

2015-06-15 Thread Lennart Poettering
On Mon, 15.06.15 10:32, Johannes Ernst (johannes.er...@gmail.com) wrote:

 
  On Jun 14, 2015, at 15:27, Lennart Poettering lenn...@poettering.net 
  wrote:
  
  On Fri, 12.06.15 17:32, Johannes Ernst (johannes.er...@gmail.com) wrote:
  
  * host and container can ping test (if test is the name of the
  * container machine per machinectl): FAILS, neither can
  
  Do you have nss-mymachines enabled in /etc/nsswitch.conf?
 
 Yes:
 
  Does pinging via the IP addresses work? 
 
 Yes. Both container-host and host-container.
 
 On host:
  machinectl
 MACHINE CLASS SERVICE
 foo container nspawn 
 
 1 machines listed.
  ping foo
 ping: unknown host foo
  cat /etc/nsswitch.conf 
 hosts: nss-mymachines files mdns_minimal [NOTFOUND=return] dns myhostname

Does machinectl status show the IP addresses of the container in its
output?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Understanding DHCP, DNS and IPMasquerade

2015-06-15 Thread Johannes Ernst

 On Jun 15, 2015, at 10:33, Lennart Poettering lenn...@poettering.net wrote:
 
  On Mon, 15.06.15 10:32, Johannes Ernst (johannes.er...@gmail.com) wrote:
 
 
 On Jun 14, 2015, at 15:27, Lennart Poettering lenn...@poettering.net 
 wrote:
 
 On Fri, 12.06.15 17:32, Johannes Ernst (johannes.er...@gmail.com) wrote:
 
 * host and container can ping test (if test is the name of the
 * container machine per machinectl): FAILS, neither can
 
 Do you have nss-mymachines enabled in /etc/nsswitch.conf?
 
 Yes:
 
 Does pinging via the IP addresses work? 
 
 Yes. Both container-host and host-container.
 
 On host:
 machinectl
 MACHINE CLASS SERVICE
 foo container nspawn 
 
 1 machines listed.
 ping foo
 ping: unknown host foo
 cat /etc/nsswitch.conf 
 hosts: nss-mymachines files mdns_minimal [NOTFOUND=return] dns myhostname
 
 Does machinectl status show the IP addresses of the container in its
 output?

Yes:

 sudo machinectl status foo
   Since: Mon 2015-06-15 17:27:33 UTC; 9min ago
  Leader: 31137 (systemd)
 Service: nspawn; class container
Root: 
/home/buildmaster/git/github.com/uboslinux/ubos-buildconfig/repository/dev/x86_64/images/ubos_dev_container-pc_20150614-054626
   Iface: ve-foo
 Address: 10.0.0.2
  169.254.169.115
  OS: UBOS

(UBOS: for our purposes here: same as Arch)




[systemd-devel] [PATCH] watchdog: Don't require WDIOC_SETOPTIONS/WDIOS_ENABLECARD

2015-06-15 Thread Jean Delvare
Not all watchdog drivers implement WDIOC_SETOPTIONS. Drivers which do
not implement it have their device always enabled. So it's fine to
report an error if WDIOS_DISABLECARD is passed and the ioctl is not
implemented, however failing when WDIOS_ENABLECARD is passed and the
ioctl is not implemented is not good: if the device was already
enabled then WDIOS_ENABLECARD was a no-op and wasn't needed in the
first place. So we can just ignore the error and continue.
---
 src/shared/watchdog.c |3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/src/shared/watchdog.c
+++ b/src/shared/watchdog.c
@@ -64,7 +64,8 @@ static int update_timeout(void) {
 
         flags = WDIOS_ENABLECARD;
         r = ioctl(watchdog_fd, WDIOC_SETOPTIONS, &flags);
-        if (r < 0) {
+        /* ENOTTY means the watchdog is always enabled so we're fine */
+        if (r < 0 && errno != ENOTTY) {
                 log_warning("Failed to enable hardware watchdog: %m");
                 return -errno;
         }

-- 
Jean Delvare
SUSE L3 Support



Re: [systemd-devel] Why we need to read/save random seed?

2015-06-15 Thread Cristian Rodríguez
On Mon, Jun 15, 2015 at 12:33 PM, cee1 fykc...@gmail.com wrote:
 Hi,

 I may have got confused.

 First, systemd-random-seed.service will save a seed from
 /dev/urandom at shutdown, and load that seed into /dev/urandom at the
 next boot.

 My questions are:
 1. Can we not save a seed, but instead load a seed read from **
 /dev/random ** into ** /dev/urandom **?

No, at boot you do not have enough entropy to begin with.

 2. If we save a seed on disk and someone reads its content later,
 will this make urandom predictable?

Yes, that's why the file is only readable by root.

 Talking about /dev/random: it consumes an internal entropy pool, and some
 system events (disk reads, page faults, etc.) enlarge this pool, am I
 right?

See this article http://www.2uo.de/myths-about-urandom/

 And writing to /dev/random will mix the input data into the pool, but
 not enlarge it, right?

It is up to the kernel to credit the data written to it as entropy (or not).

  What benefit is there in only mixing data
 without enlarging the entropy pool?

The data written to it may be predictable.

 3.16+ will mix in data from a HWRNG; does it also enlarge the entropy pool?

Yes, but it might not be given credit, depending on what the source is.


[systemd-devel] Scripting a server test

2015-06-15 Thread Johannes Ernst
This is a best-practice question.

I’d like to automate testing of a web application (running in a container) by 
running curl from the host. The logical sequence should be:

* boot container using local tar file or existing directory
* wait until container is-system-running=true
* on the container, execute a few commands
* on the host, run curl against the container
* tear down the container

I need to boot the container, and the image I need to use for this test brings 
up a login prompt at the console.

I’m thinking of doing something like:
 machinectl import-tar foo.tar foo
 machinectl start foo
 ssh foo systemctl is-system-running
until satisfied
 ssh foo some other commands
 curl http://foo/ ...
 machinectl poweroff foo
 machinectl status foo
until off

But I don’t like the “container import and registration” part of this, because 
my container is very ephemeral and might only live for a few minutes if the 
test passes.

Alternatively I could create myself a “test@.service” which would be identical 
to systemd-nspawn@.service, except it would use the directory as the %I instead 
of the machine name, so I could start it like:
 systemctl start test@/my/container/directory

Or I could fork off the systemd-nspawn command in my test script.

Opinions? I figure this is a common-enough scenario that there might be some 
opinions on this list ...

Cheers,



Johannes.

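
A sketch of the wait-until-ready and tear-down parts of such a test script
(hypothetical machine name "foo"; assumes ssh access into the container as
described above):

    # block until the container reports it is fully up
    until ssh foo systemctl is-system-running 2>/dev/null | grep -qx running; do
            sleep 1
    done

    # ... run the test ...
    curl --fail http://foo/

    # tear down and wait for the machine to disappear
    machinectl poweroff foo
    while machinectl status foo >/dev/null 2>&1; do
            sleep 1
    done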


Re: [systemd-devel] [PATCH] watchdog: Don't require WDIOC_SETOPTIONS/WDIOS_ENABLECARD

2015-06-15 Thread Jean Delvare
Hi Lennart,

Thanks for your quick reply.

On Monday 15 June 2015 at 18:16 +0200, Lennart Poettering wrote:
 On Mon, 15.06.15 18:14, Jean Delvare (jdelv...@suse.de) wrote:
 
  Not all watchdog drivers implement WDIOC_SETOPTIONS. Drivers which do
  not implement it have their device always enabled. So it's fine to
  report an error if WDIOS_DISABLECARD is passed and the ioctl is not
  implemented, however failing when WDIOS_ENABLECARD is passed and the
  ioctl is not implemented is not good: if the device was already
  enabled then WDIOS_ENABLECARD was a no-op and wasn't needed in the
  first place. So we can just ignore the error and continue.
 
 Isn't this something that should be fixed in the drivers?

This is a legitimate question and I pondered it myself too. However, the
fact that over 20 drivers do not implement WDIOC_SETOPTIONS, together
with the fact that it's named, well, "set options", gives me the feeling
that not implementing it is legitimate.

  ---
   src/shared/watchdog.c |3 ++-
   1 file changed, 2 insertions(+), 1 deletion(-)
  
   --- a/src/shared/watchdog.c
   +++ b/src/shared/watchdog.c
   @@ -64,7 +64,8 @@ static int update_timeout(void) {
    
            flags = WDIOS_ENABLECARD;
            r = ioctl(watchdog_fd, WDIOC_SETOPTIONS, &flags);
   -        if (r < 0) {
   +        /* ENOTTY means the watchdog is always enabled so we're fine */
   +        if (r < 0 && errno != ENOTTY) {
                    log_warning("Failed to enable hardware watchdog: %m");
                    return -errno;
 
  If this is something to fix in systemd, rather than in the
  drivers: I am pretty sure that we should log in all cases, but change
  the log level to LOG_DEBUG if it's ENOTTY, i.e. use log_full(errno ==
  ENOTTY ? LOG_DEBUG : LOG_WARNING, ...).

Sure, I can try that. Updated patch coming (likely tomorrow).

Thanks for the review,
-- 
Jean Delvare
SUSE L3 Support
