Re: [systemd-devel] Erroneous detection of degraded array

2017-01-31 Thread Andrei Borzenkov
31.01.2017 01:19, NeilBrown wrote:
> On Mon, Jan 30 2017, Andrei Borzenkov wrote:
> 
>> On Mon, Jan 30, 2017 at 9:36 AM, NeilBrown  wrote:
>> ...
>>>
>>> systemd[1]: Created slice system-mdadm\x2dlast\x2dresort.slice.
>>> systemd[1]: Starting system-mdadm\x2dlast\x2dresort.slice.
>>> systemd[1]: Starting Activate md array even though degraded...
>>> systemd[1]: Stopped target Local File Systems.
>>> systemd[1]: Stopping Local File Systems.
>>> systemd[1]: Unmounting /share...
>>> systemd[1]: Stopped (with error) /dev/md0.
>
>> ...
>>>
>>> The race is, I think, the one I mentioned.  If the md device is started
>>> before udev tells systemd to start the timer, the Conflicts dependency
>>> goes the "wrong" way and stops the wrong thing.
>>>
>>
>> From the logs provided it is unclear whether it is *timer* or
>> *service*. If it is timer - I do not understand why it is started
>> exactly 30 seconds after device apparently appears. This would match
>> starting service.
> 
> My guess is that the timer is triggered immediately after the device is
> started, but before it is mounted.
> The Conflicts directive tries to stop the device, but it cannot stop the
> device and there are no dependencies yet, so nothing happens.
> After the timer fires (30 seconds later) the .service starts.  It also
> has a Conflicts directive, so systemd tries to stop the device again.
> Now that it has been mounted, there is a dependency that can be
> stopped, and the device gets unmounted.
> 
>>
>> Yet another case where system logging is hopelessly unfriendly for
>> troubleshooting :(
>>
>>> It would be nice to be able to reliably stop the timer when the device
>>> starts, without risking having the device get stopped when the timer
>>> starts, but I don't think we can reliably do that.
>>>
>>
>> Well, let's wait until we can get some more information about what happens.
>>

Not much more, but we have at least confirmed that it was indeed the
last-resort service that was fired off by the last-resort timer.
Unfortunately there is no trace of the timer itself.

>>> Changing the
>>>   Conflicts=sys-devices-virtual-block-%i.device
>>> lines to
>>>   ConditionPathExists=/sys/devices/virtual/block/%i
>>> might make the problem go away, without any negative consequences.
>>>
>>
>> Ugly, but yes, maybe this is the only way with current systemd.
>>

This won't work. The sysfs node appears as soon as the very first array
member is found, while the array is still inactive; what we need is the
condition "array is active".

The Conflicts line works because the array is not announced to systemd
(SYSTEMD_READY) until it is active, which in turn is derived from the
content of md/array_state.
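
For reference, the units under discussion look roughly like this (a
paraphrased sketch, not the literal shipped files; exact contents vary by
mdadm version, and DefaultDependencies=/Type=/ExecStart= here are
assumptions):

```
# mdadm-last-resort@.timer (sketch)
[Unit]
Description=Timer to wait for more drives before activating degraded array.
DefaultDependencies=no
Conflicts=sys-devices-virtual-block-%i.device

[Timer]
OnActiveSec=30

# mdadm-last-resort@.service (sketch)
[Unit]
Description=Activate md array even though degraded
DefaultDependencies=no
Conflicts=sys-devices-virtual-block-%i.device

[Service]
Type=oneshot
ExecStart=/sbin/mdadm --run /dev/%i
```

The Conflicts= lines in both units are what fire "the wrong way" in the
race described above: stopping the .device is supposed to be the effect of
the timer/service being cancelled, not of the device appearing.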

>>> The primary purpose of having the 'Conflicts' directives was so that
>>> systemd wouldn't log
>>>   Starting Activate md array even though degraded
>>> after the array was successfully started.
>>

Yes, I understand it.

>> This looks like a cosmetic problem. What will happen if the last-resort
>> service is started when the array is fully assembled? Will it do any harm?
> 
> Yes, it could be seen as cosmetic, but cosmetic issues can be important
> too.  Confusing messages in logs can be harmful.
> 
> In all likely cases, running the last-resort service won't cause any
> harm.
> If, during the 30 seconds, the array is started, then deliberately
> stopped, then partially assembled again, then when the last-resort
> service finally starts it might do the wrong thing.
> So it would be cleanest if the timer was killed as soon as the device
> is started.  But I don't think there is a practical concern.
> 
> I guess I could make a udev rule that fires when the array is started, and
> that runs "systemctl stop mdadm-last-resort@md0.timer"
> 


Well ... what we really need is a unidirectional dependency. Actually, the
way Conflicts is used *is* unidirectional in spirit anyway - nobody
seriously expects that starting foo.service will stop a currently running
shutdown.target. But bidirectional is the semantic we currently have.

But it will probably do to mitigate this issue until something more
generic can be implemented.
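
Such a udev rule might look something like the following (a sketch only:
the rule file name, the ACTION/KERNEL match keys, and reliance on
SYSTEMD_READY are assumptions, not a tested rule):

```
# e.g. /etc/udev/rules.d/64-md-stop-last-resort.rules (hypothetical)
# Once the array is announced as ready, cancel the pending last-resort
# timer for that device instead of letting Conflicts= tear things down.
SUBSYSTEM=="block", KERNEL=="md*", ACTION=="change", \
  ENV{SYSTEMD_READY}=="1", \
  RUN+="/usr/bin/systemctl --no-block stop mdadm-last-resort@%k.timer"
```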



___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] UMask attribute in service file

2017-01-31 Thread Oliver Graute
Hello list,

I'm using the UMask attribute in my service file to set my umask
to 0027. The service starts a little C program which uses
fopen() to create a file.

The file permissions are

-rw-rw-rw- (0666)

instead of

-rw-r----- (0640)

Is this expected behavior?

some further background:

In my system there are different services started by systemd 225 (all
with UMask=027). Sometimes files are created with 666, sometimes with
640 as I expect.

ls -la
-rw-r-----  1 oliver oliver  75 Jan 11 17:30 log_20170111_163024_2.log
-rw-rw-rw-  1 oliver oliver  75 Jan 11 17:30 log_20170111_163024_3.log
-rw-rw-rw-  1 oliver oliver  75 Jan 11 17:36 log_20170111_163610_1.log
-rw-r-----  1 oliver oliver  75 Jan 31 13:38 log_20170131_123842_1.log
-rw-r-----  1 oliver oliver  75 Jan 31 13:57 log_20170131_125738_2.log

Any clue what's going on here?

Here is my stub service definition:

[Unit]
Description=Start umask_test

[Service]
Type=simple
PIDFile=/var/run/umasktest.pid
WorkingDirectory=/home/oliver
User=oliver
Group=oliver
UMask=0027
Environment=
ExecStart=/home/oliver/umask_test

Best Regards,

Oliver


Re: [systemd-devel] nspawn --overlay and --read-only

2017-01-31 Thread Fabien Meghazi
>
> $ systemd-nspawn --directory=/os --read-only 
> --overlay=/os/home/foobar:/tmp/home/foobar:/home/foobar
> --user=foobar
>
> I expect the user foobar to be able to write in /home/foobar (in the
> container) but instead I get a Permission denied.
>

Sorry all, I was not managing the underlying filesystem UIDs properly
between the host and the guest.


[systemd-devel] .device not creating in systemd

2017-01-31 Thread Arun Kumar
Hi all,

Previously I used systemd version 206 and kernel version 3.10.17.

Now I updated kernel version to 4.1.15 and systemd version to 229.

I'm using yocto for build.

I was mounting a few devices from systemd using .mount unit files. In
those I'm using Requires= and Wants= on the corresponding .device units.

After updating to the latest systemd version, no .device units are being
created. I verified with bootchart and also with the

# systemctl list-units

command. The devices are not shown. Please suggest ideas, as I want to do
the mounting from systemd.
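
For comparison, a minimal .mount unit of the kind described might look
like this (the device path, mount point, and filesystem type here are
made-up placeholders, not taken from the reporter's system):

```
# /etc/systemd/system/mnt-data.mount (hypothetical example)
# Note: the unit file name must match the mount point (mnt-data <-> /mnt/data).
[Unit]
Description=Mount data partition
Requires=dev-mmcblk0p1.device
After=dev-mmcblk0p1.device

[Mount]
What=/dev/mmcblk0p1
Where=/mnt/data
Type=ext4
```

Note that .device units are not created manually: systemd generates them
from udev events, so if they are missing from "systemctl list-units" it is
worth checking that udev is running and has actually processed the devices.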