Re: [systemd-devel] restart vs. stop/start

2016-05-20 Thread Reindl Harald



On 20.05.2016 at 21:50, Christian Boltz wrote:

Hello,

it looks like
systemctl restart foo
is internally mapped to a sequence of
systemctl stop foo; systemctl start foo


what else?


Unfortunately, this behaviour causes quite a bit of trouble for me.


why?


I need a way to know if "restart" or "stop" was used because the mapping
to stop / start gives my service a completely different behaviour than
expected on restart.

Is there a way to find out if "stop" or "restart" was used?


if you need to differ here, your service is broken by design - why do you 
need to know what triggered the stop, and what else do you imagine 
"restart" to be than stop-start?




___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] launching an interactive user session

2016-05-20 Thread Mike Gulick
Hi systemd-devel,

I'm on Debian Jessie running the default systemd-215.  I have a daemon (running 
as root, controlled by systemd), whose job it is to launch on-demand VNC 
servers for other users.  Currently, this daemon uses a shell command like the 
following to launch the vnc server for a given $USER:

  sudo -i -u $USER /bin/sh -l -c 'cd \$HOME && /path/to/vncserver $ARGS'

The issue I'm having is that the user VNC sessions being created all share the 
same systemd login session as my daemon.  I can see this by running 
systemd-cgls.  The users of these VNC sessions would like to be able to use 
"systemd-run --user --scope -p MemoryLimit=X COMMAND" to launch a command with 
cgroup-based resource limiting.  However without a user session, this results 
in "Failed to create bus connection: Connection refused".

There are too many users to create static systemd unit files, and it doesn't seem 
like I can create and load .service files on the fly.  The "machinectl shell" 
command (https://github.com/systemd/systemd/pull/1022) looks promising, but 
unfortunately it's not in my systemd yet.  I've tried searching through this 
mailing list's history, but the results all were dead ends.

It seems like there are a lot of pieces needed to make this work (dbus, XDG env 
vars, systemd --user), and all of the recommendations say to go through 
pam_systemd.so.  I'm not afraid of interacting with PAM, but I don't really 
understand what's needed, and I can't actually authenticate as the user because 
I don't know their password (currently this daemon is root so it doesn't need a 
password to switch user).

If there is some kind of shell pipeline, or a wrapper script I can write to 
automate the necessary steps please let me know.
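Not from the thread, but one avenue that may be worth testing (unverified on the systemd 215 in Jessie): enable lingering for the user so logind keeps a per-user service manager running without an interactive login, then start the VNC server with the user's runtime directory set so `systemd-run --user` can reach the user bus later. A dry-run sketch; the user name and uid are placeholders:

```shell
#!/bin/sh
# Hypothetical sketch, not verified on systemd 215: enable lingering so
# logind starts a per-user manager without an interactive login, then launch
# the VNC server with the user's runtime dir in its environment. DRY_RUN=1
# only prints the commands instead of executing them.
TARGET_USER="${1:-alice}"          # placeholder user name
DRY_RUN="${DRY_RUN:-1}"

run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

uid=1000                           # in a real script: uid=$(id -u "$TARGET_USER")
run loginctl enable-linger "$TARGET_USER"
run sudo -u "$TARGET_USER" env "XDG_RUNTIME_DIR=/run/user/$uid" /path/to/vncserver
```

Whether lingering alone gives a usable user bus on that systemd version is exactly the kind of thing that would need checking with systemd-cgls afterwards.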

Thank you very much!

-Mike Gulick​


Re: [systemd-devel] Hear opinions about changing watchdog timeout value during service running

2016-05-20 Thread Lennart Poettering
On Thu, 19.05.16 10:57, 김민경/주임연구원/SW Platform(연)AOT팀(minkyung88@lge.com) 
(minkyung88@lge.com) wrote:

> Hello,
> 
> I am planning to work on supporting api which changes "WATCHDOG_USEC"
> value during service running.
> 
> With this api, running service can change its own watchdog timeout value.
> 
> I want to hear your opinions about this suggested function.

Hmm, what's the usecase here?

Adding this sounds simple enough: it could be as easy as introducing a
WATCHDOG_USEC parameter for sd_notify(), which would then alter the
watchdog settings...
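For illustration, such a parameter would travel over the sd_notify(3) socket like any other assignment. A hypothetical sketch (the WATCHDOG_USEC assignment is an assumption here, not something systemd 215 understood):

```shell
#!/bin/sh
# Hypothetical sketch of the proposed extension: ask the manager to change
# this service's watchdog timeout to 20 s by sending an assignment over the
# notify socket. systemd-notify(1) forwards VARIABLE=VALUE assignments to
# $NOTIFY_SOCKET. 20000000 us = 20 s.
msg="WATCHDOG_USEC=20000000"
if [ -n "${NOTIFY_SOCKET:-}" ]; then
    systemd-notify "$msg"
else
    # outside of a systemd service there is no notify socket to talk to
    echo "would send: $msg"
fi
```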

But before doing that, what's the background?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] systemd and cgroup memory membership reset

2016-05-20 Thread Lennart Poettering
On Tue, 17.05.16 21:50, Tomasz Janowski (tomjanow...@gmail.com) wrote:

> Hello,
> 
>  I have an issue with possible conflicts between systemd and a SLURM
> scheduler's cgroup management. SLURM can be configured to use a few cgroup
> subsystems like freezer, cpuset and memory. The daemon itself runs in the
> root of the memory tree, but it creates entries (subdirectories) for child
> processes to constrain their memory use.
> 
> Occasionally it happens that processes started by SLURM get their memory
> membership reset (e.g. after a week of running). The cgroup tree is there,
> only processes are relocated back to the root of memory cgroup. Is there
> any action that a system administrator can take, or a cron script that can
> reset memory cgroup membership while keeping the other two intact (freezer,
> cpuset)? This would affect only child processes, as a master (slurmd) runs
> in the root of memory cgroup.
> 
> I use systemd 215 and the options relevant to memory are:
> 
> MemoryAccounting=no
> MemoryLimit=18446744073709551615
> 
> I am quite sure that this happens due to systemd actions. Any remarks would
> be greatly appreciated! Otherwise I will have to plow through the sources
> trying to figure this out. :(

Well, it's simply not supported to have two cgroup managers manage the
same cgroup hierarchy. This must break as the cgroup logic is really
not designed to allow cooperative management. In fact, the kernel guys
are pretty explicit about cgroups being a single-writer interface.

Hence, your cgroup software is not compatible with systemd. If it
shall run in conjunction with systemd, then it may not interfere with
the top-level cgroup hierarchy at all.

That all said, systemd permits delegation of subhierarchies, via the
Delegate= property in service and unit files. If that's set for a
unit, then systemd won't migrate processes below that subhierarchy,
and it may be managed by another manager. This is how container
managers can have their own subtrees for example.
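A minimal sketch of such a delegated unit (the unit name and binary path are assumptions for illustration, not from the thread):

```ini
# slurmd.service (illustrative sketch only)
[Service]
ExecStart=/usr/sbin/slurmd
# Hand this unit's cgroup subtree over to the service itself;
# systemd will not migrate processes within it.
Delegate=yes
```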

Also see:

https://www.freedesktop.org/wiki/Software/systemd/ControlGroupInterface/

Note that the 215 release you are using is really ancient, however;
hence, your mileage may vary with all of this.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Running ldconfig at boot

2016-05-20 Thread Lennart Poettering
On Fri, 20.05.16 18:08, Ivan Shapovalov (inte...@intelfx.name) wrote:

> On 2016-05-20 at 14:59 +0200, Lennart Poettering wrote:
> > On Fri, 20.05.16 14:01, Florian Weimer (fwei...@redhat.com) wrote:
> > 
> > > The default systemd configuration runs ldconfig at boot.  Why?
> > 
> > It's conditionalized via ConditionNeedsUpdate=, which means it is
> > only
> > run when /etc is older than /usr. (This is tested via checking
> > modification times of /etc/.updated and /usr), see explanation on
> > systemd.unit(5).
> > 
> > The usecase for this is to permit systems where a single /usr tree is
> > shared among multiple systems, and might be updated at any time, and
> > the changes need to be propagated to /etc on each individual
> > system. The keyword is "stateless systems".
> > 
> > Note that normally this should not be triggered at all, since this
> > only works on systems where /usr itself is explicitly touched after
> > each update so that the mtime is updated. That should normally not
> > happen, except when your distro is prepared for that, and does that
> > explicitly.
> > 
> > Hence, in your case, any idea how it happens that your /usr got its
> > mtime updated?
> 
> Hi,
> 
> I just recalled I've seen many extraneous triggers of
> ConditionNeedsUpdate= on arch some time ago. It looked like the mtime
> of /usr has nonzero nanoseconds value, but mtime of /etc/.updated seems
> to be clamped to whole seconds. I didn't look at the logic though...

We had trouble with that in the past. This is really stupid ext234
behaviour: if you create small file systems it will silently degrade
mtime accuracy to 1s, and there's no way to figure out the accuracy of
an mtime on a specific fs...
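The granularity mismatch can be seen directly with GNU touch/stat (a standalone illustration, not systemd's actual comparison code):

```shell
#!/bin/sh
# Standalone illustration of the mismatch Ivan describes: one stamp file
# gets a sub-second mtime, the other is clamped to a whole second, as
# /etc/.updated reportedly was.
a=$(mktemp); b=$(mktemp)
touch -d '2016-05-20 12:00:00.5' "$a"   # stands in for /usr, fractional mtime
touch -d '2016-05-20 12:00:00'   "$b"   # stands in for /etc/.updated, whole seconds
stat -c '%n %y' "$a" "$b"               # shows the differing fractional parts
rm -f "$a" "$b"
```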

See 329c542585cd92cb905990e3bf59eda16fd88cfb about this...

Are you saying that fix doesn't work and is not sufficient?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] service stop taking too long

2016-05-20 Thread Pradeepa Kumar
I debugged this further, and it turned out to be an issue with our script
in ExecStop. Thanks for the comments.
On May 20, 2016 8:16 PM, "Lennart Poettering" 
wrote:

> On Wed, 18.05.16 20:38, Pradeepa Kumar (cdprade...@gmail.com) wrote:
>
> > Sorry for not being clear earlier;
> > maybe I am not explaining properly.
> >
> > In XYZ.service:
> > ExecStop: myscript1
> >
> > $cat myscript1
> > echo "inside myscript1"
> >
> >
> > and
> >
> > The sequence in journalctl logs is:
> >
> >  May 18 01:18:06 machine1 systemd[1]: Stopping "XYZ service"...
> > ...
> >  May 18 01:18:46 machine1  myscript1[3941]: inside myscript1
> >
> > As you can see, the beginning of execution of myscript1 took 40 sec.
>
> So you are saying that systemd reports that it is starting your script
> 40 seconds before your script is actually started?
>
> If so, this would suggest that something hangs in the time systemd forks
> off your stop script, but before exec() is actually called for
> it. This could be an NSS look-up done due to User= or Group=, or a PAM
> interaction done via PAM= or so.
>
> How precisely does your full service file look like?
>
> Lennart
>
> --
> Lennart Poettering, Red Hat
>


Re: [systemd-devel] service stop taking too long

2016-05-20 Thread Lennart Poettering
On Wed, 18.05.16 20:38, Pradeepa Kumar (cdprade...@gmail.com) wrote:

> Sorry for not being clear earlier;
> maybe I am not explaining properly.
> 
> In XYZ.service:
> ExecStop: myscript1
> 
> $cat myscript1
> echo "inside myscript1"
> 
> 
> and
> 
> The sequence in journalctl logs is:
> 
>  May 18 01:18:06 machine1 systemd[1]: Stopping "XYZ service"...
> ...
>  May 18 01:18:46 machine1  myscript1[3941]: inside myscript1
> 
> As you can see, the beginning of execution of myscript1 took 40 sec.

So you are saying that systemd reports that it is starting your script
40 seconds before your script is actually started?

If so, this would suggest that something hangs in the time systemd forks
off your stop script, but before exec() is actually called for
it. This could be an NSS look-up done due to User= or Group=, or a PAM
interaction done via PAM= or so.
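One quick way to test the NSS theory from a shell (a sketch, not from the thread; "root" is a placeholder for the unit's actual User=/Group= values):

```shell
#!/bin/sh
# If the User=/Group= resolution is what stalls between fork() and exec(),
# the same NSS lookup should be slow from a shell too. Replace "root" with
# the values from the unit file under suspicion.
start=$(date +%s)
getent passwd root > /dev/null
getent group root > /dev/null
end=$(date +%s)
echo "NSS lookups took $((end - start))s"
```

A multi-second result here would point at a slow NSS backend (LDAP, NIS, etc.) rather than at systemd itself.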

How precisely does your full service file look like?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Shutdown a specific service in systemd shutdown

2016-05-20 Thread Lennart Poettering
On Fri, 20.05.16 11:24, Bao Nguyen (bao...@gmail.com) wrote:

> Hi Martin,
> 
> Thanks a lot for your answer.
> 
> How about if my specific script is a SysV init script with LSB headers:
> can we still use a property like After=, as in systemd, in the LSB header
> to make it start/stop in order?

Our "sysv-generator" tool that is responsible for turning SysV
services into native systemd services understands the
"X-Start-Before:" and "X-Start-After:" LSB header file stanzas to
declare ordering. This is an extension Debian introduced that we
support too.
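For illustration, a hypothetical LSB header block using that stanza (the script and service names are made up):

```shell
### BEGIN INIT INFO
# Provides:          myscript
# Required-Start:    $local_fs
# Required-Stop:     $local_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# X-Start-Before:    foo
### END INIT INFO
```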




> 
> Another solution I thought of, to make the shutdown ordered, after
> reading about systemd-halt.service in
> https://www.freedesktop.org/software/systemd/man/systemd-halt.service.html
> 
> "Immediately before executing the actual system halt/poweroff/reboot/kexec
> systemd-shutdown will run all executables in
> /usr/lib/systemd/system-shutdown/ and pass one argument to them: either "
> halt", "poweroff", "reboot" or "kexec", depending on the chosen action. All
> executables in this directory are executed in parallel, and execution of
> the action is not continued before all executables finished."
> 
> Can I put a script to terminate my specific script in
> /usr/lib/systemd/system-shutdown/?
> Per that description, the script would be run to terminate my script
> before the actual system shutdown is executed?

No, these executables are run very late, immediately before executing
the actual system halt, as the documentation says pretty explicitly...

> Some people on internet also tried to make a script to do something
> before everything
> else on shutdown with systemd like
> http://superuser.com/questions/1016827/how-do-i-run-a-script-before-everything-else-on-shutdown-with-systemde
> 
> What do you think: can I make a script to terminate my script before all
> other services shut down, as above, to make it ordered?

The concept doesn't exist. If everybody does something like that, how
are we supposed to resolve that? If you have 100 services, and all of
them want to be stopped before all others, how would you ever resolve
that?

Please simply list the right deps instead.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Running ldconfig at boot

2016-05-20 Thread Lennart Poettering
On Fri, 20.05.16 16:10, Lennart Poettering (lenn...@poettering.net) wrote:

> You mean, because not all local file systems have been mounted yet?
> 
> So, there was actually a PR that tried to fix that, posted recently:
> 
> https://github.com/systemd/systemd/pull/2859
> 
> and it was merged recently, too. But it broke precisely the use case the
> service exists for, because it did more than just fix the local-fs issue:
> it also removed the update trigger...
> 
> I filed a PR to revert that PR now:
> 
> https://github.com/systemd/systemd/pull/3305
> 
> I figure an independent PR should be filed to restore the local-fs
> thing then...

I have now replaced #3305 by this new PR:

https://github.com/systemd/systemd/pull/3311

It restores correct update behaviour, but leaves the local-fs.target
change in, thus ensuring idempotent behaviour. This should fix your
issue?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Running ldconfig at boot

2016-05-20 Thread Florian Weimer

On 05/20/2016 04:04 PM, Lennart Poettering wrote:

On Fri, 20.05.16 15:55, Florian Weimer (fwei...@redhat.com) wrote:


On 05/20/2016 02:59 PM, Lennart Poettering wrote:

On Fri, 20.05.16 14:01, Florian Weimer (fwei...@redhat.com) wrote:


The default systemd configuration runs ldconfig at boot.  Why?


It's conditionalized via ConditionNeedsUpdate=, which means it is only
run when /etc is older than /usr. (This is tested via checking
modification times of /etc/.updated and /usr), see explanation on
systemd.unit(5).

The usecase for this is to permit systems where a single /usr tree is
shared among multiple systems, and might be updated at any time, and
the changes need to be propagated to /etc on each individual
system. The keyword is "stateless systems".


Do such systems need systemd configuration changes?


Not sure I understand this question?


If such systems require specialized unit files, then you can put 
ldconfig.service there, instead of exposing all systemd users to the 
service.



There are some more packages where installation or upgrades would update the
/usr mtime.  I don't have a current Fedora rawhide list, but I'm attaching
an older version.  The /usr mtime is not a reliable indicator for what you
are trying to detect, I think.


Well, it really doesn't have to be. I mean, in the worst case we'll
run ldconfig once too often, which should be idempotent...


As I explained, the way you run it, it is not necessarily idempotent.

Florian



Re: [systemd-devel] Running ldconfig at boot

2016-05-20 Thread Lennart Poettering
On Fri, 20.05.16 16:06, Florian Weimer (fwei...@redhat.com) wrote:

> On 05/20/2016 04:04 PM, Lennart Poettering wrote:
> >On Fri, 20.05.16 15:55, Florian Weimer (fwei...@redhat.com) wrote:
> >
> >>On 05/20/2016 02:59 PM, Lennart Poettering wrote:
> >>>On Fri, 20.05.16 14:01, Florian Weimer (fwei...@redhat.com) wrote:
> >>>
> The default systemd configuration runs ldconfig at boot.  Why?
> >>>
> >>>It's conditionalized via ConditionNeedsUpdate=, which means it is only
> >>>run when /etc is older than /usr. (This is tested via checking
> >>>modification times of /etc/.updated and /usr), see explanation on
> >>>systemd.unit(5).
> >>>
> >>>The usecase for this is to permit systems where a single /usr tree is
> >>>shared among multiple systems, and might be updated at any time, and
> >>>the changes need to be propagated to /etc on each individual
> >>>system. The keyword is "stateless systems".
> >>
> >>Do such systems need systemd configuration changes?
> >
> >Not sure I understand this question?
> 
> If such systems require specialized unit files, then you can put
> ldconfig.service there, instead of exposing all systemd users to the
> service.

No they don't. Basic Fedora works fine in this mode, without any
changes. It isn't polished, and it isn't fully supported by Fedora, but
the basics do work just fine.

> >>There are some more packages where installation or upgrades would update the
> >>/usr mtime.  I don't have a current Fedora rawhide list, but I'm attaching
> >>an older version.  The /usr mtime is not a reliable indicator for what you
> >>are trying to detect, I think.
> >
> >Well, it really doesn't have to be. I mean, in the worst case we'll
> >run ldconfig once too often, which should be idempotent...
> 
> As I explained, the way you run it, it is not necessarily idempotent.

You mean, because not all local file systems have been mounted yet?

So, there was actually a PR that tried to fix that, posted recently:

https://github.com/systemd/systemd/pull/2859

and it was merged recently, too. But it broke precisely the use case the
service exists for, because it did more than just fix the local-fs issue:
it also removed the update trigger...

I filed a PR to revert that PR now:

https://github.com/systemd/systemd/pull/3305

I figure an independent PR should be filed to restore the local-fs
thing then...

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Running ldconfig at boot

2016-05-20 Thread Lennart Poettering
On Fri, 20.05.16 15:55, Florian Weimer (fwei...@redhat.com) wrote:

> On 05/20/2016 02:59 PM, Lennart Poettering wrote:
> >On Fri, 20.05.16 14:01, Florian Weimer (fwei...@redhat.com) wrote:
> >
> >>The default systemd configuration runs ldconfig at boot.  Why?
> >
> >It's conditionalized via ConditionNeedsUpdate=, which means it is only
> >run when /etc is older than /usr. (This is tested via checking
> >modification times of /etc/.updated and /usr), see explanation on
> >systemd.unit(5).
> >
> >The usecase for this is to permit systems where a single /usr tree is
> >shared among multiple systems, and might be updated at any time, and
> >the changes need to be propagated to /etc on each individual
> >system. The keyword is "stateless systems".
> 
> Do such systems need systemd configuration changes?

Not sure I understand this question?

> >Note that normally this should not be triggered at all, since this
> >only works on systems where /usr itself is explicitly touched after
> >each update so that the mtime is updated. That should normally not
> >happen, except when your distro is prepared for that, and does that
> >explicitly.
> 
> It happens if the filesystem package is upgraded.  I think this is what
> triggered the last mtime update on my workstation.
> 
> There are some more packages where installation or upgrades would update the
> /usr mtime.  I don't have a current Fedora rawhide list, but I'm attaching
> an older version.  The /usr mtime is not a reliable indicator for what you
> are trying to detect, I think.

Well, it really doesn't have to be. I mean, in the worst case we'll
run ldconfig once too often, which should be idempotent...

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Running ldconfig at boot

2016-05-20 Thread Florian Weimer

On 05/20/2016 02:59 PM, Lennart Poettering wrote:

On Fri, 20.05.16 14:01, Florian Weimer (fwei...@redhat.com) wrote:


The default systemd configuration runs ldconfig at boot.  Why?


It's conditionalized via ConditionNeedsUpdate=, which means it is only
run when /etc is older than /usr. (This is tested via checking
modification times of /etc/.updated and /usr), see explanation on
systemd.unit(5).

The usecase for this is to permit systems where a single /usr tree is
shared among multiple systems, and might be updated at any time, and
the changes need to be propagated to /etc on each individual
system. The keyword is "stateless systems".


Do such systems need systemd configuration changes?


Note that normally this should not be triggered at all, since this
only works on systems where /usr itself is explicitly touched after
each update so that the mtime is updated. That should normally not
happen, except when your distro is prepared for that, and does that
explicitly.


It happens if the filesystem package is upgraded.  I think this is what 
triggered the last mtime update on my workstation.


There are some more packages where installation or upgrades would update 
the /usr mtime.  I don't have a current Fedora rawhide list, but I'm 
attaching an older version.  The /usr mtime is not a reliable indicator 
for what you are trying to detect, I think.


Florian
 nevra                                              | name
----------------------------------------------------+----------------------
 arm-none-eabi-binutils-cs-2014.05.28-3.fc22.x86_64 | /usr/arm-none-eabi
 arm-none-eabi-newlib-2.1.0-5.fc21.noarch   | /usr/arm-none-eabi
 avr-binutils-1:2.24-4.fc22.x86_64  | /usr/avr
 avr-libc-1.8.0-9.fc21.noarch   | /usr/avr
 cinnamon-control-center-filesystem-2.4.2-1.fc22.i686   | /usr/share
 cinnamon-control-center-filesystem-2.4.2-1.fc22.x86_64 | /usr/share
 filesystem-3.2-32.fc22.x86_64  | /usr/bin
 filesystem-3.2-32.fc22.x86_64  | /usr/games
 filesystem-3.2-32.fc22.x86_64  | /usr/include
 filesystem-3.2-32.fc22.x86_64  | /usr/lib
 filesystem-3.2-32.fc22.x86_64  | /usr/lib64
 filesystem-3.2-32.fc22.x86_64  | /usr/libexec
 filesystem-3.2-32.fc22.x86_64  | /usr/local
 filesystem-3.2-32.fc22.x86_64  | /usr/sbin
 filesystem-3.2-32.fc22.x86_64  | /usr/share
 filesystem-3.2-32.fc22.x86_64  | /usr/src
 krb5-appl-clients-1.0.3-9.fc22.x86_64  | /usr/kerberos
 krb5-appl-servers-1.0.3-9.fc22.x86_64  | /usr/kerberos
 mingw32-filesystem-99-5.fc21.noarch| /usr/i686-w64-mingw32
 mingw64-filesystem-99-5.fc21.noarch| /usr/x86_64-w64-mingw32
 msp430-binutils-2.21.1a-8.fc22.x86_64  | /usr/msp430
 nqp-jvm-0.0.2014.04-3.fc22.noarch  | /usr/languages
 nqp-moar-0.0.2014.04-3.fc22.x86_64 | /usr/languages
 sblim-cmpi-network-1.4.0-13.fc22.i686  | /usr/share
 sblim-cmpi-network-1.4.0-13.fc22.x86_64| /usr/share
(25 rows)



Re: [systemd-devel] Automount some dirs at user login

2016-05-20 Thread Lennart Poettering
On Wed, 18.05.16 22:14, Vasiliy Tolstov (v.tols...@selfip.ru) wrote:

> I need to mount tmpfs on .cache for each user after login.
> How can I do that with systemd?
> 
> For example, I want to mount tmpfs on .cache for user1, mount .cache to
> tmpfs for user2 as well, and so on.
> After logout of the last session for this user, I need to unmount it...

systemd does not cover this, and I am not sure it should. There's a
reason why .cache is defined to be in $HOME and not in /tmp or /run after all...

If you want to redefine .cache like this, then I'd probably just add a
few system-wide shell profile lines that symlink .cache into
$XDG_RUNTIME_DIR/cache or so. After all that is a tmpfs whose lifetime
is bound to the user being logged in, and it has a per-user size limit applied.
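Lennart's suggestion could look roughly like the following profile fragment (a sketch; the file name and guard conditions are assumptions):

```shell
# Sketch of the suggestion above, e.g. as /etc/profile.d/cache-tmpfs.sh
# (filename assumed): point ~/.cache into the per-user tmpfs that logind
# mounts at $XDG_RUNTIME_DIR for the lifetime of the login.
if [ -n "${XDG_RUNTIME_DIR:-}" ] && [ ! -e "$HOME/.cache" ]; then
    mkdir -p "$XDG_RUNTIME_DIR/cache"
    ln -s "$XDG_RUNTIME_DIR/cache" "$HOME/.cache"
fi
```

Note the guard only creates the symlink when ~/.cache does not exist yet; an existing real directory is left alone.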

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Running ldconfig at boot

2016-05-20 Thread Lennart Poettering
On Fri, 20.05.16 14:01, Florian Weimer (fwei...@redhat.com) wrote:

> The default systemd configuration runs ldconfig at boot.  Why?

It's conditionalized via ConditionNeedsUpdate=, which means it is only
run when /etc is older than /usr. (This is tested via checking
modification times of /etc/.updated and /usr), see explanation on
systemd.unit(5).
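For illustration, a simplified sketch of a unit guarded this way (details assumed; not the actual ldconfig.service systemd ships):

```ini
# Simplified illustrative sketch, per systemd.unit(5)
[Unit]
Description=Rebuild Dynamic Linker Cache
# true when /usr's mtime is newer than the /etc/.updated stamp file
ConditionNeedsUpdate=/etc

[Service]
Type=oneshot
ExecStart=/sbin/ldconfig -X
```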

The usecase for this is to permit systems where a single /usr tree is
shared among multiple systems, and might be updated at any time, and
the changes need to be propagated to /etc on each individual
system. The keyword is "stateless systems".

Note that normally this should not be triggered at all, since this
only works on systems where /usr itself is explicitly touched after
each update so that the mtime is updated. That should normally not
happen, except when your distro is prepared for that, and does that
explicitly.

Hence, in your case, any idea how it happens that your /usr got its
mtime updated?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Running ldconfig at boot

2016-05-20 Thread Michał Zegan
From what I understand, directories such as /usr/lib are still searched
properly even with a corrupted ld.so cache; ldconfig does not affect those
directories.

On 20.05.2016 at 14:06, Vasiliy Tolstov wrote:
> 2016-05-20 15:01 GMT+03:00 Florian Weimer :
>> The default systemd configuration runs ldconfig at boot.  Why?
>>
>> Most deployments of systemd appear to be dynamically linked, so if the ld.so
>> caches are corrupted, you will never get to the point where you can run
>> ldconfig.
>>
>> Running ldconfig too early tends to cause problems because the file system
>> might not have been set up completely, and the cache does not match what the
>> system administrator has configured.
>>
>> Florian
> 
> 
> Also, sometimes this takes 22s on my server =)
> 





Re: [systemd-devel] Running ldconfig at boot

2016-05-20 Thread Vasiliy Tolstov
2016-05-20 15:01 GMT+03:00 Florian Weimer :
> The default systemd configuration runs ldconfig at boot.  Why?
>
> Most deployments of systemd appear to be dynamically linked, so if the ld.so
> caches are corrupted, you will never get to the point where you can run
> ldconfig.
>
> Running ldconfig too early tends to cause problems because the file system
> might not have been set up completely, and the cache does not match what the
> system administrator has configured.
>
> Florian


Also, sometimes this takes 22s on my server =)

-- 
Vasiliy Tolstov,
e-mail: v.tols...@yoctocloud.net


[systemd-devel] Running ldconfig at boot

2016-05-20 Thread Florian Weimer

The default systemd configuration runs ldconfig at boot.  Why?

Most deployments of systemd appear to be dynamically linked, so if the 
ld.so caches are corrupted, you will never get to the point where you 
can run ldconfig.


Running ldconfig too early tends to cause problems because the file 
system might not have been set up completely, and the cache does not 
match what the system administrator has configured.


Florian