Re: [systemd-devel] nfs-convert.service

2022-08-22 Thread Steve Dickson




On 8/22/22 9:46 AM, Steve Dickson wrote:


Thanks for the reply!

On 8/22/22 4:09 AM, Lennart Poettering wrote:

On Fr, 19.08.22 11:21, Steve Dickson (ste...@redhat.com) wrote:


Hello,

I'm trying to remove nfsconvert from Fedora but I'm
getting the following systemd error after I removed
the command and the service file.

# systemctl restart nfs-server
Failed to restart nfs-server.service: Unit nfs-convert.service not
found


This is expected if you remove the file first?

Well, I built a package that didn't include that
service file and then installed the new package.

I just assumed that if the service file didn't exist and
wasn't referenced in any other service files, it would not
be pulled in... I guess that is not the case.




There is nothing in the nfs-utils files that
has that service in it... and when I do a

systemctl list-dependencies --all | grep -1 nfs-convert

I see every nfs related service dependent on nfs-convert.service


Did you issue "systemctl daemon-reload"?

Yes and rebooted...

Back in the day there was this bz
  Make nfs-convert enabled by adding it to systemd presets [1]

I don't know what a systemd preset is, but I'm wondering
if that is why nfs-convert.service shows up as a dependency
of all the rest of the service files.

    systemctl list-dependencies --all | grep -1 nfs-convert
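
For context, a systemd preset is just a rule file under
/usr/lib/systemd/system-preset/ that "systemctl preset" consults to
decide whether a freshly installed unit should be enabled or disabled;
Fedora packages run it from their install scriptlets. A minimal sketch
(the file name is illustrative, not the actual Fedora preset):

```ini
# /usr/lib/systemd/system-preset/90-nfs.preset  (illustrative name)
# The first matching rule wins when "systemctl preset
# nfs-convert.service" is run at package-install time.
enable nfs-convert.service
```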

It turns out all that's needed is a %triggerun scriptlet to disable
the service on the way out...
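
A sketch of the scriptlet in question; the version boundary shown is a
placeholder, not the real one:

```spec
# In the nfs-utils spec file: fires when upgrading across the release
# that dropped nfs-convert.service (version below is a placeholder).
%triggerun -- nfs-utils < 2.5.1
systemctl --no-reload disable nfs-convert.service >/dev/null 2>&1 || :
```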

Sorry for the noise...

steved.


steved.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1683101




Re: [systemd-devel] nfs-convert.service

2022-08-22 Thread Steve Dickson



Thanks for the reply!

On 8/22/22 4:09 AM, Lennart Poettering wrote:

On Fr, 19.08.22 11:21, Steve Dickson (ste...@redhat.com) wrote:


Hello,

I'm trying to remove nfsconvert from Fedora but I'm
getting the following systemd error after I removed
the command and the service file.

# systemctl restart nfs-server
Failed to restart nfs-server.service: Unit nfs-convert.service not
found


This is expected if you remove the file first?

Well, I built a package that didn't include that
service file and then installed the new package.

I just assumed that if the service file didn't exist and
wasn't referenced in any other service files, it would not
be pulled in... I guess that is not the case.




There is nothing in the nfs-utils files that
has that service in it... and when I do a

systemctl list-dependencies --all | grep -1 nfs-convert

I see every nfs related service dependent on nfs-convert.service


Did you issue "systemctl daemon-reload"?

Yes and rebooted...

Back in the day there was this bz
 Make nfs-convert enabled by adding it to systemd presets [1]

I don't know what a systemd preset is, but I'm wondering
if that is why nfs-convert.service shows up as a dependency
of all the rest of the service files.

   systemctl list-dependencies --all | grep -1 nfs-convert

steved.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1683101



[systemd-devel] nfs-convert.service

2022-08-19 Thread Steve Dickson

Hello,

I'm trying to remove nfsconvert from Fedora but I'm
getting the following systemd error after I removed
the command and the service file.

# systemctl restart nfs-server
Failed to restart nfs-server.service: Unit nfs-convert.service not found

There is nothing in the nfs-utils files that
has that service in it... and when I do a

systemctl list-dependencies --all | grep -1 nfs-convert

I see every nfs related service dependent on nfs-convert.service

Was this service added to something in the systemd world?
If so, how do I remove it?

tia,

steved.



Re: [systemd-devel] systemd and chroot()

2019-06-04 Thread Steve Dickson


On 6/4/19 1:14 PM, Zbigniew Jędrzejewski-Szmek wrote:
> On Tue, Jun 04, 2019 at 12:42:35PM -0400, Steve Dickson wrote:
>> Hello,
>>
>> We are adding some new functionality to the NFS server that 
>> will make it a bit more container friendly... 
>>
>> This new functionality needs to do a chroot(2) system call. 
>> This system call is failing with EPERM due to the
>> following AVC error:
>>
>> AVC avc:  denied  { sys_chroot } for  pid=2919 comm="rpc.mountd" 
>> capability=18  scontext=system_u:system_r:nfsd_t:s0 
>> tcontext=system_u:system_r:nfsd_t:s0 tclass=capability permissive=0
> 
> It doesn't sound right to do any kind of chrooting yourself.
> Why can't you use the systemd builtins for this?
The patch set is basically adding a pseudo root to all the exports,
which should make things a bit more container friendly...
There is the thread
https://www.spinics.net/lists/linux-nfs/msg73006.html

steved.
> 
> Zbyszek
> 
>> The entry in /var/log/audit.log:
>> type=AVC msg=audit(1559659652.217:250): avc:  denied  { sys_chroot } for  
>> pid=2412 comm="rpc.mountd" capability=18  
>> scontext=system_u:system_r:nfsd_t:s0 tcontext=system_u:system_r:nfsd_t:s0 
>> tclass=capability permissive=0
>>
>> It definitely is something with systemd, since I can
>> start the daemon by hand... 
>>
>> It was suggested I make the following change to the service unit
>> # diff -u nfs-mountd.service.orig nfs-mountd.service
>> --- nfs-mountd.service.orig  2019-06-04 10:38:57.0 -0400
>> +++ nfs-mountd.service   2019-06-04 12:29:34.339621802 -0400
>> @@ -11,3 +11,4 @@
>>  [Service]
>>  Type=forking
>>  ExecStart=/usr/sbin/rpc.mountd
>> +AmbientCapabilities=CAP_SYS_CHROOT
>>
>> which did not work. 
>>
>> Any ideas on how to tell systemd it's OK for a daemon
>> to do a chroot(2) system call?
>>
>> tia,
>>
>> steved.
>> ___
>> systemd-devel mailing list
>> systemd-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] systemd and chroot()

2019-06-04 Thread Steve Dickson


On 6/4/19 12:45 PM, Matthew Garrett wrote:
> On Tue, Jun 4, 2019 at 9:42 AM Steve Dickson  wrote:
>> AVC avc:  denied  { sys_chroot } for  pid=2919 comm="rpc.mountd" 
>> capability=18  scontext=system_u:system_r:nfsd_t:s0 
>> tcontext=system_u:system_r:nfsd_t:s0 tclass=capability permissive=0
> 
> This is an SELinux policy violation, nothing to do with systemd.
Yeah... that's what I originally thought it was but when
it was suggested to set  AmbientCapabilities=CAP_SYS_CHROOT
in the service unit I figured I would run it by you guys..

> You're probably not seeing it when you run the daemon by hand because
> the SELinux policy doesn't specify a transition in that case, so the
> daemon doesn't end up running in the confined context.
> 
Makes sense... thanks!

steved.
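
For the archive: if a denial like this ever does need to be permitted
rather than avoided, the usual SELinux route is a local policy module
generated from the audit log. A sketch using the standard tools (the
module name is arbitrary):

```
# Pull the recent AVC denials for rpc.mountd out of the audit log
# and generate a local policy module from them.
ausearch -m avc -ts recent -c rpc.mountd | audit2allow -M local-mountd

# Review the generated local-mountd.te, then install the compiled module.
semodule -i local-mountd.pp
```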

[systemd-devel] systemd and chroot()

2019-06-04 Thread Steve Dickson
Hello,

We are adding some new functionality to the NFS server that 
will make it a bit more container friendly... 

This new functionality needs to do a chroot(2) system call. 
This system call is failing with EPERM due to the
following AVC error:

AVC avc:  denied  { sys_chroot } for  pid=2919 comm="rpc.mountd" capability=18  
scontext=system_u:system_r:nfsd_t:s0 tcontext=system_u:system_r:nfsd_t:s0 
tclass=capability permissive=0

The entry in /var/log/audit.log:
type=AVC msg=audit(1559659652.217:250): avc:  denied  { sys_chroot } for  
pid=2412 comm="rpc.mountd" capability=18  scontext=system_u:system_r:nfsd_t:s0 
tcontext=system_u:system_r:nfsd_t:s0 tclass=capability permissive=0

It definitely is something with systemd, since I can
start the daemon by hand... 

It was suggested I make the following change to the service unit
# diff -u nfs-mountd.service.orig nfs-mountd.service
--- nfs-mountd.service.orig 2019-06-04 10:38:57.0 -0400
+++ nfs-mountd.service  2019-06-04 12:29:34.339621802 -0400
@@ -11,3 +11,4 @@
 [Service]
 Type=forking
 ExecStart=/usr/sbin/rpc.mountd
+AmbientCapabilities=CAP_SYS_CHROOT

which did not work. 

Any ideas on how to tell systemd it's OK for a daemon
to do a chroot(2) system call?

tia,

steved.

Re: [systemd-devel] Question on Before=

2019-02-02 Thread Steve Dickson


On 2/2/19 3:44 PM, Reindl Harald wrote:
> 
> 
> Am 02.02.19 um 21:05 schrieb Steve Dickson:
>> On 2/2/19 2:52 PM, Reindl Harald wrote:
>>> Am 02.02.19 um 20:42 schrieb Steve Dickson:
>>>> Hello,
>>>>
>>>> In a.service  I have 
>>>>
>>>> [Unit]
>>>> Before=b.service 
>>>>
>>>> [Install]
>>>> RequiredBy=b.service
>>>>
>>>> when I systemd start b.service (which happens to fail) 
>>>> but... a.service is not being run.
>>>>
>>>> So I guess my question is what do I have to do
>>>> to ensure a.service is *always* run before b.service?
>>>
>>>> [Install]
>>>> RequiredBy=b.service
>>>
>>> why?
>>>
>>> [Unit]
>>> Before/After/Require
>>>
>>> [Install]
>>> WantedBy=multi-user.target
>> Because a.service only needs to run when b.service is started. 
> 
> this is a circular dependency!
> 
> you say "start a before b" and at the same time "start a only when b is
> started"
> 
> how do you imagine that working?
> 
That a would start before b because of the Before= in a.

There was an issue as how I was enabling a... 

thanks for the help!

steved.


Re: [systemd-devel] Question on Before=

2019-02-02 Thread Steve Dickson


On 2/2/19 4:03 PM, Tomasz Torcz wrote:
> On Sat, Feb 02, 2019 at 03:03:22PM -0500, Steve Dickson wrote:
>>
>>
>> On 2/2/19 2:48 PM, Tomasz Torcz wrote:
>>> On Sat, Feb 02, 2019 at 02:42:15PM -0500, Steve Dickson wrote:
>>>> Hello,
>>>>
>>>> In a.service  I have 
>>>>
>>>> [Unit]
>>>> Before=b.service 
>>>>
>>>> [Install]
>>>> RequiredBy=b.service
>>>>
>>>> when I systemd start b.service (which happens to fail) 
>>>> but... a.service is not being run.
>>>>
>>>> So I guess my question is what do I have to do
>>>> to ensure a.service is *always* run before b.service?
>>>
>>>   Have you enabled a.service?
>>>
>> No... I did not think I had to... I figured 
>> when b.service was started, a.service would be 
>> run regardless of being enabled or disabled.
>>
>> Is that not the case?
> 
>   Not really.  It would work if you had in b.service a line like
> Requires=a.service (*).
>   But apparently you do not want to modify b.service, so you
> put RequiredBy= in a.service's [Install] section. Directives in
> the [Install] section require "systemctl enable" to create the
> symlinks before they take effect. After enabling, it works
> identically to (*).
> 
>   Nb. most services have RequiredBy=multi-user.target (or WantedBy=). For
> such services, enabling means they will start at boot (because
> multi-user.target is part of the boot process).  But there is no
> requirement that services be Wanted/Required by boot-related
> services and targets.
>   Thus, you often find in tutorials the assertion that
> "systemctl enable" equals "start during boot". This is not true.
It turns out I had a bug in my spec file logic which should
have enabled the service... 

Thanks for the help!

steved.

> 
> 


Re: [systemd-devel] Question on Before=

2019-02-02 Thread Steve Dickson


On 2/2/19 4:07 PM, Uoti Urpala wrote:
> On Sat, 2019-02-02 at 15:03 -0500, Steve Dickson wrote:
>>>   Have you enabled a.service?
>>>
>> No... I did not think I had to... I figured 
>> when b.service was started, a.service would be 
>> run regardless of being enabled or disabled.
>>
>> Is that not the case?
> 
> So you just have the file for a.service lying somewhere on disk, but
> haven't enabled it and no other unit references it? 
That is true... 

> That won't do anything - systemd does not read through all files on disk to 
> see if
> there'd be something inside the file which declares that it should
> actually be started. Units need to have something else referencing them
> for systemd to "see" them at all. "enable" does this by creating a link
> from the units/targets referenced in the [Install] section to the file
> in question (by creating a symlink in 
> /etc/systemd/system/multi-user.target.wants/ for example).
Basically enabling the service... fair enough... 

steved.
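
Summing up the thread, a sketch of the working setup with the same unit
names as in the discussion:

```ini
# a.service
[Unit]
Before=b.service

[Install]
RequiredBy=b.service
```

Running "systemctl enable a.service" then creates the symlink
/etc/systemd/system/b.service.requires/a.service, which is what makes
b.service pull a.service in; without that enable step the [Install]
section is inert.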



Re: [systemd-devel] Question on Before=

2019-02-02 Thread Steve Dickson


On 2/2/19 2:52 PM, Reindl Harald wrote:
> 
> 
> Am 02.02.19 um 20:42 schrieb Steve Dickson:
>> Hello,
>>
>> In a.service  I have 
>>
>> [Unit]
>> Before=b.service 
>>
>> [Install]
>> RequiredBy=b.service
>>
>> when I systemd start b.service (which happens to fail) 
>> but... a.service is not being run.
>>
>> So I guess my question is what do I have to do
>> to ensure a.service is *always* run before b.service?
> 
>> [Install]
>> RequiredBy=b.service
> 
> why?
> 
> [Unit]
> Before/After/Require
> 
> [Install]
> WantedBy=multi-user.target
Because a.service only needs to run when b.service is started. 

steved.



Re: [systemd-devel] Question on Before=

2019-02-02 Thread Steve Dickson


On 2/2/19 2:48 PM, Tomasz Torcz wrote:
> On Sat, Feb 02, 2019 at 02:42:15PM -0500, Steve Dickson wrote:
>> Hello,
>>
>> In a.service  I have 
>>
>> [Unit]
>> Before=b.service 
>>
>> [Install]
>> RequiredBy=b.service
>>
>> when I systemd start b.service (which happens to fail) 
>> but... a.service is not being run.
>>
>> So I guess my question is what do I have to do
>> to ensure a.service is *always* run before b.service?
> 
>   Have you enabled a.service?
> 
No... I did not think I had to... I figured 
when b.service was started, a.service would be 
run regardless of being enabled or disabled.

Is that not the case?

steved.


[systemd-devel] Question on Before=

2019-02-02 Thread Steve Dickson
Hello,

In a.service  I have 

[Unit]
Before=b.service 

[Install]
RequiredBy=b.service

when I systemd start b.service (which happens to fail) 
but... a.service is not being run.

So I guess my question is what do I have to do
to ensure a.service is *always* run before b.service?

tia,

steved. 


Re: [systemd-devel] [PATCH][V2] rpcbind.service: Not pulling the rpcbind.target

2017-12-15 Thread Steve Dickson


On 12/15/2017 09:52 AM, Lennart Poettering wrote:
> On Fr, 15.12.17 08:00, Steve Dickson (ste...@redhat.com) wrote:
> 
>> According to systemd.special(7) manpage:
>>
>> rpcbind.target
>> The portmapper/rpcbind pulls in this target and orders itself
>> before it, to indicate its availability. systemd automatically
>> adds dependencies of type After= for this target unit to
>> all SysV init script service units with an LSB header
>> referring to the "$portmap" facility.
>>
>> Signed-off-by: Steve Dickson <ste...@redhat.com>
>> Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1431574
>> ---
>>  systemd/rpcbind.service.in | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/systemd/rpcbind.service.in b/systemd/rpcbind.service.in
>> index f8cfa9f..9dbc82c 100644
>> --- a/systemd/rpcbind.service.in
>> +++ b/systemd/rpcbind.service.in
>> @@ -6,8 +6,8 @@ RequiresMountsFor=@statedir@
>>  
>>  # Make sure we use the IP addresses listed for
>>  # rpcbind.socket, no matter how this unit is started.
>> -Wants=rpcbind.socket
>> -After=rpcbind.socket
>> +Requires=rpcbind.socket
>> +Before=rpcbind.target
> 
> You should still pull in rpcbind.target as the man page
> says. i.e. "Wants=rpcbind.target" really should be there.
Duly noted... and changed!

thanks!

steved.
> 
> Lennart
> 
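
For reference, the resulting unit fragment with that suggestion folded
in would look like this (a sketch; the file actually shipped may
differ):

```ini
# systemd/rpcbind.service.in (sketch)
[Unit]
# Requires= fails the service if the socket cannot be set up;
# Wants=/Before= pull in and order against rpcbind.target as
# described in systemd.special(7).
Requires=rpcbind.socket
Wants=rpcbind.target
Before=rpcbind.target
```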


[systemd-devel] [PATCH][V2] rpcbind.service: Not pulling the rpcbind.target

2017-12-15 Thread Steve Dickson
According to systemd.special(7) manpage:

rpcbind.target
The portmapper/rpcbind pulls in this target and orders itself
before it, to indicate its availability. systemd automatically
adds dependencies of type After= for this target unit to
all SysV init script service units with an LSB header
referring to the "$portmap" facility.

Signed-off-by: Steve Dickson <ste...@redhat.com>
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1431574
---
 systemd/rpcbind.service.in | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/systemd/rpcbind.service.in b/systemd/rpcbind.service.in
index f8cfa9f..9dbc82c 100644
--- a/systemd/rpcbind.service.in
+++ b/systemd/rpcbind.service.in
@@ -6,8 +6,8 @@ RequiresMountsFor=@statedir@
 
 # Make sure we use the IP addresses listed for
 # rpcbind.socket, no matter how this unit is started.
-Wants=rpcbind.socket
-After=rpcbind.socket
+Requires=rpcbind.socket
+Before=rpcbind.target
 
 [Service]
 Type=notify
-- 
2.14.3



Re: [systemd-devel] [PATCH] rpcbind.service: Not pulling the rpcbind.target

2017-12-14 Thread Steve Dickson


On 12/14/2017 01:47 PM, Uoti Urpala wrote:
> On Thu, 2017-12-14 at 13:24 -0500, Steve Dickson wrote:
>>
>> On 12/14/2017 12:48 PM, Uoti Urpala wrote:
>>> On Thu, 2017-12-14 at 12:05 -0500, Steve Dickson wrote:
>>>> +Wants=rpcbind.socket rpcbind.target
>>>> +After=rpcbind.socket rpcbind.target
>>>
>>> Is this needed when the service has socket activation support? If the
>>> only interaction with it is through the socket, it shouldn't matter
>>> even if the service is not actually up yet - clients can already open
>>> connections to the socket regardless.
>>
>> Well things are working as is... but this man page paragraph 
>> was pointed out to me so I thought these Wants and After were needed.
>>
>> So you're saying this patch is not needed?
> 
> I'm not familiar enough with rpcbind stuff to say with certainty that
> it wouldn't be needed, but at least it seems plausible to me that it
> would not be. The mechanism described on the man page is a way to
> implement ordering if needed, but if the early availability of the
> socket means ordering is never an issue, then it can be ignored.
> 
> 
>>> And regardless, that "After" for rpcbind.target seems backwards.
>>> Shouldn't it be "Before", so that the target being up signals that the
>>> service has already been started?
>>
>> I think this makes sense... So if the patch is needed I'll add
>> Before=rpcbind.target and remove the target from the After=
> 
> Yes.
> 
>>> Not directly related, but if that comment is accurate and the socket
>>> should be used "no matter what", perhaps that should be "Requires"
>>> instead of "Wants" so that if the socket could not be opened for some
>>> reason, the service fails instead of starting without socket
>>> activation?
>>>
>>
>> I was afraid of opening a can of worms here... :-) 
>>
>> So you are saying Wants and After should be changed to 
>>
>> Requires=rpcbind.socket
>> Before=rpcbind.target
> 
> Depends on the exact semantics you want. "Wants" means that systemd
> will try to start the socket if the service is started, but will
> continue with the service start even if the dependency fails.
> "Requires" guarantees that the service will never be started without
> the socket active - if opening the socket fails, then the service start
> will return failure too. If you know that the socket unit should always
> be used, or the service will either fail or do the wrong thing without
> it (such as open a socket with parameters different from what was
> configured for the socket unit, and which the admin didn't expect) then
>  Requires may be more appropriate.
I think I agree with you... thanks!

steved.

> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 


Re: [systemd-devel] [PATCH] rpcbind.service: Not pulling the rpcbind.target

2017-12-14 Thread Steve Dickson


On 12/14/2017 12:48 PM, Uoti Urpala wrote:
> On Thu, 2017-12-14 at 12:05 -0500, Steve Dickson wrote:
>> According to systemd.special(7) manpage:
>>
>> rpcbind.target
>> The portmapper/rpcbind pulls in this target and orders itself
>> before it, to indicate its availability. systemd automatically adds
>> dependencies of type After= for this target unit to all SysV init
>> script service units with an LSB header referring to the "$portmap"
>> facility.
> 
> 
>> diff --git a/systemd/rpcbind.service.in b/systemd/rpcbind.service.in
>> index f8cfa9f..2b49c24 100644
>> --- a/systemd/rpcbind.service.in
>> +++ b/systemd/rpcbind.service.in
>> @@ -6,8 +6,8 @@ RequiresMountsFor=@statedir@
>>  
>>  # Make sure we use the IP addresses listed for
>>  # rpcbind.socket, no matter how this unit is started.
>> -Wants=rpcbind.socket
>> -After=rpcbind.socket
>> +Wants=rpcbind.socket rpcbind.target
>> +After=rpcbind.socket rpcbind.target
> 
> Is this needed when the service has socket activation support? If the
> only interaction with it is through the socket, it shouldn't matter
> even if the service is not actually up yet - clients can already open
> connections to the socket regardless.
Well things are working as is... but this man page paragraph 
was pointed out to me so I thought these Wants and After were needed.

So you're saying this patch is not needed? 
 
> 
> And regardless, that "After" for rpcbind.target seems backwards.
> Shouldn't it be "Before", so that the target being up signals that the
> service has already been started?
I think this makes sense... So if the patch is needed I'll add
Before=rpcbind.target and remove the target from the After=

> 
> Not directly related, but if that comment is accurate and the socket
> should be used "no matter what", perhaps that should be "Requires"
> instead of "Wants" so that if the socket could not be opened for some
> reason, the service fails instead of starting without socket
> activation?
> 
I was afraid of opening a can of worms here... :-) 

So you are saying Wants and After should be changed to 

Requires=rpcbind.socket
Before=rpcbind.target

steved


[systemd-devel] [PATCH] rpcbind.service: Not pulling the rpcbind.target

2017-12-14 Thread Steve Dickson
According to systemd.special(7) manpage:

rpcbind.target
The portmapper/rpcbind pulls in this target and orders itself before it, to 
indicate its availability. systemd automatically adds dependencies of type 
After= for this target unit to all SysV init script service units with an LSB 
header referring to the "$portmap" facility.

Signed-off-by: Steve Dickson <ste...@redhat.com>
---
 systemd/rpcbind.service.in | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/systemd/rpcbind.service.in b/systemd/rpcbind.service.in
index f8cfa9f..2b49c24 100644
--- a/systemd/rpcbind.service.in
+++ b/systemd/rpcbind.service.in
@@ -6,8 +6,8 @@ RequiresMountsFor=@statedir@
 
 # Make sure we use the IP addresses listed for
 # rpcbind.socket, no matter how this unit is started.
-Wants=rpcbind.socket
-After=rpcbind.socket
+Wants=rpcbind.socket rpcbind.target
+After=rpcbind.socket rpcbind.target
 
 [Service]
 Type=notify
-- 
2.14.3



Re: [systemd-devel] [PATCH] nfs.man: document incompatibility between "bg" option and systemd.

2017-07-10 Thread Steve Dickson
Hey Neil,

On 07/04/2017 06:20 PM, NeilBrown wrote:
> On Tue, May 30 2017, NeilBrown wrote:
> 
>> Systemd does not, and will not, support "bg" correctly.
>> It has other, better, ways to handle "background" mounting.
> 
> For those who aren't closely watching systemd development, a
> patch was recently accepted which causes systemd to work correctly with
> NFS bg mounts.  So the above "and will not" was, happily, not correct.
Could you please post a pointer to the thread?

tia,

steved.


Re: [systemd-devel] [PATCH] nfs.man: document incompatibility between "bg" option and systemd.

2017-06-08 Thread Steve Dickson
On 06/07/2017 12:08 PM, J. Bruce Fields wrote:
> On Wed, Jun 07, 2017 at 06:04:12AM -0400, Steve Dickson wrote:
>> # ps ax | grep mount
>>   980 ?Ss 0:00 /sbin/mount.nfs nfssrv:/home/tmp /mnt/tmp -o rw,bg
> 
> Right, but I think we also need to see a "systemctl status
> remote-fs.target", or something, to verify whether that's the forked
> background process or just the foreground process that's still hanging
> up some part of the boot process (even though it's gotten far enough
> along that you can log in--unless logins aren't permitted till remote
> fs's are mounted, I don't know.)
It succeeds... 

# systemctl status remote-fs.target 
* remote-fs.target - Remote File Systems
   Loaded: loaded (/usr/lib/systemd/system/remote-fs.target; enabled; vendor 
preset: enabled)
   Active: active since Tue 2017-06-06 12:36:51 EDT; 12min ago
 Docs: man:systemd.special(7)

Jun 06 12:36:51 f26 systemd[1]: Reached target Remote File Systems.

The reason being, as Neil pointed out, mount.nfs gets the
ECONNREFUSED right away because the server is down. So a
child is quickly forked that continues to retry the mount...
basically sneaking around systemd's back... which is hard
to do these days 8-)

steved.
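
For completeness, the systemd-native replacement for "bg" is an fstab
automount, which defers the mount to first access and does not block
boot on failure; a sketch with placeholder server and paths:

```
# /etc/fstab (placeholder server and mount point)
# nofail: boot proceeds even if the mount fails;
# x-systemd.automount: mount on first access instead of at boot.
nfssrv:/home/tmp  /mnt/tmp  nfs  nofail,x-systemd.automount,x-systemd.mount-timeout=30  0  0
```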


Re: [systemd-devel] [PATCH] nfs.man: document incompatibility between "bg" option and systemd.

2017-06-08 Thread Steve Dickson


On 06/08/2017 01:16 AM, NeilBrown wrote:
> On Wed, Jun 07 2017, Steve Dickson wrote:
> 
>> On 06/07/2017 08:02 AM, Lennart Poettering wrote:
>>> On Wed, 07.06.17 06:08, Steve Dickson (ste...@redhat.com) wrote:
>>>
>>>>
>>>>
>>>> On 06/06/2017 05:49 PM, NeilBrown wrote:
>>>>> On Tue, Jun 06 2017, Steve Dickson wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> On 05/29/2017 06:19 PM, NeilBrown wrote:
>>>>>>>
>>>>>>> Systemd does not, and will not, support "bg" correctly.
>>>>>>> It has other, better, ways to handle "background" mounting.
>>>>>> The only problem with this is bg mounts still work at least
>>>>>> up to 4.11 kernel... 
>>>>>
>>>>> I don't think this is correct.
>>>>> In the default configuration, "mount -t nfs -o bg "
> runs for longer than 90 seconds, so systemd kills it.
>>>> I must be missing something... it seems to be working for me
>>>>
>>>> # mount -vvv -o bg rhel7srv:/home/tmp /mnt/tmp
>>>> mount.nfs: trying text-based options 
>>>> 'bg,vers=4.1,addr=172.31.1.60,clientaddr=172.31.1.58'
>>>> mount.nfs: mount(2): Connection refused
>>>> mount.nfs: trying text-based options 'bg,addr=172.31.1.60'
>>>> mount.nfs: prog 13, trying vers=3, prot=6
>>>> mount.nfs: trying 172.31.1.60 prog 13 vers 3 prot TCP port 2049
>>>> mount.nfs: portmap query failed: RPC: Remote system error - Connection 
>>>> refused
>>>> mount.nfs: backgrounding "rhel7srv:/home/tmp"
>>>> mount.nfs: mount options: "rw,bg"
>>>
>>> We are talking about mounts done through /etc/fstab, i.e. the ones
>>> systemd actually manages.
>> I guess I was not clear... After adding a bg mount to fstab and
>> rebooting, with the server down, there is a mount running in the
>> background that looks like:
>>
>> # ps ax | grep mount
>>  1104 ?Ss 0:00 /sbin/mount.nfs nfssrv:/home/tmp /mnt/tmp -o rw,bg
>>
>> Looking at the remote-fs.target status:
>>
>> # systemctl status remote-fs.target 
>> * remote-fs.target - Remote File Systems
>>Loaded: loaded (/usr/lib/systemd/system/remote-fs.target; enabled; vendor 
>> preset: enabled)
>>Active: active since Tue 2017-06-06 12:36:51 EDT; 12min ago
>>  Docs: man:systemd.special(7)
>>
>> Jun 06 12:36:51 f26.boston.devel.redhat.com systemd[1]: Reached target 
>> Remote File Systems.
>>
>> It appears to be successful 
> 
> Strange ... not for me.
> 
> I have a recent compiled-from-source upstream systemd and a very recent
> upstream kernel.
> 
> I add a line to /etc/fstab
> 
>  192.168.1.111:/nowhere /mnt nfs bg 0 0
> 
> and reboot (192.168.1.111 is on a different subnet to the VM I am
> testing in, but no host responds to the address).
> 
> There is a 1m30s wait while trying to mount /mnt, then boot completes
> (I was wrong when I said that it didn't).
> 
> ● mnt.mount - /mnt
>Loaded: loaded (/etc/fstab; generated; vendor preset: enabled)
>Active: failed (Result: timeout) since Thu 2017-06-08 11:13:52 AEST; 1min 
> 24s ago
> Where: /mnt
>  What: 192.168.1.111:/nowhere
>  Docs: man:fstab(5)
>man:systemd-fstab-generator(8)
>   Process: 333 ExecMount=/bin/mount 192.168.1.111:/nowhere /mnt -t nfs -o bg 
> (code=killed, signal=TERM)
> 
> Jun 08 11:12:22 debian systemd[1]: Mounting /mnt...
> Jun 08 11:13:52 debian systemd[1]: mnt.mount: Mounting timed out. Stopping.
> Jun 08 11:13:52 debian systemd[1]: mnt.mount: Mount process exited, 
> code=killed status=15
> Jun 08 11:13:52 debian systemd[1]: Failed to mount /mnt.
> Jun 08 11:13:52 debian systemd[1]: mnt.mount: Unit entered failed state.
> 
> 
> The /bin/mount process has been killed (SIGTERM) after the 90 second
> timeout
> 
> ● remote-fs.target - Remote File Systems
>Loaded: loaded (/lib/systemd/system/remote-fs.target; enabled; vendor 
> preset: enabled)
>   Drop-In: /run/systemd/generator/remote-fs.target.d
>└─50-insserv.conf.conf
>Active: inactive (dead)
>  Docs: man:systemd.special(7)
> 
> Jun 08 11:13:52 debian systemd[1]: Dependency failed for Remote File Systems.
> Jun 08 11:13:52 debian systemd[1]: remote-fs.target: Job 
> remote-fs.target/start failed with result 'dependency'.
> 
> 
> remote-fs.target has not been reached.
I'm seeing this now... the server has to 

Re: [systemd-devel] [PATCH] nfs.man: document incompatibility between "bg" option and systemd.

2017-06-07 Thread Steve Dickson


On 06/07/2017 08:02 AM, Lennart Poettering wrote:
> On Wed, 07.06.17 06:08, Steve Dickson (ste...@redhat.com) wrote:
> 
>>
>>
>> On 06/06/2017 05:49 PM, NeilBrown wrote:
>>> On Tue, Jun 06 2017, Steve Dickson wrote:
>>>
>>>> Hello,
>>>>
>>>> On 05/29/2017 06:19 PM, NeilBrown wrote:
>>>>>
>>>>> Systemd does not, and will not, support "bg" correctly.
>>>>> It has other, better, ways to handle "background" mounting.
>>>> The only problem with this is bg mounts still work at least
>>>> up to 4.11 kernel... 
>>>
>>> I don't think this is correct.
>>> In the default configuration, "mount -t nfs -o bg "
>>> runs for longer than 90 seconds, so systemd kills it.
>> I must be missing something... it seems to be working for me
>>
>> # mount -vvv -o bg rhel7srv:/home/tmp /mnt/tmp
>> mount.nfs: trying text-based options 
>> 'bg,vers=4.1,addr=172.31.1.60,clientaddr=172.31.1.58'
>> mount.nfs: mount(2): Connection refused
>> mount.nfs: trying text-based options 'bg,addr=172.31.1.60'
>> mount.nfs: prog 13, trying vers=3, prot=6
>> mount.nfs: trying 172.31.1.60 prog 13 vers 3 prot TCP port 2049
>> mount.nfs: portmap query failed: RPC: Remote system error - Connection 
>> refused
>> mount.nfs: backgrounding "rhel7srv:/home/tmp"
>> mount.nfs: mount options: "rw,bg"
> 
> We are talking about mounts done through /etc/fstab, i.e. the ones
> systemd actually manages.
I guess I was not clear... After adding a bg mount to fstab and
rebooting, with a server that is not up, there is a mount in the
background that looks like 

# ps ax | grep mount
 1104 ?Ss 0:00 /sbin/mount.nfs nfssrv:/home/tmp /mnt/tmp -o rw,bg

Looking at the remote-fs.target status:

# systemctl status remote-fs.target 
* remote-fs.target - Remote File Systems
   Loaded: loaded (/usr/lib/systemd/system/remote-fs.target; enabled; vendor 
preset: enabled)
   Active: active since Tue 2017-06-06 12:36:51 EDT; 12min ago
 Docs: man:systemd.special(7)

Jun 06 12:36:51 f26.boston.devel.redhat.com systemd[1]: Reached target Remote 
File Systems.

It appears to be successful 

steved.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] nfs.man: document incompatibility between "bg" option and systemd.

2017-06-07 Thread Steve Dickson


On 06/06/2017 05:49 PM, NeilBrown wrote:
> On Tue, Jun 06 2017, Steve Dickson wrote:
> 
>> Hello,
>>
>> On 05/29/2017 06:19 PM, NeilBrown wrote:
>>>
>>> Systemd does not, and will not, support "bg" correctly.
>>> It has other, better, ways to handle "background" mounting.
>> The only problem with this is bg mounts still work at least
>> up to 4.11 kernel... 
> 
> I don't think this is correct.
> In the default configuration, "mount -t nfs -o bg "
> runs for longer than 90 seconds, so systemd kills it.
I must be missing something... it seems to be working for me

# mount -vvv -o bg rhel7srv:/home/tmp /mnt/tmp
mount.nfs: trying text-based options 
'bg,vers=4.1,addr=172.31.1.60,clientaddr=172.31.1.58'
mount.nfs: mount(2): Connection refused
mount.nfs: trying text-based options 'bg,addr=172.31.1.60'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: trying 172.31.1.60 prog 13 vers 3 prot TCP port 2049
mount.nfs: portmap query failed: RPC: Remote system error - Connection refused
mount.nfs: backgrounding "rhel7srv:/home/tmp"
mount.nfs: mount options: "rw,bg"

# ps ax | grep mount.nfs
 2270 ?Ss 0:00 /sbin/mount.nfs rhel7srv:/home/tmp /mnt/tmp -v -o 
rw,bg

> 
> A working "bg" mount would successfully mount the filesystem if the
> server came back after 5 minutes. If you use current systemd in the
> default configuration, it won't.
When I add a bg mount to fstab again... it works just fine. This
is with the latest upstream nfs-utils. 

> 
> bg mounts do work sometimes, but I don't think they are reliable, and
> there seems to be no interest in changing this.
> Maybe the text should say "Systemd does not, and will not, reliably
> support "bg" mounts...".
not reliable maybe... I'm just doing very simple testing... 
> 
> 
>>
>> It appears there is a problem with a 4.12 kernel. The mount no 
>> longer errors out with ECONNREFUSED; it just hangs in the 
>> kernel trying forever... It sounds like a bug to me but 
>> maybe that change was intentional.. Anna?? Trond???
> 
> Might be related to my patch
>   [PATCH] SUNRPC: ensure correct error is reported by xs_tcp_setup_socket()
> 
> sent 25th May.
I'll take a look.. thanks!

> 
>>
>> So I'm a bit hesitant to commit this since not accurate, yet.
>>
>> Finally, the whole idea of systemd randomly/silently 
>> stripping off mount options is crazy... IMHO... 
> 
> It isn't exactly systemd, it is systemd-fstab-generator.
> The options lists in /etc/fstab are not all equal.  Some
> are relevant to /bin/mount, some to mount.nfs, some to the kernel.
> I think /bin/mount processes the option lists before passing it
> to mount.nfs.  There is no intrinsic reason that systemd-fstab-generator
> shouldn't do the same thing.
OK. 
> 
>>
>> Just because a concept that has been around for years
>> does not fit well in the systemd world it gets
>> ripped out??? IDK... but I think we can do better than that.
> 
> I could suggest that automount is uniformly better than bg.  Given how
> long automount has been around, and how easy it is to use with systemd,
> it might be time to start encouraging people to stop using the inferior
> technology.
Yes... bg mounts are a poor man's automount... 
> 
> I could say that, but I'm not 100% sure.  The difference between
> automount and bg is that with bg it is easy to see if the mount has
> succeeded yet - just look for an empty directory.  With automount,
> you'll typically get a delay at that point.  We could possibly wind down
> that delay...
> 
>>
>> Note, the 'bg' option is used by clients that do not want their
>> booting hung by servers that are down, so if the
>> option is ripped out, boots will start to hang. This
>> will make it very difficult to debug since the bg
>> will still exist in fstab.
> 
> Not exactly.
> Current default behaviour is that systemd will wait 90 seconds, then
> kill the mount program and fail the boot.  If we strip out "bg", the
> same thing will happen.
Again.. I'm not seeing this 90 sec delay when I add a bg mount
to /etc/fstab.

> 
> I'm OK with the patch not being applied just yet.  I think we need to
> resolve this issue, but it isn't 100% clear to me what the best
> resolution would be.  So I'm happy to see a conversation happening and
> opinions being shared.
> I'd be particularly pleased if you could double check how "bg" is
> currently handled on some systemd-enabled system that you use.
> Does the mount program get killed like I see?  
No. After adding a bg mount to fstab and rebooting (with the server
down) I see the following mount in the background 
   1104 ?Ss 
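The automount alternative being discussed can be sketched as an /etc/fstab entry along these lines (the server name, mount point, and idle timeout are illustrative placeholders, not taken from the thread):

```
# Illustrative alternative to "bg": let systemd mount on first access.
# "nfssrv:/home/tmp" and "/mnt/tmp" are placeholders.
nfssrv:/home/tmp  /mnt/tmp  nfs  noauto,x-systemd.automount,x-systemd.idle-timeout=60,_netdev  0  0
```

With an entry like this, boot does not block on the server; the mount is attempted when the mount point is first accessed.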

Re: [systemd-devel] [PATCH] nfs.man: document incompatibility between "bg" option and systemd.

2017-06-07 Thread Steve Dickson


On 06/07/2017 04:12 AM, Lennart Poettering wrote:
> On Tue, 06.06.17 14:07, Steve Dickson (ste...@redhat.com) wrote:
> 
>> Hello,
>>
>> On 05/29/2017 06:19 PM, NeilBrown wrote:
>>>
>>> Systemd does not, and will not, support "bg" correctly.
>>> It has other, better, ways to handle "background" mounting.
>> The only problem with this is bg mounts still work at least
>> up to 4.11 kernel... 
>>
>> It appears there is a problem with a 4.12 kernel. The mount no 
>> longer errors out with ECONNREFUSED; it just hangs in the 
>> kernel trying forever... It sounds like a bug to me but 
>> maybe that change was intentional.. Anna?? Trond???
>>
>> So I'm a bit hesitant to commit this since not accurate, yet.
>>
>> Finally, the whole idea of systemd randomly/silently 
>> stripping off mount options is crazy... IMHO... 
>>
>> Just because a concept that has been around for years
>> does not fit well in the systemd world it gets
>> ripped out??? IDK... but I think we can do better than that.
> 
> Well "bg" doesn't really work on systemd systems, and this was never
> different, hence I think it's only fair to document that it is
> incompatible with systemd. 
I'm seeing it work just fine... 

/etc/fstab: nfssrv:/home/tmp /mnt/tmp nfs bg 0 0
nfssrv is down 
reboot client 
login to client 
# ps ax | grep mount
  980 ?Ss 0:00 /sbin/mount.nfs nfssrv:/home/tmp /mnt/tmp -o rw,bg


> In addition, I have the suspicion it is not
> used very widely, since I never actually got complaints about it.
Since it seems to be still working we probably would not hear
any complaints about it... 

steved.


Re: [systemd-devel] [PATCH] nfs.man: document incompatibility between "bg" option and systemd.

2017-06-07 Thread Steve Dickson


On 06/07/2017 04:13 AM, Lennart Poettering wrote:
> On Tue, 06.06.17 21:57, Michael Biebl (mbi...@gmail.com) wrote:
> 
>> 2017-06-06 20:07 GMT+02:00 Steve Dickson <ste...@redhat.com>:
>>> Finally, the whole idea of systemd randomly/silently
>>> stripping off mount options is crazy... IMHO...
>>
>> Personally, I would prefer if systemd simply logged a warning/error
>> message but would *not* strip the bg option.
> 
> What good does that do if "bg" doesn't actually work properly?
At the moment it works just fine... 

steved.

> 
> Lennart
> 


Re: [systemd-devel] [PATCH] nfs.man: document incompatibility between "bg" option and systemd.

2017-06-06 Thread Steve Dickson
Hello,

On 05/29/2017 06:19 PM, NeilBrown wrote:
> 
> Systemd does not, and will not, support "bg" correctly.
> It has other, better, ways to handle "background" mounting.
The only problem with this is bg mounts still work at least
up to 4.11 kernel... 

It appears there is a problem with a 4.12 kernel. The mount no 
longer errors out with ECONNREFUSED; it just hangs in the 
kernel trying forever... It sounds like a bug to me but 
maybe that change was intentional.. Anna?? Trond???

So I'm a bit hesitant to commit this since not accurate, yet.

Finally, the whole idea of systemd randomly/silently 
stripping off mount options is crazy... IMHO... 

Just because a concept that has been around for years
does not fit well in the systemd world it gets
ripped out??? IDK... but I think we can do better than that.

Note, the 'bg' option is used by clients that do not want their
booting hung by servers that are down, so if the
option is ripped out, boots will start to hang. This
will make it very difficult to debug since the bg
will still exist in fstab.  

Again, the whole concept of systemd messing with mounts options
is just not a good one... IMHO.. 

steved.

> 
> Explain this.
> 
> See also https://github.com/systemd/systemd/issues/6046
> 
> Signed-off-by: NeilBrown 
> ---
>  utils/mount/nfs.man | 18 +-
>  1 file changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/utils/mount/nfs.man b/utils/mount/nfs.man
> index cc6e992ed807..7e76492d454f 100644
> --- a/utils/mount/nfs.man
> +++ b/utils/mount/nfs.man
> @@ -372,6 +372,21 @@ Alternatively these issues can be addressed
>  using an automounter (refer to
>  .BR automount (8)
>  for details).
> +.IP
> +When
> +.B systemd
> +is used to mount the filesystems listed in
> +.IR /etc/fstab ,
> +the
> +.B bg
> +option is not supported, and may be stripped from the option list.
> +Similar functionality can be achieved by providing the
> +.B x-systemd.automount
> +option.  This will cause
> +.B systemd
> +to attempt to mount the filesystem when the mountpoint is first
> +accessed, rather than during system boot.  The mount still happens in
> +the "background", though in a different way.
>  .TP 1.5i
>  .BR rdirplus " / " nordirplus
>  Selects whether to use NFS v3 or v4 READDIRPLUS requests.
> @@ -1810,7 +1825,8 @@ such as security negotiation, server referrals, and 
> named attributes.
>  .BR rpc.idmapd (8),
>  .BR rpc.gssd (8),
>  .BR rpc.svcgssd (8),
> -.BR kerberos (1)
> +.BR kerberos (1),
> +.BR systemd.mount (5) .
>  .sp
>  RFC 768 for the UDP specification.
>  .br
> 


[systemd-devel] Dropping core with Systemd.

2017-05-15 Thread Steve Dickson
Hello,

I want rpcbind to drop core so I can debug 
something but systemd keeps getting in the way

systemd: rpcbind.service: Main process exited, code=killed, status=6/ABRT
audit: ANOM_ABEND auid=4294967295 uid=32 gid=32 ses=4294967295 
subj=system_u:system_r:rpcbind_t:s0 pid=2787 comm="rpcbind" 
exe="/usr/bin/rpcbind" sig=6
systemd: rpcbind.service: Unit entered failed state.
audit: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:init_t:s0 msg='unit=rpcbind comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
systemd: rpcbind.service: Failed with result 'signal'.
systemd: Starting RPC Bind...
systemd: Started RPC Bind.

How do I stop systemd from restarting rpcbind and allow
the process to drop core?

Note, this problem only happens when systemd starts rpcbind
and ypbind so I need systemd to start the processes. 

tia,

steved.


Re: [systemd-devel] nfs-server.service starts before _netdev iscsi mount completes (required)... how can I fix this?

2016-11-07 Thread Steve Dickson


On 11/04/2016 04:47 PM, Lennart Poettering wrote:
> On Fri, 04.11.16 11:12, c...@endlessnow.com (c...@endlessnow.com) wrote:
>
>>> On Thu, Nov 03, 2016 at 04:01:15PM -0700, c...@endlessnow.com wrote:
>>  >> so I'm using CentOS 7, and we're mounting a disk from our
>> iSCSI
>>  >> SAN and then we want to export that via NFS.  But on a fresh boot
>> the
>>  >> nfs-server service fails because the filesystem isn't there yet. 
>> Any
>>  >> ideas on how to fix this?
>>> Add RequiresMountsFor=/your/export/path to nfs-server.service
>> (first, apologize for the formatting using a very limted web based
>> i/f)
>>
>> I tried creating a nfs-server.service.d directory with a
>> required-mounts.conf with that line in it and it did not work. 
>> However adding the line directly to the nfs-server.service file did
>> work.  Can't we add this using a nfs-server.service.d directory and
>> conf file?
> mkdir -p /etc/systemd/system/nfs-server.service.d/
> echo "[Unit]" > /etc/systemd/system/nfs-server.service.d/50-myorder.conf
> echo "RequiresMountsFor=/foo/bar/baz" >> 
> /etc/systemd/system/nfs-server.service.d/50-myorder.conf
> systemctl daemon-reload
This happens automatically with later nfs-utils. A systemd generator is
created that reads /etc/exports and creates RequiresMountsFor=
for anything exported, and then reads /etc/fstab looking for
nfs or nfs4 types. It creates a Before= entry in the same file.

The name is order-with-mounts.conf under nfs-server.service.d

steved.
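Based on that description, the generated drop-in presumably looks something like this (a sketch only; the export path and mount unit name below are placeholders):

```
# /run/systemd/generator/nfs-server.service.d/order-with-mounts.conf (sketch)
[Unit]
# one entry per directory found in /etc/exports
RequiresMountsFor=/srv/export
# plus ordering before any nfs/nfs4 entries found in /etc/fstab, e.g.:
# Before=mnt-data.mount
```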

>
> Lennart
>



Re: [systemd-devel] rpcbind.socket failing

2016-11-01 Thread Steve Dickson


On 11/01/2016 12:14 PM, Reindl Harald wrote:
> 
> 
> Am 01.11.2016 um 17:05 schrieb Steve Dickson:
>> and I still getting the following errors
>>
>> rpcbind.socket: Failed to listen on sockets: No such file or directory
I thought this was talking about /run/rpcbind.sock since 
that's the only path defined in that file... I didn't 
even think it was talking about the path to the 
daemon since that's defined in the service file. 

>> Failed to listen on RPCbind Server Activation Socket.
>> Starting RPC Bind...
>> rpcbind.service: Failed at step EXEC spawning /usr/bin/rpcbind: No such file 
>> or directory
>> rpcbind.service: Main process exited, code=exited, status=203/EXEC
>> Failed to start RPC Bind.
Maybe the ENOENT was being carried over to the socket file??

> 
> come on "No such file or directory" is pretty clear and says your path is 
> just wrong
> 
> [root@rh:~]$ ls /usr/bin/rpcbind
> /usr/bin/ls: cannot access '/usr/bin/rpcbind': No such file or directory
I guess that is where rpcbind lives in the SUSE distro... 

> 
> [root@rh:~]$ ls /usr/sbin/rpcbind
> -rwxr-xr-x 1 root root 58K 2016-08-01 19:01 /usr/sbin/rpcbind
Thanks for the catch and the time!

steved.


Re: [systemd-devel] rpcbind.socket failing

2016-11-01 Thread Steve Dickson


On 11/01/2016 11:47 AM, Lennart Poettering wrote:
> On Tue, 01.11.16 11:11, Steve Dickson (ste...@redhat.com) wrote:
> 
>>
>>
>> On 10/31/2016 03:40 PM, Michael Biebl wrote:
>>> Why is it using /var/run (where /var could be on a separate partition)
>>> and not /run for the socket files?
>>
>> Historical reasons?? I guess that's the way it's always been
>> and never caused a problem... 
> 
> Yeah, it normally shouldn't create problems – however, you are setting
> DefaultDependencies=no now, which means you run in early boot, and
> then things become more complex, as /var is not around fully yet. 
Fair enough... /var is gone, so here are the two new socket and service files

rpcbind.socket:

[Unit]
Description=RPCbind Server Activation Socket
DefaultDependencies=no
Wants=rpcbind.target
Before=rpcbind.target

[Socket]
ListenStream=/run/rpcbind.sock

# RPC netconfig can't handle ipv6/ipv4 dual sockets
BindIPv6Only=ipv6-only
ListenStream=0.0.0.0:111
ListenDatagram=0.0.0.0:111
ListenStream=[::]:111
ListenDatagram=[::]:111

[Install]
WantedBy=sockets.target

rpcbind.service:

[Unit]
Description=RPC Bind
Documentation=man:rpcbind(8)
DefaultDependencies=no
# Make sure we use the IP addresses listed for
# rpcbind.socket, no matter how this unit is started.
Wants=rpcbind.socket
After=rpcbind.socket

[Service]
Type=notify
# distro can provide a drop-in adding EnvironmentFile=-/??? if needed.
ExecStart=/usr/bin/rpcbind $RPCBIND_OPTIONS -w -f

[Install]
WantedBy=multi-user.target

and I still getting the following errors

rpcbind.socket: Failed to listen on sockets: No such file or directory
Failed to listen on RPCbind Server Activation Socket.
Starting RPC Bind...
rpcbind.service: Failed at step EXEC spawning /usr/bin/rpcbind: No such file or 
directory
rpcbind.service: Main process exited, code=exited, status=203/EXEC
Failed to start RPC Bind.
SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:init_t:s0 msg='unit=rpcbind comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
rpcbind.service: Unit entered failed state.
rpcbind.service: Failed with result 'exit-code'.

Note, I did take out the Bind and Listen lines in the socket
file... to no avail, the same error

Is there some type of debugging I can turn on? Clearly /run/rpcbind.sock
does exist... 

steved
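For the "some type of debugging" question, a few generic commands that usually help pin down a socket-unit failure like this (unit names are taken from the thread; the unit file path may differ per distro):

```sh
# Show the most recent error and current state for both units
systemctl status rpcbind.socket rpcbind.service --no-pager -l

# Full journal for both units for the current boot
journalctl -b -u rpcbind.socket -u rpcbind.service

# Static sanity check of the unit file itself
systemd-analyze verify /usr/lib/systemd/system/rpcbind.socket
```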


Re: [systemd-devel] rpcbind.socket failing

2016-11-01 Thread Steve Dickson


On 11/01/2016 09:31 AM, Lennart Poettering wrote:
> On Mon, 31.10.16 13:19, Steve Dickson (ste...@redhat.com) wrote:
> 
>> [Unit]
>> Description=RPCbind Server Activation Socket
>> DefaultDependencies=no
>> RequiresMountsFor=/var/run /run
>> Wants=rpcbind.target
>> Before=rpcbind.target
>>
>> [Socket]
>> ListenStream=/var/run/rpcbind.sock
> 
> So you turned off the default dependencies, which means your socket
> unit will be started during earliest boot, at a time where /var is
> usually not writable yet. (You then try to fix this up via the
> RequiresMountsFor= thing, but that will only ensure that /var is
> mounted, it does not require it to actually be writable, or that the
> /var/run → /run symlink was created)
> 
> So, most importantly: please drop all references to /var/run
> here. That's just a legacy alias to /run, implemented via a
> symlink. All distributions, regardless if they adopted systemd or not,
> have switched to making /run a tmpfs and /var/run a symlink to
> it. There's really no point at all in ever referring to /var/run
> again.
Ok... 

> 
> By dropping any reference to /var/run and replacing it by a reference
> directly to /run you make the entire problem set go away: /run is
> guaranteed to always exist on systemd systems, it's always writable,
> and always a tmpfs. PID 1 mounts in in its earliest initialization
> phase, right at the same time it mounts /proc and /sys too.
> 
> Summary:
> 
> a) drop the entire RequiresMountsFor= line, it's useless. /run is
>mounted anyway, and depending on /var/run doesn't do what people
>might think, and you don't need it anway.
> 
> b) Change the ListenStream directive to reference /run/rpcbind.sock
>directly, avoid the indirection via /var/run/rpcbind.sock
The whole point of the exercise is to develop uniformity 
across distros, so I'll post an updated version and
see what is said, as well as cc'ing the original author
for comment... 

thanks!

steved.


> 
> Lennart
> 


Re: [systemd-devel] rpcbind.socket failing

2016-11-01 Thread Steve Dickson


On 10/31/2016 06:07 PM, Kai Krakow wrote:
> Am Mon, 31 Oct 2016 13:19:24 -0400
> schrieb Steve Dickson <ste...@redhat.com>:
> 
>> Upstream has come up with some new rpcbind service socket files
>> and I'm trying to incorporate them into f25.
>>
>> The rpcbind.socket is failing to come up
>>rpcbind.socket: Failed to listen on sockets: No such file or
>> directory Failed to listen on RPCbind Server Activation Socket.
>>
>> But the rpcbind.socket does exist 
>> # file /var/run/rpcbind.sock
>> /var/run/rpcbind.sock: socket
>>
>> and everything comes up when the daemon is started by hand.
> 
> I guess the problem is with the listen directives... See below.
> 
>> old rpcbind.socket file:
>>
>> [Unit]
>> Description=RPCbind Server Activation Socket
>>
>> [Socket]
>> ListenStream=/var/run/rpcbind.sock
> 
> Probably not your problem but it should really point to /run directly
> because /var may not be mounted at that time. Even if /var/run is
> usually a symlink, /var needs to be ready to resolve the symlink. But
> otoh I'm not sure if systemd even supports /var being mounted later and
> not by initramfs.
In the new rpcbind.socket there is a
RequiresMountsFor=/var/run /run so /var should
be there.

> 
>> [Install]
>> WantedBy=sockets.target
>>
>>
>> New rpcbind.socket file:
>>
>> [Unit]
>> Description=RPCbind Server Activation Socket
>> DefaultDependencies=no
>> RequiresMountsFor=/var/run /run
>> Wants=rpcbind.target
>> Before=rpcbind.target
>>
>> [Socket]
>> ListenStream=/var/run/rpcbind.sock
>>
>> # RPC netconfig can't handle ipv6/ipv4 dual sockets
>> BindIPv6Only=ipv6-only
>> ListenStream=0.0.0.0:111
>> ListenDatagram=0.0.0.0:111
>> ListenStream=[::]:111
>> ListenDatagram=[::]:111
> 
> I'm not sure, but I don't think you can combine BindIPv6Only and listen
> on IPv4 sockets at the same time. What if you remove the two ipv4
> style listen directives?
> 
> Maybe remove the complete block if you are going to use only unix
> sockets.
Well, I removed the entire block and I still got the failure...

thanks!

steved.

> 
>> [Install]
>> WantedBy=sockets.target
>>
>>
>> ListenStream is the same for both files... 
>> How do I debug something like this?
>>
>> tia,
>>
>> steved.
>>
>>
>>
> 
> 
> 


Re: [systemd-devel] rpcbind.socket failing

2016-11-01 Thread Steve Dickson


On 10/31/2016 03:40 PM, Michael Biebl wrote:
> Why is it using /var/run (where /var could be on a separate partition)
> and not /run for the socket files?
Historical reasons?? I guess that's the way it's always been
and never caused a problem... 

steved.

> 
> 2016-10-31 18:19 GMT+01:00 Steve Dickson <ste...@redhat.com>:
>> Hello,
>>
>> Upstream has come up with some new rpcbind service socket files
>> and I'm trying to incorporate them into f25.
>>
>> The rpcbind.socket is failing to come up
>>rpcbind.socket: Failed to listen on sockets: No such file or directory
>>Failed to listen on RPCbind Server Activation Socket.
>>
>> But the rpcbind.socket does exist
>> # file /var/run/rpcbind.sock
>> /var/run/rpcbind.sock: socket
>>
>> and everything comes up when the daemon is started by hand.
>>
>> old rpcbind.socket file:
>>
>> [Unit]
>> Description=RPCbind Server Activation Socket
>>
>> [Socket]
>> ListenStream=/var/run/rpcbind.sock
>>
>> [Install]
>> WantedBy=sockets.target
>>
>>
>> New rpcbind.socket file:
>>
>> [Unit]
>> Description=RPCbind Server Activation Socket
>> DefaultDependencies=no
>> RequiresMountsFor=/var/run /run
>> Wants=rpcbind.target
>> Before=rpcbind.target
>>
>> [Socket]
>> ListenStream=/var/run/rpcbind.sock
>>
>> # RPC netconfig can't handle ipv6/ipv4 dual sockets
>> BindIPv6Only=ipv6-only
>> ListenStream=0.0.0.0:111
>> ListenDatagram=0.0.0.0:111
>> ListenStream=[::]:111
>> ListenDatagram=[::]:111
>>
>> [Install]
>> WantedBy=sockets.target
>>
>>
>> ListenStream is the same for both files...
>> How do I debug something like this?
>>
>> tia,
>>
>> steved.
>>
>>
>>
> 
> 
> 


[systemd-devel] rpcbind.socket failing

2016-10-31 Thread Steve Dickson
Hello,

Upstream has come up with some new rpcbind service socket files
and I'm trying to incorporate them into f25.

The rpcbind.socket is failing to come up
   rpcbind.socket: Failed to listen on sockets: No such file or directory
   Failed to listen on RPCbind Server Activation Socket.

But the rpcbind.socket does exist 
# file /var/run/rpcbind.sock
/var/run/rpcbind.sock: socket

and everything comes up when the daemon is started by hand.

old rpcbind.socket file:

[Unit]
Description=RPCbind Server Activation Socket

[Socket]
ListenStream=/var/run/rpcbind.sock

[Install]
WantedBy=sockets.target


New rpcbind.socket file:

[Unit]
Description=RPCbind Server Activation Socket
DefaultDependencies=no
RequiresMountsFor=/var/run /run
Wants=rpcbind.target
Before=rpcbind.target

[Socket]
ListenStream=/var/run/rpcbind.sock

# RPC netconfig can't handle ipv6/ipv4 dual sockets
BindIPv6Only=ipv6-only
ListenStream=0.0.0.0:111
ListenDatagram=0.0.0.0:111
ListenStream=[::]:111
ListenDatagram=[::]:111

[Install]
WantedBy=sockets.target


ListenStream is the same for both files... 
How do I debug something like this?

tia,

steved.





[systemd-devel] rpc-gssd.service failure

2016-03-19 Thread Steve Dickson
Hello all,

When systemd starts the rpc.gssd.service, the service fails with:

* rpc-gssd.service - RPC security service for NFS client and server
   Loaded: loaded (/usr/lib/systemd/system/rpc-gssd.service; static; vendor 
preset: disabled)
   Active: failed (Result: exit-code) since Wed 2016-03-16 13:14:41 EDT; 12s ago
  Process: 1496 ExecStart=/usr/sbin/rpc.gssd $RPCGSSDARGS (code=exited, 
status=127)

Mar 16 13:14:41 fedora.home.dicksonnet.net systemd[1]: Starting RPC security 
service for NFS client and server...
Mar 16 13:14:41 fedora.home.dicksonnet.net rpc.gssd[1496]: Inconsistency 
detected by ld.so: dl-close.c: 811: _dl_close: Assertion `map->l_init_called' 
failed!
Mar 16 13:14:41 fedora.home.dicksonnet.net systemd[1]: rpc-gssd.service: 
Control process exited, code=exited status=127
Mar 16 13:14:41 fedora.home.dicksonnet.net systemd[1]: Failed to start RPC 
security service for NFS client and server.
Mar 16 13:14:41 fedora.home.dicksonnet.net systemd[1]: rpc-gssd.service: Unit 
entered failed state.
Mar 16 13:14:41 fedora.home.dicksonnet.net systemd[1]: rpc-gssd.service: Failed 
with result 'exit-code'.

When I start the daemon from the command line, the daemon comes up w/out any 
errors... 

Any idea on what is going on?

tia!

steved.


Re: [systemd-devel] Variables in the [Unit] section of a server

2016-02-02 Thread Steve Dickson


On 01/23/2016 11:33 AM, Armin K. wrote:
> On 23.01.2016 17:28, Armin K. wrote:
>>> On 01/13/2016 10:51 AM, Steve Dickson wrote:
>>>> Hello,
>>>>
>>>> Is it possible to set a variable in the [Unit]
>>>> section of a service?
>>>>
>>>> For example in rpc-gssd.service there is
>>>> ConditionPathExists=/etc/krb5.keytab
>>>>
>>>> but for some installation the krb5.keytab
>>>> is in a different place. The rpc.gssd daemon
>>>> can be told this by setting a command line
>>>> argument from the EnvironmentFile.
>>>>
>>>> So people have to edit both the EnvironmentFile
>>>> and the rpc-gssd.service to make this change. 
>>>>
>>>> So it would be nice if only the EnvironmentFile
>>>> needed to be edited and the change would happen
>>>> in both places. 
>>>>
>>>> Possible?
>>>>
>>>> steved.
>>
>> I'm sorry I'm not responding to the original mail, as I just subscribed
>> yesterday to this list.
>>
>> I am not sure I fully understand what you want, but I think this might do it:
>>
>> [Unit]
>> Description=test
>>
>> [Service]
>> Type=oneshot
>> Environment=TEST=blah
>> EnvironmentFile=-/etc/systemd/system/test-env
>> ExecStart=/etc/systemd/system/test ${TEST}
>> TimeoutSec=0
>> RemainAfterExit=yes
>>
>> The /etc/systemd/system/test is a script that prints $1, depending on what is
>> being passed to it. When env file is not present, it prints the TEST from the
>> unit file. When env file is present, and TEST is set to something else, it
>> prints its value. Is that what you are looking for?
>>
>> Cheers
>>
> 
> Ugh, sorry. You asked for the [Unit] section, not the [Service] section.
> From what I see, it doesn't yet seem to be possible:
> 
> [/etc/systemd/system/test.service:2] Unknown lvalue 'Environment' in section 
> 'Unit'
> [/etc/systemd/system/test.service:4] Path in condition not absolute, 
> ignoring: $TEST
> 
> You can always use some kind of hackery in ExecStartPre, but I don't think 
> that's
> what anyone wants.
> 
Well, it's too bad ConditionPathExists is not part of the [Service] section
so I could use a shell variable... 

steved.
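As an aside, the condition itself can be overridden without editing the shipped unit, via a drop-in: an empty assignment resets the condition list inherited from the unit file. (The keytab path below is a made-up example.)

```
# /etc/systemd/system/rpc-gssd.service.d/keytab.conf (illustrative)
[Unit]
# Clear the inherited ConditionPathExists=/etc/krb5.keytab ...
ConditionPathExists=
# ... and replace it with the site-specific location.
ConditionPathExists=/srv/krb5/krb5.keytab
```

This still means editing two files (the EnvironmentFile and the drop-in), but it survives package updates.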


Re: [systemd-devel] Variables in the [Unit] section of a server

2016-01-23 Thread Steve Dickson
Thanks for the discussion... It was very helpful! 

steved. 

On 01/13/2016 10:51 AM, Steve Dickson wrote:
> Hello,
> 
> Is it possible to set a variable in the [Unit]
> section of a service?
> 
> For example in rpc-gssd.service there is
> ConditionPathExists=/etc/krb5.keytab
> 
> but for some installation the krb5.keytab
> is in a different place. The rpc.gssd daemon
> can be told this by setting a command line
> argument from the EnvironmentFile.
> 
> So people have to edit both the EnvironmentFile
> and the rpc-gssd.service to make this change. 
> 
> So it would be nice if only the EnvironmentFile
> needed to be edited and the change would happen
> in both places. 
> 
> Possible?
> 
> steved.
> 


[systemd-devel] Variables in the [Unit] section of a server

2016-01-13 Thread Steve Dickson
Hello,

Is it possible to set a variable in the [Unit]
section of a service?

For example in rpc-gssd.service there is
ConditionPathExists=/etc/krb5.keytab

but for some installation the krb5.keytab
is in a different place. The rpc.gssd daemon
can be told this by setting a command line
argument from the EnvironmentFile.

So people have to edit both the EnvironmentFile
and the rpc-gssd.service to make this change. 

So it would be nice if only the EnvironmentFile
needed to be edited and the change would happen
in both places. 

Possible?

steved.


Re: [systemd-devel] [PATCH] gssd: Improve scalability by not waiting for child processes

2015-10-05 Thread Steve Dickson


On 10/04/2015 04:19 AM, Florian Weimer wrote:
> * Steve Dickson:
> 
>> +static void
>> +sig_child(int signal)
>> +{
>> +int err;
>> +pid_t pid;
>> +
>> +/* Parent: just wait on child to exit and return */
>> +do {
>> +pid = wait(&err);
>> +} while (pid == -1 && errno != ECHILD);
>> +
>> +if (pid > 0 && WIFSIGNALED(err))
>> +printerr(0, "WARNING: forked child was killed "
>> + "with signal %d\n", WTERMSIG(err));
>> +}
> 
> printerr calls vfprintf or vsyslog.  Neither is safe to use in signal
> handlers, so you need to log this message in some other way.
Good point... but this patch was self-NAKed anyway because it
left zombie processes during my testing.

steved.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd and kernel process

2015-10-03 Thread Steve Dickson


On 10/01/2015 03:50 PM, Kay Sievers wrote:
> On Thu, Oct 1, 2015 at 9:30 PM, Steve Dickson <ste...@redhat.com> wrote:
>>
>>
>> On 10/01/2015 09:24 AM, Kay Sievers wrote:
>>> On Wed, Sep 30, 2015 at 10:49 PM, Steve Dickson <ste...@redhat.com> wrote:
>>>> Is there a way for systemd to monitor kernel process?
>>>> By monitor I mean the existence.
>>>
>>> No, and there is no plan to do anything like that.
>>>
>>> Kernel tasks are kernel internals, and userspace must not make any
>>> assumptions about them. They can come and go at any time between any
>>> kernel versions.
>>>
>>> Custom tools can do what they need, but systemd should not offer to do
>>> that to users.
>> First of all, thanks for the response.
>>
>> The kernel process in question is nfsd. The number of threads
>> is kept in /proc/fs/nfsd/threads.
>>
>> So the idea would be: on a systemctl status nfsd,
>> if the number in /proc/fs/nfsd/threads is zero the
>> service would be inactive, and with a non-zero number the
>> service would be active.
>>
>> Is this something systemd could be used for?
> 
> No, not directly. There is no facility to watch /proc or any other
> similar interface for such changes. Plain /proc directories are just
> not capable of providing event information to userspace.
Gotcha... 

> 
> The kernel's nfs implementation would need a character device where
> events are send out, or possibly a poll()able file in /proc, or
> something in /sys/devices/, or a similar approach, where udev can
> react to. Such interface could be used to signal systemd that
> userspace should react to state changes in the kernel.
Hmm... Interesting idea... Thanks!

steved.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd and kernel process

2015-10-03 Thread Steve Dickson


On 10/03/2015 08:18 AM, Lennart Poettering wrote:
> On Wed, 30.09.15 16:49, Steve Dickson (ste...@redhat.com) wrote:
> 
>> Hello,
>>
>> Is there a way for systemd to monitor kernel process?
> 
> To add to what Kay said:
> 
> No. Kernel threads cannot really be tracked by userspace in any
> sensible manner. We won't get SIGCHLD for them, and they cannot be
> moved outside of the root cgroup. This means we will not get events
> about their lifetime, and cannot track their existence at all. 
> 
> With the current kernel this is hence technically not possible. On top
> of that though, I am not sure this would even be desirable
> semantically: babysitting kernel facilities from userspace sounds
> wrong to me, I think we want a clear hierarchy, that the kernel is
> responsible for PID 1, and then PID 1 for the rest of the
> services. But making PID1 also responsible for the kernel appears like
> the wrong way around.
It's for an HA environment, so they need to know the
"health" of the NFS server, which is a combination of
userspace daemons and kernel processes.

The userspace daemons all have systemd units, so monitoring
their health is not a problem. I was just hoping I could use 
systemd to monitor the kernel processes as well. But
I guess that's not possible. 

Thanks for the input!

steved.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd and kernel process

2015-10-01 Thread Steve Dickson


On 10/01/2015 09:24 AM, Kay Sievers wrote:
> On Wed, Sep 30, 2015 at 10:49 PM, Steve Dickson <ste...@redhat.com> wrote:
>> Is there a way for systemd to monitor kernel process?
>> By monitor I mean the existence.
> 
> No, and there is no plan to do anything like that.
> 
> Kernel tasks are kernel internals, and userspace must not make any
> assumptions about them. They can come and go at any time between any
> kernel versions.
> 
> Custom tools can do what they need, but systemd should not offer to do
> that to users.
First of all, thanks for the response.

The kernel process in question is nfsd. The number of threads 
is kept in /proc/fs/nfsd/threads. 

So the idea would be: on a systemctl status nfsd, 
if the number in /proc/fs/nfsd/threads is zero the
service would be inactive, and with a non-zero number the
service would be active. 

Is this something systemd could be used for?

tia,

steved.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] systemd and kernel process

2015-09-30 Thread Steve Dickson
Hello,

Is there a way for systemd to monitor kernel processes?
By monitor I mean check for their existence. 

Here's the story...  a systemd service calls a command
that creates a number of kernel processes/threads, 
and then the command exits. 

Is there a way for systemd to monitor those kernel processes
even though it was told nothing about them?

tia,

steved.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] [PATCH] gssd: Improve scalability by not waiting for child processes

2015-09-23 Thread Steve Dickson
Instead of waiting on every fork, which would
become a bottleneck during a mount storm, simply
set a SIGCHLD signal handler to do the wait on
the child process.

Signed-off-by: Steve Dickson <ste...@redhat.com>
---
 utils/gssd/gssd.c  | 18 ++
 utils/gssd/gssd_proc.c | 11 ++-
 2 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/utils/gssd/gssd.c b/utils/gssd/gssd.c
index e480349..8b778cb 100644
--- a/utils/gssd/gssd.c
+++ b/utils/gssd/gssd.c
@@ -44,11 +44,13 @@
 #define _GNU_SOURCE
 #endif
 
+#include 
 #include 
 #include 
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -736,6 +738,21 @@ sig_die(int signal)
printerr(1, "exiting on signal %d\n", signal);
exit(0);
 }
+static void
+sig_child(int signal)
+{
+   int err;
+   pid_t pid;
+
+   /* Parent: just wait on child to exit and return */
+   do {
+   pid = wait(&err);
+   } while(pid == -1 && errno != -ECHILD);
+
+   if (WIFSIGNALED(err))
+   printerr(0, "WARNING: forked child was killed"
+"with signal %d\n", WTERMSIG(err));
+}
 
 static void
 usage(char *progname)
@@ -902,6 +919,7 @@ main(int argc, char *argv[])
 
signal(SIGINT, sig_die);
signal(SIGTERM, sig_die);
+   signal(SIGCHLD, sig_child);
signal_set(&sighup_ev, SIGHUP, gssd_scan_cb, NULL);
signal_add(&sighup_ev, NULL);
event_set(&inotify_ev, inotify_fd, EV_READ | EV_PERSIST, 
gssd_inotify_cb, NULL);
diff --git a/utils/gssd/gssd_proc.c b/utils/gssd/gssd_proc.c
index 11168b2..8f5ca03 100644
--- a/utils/gssd/gssd_proc.c
+++ b/utils/gssd/gssd_proc.c
@@ -656,16 +656,9 @@ process_krb5_upcall(struct clnt_info *clp, uid_t uid, int fd, char *tgtname,
/* fork() failed! */
printerr(0, "WARNING: unable to fork() to handle"
"upcall: %s\n", strerror(errno));
-   return;
+   /* FALLTHROUGH */
default:
-   /* Parent: just wait on child to exit and return */
-   do {
-   pid = wait(&err);
-   } while(pid == -1 && errno != -ECHILD);
-
-   if (WIFSIGNALED(err))
-   printerr(0, "WARNING: forked child was killed"
-"with signal %d\n", WTERMSIG(err));
+   /* Parent: Return and wait for the SIGCHLD */
return;
}
 no_fork:
-- 
2.4.3

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] systemd: Have rpc-statd-notify.service Require network.target

2015-03-19 Thread Steve Dickson


On 03/03/2015 01:36 PM, Steve Dickson wrote:
 It's been reported that having the rpc-statd-notify service
 depend on network.target instead of network-online.target
 decreases boot times by as much as 10 seconds on some
 installs.
 
 Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1183293
 
 Signed-off-by: Steve Dickson ste...@redhat.com
 Reported-by: Eric Work work.e...@gmail.com
Committed...

steved.

 ---
  systemd/rpc-statd-notify.service |2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)
 
 diff --git a/systemd/rpc-statd-notify.service 
 b/systemd/rpc-statd-notify.service
 index a655445..b608a14 100644
 --- a/systemd/rpc-statd-notify.service
 +++ b/systemd/rpc-statd-notify.service
 @@ -1,6 +1,6 @@
  [Unit]
  Description=Notify NFS peers of a restart
 -Requires=network-online.target
 +Requires=network.target
  After=network.target nss-lookup.target
  
  # if we run an nfs server, it needs to be running before we
 
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH 0/2] some systemd unit changes

2015-03-19 Thread Steve Dickson


On 03/03/2015 12:28 PM, Martin Pitt wrote:
 Hello NFS developers,
 
 reposting the two patches inline as requested by Steve.
 
 I'm currently systemd-ifying our nfs-utils Ubuntu package. For testing I put
 the NFS server and client (i. e. localhost:/foo/bar mounts) on the same
 machine. With that I get long hangs during shutdown on stopping the NFS .mount
 units, as when that happens the NFS server is already shut down.
 
 This is certainly a corner case as you'd usually not NFS-mount a share from
 localhost; but fixing it is relatively simple with the first patch, which 
 makes
 sure that if NFS server and client are installed, the server starts before the
 client, and the client stops before the server.
 
 For a client without installed server this is harmless as Before= does not
 imply a dependency. Likewise, for an NFS server which does not mount shares by
 itself, it's also a no-op as remote-fs.target is empty. This would only
 slightly reorder the boot sequence for machines which both are an NFS server
 and have some remote NFS mounts, but I don't see an issue with that.
 
 The second patch makes NFS start earlier in the boot (i. e. before
 basic.target), so that you can do things like put /var/ on NFS, or have rcS
 SysV init scripts which depend on $remote_fs work. I tested this on both a
 server and a client. This is certainly a bit more intrusive, but could be
 worthwhile; what do you think?
 
 Thanks for considering,
Committed

steved.

 
 Martin
 
 ___
 systemd-devel mailing list
 systemd-devel@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/systemd-devel
 
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH 0/1] details for starting nfs-idmapd also on clients

2015-03-06 Thread Steve Dickson
Hello,

On 03/06/2015 06:15 AM, Martin Pitt wrote:
 Hello all,
 
 Steve Langasek pointed out in [1] that idmapd is also necessary on the client
 side. It isn't for my very simple NFSv4 test, but then again I don't know that
 much about the various other modes of operation.
 
 This patch starts nfs-idmapd.service on clients too.
This is distro-specific... Other distros use the 
keyring-based nfsidmap(5) command to do the id mapping.

Note that with new kernels the default upcall is to the
nfsidmap command; if that fails, then the rpc.idmapd daemon 
is tried. That means there are two upcalls for every id mapping
when rpc.idmapd is used. This was the reason for the switch.

steved.
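The keyring upcall described above is wired through the request-key mechanism. A sketch of what an id_resolver entry typically looks like — illustrative only, consult nfsidmap(5) and request-key.conf(5) for the authoritative format and options:

```conf
# /etc/request-key.d/id_resolver.conf (illustrative)
#OP      TYPE         DESCRIPTION  CALLOUT_INFO  PROGRAM + ARGS
create   id_resolver  *            *             /usr/sbin/nfsidmap %k %d
```

With this file present, the kernel hands each id-mapping request (%k is the key serial, %d its description) straight to /usr/sbin/nfsidmap instead of upcalling rpc.idmapd.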
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] systemd: Have rpc-statd-notify.service Require network.target

2015-03-06 Thread Steve Dickson


On 03/03/2015 05:51 PM, Zbigniew Jędrzejewski-Szmek wrote:
 On Tue, Mar 03, 2015 at 04:37:24PM -0500, Steve Dickson wrote:


 On 03/03/2015 02:18 PM, Zbigniew Jędrzejewski-Szmek wrote:
 On Tue, Mar 03, 2015 at 10:06:57PM +0300, Andrei Borzenkov wrote:
 Indeed. From the man page:
 -m retry-time
 Specifies the length of time, in minutes, to continue retry‐
 ing  notifications to unresponsive hosts.  If this option is
 not specified, sm-notify attempts to send notifications  for
 15  minutes.   Specifying  a  value of 0 causes sm-notify to
 continue sending notifications to unresponsive  peers  until
 it is manually killed.

 Notifications  are retried if sending fails, the remote does
 not respond, the remote's NSM service is not registered,  or
 if  there  is  a  DNS  failure  which  prevents the remote's
 mon_name from being resolved to an address.

 So rpc-statd-notify.service should be fine with being started before
 the network is up at all.
 Right... that's the point... we want the service to fork and keep trying
 in the background 
 ...so like Andrei wrote, the dependency on network.target can be removed
 (I wasn't sure if it was clear what I meant).
I did miss the point you two were making: because of the 15-minute
retry, network.target is not needed... But, in reality, a network is
needed/wanted, so leaving it in will document that fact without causing any delay.

steved.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] [PATCH] systemd: Have rpc-statd-notify.service Require network.target

2015-03-03 Thread Steve Dickson
It's been reported that having the rpc-statd-notify service
depend on network.target instead of network-online.target
decreases boot times by as much as 10 seconds on some
installs.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1183293

Signed-off-by: Steve Dickson ste...@redhat.com
Reported-by: Eric Work work.e...@gmail.com
---
 systemd/rpc-statd-notify.service |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/systemd/rpc-statd-notify.service b/systemd/rpc-statd-notify.service
index a655445..b608a14 100644
--- a/systemd/rpc-statd-notify.service
+++ b/systemd/rpc-statd-notify.service
@@ -1,6 +1,6 @@
 [Unit]
 Description=Notify NFS peers of a restart
-Requires=network-online.target
+Requires=network.target
 After=network.target nss-lookup.target
 
 # if we run an nfs server, it needs to be running before we
-- 
1.7.1

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] systemd: Have rpc-statd-notify.service Require network.target

2015-03-03 Thread Steve Dickson


On 03/03/2015 02:18 PM, Zbigniew Jędrzejewski-Szmek wrote:
 On Tue, Mar 03, 2015 at 10:06:57PM +0300, Andrei Borzenkov wrote:
 Indeed. From the man page:
 -m retry-time
 Specifies the length of time, in minutes, to continue retry‐
 ing  notifications to unresponsive hosts.  If this option is
 not specified, sm-notify attempts to send notifications  for
 15  minutes.   Specifying  a  value of 0 causes sm-notify to
 continue sending notifications to unresponsive  peers  until
 it is manually killed.
 
 Notifications  are retried if sending fails, the remote does
 not respond, the remote's NSM service is not registered,  or
 if  there  is  a  DNS  failure  which  prevents the remote's
 mon_name from being resolved to an address.
 
 So rpc-statd-notify.service should be fine with being started before
 the network is up at all.
Right... that's the point... we want the service to fork and keep trying
in the background 

Thanks for the cycles! 

steved.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH 2/2] systemd: Relax dependencies of services

2015-03-03 Thread Steve Dickson


On 03/03/2015 03:12 PM, Martin Pitt wrote:
 Hello all,
 
 Zbigniew Jędrzejewski-Szmek [2015-03-03 19:08 +0100]:
 Are you sure that all of those nfs daemons do not require
 sockets.target and other stuff provided by basic.target to be ready?
 
 The corresponding upstart jobs trigger on virtual file systems (/sys,
 etc.) and rpcbind, and we've used them for years. Also, NFS does not
 yet use socket activation, or talks to other services on sockets
 (except for rpcbind), so we don't need sockets.target either.  The
 other dependencies (some network.target, some nss-lookup.target, etc.)
 are already specified explicitly. So I'm quite sure.
You are correct. rpcbind is the only service we have that uses socket
activation, which I don't think works very well... 

Just last week I noticed that if you reboot a VM and the first command 
you type is rpcinfo -p, that command will time out trying to talk
with rpcbind. After the timeout everything works fine... 

I thought I opened a bz but I can't seem to find it.

steved.

 
 That said, there's of course always a nonzero chance that this breaks
 in a case which I haven't tested. In particular, I didn't test
 kerberos/gssd, I'd appreciate if someone who has a real-world setup
 with that could give this a spin.
 
 Thanks,
 
 Martin
 
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH 0/3] Miscellaneous systemd changes (v2)

2015-01-15 Thread Steve Dickson


On 01/14/2015 10:46 AM, Steve Dickson wrote:
 v2:
   * Corrected the numerous BindTo typos 
 
 Here are a few systemd changes that were suggested
 by the systemd folks:
 
* Bind the nfs-idmap service to the nfs server.
* Correctly bind nfs-mountd service to the nfs server.
* Use the approved way to check if systemd is installed and running
 
 Steve Dickson (3):
   systemd: Bind rpc.idmapd to the nfs-server service
   systemd: Bind the nfs-mountd service to the nfs-server service
  start-statd: Use the canonical way to check if systemd is running.
 
  systemd/nfs-idmapd.service | 2 +-
  systemd/nfs-mountd.service | 3 +--
  utils/statd/start-statd| 2 +-
  3 files changed, 3 insertions(+), 4 deletions(-)
 
All three committed 

steved.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemctl status not showing still running processes in inactive .mount unit cgroups (NFS specifically)

2015-01-14 Thread Steve Dickson
Hello,

On 01/12/2015 04:43 PM, Colin Guthrie wrote:
 
 But FWIW, your check for whether systemctl is installed via calling
 systemctl --help is IMO not very neat.
 
 If you're using bash here anyway, you might as well just do a:
 
 if [ -d /sys/fs/cgroup/systemd ]; then
 
 type check or if you want to be super sure you could do:
 
 if mountpoint -q /sys/fs/cgroup/systemd; then
 
 This is a simple trick to detect if systemd is running. If systemctl is
 then not found, then I'd be pretty surprised (but your code would
 continue if the call fails anyway, so this should be safe).
 
 This avoids one more fork.
 
 Technically you could avoid calling systemctl start by calling
 systemctl is-active first, but to be honest this isn't really needed.
I took Michael's advice and used 'test -d /run/systemd/system'.


 That seems to work OK (from a practical perspective things worked OK and
 I got my mount) but are obviously sub optimal, especially when the mount
 point is unmounted.

 In my case, I called umount but the rpc.statd process was still running:
 What is the expectation? When the umount should bring down rpc.statd?
 
 If it started it's own instance, it should IMO kill it again on umount,
 but I was more talking about the systemd context here. If the
 start-statd thing had done its job correctly we wouldn't have gotten
 into this situation in the first place (as rpc-statd.service would have
 been started and contained its own rpc.statd process happily!)
 
 I don't really know how it should work on non-systemd systems as in that
 case I presume start-statd is called for each mount there (forgive me if
 I'm wrong) and thus you'd likely end up with lots of rpc.statd processes
 running, especially if you do lots of mount/umount cycles on a given
 share. Perhaps all you need to do is some very, very minimal fallback
 support here? e.g. checking the pid file and that the process of that
 pid is an rpc.statd process and only actually start it if it's not
 already running?
Well, there is code in rpc.statd, sm-notify and mount.nfs that checks 
to see if an rpc.statd is already running... But the code appears
to be a bit racy, since in a very few environments multiple rpc.statds
are being started up... 
  
 
 For systemd systems generally all that would happen is you've have a lot
 of redundant calls to systemctl start, but they should generally be
 harmless.
Well, in the environment I just described, the multiple statds getting
started are being started through systemd.
 
 
 
 FWIW, I think there are a number of issues with the systemd units
 upstream. If you're interested in some feedback here is a small snippet.
 I'm happy to do some patches for you if you'd be willing to apply them.
Always... But I would like to have this conversation with the
NFS community at linux-...@vger.kernel.org. Maybe you could post
your ideas there? In a new thread?

 
 Main issues are:
 
 1. nfs-utils.service really is an abuse. It should be a nfs-utils.target
 (the comment inside alludes to the fact this is known, and it's just that
 you want systemctl restart nfs-utils to just work as a command. I
 can see the desire, but it's still an abuse. Perhaps someone would be
 willing to write a patch that does expansion to .service or .target
 should the unit type not be specified? Dunno how hard it would be tho'
Well, we did make the nfs-client a target but the nfs-server was made 
a service... I remember bringing this point up at the time... but I forget 
what was said.

 
 2. The way nfs-blkmap.service/target interact seems really non-standard.
 The fact that nfs-blkmap.service has no [Install] section will make it
 report oddly in systemctl status (it will not say enabled or
 disabled but static). The use of Requisite= to tie it to it's target
 is, well, creative. Personally, I'd have it as a
I see your point... 

 
 3. rpc-svcgssd.service and nfs-mountd.service have two PartOf=
 directives. This could lead to a very strange state where e.g.
 nfs-server.service is stopped, and thus the stop job is propigated to
 both these units, but they are actually still needed by nfs clients (I
 believe) as you also list it as part of nfs-utils.service (which as I
 mentioned already is really an abuse of a service.
At this point rpc-svcgssd.service is not even being used, at least
in the distros I work with... Is the point to use BindsTo instead 
of PartOf?

 
 4. Numerous units make use of /etc/sysconfig/* tree. This is very much
 discouraged for upstream units, and the official mechanism for tweaking
 units is to put a dropin file in
 /etc/systemd/system/theservice.service.d/my-overrides.conf
Like it or not, people expect things to be in /etc/sysconfig/. From a distro
standpoint that would be a very hard thing to change. But... if there
was a seamless way to make that change... That would be interesting... 
 
 
 In these files you can tweak a single line, typically the Exec= line, or
 add an Environment= line, without altering anything else.
 
 

[systemd-devel] [PATCH 2/3] systemd: Bind the nfs-mountd service to the nfs-server service

2015-01-14 Thread Steve Dickson
Use BindsTo, instead of PartOf, to bind the rpc-mountd
service to the nfs-server service. It's a much tighter
bind than PartOf.

The PartOf=nfs-utils.service was not needed.

One side effect of this tighter bond is that when rpc.mountd
is stopped, it will bring the nfs server down
as well, due to the Requires=nfs-mountd.service in
the nfs-server service.

Signed-off-by: Steve Dickson ste...@redhat.com
---
 systemd/nfs-mountd.service | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/systemd/nfs-mountd.service b/systemd/nfs-mountd.service
index 7ccc0f7..d908afe 100644
--- a/systemd/nfs-mountd.service
+++ b/systemd/nfs-mountd.service
@@ -3,8 +3,7 @@ Description=NFS Mount Daemon
 Requires=proc-fs-nfsd.mount
 After=proc-fs-nfsd.mount
 After=network.target
-PartOf=nfs-server.service
-PartOf=nfs-utils.service
+BindsTo=nfs-server.service
 
 Wants=nfs-config.service
 After=nfs-config.service
-- 
2.1.0

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH 1/3] systemd: Bind rpc.idmapd to the nfs-server service

2015-01-14 Thread Steve Dickson


On 01/14/2015 04:46 AM, Michal Sekletar wrote:
 On Tue, Jan 13, 2015 at 03:37:35PM -0500, Steve Dickson wrote:
 Since rpc.idmapd is only used by the nfs server, to do 
 its id mapping, bind the nfs-idmapd service to the 
 nfs-server service so rpc.idmapd will be started 
 and stopped with the nfs server.

 Signed-off-by: Steve Dickson ste...@redhat.com
 ---
  systemd/nfs-idmapd.service | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

 diff --git a/systemd/nfs-idmapd.service b/systemd/nfs-idmapd.service
 index 11895e2..61c9a64 100644
 --- a/systemd/nfs-idmapd.service
 +++ b/systemd/nfs-idmapd.service
 @@ -1,7 +1,7 @@
  [Unit]
  Description=NFSv4 ID-name mapping service
  
 -PartOf=nfs-utils.service
 +BindTo=nfs-server.service
 
 Please note that the actual name of the dependency is BindsTo.
Got it.. Thanks!

steved.

 
 Michal
 
  
  Wants=nfs-config.service
  After=nfs-config.service
 -- 
 2.1.0

 ___
 systemd-devel mailing list
 systemd-devel@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/systemd-devel
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] PartOf= Question

2015-01-13 Thread Steve Dickson
Hello,

On 01/13/2015 08:27 AM, Steve Dickson wrote:
 Here is what I have now in nfs-idmap.service:
 
 [Unit]
 Description=NFSv4 ID-name mapping service
 
 BindTo=nfs-server.service
 
 Wants=nfs-config.service
 After=nfs-config.service
 
 [Service]
 EnvironmentFile=-/run/sysconfig/nfs-utils
 Type=forking
 ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS
 
 And in nfs-server.service I have the lines:
 Wants= nfs-idmapd.service
 After= nfs-idmapd.service 
 
 And still rpc.idmapd does not come down when server is stopped.
Never mind... I got this working... It was a pilot error on my part :$

thanks for all the help!!

steved.
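For anyone hitting the same wall: the directive name is BindsTo= (not BindTo=, as Michal Sekletar points out in the patch review elsewhere in these threads), and the working shape implied by this exchange looks roughly like this sketch:

```ini
# nfs-idmapd.service (sketch)
[Unit]
Description=NFSv4 ID-name mapping service
BindsTo=nfs-server.service

# and in nfs-server.service:
# [Unit]
# Wants=nfs-idmapd.service
# After=nfs-idmapd.service
```

BindsTo= stops nfs-idmapd whenever nfs-server stops, while the Wants=/After= pair on the server side pulls it in and orders it at start.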

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] PartOf= Question

2015-01-13 Thread Steve Dickson


On 01/12/2015 05:20 PM, Colin Guthrie wrote:
 Hi Steve,
 
 I think I maybe (coincidentally) partly answered some of this question
 (or at least outlined the problem) in my other reply on a different
 thread made just a moment ago.
I must have missed it... I should watch this list more closely than I do.

 
 Steve Dickson wrote on 12/01/15 19:58:
 Hello,

 The nfs-server service starts both the rpc-mountd service
 and the rpc-idmapd service when the server is started.
 But only brings down the rpc-mountd service when
 the NFS server is stopped. 

 I want the nfs-server service to bring both services down when
 the server is stopped. 
 
 Do you mean nfs-mountd/idmapd.service here? That's the name I see in
 git, but perhaps you've renamed them locally to match their binary
 names? (I'm forever mixing up the rpc- vs. nfs- prefixes too FWIW!)
Yes... nfs-mountd and nfs-idmapd where the services I was talking about

 
 Question: Are idmapd and mountd *only* required for the server? I
 thought that idmapd was at least needed for the client too (but this
 could easily be a problem with my understanding, so feel free to correct
 me).
Well, if /usr/sbin/nfsidmap and /etc/request-key.d/id_resolver.conf 
exist, then the kernel keyrings are used to store the id mappings.
The kernel makes the upcall to nfsidmap via id_resolver.conf.

The caveat here is that it's hard-coded in the kernel to always try the nfsidmap 
upcall first; if that fails, then an upcall to rpc.idmapd is tried. 
So it makes sense to ensure the nfsidmap upcall exists (aka not making
two upcalls for every id binding). 

 
 I'll assume I'm wrong for the sake of argument as you seem to want this
 behaviour! :)
 
 Looking at the difference between rpc-mountd and 
 rpc-idmapd services, I noted the rpc-mountd service
 had:
PartOf=nfs-server.service
PartOf=nfs-utils.service
 
 I would strongly discourage the use of multiple PartOf= directives.
 
 Note that as the man page describes PartOf is a one-way propagation.
 That is if nfs-server is stopped, started or restarted it will propagate
 to rpc-mountd.
 
 But likewise if nfs-utils is stopped, started or restarted it too will
 propagate, but this might then go out of sync with nfs-server.
 
 In these units, as nfs-mountd is required by nfs-server.service, if
 nfs-utils is restarted, then (I think this is correct) nfs-server will
 have to go down because it's requirement is no longer true (during the
 window when nfs-mountd.service restarting), but there is nothing that
 will then start nfs-server again after things are back up.
 
 So by having two PartOf= directives here, issuing systemctl restart
 nfs-utils when nfs-server is started, will result in nfs-server being
 stopped. Now in this particular case, nfs-server is not really a daemon
 so things will physically work, but the state will be really confusing
 to a sysadmin!
This makes sense... 

 
 
 If rpc-mountd and rpc-idmap are essentially bound to nfs-server.service
 state, then I would remove both PartOf= lines and simply add a
 BindsTo=nfs-server.service line. Forget nfs-utils.service (which I think
 should be generally done anyway).
 
 This binds both units state to that of nfs-server.service. If it's
 started, they will start, if it is stopped, they will stop. If they are
 individually stopped, so will nfs-server (it Requires= them).
 
 They should thus continue to not have any [Install] section.
 
 
 PartOf is looser than BindsTo. It was introduced to allow targets to be
 restarted and have all their units restart automatically (often this
 would be templated units), but to also allow the individual units that
 are part of that target to be restarted individually without affecting
 other units in the same target.
Got it.. 

 
 Perhaps this is actually what you want here (e.g. to be able to restart
 idmap on its own without having this propagate to the
 nfs-server.service too? If so, I believe you should use Wants= in
 nfs-server.service rather than Requires= as that way the individual
 units can be restarted without actually affecting the state of the
 nfs-server itself, but you do have to ensure they are enabled in some
 other way (as Wants= will not pull them in automatically).
I have a Wants=nfs-idmapd.service in nfs-server.service so I 
think that is good to go...

Here is what I have now in nfs-idmap.service:

[Unit]
Description=NFSv4 ID-name mapping service

BindTo=nfs-server.service

Wants=nfs-config.service
After=nfs-config.service

[Service]
EnvironmentFile=-/run/sysconfig/nfs-utils
Type=forking
ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS

And in nfs-server.service I have the lines:
Wants= nfs-idmapd.service
After= nfs-idmapd.service 

And still rpc.idmapd does not come down when server is stopped.

 
 
 I suspect it's required by something else perhaps? Does its pid change
 (i.e. is it restarted)?
No. The pid stays the same on restarts.

 
 Or did you just modify the units on disk but forget to run systemctl
 daemon-reload to reread them?
After

Re: [systemd-devel] Running system services required for certain filesystems

2015-01-13 Thread Steve Dickson


On 01/12/2015 05:40 PM, Colin Guthrie wrote:
 Steve Dickson wrote on 12/01/15 20:31:
 Hello

 On 01/12/2015 05:37 AM, Colin Guthrie wrote:
 Hi,

 On a related note to my previous message (subject systemctl status not
 showing still running processes in inactive .mount unit cgroups (NFS
 specifically)), when mount.nfs runs to mount NFS filesystems, it shells
 out to /usr/sbin/start-statd which in turn calls sytemctl to start
 rpc.statd service. This feels ugly.
 Why? This is why rpc.statd does not need to be started 
 on the client default any more. 
 
 Yes but it requires the shelling out to bash script to do some
 modification of a pre-calculated set of transactions and dynamically
 adjusts the systemd jobs.
 
 It feels very un-systemd to use systemctl during the initial transaction
 of start jobs to modify things.
I guess I'm not such a systemd purist ;-) but it feels ok to me! :-) 
 
 
 Generally speaking you also have to be really, really careful doing such
 things as they can be called at unexpected times and result in deadlocks
 (protected by a timeout thankfully) due to ordering cycles.
 
 e.g. say something in early boot that has Before=rpc-statd.service is
 run, that somehow triggers, e.g. an automount, that in turn calls
 mount.nfs, which in turn calls systemctl start rpc-statd.service, then
 that systemctl job will block because the job it creates is waiting for
 the job with Before=rpc-statd.service in it to complete.
I see the deadlock... in theory... but that assumes the first 
Before=rpc-statd.service never completes which is very 
unlikely.

 
 So calling systemctl during the initial transaction is really something
 to strongly discourage IMO. Ideally all information would be available
 after all the generators are run to calculate the initial transaction
 right at the beginning without any of dynamic modification in the middle.
 
 We have a sync point for this in the form of remote-fs-pre.target, but
 for some reason upstream nfs-utils people still deem that
 /usr/sbin/start-statd is a required component.
 I'm not seeing how remote-fs-pre.target is a sync point. It's only
 used by the nfs-client.target... 
 
 Well, its original intention was as a sync point, but it doesn't
 seem to be getting used that way now (and there are some good
 reasons which I'll cover in a reply to Andrei).
OK.

 
 But it did get me thinking about how clean remote-fs-pre.target really
 is. We do need to make sure rpc.statd is running before any NFS
 filesystems are mounted and relying on the blunt instrument of
 remote-fs-pre.target seems kinda wrong. It should be more on demand
 e.g. when I start an nfs mount, it should be able to specify that
 rpc.statd service is a prerequisite.

 So my question is, is there a cleaner way to have dependencies like this
 specified for particular FS types? With the goal being that before
 systemd will try and mount any NFS filesystems it will make sure that
 nfs-lock.service (or statd.service or nfs-statd.service or whatever it's
 name really should be) is running?

 I kinda want a Requires=nfs-lock.service and After=nfs-lock.service
 definitions to go into all my *.mount units for any nfs filesystem, but
 it a way that means I don't have to actually specify this manually in my
 fstab.
 Why spread out the pain? I think the sync point we have right now
 mount.nfs calling start-statd works and keeps everything in one place.
 
 Shelling out to start-statd definitely isn't a sync point and as I've
 outlined above, calling systemctl mid-transaction is really something we
 should avoid.
Again, I do see your point. In this particular case I'm not sure there
is much else we can do.
 
 
 I do like that it solves the case of calling mount /mountpoint command
 manually as a sysadmin and it will start the necessary service but I
 still think it's ugly if called via systemctl start /mountpoint - we
 should be able to handle this kind of dep without such shelling out.
 
 Something like a pseudo service - systemd-fstype@nfs.service with
 Type=oneshot+RemainAfterExit=true+Exec=/usr/bin/true that is run by
 systemd before it does its mounting to act as a sync point (thus allowing
 nfs-lock.service to just put
 RequiredBy=systemd-fstype@nfs.service+Before=systemd-fstype@nfs.service
 and all is well) - there shouldn't really be a strong need for any
 actual changes to systemd-fstype@.service (or any
 systemd-fstype@nfs.service.d dropins) here, as it can all be specified
 the other way around in nfs-lock.service.
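As a sketch of what is being proposed (both units are hypothetical; no systemd-fstype@.service exists): an inert, RemainAfterExit oneshot would act as a sync point that systemd starts before mounting any filesystem of type %i, and nfs-lock.service would attach itself to it entirely from its own side:

```
# systemd-fstype@.service (hypothetical sync point, instantiated as
# systemd-fstype@nfs.service before any nfs mount job runs)
[Unit]
Description=Sync point for %i filesystem mounts

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/bin/true

# nfs-lock.service would then declare, on its side only:
#   [Unit]
#   Before=systemd-fstype@nfs.service
#   [Install]
#   RequiredBy=systemd-fstype@nfs.service
```

The point of the pseudo service is that it needs no per-filesystem drop-ins: every service that must run before a given fstype hooks itself in via RequiredBy=/Before=.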
 WOW.. Granted I'm no systemd expert... what did you say?? :-) 
 My apologies but I'm unable to parse the above paragraph at all!

 In the end, I'm all for making things go smoother but I've
 never been a fan of fixing something that's not broken... 
 
 To be fair, I could probably word it better, and (being totally fair)
  I'm suggesting a similar abuse of a .service unit that the current
 nfs-utils.service does (which we really shouldn't do!)
 
 But ultimately, what the above would

[systemd-devel] [PATCH 0/3] Miscellaneous systemd changes

2015-01-13 Thread Steve Dickson
Here are a few systemd changes that were suggested
by the systemd folks:

   * Bind the nfs-idmap service to the nfs server.
   * Correctly bind nfs-mountd service to the nfs server.
   * Use the approved way to check if systemd is installed and running

Steve Dickson (3):
  systemd: Bind rpc.idmapd to the nfs-server service
  systemd: Bind the nfs-mountd service to the nfs-server service
  start-statd: Use the canonical way to check if systemd is running.

 systemd/nfs-idmapd.service | 2 +-
 systemd/nfs-mountd.service | 3 +--
 utils/statd/start-statd| 2 +-
 3 files changed, 3 insertions(+), 4 deletions(-)

-- 
2.1.0

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] [PATCH 2/3] systemd: Bind the nfs-mountd service to the nfs-server service

2015-01-13 Thread Steve Dickson
Use BindTo, instead of PartOf, to bind the nfs-mountd
service to the nfs-server service. It's a much tighter
bind than PartOf.

The PartOf=nfs-utils.service was not needed.

One side effect of this tighter bond is that when rpc.mountd
is stopped, it will also bring the nfs server down,
due to the Requires=nfs-mountd.service in the
nfs-server service.

Signed-off-by: Steve Dickson ste...@redhat.com
---
 systemd/nfs-mountd.service | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/systemd/nfs-mountd.service b/systemd/nfs-mountd.service
index 7ccc0f7..c16af57 100644
--- a/systemd/nfs-mountd.service
+++ b/systemd/nfs-mountd.service
@@ -3,8 +3,7 @@ Description=NFS Mount Daemon
 Requires=proc-fs-nfsd.mount
 After=proc-fs-nfsd.mount
 After=network.target
-PartOf=nfs-server.service
-PartOf=nfs-utils.service
+BindTo=nfs-server.service
 
 Wants=nfs-config.service
 After=nfs-config.service
-- 
2.1.0



[systemd-devel] [PATCH 3/3] start-statd: Use the canonical way to check if systemd is running.

2015-01-13 Thread Steve Dickson
Use the approved way, defined in
   http://www.freedesktop.org/software/systemd/man/sd_booted.html

to check if systemd is installed and running.

Signed-off-by: Steve Dickson ste...@redhat.com
---
 utils/statd/start-statd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/utils/statd/start-statd b/utils/statd/start-statd
index ec9383b..b32b3a5 100755
--- a/utils/statd/start-statd
+++ b/utils/statd/start-statd
@@ -7,7 +7,7 @@
 PATH=/sbin:/usr/sbin:/bin:/usr/bin
 
 # First try systemd if it's installed.
-if systemctl --help >/dev/null 2>&1; then
+if test -d /run/systemd/system; then
 # Quit only if the call worked.
 systemctl start rpc-statd.service && exit
 fi
-- 
2.1.0

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemctl status not showing still running processes in inactive .mount unit cgroups (NFS specifically)

2015-01-12 Thread Steve Dickson
Hello

On 01/12/2015 05:34 AM, Colin Guthrie wrote:
 Hi,
 
 Looking into a thoroughly broken nfs-utils package here I noticed a
 quirk in systemctl status and in umount behaviour.
 
 
 In latest nfs-utils there is a helper binary shipped upstream called
 /usr/sbin/start-statd (I'll send a separate mail talking about this
 infrastructure with subject: Running system services required for
 certain filesystems)
 
 It sets the PATH to /sbin:/usr/sbin then tries to run systemctl
 (something that is already broken here as systemctl is in bin, not sbin)
 to start statd.service (again this seems to be broken as the unit
 appears to be called nfs-statd.service upstream... go figure).
The PATH problem has been fixed in the latest nfs-utils.  

 
 Either way we call the service nfs-lock.service here (for legacy reasons).
With the latest nfs-utils rpc-statd.service is now called from start-statd
But yes, I did symbolically link nfs-lock.service to rpc-statd.service when 
I moved to the upstream systemd scripts.

 
 If this command fails (which it does for us for two reasons) it runs
 rpc.statd --no-notify directly. This binary then run in the context of
 the .mount unit and thus in the .mount cgroup.
What are the two reasons rpc.statd --no-notify fails? 

 
 That seems to work OK (from a practical perspective things worked OK and
 I got my mount) but are obviously sub optimal, especially when the mount
 point is unmounted.
 
 In my case, I called umount but the rpc.statd process was still running:
What is the expectation? That the umount should bring down rpc.statd?

 
 [root@jimmy nfs-utils]$ pscg | grep 3256
  3256 rpcuser
 4:devices:/system.slice/mnt-media-scratch.mount,1:name=systemd:/system.slice/mnt-media-scratch.mount
 rpc.statd --no-notify
 
 [root@jimmy nfs-utils]$ systemctl status mnt-media-scratch.mount
 ● mnt-media-scratch.mount - /mnt/media/scratch
Loaded: loaded (/etc/fstab)
Active: inactive (dead) since Mon 2015-01-12 09:58:52 GMT; 1min 12s ago
 Where: /mnt/media/scratch
  What: marley.rasta.guthr.ie:/mnt/media/scratch
  Docs: man:fstab(5)
man:systemd-fstab-generator(8)
 
 Jan 07 14:55:13 jimmy mount[3216]: /usr/sbin/start-statd: line 8:
 systemctl: command not found
 Jan 07 14:55:14 jimmy rpc.statd[3256]: Version 1.3.0 starting
 Jan 07 14:55:14 jimmy rpc.statd[3256]: Flags: TI-RPC
 [root@jimmy nfs-utils]$
Again this is fixed with the latest nfs-utils...

Question? Why are you using v3 mounts? With V4 all this goes away.

steved.
 
 
 As you can see the mount is dead but the process is still running and
 the systemctl status output does not correctly show the status of
 binaries running in the cgroup. When the mount is active the process
 does actually exist in this unit's context (provided systemd is used to
 do the mount - if you call mount /path command separately, the
 rpc.statd process can end up in weird cgroups - such as your user session!)
 
 Anyway, assuming the process is in the .mount unit cgroup, should
 systemd detect the umount and kill the processes accordingly, and if
 not, should calling systemctl status on .mount units show processes
 even if it's in an inactive state?
 
 This is with 217 with a few cherry picks on top so might have been
 addressed by now.
 
 
 Cheers
 
 Col
 


Re: [systemd-devel] [PATCH] rpcbind: systemd socket activation (v2)

2014-11-26 Thread Steve Dickson


On 11/25/2014 05:40 PM, J. Bruce Fields wrote:
 On Tue, Nov 25, 2014 at 12:05:32PM -0500, Steve Dickson wrote:
 This is based on a patch originally posted by Lennart Poettering:
 http://permalink.gmane.org/gmane.linux.nfs/33774.
 
 Have you run this by the reporter
 of https://bugzilla.redhat.com/show_bug.cgi?id=1158164 ?
 
 Because he tried applying that old patch and found he was still having
 problems.
 
 But they may well be problems that are fixed by your version, or he may
 have applied it incorrectly, I didn't try to figure it out.
If he was using the systemd scripts in the patch then there would be
problems... which is the reason I eliminated them 

steved.
 
 --b.
 

 That patch was not merged due to the lack of a shared library and
 as systemd was seen to be too Fedora specific.

 Systemd now provides a shared library, and it is (or very soon will
 be) the default init system on all the major Linux distributions.

 This version of the patch has three changes from the original:

  * It uses the shared library.
  * It comes with unit files.
  * It is rebased on top of master.

 Please review the patch with git show -b or otherwise ignoring the
 whitespace changes, or it will be extremely difficult to read.

 v5: incorporated comments on the PKG_CHECK_MODULES macro.

 v4: reorganized the changes to make the diff easier to read
  remove systemd scripts.

 v3: rebase
  fix typos
  listen on /run/rpcbind.sock, rather than /var/run/rpcbind.sock (the
  latter is a symlink to the former, but this means the socket can be
  created before /var is mounted)
  NB: this version has been compile-tested only as I no longer use
  rpcbind myself
 v2: correctly enable systemd code at compile time
  handle the case where not all the required sockets were supplied
  listen on udp/tcp port 111 in addition to /var/run/rpcbind.sock
  do not daemonize

 Tom Gundersen (1):
   rpcbind: add support for systemd socket activation

  Makefile.am   |  6 +
  configure.ac  | 12 +
  src/rpcbind.c | 81 ++-
  3 files changed, 93 insertions(+), 6 deletions(-)

 -- 
 1.9.3

 --
 To unsubscribe from this list: send the line unsubscribe linux-nfs in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html
 


Re: [systemd-devel] [PATCH] rpcbind: add support for systemd socket activation

2014-11-26 Thread Steve Dickson


On 11/25/2014 12:05 PM, Steve Dickson wrote:
 From: Tom Gundersen t...@jklm.no
 
 Making rpcbind socket activated will greatly simplify
 its integration in systemd systems. In essence, other services
 may now assume that rpcbind is always available, even during very
 early boot. This means that we no longer need to worry about any
 ordering dependencies.
 
 Original-patch-by: Lennart Poettering lenn...@poettering.net
 Cc: systemd-devel@lists.freedesktop.org
 Acked-by: Cristian Rodríguezcrrodrig...@opensuse.org
 Signed-off-by: Tom Gundersen t...@jklm.no
 Signed-off-by: Steve Dickson ste...@redhat.com
Committed... 

steved.
 ---
  Makefile.am   |  6 +
  configure.ac  | 12 +
  src/rpcbind.c | 81 ++-
  3 files changed, 93 insertions(+), 6 deletions(-)
 
 diff --git a/Makefile.am b/Makefile.am
 index 8715082..c99566d 100644
 --- a/Makefile.am
 +++ b/Makefile.am
 @@ -41,6 +41,12 @@ rpcbind_SOURCES = \
   src/warmstart.c
  rpcbind_LDADD = $(TIRPC_LIBS)
  
 +if SYSTEMD
 +AM_CPPFLAGS += $(SYSTEMD_CFLAGS) -DSYSTEMD
 +
 +rpcbind_LDADD += $(SYSTEMD_LIBS)
 +endif
 +
  rpcinfo_SOURCES =   src/rpcinfo.c
  rpcinfo_LDADD   =   $(TIRPC_LIBS)
  
 diff --git a/configure.ac b/configure.ac
 index 5a88cc7..967eb05 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -36,6 +36,18 @@ AC_SUBST([nss_modules], [$with_nss_modules])
  
  PKG_CHECK_MODULES([TIRPC], [libtirpc])
  
 +PKG_PROG_PKG_CONFIG
 +AC_ARG_WITH([systemdsystemunitdir],
 +  AS_HELP_STRING([--with-systemdsystemunitdir=DIR], [Directory for systemd service files]),
 +  [], [with_systemdsystemunitdir=$($PKG_CONFIG --variable=systemdsystemunitdir systemd)])
 +  if test "x$with_systemdsystemunitdir" != xno; then
 +    AC_SUBST([systemdsystemunitdir], [$with_systemdsystemunitdir])
 +    PKG_CHECK_MODULES([SYSTEMD], [libsystemd], [],
 +      [PKG_CHECK_MODULES([SYSTEMD], [libsystemd-daemon], [],
 +        AC_MSG_ERROR([libsystemd support requested but found]))])
 +  fi
 +AM_CONDITIONAL(SYSTEMD, [test -n "$with_systemdsystemunitdir" -a "x$with_systemdsystemunitdir" != xno])
 +
  AS_IF([test "x$enable_libwrap" = xyes], [
 	AC_CHECK_LIB([wrap], [hosts_access], ,
 	    AC_MSG_ERROR([libwrap support requested but unable to find libwrap]))
 diff --git a/src/rpcbind.c b/src/rpcbind.c
 index e3462e3..f7c71ee 100644
 --- a/src/rpcbind.c
 +++ b/src/rpcbind.c
 @@ -56,6 +56,9 @@
  #include <netinet/in.h>
  #endif
  #include <arpa/inet.h>
 +#ifdef SYSTEMD
 +#include <systemd/sd-daemon.h>
 +#endif
  #include <fcntl.h>
  #include <netdb.h>
  #include <stdio.h>
 @@ -296,6 +299,7 @@ init_transport(struct netconfig *nconf)
  u_int32_t host_addr[4];  /* IPv4 or IPv6 */
  struct sockaddr_un sun;
  mode_t oldmask;
 + int n;
  res = NULL;
 
  if ((nconf->nc_semantics != NC_TPI_CLTS) &&
 @@ -314,6 +318,76 @@ init_transport(struct netconfig *nconf)
  fprintf(stderr, "[%d] - %s\n", i, *s);
  }
 #endif
 + if (!__rpc_nconf2sockinfo(nconf, &si)) {
 + syslog(LOG_ERR, "cannot get information for %s",
 + nconf->nc_netid);
 + return (1);
 + }
 +
 +#ifdef SYSTEMD
 + n = sd_listen_fds(0);
 + if (n < 0) {
 + syslog(LOG_ERR, "failed to acquire systemd sockets: %s", strerror(-n));
 + return 1;
 + }
 +
 + /* Try to find if one of the systemd sockets we were given match
 +  * our netconfig structure. */
 +
 + for (fd = SD_LISTEN_FDS_START; fd < SD_LISTEN_FDS_START + n; fd++) {
 + struct __rpc_sockinfo si_other;
 + union {
 + struct sockaddr sa;
 + struct sockaddr_un un;
 + struct sockaddr_in in4;
 + struct sockaddr_in6 in6;
 + struct sockaddr_storage storage;
 + } sa;
 + socklen_t addrlen = sizeof(sa);
 +
 + if (!__rpc_fd2sockinfo(fd, &si_other)) {
 + syslog(LOG_ERR, "cannot get information for fd %i", fd);
 + return 1;
 + }
 +
 + if (si.si_af != si_other.si_af ||
 +    si.si_socktype != si_other.si_socktype ||
 +    si.si_proto != si_other.si_proto)
 + continue;
 +
 + if (getsockname(fd, &sa.sa, &addrlen) < 0) {
 + syslog(LOG_ERR, "failed to query socket name: %s",
 +   strerror(errno));
 + goto error;
 + }
 +
 + /* Copy the address */
 + taddr.addr.maxlen = taddr.addr.len = addrlen;
 + taddr.addr.buf = malloc(addrlen);
 + if (taddr.addr.buf == NULL) {
 + syslog(LOG_ERR,
 +   "cannot allocate memory for %s address",
 +   nconf->nc_netid);
 + goto error;
 + }
 + memcpy

Re: [systemd-devel] [PATCH] rpcbind: add support for systemd socket activation

2014-11-25 Thread Steve Dickson
Hello,

On 11/22/2014 09:24 PM, Zbigniew Jędrzejewski-Szmek wrote:
 On Fri, Nov 21, 2014 at 11:43:47AM -0500, Steve Dickson wrote:
 From: Tom Gundersen t...@jklm.no

 Making rpcbind socket activated will greatly simplify
 its integration in systemd systems. In essence, other services
 may now assume that rpcbind is always available, even during very
 early boot. This means that we no longer need to worry about any
 ordering dependencies.

 This is based on a patch originally posted by Lennart Poettering:
 http://permalink.gmane.org/gmane.linux.nfs/33774.

 That patch was not merged due to the lack of a shared library and
 as systemd was seen to be too Fedora specific.

 Systemd now provides a shared library, and it is (or very soon will
 be) the default init system on all the major Linux distributions.

 This version of the patch has three changes from the original:

  * It uses the shared library.
  * It comes with unit files.
  * It is rebased on top of master.

 Please review the patch with git show -b or otherwise ignoring the
 whitespace changes, or it will be extremely difficult to read.
 
 Thanks for working on this... Looks fine to me. Two suggestions
 below.
np... 

 
 v4: reorganized the changes to make the diff easier to read
 remove systemd scripts.

 v3: rebase
 fix typos
 listen on /run/rpcbind.sock, rather than /var/run/rpcbind.sock (the
 latter is a symlink to the former, but this means the socket can be
 created before /var is mounted)
 NB: this version has been compile-tested only as I no longer use
 rpcbind myself
 v2: correctly enable systemd code at compile time
 handle the case where not all the required sockets were supplied
 listen on udp/tcp port 111 in addition to /var/run/rpcbind.sock
 do not daemonize

 Original-patch-by: Lennart Poettering lenn...@poettering.net
 Cc: Steve Dickson ste...@redhat.com
 Cc: systemd-devel@lists.freedesktop.org
 Acked-by: Cristian Rodríguezcrrodrig...@opensuse.org
 Signed-off-by: Tom Gundersen t...@jklm.no
 Signed-off-by: Steve Dickson ste...@redhat.com
 ---
  Makefile.am   |  6 +
  configure.ac  | 10 
  src/rpcbind.c | 81 ++-
  3 files changed, 91 insertions(+), 6 deletions(-)

 diff --git a/Makefile.am b/Makefile.am
 index 8715082..c99566d 100644
 --- a/Makefile.am
 +++ b/Makefile.am
 @@ -41,6 +41,12 @@ rpcbind_SOURCES = \
  src/warmstart.c
  rpcbind_LDADD = $(TIRPC_LIBS)
  
 +if SYSTEMD
 +AM_CPPFLAGS += $(SYSTEMD_CFLAGS) -DSYSTEMD
 +
 +rpcbind_LDADD += $(SYSTEMD_LIBS)
 +endif
 +
  rpcinfo_SOURCES =   src/rpcinfo.c
  rpcinfo_LDADD   =   $(TIRPC_LIBS)
  
 diff --git a/configure.ac b/configure.ac
 index 5a88cc7..fdad639 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -36,6 +36,16 @@ AC_SUBST([nss_modules], [$with_nss_modules])
  
  PKG_CHECK_MODULES([TIRPC], [libtirpc])
  
 +PKG_PROG_PKG_CONFIG
 +AC_ARG_WITH([systemdsystemunitdir],
 +  AS_HELP_STRING([--with-systemdsystemunitdir=DIR], [Directory for systemd service files]),
 +  [], [with_systemdsystemunitdir=$($PKG_CONFIG --variable=systemdsystemunitdir systemd)])
 +  if test "x$with_systemdsystemunitdir" != xno; then
 +AC_SUBST([systemdsystemunitdir], [$with_systemdsystemunitdir])
 +PKG_CHECK_MODULES([SYSTEMD], [libsystemd-daemon])
 libsystemd-daemon got subsumed by libsystemd. So you should use something
 like
 
PKG_CHECK_MODULES([SYSTEMD], [libsystemd], [],
   [PKG_CHECK_MODULES([SYSTEMD], [libsystemd-daemon], [],
AC_MSG_ERROR([libsystemd support requested but found]))])
 
 to get the new library in preference to the old one.
Got it... 

 
 +  fi
 +AM_CONDITIONAL(SYSTEMD, [test -n "$with_systemdsystemunitdir" -a "x$with_systemdsystemunitdir" != xno])
 +
  AS_IF([test "x$enable_libwrap" = xyes], [
  AC_CHECK_LIB([wrap], [hosts_access], ,
  AC_MSG_ERROR([libwrap support requested but unable to find libwrap]))
 diff --git a/src/rpcbind.c b/src/rpcbind.c
 index e3462e3..f7c71ee 100644
 --- a/src/rpcbind.c
 +++ b/src/rpcbind.c
 @@ -56,6 +56,9 @@
  #include <netinet/in.h>
  #endif
  #include <arpa/inet.h>
 +#ifdef SYSTEMD
 +#include <systemd/sd-daemon.h>
 +#endif
  #include <fcntl.h>
  #include <netdb.h>
  #include <stdio.h>
 @@ -296,6 +299,7 @@ init_transport(struct netconfig *nconf)
  u_int32_t host_addr[4];  /* IPv4 or IPv6 */
  struct sockaddr_un sun;
  mode_t oldmask;
 +int n;
  res = NULL;
 
  if ((nconf->nc_semantics != NC_TPI_CLTS) &&
 @@ -314,6 +318,76 @@ init_transport(struct netconfig *nconf)
  fprintf(stderr, "[%d] - %s\n", i, *s);
  }
  #endif
 +if (!__rpc_nconf2sockinfo(nconf, &si)) {
 +syslog(LOG_ERR, "cannot get information for %s",
 +nconf->nc_netid);
 +return (1);
 +}
 +
 +#ifdef SYSTEMD
 +n = sd_listen_fds(0);
 +if (n < 0) {
 +syslog(LOG_ERR, "failed to acquire systemd sockets: %s", strerror(-n));
 +return 1

[systemd-devel] [PATCH] rpcbind: add support for systemd socket activation

2014-11-25 Thread Steve Dickson
From: Tom Gundersen t...@jklm.no

Making rpcbind socket activated will greatly simplify
its integration in systemd systems. In essence, other services
may now assume that rpcbind is always available, even during very
early boot. This means that we no longer need to worry about any
ordering dependencies.

Original-patch-by: Lennart Poettering lenn...@poettering.net
Cc: systemd-devel@lists.freedesktop.org
Acked-by: Cristian Rodríguezcrrodrig...@opensuse.org
Signed-off-by: Tom Gundersen t...@jklm.no
Signed-off-by: Steve Dickson ste...@redhat.com
---
 Makefile.am   |  6 +
 configure.ac  | 12 +
 src/rpcbind.c | 81 ++-
 3 files changed, 93 insertions(+), 6 deletions(-)

diff --git a/Makefile.am b/Makefile.am
index 8715082..c99566d 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -41,6 +41,12 @@ rpcbind_SOURCES = \
src/warmstart.c
 rpcbind_LDADD = $(TIRPC_LIBS)
 
+if SYSTEMD
+AM_CPPFLAGS += $(SYSTEMD_CFLAGS) -DSYSTEMD
+
+rpcbind_LDADD += $(SYSTEMD_LIBS)
+endif
+
 rpcinfo_SOURCES =   src/rpcinfo.c
 rpcinfo_LDADD   =   $(TIRPC_LIBS)
 
diff --git a/configure.ac b/configure.ac
index 5a88cc7..967eb05 100644
--- a/configure.ac
+++ b/configure.ac
@@ -36,6 +36,18 @@ AC_SUBST([nss_modules], [$with_nss_modules])
 
 PKG_CHECK_MODULES([TIRPC], [libtirpc])
 
+PKG_PROG_PKG_CONFIG
+AC_ARG_WITH([systemdsystemunitdir],
+  AS_HELP_STRING([--with-systemdsystemunitdir=DIR], [Directory for systemd service files]),
+  [], [with_systemdsystemunitdir=$($PKG_CONFIG --variable=systemdsystemunitdir systemd)])
+  if test "x$with_systemdsystemunitdir" != xno; then
+    AC_SUBST([systemdsystemunitdir], [$with_systemdsystemunitdir])
+    PKG_CHECK_MODULES([SYSTEMD], [libsystemd], [],
+      [PKG_CHECK_MODULES([SYSTEMD], [libsystemd-daemon], [],
+        AC_MSG_ERROR([libsystemd support requested but found]))])
+  fi
+AM_CONDITIONAL(SYSTEMD, [test -n "$with_systemdsystemunitdir" -a "x$with_systemdsystemunitdir" != xno])
+
 AS_IF([test "x$enable_libwrap" = xyes], [
	AC_CHECK_LIB([wrap], [hosts_access], ,
	    AC_MSG_ERROR([libwrap support requested but unable to find libwrap]))
diff --git a/src/rpcbind.c b/src/rpcbind.c
index e3462e3..f7c71ee 100644
--- a/src/rpcbind.c
+++ b/src/rpcbind.c
@@ -56,6 +56,9 @@
 #include <netinet/in.h>
 #endif
 #include <arpa/inet.h>
+#ifdef SYSTEMD
+#include <systemd/sd-daemon.h>
+#endif
 #include <fcntl.h>
 #include <netdb.h>
 #include <stdio.h>
@@ -296,6 +299,7 @@ init_transport(struct netconfig *nconf)
	u_int32_t host_addr[4];  /* IPv4 or IPv6 */
	struct sockaddr_un sun;
	mode_t oldmask;
+	int n;
	res = NULL;
 
	if ((nconf->nc_semantics != NC_TPI_CLTS) &&
@@ -314,6 +318,76 @@ init_transport(struct netconfig *nconf)
		fprintf(stderr, "[%d] - %s\n", i, *s);
	}
 #endif
+	if (!__rpc_nconf2sockinfo(nconf, &si)) {
+		syslog(LOG_ERR, "cannot get information for %s",
+			nconf->nc_netid);
+		return (1);
+	}
+
+#ifdef SYSTEMD
+	n = sd_listen_fds(0);
+	if (n < 0) {
+		syslog(LOG_ERR, "failed to acquire systemd sockets: %s", strerror(-n));
+		return 1;
+	}
+
+	/* Try to find if one of the systemd sockets we were given match
+	 * our netconfig structure. */
+
+	for (fd = SD_LISTEN_FDS_START; fd < SD_LISTEN_FDS_START + n; fd++) {
+		struct __rpc_sockinfo si_other;
+		union {
+			struct sockaddr sa;
+			struct sockaddr_un un;
+			struct sockaddr_in in4;
+			struct sockaddr_in6 in6;
+			struct sockaddr_storage storage;
+		} sa;
+		socklen_t addrlen = sizeof(sa);
+
+		if (!__rpc_fd2sockinfo(fd, &si_other)) {
+			syslog(LOG_ERR, "cannot get information for fd %i", fd);
+			return 1;
+		}
+
+		if (si.si_af != si_other.si_af ||
+		    si.si_socktype != si_other.si_socktype ||
+		    si.si_proto != si_other.si_proto)
+			continue;
+
+		if (getsockname(fd, &sa.sa, &addrlen) < 0) {
+			syslog(LOG_ERR, "failed to query socket name: %s",
+				strerror(errno));
+			goto error;
+		}
+
+		/* Copy the address */
+		taddr.addr.maxlen = taddr.addr.len = addrlen;
+		taddr.addr.buf = malloc(addrlen);
+		if (taddr.addr.buf == NULL) {
+			syslog(LOG_ERR,
+				"cannot allocate memory for %s address",
+				nconf->nc_netid);
+			goto error;
+		}
+		memcpy(taddr.addr.buf, &sa, addrlen);
+
+		my_xprt = (SVCXPRT *)svc_tli_create(fd, nconf, &taddr

[systemd-devel] [PATCH] rpcbind: systemd socket activation (v2)

2014-11-25 Thread Steve Dickson
This is based on a patch originally posted by Lennart Poettering:
http://permalink.gmane.org/gmane.linux.nfs/33774.

That patch was not merged due to the lack of a shared library and
as systemd was seen to be too Fedora specific.

Systemd now provides a shared library, and it is (or very soon will
be) the default init system on all the major Linux distributions.

This version of the patch has three changes from the original:

 * It uses the shared library.
 * It comes with unit files.
 * It is rebased on top of master.

Please review the patch with git show -b or otherwise ignoring the
whitespace changes, or it will be extremely difficult to read.

v5: incorporated comments on the PKG_CHECK_MODULES macro.

v4: reorganized the changes to make the diff easier to read
remove systemd scripts.

v3: rebase
fix typos
listen on /run/rpcbind.sock, rather than /var/run/rpcbind.sock (the
latter is a symlink to the former, but this means the socket can be
created before /var is mounted)
NB: this version has been compile-tested only as I no longer use
rpcbind myself
v2: correctly enable systemd code at compile time
handle the case where not all the required sockets were supplied
listen on udp/tcp port 111 in addition to /var/run/rpcbind.sock
do not daemonize

Tom Gundersen (1):
  rpcbind: add support for systemd socket activation

 Makefile.am   |  6 +
 configure.ac  | 12 +
 src/rpcbind.c | 81 ++-
 3 files changed, 93 insertions(+), 6 deletions(-)

-- 
1.9.3

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] [PATCH] rpcbind: systemd socket activation

2014-11-21 Thread Steve Dickson
This is a re-post of Tom's patch he posted a while back.
I made the following changes.

* Reorganized the changes so the diff is more readable
  by using a goto instead of indenting 268 lines.

* Removed the systemd scripts because they just didn't
  work and weren't going to work. I did spend some time 
  on them but debugging was not worth the time since 
  the current ones worked. I'm more than willing to 
  revisit this down the line.

Finally I would like to thank Tom for all of his help! 
It was much appreciated!

Tom Gundersen (1):
  rpcbind: add support for systemd socket activation

 Makefile.am   |  6 +
 configure.ac  | 10 
 src/rpcbind.c | 81 ++-
 3 files changed, 91 insertions(+), 6 deletions(-)

-- 
2.1.0



Re: [systemd-devel] Fedora NFS systemd units

2013-05-08 Thread Steve Dickson


On 06/05/13 06:29, Jóhann B. Guðmundsson wrote:
 On 05/06/2013 09:27 AM, Colin Guthrie wrote:
 Hi,

 Just trying to work out a few problems on our (Mageia's) NFS packages.

 As with a lot of things we often take the units from Fedora (we will
 soon have a nicer way to share units I hope - need to get release out
 the way before I can help and put my bit of the work into this tho').

 However I'm a bit confused by the latest units.
 
 Steve did not pull in all the units a while back [1] ( which I had broken 
 into special nfs target ) so I honestly expect nfs implemenation to be 
 utterly broken ( as it used to be ) in Fedora  + the units need to be 
 rewritten and necessary changes done to dracut for root on nfs4 to work ( 
 which I did not test or have in mind when creating them ).
 
 JBG
 
 1. https://bugzilla.redhat.com/show_bug.cgi?id=769879
I just did... https://bugzilla.redhat.com/show_bug.cgi?id=769879#c11

Please verify that everything that is needed is there... 

steved.
 
 
 ___
 systemd-devel mailing list
 systemd-devel@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Fedora NFS systemd units

2013-05-08 Thread Steve Dickson


On 06/05/13 15:36, Guillaume Rousse wrote:
 Le 06/05/2013 11:27, Colin Guthrie a écrit :
 Also, It is my understanding (and feel free to correct me here) but
 nfs-idmap is often needed on client systems also? I'm sure I had to
 configure a client in the past to ensure idmap was running in order to
 avoid permissions problems and users getting mapped to the 65k uid that
 means nobody.

 I had to force this by setting NEEDS_IDMAP=yes in the old sysconfig file
 /etc/sysconfig/nfs-common (I'm pretty sure we had the same sysvinit
 setup as Fedora in the past).
 We didn't :)
 
 I stole the nfs-common sysvinit script from Debian, to hide the complexity of 
 the gazillion different daemons needed.  AFAIK, Fedora always used 
 single-executable-services.
 
 
 And indeed idmapd is needed both sides for NFSv4.
This is no longer the case on the client. The kernel now calls the 
nfsidmap(5) command to do the idmapping, which is the reason 
rpc.idmapd is only started on the server side. 
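For reference, the client-side upcall is wired up through the kernel's request-key mechanism; the keyring configuration that nfs-utils installs looks roughly like this (a sketch from memory; the exact file name, options, and binary path may differ per distribution):

```
# /etc/request-key.d/id_resolver.conf (illustrative)
# When the NFS client needs an id mapped, the kernel runs nfsidmap
# via /sbin/request-key using a rule of this shape:
create  id_resolver  *  *  /usr/sbin/nfsidmap %k %d
```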

steved.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Fedora NFS systemd units

2013-05-08 Thread Steve Dickson


On 06/05/13 05:27, Colin Guthrie wrote:
 Hi,
 
 Just trying to work out a few problems on our (Mageia's) NFS packages.
 
 As with a lot of things we often take the units from Fedora (we will
 soon have a nicer way to share units I hope - need to get release out
 the way before I can help and put my bit of the work into this tho').
 
 However I'm a bit confused by the latest units.
 
 nfs-server.service:[Unit]
 nfs-server.service:Description=NFS Server
 nfs-server.service:Requires=proc-fs-nfsd.mount var-lib-nfs-rpc_pipefs.mount 
 rpcbind.service
 nfs-server.service:Requires=nfs-idmap.service nfs-mountd.service 
 nfs-rquotad.service
 nfs-server.service:After=network.target named.service
 nfs-server.service:[Service]
 nfs-server.service:Type=oneshot
 nfs-server.service:RemainAfterExit=yes
 nfs-server.service:StandardError=syslog+console
 nfs-server.service:EnvironmentFile=-/etc/sysconfig/nfs
 nfs-server.service:ExecStartPre=/usr/lib/nfs-utils/scripts/nfs-server.preconfig
 nfs-server.service:ExecStartPre=/usr/sbin/exportfs -r
 nfs-server.service:ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS $RPCNFSDCOUNT
 nfs-server.service:ExecStartPost=-/usr/lib/nfs-utils/scripts/nfs-server.postconfig
 nfs-server.service:ExecStop=/usr/sbin/rpc.nfsd 0
 nfs-server.service:ExecStopPost=/usr/sbin/exportfs -f
 nfs-server.service:[Install]
 nfs-server.service:WantedBy=multi-user.target
 
 This is the main server unit. It requires the idmap, mountd and rquotad
 services.
 
 It has After=named.service. Should this not be After=nss-lookup.target
 instead? Bind/named might not be the only thing that does name lookups
 after all and nss-lookup.target is meant to encapsulate this does it
 not? (e.g. ldap could factor in here).
I didn't know a nss-lookup.target existed... would that be better than
After=named.service?
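For the record, the change Colin suggests would look something like this in nfs-server.service (a sketch; nss-lookup.target is the generic synchronization point for any local name-resolution service, not just named):

```ini
[Unit]
# Order after whatever provides host-name lookups (bind, ldap, ...),
# instead of hard-coding named.service:
After=network.target nss-lookup.target
```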

 
 nfs-idmap.service:[Unit]
 nfs-idmap.service:Description=NFSv4 ID-name mapping daemon
 nfs-idmap.service:BindTo=nfs-server.service
 nfs-idmap.service:After=nfs-server.service
 nfs-idmap.service:[Service]
 nfs-idmap.service:Type=forking
 nfs-idmap.service:StandardError=syslog+console
 nfs-idmap.service:EnvironmentFile=-/etc/sysconfig/nfs
 nfs-idmap.service:ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS
 nfs-idmap.service:[Install]
 nfs-idmap.service:WantedBy=nfs.target
 
 This unit is bound to nfs-server so it will follow its start/stop
 cycle. Yet it is also wanted by nfs.target. What purpose does nfs.target
 actually serve here?
It was a request from https://bugzilla.redhat.com/show_bug.cgi?id=769879
It's not clear why an nfs.target is needed either, but it does not break
anything, so I went with it...

 
 Ditto for the mountd and rquotad units which are similarly structured.
 
 Also, It is my understanding (and feel free to correct me here) but
 nfs-idmap is often needed on client systems also? I'm sure I had to
 configure a client in the past to ensure idmap was running in order to
 avoid permissions problems and users getting mapped to the 65k uid that
 means nobody.
No. As of (I believe) F17, rpc.idmapd is no longer needed on the client.
The kernel now uses the nfsidmap(5) command to do the idmapping.

steved.

 
 I had to force this by setting NEEDS_IDMAP=yes in the old sysconfig file
 /etc/sysconfig/nfs-common (I'm pretty sure we had the same sysvinit
 setup as Fedora in the past).
 
 This being the case should idmap be enablable as an independent unit for
 client systems (same as nfs-lock.service). Again, feel free to correct
 me here if I'm wrong.
 
 If this is the case the BindTo would have to be dropped, but the Require
 could still be kept. The install rule would have to be made independent
 of nfs.target. To aid sysadmin clarity, it would make sense to have the
 nfs-server.service's [Install] section to have an Also= directive so
 that the relevant unit's enabled/disabled status's are shown more
 clearly to sysadmins.
 
 If mountd and rquotad make no sense to run separately then they should
 just have their [Install] sections nuked (more comments about rquoatad
 below tho').
 
 nfs.target:[Unit]
 nfs.target:Description=Network File System Server
 nfs.target:Requires=var-lib-nfs-rpc_pipefs.mount proc-fs-nfsd.mount 
 rpcbind.service
 nfs.target:After=network.target named.service 
 nfs.target:[Install]
 nfs.target:WantedBy=multi-user.target
 
 If nfs.target is Network File System Server, and the units are
 already set to be BindTo AND Require, then I really don't grok what
 nfs.target is for. It's not like it provides any additional level of
 isolation or configurability. In fact, enabling/disabling idmap, mountd
 and rquotad services will have no effect anyway due to them being
 requires/bound to nfs-server.service. Should this target just be dropped?
 
 
 nfs-rquotad.service:[Unit]
 nfs-rquotad.service:Description=NFS Remote Quota Server
 nfs-rquotad.service:BindTo=nfs-server.service
 nfs-rquotad.service:After=nfs-server.service
 nfs-rquotad.service:[Service]
 nfs-rquotad.service:Type=forking
 

[systemd-devel] Using Multiple EnvironmentFile lines

2011-08-02 Thread Steve Dickson
Hello,

I noticed that the ypbind.service used multiple
EnvironmentFile lines so thought this would be a 
good way to build command lines to daemons on the fly... 

So the nfs-server.service looks like:

[Unit]
Description=NFS Protocol Daemon
After=network.target rpcbind.service
ConditionPathIsDirectory=/sys/module/sunrpc

[Service]
EnvironmentFile=-/etc/sysconfig/nfs
EnvironmentFile=/usr/lib/nfs-utils/scripts/nfs-server.preconfig
ExecStartPre=-/usr/sbin/rpc.rquotad $RPCRQUOTADOPTS
ExecStartPre=/usr/sbin/exportfs -r
ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS ${RPCNFSDCOUNT}
ExecStartPost=/usr/sbin/rpc.mountd $RPCMOUNTDOPTS
ExecStartPost=-/usr/lib/nfs-utils/scripts/nfs-server.postconfig
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target


The /etc/sysconfig/nfs is like what is installed today.

The nfs-server.preconfig looks like:

#
# Set up arguments to rpc.mountd
#
# Define the port rpc.mountd listens on if requested
[ -n "$MOUNTD_PORT" ] && RPCMOUNTDOPTS="$RPCMOUNTDOPTS -p $MOUNTD_PORT"

case "$MOUNTD_NFS_V2" in
no|NO)
    RPCMOUNTDOPTS="$RPCMOUNTDOPTS --no-nfs-version 2" ;;
esac

case "$MOUNTD_NFS_V3" in
no|NO)
    RPCMOUNTDOPTS="$RPCMOUNTDOPTS --no-nfs-version 3" ;;
esac

#
# Set up arguments to rpc.nfsd
#
# Define the number of nfsd processes to be started by default
[ -z "$RPCNFSDCOUNT" ] && RPCNFSDCOUNT=8

# Set v4 grace period if requested
[ -n "$NFSD_V4_GRACE" ] && {
    echo "$NFSD_V4_GRACE" > /proc/fs/nfsd/nfsv4gracetime
}

Now it appears the variables being set in nfs-server.preconfig
(the second EnvironmentFile line) are not being carried over
into the .service file.

For example if RPCNFSDCOUNT is not set in /etc/sysconfg/nfs 
(the first EnvironmentFile line) and is set in the second 
EnvironmentFile, ${RPCNFSDCOUNT} does not have a value when 
rpc.nfsd is started in the service file. Note setting the variable 
in the first EnvironmentFile always works. 

Can one have multiple Environment Files to set multiple
env variables?  

Is this the correct way to build command lines for daemons on the fly?

tia,

steved. 
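A likely explanation, for the record: systemd parses EnvironmentFile= files as plain KEY=VALUE lines; it does not run them through a shell, so the case statements and test expressions in nfs-server.preconfig above are never evaluated. A minimal sketch of the usual workaround, with illustrative paths: run the shell logic as a real script (e.g. from ExecStartPre=) and have it emit a flat KEY=VALUE file for systemd to consume.

```shell
# Demonstration (not the actual Fedora scripts): compute option strings in
# real shell, then write them out as the flat KEY=VALUE lines that are the
# only format EnvironmentFile= understands.

conf=$(mktemp)      # stand-in for /etc/sysconfig/nfs
printf 'MOUNTD_PORT=20048\n' > "$conf"

RPCNFSDCOUNT= RPCMOUNTDOPTS=
. "$conf"

# The conditional logic that systemd cannot evaluate itself:
[ -z "$RPCNFSDCOUNT" ] && RPCNFSDCOUNT=8
[ -n "$MOUNTD_PORT" ] && RPCMOUNTDOPTS="-p $MOUNTD_PORT"

# Emit the computed values as flat assignments:
out=$(mktemp)       # stand-in for e.g. a file under /run
{
    echo "RPCNFSDCOUNT=$RPCNFSDCOUNT"
    echo "RPCMOUNTDOPTS=$RPCMOUNTDOPTS"
} > "$out"
cat "$out"
```

Whether a file written by ExecStartPre= is then picked up by a later EnvironmentFile= line of the same unit depends on the systemd version, so treat this strictly as a sketch of the parsing issue.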
 
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd automounts

2011-08-02 Thread Steve Dickson


On 08/02/2011 08:55 AM, Mirco Tischler wrote:
 2011/8/2 Steve Dickson ste...@redhat.com:


 On 08/02/2011 04:35 AM, Mirco Tischler wrote:
 2011/8/2 Steve Dickson ste...@redhat.com:


 On 08/01/2011 09:10 PM, Mirco Tischler wrote:
 Hi
 2011/8/2 Steve Dickson ste...@redhat.com:

 Yes, this looks like a good usecase.

 Hmm, does the automount point work after boot?
 It seems so, because if I restart nfs-idmap.service the
 service comes up.


 How does the output of systemctl list-units look like for the
 automount and mount unit?

 attached.

 steved.

 The attached output indicates that your automount unit isn't started,
 and I can't see anything causing it to start in your unit files
 either. You can verify this with systemctl status
 var-lib-nfs-rpc_pipefs.automount.
 It appears you are correct. systemctl status 
 var-lib-nfs-rpc_pipefs.automount
 shows the status as not started.

 Note that After= is only an ordering information and doesn't cause the
 unit to be started. Only if the automount is started anyway through
 some other path, the After= line causes the service to wait until the
 automount point is started.You may need to add a line
 Wants=var-lib-nfs-rpc_pipefs.automount to your service file.

 Does that help you?
 Adding that wants does start var-lib-nfs-rpc_pipefs.automount but
 now I'm getting two mounts...

 # mount | grep rpc
 systemd-1 on /var/lib/nfs/rpc_pipefs type autofs 
 (rw,relatime,fd=16,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
 sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)

 or is that normal for automounts?

 steved.

 Yup, that's normal. The systemd-1 entry is the automount point.
 Thanks for this tip... but things are still not quite right...

 the Starting RPC Pipe File System (i.e. the mount of 
 /var/lib/nfs/rpc_pipefs)
 was happening later than the Starting Name to UID/GID mapping for NFSv4
 (i.e. the nfs-idmap.service), so I added the 
 After=var-lib-nfs-rpc_pipefs.automount
 line back to nfs-idmap.service, which didn't seem to work... Looking at the boot 
 messages,
 the nfs-idmap.service is still being started before the automount.

 How do I guarantee the automount happens and finishes before 
 nfs-idmap.service
 is started?
 This is not how automount works. The real fs is only mounted when the
 first access is done, which is of course
 after the service begins to start. The access is then blocked until
 the fs is mounted.
 However I'm starting to think automount isn't what you want at all.
 Your nfs-idmap.service is started on boot and needs the rpc fs during
 service startup so it doesn't make a lot of sense to delay the start
 of the service with an automount point. Automount is more useful IMO
 when the fs isn't usually used during boot.
 Why don't you discard the automount unit and instead reference the
 actual mount unit in your service file with After= and Wants= lines.
 Does that make sense to you?
Yes it does and changing the mount from an automount to an actual mount
did fix the problem... Thank you very much!
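In unit-file form, the fix Mirco describes would look roughly like this (a sketch; unit names as used earlier in the thread):

```ini
# nfs-idmap.service: depend on the real mount unit, not the automount,
# since the service needs the fs mounted before it starts.
[Unit]
Description=Name to UID/GID mapping for NFSv4
Wants=var-lib-nfs-rpc_pipefs.mount
After=var-lib-nfs-rpc_pipefs.mount
```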

There is an oddity though... I noted the var-lib-nfs-rpc_pipefs.mount actually
failed 

systemctl status var-lib-nfs-rpc_pipefs.mount
var-lib-nfs-rpc_pipefs.mount - RPC Pipe File System
  Loaded: loaded (/lib/systemd/system/var-lib-nfs-rpc_pipefs.mount)
  Active: active (mounted) since Tue, 02 Aug 2011 09:35:31 -0400; 1min 
56s ago
   Where: /var/lib/nfs/rpc_pipefs
What: sunrpc
 Process: 857 ExecMount=/bin/mount sunrpc /var/lib/nfs/rpc_pipefs -t 
rpc_pipefs -o rw,relatime (code=exited, status=32)
  CGroup: name=systemd:/system/var-lib-nfs-rpc_pipefs.mount

But the file system was mounted anyway, maybe because the fs was already
mounted? I rebooted
a couple of times and the service continually comes up, so I'm moving on...

Again, thanks for your help! It's most definitely appreciated!

steved.
 

 Also now when I reboot the system hangs for a bit due to the following
 problem:

 [  272.510946] systemd[1]: var-lib-nfs-rpc_pipefs.mount mounting timed out. 
 Killing.
 [  362.511271] systemd[1]: var-lib-nfs-rpc_pipefs.mount mount process still 
 around after SIGKILL. Ignoring.
 [  362.609307] systemd[1]: Unit var-lib-nfs-rpc_pipefs.mount entered failed 
 state.

 So it appears the ordering of the shutdown is not quite right either...
 I'm not sure if this is the problem but you have
 DefaultDependencies=no in your unit files. Do you really need this? If
 yes you might need to add Conflicts=shutdown.target to make this work

 steved.
 
 Mirco
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Using Multiple EnvironmentFile lines

2011-08-02 Thread Steve Dickson


On 08/02/2011 03:17 PM, Jóhann B. Guðmundsson wrote:
 On 08/02/2011 07:11 PM, Mantas Mikulėnas wrote:
 It doesn't make much sense for me to run nfsd, mountd, rquotad and everything 
 from a single .service unit - after all, they are separate services with 
 their own protocols... I might want to just restart rpc.idmapd without 
 killing the rest of NFS.

 For Arch Linux I tried to separate everything into their own units; far from 
 perfect, but it's much cleaner:
 
 Same thing I did but Steve did not like that...

I did play around with Mantas' idea of breaking things
up in that fashion, but I quickly ran into a dependency knot
that looked a bit ominous ;-) So
I just integrated Mantas' changes into the existing scripts
and kept nfs-server.service the same.

The scripts I plan on committing tomorrow are at:
http://people.redhat.com/steved/.tmp/systemd/

I'm waiting on the NFS team at Red Hat to make
comments... but please feel free to share your opinion as
well... They will be well-used scripts... ;-)

In the end, Johann, they are very similar to what you suggested
a while ago, which means I lost a number of configuration
knobs due to the simplistic environment that systemd supports.

But... only time will tell how much they will be missed, and
they were mostly on legacy daemons that I can hopefully
get rid of in the near future...

I just want to thank everybody for their help! It's truly
appreciated!

steved.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd automounts

2011-08-01 Thread Steve Dickson


On 08/01/2011 05:43 PM, Lennart Poettering wrote:
 On Fri, 29.07.11 11:16, Steve Dickson (ste...@redhat.com) wrote:
 
 I'm trying to automount /var/lib/nfs/rpc_pipefs
 for the nfs-idmap.service

 var-lib-nfs-rpc_pipefs.mount is:
 [Unit]
 Description=RPC Pipe File System
 DefaultDependencies=no

 [Mount]
 What=sunrpc
 Where=/var/lib/nfs/rpc_pipefs
 Type=rpc_pipefs
 
 Looks good.
 

 var-lib-nfs-rpc_pipefs.automount is:
 [Unit]
 Description=RPC Pipe File System
 DefaultDependencies=no

 [Automount]
 Where=/var/lib/nfs/rpc_pipefs
 
 Looks good, too. But I'd recommend adding After=local-fs.target here, to
 ensure your automount unit is established after /var is, if that's on a
 separate partition.
Added.

 
 and the nfs-idmap.service is:
 [Unit]
 Description=Name to UID/GID mapping for NFSv4.
 After=syslog.target network.target var-lib-nfs-rpc_pipefs.automount
 ConditionPathIsDirectory=/sys/module/sunrpc
 
 Is this really dependent on the network? If not I'd recommend to
 ordering this after network.target.
No, so I removed the network.target.

 
 Also, in F16 we will no longer support non-socket-activated syslogs (all
 existing implementations have support for socket actviation upstream),
 so the After=syslog.target is not necessary anymore.
Ok. I remove the syslog.target  so the after line is:
  After=var-lib-nfs-rpc_pipefs.automount 

 
 [Service]
 Type=forking
 EnvironmentFile=-/etc/sysconfig/nfs
 ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS

 [Install]
 WantedBy=multi-user.target

 Now I know for a fact that /var/lib/nfs/rpc_pipefs
  is being mounted *after* the nfs-idmap.service 
 is run, because:
 
 being mounted?
Yes. Once the machine comes up the 
   sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)

mount exists. But it's not being mounted soon enough for
the nfs-idmap.service.

 
 You mean the automount point being established, not the backing mount, right?
I'm not sure what you are asking me.

 
 rpc.idmapd is failing because 
  rpc.idmapd[819]: main: open(/var/lib/nfs/rpc_pipefs//nfs): No such file 
 or
 directory

  and the startup messages clearly show the service is being
 run before the mount:

 Starting Name to UID/GID mapping for NFSv4
 Starting OpenSSH server daemon
 Started OpenSSH server daemon..
 Starting RPC bind service...
 Starting Sendmail Mail Transport Agent...
 Started LSB: Mount and unmount network filesystems..
 [   25.803165] RPC: Registered named UNIX socket transport module.
 [   25.804236] RPC: Registered udp transport module.
 [   25.805327] RPC: Registered tcp transport module.
 [   25.806283] RPC: Registered tcp NFSv4.1 backchannel transport module.
 [   25.889822] SELinux: initialized (dev rpc_pipefs, type rpc_pipefs), uses 
 genfs_contexts

  So any idea on what I'm doing wrong? Is this how automounts are
  supposed to be used?
 
 Yes, this looks like a good usecase.
 
 Hmm, does the automount point work after boot?
It seems so, because if I restart nfs-idmap.service the 
service comes up.

 
 How does the output of systemctl list-units look like for the
 automount and mount unit?
 
attached.

steved.
dev-hugepages.automount   loaded active waiting   Huge Pages File System 
Automount Point
dev-mqueue.automount  loaded active waiting   POSIX Message Queue File 
System Automount Point
proc-sys...misc.automount loaded active waiting   Arbitrary Executable File 
Formats File System Automount Point
sys-kern...ebug.automount loaded active waiting   Debug File System 
Automount Point
sys-kern...rity.automount loaded active waiting   Security File System 
Automount Point
sys-devi...ock-sr0.device loaded active plugged   QEMU_DVD-ROM
sys-devi...et-eth0.device loaded active plugged   
/sys/devices/pci:00/:00:03.0/virtio0/net/eth0
sys-devi...d-card0.device loaded active plugged   
/sys/devices/pci:00/:00:04.0/sound/card0
sys-devi...da-vda1.device loaded active plugged   
/sys/devices/pci:00/:00:05.0/virtio1/block/vda/vda1
sys-devi...da-vda2.device loaded active plugged   
/sys/devices/pci:00/:00:05.0/virtio1/block/vda/vda2
sys-devi...ock-vda.device loaded active plugged   
/sys/devices/pci:00/:00:05.0/virtio1/block/vda
sys-devi...y-ttyS0.device loaded active plugged   
/sys/devices/platform/serial8250/tty/ttyS0
sys-devi...y-ttyS1.device loaded active plugged   
/sys/devices/platform/serial8250/tty/ttyS1
sys-devi...y-ttyS2.device loaded active plugged   
/sys/devices/platform/serial8250/tty/ttyS2
sys-devi...y-ttyS3.device loaded active plugged   
/sys/devices/platform/serial8250/tty/ttyS3
sys-devi...dm\x2d0.device loaded active plugged   
/sys/devices/virtual/block/dm-0
sys-devi...dm\x2d1.device loaded active plugged   
/sys/devices/virtual/block/dm-1
sys-devi...dm\x2d2.device loaded active plugged   
/sys/devices/virtual/block/dm-2
sys-devi...dm\x2d3.device loaded active plugged   
/sys/devices/virtual/block/dm-3
sys-devi...dm\x2d4

[systemd-devel] EnvironmentFile being ignored.

2011-07-29 Thread Steve Dickson
Hello,

Doing a 'systemctl show' it appears my EnvironmentFile 
is being ignored

EnvironmentFile=/etc/sysconfig/nfs (ignore=yes)

Why is this happening?
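A note for later readers: the (ignore=yes) shown by systemctl show almost certainly reflects the leading - on the EnvironmentFile= path, which tells systemd to tolerate a missing file instead of failing the unit; it does not mean the file's contents are ignored when present. Roughly:

```ini
[Service]
# "-" prefix: don't fail the unit if the file doesn't exist
# (reported as ignore=yes by systemctl show)
EnvironmentFile=-/etc/sysconfig/nfs
```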

steved.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] systemd automounts

2011-07-29 Thread Steve Dickson
I'm trying to automount /var/lib/nfs/rpc_pipefs
for the nfs-idmap.service

var-lib-nfs-rpc_pipefs.mount is:
[Unit]
Description=RPC Pipe File System
DefaultDependencies=no

[Mount]
What=sunrpc
Where=/var/lib/nfs/rpc_pipefs
Type=rpc_pipefs

var-lib-nfs-rpc_pipefs.automount is:
[Unit]
Description=RPC Pipe File System
DefaultDependencies=no

[Automount]
Where=/var/lib/nfs/rpc_pipefs

and the nfs-idmap.service is:
[Unit]
Description=Name to UID/GID mapping for NFSv4.
After=syslog.target network.target var-lib-nfs-rpc_pipefs.automount
ConditionPathIsDirectory=/sys/module/sunrpc

[Service]
Type=forking
EnvironmentFile=-/etc/sysconfig/nfs
ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS

[Install]
WantedBy=multi-user.target


Now I know for a fact that /var/lib/nfs/rpc_pipefs
is being mounted *after* the nfs-idmap.service 
is run, because:

rpc.idmapd is failing because 
 rpc.idmapd[819]: main: open(/var/lib/nfs/rpc_pipefs//nfs): No such file or
directory

and the startup messages clearly show the service is being
run before the mount:

Starting Name to UID/GID mapping for NFSv4
Starting OpenSSH server daemon
Started OpenSSH server daemon..
Starting RPC bind service...
Starting Sendmail Mail Transport Agent...
Started LSB: Mount and unmount network filesystems..
[   25.803165] RPC: Registered named UNIX socket transport module.
[   25.804236] RPC: Registered udp transport module.
[   25.805327] RPC: Registered tcp transport module.
[   25.806283] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   25.889822] SELinux: initialized (dev rpc_pipefs, type rpc_pipefs), uses 
genfs_contexts

So any idea on what I'm doing wrong? Is this how automounts are
supposed to be used?

tia...

steved.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel