On (23/12/16 10:29), Jakub Hrozek wrote:
>On Thu, Dec 22, 2016 at 08:38:38PM -0500, Dan Kemp wrote:
>> Hello,
>> 
>> I recently upgraded my FreeIPA servers, and most of the clients, from
>> version 4.2.0 to 4.4.0 (current in the CentOS 7 repos). After the server
>> and client updates, I can no longer log in to the updated clients via ssh.
>> Login to non-updated clients works as before.
>> 
>> The SSH connections fail with:
>> 
>> Connection closed by UNKNOWN
>> 
>> I ran sssd with debugging on a failing 4.4.0 client and got this error log:
>> 
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [ldb] (0x4000): commit
>> ldb transaction (nesting: 2)
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [ldb] (0x4000): commit
>> ldb transaction (nesting: 1)
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [ldb] (0x4000): commit
>> ldb transaction (nesting: 0)
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]]
>> [selinux_child_create_buffer] (0x4000): buffer size: 45
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [child_handler_setup]
>> (0x2000): Setting up signal handler up for pid [437]
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [child_handler_setup]
>> (0x2000): Signal handler set up for pid [437]
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [sdap_process_result]
>> (0x2000): Trace: sh[0x560c04c37790], connected[1], ops[(nil)],
>> ldap[0x560c04c32d60]
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [sdap_process_result]
>> (0x2000): Trace: end of ldap_result list
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [write_pipe_handler]
>> (0x0400): All data has been sent!
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [main] (0x0400):
>> selinux_child started.
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [main] (0x2000):
>> Running with effective IDs: [0][0].
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [main] (0x2000):
>> Running with real IDs [0][0].
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [main] (0x0400):
>> context initialized
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [unpack_buffer]
>> (0x2000): seuser length: 12
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [unpack_buffer]
>> (0x2000): seuser: unconfined_u
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [unpack_buffer]
>> (0x2000): mls_range length: 14
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [unpack_buffer]
>> (0x2000): mls_range: s0-s0:c0.c1023
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [unpack_buffer]
>> (0x2000): username length: 7
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [unpack_buffer]
>> (0x2000): username: ipauser
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [main] (0x0400):
>> performing selinux operations
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [sss_semanage_init]
>> (0x0020): SELinux policy not managed
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [get_seuser]
>> (0x0020): Cannot create SELinux handle
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]]
>> [seuser_needs_update] (0x2000): get_seuser: ret: 5 seuser: unknown mls:
>> unknown
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [sss_semanage_init]
>> (0x0020): SELinux policy not managed
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [set_seuser]
>> (0x0020): Cannot init SELinux management
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [main] (0x0020):
>> Cannot set SELinux login context.
>> (Wed Dec 20 20:38:13 2016) [[sssd[selinux_child[437]]]] [main] (0x0020):
>> selinux_child failed!
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [read_pipe_handler]
>> (0x0400): EOF received, client finished
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [selinux_child_done]
>> (0x0020): selinux_child_parse_response failed: [22][Invalid argument]
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [dp_req_done] (0x0400):
>> DP Request [PAM SELinux #3]: Request handler finished [0]: Success
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [_dp_req_recv]
>> (0x0400): DP Request [PAM SELinux #3]: Receiving request data.
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [dp_req_destructor]
>> (0x0400): DP Request [PAM SELinux #3]: Request removed.
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [dp_req_destructor]
>> (0x0400): Number of active DP request: 0
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [dp_pam_reply]
>> (0x1000): DP Request [PAM Account #2]: Sending result [4][domain.local]
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [child_sig_handler]
>> (0x1000): Waiting for child [437].
>> (Wed Dec 20 20:38:13 2016) [sssd[be[domain.local]]] [child_sig_handler]
>> (0x0020): child [437] failed with status [1].
>> (Wed Dec 20 20:38:13 2016) [sssd[pam]] [sbus_remove_timeout] (0x2000):
>> 0x55f4be11f4c0
>> (Wed Dec 20 20:38:13 2016) [sssd[pam]] [sbus_dispatch] (0x4000): dbus conn:
>> 0x55f4be1191b0
>> (Wed Dec 20 20:38:13 2016) [sssd[pam]] [sbus_dispatch] (0x4000):
>> Dispatching.
>> (Wed Dec 20 20:38:13 2016) [sssd[pam]] [pam_dp_process_reply] (0x0200):
>> received: [4 (System error)][domain.local]
>> (Wed Dec 20 20:38:13 2016) [sssd[pam]] [pam_reply] (0x0200): pam_reply
>> called with result [4]: System error.
>> (Wed Dec 20 20:38:13 2016) [sssd[pam]] [pam_reply] (0x0200): blen: 39
>> (Wed Dec 20 20:38:13 2016) [sssd[pam]] [reset_idle_timer] (0x4000): Idle
>> timer re-set for client [0x55f4be11f980][19]
>> (Wed Dec 20 20:38:13 2016) [sssd[pam]] [reset_idle_timer] (0x4000): Idle
>> timer re-set for client [0x55f4be11f980][19]
>> (Wed Dec 20 20:38:13 2016) [sssd[pam]] [client_recv] (0x0200): Client
>> disconnected!
>> (Wed Dec 20 20:38:13 2016) [sssd[pam]] [client_close_fn] (0x2000):
>> Terminated client [0x55f4be11f980][19]
>> (Wed Dec 20 20:38:13 2016) [sssd[nss]] [reset_idle_timer] (0x4000): Idle
>> timer re-set for client [0x5604f5217c60][22]
>> (Wed Dec 20 20:38:13 2016) [sssd[nss]] [client_recv] (0x0200): Client
>> disconnected!
>> (Wed Dec 20 20:38:13 2016) [sssd[nss]] [client_close_fn] (0x2000):
>> Terminated client [0x5604f5217c60][22]
>> (Wed Dec 20 20:38:13 2016) [sssd[nss]] [reset_idle_timer] (0x4000): Idle
>> timer re-set for client [0x5604f5216b20][21]
>> (Wed Dec 20 20:38:13 2016) [sssd[nss]] [client_recv] (0x0200): Client
>> disconnected!
>> (Wed Dec 20 20:38:13 2016) [sssd[nss]] [client_close_fn] (0x2000):
>> Terminated client [0x5604f5216b20][21]
>> 
>> 
>> SELinux is disabled, as I am running the clients in LXC containers on a
>> Debian host. My setup is pretty simple and was working before (and
>> connections to 4.2.0 clients are still functional!). Nothing looks amiss in
>> the ssh logs.
>
Could you provide the output of "sestatus" from both the host and the LXC container?

>If you don't use SELinux, then I'm quite sure you can just set:
>    selinux_provider=none
>in the domain section of sssd.conf, then selinux_child won't be called
>at all.
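For reference, the fragment would look roughly like this (the domain name
below is a placeholder; replace it with the domain section you already have
in /etc/sssd/sssd.conf):

```ini
[domain/domain.local]
# ... existing provider settings ...
# Disable the SELinux user-mapping step so selinux_child is never invoked:
selinux_provider = none
```

Restart sssd after the change so the backend picks up the new provider.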
>
>But to be honest I'm not sure what exactly can cause the error from
>sss_semanage_init(). Looking at libsemanage, it just connects to the
>policy store (typically /etc/selinux/targeted/modules/). The only idea I
>have is that the store might not be present in the LXC container, or the
>LXC container might not support the extended FS attributes that are IIRC
>used to store SELinux labels.
>
>Stracing the selinux child (see
>https://fedorahosted.org/sssd/wiki/DevelTips#UsingstracetotracktheSSSDprocesses)
>might give a better idea which exact syscall failed.
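Something along these lines should work (paths and debug level are just
examples; you need to run this as root and then reproduce the failed ssh
login):

```shell
# Stop the running service, then run sssd in the foreground under strace,
# following forked children so selinux_child is traced too:
systemctl stop sssd
strace -f -o /tmp/sssd.strace /usr/sbin/sssd -i -d 6

# After reproducing the failure, look for the syscalls that returned errors
# around the semanage/selinux paths:
grep -n -e selinux -e semanage /tmp/sssd.strace
```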
>
IMHO, the LXC container was not created in the best way.
Did you bind-mount /sys from the host?

All SELinux-related code should check /sys/fs/selinux/
and /sys/fs/selinux/enforce, and it should not fail when
SELinux is in disabled mode.

LS

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project