Re: [Freeipa-users] IPA and DNS reverse subnets

2017-01-30 Thread lejeczek



On 30/01/17 19:32, Tomasz Torcz wrote:
> On Mon, Jan 30, 2017 at 07:12:10PM +, lejeczek wrote:
>> On 30/01/17 18:28, Tomasz Torcz wrote:
>>> On Mon, Jan 30, 2017 at 06:01:03PM +, lejeczek wrote:
>>>> hi everybody
>>>>
>>>> I'm having trouble trying to figure out, in other words make this work:
>>>>
>>>> I'm setting up a domain in a subnet like this: 10.5.10.48/28, but not sure if I got it right.
>>>> Host reverse resolving does not seem to work right. I have:
>>>>
>>>>   Zone name: 28/48.10.5.10.in-addr.arpa.   <= this here is unusual; I understand it's how such a reverse subnet should be defined, but not 100% sure.
>>>
>>> Here you got it wrong.  IPv4 reverses are split at octet boundaries; you cannot have greater granularity.  And you certainly cannot mix CIDR addressing (/28) with the netblock style.  On top of that, “/” is not a correct character in DNS.
>>
>> how about this - http://www.zytrax.com/books/dns/ch9/reverse.html - would this not work?
>
> Wow. This is the first time in my life I have seen this notation. Nevertheless, I was wrong in my previous email.
> Having read your link, I found
> http://www.freeipa.org/page/Howto/DNS_classless_IN-ADDR.ARPA_delegation
> Is this helpful?

Meanwhile I had it working partially; delegation to subnets works, but not everything.

More tampering to do. I'll post more findings later, hopefully.
Thanks.

--
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project

Re: [Freeipa-users] IPA and DNS reverse subnets

2017-01-30 Thread Tomasz Torcz
On Mon, Jan 30, 2017 at 07:12:10PM +, lejeczek wrote:
> On 30/01/17 18:28, Tomasz Torcz wrote:
> > On Mon, Jan 30, 2017 at 06:01:03PM +, lejeczek wrote:
> > > hi everybody
> > > 
> > > I'm having trouble trying to figure out, in other words make this work:
> > > 
> > > I'm setting up a domain in a subnet like this: 10.5.10.48/28, but not sure if I got it right.
> > > Host reverse resolving does not seem to work right. I have:
> > > 
> > >   Zone name: 28/48.10.5.10.in-addr.arpa.   <= this here is unusual; I understand it's how such a reverse subnet should be defined, but not 100% sure.
> > 
> > Here you got it wrong.  IPv4 reverses are split at octet boundaries; you cannot have greater granularity.  And you certainly cannot mix CIDR addressing (/28) with the netblock style.  On top of that, “/” is not a correct character in DNS.
> 
> how about this - http://www.zytrax.com/books/dns/ch9/reverse.html - would this not work?

  Wow. This is the first time in my life I have seen this notation. Nevertheless, I was wrong in my previous email.
  Having read your link, I found
http://www.freeipa.org/page/Howto/DNS_classless_IN-ADDR.ARPA_delegation
Is this helpful?
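For the archives: the scheme described in those two links (RFC 2317 classless delegation) boils down to something like the BIND-style sketch below. This is only an illustration using the names from this thread, not the literal zone data; note that RFC 2317 conventionally names the child zone <network>/<prefix> (so 48/28, not 28/48), and the exact FreeIPA steps are in the howto above.

```
; In the parent zone 10.5.10.in-addr.arpa. (run by whoever owns the /24):
48/28   IN NS    work1.whale.private.
55      IN CNAME 55.48/28.10.5.10.in-addr.arpa.

; In the delegated child zone 48/28.10.5.10.in-addr.arpa.:
55      IN PTR   work1.whale.private.
```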

-- 
Tomasz Torcz                ,,If you try to upissue this patchset I shall be seeking
xmpp: zdzich...@chrome.pl     an IP-routable hand grenade.'' -- Andrew Morton (LKML)


Re: [Freeipa-users] IPA and DNS reverse subnets

2017-01-30 Thread lejeczek



On 30/01/17 18:28, Tomasz Torcz wrote:
> On Mon, Jan 30, 2017 at 06:01:03PM +, lejeczek wrote:
>> hi everybody
>>
>> I'm having trouble trying to figure out, in other words make this work:
>>
>> I'm setting up a domain in a subnet like this: 10.5.10.48/28, but not sure if I got it right.
>> Host reverse resolving does not seem to work right. I have:
>>
>>   Zone name: 28/48.10.5.10.in-addr.arpa.   <= this here is unusual; I understand it's how such a reverse subnet should be defined, but not 100% sure.
>
> Here you got it wrong.  IPv4 reverses are split at octet boundaries; you cannot have greater granularity.  And you certainly cannot mix CIDR addressing (/28) with the netblock style.  On top of that, “/” is not a correct character in DNS.

how about this - http://www.zytrax.com/books/dns/ch9/reverse.html - would this not work?

> Your reverse zone is 10.5.10.in-addr.arpa.
>
> (IPv6 reverses are split at nibble boundary, FWIW).



Re: [Freeipa-users] IPA and DNS reverse subnets

2017-01-30 Thread Tomasz Torcz
On Mon, Jan 30, 2017 at 06:01:03PM +, lejeczek wrote:
> hi everybody
> 
> I'm having trouble trying to figure out, in other words make this work:
> 
> I'm setting up a domain in a subnet like this: 10.5.10.48/28, but not sure if I got it right.
> Host reverse resolving does not seem to work right. I have:
> 
>   Zone name: 28/48.10.5.10.in-addr.arpa.   <= this here is unusual; I understand it's how such a reverse subnet should be defined, but not 100% sure.

  Here you got it wrong.  IPv4 reverses are split at octet boundaries; you cannot have greater granularity.  And you certainly cannot mix CIDR addressing (/28) with the netblock style.  On top of that, “/” is not a correct character in DNS.

  Your reverse zone is 10.5.10.in-addr.arpa.

(IPv6 reverses are split at nibble boundary, FWIW).
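Concretely, since the split can only happen per octet, the PTR record for any 10.5.10.x address lives directly in 10.5.10.in-addr.arpa. A quick shell sketch of the mapping, using the address from this thread:

```shell
# Map an IPv4 address to its PTR owner name and to the octet-boundary
# (/24) reverse zone that must hold it.
ip=10.5.10.55

# Split the address into its four octets.
IFS=. read -r o1 o2 o3 o4 <<EOF
$ip
EOF

echo "PTR owner: $o4.$o3.$o2.$o1.in-addr.arpa."   # 55.10.5.10.in-addr.arpa.
echo "/24 zone:  $o3.$o2.$o1.in-addr.arpa."       # 10.5.10.in-addr.arpa.
```

This also matches the name in the NXDOMAIN error further down the thread: the resolver asks for 55.10.5.10.in-addr.arpa., so that is the zone IPA must be able to answer for.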

-- 
Tomasz Torcz                ,,If you try to upissue this patchset I shall be seeking
xmpp: zdzich...@chrome.pl     an IP-routable hand grenade.'' -- Andrew Morton (LKML)


[Freeipa-users] IPA and DNS reverse subnets

2017-01-30 Thread lejeczek

hi everybody

I'm having trouble trying to figure out, in other words make this work:

I'm setting up a domain in a subnet like this: 10.5.10.48/28, but not sure if I got it right.

Host reverse resolving does not seem to work right. I have:

  Zone name: whale.private.
  Active zone: TRUE
  Authoritative nameserver: work1.whale.private.
  Administrator e-mail address: hostmaster.whale.private.
  SOA serial: 1485797688
  SOA refresh: 3600
  SOA retry: 900
  SOA expire: 1209600
  SOA minimum: 3600
  Allow query: any;
  Allow transfer: none;

  Zone name: 28/48.10.5.10.in-addr.arpa.   <= this here is unusual; I understand it's how such a reverse subnet should be defined, but not 100% sure.

  Active zone: TRUE
  Authoritative nameserver: work1.whale.private.
  Administrator e-mail address: hostmaster
  SOA serial: 1485790340
  SOA refresh: 3600
  SOA retry: 900
  SOA expire: 1209600
  SOA minimum: 3600
  Allow query: any;
  Allow transfer: none;

but:

~]$ host 10.5.10.55
Host 55.10.5.10.in-addr.arpa. not found: 3(NXDOMAIN)

and when I try to install a replica:

~]$ ipa-replica-install --setup-dns --no-forwarders --setup-ca
Password for admin@WHALE.PRIVATE:
ipa         : ERROR    Reverse DNS resolution of address 10.5.10.55
(work1.whale.private) failed. Clients may not function properly. Please
check your DNS setup. (Note that this check queries IPA DNS directly and
ignores /etc/hosts.)


I understand it's all in DNS, so... how to tweak it, to fix it?
many thanks,
L.

Re: [Freeipa-users] Needs help understand this timeout issue

2017-01-30 Thread Sullivan, Daniel [CRI]
I have had to deal with the symptoms you describe, though never with 730 groups.  In my experience, looking up a user in an AD trusted domain is a resource-intensive process on the server.  I'd first take a look at your logs to see whether the lookup is failing on the server or on the client; the logs should tell you this.  My suspicion is that the timeout is actually occurring on the server.

If the timeout is occurring on the server, I would start by increasing one or 
both of these values:

ldap_opt_timeout
ldap_search_timeout

If that doesn't work, I'd take a look to see whether the 389 server is under high load when you are performing this operation.  The easiest way I have found to do this is to execute an LDAP query directly against the IPA server while you are performing an id lookup, for example:

ldapsearch -D "cn=Directory Manager" -w  -s base -b "cn=config" "(objectclass=*)"

If the LDAP server is not responsive, you probably need to increase the number of worker threads for 389-ds.  Also, you might want to disable referrals; check out the man pages for this:

ldap_referrals = false

Also, FWIW, if you crank up debug logging on the sssd client, you should be able to see the number of seconds of timeout assigned to the operation, and confirm that the operation is actually timing out by inspecting the logs themselves.  The logs can get a little verbose, but the data is there.
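To make the above concrete, here is a minimal sketch of where those knobs live in sssd.conf on the IPA server. The domain name and values are placeholders for illustration, not recommendations; see sssd.conf(5) and sssd-ldap(5) for the defaults and exact semantics.

```
# /etc/sssd/sssd.conf on the IPA server -- illustrative values only
[domain/ipa.example]
ldap_opt_timeout = 60        # per-operation LDAP timeout, in seconds
ldap_search_timeout = 60     # timeout for LDAP searches, in seconds
ldap_referrals = false       # do not chase referrals
debug_level = 9              # verbose logs, to confirm where the timeout occurs
```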

Dan



On Jan 30, 2017, at 4:00 AM, Troels Hansen wrote:

Hi there

I'm trying to debug a strange IPA timeout issue.

It's SSSD 1.14, IPA 4.4, RHEL 7.3.
2 IPA servers in AD trust.

Besides being a bit slow on groups membership lookups on users with a moderate 
number of Groups, there are some users with a HUGE amount of nested groups.

A server just installed, thereby having clean cache:

# time id shja
id: shja: no such user
real    0m12.107s
user    0m0.000s
sys     0m0.007s

Hmm, let's try again:

# sss_cache -E && systemctl restart sssd
# time id shja
id: shja: no such user
real    0m58.016s
user    0m0.001s
sys     0m0.005s

Hmm..

# sss_cache -E && systemctl restart sssd
# time id shja

...about 30% of the user's groups are returned

real    5m16.840s
user    0m0.010s
sys     0m0.019s


The next lookup is pretty fast and returns all groups (about 730).

# time id shja
real    0m7.670s
user    0m0.028s
sys     0m0.066s


A few questions.
The first few times, id seems to bail out and report "no such user" after what seems to be a random amount of time.
When it actually starts fetching groups, it fetches a portion of the groups, and on the last try it fetches all groups.

It looks like IPA is starting a thread running in the background, filling the cache, and this continues after the failed lookup?

Shouldn't SSSD be able to use the cache from the SSSD on the IPA server?
In this example the IPA server had a full cache of the user and groups, but the time it took to do the lookup indicates it's still traversing the AD?

sssd.conf is pretty default:
full_name_format = %1$s

set on SSSD client.

On IPA server this is added (no full_name_format):
ignore_group_members = True
ldap_purge_cache_timeout = 0
ldap_user_principal = nosuchattr
subdomain_inherit = ldap_user_principal, ignore_group_members, 
ldap_purge_cache_timeout



[Freeipa-users] caching of lookups / performance problem

2017-01-30 Thread Sullivan, Daniel [CRI]
Hi,

I have another question about sssd performance.  I'm having a difficult time getting a consistently performant 'ls -l' against /home, an NFS-mounted share of all of our users' home directories.  There are 667 entries in this folder, and all of them have IDs that are resolvable via freeipa/sssd.  We are using an AD trusted domain.

It is clear to me why an initial invocation of this lookup should take some 
time (populating the local ldb cache).   And it does.  Usually around 5-10 
minutes, but sometimes longer.  After the initial lookups are complete, the 
output of ‘ls -l' renders fine, and I can inspect the local filesystem cache 
using ldbsearch and see that it is populated.  The issue is that if I wait a 
while, or restart sssd, it appears that I have to go through all of these 
lookups again to render the directory listing.

I am trying to find an optimal configuration for sssd.conf that will allow a 
performant ‘ls -l’ listing of a directory with a large number of different id 
numbers assigned to filesystem objects to always return results immediately 
from the local cache (after an initial invocation of this command for any given 
directory).  I think basically what I want is to have the ldb cache always 
‘up-to-date’, or at least have sssd willing to immediately dump what it has 
without having to do a bunch of lookups while blocking the ‘ls -l’ thread.  If 
possible, whatever solution implemented should also survive a restart of the 
sssd process.  In short, aside from an initial invocation, I never want ‘ls -l’ 
to take more than a few seconds.

The issue described above is somewhat problematic because it appears to cause 
contention on the sssd process effectively allowing a user doing ls -l /home to 
inadvertently degrade system performance for another user.

So far I have tried:

1)  Implementing 'enumeration = true' in the [domain] section.  This seems to have no impact.  It might be worthwhile to note that we are using an AD trusted domain.
2)  Using the refresh_expired_interval setting in the [domain] section.

I have read the following two documents in a decent level of detail:

https://jhrozek.wordpress.com/2015/08/19/performance-tuning-sssd-for-large-ipa-ad-trust-deployments/
https://jhrozek.wordpress.com/2015/03/11/anatomy-of-sssd-user-lookup/

It almost seems to me like the answer to this would be to keep the LDB cache 
valid indefinitely (step 4 on 
https://jhrozek.wordpress.com/2015/03/11/anatomy-of-sssd-user-lookup/).
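For what it's worth, the knobs that map to "keep the cache valid and refresh it in the background" would look roughly like the sketch below in sssd.conf. The domain name and numbers are placeholders, not tested recommendations; the sssd.conf(5) man page suggests setting refresh_expired_interval to about 3/4 of entry_cache_timeout.

```
# /etc/sssd/sssd.conf -- illustrative values only
[domain/ipa.example]
entry_cache_timeout = 86400        # consider cached entries valid for 24h
refresh_expired_interval = 64800   # refresh expiring entries in the background
```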

Presumably this is a problem that somebody has seen before.  Would somebody be 
able to advise on the best way to deal with this?  I appreciate your help.

Thank you,

Dan


[Freeipa-users] Needs help understand this timeout issue

2017-01-30 Thread Troels Hansen
Hi there 

I'm trying to debug a strange IPA timeout issue. 

It's SSSD 1.14, IPA 4.4, RHEL 7.3. 
2 IPA servers in AD trust. 

Besides being a bit slow on group membership lookups for users with a moderate number of groups, there are some users with a HUGE number of nested groups. 

A server just installed, thereby having a clean cache: 

# time id shja 
id: shja: no such user 

real    0m12.107s 
user    0m0.000s 
sys     0m0.007s 

Hmm, let's try again: 

# sss_cache -E && systemctl restart sssd 
# time id shja 
id: shja: no such user 

real    0m58.016s 
user    0m0.001s 
sys     0m0.005s 

Hmm.. 

# sss_cache -E && systemctl restart sssd 
# time id shja 

...about 30% of the user's groups are returned 

real    5m16.840s 
user    0m0.010s 
sys     0m0.019s 

The next lookup is pretty fast and returns all groups (about 730). 

# time id shja 

real    0m7.670s 
user    0m0.028s 
sys     0m0.066s 

A few questions. 
The first few times, id seems to bail out and report "no such user" after what seems to be a random amount of time. 
When it actually starts fetching groups, it fetches a portion of the groups, and on the last try it fetches all groups. 

It looks like IPA is starting a thread running in the background, filling the cache, and this continues after the failed lookup? 

Shouldn't SSSD be able to use the cache from the SSSD on the IPA server? 
In this example the IPA server had a full cache of the user and groups, but the time it took to do the lookup indicates it's still traversing the AD? 

sssd.conf is pretty default: 
full_name_format = %1$s 

set on the SSSD client. 

On the IPA server this is added (no full_name_format): 
ignore_group_members = True 
ldap_purge_cache_timeout = 0 
ldap_user_principal = nosuchattr 
subdomain_inherit = ldap_user_principal, ignore_group_members, ldap_purge_cache_timeout 

Re: [Freeipa-users] sudo sometimes doesn't work

2017-01-30 Thread Jakub Hrozek
On Fri, Jan 27, 2017 at 02:15:16PM -0700, Orion Poplawski wrote:
> EL7.3
> Users are in active directory via AD trust with IPA server
> 
> sudo is configured via files - users in our default "nwra" group can run
> certain sudo commands, e.g.:
> 
> Cmnd_Alias WAKEUP = /sbin/ether-wake *
> %nwra,%visitor,%ivm   ALL=NOPASSWD: WAKEUP
> 
> However, sometimes when I run sudo /sbin/ether-wake I get prompted for my
> password.  Other times it works fine.  I've attached some logs from failed
> attempt.

So the sudo command is successful in the end, it 'just' prompts for a
password?

I think the sudo logs would be the most important part here; see:
https://fedorahosted.org/sssd/wiki/HOWTO_Troubleshoot_SUDO
There is a section called 'a) How do I get sudo logs?' that explains
how to generate them.
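From memory of that HOWTO (double-check the wiki page, as these lines are easy to get subtly wrong), sudo's own debug logging is switched on in /etc/sudo.conf; the resulting logs show which rules were fetched from files vs. SSSD and why a password was requested.

```
# /etc/sudo.conf -- enable debug logs for sudo and the sudoers plugin
Debug sudo /var/log/sudo_debug all@debug
Debug sudoers.so /var/log/sudoers_debug all@debug
```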



Re: [Freeipa-users] be_pam_handler_callback Backend returned: (3, 4, ) [Internal Error (System error)]

2017-01-30 Thread thierry bordaz



On 01/27/2017 12:51 PM, Harald Dunkel wrote:

Hi Thierry,

On 01/26/17 16:55, thierry bordaz wrote:


Those entries are managed entries, and it is not possible to delete them with a direct ldap command.
A solution proposed by Ludwig is to first make them unmanaged:

dn: cn=ipaservers+nsuniqueid=109be304-ccd911e6-a5b3d0c8-d8da17db,cn=ng,cn=alt,dc=example,dc=de
changetype: modify
delete: objectClass
objectClass: mepManagedEntry

dn: cn=ipaservers+nsuniqueid=109be302-ccd911e6-a5b3d0c8-d8da17db,cn=hostgroups,cn=accounts,dc=example,dc=de
changetype: modify
delete: objectClass
objectClass: mepOriginEntry

Then retry deleting them.
It should work for the first one, but I am unsure whether it will succeed for the second one.


I am not sure about this "managed" thing. This sounds like some
kind of external influence.

How can I make sure that removing these entries doesn't break
something? Is the original entry managed in the same way as
the duplicate?


Regards
Harri


Hello Harri,

sorry for this late answer.

I understand your concern, and in fact it is difficult to anticipate a 
potential bad impact of this cleanup. However, I think it is safe to get 
rid of the following entry.

Before doing so, you may check that cn=ipaservers,cn=ng,cn=alt,dc=example,dc=de exists and is managed by the ipaservers hostgroup.

dn: cn=ipaservers+nsuniqueid=109be304-ccd911e6-a5b3d0c8-d8da17db,cn=ng,cn=alt,dc=example,dc=de
mepManagedBy: cn=ipaservers,cn=hostgroups,cn=accounts,dc=example,dc=de
objectClass: mepManagedEntry


If you are willing to remove that entry, you need to remove the mepManagedEntry objectclass; remove the mepManagedBy attribute and the objectclass in the same operation.


Regarding the following entry:

dn: cn=ipaservers+nsuniqueid=109be302-ccd911e6-a5b3d0c8-d8da17db,cn=hostgroups,cn=accounts,dc=example,dc=de
objectClass: mepOriginEntry
mepManagedEntry: cn=ipaservers,cn=ng,cn=alt,dc=example,dc=de

You may want to check whether an entry it manages exists, by searching for
"(mepManagedBy=cn=ipaservers+nsuniqueid=109be302-ccd911e6-a5b3d0c8-d8da17db,cn=hostgroups,cn=accounts,dc=example,dc=de)".
If none exists, you should be able to remove it.

Also, I think working on ipabak you should be able to do some tests on the cleaned-up instance to validate that everything is working fine.

regards
thierry
