I was wondering whether there is any more clarity to be had around 'issue 3'
here. I am interested in a reliable way to detect this error condition on the
slave server itself (or even in reproducing it).

Or is figuring out that an update *should* have come in, and then seeing that
it didn't, the only way?
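
The closest I have come is comparing update log serial numbers between the
master and the slave (a minimal sketch, assuming MIT's kproplog tool is present
on both KDCs and the slave can invoke it on the master over ssh; the host name
is hypothetical):

    # Compare the last serial number of the update log (ulog) on the
    # master and on this slave; if the slave stays behind across a few
    # polling intervals, kpropd has probably wedged.
    master_sno=$(ssh kdc-master.example.com kproplog -h | awk '/Last serial/ {print $NF}')
    slave_sno=$(kproplog -h | awk '/Last serial/ {print $NF}')
    if [ "$slave_sno" != "$master_sno" ]; then
        echo "iprop lag: slave serial $slave_sno vs master $master_sno" >&2
    fi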

With respect to the policy propagation issue, I found that modifying a slave's
copy of the principal database, by creating a bogus principal (or some similar
change) with kadmin.local, will force a full propagation on an otherwise
healthy server. This seems pretty effective, but is it safe?
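
Concretely, the trick amounts to something like this (a sketch using
kadmin.local on the slave; the principal name is made up):

    # Writing to the slave's copy of the database directly desynchronizes
    # its update log from the master's, so the next iprop poll falls back
    # to a full resync.
    kadmin.local -q "addprinc -randkey bogus-resync-trigger"
    kadmin.local -q "delprinc -force bogus-resync-trigger"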

jd


On Oct 11, 2010, at 9:36 PM, Jeremy Hunt wrote:

>  Hi Ken, Dominic et al,
> 
> Sorry about using the term "second issue" twice. I will clarify all the
> points as Ken raised them.
> 
> Issue 1: profile changes do not appear to be logged and propagated via iprop.
> 
> I am sorry, I meant "policy", not "profile". Probably because I meant a user
> profile, where a user is tied to one or more policies. A policy in Kerberos is
> a named set of attributes, so that certain types of user can be given a
> consistent set of attributes.
> 
> I originally read this document,
> http://k5wiki.kerberos.org/wiki/Projects/Lockout, a discussion of how to
> provide principal lockout functionality in Kerberos similar to AD and LDAP. It
> is very interesting. It points out that some attributes of principals are not
> replicated; surprisingly, these include last successful login, last failed
> login, and failed authentication counts. This would encourage a lot of readers
> of this list to revisit their password lockout policy if they have one.
> However, read on.
> 
> My own experiments found that:
> 
>    * locked-out principals were propagated as lockouts by the "kiprop"
> process;
>    * policies were not propagated, but principals referring to the policies
> were propagated.
> 
> For example, I created a lockout policy and linked it to my principals. My
> principal changes were propagated, but I had to create the lockout policy
> separately on the slave. The actual "kadmin" commands were:
> 
>     addpol lockout
>     modpol -maxfailure 4 lockout
>     modprinc -policy lockout principal17
> 
> Thenceforth "getprinc principal17" showed the line "Policy: lockout", even on
> KDCs that did not have the lockout policy defined.
> 
> I also found that with a full propagation the policy was carried across. A
> full propagation is a database dump, a remote copy of the dump, and a load of
> the dump on the replica server(s). Incremental propagation copies log changes
> and updates the database from the new entries since the last log timestamp on
> the replica server.
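> 
> For concreteness, a manual full propagation looks roughly like this (a sketch;
> the dump path and slave host name are hypothetical, and kpropd must already be
> running on the slave):
> 
>     # On the master: dump the database, then push the dump to the
>     # slave, whose kpropd receives and loads it.
>     kdb5_util dump /var/krb5kdc/slave_datatrans
>     kprop -f /var/krb5kdc/slave_datatrans kdc-slave.example.com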
> 
> I would agree this is not a bug, just something to know. As part of my setup I
> create a new master KDC, set up the database from a dump of the old database,
> make any required changes (the KDC names might have changed), dump the
> database again, and finally load the new dump on my replica(s).
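> 
> That sequence, sketched (the paths are hypothetical, and the actual edits
> depend on what changed between the old and new KDCs):
> 
>     # On the new master: load the old master's dump, make any required
>     # changes, then dump again for distribution to the replicas.
>     kdb5_util load /tmp/old-master.dump
>     kadmin.local    # make any required changes interactively
>     kdb5_util dump /var/krb5kdc/slave_datatrans
>     # then push the new dump to each replica with kprop, as above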
> 
> Issue 2: occasionally iprop gets lost and decides to do a full propagation; in
> that scenario you will get your timeouts, but they will be a lot less frequent
> than what you are currently getting.
> 
> Ken is right, this is designed not to happen. I was actually able to force it
> by putting a much higher load of updates on my test systems than production
> would ever see, and my test systems were hopelessly under-configured. If you
> are considering incremental propagation, you might want to do similar tests.
> 
> Issue 3: it is documented as occasionally losing connectivity, and so may need
> to be restarted.
> 
> I found a reference to this in a proviso in the Kerberos install document
> provided online by MIT, in the "Incremental Database Propagation" section:
> http://web.mit.edu/Kerberos/krb5-1.8/krb5-1.8.3/doc/krb5-install.html#Incremental%20Database%20Propagation
> 
> At the end of this section it refers to several known bugs and restrictions 
> in the current implementation, the first of which is:
> 
>    * The "call out to kprop" mechanism is a bit fragile; if the kprop
> propagation fails to connect for some reason, the process on the slave may
> hang waiting for it, and will need to be restarted.
> 
> I was unable to cause this in my testing and would be interested to know more
> about it. The interim solution would be to restart kpropd on the replica
> server, perhaps after checking the timestamp on the propagation log (as a cron
> job, say).
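> 
> Such a watchdog could look like this (a hypothetical cron script; the ulog
> path is an assumption and varies with the KDC's database directory):
> 
>     #!/bin/sh
>     # If the update log has not been touched in over an hour, assume
>     # kpropd has hung and restart it in standalone mode.
>     ULOG=/var/krb5kdc/principal.ulog
>     if [ -n "$(find "$ULOG" -mmin +60)" ]; then
>         pkill kpropd
>         /usr/sbin/kpropd -S
>     fi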
> 
> Regards,
> 
> Jeremy Hunt
> 
> On 12/10/2010 8:02 AM, Ken Raeburn wrote:
>> 
>> On Oct 10, 2010, at 19:46, Jeremy Hunt wrote:
>>> Hi Dominic,
>>> 
>>> Thanks for your feedback. You make a good point about reporting a bug,
>>> though my memory is that the Kerberos team knew about them all.
>>> 
>>> The second issue is as designed and, given that kprop is so efficient, isn't
>>> as bad as I first thought when I read about it. Of course, your experience
>>> does show its downside.
>>> 
>>> The second issue is reported in the Kerberos team's own documentation. Like 
>>> I said I haven't seen it, ... yet. I reported it because they have and you 
>>> might miss the warning.
>> I assume one of these is meant to say "third issue"?
>> 
>> The occasional need for kprop is basically only if iprop fails for long 
>> enough that the on-disk queue of changes overflows its configured size 
>> (which might just be hard-coded, I forget).  Unless you're making changes to 
>> large numbers of principals at once or losing connectivity for extended 
>> periods, it ought not to happen much.  And if you're changing most of your 
>> principals at once, a full kprop is probably more efficient anyways.
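>> 
>> For reference, the knobs involved live in kdc.conf (a minimal sketch with a
>> hypothetical realm and values; iprop_master_ulogsize is what bounds that
>> on-disk queue):
>> 
>>     [realms]
>>         EXAMPLE.COM = {
>>             iprop_enable = true
>>             # number of entries the update log holds before a lagging
>>             # slave must fall back to a full dump
>>             iprop_master_ulogsize = 2500
>>             # how often slaves poll the master for new updates
>>             iprop_slave_poll = 2m
>>         }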
>> 
>> (I don't recall the occasional-losing-connectivity issue offhand, despite 
>> having worked with this code before when I was at MIT, but I'm at my 
>> non-Kerberos-hacking job at the moment and don't have time to look it up; 
>> maybe later.)
>> 
>>> The first issue, I assumed the Kerberos development team already knew of. I
>>> will have to go back and look at my notes, but I thought I saw something in
>>> the code or the documentation that made me think they knew about it and it
>>> wasn't an easy fix. Certainly it behoves me to revisit this and report it as
>>> a bug if my memory is wrong. Like I say, it is an observation that
>>> operationally does not affect us; by the way, a full dump/restore a la kprop
>>> will carry profiles across.
>> It's not clear to me what you mean by "profile changes".  At least 
>> internally, we use "profile" to refer to the configuration settings from the 
>> config files.  Such things aren't and never have been recorded in the 
>> database nor automatically propagated between KDCs by any means in the 
>> Kerberos software; some of the settings are required to be consistent 
>> between KDCs for consistent operation, but others only affect the KDC making 
>> changes to the database.  If this isn't what you're referring to, please do 
>> explain.
>> 
>> Ken
>> 
> 
> 
> -- 
> 
> "The whole modern world has divided itself into Conservatives and 
> Progressives. The business of Progressives is to go on making mistakes. The 
> business of the Conservatives is to prevent the mistakes from being 
> corrected." -- G. K. Chesterton
> 

________________________________________________
Kerberos mailing list           [email protected]
https://mailman.mit.edu/mailman/listinfo/kerberos
