[389-devel] Re: Future of nunc-stans
On 10/22/19 8:28 PM, William Brown wrote: I think turbo mode was to try and shortcut returning to the conntable and then having the blocking on the connections poll because the locking strategies before weren't as good. I think there is still some value in turbo "for now" but if we can bring in libevent, then it diminishes because we become event driven rather than poll driven. "turbo mode" means "keep reading from this socket as quickly as possible until you get EAGAIN/EWOULDBLOCK" i.e. keep reading from the socket as fast as possible as long as there is data immediately available. Yep, that's how I understood it - it's trying to prevent a longer delay until it's poll()-ed again. This is very useful for replication consumers, especially during online init, when the supplier is feeding you data as fast as possible. Otherwise, its usefulness is limited to applications where you have a single client hammering you with requests, of which test/stress clients form a significant percentage. Don't you know though, micro-optimising for benchmarks is the new and hip trend. Joking aside, there probably are situations for now where it's still useful, but if we can bring in libevent and be event driven rather than using poll() we shouldn't have to worry too much. Another option is when we hit EAGAIN/EWOULDBLOCK we move the task back to the slapi work queue rather than re-waiting on it in the poll phase.
+1 -- Sincerely, William Brown Senior Software Engineer, 389 Directory Server SUSE Labs ___ 389-devel mailing list -- 389-devel@lists.fedoraproject.org To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/ List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines List Archives: https://lists.fedoraproject.org/archives/list/389-devel@lists.fedoraproject.org
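[Editorial sketch] The "keep reading until EAGAIN/EWOULDBLOCK" behaviour described in the thread above can be sketched in a few lines. This is an illustrative Python sketch of the general pattern only, not the actual 389-ds connection code (which is C):

```python
import socket

def drain_socket(sock, bufsize=4096):
    """Turbo-mode style read: keep consuming from a non-blocking socket
    while data is immediately available, stopping at EAGAIN/EWOULDBLOCK
    (surfaced as BlockingIOError in Python) or when the peer closes."""
    chunks = []
    while True:
        try:
            data = sock.recv(bufsize)
        except BlockingIOError:  # EAGAIN / EWOULDBLOCK: nothing more right now
            break
        if not data:             # b"" means the peer closed the connection
            break
        chunks.append(data)
    return b"".join(chunks)
```

At the point the loop breaks, a poll-driven server must choose between re-polling the connection or, as suggested above, handing the task back to the work queue; an event-driven design would simply re-arm the read event.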
[389-devel] Re: Future of nunc-stans
On 10/8/19 4:55 PM, William Brown wrote: Hi everyone, In our previous catch up (about 4/5 weeks ago when I was visiting Matus/Simon), we talked about nunc-stans and getting it at least cleaned up and into the code base. I've been looking at it again, really thinking about it and reflecting on it, and I have a lot of questions and ideas now. The main question is *why* do we want it merged? Is it performance? Recently I provided a patch that yielded an approximately 30% speed-up in entire-server throughput just by changing our existing connection code. Is it features? What features are we wanting from this? We have no complaints about our current threading model and thread allocations. Is it maximum number of connections? We can always change the conntable to a better datastructure that would help scale this number higher (which would also yield a performance gain). It is mostly about the c10k problem, trying to figure out a way to use epoll, via an event framework like libevent, libev, or libtevent, but in a multi-threaded way (at the time none of those were really thread safe, or suitable for use in the way we do multi-threading in 389). It wasn't about performance, although I hoped that using lock-free data structures might solve some of the performance issues around thread contention, and perhaps using a "proper" event framework might give us some performance boost e.g. the idle thread processing using libevent timeouts. I think that using poll() is never going to scale as well as epoll() in some cases e.g. lots of concurrent connections, no matter what sort of datastructure you use for the conntable. As far as features go, it would be nice to give plugins the ability to inject event requests and get timeout events using the same framework as the main server engine. The more I have looked at the code, I guess with time and experience, the more hesitant I am to actually commit to merging it.
It was designed by people who did not understand low-level concurrency issues and memory architectures of systems, I resemble that remark. I suppose you could "turn off" the lock-free code and use mutexes. so it's had a huge number of (difficult and subtle) unsafety issues. And while most of those are fixed, what it does is duplicate the connection structure from core 389, It was supposed to eventually replace the connection code. leading to weird solutions like lock sharing and having to use monitors and more. We've tried a few times to push forward with this, but each time we end up with a lot of complexity and fragility. So I'm currently thinking a better idea is to step back, re-evaluate what the problem is we are trying to solve for, then solve *that*. The question now is "what is the concern that ns would solve?" Knowing that, we can then make a plan and approach it more constructively, I think. I agree. There are probably better ways to solve the problems now. At the end of the day, I'm questioning if we should just rm -r src/nunc-stans and rethink this whole approach - there are just too many architectural flaws and limitations in ns that are causing us headaches. Ideas and thoughts?
-- Sincerely, William ___ 389-devel mailing list -- 389-devel@lists.fedoraproject.org To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/ List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines List Archives: https://lists.fedoraproject.org/archives/list/389-devel@lists.fedoraproject.org
[389-devel] Re: Porting Perl scripts
On 6/24/19 10:00 AM, Mark Reynolds wrote: On 6/24/19 11:46 AM, Simon Pichugin wrote: Hi team, I am working on porting our admin Perl scripts to Python CLI. Please check the list and share your opinion: - cl-dump.pl - dumps and decodes the changelog. Is it used often (if at all)? https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#cl_dump.pl_Dump_and_decode_changelog This is used often actually, and is a good debugging tool. I think it just creates a task, so it should be ported to CLI (added to the replication CLI sub commands) - logconv.pl - parses and analyses the access logs. Pretty big one, is it a priority? How many people use it? issue is created - https://pagure.io/389-ds-base/issue/50283 https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#logconv_pl Does not need to be ported as it's a standalone tool Would be great to eliminate perl altogether... but this one will be tricky to port to python... - migrate.pl - which migration scenarios do we plan to support? Do we deprecate old ones? Do we need the script? https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#migrate-ds.pl This script is obsolete IMHO - ns_accountstatus.pl, ns_inactivate.pl, ns_activate.pl - the issue is discussed here - https://pagure.io/389-ds-base/issue/50206 I think we should extend status at least. Also, William put some of his thoughts there. What do you think, guys? Will we refactor (kinda deprecate) some "account lock" as William proposes?
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#ldif2db.pl_Import-ns_accountstatus.pl_Establish_account_status I will update the ticket, but we need the same functionality as the ns_* tools, especially the new status work that went into ns_accountstatus.pl - that all came from customer escalations, so we must not lose that functionality. - syntax-validate.pl - it probably will go into the 'healthcheck' tool; issue is created - https://pagure.io/389-ds-base/issue/50173 https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#syntax-validate.pl Yes - repl_monitor.pl - should we make it a part of 'healthcheck' too? https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#repl_monitor.pl_Monitor_replication_status Yes Thanks, Simon ___ 389-devel mailing list -- 389-devel@lists.fedoraproject.org To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/ List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines List Archives: https://lists.fedoraproject.org/archives/list/389-devel@lists.fedoraproject.org
[389-devel] Re: Logging future direction and ideas.
On 5/9/19 9:13 PM, William Brown wrote: Hi all, So I think it's time for me to write some logging code to improve the situation. Relevant links before we start: https://pagure.io/389-ds-base/issue/49415 http://www.port389.org/docs/389ds/design/logging-performance-improvement.html https://pagure.io/389-ds-base/issue/50350 https://pagure.io/389-ds-base/issue/50361 All of these links touch on issues around logging, and I think they combine into three important points: * The performance of logging should be improved * The amount of detail (granularity) and information in logs should improve * The structure of the log content should be improved to aid interaction (possibly even machine parsable) I will turn this into a design document, but there are some questions I would like input on as part of this process, to help set the direction and tasks. -- Should our logs as they exist today continue to exist? I think that my view on this is "no". I think if we make something better, we have little need to continue to support our legacy interfaces. Of course, this would be a large change and it may not sit comfortably with people. A large part of this thinking is that the "new" log interface I want to add is focused on *operations* rather than auditing accesses or changes, or looking at errors. The information of the current access/audit/error logs would largely be melded into a single operation log, and then with tools like logconv we could parse and extract information that would behave the same way as access/error/audit. At the same time, I can see how people *may* want a "realtime" audit of operations as they occur (i.e. an access log), but even today this is limited by having to "wait" for operations to complete. In a crash scenario, we would still be able to view the logs that are queued, so I think there are not many concerns about losing information in these cases (in fact we'd probably have more).
-- What should the operation log look like? I think it should be structured, and should be whole units of information related to a single operation, i.e. only at the conclusion of the operation is it logged (thus the async!). It should support arbitrary, nested timers, and would *not* support log levels - it's a detailed log of the process each query goes through. An example could be something like:

[timestamp] - [conn=id op=id] - start operation
[timestamp] - [conn=id op=id] - start time = time ...
[timestamp] - [conn=id op=id] - started internal search '(some=filter)'
[timestamp] - [conn=id op=id parentop=id] - start nested operation
[timestamp] - [conn=id op=id parentop=id] - start time = time ...
...
[timestamp] - [conn=id op=id parentop=id] - end time = time ...
[timestamp] - [conn=id op=id parentop=id] - duration = diff end - start
[timestamp] - [conn=id op=id parentop=id] - end nested operation - result -> ...
[timestamp] - [conn=id op=id] - ended internal search '(some=filter)'
...
[timestamp] - [conn=id op=id] - end time = time
[timestamp] - [conn=id op=id] - duration = diff end - start

Due to the structured, blocked nature, there would be no interleaving of operation messages. Therefore the log would appear as:

[timestamp] - [conn=00 op=00] - start operation
[timestamp] - [conn=00 op=00] - start time = time ...
[timestamp] - [conn=00 op=00] - started internal search '(some=filter)'
[timestamp] - [conn=00 op=00 parentop=01] - start nested operation
[timestamp] - [conn=00 op=00 parentop=01] - start time = time ...
...
[timestamp] - [conn=00 op=00 parentop=01] - end time = time ...
[timestamp] - [conn=00 op=00 parentop=01] - duration = diff end - start
[timestamp] - [conn=00 op=00 parentop=01] - end nested operation - result -> ...
[timestamp] - [conn=00 op=00] - ended internal search '(some=filter)'
...
[timestamp] - [conn=00 op=00] - end time = time
[timestamp] - [conn=00 op=00] - duration = diff end - start
[timestamp] - [conn=22 op=00] - start operation
[timestamp] - [conn=22 op=00] - start time = time ...
[timestamp] - [conn=22 op=00] - started internal search '(some=filter)'
[timestamp] - [conn=22 op=00 parentop=01] - start nested operation
[timestamp] - [conn=22 op=00 parentop=01] - start time = time ...
...
[timestamp] - [conn=22 op=00 parentop=01] - end time = time ...
[timestamp] - [conn=22 op=00 parentop=01] - duration = diff end - start
[timestamp] - [conn=22 op=00 parentop=01] - end nested operation - result -> ...
[timestamp] - [conn=22 op=00] - ended internal search '(some=filter)'
...
[timestamp] - [conn=22 op=00] - end time = time
[timestamp] - [conn=22 op=00] - duration = diff end - start

An alternate method for structuring could be a machine-readable format like JSON:

{
  'timestamp': 'time',
  'duration': ,
  'bind': 'dn of who initiated operation',
  'events': [
    'debug': 'msg',
    'internal_search': {
      'timestamp': 'time',
      'duration': ,
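[Editorial sketch] To make the JSON idea concrete, here is a hedged sketch of what one complete machine-readable operation record might look like. All field names are illustrative guesses extrapolated from the fragment above, not a settled schema:

```python
import json

def format_op_record(conn_id, op_id, bind_dn, duration, events):
    """Serialise one whole operation as a single JSON log record.
    The schema (conn/op/bind/duration/events) is hypothetical."""
    record = {
        "timestamp": "time",  # placeholder, as in the sketch above
        "conn": conn_id,
        "op": op_id,
        "bind": bind_dn,
        "duration": duration,
        "events": events,     # nested timers / internal searches
    }
    return json.dumps(record)

line = format_op_record(
    0, 0, "cn=directory manager", 0.015,
    [{"internal_search": {"filter": "(some=filter)", "duration": 0.002}}],
)
```

Because each record is one complete JSON document, a logconv-style tool can reconstruct the access/error/audit views by filtering fields rather than re-parsing free text.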
[389-devel] Re: [discuss] Entry cache and backend txn plugin problems
On 2/26/19 4:26 PM, William Brown wrote: On 26 Feb 2019, at 18:32, Ludwig Krispenz wrote: Hi, I need a bit of time to read the docs and clear my thoughts, but one comment below On 02/25/2019 01:49 AM, William Brown wrote: On 23 Feb 2019, at 02:46, Mark Reynolds wrote: I want to start a brief discussion about a major problem we have with backend transaction plugins and the entry caches. I'm finding that when we get into a nested state of be txn plugins and one of the later plugins that is called fails, then while we don't commit the disk changes (they are aborted/rolled back) we DO keep the entry cache changes! For example, a modrdn operation triggers the referential integrity plugin, which renames the member attribute in some group and changes that group's entry cache entry, but then later on the memberOf plugin fails for some reason. The database transaction is aborted, but the entry cache changes that the RI plugin made are still present :-( I have also found other entry cache issues with modrdn and BE TXN plugins, and we know of other currently non-reproducible entry cache crashes as well, related to mishandling of cache entries after failed operations. It's time to rework how we use the entry cache. We basically need a transaction-style caching mechanism - we should not commit any entry cache changes until the original operation is fully successful. Unfortunately, the way the entry cache is currently designed and used means it will be a major change. William wrote up this doc: http://www.port389.org/docs/389ds/design/cache_redesign.html But this also does not currently cover the nested plugin scenario either (not yet). I do not know how difficult it would be to implement William's proposal, or how difficult it would be to incorporate the txn style caching into his design. What kind of time frame could this even be implemented in? William what are your thoughts? I like coffee? How cool are planes?
My thoughts are simple :) I think there is a pretty simple mental simplification we can make here though. Nested transactions “don’t really exist”. We just have *recursive* operations inside of one transaction. Once reframed like that, the entire situation becomes simpler. We have one thread in a write transaction that can have recursive/batched operations as required, which means that either “all operations succeed” or “none do”. Really, this is the behaviour we want anyway, and it’s the transaction model of LMDB and other kv stores that we could consider (WiredTiger, sled in the future). I think the recursive/nested transactions on the database level are not the problem, we do this correctly already, either all or no change becomes persistent. What we do not manage is modifications we make in parallel on in-memory structures like the entry cache. Changes to the EC are not managed by any txn, and I do not see how any of the database txn models would help; they do not know about the EC and cannot abort its changes. We would need to incorporate the EC into a generic txn model, or have a way to flag EC entries as garbage if a txn is aborted. The issue is we allow parallel writes, which breaks the consistency guarantees of the EC anyway. LMDB won’t allow parallel writes (it’s single writer - concurrent parallel readers), and most other modern kv stores take this approach too, so we should be remodelling our transactions to match this IMO. It will make the process of how we reason about the EC much, much simpler I think. Some sort of in-memory data structure with fast lookup and transactional semantics (modify operations are stored as mvcc/cow so each read of the database with a given txn handle sees its own view of the ec, a txn commit updates the parent txn ec view, or the global ec view if no parent, from the copy, a txn abort deletes the txn's copy of the ec) is needed. A quick google search turns up several hits.
I'm not sure if the B+Tree proposed at http://www.port389.org/docs/389ds/design/cache_redesign.html has transactional semantics, or if such code could be added to its implementation. With LMDB, if we could make the on-disk entry representation the same as the in-memory entry representation, then we could use LMDB as the entry cache too - the database would be the entry cache as well. If William's design is too huge of a change that will take too long to safely implement then perhaps we need to look into revising the existing cache design where we use "cache_add_tentative" style functions and only apply them at the end of the op. This is also not a trivial change. It’s pretty massive as a change - if we want to do it right. I’d say we need:
* development and testing of a MVCC/COW cache implementation (proof that it really really works transactionally)
* allow “disable/disconnect” of the entry cache, but with the higher level txn’s so that we can prove the txn semantics are correct
* re-architect our transaction calls so
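[Editorial sketch] The MVCC/COW semantics being discussed - each write transaction works on its own view, commit publishes it atomically, abort simply discards it - can be modelled with a toy single-writer cache. This illustrates only the semantics, not the real entry cache design (and the shallow copy here is a simplification a real implementation could not make):

```python
class TxnCache:
    """Toy copy-on-write transactional cache: a writer gets a private
    copy of the committed view; commit swaps it in, abort drops it.
    Assumes a single writer, matching the LMDB-style model above."""

    def __init__(self):
        self._committed = {}

    def begin(self):
        # COW view: mutations here are invisible to readers until commit.
        # (Shallow copy: a real cache would need per-entry COW too.)
        return dict(self._committed)

    def commit(self, view):
        # Single reference swap: readers see old or new, never a mix.
        self._committed = view

    def abort(self, view):
        # Nothing to undo; the private copy is simply discarded.
        pass

    def get(self, key):
        return self._committed.get(key)
```

In the modrdn example above, the RI plugin's group change would live only in the transaction's view, so a later memberOf failure aborts both the database and the cache change together.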
[389-devel] Re: Design Doc: Automatic server tuning by default
On 11/06/2016 04:07 PM, William Brown wrote: On Fri, 2016-11-04 at 12:07 +0100, Ludwig Krispenz wrote: On 11/04/2016 06:51 AM, William Brown wrote: http://www.port389.org/docs/389ds/design/autotuning.html I would like to hear discussion on this topic. thread number: independent of the number of cpus I would have a default minimum number of threads, What do you think would be a good minimum? With too many threads per CPU, we can cause an overhead in context switching that is not efficient. Even if the threads are unused, or mostly idle? Your test result for a reduced thread number is with clients quickly handling responses and short operations. But if some threads are serving lazy clients or doing database access and have to wait, you can quickly run out of threads handling new ops. Mmm, this is true. Nunc-Stans helps a bit here, but not completely. In this case, where there are a lot of mostly idle clients that want to maintain an open connection, nunc-stans helps a great deal, both because epoll is much better than a giant poll() array, and because libevent maintains a sorted idle connection list for you. I wonder if something like 16 or 24 is a good "minimum", and then if we detect more we start to scale up. entry cache: you should not only take the available memory into account but also the size of the database, it doesn't make sense to blow up the cache and its associated data (e.g. hashtables) for a small database just because the memory is there Well, the cache size is "how much we *could* use" not "how much we will use". So setting a cache size of 20GB for a 10Mb database doesn't matter, as we'll still only use ~10Mb of memory. The inverse of this is that if we did set cachesize on database size, what happens with a large online bulkload? We would need to retune the database cache size, which means a restart of the application. Not something that IPA/Admins want to hear. I think it's safer to just have the higher number.
___ 389-devel mailing list -- 389-devel@lists.fedoraproject.org To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org
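[Editorial sketch] The "fixed minimum, then scale with detected CPUs" idea from the thread above could look something like this; the numbers and heuristic are entirely hypothetical, not the autotuning design:

```python
import os

def autotune_threads(minimum=16, per_cpu=1, ceiling=512):
    """Hypothetical worker-thread autotune: never fewer than `minimum`
    (to cover lazy clients and threads blocked on the database), scale
    with CPU count, and clamp to a sane ceiling to limit context-switch
    overhead."""
    cpus = os.cpu_count() or 1
    return max(minimum, min(cpus * per_cpu, ceiling))
```

The floor addresses Ludwig's concern (threads stuck waiting on slow clients or the database), while the per-CPU scaling and ceiling address William's concern about context-switch overhead.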
[389-devel] Re: Close of 48241, let's not support bad crypto
On 10/03/2016 09:34 PM, William Brown wrote: On Mon, 2016-10-03 at 21:26 -0600, Rich Megginson wrote: On 10/03/2016 08:58 PM, William Brown wrote: Hi, I want to close #48241 [0] as "wontfix". I do not believe that it's appropriate to provide SHA3 as a password hashing algorithm. The SHA3 algorithm is designed to be fast, and cryptographically secure. Its target usage is for signatures and verification of these in a rapid manner. The fact that this algorithm is fast, and could be implemented in hardware, is the reason it's not appropriate for password hashing. Passwords should be hashed with a slow algorithm, and in the future, an algorithm that is CPU and memory hard. This means that in the (hopefully unlikely) case of a password hash leak or dump from ldap, the attacker must spend a huge amount of resources to brute force or attack any password that we are storing in the system. If the crypto/security team is ok with not supporting SHA3 for passwords, works for me. Who would be a point of contact to ask this? Nikos Mavrogiannopoulos <nmavr...@redhat.com> As a result, I would like to make this ticket "wontfix" with an explanation of why. I think it's better for us to pursue #397 [1]. PBKDF2 is a CPU hard algorithm, and scrypt is both CPU and memory hard. That is the direction we should be going (asap). Thanks, [0] https://fedorahosted.org/389/ticket/48241 [1] https://fedorahosted.org/389/ticket/397 ___ 389-devel mailing list -- 389-devel@lists.fedoraproject.org To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org
[389-devel] Re: Close of 48241, let's not support bad crypto
On 10/03/2016 08:58 PM, William Brown wrote: Hi, I want to close #48241 [0] as "wontfix". I do not believe that it's appropriate to provide SHA3 as a password hashing algorithm. The SHA3 algorithm is designed to be fast, and cryptographically secure. Its target usage is for signatures and verification of these in a rapid manner. The fact that this algorithm is fast, and could be implemented in hardware, is the reason it's not appropriate for password hashing. Passwords should be hashed with a slow algorithm, and in the future, an algorithm that is CPU and memory hard. This means that in the (hopefully unlikely) case of a password hash leak or dump from ldap, the attacker must spend a huge amount of resources to brute force or attack any password that we are storing in the system. If the crypto/security team is ok with not supporting SHA3 for passwords, works for me. As a result, I would like to make this ticket "wontfix" with an explanation of why. I think it's better for us to pursue #397 [1]. PBKDF2 is a CPU hard algorithm, and scrypt is both CPU and memory hard. That is the direction we should be going (asap). Thanks, [0] https://fedorahosted.org/389/ticket/48241 [1] https://fedorahosted.org/389/ticket/397 ___ 389-devel mailing list -- 389-devel@lists.fedoraproject.org To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org
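[Editorial sketch] As a concrete illustration of "slow by design": Python's standard library exposes PBKDF2 directly, so the scheme's shape is easy to show. This is not the 389-ds implementation (which is C, using its crypto library); the iteration count here is only a placeholder, and should be tuned to current guidance:

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=100_000):
    """PBKDF2-HMAC-SHA256: a random per-password salt plus a deliberately
    high iteration count makes each attacker guess expensive, unlike a
    single fast digest such as SHA-3."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    """Recompute with the stored salt/rounds and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Storing the salt and iteration count alongside the digest lets the cost be raised later without invalidating existing hashes, which is part of why PBKDF2-style schemes age better than plain digests.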
[389-devel] Re: Sign compare checking
On 08/28/2016 11:13 PM, William Brown wrote: So either this is a bug in the way openldap uses the ber_len_t type, we have a mistake in our logic, or something else hokey is going on. I would like to update this to: if ( (tag != LBER_END_OF_SEQORSET) && (len == 0) && (*fstr != NULL) ) Or even: if ( (tag != LBER_END_OF_SEQORSET) && (*fstr != NULL) ) What do you think of this assessment given the ber_len_t type? Looks like it's intentional by the openldap team. There are some other areas with this problem. Specifically: int ber_printf(BerElement *ber, const char *fmt, ...); lber.h:79: #define LBER_ERROR ((ber_tag_t) -1) We check if (ber_printf(...) != LBER_ERROR) Of course, we can't satisfy either side. We can't cast LBER_ERROR from uint -> int without changing its value, and we can't cast the output of ber_printf from int -> uint, again, without potentially changing its value. So it seems it may be impossible to satisfy gcc's -Wsign-compare type checking against the openldap library. For now, I may just avoid these in my fixes, as it seems like a whole set of landmines I want to avoid ... Part of the problem is that we wanted to support being able to use both mozldap and openldap, without too much "helper" code/macros/#ifdef MOZLDAP/etc. It looks as though this is a place where we need to have some sort of helper. (as for why we still support mozldap - we still need an ldap c sdk that supports NSS for crypto until we can fix that in the server. Once we change 389 so that it can use openldap with openssl/gnutls for crypto, we should consider deprecating support for mozldap.) -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://lists.fedoraproject.org/admin/lists/389-devel@lists.fedoraproject.org
[389-devel] Re: Please review: 48951 dsconf and dsadm foundations
On 08/22/2016 05:23 PM, William Brown wrote: On Sun, 2016-08-21 at 21:33 -0600, Rich Megginson wrote: On 08/21/2016 09:02 PM, William Brown wrote: Anything that is yum, systemd command, etc. is ansible. Anything about installing an instance or 389 specific we do. I think that is an arbitrary line of demarcation. ansible can be used for a lot more than that. Yes it can. But I don't have infinite time, and neither does the team. Let's get something to work first, then we can grow what ansible is able to integrate with. Let's design our code to be able to integrate with ansible, but draw some basic lines on things we shouldn't duplicate and then remove in the future. This is why I want to draw the line that start/stop of the server, and certain remote admin tasks, aren't part of the scope here. Saying this, in a way I'm not a fan of this either, because we are doing behind-the-scenes magic, rather than simple, explicit tasks. What happens if someone crons this? We lose the intent of the admin in some cases. I think the principle should be "make it simple to do the easy things - make it possible to do the difficult things". In this case, if I am an admin running a cli, I think it should "do the right thing". If I'm setting up a cron job, I should be able to force it to use offline mode or whatever - it is easy to keep track of extra cli arguments if I'm automating something vs. running interactively on the command line. I agree with that principle, and it is actually one of the guides I am following in my design. I think that here we have a differing view of simple. My idea of simple is "each task should do one specific thing, and do it well". You have db2ldif and db2ldif_task. Each one just does that one simple thing. The intent of the admin is clear at the moment they hit enter. Not if they don't know what is meant by "_task". It might as well be ".pl" to most admins.
Most of the admins I've encountered say "I just want to get an ldif dump from the server - I have no idea what is the difference between db2ldif and db2ldif.pl." I think they will say the same thing about "db2ldif" vs. "db2ldif_task". I was thinking about this this morning, and I think I have come to agree with you. Let's make this "you want to get from A to B, and we work out how to get there". Similar to ansible, which probably lends itself well to us using ansible in the future for things. Your idea of simple is "intuitive simple" for the admin, where behaviours are inferred from running application state. The admin says "how I want you to act" and the computer resolves the path to get there. And - if the admin knows the tool, because the admin has learned by experience, progressive disclosure, or RTFM, the admin can explicitly specify the exact modes of operation using command line flags. Using the tool simply is easy; using the tool in an advanced fashion is possible. I think the intent of the tool should be clear without huge amounts of experience and RTFM. We have a huge usability and barrier-to-entry problem in DS, and if we don't make changes to lower that, we will become irrelevant. We need to make it easier to use, while retaining every piece of advanced functionality that our experienced users expect :) (I think we agree on this point though) One day we will need to make a decision on which way to go with these tools, and which path we follow, but again, for now it's open. Of course, I am going to argue for the former, because that is the construction of my experience. Reality is that I've seen a lot of production systems get messed up because what seemed intuitive to the programmer was not the intent of the admin. We are basically having the "boeing vs airbus" debate. Boeing has autopilots and computer assistance, but believes the pilot is always right and will give up control even if the pilot is going to do something the computer disagrees with.
Airbus assumes the computer is always right, and will actively take control away from the pilot if they are going to do something the computer disagrees with. It's about what's right: The program? Or the human intent? And that question has never been answered. I think the discussion doesn't fall exactly on the "boeing vs airbus" axis, but perhaps isn't entirely orthogonal either. As said above, I think maybe we should go down the "programmer is right" idea, but with the ability for the sysadmin to take over if needed. +1 - I think you've got the right idea. -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://lists.fedoraproject.org/admin/lists/389-devel@lists.fedoraproject.org
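[Editorial sketch] The "simple intent, explicit override" model the thread converges on could be sketched like this; the command and flag names are invented for illustration and are not the eventual dsconf interface:

```python
import argparse

def build_parser():
    """Hypothetical CLI: the admin states intent ("export this backend
    to LDIF") and the tool picks online vs offline mode itself, unless
    the admin (e.g. in a cron job) explicitly forces one."""
    parser = argparse.ArgumentParser(prog="dsconf")
    sub = parser.add_subparsers(dest="command", required=True)
    export = sub.add_parser("export", help="dump a backend to LDIF")
    export.add_argument("--be", required=True, help="backend to export")
    mode = export.add_mutually_exclusive_group()
    mode.add_argument("--online", action="store_true",
                      help="force a server-side task")
    mode.add_argument("--offline", action="store_true",
                      help="force direct database access")
    return parser
```

With neither flag given, the tool would resolve the path itself (the "intuitive simple" case); the mutually exclusive flags preserve the admin's explicit intent for automation.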
[389-devel] Re: Please review: 48951 dsconf and dsadm foundations
On 08/21/2016 07:56 PM, William Brown wrote: On Sun, 2016-08-21 at 19:44 -0600, Rich Megginson wrote: On 08/21/2016 05:28 PM, William Brown wrote: On Fri, 2016-08-19 at 11:21 +0200, Ludwig Krispenz wrote: Hi William, On 08/19/2016 02:22 AM, William Brown wrote: On Wed, 2016-08-17 at 14:53 +1000, William Brown wrote:

https://fedorahosted.org/389/ticket/48951
https://fedorahosted.org/389/attachment/ticket/48951/0001-Ticket-48951-dsadm-and-dsconf-base-files.patch
https://fedorahosted.org/389/attachment/ticket/48951/0002-Ticket-48951-dsadm-and-dsconf-refactor-installer-cod.patch
https://fedorahosted.org/389/attachment/ticket/48951/0003-Ticket-48951-dsadm-and-dsconf-Installer-options-mana.patch
https://fedorahosted.org/389/attachment/ticket/48951/0004-Ticket-48951-dsadm-and-dsconf-Ability-to-unit-test-t.patch
https://fedorahosted.org/389/attachment/ticket/48951/0005-Ticket-48951-dsadm-and-dsconf-Backend-management-and.patch

As a follow up, here is a design / example document: http://www.port389.org/docs/389ds/design/dsadm-dsconf.html

Thanks for this work, it is looking great and is something we were really missing. But of course I have some comments (and I know I am late).

- The naming dsadm and dsconf, and the split of tasks between them, is the same as in Sun/Oracle DSEE, and even if there is probably no legal restriction on using them, I'd prefer to have our own names for our own tools.

Fair enough. There is nothing saying these names are set in stone right now, so if we have better ideas we can change it. I will however say that any command name should not start with numbers (ie 389), and that it should generally be fast to type, easy to remember, and less than 8 chars long if possible.

What about "adm389" and "conf389"? Yeah, those could work.

- I'm not convinced of splitting the tasks into two utilities; you will have different actions and options for the different resources/subcommands anyway, so you could have one for all.
The issue is around connection to the server, and whether it needs to be made or not. The command structure in the code is:

dsadm: command: action
dsconf: connect to DS: command: action

So dsconf takes advantage of all the commands being remote, so it shares common connection code. If we were to make the tools "one" we would need to make a decorator or something to repeat this, and there are some other issues there with the way that the argparse library works.

I think this is an arbitrary distinction - needing a connection or not - but other projects use a similar "admin client" vs. "more general use client" split, e.g. OpenShift has "oadm" vs. "oc". If this is a pattern that admins are used to, we just need to be consistent in applying that pattern.

Also, I think, the goal should be to make all actions available local and remote; the start/stop/install should be possible remotely via rsh or another mechanism as long as the utilities are available on the target machine, so I propose one dsmanage or 389manage.

dsmanage is an okay name, but remote start/stop is not an easy task. At that point, you are looking at needing to ssh, manage the acceptance of keys, you have to know the remote server ds prefix, you need to ssh as root (bad) or manage sudo (annoying).

We already have the ability to remote stop/start/restart the server, with admin server at least.

Not with systemd we don't. systemd + selinux has broken that for a stack of our products, and at the moment, we are publishing release notes that these don't work in certain cases. And rightly so: ds should not have the rights to touch system services in the way we were doing, it's a huge security risk. To make it work we need to do dbus and polkit magic, and the amount of motivation I have to spend on this problem is low, especially when tools like ansible do it for us, much better. You need to potentially manage selinux, systemd etc.
It gets really complicated, really fast, and at that point I'm going to turn around and say "no, just use ansible if you want to remote manage things". Let's keep these tools as simple as we can, and let things like ansible, which is designed for remote tasks, do their job.

Right, but it will take a lot of work to determine what should be done in ansible vs. a specialized tool.

Not really. An admin will know "okay, if I want to start/stop services I write action: service state=enabled dirsrv@instance". They will also know "well, I want to reconfigure plugins on DS, I use conf389/dsconf". Anything that is yum, a systemd command, etc. is ansible. Anything about installing an instance or 389 specific we do.

I think that is an arbitrary line of demarcation. ansible can be used for a lot more than that.

A better strategy is that we can potentially write a lib389 ansible module in the future allowing us to playbook tasks for DS. I wo
[389-devel] Re: Please review: 48951 dsconf and dsadm foundations
On 08/21/2016 05:28 PM, William Brown wrote: On Fri, 2016-08-19 at 11:21 +0200, Ludwig Krispenz wrote: Hi William, On 08/19/2016 02:22 AM, William Brown wrote: On Wed, 2016-08-17 at 14:53 +1000, William Brown wrote:

https://fedorahosted.org/389/ticket/48951
https://fedorahosted.org/389/attachment/ticket/48951/0001-Ticket-48951-dsadm-and-dsconf-base-files.patch
https://fedorahosted.org/389/attachment/ticket/48951/0002-Ticket-48951-dsadm-and-dsconf-refactor-installer-cod.patch
https://fedorahosted.org/389/attachment/ticket/48951/0003-Ticket-48951-dsadm-and-dsconf-Installer-options-mana.patch
https://fedorahosted.org/389/attachment/ticket/48951/0004-Ticket-48951-dsadm-and-dsconf-Ability-to-unit-test-t.patch
https://fedorahosted.org/389/attachment/ticket/48951/0005-Ticket-48951-dsadm-and-dsconf-Backend-management-and.patch

As a follow up, here is a design / example document: http://www.port389.org/docs/389ds/design/dsadm-dsconf.html

Thanks for this work, it is looking great and is something we were really missing. But of course I have some comments (and I know I am late).

- The naming dsadm and dsconf, and the split of tasks between them, is the same as in Sun/Oracle DSEE, and even if there is probably no legal restriction on using them, I'd prefer to have our own names for our own tools.

Fair enough. There is nothing saying these names are set in stone right now, so if we have better ideas we can change it. I will however say that any command name should not start with numbers (ie 389), and that it should generally be fast to type, easy to remember, and less than 8 chars long if possible. What about "adm389" and "conf389"?

- I'm not convinced of splitting the tasks into two utilities; you will have different actions and options for the different resources/subcommands anyway, so you could have one for all.

The issue is around connection to the server, and whether it needs to be made or not.
The command structure in the code is:

dsadm: command: action
dsconf: connect to DS: command: action

So dsconf takes advantage of all the commands being remote, so it shares common connection code. If we were to make the tools "one" we would need to make a decorator or something to repeat this, and there are some other issues there with the way that the argparse library works.

I think this is an arbitrary distinction - needing a connection or not - but other projects use a similar "admin client" vs. "more general use client" split, e.g. OpenShift has "oadm" vs. "oc". If this is a pattern that admins are used to, we just need to be consistent in applying that pattern.

Also, I think, the goal should be to make all actions available local and remote; the start/stop/install should be possible remotely via rsh or another mechanism as long as the utilities are available on the target machine, so I propose one dsmanage or 389manage.

dsmanage is an okay name, but remote start/stop is not an easy task. At that point, you are looking at needing to ssh, manage the acceptance of keys, you have to know the remote server ds prefix, you need to ssh as root (bad) or manage sudo (annoying).

We already have the ability to remote stop/start/restart the server, with admin server at least.

You need to potentially manage selinux, systemd etc. It gets really complicated, really fast, and at that point I'm going to turn around and say "no, just use ansible if you want to remote manage things". Let's keep these tools as simple as we can, and let things like ansible, which is designed for remote tasks, do their job.

Right, but it will take a lot of work to determine what should be done in ansible vs. a specialized tool. A better strategy is that we can potentially write a lib389 ansible module in the future allowing us to playbook tasks for DS. I would like to see ansible playbooks for 389. Ansible is python, so we can leverage python-ldap/lib389 instead of having to fork/exec ldapsearch/ldapmodify.
This is why I kept them separate: I wanted to have simple, isolated domains in the commands for actions, that let us know clearly what we are doing. It's still an open discussion though. If this is a common pattern that admins are used to, then we should consider it.

- Could this be made interactive? Run the command, providing some or no options, and then have a shell-like env:

dsmanage
>>> help
.. connect
.. create-x
>>> connect -h
... replica-enable

In the current form, no. However, the way I have written it, we should be able to pretty easily replace the command line framework on the front and drop in something that does allow interactive commands like this.

I was thinking: https://github.com/Datera/configshell - this is already in EL, as it's part of the targetcli application.

Think MVC - just make sure you can change the View. I tried to do this with setup-ds.pl - make it possible to "plug in" a different "UI".
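The "shared connection code" argument for keeping dsconf separate can be sketched with argparse itself: a parent parser holds the connection flags once, and every subcommand inherits them, with the connection made before dispatch. This is a hypothetical illustration - the names `connect_instance`, `backend_list`, and the flags are invented for the example, not the lib389 implementation:

```python
import argparse

def connect_instance(args):
    # Stand-in for shared connection setup; a real tool would open an
    # LDAP connection here using the parsed host/port.
    return {"host": args.host, "port": args.port}

def backend_list(conn, args):
    # Example remote action: every dsconf-style subcommand receives the
    # already-established connection rather than wiring it up itself.
    return "backends on %s:%d" % (conn["host"], conn["port"])

def build_parser():
    # Connection flags live on one parent parser, so each subcommand
    # inherits them instead of repeating the wiring (the "decorator or
    # something to repeat this" problem mentioned above).
    conn_parser = argparse.ArgumentParser(add_help=False)
    conn_parser.add_argument("--host", default="localhost")
    conn_parser.add_argument("--port", type=int, default=389)

    parser = argparse.ArgumentParser(prog="dsconf")
    sub = parser.add_subparsers(dest="command", required=True)
    backend = sub.add_parser("backend-list", parents=[conn_parser])
    backend.set_defaults(func=backend_list)
    return parser

def main(argv):
    args = build_parser().parse_args(argv)
    conn = connect_instance(args)   # connection made once, before dispatch
    return args.func(conn, args)
```

A dsadm-style local command would simply omit `parents=[conn_parser]` and skip the connect step, which is the structural split the two tools encode.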
[389-devel] Re: Logging performance improvement
On 06/30/2016 08:14 PM, William Brown wrote: On Thu, 2016-06-30 at 20:01 -0600, Rich Megginson wrote: On 06/30/2016 07:52 PM, William Brown wrote: Hi, I've been thinking about this for a while, so I decided to dump my thoughts to a document. I think I won't get to implementing this for a while, but it would really help our server performance. http://www.port389.org/docs/389ds/design/logging-performance-improvement.html

Looks good. Can we quantify the current log overhead?

Sure, I could probably sit down and work out a way to benchmark this, but without the alternative being written, it's hard to say. I could always patch out logging and drop the lock in a hacked build so we can show what "without logging contention" looks like?

That's only one part of it - you'd have to figure out some way to get rid of the overhead of the formatting and flushing in the operation threads too. I suppose you could just write it and see what happens.

-- 389-devel mailing list 389-devel@lists.fedoraproject.org https://lists.fedoraproject.org/admin/lists/389-devel@lists.fedoraproject.org
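The overhead being discussed has two parts: contention on a shared log lock, and the formatting/flushing done on the operation threads themselves. One common way to remove both - sketched here from first principles, not necessarily the design in the linked document - is to have operation threads only enqueue raw events, while a single writer thread formats and flushes:

```python
import queue
import threading

class AsyncLog:
    """Operation threads enqueue; one writer thread formats and flushes.
    Sketch only: a real server would bound the queue, batch writes, and
    handle overflow/backpressure."""

    def __init__(self, sink):
        self.q = queue.Queue()
        self.sink = sink          # any object with a write() method
        self.writer = threading.Thread(target=self._drain, daemon=True)
        self.writer.start()

    def log(self, conn_id, msg):
        # Cheap for the caller: no formatting, no I/O, no shared file lock.
        self.q.put((conn_id, msg))

    def _drain(self):
        while True:
            item = self.q.get()
            if item is None:      # shutdown sentinel
                break
            conn_id, msg = item
            # Formatting and flushing happen off the hot path, serialized
            # by virtue of there being exactly one writer thread.
            self.sink.write("conn=%d %s\n" % (conn_id, msg))

    def close(self):
        self.q.put(None)
        self.writer.join()
```

This is also easy to benchmark against a lock-per-write logger, which is one way to quantify the overhead question raised above.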
[389-devel] Re: Logging performance improvement
On 06/30/2016 07:52 PM, William Brown wrote: Hi, I've been thinking about this for a while, so I decided to dump my thoughts to a document. I think I won't get to implementing this for a while, but it would really help our server performance. http://www.port389.org/docs/389ds/design/logging-performance-improvement.html Looks good. Can we quantify the current log overhead? -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://lists.fedoraproject.org/admin/lists/389-devel@lists.fedoraproject.org
Re: [389-devel] Please review: [389 Project] #48257: Fix coverity issues - 08/24/2015
On 11/05/2015 05:09 PM, Noriko Hosoi wrote: https://fedorahosted.org/389/ticket/48257 https://fedorahosted.org/389/attachment/ticket/48257/0001-Ticket-48257-Fix-coverity-issues-08-24-2015.patch Once this ticket is closed, is it okay to respin nunc_stans, which is going to be version 0.1.6? Yes. After every "batch" of commits to nunc-stans the version should be bumped, where a "batch" can be a single commit if no other commits are planned for the immediate future. Current: rpm/389-ds-base.spec.in:%global nunc_stans_ver 0.1.5 -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] Please comment: [389 Project] #48285: The dirsrv user/group should be created in rpm %pre, and ideally with fixed uid/gid
On 10/21/2015 12:20 PM, Noriko Hosoi wrote: Thanks to William for reviewing the patch. I'm going to push it. But before doing so, I have a question regarding the autogen files. The proposed patch requires rerunning autogen.sh and pushing the generated files to git. My current env has automake 1.15, and it generates large diffs, as attached to this email:

-# Makefile.in generated by automake 1.13.4 from Makefile.am.
+# Makefile.in generated by automake 1.15 from Makefile.am.

Is it okay to push the attached patch 0002-Ticket-48285-The-dirsrv-user-group-should-be-created.patch to git, or do we prefer to keep the diff minimal by running autogen on a host having the same version of automake (1.13.4)?

We should confirm that the generated configure script runs on el7.

Thanks, --noriko

On 10/20/2015 05:48 PM, Noriko Hosoi wrote: https://fedorahosted.org/389/ticket/48285 https://fedorahosted.org/389/attachment/ticket/48285/0001-Ticket-48285-The-dirsrv-user-group-should-be-created.patch git patch file (master) -- revised

If these users and groups exist on the system:

/etc/passwd:xdirsrv:x:389:389:389-ds-base:/usr/share/dirsrv:/sbin/nologin
/etc/passwd:dirsrvy:x:390:390:389-ds-base:/usr/share/dirsrv:/sbin/nologin
/etc/group:xdirsrv:x:389:
/etc/group:dirsrvy:x:390:

this pair is supposed to be generated:

/etc/passwd:dirsrv:x:391:391:389-ds-base:/usr/share/dirsrv:/sbin/nologin
/etc/group:dirsrv:x:391:

-- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] [lib389] Deref control advice needed
On 09/02/2015 10:35 AM, thierry bordaz wrote: On 08/27/2015 02:31 AM, Rich Megginson wrote: On 08/26/2015 03:28 AM, William Brown wrote: In relation to ticket 47757, I have started work on a deref control for Noriko. The idea is to get it working in lib389, then get it upstreamed into pyldap. At this point it's all done, except that the actual request control doesn't appear to work. Could one of the lib389 / ldap python experts cast their eye over this and let me know where I've gone wrong? I have improved this, but am having issues with the asn1spec for BER decoding. I have attached the updated patch, but specifically the issue is in _controls.py. I would appreciate if anyone could take a look at this, and let me know if there is something I have missed.

Not sure, but here is some code I did without using pyasn: https://github.com/richm/scripts/blob/master/derefctrl.py This is quite old by now, and is probably bit-rotted with respect to python-ldap and python3.

Old!! But it worked like a charm for me. I just had to make this modification because of a change in python-ldap, IIRC.

OK. But I would rather use William's version, which is based on pyasn1 - it hurts my brain to hand-code BER . . .
diff derefctrl.py /tmp/derefctrl_orig.py
0a1
>
151,152c152
< self.criticality,self.derefspeclist,self.entry = criticality,derefspeclist or [],None
< #LDAPControl.__init__(self,DerefCtrl.controlType,criticality,derefspeclist)
---
> LDAPControl.__init__(self,DerefCtrl.controlType,criticality,derefspeclist)
154c154
< def encodeControlValue(self):
---
> def encodeControlValue(self,value):
156c156
< for (derefattr,attrs) in self.derefspeclist:
---
> for (derefattr,attrs) in value:

"""
controlValue ::= SEQUENCE OF derefRes DerefRes

DerefRes ::= SEQUENCE {
    derefAttr AttributeDescription,
    derefVal LDAPDN,
    attrVals [0] PartialAttributeList OPTIONAL }

PartialAttributeList ::= SEQUENCE OF partialAttribute PartialAttribute
"""

class DerefRes(univ.Sequence):
    componentType = namedtype.NamedTypes(
        namedtype.NamedType('derefAttr', AttributeDescription()),
        namedtype.NamedType('derefVal', LDAPDN()),
        namedtype.OptionalNamedType('attrVals', PartialAttributeList()),
    )

class DerefResultControlValue(univ.SequenceOf):
    componentType = DerefRes()

def decodeControlValue(self, encodedControlValue):
    self.entry = {}
    #decodedValue,_ = decoder.decode(encodedControlValue,asn1Spec=DerefResultControlValue())
    # Gets the error: TagSet(Tag(tagClass=0, tagFormat=32, tagId=16), Tag(tagClass=128, tagFormat=32, tagId=0)) not in asn1Spec: {TagSet(Tag(tagClass=0, tagFormat=32, tagId=16)): PartialAttributeList()}/{}
    decodedValue,_ = decoder.decode(encodedControlValue)
    print(decodedValue.prettyPrint())
    # Pretty print yields
    # Sequence:                       <-- Sequence of
    #   Sequence:                     <-- derefRes
    #     uniqueMember                <-- derefAttr
    #     uid=test,dc=example,dc=com  <-- derefVal
    #     Sequence:                   <-- attrVals
    #       uid
    #       Set:
    #         test
    # For now, while asn1spec is sad, we'll just rely on it being well formed
    # However, this isn't good, as without the asn1spec, we seem to actually be dropping values
    for result in decodedValue:
        derefAttr, derefVal, _ = result
        self.entry[str(derefAttr)] = str(derefVal)

-- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] [lib389] Deref control advice needed
On 08/26/2015 03:28 AM, William Brown wrote: In relation to ticket 47757, I have started work on a deref control for Noriko. The idea is to get it working in lib389, then get it upstreamed into pyldap. At this point it's all done, except that the actual request control doesn't appear to work. Could one of the lib389 / ldap python experts cast their eye over this and let me know where I've gone wrong? I have improved this, but am having issues with the asn1spec for BER decoding. I have attached the updated patch, but specifically the issue is in _controls.py. I would appreciate if anyone could take a look at this, and let me know if there is something I have missed.

Not sure, but here is some code I did without using pyasn: https://github.com/richm/scripts/blob/master/derefctrl.py This is quite old by now, and is probably bit-rotted with respect to python-ldap and python3.

controlValue ::= SEQUENCE OF derefRes DerefRes

DerefRes ::= SEQUENCE {
    derefAttr AttributeDescription,
    derefVal LDAPDN,
    attrVals [0] PartialAttributeList OPTIONAL }

PartialAttributeList ::= SEQUENCE OF partialAttribute PartialAttribute

class DerefRes(univ.Sequence):
    componentType = namedtype.NamedTypes(
        namedtype.NamedType('derefAttr', AttributeDescription()),
        namedtype.NamedType('derefVal', LDAPDN()),
        namedtype.OptionalNamedType('attrVals', PartialAttributeList()),
    )

class DerefResultControlValue(univ.SequenceOf):
    componentType = DerefRes()

def decodeControlValue(self, encodedControlValue):
    self.entry = {}
    #decodedValue,_ = decoder.decode(encodedControlValue,asn1Spec=DerefResultControlValue())
    # Gets the error: TagSet(Tag(tagClass=0, tagFormat=32, tagId=16), Tag(tagClass=128, tagFormat=32, tagId=0)) not in asn1Spec: {TagSet(Tag(tagClass=0, tagFormat=32, tagId=16)): PartialAttributeList()}/{}
    decodedValue,_ = decoder.decode(encodedControlValue)
    print(decodedValue.prettyPrint())
    # Pretty print yields
    # Sequence:                               -- Sequence of
    #   no-name=Sequence:                     -- derefRes
    #     no-name=uniqueMember                -- derefAttr
    #     no-name=uid=test,dc=example,dc=com  -- derefVal
    #     no-name=Sequence:                   -- attrVals
    #       no-name=uid
    #       no-name=Set:
    #         no-name=test
    # For now, while asn1spec is sad, we'll just rely on it being well formed
    # However, this isn't good, as without the asn1spec, we seem to actually be dropping values
    for result in decodedValue:
        derefAttr, derefVal, _ = result
        self.entry[str(derefAttr)] = str(derefVal)

-- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
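For readers unfamiliar with what the pyasn1 decoder is doing above: a BER value is just a tag byte, a length, then the content, nested recursively. A minimal hand-rolled parser makes the SEQUENCE-of-SEQUENCE shape of the deref response visible without pyasn1. This is a sketch under simplifying assumptions - short-form lengths only, and the byte layout below is hand-built for illustration, not captured from a real server:

```python
def parse_tlv(data, offset=0):
    """Parse one BER tag-length-value at offset.
    Returns (tag, value_bytes, next_offset).
    Short-form lengths only (content < 128 bytes) -- enough for a sketch."""
    tag = data[offset]
    length = data[offset + 1]
    start = offset + 2
    return tag, data[start:start + length], start + length

def parse_deref_res(content):
    """A derefRes is SEQUENCE { derefAttr, derefVal, attrVals OPTIONAL };
    here we read the two mandatory string fields and ignore attrVals."""
    _, attr, off = parse_tlv(content, 0)
    _, val, _ = parse_tlv(content, off)
    return attr.decode(), val.decode()

def encode_tlv(tag, content):
    # Helper to build example input; short-form length only.
    return bytes([tag, len(content)]) + content

# Hypothetical control value: SEQUENCE OF a single derefRes.
# 0x30 = constructed SEQUENCE, 0x04 = OCTET STRING (stand-in for the
# AttributeDescription/LDAPDN string types).
attr = encode_tlv(0x04, b"uniqueMember")                    # derefAttr
val = encode_tlv(0x04, b"uid=test,dc=example,dc=com")       # derefVal
deref_res = encode_tlv(0x30, attr + val)                    # derefRes SEQUENCE
control_value = encode_tlv(0x30, deref_res)                 # SEQUENCE OF
```

Walking `control_value` with `parse_tlv` reproduces exactly the nesting that `prettyPrint()` shows in the thread: an outer SEQUENCE OF, an inner derefRes SEQUENCE, then the attribute and DN values.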
Re: [389-devel] Review of plugin code
On 08/07/2015 05:18 PM, William Brown wrote: On Thu, 2015-08-06 at 14:25 -0700, Noriko Hosoi wrote: Hi William, Very interesting plug-in!

Thanks. As a plugin, its value is quite useless due to the nsDS5ReplicaType flags. But it's a nice simple exercise to get one's head around how the plugin architecture works from scratch. It's one thing to patch a plugin, compared to writing one from nothing.

Regarding the betxn plug-in, it is for putting the entire operation -- the primary update + associated updates by the enabled plug-ins -- in one transaction. By doing so, the entire set of updates is committed to the DB if and only if all of the updates are successful. Otherwise, all of them are rolled back. That guarantees there will be no inconsistency among entries.

Okay, so if I can be a pain, how does betxn handle reads? Do reads come from within the transaction?

Yes.

Or is there a way to read from the database outside the transaction? Say, for example:

begin
add some object Y
read Y
commit

Does read Y see the object within the transaction?

Yes.

Is there a way to make the search happen so that it occurs outside the transaction, i.e. it doesn't see Y?

Not a nested search operation. A nested search operation will always use the parent/context transaction. In that sense, your read-only plug-in is not a good example for betxn since it does not do any updates. :) Considering the purpose of the read-only plug-in, invoking it at the pre-op timing (before the transaction) would be the best.

Very true! I kind of knew what betxn did, but I wanted to confirm more completely in my mind. So I think what my read-only plugin does at the moment works quite nicely outside of betxn. Is there a piece of documentation (perhaps the plugin guide) that lists the order in which these operations are called?
Not sure, but in general it is:

incoming operation from client
front end processing
preoperation
call backend
bepreoperation
start transaction
betxnpreoperation
do operation in the database
betxnpostoperation
end transaction
bepostoperation
return from backend
send result to client
postoperation

Since MEP requires the updates on the DB, it's supposed to be called in betxn. That way, what was done in the MEP plug-in is committed or rolled back together with the primary updates.

Makes sense.

The toughest part is the deadlock prevention. At the start of the transaction, it holds a DB lock. And most plug-ins maintain their own mutexes to protect their resources. It'd easily cause a deadlock situation, especially when multiple plug-ins are enabled (which is common :). So, please be careful not to acquire/free locks in the wrong order...

Of course. This is always an issue in multi-threaded code and anything with locking. Stress tests are probably good to find these deadlocks, no?

Yes. There is some code in dblayer.c that will stress the transaction code by locking/unlocking many db pages concurrently with external operations. https://git.fedorahosted.org/cgit/389/ds.git/tree/ldap/servers/slapd/back-ldbm/dblayer.c#n210 https://git.fedorahosted.org/cgit/389/ds.git/tree/ldap/servers/slapd/back-ldbm/dblayer.c#n4131

About your commented out code in read_only.c, I guess you copied the part from mep.c and are wondering what it is for? There are various types of plug-ins.
$ egrep nsslapd-pluginType dse.ldif | sort | uniq
nsslapd-pluginType: accesscontrol
nsslapd-pluginType: bepreoperation
nsslapd-pluginType: betxnpostoperation
nsslapd-pluginType: betxnpreoperation
nsslapd-pluginType: database
nsslapd-pluginType: extendedop
nsslapd-pluginType: internalpreoperation
nsslapd-pluginType: matchingRule
nsslapd-pluginType: object
nsslapd-pluginType: preoperation
nsslapd-pluginType: pwdstoragescheme
nsslapd-pluginType: reverpwdstoragescheme
nsslapd-pluginType: syntax

The reason why slapi_register_plugin and slapi_register_plugin_ext were implemented was:

/*
 * Allows a plugin to register a plugin.
 * This was added so that 'object' plugins could register all
 * the plugin interfaces that it supports.
 */

On the other hand, MEP has this type:

nsslapd-pluginType: betxnpreoperation

The type is not object, but the MEP plug-in is implemented as having that type. Originally, it might have been object... Then we introduced the support for betxn. To make the transition to betxn smooth, we put in code to check whether betxn is in the type. If there is betxn, as in betxnpreoperation, we call the plug-in in betxn; otherwise we call it outside of the transaction. Having the switch in the configuration, we could go back to the original position without rebuilding the plug-in. Since we do not go back to the pre-betxn era, the switch may not be too important. But keeping it would be a good idea for consistency with the other plug-ins.

Does this answer your question? Please feel free to let us know if it does not.

That answers some of
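The ordering listed above is the key property: betxn plug-in work happens inside the transaction, so it commits or rolls back together with the primary update. A toy simulation of that guarantee (invented names, not the slapd plugin API - a real backend holds DB locks rather than copying a dict):

```python
class Backend:
    """Toy backend: betxnpreoperation callbacks run inside the transaction,
    so a failure anywhere rolls back the primary update and the plugins'
    side effects together -- the MEP-style consistency guarantee."""

    def __init__(self):
        self.db = {}
        self.betxn_preop = []   # callbacks: fn(txn, dn, entry), may raise

    def add(self, dn, entry):
        txn = dict(self.db)                  # "start transaction": work on a copy
        try:
            for plugin in self.betxn_preop:  # betxnpreoperation, inside the txn
                plugin(txn, dn, entry)
            txn[dn] = entry                  # do operation in the database
        except Exception:
            return False                     # abort: self.db untouched (rollback)
        self.db = txn                        # "end transaction": commit atomically
        return True
```

With a MEP-like plugin that adds a managed entry, a later failing plugin causes both the primary entry and the managed entry to vanish, exactly because they lived in the same transaction.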
[389-devel] Please review: Ticket #48224 - redux 2 - logconv.pl should handle *.tar.xz, *.txz, *.xz log files
https://fedorahosted.org/389/attachment/ticket/48224/0001-Ticket-48224-redux-2-logconv.pl-should-handle-.tar.x.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #48224 - logconv.pl should handle *.tar.xz, *.txz, *.xz log files
https://fedorahosted.org/389/attachment/ticket/48224/0001-Ticket-48224-logconv.pl-should-handle-.tar.xz-.txz-..patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] 389 performance testing tools
No readme yet, but here are the scripts: https://github.com/richm/389-perf-test -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: nunc-stans: Ticket #33 - coverity - 13178 Explicit null dereferenced
https://fedorahosted.org/nunc-stans/attachment/ticket/33/0001-Ticket-33-coverity-13178-Explicit-null-dereferenced.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: nunc-stans: Tickets 34-38 - coverity 13179-13183 - Dereference before NULL check
https://fedorahosted.org/nunc-stans/attachment/ticket/34/0001-Tickets-34-38-coverity-13179-13183-Dereference-befor.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #48122 nunc-stans FD leak
https://fedorahosted.org/389/attachment/ticket/48122/0001-Ticket-48122-nunc-stans-FD-leak.2.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Take 2: Please review: Ticket #48178 add config param to enable nunc-stans
Fixed problem with previous patch. https://fedorahosted.org/389/attachment/ticket/48178/0001-Ticket-48178-add-config-param-to-enable-nunc-stans.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #48178 add config param to enable nunc-stans
https://fedorahosted.org/389/attachment/ticket/48178/0001-Ticket-48178-add-config-param-to-enable-nunc-stans.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] [389-users] GUI console and Kerberos
On 03/11/2015 11:54 AM, Paul Robert Marino wrote: Hey everyone, I have a question. I know at least once in the past I set up the admin console so it could utilize Kerberos passwords, based on a howto I found once which, after I changed jobs, I could never find again. Today I was looking for something else and I saw a mention on the site about httpd needing to be compiled with http auth support. Well, I did a little digging and I found this file, /etc/dirsrv/admin-serv/admserv.conf. In that file I found a lot of entries that look like this:

<LocationMatch /*/[tT]asks/[Cc]onfiguration/*>
AuthUserFile /etc/dirsrv/admin-serv/admpw
AuthType basic
AuthName Admin Server
Require valid-user
AdminSDK on
ADMCgiBinDir /usr/lib64/dirsrv/cgi-bin
NESCompatEnv on
Options +ExecCGI
Order allow,deny
Allow from all
</LocationMatch>

When I checked /etc/dirsrv/admin-serv/admpw, sure enough I found the password hash for the admin user. So my question is, before I waste time experimenting: could it possibly be as simple as changing the auth type to Kerberos? http://modauthkerb.sourceforge.net/configure.html

I don't know. I don't think anyone has ever tried it.

Keep in mind my Kerberos servers do not use LDAP as the backend.

-- 389 users mailing list 389-us...@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-users -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review #2: Ticket #48105 create tests - tests subdir - test server/client programs - test scripts
I found a bug with my previous patch. https://fedorahosted.org/389/attachment/ticket/48105/0001-Ticket-48105-create-tests-tests-subdir-test-server-c.2.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #48105 create tests - tests subdir - test server/client programs - test scripts
https://fedorahosted.org/389/attachment/ticket/48105/0001-Ticket-48105-create-tests-tests-subdir-test-server-c.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #48106 create code doc with doxygen
https://fedorahosted.org/389/attachment/ticket/48106/0001-Ticket-48106-create-code-doc-with-doxygen.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] No rule to make target dbmon.sh
On 02/26/2015 04:31 PM, William wrote: On latest master I am getting: make[1]: *** No rule to make target `ldap/admin/src/scripts/dbmon.sh', needed by `all-am'. Stop. Did a git clean -f -x -d followed by autoreconf -i; ./configure --with-openldap --prefix=/srv --enable-debug Not sure, but you should not need to run autoreconf unless you are changing one of the autoconf files. There is an autogen.sh script for this purpose instead of using autoreconf directly. What am I missing? -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] we now have epel7 branches for fedpkg . . .
. . . but it looks like we are gated by rhel7.1 - waiting for TLS 1.1 fixes/packages to show up with rhel 7.1 -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] No more spam from jenkins
I don't know what's wrong with jenkins. I tried to fix it, but I cannot figure out what the problem is. In the meantime, I have disabled it, so no more spam. Sorry for the spam. -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] if you tag a release, please release a tarball too
On 09/19/2014 01:15 AM, Timo Aaltonen wrote: On 19.09.2014 09:33, Timo Aaltonen wrote: Hi 1.3.3.3 is tagged in git since a week ago, but there's no tarball for it. Dunno if you have scripts for the release dance, but if you do please include the tarball build to it so it's not a manual thing to remember every time ;) I'll roll back to 1.3.3.2 in the meantime.. oh well, 1.3.3.2 tarball doesn't match the tag: tarball doesn't have 55e317f2a5d8fc488e76f2b4155298a45d25 nor 0363fa49265c0c27d510064cea361eb400802548 and ldap/servers/slapd/ssl.c has a diff to the comments of the cipher mess (from 58cb12a7b8cf9), and VERSION.sh on the tarball still has 'VERSION_PREREL=.a1' (should be gone in fefa20138b6a3a) so I don't know where the tarball was built from, this isn't cool.. Yep, we screwed up, sorry about that. I've just uploaded a new 1.3.3.3 release, and the sources page with the new checksum is building. -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] Please review: Fix slapi_td_plugin_lock_init prototype
On 09/15/2014 05:20 AM, Petr Viktorin wrote:

diff --git a/ldap/servers/slapd/slapi-plugin.h b/ldap/servers/slapd/slapi-plugin.h
index f1ecfe8..268e465 100644
--- a/ldap/servers/slapd/slapi-plugin.h
+++ b/ldap/servers/slapd/slapi-plugin.h
@@ -5582,7 +5582,7 @@ void slapi_td_get_val(int indexType, void **value);
 int slapi_td_dn_init(void);
 int slapi_td_set_dn(char *dn);
 void slapi_td_get_dn(char **dn);
-int slapi_td_plugin_lock_init();
+int slapi_td_plugin_lock_init(void);
 int slapi_td_set_plugin_locked(int *value);
 void slapi_td_get_plugin_locked(int **value);

Thanks - https://fedorahosted.org/389/ticket/47899 -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] Please review: Fix slapi_td_plugin_lock_init prototype
On 09/15/2014 07:28 PM, Petr Viktorin wrote: On 09/15/2014 09:06 PM, Rich Megginson wrote: On 09/15/2014 05:20 AM, Petr Viktorin wrote:

diff --git a/ldap/servers/slapd/slapi-plugin.h b/ldap/servers/slapd/slapi-plugin.h
index f1ecfe8..268e465 100644
--- a/ldap/servers/slapd/slapi-plugin.h
+++ b/ldap/servers/slapd/slapi-plugin.h
@@ -5582,7 +5582,7 @@ void slapi_td_get_val(int indexType, void **value);
 int slapi_td_dn_init(void);
 int slapi_td_set_dn(char *dn);
 void slapi_td_get_dn(char **dn);
-int slapi_td_plugin_lock_init();
+int slapi_td_plugin_lock_init(void);
 int slapi_td_set_plugin_locked(int *value);
 void slapi_td_get_plugin_locked(int **value);

Thanks - https://fedorahosted.org/389/ticket/47899 Thanks. I read the GIT Rules page on the wiki [0], which mentions patches not associated with a ticket. That is correct. I just wanted to make sure that this did not get lost. If all patches do need a ticket, it would be good to update it. [0] http://www.port389.org/docs/389ds/development/git-rules.html -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47892 coverity defects found in 1.3.3.1
https://fedorahosted.org/389/attachment/ticket/47892/0001-Ticket-47892-coverity-defects-found-in-1.3.3.1.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] Please review lib389: start/stop may hang indefinitely
On 09/05/2014 10:32 AM, thierry bordaz wrote: On 09/05/2014 01:10 PM, thierry bordaz wrote: Detected with testcase 47838, which defines ciphers not recognized during SSL init. The 47838 testcase makes the full test suite hang. Hello, Rich pointed out to me that the indentation was bad in the second part of the fix. I was wrongly using tabs instead of spaces. Here is a better fix. ack thanks thierry -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Redux: Ticket #47692 single valued attribute replicated ADD does not work
Previous fix was incomplete. https://fedorahosted.org/389/attachment/ticket/47692/0001-Ticket-47692-single-valued-attribute-replicated-ADD-.2.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47831 - server restart wipes out index config if there is a default index
https://fedorahosted.org/389/attachment/ticket/47831/0001-Ticket-47831-server-restart-wipes-out-index-config-i.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] single valued attribute update resolution
On 04/25/2014 07:43 AM, Ludwig Krispenz wrote: There are still scenarios where replication can lead to inconsistent states for single valued attributes, which I think has two reasons: - for single valued attributes there are scenarios where modifications applied concurrently cannot be simply resolved without violating the schema - the code to handle single valued attribute resolution is quite complex and has always been extended to resolve reported issues, not to make it simpler. I tried to specify all potential scenarios which should be handled and what the expected consistent state should be. In parallel I am writing a test suite based on the lib389 test framework to provide testcases for all scenarios and then test the current implementation. The doc and test suite can be used as a reference for a potential rework of the update resolution code. Please have a look at: http://port389.org/wiki/Update_resolution_for_single_valued_attributes comments, corrections, additional requirements are welcome - the doc is not final :-) Very nice! Thanks, Ludwig -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Take 2: Ticket #47772 empty modify returns LDAP_INVALID_DN_SYNTAX
https://fedorahosted.org/389/attachment/ticket/47772/0001-Ticket-47772-empty-modify-returns-LDAP_INVALID_DN_SY.2.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47772 empty modify returns LDAP_INVALID_DN_SYNTAX
https://fedorahosted.org/389/attachment/ticket/47772/0001-Ticket-47772-empty-modify-returns-LDAP_INVALID_DN_SY.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47774 mem leak in do_search - rawbase not freed upon certain errors
https://fedorahosted.org/389/attachment/ticket/47774/0001-Ticket-47774-mem-leak-in-do_search-rawbase-not-freed.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] [389-users] Source directory is now list-able
On 04/08/2014 02:24 PM, Timo Aaltonen wrote: On 07.04.2014 21:52, Rich Megginson wrote: http://port389.org/sources is now open and list-able. The default sort order is latest first. The http://port389.org/wiki/Source page has been updated with this link. \o/ many thanks for this :) Sure, it was about time we did this :P Please let us know if there are any issues, or suggested improvements. My apache-fu is not good, perhaps there are some nice mod_autoindex hacks . . . -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Source directory is now list-able
http://port389.org/sources is now open and list-able. The default sort order is latest first. The http://port389.org/wiki/Source page has been updated with this link. -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] [389-users] git repo / tarball issues
On 04/04/2014 10:55 AM, Timo Aaltonen wrote: On 04.04.2014 19:42, Noriko Hosoi wrote: Hi Timo, Timo Aaltonen wrote: 1) 389-ds-console 1.2.7 has no tarball though it was tagged for release in Sep'12 You can download the tar ball from here now. http://port389.org/sources/389-ds-console-1.2.7.tar.bz2 Cool, thanks. It's a broken tarball though, you forgot '/' after the version.. Sorry. I've fixed it... Could you please try it, one more time? Yup, it's fine now. Also, you still need some way to fix the process of how these links get to the webpage too :) Yeah, that's what I thought, too. I searched an existing page on http://directory.fedoraproject.org, but I could not find it. Rich, could there be a good place to put the link(s)? you probably mean this? http://directory.fedoraproject.org/wiki/Source I think Timo (and probably other people who monitor the source tarballs) would like to have a URL to a directory containing the sources, rather than have to have the URL of the file. Then we could just push files to that directory, and he and others could just monitor that directory for new files. Now I see that you have a separate 389-announce list where only the stable releases get announced.. maybe send those to 389-users too? All right. I will do so from the next time. Thanks for your suggestion! great, thanks! -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47492 - PassSync removes User must change password flag on the Windows side
https://fedorahosted.org/389/attachment/ticket/47492/0001-Ticket-47492-PassSync-removes-User-must-change-passw.3.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] Changes for
On 04/01/2014 07:34 AM, Carsten Grzemba wrote: Hi Rich, this breaks the current implementation for posix-winsync: Bug 716980 - winsync uses old AD entry if new one not found https://bugzilla.redhat.com/show_bug.cgi?id=716980 Resolves: bug 716980 Bug Description: winsync uses old AD entry if new one not found Reviewed by: nhosoi (Thanks!) Branch: master Fix Description: Clear out the old raw_entry before doing the search. This will leave a NULL in the raw entry. winsync plugins will need to handle a NULL for the raw_entry and/or ad_entry. At the moment posix_winsync_pre_ds_mod_user_cb returns immediately on raw_entry == NULL. How should the plugin handle the NULL for raw_entry? Not sure. Please reopen that ticket. If it broke posix-winsync, it is likely to break other winsync plugins (e.g. ipa winsync). Carsten -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] Design review: Access control on entries specified in MODDN operation (ticket 47553)
On 02/25/2014 08:28 AM, thierry bordaz wrote: On 02/25/2014 04:17 PM, Rich Megginson wrote: On 02/25/2014 08:14 AM, thierry bordaz wrote: On 02/25/2014 03:46 PM, Rich Megginson wrote: On 02/25/2014 07:42 AM, thierry bordaz wrote: On 02/25/2014 03:34 PM, Rich Megginson wrote: On 02/25/2014 07:24 AM, thierry bordaz wrote: On 02/24/2014 10:47 PM, Noriko Hosoi wrote: Rich Megginson wrote: On 02/24/2014 09:00 AM, thierry bordaz wrote: Hello, the IPA team filed this ticket: https://fedorahosted.org/389/ticket/47553. It requires an ACI improvement so that during a MODDN a given user is only allowed to move an entry from one specified part of the DIT to another specified part of the DIT, without the need to grant the ADD permission. Here is the design of what could be implemented to support this need: http://port389.org/wiki/Access_control_on_trees_specified_in_MODDN_operation regards thierry Since this is not related to any Red Hat internal or customer information, we should move this discussion to the 389-devel list. Hi Thierry, Your design looks good. A minor question: the doc does not mention deny. For instance, in your example DIT, can I allow moddn_to and moddn_from on the top dc=example,dc=com and deny them on cn=tests? Then I could move an entry between cn=accounts and staging, but not to/from cn=tests? Or is deny not supposed to be used there? Thanks, --noriko Hi Noriko, Thanks for having looked at the document. You are right, I missed documenting how a 'DENY' aci would work. I updated the design http://port389.org/wiki/Access_control_on_trees_specified_in_MODDN_operation#ACI_allow.2Fdeny_rights to indicate how DENY rights could be used. By default, if there is no ACI granting 'allow', the operation is rejected. So in that case, without an ACI applicable on 'cn=tests', MODDN to/from 'cn=tests' will not be authorized. Adding a DENY targeting 'cn=tests' would also work, but I think it is not required.
In the example I added, the 'ALLOW' right is granted to a tree (cn=accounts,SUFFIX) except to a subtree of it (cn=except,cn=accounts,SUFFIX). So in order to do a MODDN operation, you need both the moddn_from aci and the moddn_to aci? For example: dn: dc=example,dc=com aci: (target=ldap:///cn=staging,dc=example,dc=com;)(version 3.0; acl MODDN from; allow (moddn_from)) userdn=ldap:///uid=admin_accounts,dc=example,dc=com; ;) If I only have this aci, will it allow anything? That is, if I don't have a (moddn_to) aci somewhere, will this (moddn_from) aci allow me to move anything? Yes, it will allow you to do a MODDN if you are granted the 'ADD' right on the new superior entry. I think this double ACI can be an issue, as freeipa was hoping to use a single ACI, but I have not found a solution to grant move (to/from) in a single aci syntax. I think it is very important to specify both the source and the destination of a MODDN operation. I don't think this will be possible in all cases without having 2 target DNs in a single ACI statement. My concern is that if we have something like: aci: target_rule (version 3.0; acl MODDN control; allow (moddn_to, moddn_from) bind_rule;) and 'target_rule' defines two DNs, then moddn_to/from are granted for both DNs. So in our case, the user would be allowed to move an entry staging->accounts but also accounts->staging. Right. It is necessary to be able to specify moddn_from=DN1 moddn_to=DN2. Ok, yes, that would work. Now I am unsure of the benefit of having a single aci with that new 'target_rule' syntax compared to two acis with the current syntax. I can imagine a performance gain in terms of aci scan and evaluation, but I wonder if there is any other benefit. One problem with having two acis is referential integrity - keeping the pairs in sync with other changes. Having to keep track of two acis is much more than twice as difficult as keeping track of a single aci.
I can appreciate that it will be very difficult to change the aci syntax in such a way as to support two target clauses in a single aci. And it might not be sufficient to simply have aci: (target_from=ldap:///dn_from;)(target_to=ldap:///dn_to;)... although I'm not sure if any of the other target keywords are applicable here - like targetattr, targetfilter, targattrfilter, etc. I sent the design pointer to freeipa-devel as well; I'm sure I will get some comments on that :-) regards thierry -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] Design review: Access control on entries specified in MODDN operation (ticket 47553)
On 02/24/2014 02:47 PM, Noriko Hosoi wrote: Rich Megginson wrote: On 02/24/2014 09:00 AM, thierry bordaz wrote: Hello, the IPA team filed this ticket: https://fedorahosted.org/389/ticket/47553. It requires an ACI improvement so that during a MODDN a given user is only allowed to move an entry from one specified part of the DIT to another specified part of the DIT, without the need to grant the ADD permission. Here is the design of what could be implemented to support this need: http://port389.org/wiki/Access_control_on_trees_specified_in_MODDN_operation regards thierry Since this is not related to any Red Hat internal or customer information, we should move this discussion to the 389-devel list. Hi Thierry, Your design looks good. A minor question: the doc does not mention deny. For instance, in your example DIT, can I allow moddn_to and moddn_from on the top dc=example,dc=com and deny them on cn=tests? Then I could move an entry between cn=accounts and staging, but not to/from cn=tests? Or is deny not supposed to be used there? In which entry do you set these ACIs? Do you set aci: (target=ldap:///cn=staging,dc=example,dc=com;)(version 3.0; acl MODDN from; allow (moddn_from)) userdn=ldap:///uid=admin_accounts,dc=example,dc=com; ;) in the cn=accounts,dc=example,dc=com entry? Do you set aci: (target=ldap:///cn=accounts,dc=example,dc=com;)(version 3.0; acl MODDN to; allow (moddn_to)) userdn=ldap:///uid=admin_accounts,dc=example,dc=com; ;) in the cn=staging,dc=example,dc=com entry? Thanks, --noriko -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
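[Editor's note] For reference, the pair of ACIs Noriko quotes can be collected into a single LDIF fragment. The ACI text is verbatim from the thread; the entry placement shown (moddn_from on cn=accounts, moddn_to on cn=staging) is exactly Noriko's open question, so treat it as illustrative rather than settled:

```ldif
# moddn_from: grants moving entries OUT of cn=staging
dn: cn=accounts,dc=example,dc=com
aci: (target=ldap:///cn=staging,dc=example,dc=com;)(version 3.0; acl MODDN from; allow (moddn_from)) userdn=ldap:///uid=admin_accounts,dc=example,dc=com; ;)

# moddn_to: grants moving entries INTO cn=accounts
dn: cn=staging,dc=example,dc=com
aci: (target=ldap:///cn=accounts,dc=example,dc=com;)(version 3.0; acl MODDN to; allow (moddn_to)) userdn=ldap:///uid=admin_accounts,dc=example,dc=com; ;)
```

Keeping such pairs in sync is the referential-integrity concern Rich raises against the two-aci design.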
Re: [389-devel] mmr.pl deprecated?
On 02/11/2014 08:16 AM, Paul Robert Marino wrote: I would have no problem doing that. The more I look at it, the more I may just use it as a template for creating a new set of Perl based replication management tools. Part of the reason for dropping mmr.pl is Perl :P We have a new project for creating a management framework, including replication management, in python. http://port389.org/wiki/Upstream_test_framework It started out as the basis for a test framework, but it is now separate (lib389). If you are planning to do something from scratch, and you can hack python, I suggest you take a look. I've also been thinking of some other tools I would like to make in addition. For example, I would like to make a password integrity check hook script for Heimdal Kerberos which would utilize 389 server's password change functionality. That way 389 server can manage the password policy, and in addition programs which don't use SASL but instead use the user's password field for authentication can function without having to put the Kerberos database in the LDAP server. I'll send out an email to the user list once I create the github repo. On Mon, Feb 10, 2014 at 11:09 AM, Rich Megginson rmegg...@redhat.com wrote: On 02/09/2014 02:29 PM, Paul Robert Marino wrote: I just noticed on the wiki that it says mmr.pl is deprecated because it is too buggy and has no maintainer. There is no dedicated source code repo for it. There is no place to file bugs/tickets against it. There is no one to fix the bugs/tickets. I'm just curious if there is a punch list of the bugs; I might be willing to tackle them if so. Not that I know of, just various emails/irc messages over the years. I've always found it to be a useful tool, and I prefer not to click my mouse a hundred times when a command line script could do the job.
I'm not saying I would definitely be willing to become the maintainer on a long term basis, because I have too many projects on my plate as it is, just that I would be willing to take some time to do any updates and bug fixes as I have time, and possibly act as a temporary maintainer for a brief period of time. The code is fairly straightforward, and at the least I could make the option parsing a lot more robust and the error messages far more helpful. The coding style is outdated and could probably use prototyping on the subroutines. Furthermore, it might be useful to accept a config file and/or environment variables for some of the information; for example, I hate putting passwords on the command line. I can also see that there are a lot more options that would be nice to be able to tune. But if there is a bug list I would be happy to spend some time on it and see how many I could run through in a short period of time. A quick search of bugzilla.redhat.com didn't seem to show anything as far as I could tell. It would probably be easiest to make a github repo for mmr.pl and use that for tracking changes, bugs/tickets, documentation. -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] mmr.pl deprecated?
On 02/11/2014 08:45 AM, Paul Robert Marino wrote: Sorry, I'm a Perl programmer and a damn good one :-). I never drank the Python Kool-Aid and I don't think I ever will. I fundamentally dislike the language; it's too fragile in the name of enforcing better code formatting practices. Frankly, I've seen many a Python script that was just as bad and ugly as the worst Perl scripts I've seen. Perl has served me well, even though it's not as popular right now. Ok, that's fine. If you're willing to do the work, I have no problem with your choice of language. On Tue, Feb 11, 2014 at 10:27 AM, Rich Megginson rmegg...@redhat.com wrote: On 02/11/2014 08:16 AM, Paul Robert Marino wrote: I would have no problem doing that. The more I look at it, the more I may just use it as a template for creating a new set of Perl based replication management tools. Part of the reason for dropping mmr.pl is Perl :P We have a new project for creating a management framework, including replication management, in python. http://port389.org/wiki/Upstream_test_framework It started out as the basis for a test framework, but it is now separate (lib389). If you are planning to do something from scratch, and you can hack python, I suggest you take a look. I've also been thinking of some other tools I would like to make in addition. For example, I would like to make a password integrity check hook script for Heimdal Kerberos which would utilize 389 server's password change functionality. That way 389 server can manage the password policy, and in addition programs which don't use SASL but instead use the user's password field for authentication can function without having to put the Kerberos database in the LDAP server.
I'll send out an email to the user list once I create the github repo. On Mon, Feb 10, 2014 at 11:09 AM, Rich Megginson rmegg...@redhat.com wrote: On 02/09/2014 02:29 PM, Paul Robert Marino wrote: I just noticed on the wiki that it says mmr.pl is deprecated because it is too buggy and has no maintainer. There is no dedicated source code repo for it. There is no place to file bugs/tickets against it. There is no one to fix the bugs/tickets. I'm just curious if there is a punch list of the bugs; I might be willing to tackle them if so. Not that I know of, just various emails/irc messages over the years. I've always found it to be a useful tool, and I prefer not to click my mouse a hundred times when a command line script could do the job. I'm not saying I would definitely be willing to become the maintainer on a long term basis, because I have too many projects on my plate as it is, just that I would be willing to take some time to do any updates and bug fixes as I have time, and possibly act as a temporary maintainer for a brief period of time. The code is fairly straightforward, and at the least I could make the option parsing a lot more robust and the error messages far more helpful. The coding style is outdated and could probably use prototyping on the subroutines. Furthermore, it might be useful to accept a config file and/or environment variables for some of the information; for example, I hate putting passwords on the command line. I can also see that there are a lot more options that would be nice to be able to tune. But if there is a bug list I would be happy to spend some time on it and see how many I could run through in a short period of time. A quick search of bugzilla.redhat.com didn't seem to show anything as far as I could tell. It would probably be easiest to make a github repo for mmr.pl and use that for tracking changes, bugs/tickets, documentation.
-- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] Please review test cases update with new modules
On 02/11/2014 09:55 AM, thierry bordaz wrote: Some lib389 routines moved or their names changed (schema, tasks, index and plugins): ack -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] mmr.pl deprecated?
On 02/09/2014 02:29 PM, Paul Robert Marino wrote: I just noticed on the wiki that it says mmr.pl is deprecated because it is too buggy and has no maintainer. There is no dedicated source code repo for it. There is no place to file bugs/tickets against it. There is no one to fix the bugs/tickets. I'm just curious if there is a punch list of the bugs; I might be willing to tackle them if so. Not that I know of, just various emails/irc messages over the years. I've always found it to be a useful tool, and I prefer not to click my mouse a hundred times when a command line script could do the job. I'm not saying I would definitely be willing to become the maintainer on a long term basis, because I have too many projects on my plate as it is, just that I would be willing to take some time to do any updates and bug fixes as I have time, and possibly act as a temporary maintainer for a brief period of time. The code is fairly straightforward, and at the least I could make the option parsing a lot more robust and the error messages far more helpful. The coding style is outdated and could probably use prototyping on the subroutines. Furthermore, it might be useful to accept a config file and/or environment variables for some of the information; for example, I hate putting passwords on the command line. I can also see that there are a lot more options that would be nice to be able to tune. But if there is a bug list I would be happy to spend some time on it and see how many I could run through in a short period of time. A quick search of bugzilla.redhat.com didn't seem to show anything as far as I could tell. It would probably be easiest to make a github repo for mmr.pl and use that for tracking changes, bugs/tickets, documentation. -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47692 single valued attribute replicated ADD does not work
https://fedorahosted.org/389/attachment/ticket/47692/0001-Ticket-47692-single-valued-attribute-replicated-ADD-.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] plugin PRE_ENTRY_FN scope
On 01/17/2014 11:20 AM, Deas, Jim wrote: Should I be able to use SLAPI_SEARCH_ATTR to view attributes about to be returned to the client in PRE_ENTRY_FN? Yes, it looks like it, but you have to be prepared for the case where a client does not specify a search attribute list - in this case, the client is asking for all non-operational attributes in the entry. Can I start a new search inside PRE_ENTRY_FN to find values needed to augment the existing attributes being returned? Yes, that should work. However, doing this parsing and internal search for every single entry returned might be a big performance hit. You might want to examine the SLAPI_SEARCH_ATTRS and do the internal search in a SLAPI_PLUGIN_PRE_SEARCH_FN, then store the results in the operation (in an operation extension), then just use those results in your PRE_ENTRY_FN. This is what the deref plugin does. -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] plugin problem using slapi_entry_attr_find
Another bug in your code. The argument for SLAPI_SEARCH_ATTRS should be the address of a char **, e.g.:

{
    char **attrs;
    int ii = 0;
    ...
    if (slapi_pblock_get(pb, SLAPI_SEARCH_ATTRS, &attrs) != 0)
        return (-1);
    for (ii = 0; attrs && attrs[ii]; ++ii) {
        slapi_log_error(SLAPI_LOG_PLUGIN, "my plugin",
                        "search attr %d is %s\n", ii, attrs[ii]);
    }
    ...

In your plugin entry and in your plugin config you specify which types of operations (bind, search, add, etc.) your plugin will handle. E.g. a SLAPI_PLUGIN_PRE_BIND_FN will be called at the pre-operation stage of a BIND operation. Each type of plugin will have possibly different pblock parameters available. So, for example, if you use the same function as both a bind preop and a search preop, then when called as a bind preop, SLAPI_SEARCH_ATTRS will not be available. If you want to use the same function for different op types, declare different functions for each op type, then call your common function with the op type, like this:

int bind_preop(Slapi_PBlock *pb) { return common_function(SLAPI_PLUGIN_PRE_BIND_FN, pb); }
int search_preop(Slapi_PBlock *pb) { return common_function(SLAPI_PLUGIN_PRE_SEARCH_FN, pb); }
...
int common_function(int type, Slapi_PBlock *pb)
{
    ...
    if (type == SLAPI_PLUGIN_PRE_BIND_FN) {
        /* do some bind specific action */
    } else if (type == SLAPI_PLUGIN_PRE_SEARCH_FN) {
        /* do some search specific action */
    }
    ...
}

On 01/16/2014 03:02 PM, Deas, Jim wrote: On further review, it appears that the line in question will crash Dirsrv on some requests from PAM or even 389-Console, but not when searching groups via ldapsearch. Should there be a statement that determines what type of query triggered the preop_result, so I know if it's proper to look for attributes? *From:* 389-devel-boun...@lists.fedoraproject.org [mailto:389-devel-boun...@lists.fedoraproject.org] *On Behalf Of* Rich Megginson *Sent:* Thursday, January 16, 2014 11:29 AM *To:* 389 Directory server developer discussion.
*Subject:* Re: [389-devel] plugin problem using slapi_entry_attr_find On 01/16/2014 11:39 AM, Deas, Jim wrote: My bet, a rookie mistake. Am I forgetting to init a pointer etc??? Adding the line surrounded by ** in this routine makes dirsrv unstable and crashes it after a few queries. /* Registered preop_result routine */ int gnest_preop_results( Slapi_PBlock *pb){ Slapi_Entry *e; Slapi_Attr **a; This should be Slapi_Attr *a; If (slapi_pblock_get( pb, SLAPI_SEARCH_ATTRS, e) != 0) return (-1); /* This line makes the server unstable and crashes it after one or two queries */ If (slapi_entry_attr_find(e, "memberUid", a) == 0) slapi_log_error(SLAPI_LOG_PLUGIN, "gnest preop", "memberUid found in record"); /**/ Return (0); } *JD* -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] plugin problem using slapi_entry_attr_find
On 01/16/2014 04:49 PM, Deas, Jim wrote: I caught the pointer issue after my last post. I misunderstood the process, thinking I had to download a master pblock and then use a reference from that for obtaining values. I am trying to intercept queries that are returning posix group information and make dynamic changes in the memberUid list returned to the client. The purpose is to create a single nested group layer handled on the dirsrv side so existing Linux PAM systems do not need modification to use simple nested groups. *In the database, I have memberUid values preceded by '@' to designate them as group entries instead of people. I.e. memberUid = betty,fred,joan,@accountants for three users plus all users who are part of the group accountants. Process: Capture results. Internal search for any @* as additional groups. Remove @* and add the found subgroups' memberUid values to the existing results. Ok. Then you will want to use a SLAPI_PLUGIN_PRE_ENTRY_FN plugin. I would suggest taking a look at the deref plugin code, which does something very similar - just before an entry is to be returned, the deref plugin adds some extra data to the entry to be returned. deref defines two plugin functions - a SLAPI_PLUGIN_PRE_SEARCH_FN and a SLAPI_PLUGIN_PRE_ENTRY_FN. It is the latter that does the work of adding the extra data to the entry to be returned to the user. -Original Message- From: 389-devel-boun...@lists.fedoraproject.org [mailto:389-devel-boun...@lists.fedoraproject.org] On Behalf Of Nathan Kinder Sent: Thursday, January 16, 2014 3:35 PM To: 389 Directory server developer discussion. Subject: Re: [389-devel] plugin problem using slapi_entry_attr_find On 01/16/2014 03:14 PM, Deas, Jim wrote: Rich, Thanks. I actually did have the address-of operator in the code. Both the init and config are defining only a couple of specific functions (start_fn, pre_results_fn, pre_abandon_fn), one function defined for each.
The one I am testing is preop_results() which does trigger, works as you suggested below, but crashes when adding a call to slapi_entry_attr_find() for many but not all remote inquiries. In the code you shared, you are setting e with this call: slapi_pblock_get( pb, SLAPI_SEARCH_ATTRS, e) The issue here is that e is a Slapi_Entry, but SLAPI_SEARCH_ATTRS doesn't retrieve a Slapi_Entry from the pblock. This means e is incorrect at this point (it will likely have bad pointer values if you look into it in gdb). When you call slapi_entry_attr_find(), it is likely trying to dereference some of these bad pointer values, which leads to the crash. You need to pass a valid Slapi_Entry to this function (if you even need this function). What exactly are you trying to have your plug-in do? Thanks, -NGK Perhaps I am going at this all wrong. What sequence should I call to get a multi-valued attribute? In this case a list of attribute 'memberUid', while rejecting preop_results not directed at returning group information? JD *From:* 389-devel-boun...@lists.fedoraproject.org [mailto:389-devel-boun...@lists.fedoraproject.org] *On Behalf Of* Rich Megginson *Sent:* Thursday, January 16, 2014 2:25 PM *To:* 389 Directory server developer discussion. *Subject:* Re: [389-devel] plugin problem using slapi_entry_attr_find Another bug in your code. The argument for SLAPI_SEARCH_ATTRS should be the address of a char **, e.g. { char **attrs; int ii = 0; ... if (slapi_pblock_get( pb, SLAPI_SEARCH_ATTRS, &attrs) != 0) return (-1); for (ii = 0; attrs && attrs[ii]; ++ii) { slapi_log_error(SLAPI_LOG_PLUGIN, "my plugin", "search attr %d is %s\n", ii, attrs[ii]); } ... In your plugin entry and in your plugin config you specify which types of operations (bind, search, add, etc.) your plugin will handle. E.g. a SLAPI_PLUGIN_PRE_BIND_FN will be called at the pre-operation stage of a BIND operation. Each type of plugin will have possibly different pblock parameters available.
So, for example, if you use the same function as both a bind preop and a search preop - when called as a bind preop, the SLAPI_SEARCH_ATTRS will not be available. If you want to use the same function for different op types, declare different functions for each op type, then call your common function with the op type, like this: int bind_preop(Slapi_PBlock *pb) { return common_function(SLAPI_PLUGIN_PRE_BIND_FN, pb); } int search_preop(Slapi_PBlock *pb) { return common_function(SLAPI_PLUGIN_PRE_SEARCH_FN, pb); } ... int common_function(int type, Slapi_PBlock *pb) { ... if (type == SLAPI_PLUGIN_PRE_BIND_FN) { do some bind specific action } else if (type == SLAPI_PLUGIN_PRE_SEARCH_FN) { do some search specific action } ... On 01/16/2014 03:02 PM, Deas, Jim wrote: On further review it appears that the line in question will crash Dirsrv on some request from PAM or even 389-Console
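The nested-group expansion described in this thread (memberUid values prefixed with '@' naming other posix groups) can be sketched as a small algorithm. This is an illustrative Python model of the expansion logic, not 389 DS plugin code; function and variable names are hypothetical.

```python
def expand_member_uids(group, groups, seen=None):
    """Return the memberUid list for `group` with @subgroup markers
    replaced by the subgroup's own members, recursively."""
    seen = set(seen or ())
    if group in seen:
        return []            # guard against group membership cycles
    seen.add(group)
    members = []
    for uid in groups.get(group, []):
        if uid.startswith("@"):
            # '@accountants' -> recurse into the 'accountants' group
            members.extend(expand_member_uids(uid[1:], groups, seen))
        else:
            members.append(uid)
    # de-duplicate while preserving order
    return list(dict.fromkeys(members))
```

A SLAPI_PLUGIN_PRE_ENTRY_FN implementation would apply the same transformation to the memberUid attribute of each group entry just before it is returned; the recursion guard matters because nothing prevents two groups from naming each other.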
Re: [389-devel] ISO 8601 parser
On 12/23/2013 11:06 AM, Nathaniel McCallum wrote: https://www.redhat.com/archives/freeipa-devel/2013-December/msg00229.html 389ds may be interested in the ISO 8601 parser contained in the patch. It offers two main advantages over the one already contained in the 389ds tree: 1. It is *far* more flexible in what it can parse. 2. It is thoroughly tested (currently ~15k tests). Thanks! https://fedorahosted.org/389/ticket/47658 -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: lib389 - various cleanups
https://fedorahosted.org/389/ticket/47643 -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] Regarding Microsoft announcement: MD5 deprecation on Microsoft Windows
Hi, As you may have noticed, Microsoft has announced the deprecation of the MD5 hashing algorithm for the Microsoft Root Certificate Program. Is there any impact on us because of this decision with regard to the WinSync part? No. Can somebody give me some insight into this? Regards, Jyoti -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47631 objectclass may, must lists skip rest of objectclass once first is found in sup
https://fedorahosted.org/389/attachment/ticket/47631/0001-Ticket-47631-objectclass-may-must-lists-skip-rest-of.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: take 2: Ticket #47623 fix memleak caused by 47347
https://fedorahosted.org/389/attachment/ticket/47623/0001-Ticket-47623-fix-memleak-caused-by-47347.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47623 fix memleak caused by 47347
https://fedorahosted.org/389/attachment/ticket/47623/0001-Ticket-47623-fix-memleak-caused-by-47347.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47300 [RFE] remove-ds-admin.pl: redesign the behaviour
https://fedorahosted.org/389/attachment/ticket/47300/0001-Ticket-47300-RFE-remove-ds-admin.pl-redesign-the-beh.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] Please review lib389 ticket 47575: add test case for ticket47560
On 10/30/2013 12:12 PM, thierry bordaz wrote: On 10/30/2013 06:59 PM, Rich Megginson wrote: On 10/30/2013 10:47 AM, thierry bordaz wrote: Hello, This ticket implements a test case and proposes a layout of the CI tests in the 389-ds. The basic idea is to put CI tests under: head/dirsrvtests/ tickets/ standalone_test.py m1c1_test.py m2_c1_test.py ... Does tickets in this case mean tickets for issues in the 389 trac? Yes, in my mind this directory would contain test cases for 389 tickets. File or directory? I don't understand - is standalone_test.py supposed to be a real ticket? Or will the tickets directory contain files like ticket47424.py, ticket47332.py, etc.? testsuites/ acl_test.py replication_test.py ... For example, test_standalone.py would set up a standalone topology and will contain all ticket test cases that are applicable to a standalone topology. https://fedorahosted.org/389/attachment/ticket/47575/0001-Ticket-47575-CI-test-add-test-case-for-ticket47560.patch So we would just keep adding tests to the single file standalone_test.py, every time we add a test for a trac ticket that deals with a standalone server? Yes, if we have a test case for a ticket_xyz, we may add a new class method: class Test_standAlone(object): def setup(self): ... def teardown(self): ... def test_ticket_xyz(self): def _test_ticket_xyz_setup(): initialization of test case ticket xyz def _test_ticket_xyz_teardown(): cleanup for test case ticket xyz _test_ticket_xyz_setup() test case _test_ticket_xyz_teardown() def test_ticket_abc(self): ... def test_final(self): triggers the cleanup of the standalone instance This won't be in a separate file called ticketXYZ.py? regards thierry -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
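The proposed standalone_test.py layout can be sketched as a runnable skeleton: one class per topology, one test method per trac ticket, with per-ticket setup/teardown helpers nested inside. The class body here uses a plain dict as a stand-in for a real DS instance; the ticket name is a placeholder.

```python
class TestStandalone:
    """One topology per class; one test_ticket_* method per trac ticket."""

    def setup_method(self, method):
        # would create/start the standalone DS instance
        self.instance = {"running": True}

    def teardown_method(self, method):
        # would stop/remove the instance
        self.instance = None

    def test_ticket_xyz(self):
        def _setup():
            # per-ticket initialization
            self.instance["entries"] = ["cn=test"]

        def _teardown():
            # per-ticket cleanup, leaving the instance reusable
            self.instance["entries"] = []

        _setup()
        assert "cn=test" in self.instance["entries"]   # the actual test case
        _teardown()
```

Under py.test, setup_method/teardown_method run around each test automatically, so every ticket test starts from a clean topology without each ticket needing its own file.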
Re: [389-devel] Proof of concept: mocking DS in lib389
On 10/26/2013 12:49 AM, Jan Rusnacko wrote: On 10/25/2013 11:00 PM, Rich Megginson wrote: On 10/25/2013 01:36 PM, Jan Rusnacko wrote: Hello Roberto and Thierry, as I promised, I am sending you a proof-of-concept code that demonstrates, how we can mock DS in unit tests for library function (see attachment). You can run tests just by executing py.test in tests directory. Only 3 files are of interest here: lib389/dsmodules/repl.py - this is a Python module with functions - they expect DS instance as the first argument. Since they are functions, not methods, I can just mock DS and pass that fake one as the first argument to them in unit tests. tests/test_dsmodules/conftest.py - this file contains definition of mock DS class along with py.test fixture, that returns it. tests/test_dsmodules/test_repl.py - this contains unit tests for functions from repl.py. What I do is quite simple - I override ldapadd, ldapdelete .. methods of mock DS class, so that instead of sending command to real DS instance, they just store the data in 'dit' dictionary (which represents content stored in DS). This way, I can check that when I call e.g. function enable_changelog(..), in the end DS will have correct changelog entry. To put it very bluntly - enable_changelog(..) function just adds correct changelog entry to whatever is passed to it as the first argument. In unit tests, it is mock DS, otherwise it would be real DS class that sends real ldap commands to real DS instance behind. 
def test_add_repl_manager(fake_ds_inst_with_repl): ds_inst = fake_ds_inst_with_repl ds_inst.repl.add_repl_manager("cn=replication manager,cn=config", "Secret123") assert ds_inst.dit["cn=replication manager,cn=config"]["userPassword"] == "Secret123" assert ds_inst.dit["cn=replication manager,cn=config"]["nsIdleTimeout"] == 0 assert ds_inst.dit["cn=replication manager,cn=config"]["cn"] == "replication manager" If you are using a real directory server instance, doing add_repl_manager() is going to make a real LDAP ADD request, right? Correct. If you pass a DS with a real ldapadd method that makes real requests, it's going to use that. Will it still update the ds_inst.dit dict? ds_inst.dit is updated in the mocked ldapadd. So in real ldapadd, no. Wouldn't you have to do a real LDAP Search request to get the actual values? Yes, correct. The ds_inst.dit[] call is specific to the mocked DS. But you are right - I could add a fake ldapsearch method that would return entries from the 'dit' dictionary and use that to retrieve entries from the mocked DS. Because, otherwise, you have separate tests for mock DS and real DS? Or perhaps I'm missing something? Now I can successfully test that enable_changelog really works, without going into the trouble of defining DSInstance or ldap calls at all. Also, I believe this approach would work for 95% of all functions in lib389. Another benefit is that unit tests are much faster than on a real DS instance. Sidenote: even though everything is defined in the separate namespace of the 'repl' module as functions, at runtime they can be used as normal methods of class DSInstance. That is handled by DSModuleProxy. We already went through this, but not with Roberto. Hopefully, now with some code in our hands, we will be able to understand each other on this 'mocking' issue and come to conclusions more quickly. Let me know what you think. Thank you, Jan -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
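The mocking approach under discussion can be condensed into a few lines: library functions take the DS instance as their first argument, and the fake instance records writes in a `dit` dict instead of issuing real LDAP operations. This is a minimal sketch of the idea, not actual lib389 code; FakeDS and the attribute values are illustrative.

```python
class FakeDS:
    """Mock DS instance: ldapadd stores entries in `dit` instead of
    sending a real LDAP ADD to a server."""

    def __init__(self):
        self.dit = {}                     # dn -> {attr: value}

    def ldapadd(self, dn, attrs):
        self.dit[dn] = dict(attrs)        # record instead of sending

def add_repl_manager(ds, dn, password):
    # Library function under test: against a real DS instance this
    # would issue a real ADD; against FakeDS it just updates `dit`.
    ds.ldapadd(dn, {"cn": "replication manager",
                    "userPassword": password,
                    "nsIdleTimeout": 0})
```

Because add_repl_manager is a plain function rather than a method, the unit test simply passes the fake instance and then asserts on `dit`, with no real server behind it.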
[389-devel] Please review: Ticket #47478 No groups file? error restarting Admin server
https://fedorahosted.org/389/attachment/ticket/47478/0001-Ticket-47478-No-groups-file-error-restarting-Admin-s.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47300 [RFE] remove-ds-admin.pl: redesign the behaviour
https://fedorahosted.org/389/attachment/ticket/47300/0001-Ticket-47300-RFE-remove-ds-admin.pl-redesign-the-beh.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] fractional replication monitoring proposal
On 10/15/2013 02:41 PM, Mark Reynolds wrote: https://fedorahosted.org/389/ticket/47368 So we run into issues when trying to figure out if replicas are in synch(if those replicas use fractional replication and strip mods). What happens is that an update is made on master A, but due to fractional replication there is no update made to any replicas. So if you look at the ruv in the tombstone entry on each server, it would appear they are out of synch. So using the ruv in the db tombstone is no longer accurate when using fractional replication. So what we really want to know is - When there are differences when comparing RUV max CSN values, how much of the differences are due only to unreplicated operations? I'm proposing a new ruv to be stored in the backend replica entry: e.g. cn=replica,cn=dc=example,dc=com,cn=mapping tree,cn=config. I'm calling this the replicated ruv. So whenever we actually send an update to a replica, this ruv will get updated. Since we can not compare this replicated ruv to the replicas tombstone ruv, we can instead compare the replicated ruv to the ruv in the replica's repl agreement(unless it is a dedicated consumer - here we might be able to still look at the db tombstone ruv to determine the status). Will have to check to see if, on a dedicated consumer, the RUV is updated by internal operations. Problems with this approach: - All the servers need to have the same replication configuration(the same fractional replication policy and attribute stripping) to give accurate results. - If one replica has an agreement that does NOT filter the updates, but has agreements that do filter updates, then we can not correctly determine its synchronization state with the fractional replicas. - Performance hit from updating another ruv(in cn=config)? Yes. We already have a lot of churn in dse.ldif due to - uuid generator - csn generator - updating consumer RUV in each replication agreement. Fractional replication simply breaks our monitoring process. 
I'm not sure, not without updating the repl protocol, that we can cover all deployment scenarios (mixed fractional repl agmts, etc.). However, I think this approach would work for most deployments (compared to none at the moment). For IPA, since they don't use consumers, this approach would work for them. And finally, all of this would have to be handled by an updated version of repl-monitor.pl. This is just my preliminary idea on how to handle this. Feedback is welcome!! Thanks in advance, Mark -- Mark Reynolds 389 Development Team Red Hat, Inc mreyno...@redhat.com -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
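The monitoring comparison the proposal enables can be modeled simply: only operations actually sent to a replica advance the "replicated RUV", so stripped (fractional) changes never make a replica look out of sync. This is an illustrative model with CSNs simplified to integers keyed by replica ID; real CSNs carry a timestamp, sequence number, and replica ID.

```python
def in_sync(replicated_ruv, agreement_ruv):
    """True when, for every replica ID the supplier has actually sent
    updates for, the peer's acknowledged CSN has caught up."""
    return all(agreement_ruv.get(rid, 0) >= csn
               for rid, csn in replicated_ruv.items())
```

Comparing the db tombstone RUV instead of the replicated RUV is exactly what breaks today: an update stripped by fractional replication advances the tombstone RUV on the supplier but is never sent, so the peers appear permanently behind.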
Re: [389-devel] fractional replication monitoring proposal
On 10/16/2013 09:28 AM, Mark Reynolds wrote: On 10/16/2013 11:05 AM, Ludwig Krispenz wrote: On 10/15/2013 10:41 PM, Mark Reynolds wrote: https://fedorahosted.org/389/ticket/47368 So we run into issues when trying to figure out if replicas are in synch (if those replicas use fractional replication and strip mods). What happens is that an update is made on master A, but due to fractional replication there is no update made to any replicas. So if you look at the ruv in the tombstone entry on each server, it would appear they are out of synch. So using the ruv in the db tombstone is no longer accurate when using fractional replication. I'm proposing a new ruv to be stored in the backend replica entry: e.g. cn=replica,cn=dc=example,dc=com,cn=mapping tree,cn=config. I'm calling this the replicated ruv. So whenever we actually send an update to a replica, this ruv will get updated. I don't see how this will help; you have additional info on what has been replicated (which is available on the consumer as well) and you have a max csn, but you don't know if there are outstanding fractional changes to be sent. Well, you will know on master A what operations get replicated (this updates the new ruv before sending any changes), and you can use this ruv to compare against the other master B's ruv (in its replication agreement). Maybe I am missing your point? Do you mean changes that have not been read from the changelog yet? My plan was to update the new ruv in perform_operation() - right after all the stripping has been done and there is something to replicate. We need to have a ruv for replicated operations. I guess there are other scenarios I didn't think of, like if replication is in a backoff state and valid changes are coming in. Maybe we could do test stripping earlier in the replication process (when writing to the changelog?). In general, you can't do this, because you may have fractional replication to some consumers but not to all consumers.
and then update the new ruv there instead of waiting until we try and send the changes. Since we can not compare this replicated ruv to the replica's tombstone ruv, we can instead compare the replicated ruv to the ruv in the replica's repl agreement (unless it is a dedicated consumer - here we might be able to still look at the db tombstone ruv to determine the status). Problems with this approach: - All the servers need to have the same replication configuration (the same fractional replication policy and attribute stripping) to give accurate results. - If one replica has an agreement that does NOT filter the updates, but has agreements that do filter updates, then we can not correctly determine its synchronization state with the fractional replicas. - Performance hit from updating another ruv (in cn=config)? Fractional replication simply breaks our monitoring process. I'm not sure, not without updating the repl protocol, that we can cover all deployment scenarios (mixed fractional repl agmts, etc.). However, I think this approach would work for most deployments (compared to none at the moment). For IPA, since they don't use consumers, this approach would work for them. And finally, all of this would have to be handled by an updated version of repl-monitor.pl. This is just my preliminary idea on how to handle this. Feedback is welcome!! Thanks in advance, Mark -- Mark Reynolds 389 Development Team Red Hat, Inc mreyno...@redhat.com -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47559 hung server - related to sasl and initialize
https://fedorahosted.org/389/attachment/ticket/47559/0001-Ticket-47559-hung-server-related-to-sasl-and-initial.patch Note - I cannot reproduce this problem - so just going on the stack trace. -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: bump autoconf to 2.69, automake to 1.13.4, libtool to 2.4.2
From 62c2689f723e4f2aad69e957f2a9ca584045f74f Mon Sep 17 00:00:00 2001 From: Rich Megginson rmegg...@redhat.com Date: Wed, 9 Oct 2013 17:46:21 -0600 Subject: [PATCH] bump autoconf to 2.69, automake to 1.13.4, libtool to 2.4.2 This also simplifies the process of setting these in the future --- autogen.sh | 108 +++ 1 files changed, 78 insertions(+), 30 deletions(-) diff --git a/autogen.sh b/autogen.sh index 7209d5b..8bb628b 100755 --- a/autogen.sh +++ b/autogen.sh @@ -1,43 +1,91 @@ #!/bin/sh +# set required versions of tools here +# the version is dotted integers like X.Y.Z where +# X, Y, and Z are integers +# comparisons are done using shell -lt, -gt, etc. +# this works if the numbers are zero filled as well +# so 06 == 6 + +# autoconf version required +# need 2.69 or later +ac_need_maj=2 +ac_need_min=69 +# automake version required +# need 1.13.4 or later +am_need_maj=1 +am_need_min=13 +am_need_rev=4 +# libtool version required +# need 2.4.2 or later +lt_need_maj=2 +lt_need_min=4 +lt_need_rev=2 +# should never have to touch anything below this line unless there is a bug +### + +# input +# arg1 - version string in the form X.Y[.Z] - the .Z is optional +# args remaining - the needed X, Y, and Z to match +# output +# return 0 - success - the version string is = the required X.Y.Z +# return 1 - failure - the version string is the required X.Y.Z +# NOTE: All input must be integers, otherwise you will see shell errors +checkvers() { +vers=$1; shift +needmaj=$1; shift +needmin=$1; shift +needrev=$1; shift +verslist=`echo $vers | tr '.' 
' '` +set $verslist +maj=$1; shift +min=$1; shift +rev=$1; shift +if [ $maj -gt $needmaj ] ; then return 0; fi +if [ $maj -lt $needmaj ] ; then return 1; fi +# if we got here, maj == needmaj +if [ -z $needmin ] ; then return 0; fi +if [ $min -gt $needmin ] ; then return 0; fi +if [ $min -lt $needmin ] ; then return 1; fi +# if we got here, min == needmin +if [ -z $needrev ] ; then return 0; fi +if [ $rev -gt $needrev ] ; then return 0; fi +if [ $rev -lt $needrev ] ; then return 1; fi +# if we got here, rev == needrev +return 0 +} + # Check autoconf version -AC_VERSION=`autoconf --version | grep '^autoconf' | sed 's/.*) *//'` -case $AC_VERSION in -'' | 0.* | 1.* | 2.[0-4]* | 2.[0-9] | 2.5[0-8]* ) -echo You must have autoconf version 2.59 or later installed (found version $AC_VERSION). +AC_VERSION=`autoconf --version | sed '/^autoconf/ {s/^.* \([1-9][0-9.]*\)$/\1/; q}'` +if checkvers $AC_VERSION $ac_need_maj $ac_need_min ; then +echo Found valid autoconf version $AC_VERSION +else +echo You must have autoconf version $ac_need_maj.$ac_need_min or later installed (found version $AC_VERSION). exit 1 -;; -* ) -echo Found autoconf version $AC_VERSION -;; -esac +fi # Check automake version -AM_VERSION=`automake --version | grep '^automake' | sed 's/.*) *//'` -case $AM_VERSION in -1.1[0-9]* ) -echo Found automake version $AM_VERSION -;; -'' | 0.* | 1.[0-8]* | 1.9.[0-5]* ) -echo You must have automake version 1.9.6 or later installed (found version $AM_VERSION). +AM_VERSION=`automake --version | sed '/^automake/ {s/^.* \([1-9][0-9.]*\)$/\1/; q}'` +if checkvers $AM_VERSION $am_need_maj $am_need_min $am_need_rev ; then +echo Found valid automake version $AM_VERSION +else +echo You must have automake version $am_need_maj.$am_need_min.$am_need_rev or later installed (found version $AM_VERSION). 
exit 1 -;; -* ) -echo Found automake version $AM_VERSION -;; -esac +fi # Check libtool version -LT_VERSION=`libtool --version | grep ' libtool)' | sed 's/.*) \([0-9][0-9.]*\)[^ ]* .*/\1/'` -case $LT_VERSION in -'' | 0.* | 1.[0-4]* | 1.5.[0-9] | 1.5.[0-1]* | 1.5.2[0-1]* ) -echo You must have libtool version 1.5.22 or later installed (found version $LT_VERSION). +# NOTE: some libtool versions report a letter at the end e.g. on RHEL6 +# the version is 2.2.6b - for comparison purposes, just strip off the +# letter - note that the shell -lt and -gt comparisons will fail with +# test: 6b: integer expression expected if the number to compare +# contains a non-digit +LT_VERSION=`libtool --version | sed '/GNU libtool/ {s/^.* \([1-9][0-9a-zA-Z.]*\)$/\1/; s/[a-zA-Z]//g; q}'` +if checkvers $LT_VERSION $lt_need_maj $lt_need_min $lt_need_rev ; then +echo Found valid libtool version $LT_VERSION +else +echo You must have libtool version $lt_need_maj.$lt_need_min.$lt_need_rev or later installed (found version $LT_VERSION). exit 1 -;; -* ) -echo Found libtool version $LT_VERSION -;; -esac +fi # Run autoreconf echo Running autoreconf -fvi -- 1.7.1 -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
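The checkvers shell function in the patch above compares dotted version strings component-wise, treating a missing trailing component as satisfied and stripping letter suffixes like libtool's "2.2.6b". A Python rendering of the same logic (illustrative, not part of autogen.sh):

```python
import re

def checkvers(version, *needed):
    """Return True if dotted `version` is >= the required components,
    compared left to right. Letters are stripped first (e.g. '2.2.6b'
    -> '2.2.6'), mirroring the note in the shell patch."""
    parts = [int(p) for p in re.sub(r"[a-zA-Z]", "", version).split(".")]
    for have, need in zip(parts, needed):
        if have > need:
            return True      # e.g. major newer: minor/rev don't matter
        if have < need:
            return False
    return True              # equal through all compared components
```

This is why zero-padded components like "06" still compare correctly: they are converted to integers before comparison, just as the shell's -lt/-gt operators treat them numerically.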
[389-devel] Please review: Ticket #222 Admin Express issues Internal Server Error when the Config DS is down.
https://fedorahosted.org/389/attachment/ticket/222/0001-Ticket-222-Admin-Express-issues-Internal-Server-Erro.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47550 logconv: failed logins: Use of uninitialized value in numeric comparison at logconv.pl line 949
https://fedorahosted.org/389/attachment/ticket/47550/0001-Ticket-47550-logconv-failed-logins-Use-of-uninitiali.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] switch to F19 for autogen?
In the interest of reducing the autotool file churn, is everyone ok with switching to using F19 to run autogen? -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #418 Error with register-ds-admin.pl
https://fedorahosted.org/389/attachment/ticket/418/0001-Ticket-418-Error-with-register-ds-admin.pl.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47498 Error Message for Failed to create the configuration directory server
https://fedorahosted.org/389/attachment/ticket/47498/0001-Ticket-47498-Error-Message-for-Failed-to-create-the-.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket 47533 logconv: some stats do not work across server restarts
https://fedorahosted.org/389/attachment/ticket/47533/0001-logconv-some-stats-do-not-work-across-server-restart.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47501 logconv.pl uses /var/tmp for BDB temp files
https://fedorahosted.org/389/attachment/ticket/47501/0001-Ticket-47501-logconv.pl-uses-var-tmp-for-BDB-temp-fi.patch this is the same patch but ignoring whitespace diffs https://fedorahosted.org/389/attachment/ticket/47501/ignore-ws.patch -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
[389-devel] Please review: Ticket #47504 idlistscanlimit per index/type/value
This is the final draft proposed patch https://fedorahosted.org/389/attachment/ticket/47504/0001-Ticket-47504-idlistscanlimit-per-index-type-value.patch Here are the diffs since the last patch: https://fedorahosted.org/389/attachment/ticket/47504/newdiffs The biggest change is to use DataList instead of a linked list for the idlistscanlimit info structures. Testing with valgrind revealed a couple of memory leaks which have been fixed. Some formatting has been improved. Error messages have been improved/fixed. -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
Re: [389-devel] RFC: New Design: Fine Grained ID List Size
On 09/12/2013 07:08 PM, David Boreham wrote: On 9/11/2013 11:41 AM, Howard Chu wrote: Just out of curiosity, why is keeping a count per key a problem? If you're using BDB duplicate key support, can't you just use cursor->c_count() to get this? I.e., BDB already maintains key counts internally, why not leverage that? afaik you need to pass the DB_RECNUM flag at DB creation time to get record counting behavior, and it imposes a performance and concurrency penalty on writes. Also afaik 389DS does not set that flag except on VLV indexes (which need it, and coincidentally were the original reason for the feature being added to BDB). I'm using bdb 4.7 on RHEL 6. Looking at the code, it appears the dbc->count method for btree is __bamc_count() in bt_cursor.c. I'm not sure, but it looks as though this function has to iterate each page counting the duplicates on each page, which makes it a non-starter. Unless I'm mistaken, it doesn't look as though it keeps a counter on each update and then simply returns the counter. I don't see any code which would make the behavior different depending on whether DB_RECNUM is used when the database is created. -- 389-devel mailing list 389-devel@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-devel
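The alternative being weighed in this thread - maintain an explicit per-key ID count on every write rather than walking btree pages at read time - can be sketched with a toy in-memory index. This is an illustrative model of the idea, not 389 DS or BDB code; class and attribute names are hypothetical.

```python
from collections import defaultdict

class CountingIndex:
    """Index keeping a per-key count of entry IDs, updated on writes,
    so query-time sizing (e.g. an idlistscanlimit decision) is O(1)."""

    def __init__(self):
        self.ids = defaultdict(set)      # key -> set of entry IDs
        self.counts = defaultdict(int)   # key -> number of IDs

    def insert(self, key, entry_id):
        if entry_id not in self.ids[key]:
            self.ids[key].add(entry_id)
            self.counts[key] += 1        # small O(1) cost at write time

    def delete(self, key, entry_id):
        if entry_id in self.ids[key]:
            self.ids[key].remove(entry_id)
            self.counts[key] -= 1

    def count(self, key):
        return self.counts[key]          # O(1), no page iteration
```

The trade-off matches the thread: a cursor-based count like __bamc_count() pays at read time by iterating duplicate pages, while a maintained counter shifts a constant cost onto every insert and delete.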
Re: [389-devel] RFC: New Design: Fine Grained ID List Size
On 09/13/2013 02:39 PM, David Boreham wrote: On 9/13/2013 2:18 PM, Rich Megginson wrote: On 09/12/2013 07:08 PM, David Boreham wrote: On 9/11/2013 11:41 AM, Howard Chu wrote: Just out of curiosity, why is keeping a count per key a problem? If you're using BDB duplicate key support, can't you just use cursor->c_count() to get this? I.e., BDB already maintains key counts internally, why not leverage that? afaik you need to pass the DB_RECNUM flag at DB creation time to get record counting behavior, and it imposes a performance and concurrency penalty on writes. Also afaik 389DS does not set that flag except on VLV indexes (which need it, and coincidentally were the original reason for the feature being added to BDB). I'm using bdb 4.7 on RHEL 6. Looking at the code, it appears the dbc->count method for btree is __bamc_count() in bt_cursor.c. I'm not sure, but it looks as though this function has to iterate each page counting the duplicates on each page, which makes it a non-starter. Unless I'm mistaken, it doesn't look as though it keeps a counter on each update, then simply returns the counter. I don't see any code which would make the behavior different depending on if DB_RECNUM is used when the database is created. The DB_RECNUM count feature is not accessed via dbc->count() but through the dbc->c_get() call, passing DB_GET_RECNO, positioning at the last key. You do also need to use nested btrees for it to count the dups, afaik (but we're doing that in the DS indexes already, I believe). I wrote a small bdbtest.py script which uses the python bdb interface. https://github.com/richm/scripts/blob/master/bdbtest.py This creates an env, opens a db with bsddb.db.DB_DUPSORT|bsddb.db.DB_RECNUM, adds several non-dup and dup records, opens a cursor and iterates them.
This is the output:

open dbenv in /var/tmp/dbtest
open db /var/tmp/dbtest/dbtest.db4 no txn
records
key=key0 val=data0 extra=('', '\x01\x00\x00\x00')
<snip>
key=key9 val=data9 extra=('', '\n\x00\x00\x00')
key=multikey val=multidata0 extra=('', '\x0b\x00\x00\x00')
<snip>
key=multikey val=multidata9 extra=('', '\x0b\x00\x00\x00')

The extra is the str() output of cur.get(bsddb.db.DB_GET_RECNO). So for all of the dup records, the recno is the same: '\x0b' == 11? I'm probably missing something, but how do I use this to get the number of duplicates?
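For reference, the 4-byte `extra` blob in the output above is the record number packed as a native 32-bit integer (little-endian on the x86 RHEL 6 machine in question). A quick stdlib sketch for decoding it; `decode_recno` is an illustrative helper, not part of bdbtest.py:

```python
import struct

def decode_recno(raw):
    """Decode the 4-byte recno blob shown as `extra` above.

    Assumes a little-endian platform, matching the output in this thread.
    """
    return struct.unpack('<I', raw)[0]

print(decode_recno(b'\x01\x00\x00\x00'))  # key0 -> 1
print(decode_recno(b'\n\x00\x00\x00'))    # key9 -> 10 ('\n' is 0x0a)
print(decode_recno(b'\x0b\x00\x00\x00'))  # every multikey dup -> 11
```

Decoded this way, key0 through key9 report recnos 1 through 10, and all ten multikey duplicates report the same recno, 11, which is the behavior Rich is asking about.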
Re: [389-devel] RFC: New Design: Fine Grained ID List Size
On 09/12/2013 07:39 AM, thierry bordaz wrote:
On 09/10/2013 04:35 PM, Ludwig Krispenz wrote:
On 09/10/2013 04:29 PM, Rich Megginson wrote:
On 09/10/2013 01:47 AM, Ludwig Krispenz wrote:
On 09/09/2013 07:19 PM, Rich Megginson wrote:
On 09/09/2013 02:27 AM, Ludwig Krispenz wrote:
On 09/07/2013 05:02 AM, David Boreham wrote:
On 9/6/2013 8:49 PM, Nathan Kinder wrote:

This is a good idea, and it is something that we discussed briefly off-list. The only downside is that we need to change the index format to keep a count of ids for each key. Implementing this isn't a big problem, but it does mean that the existing indexes need to be updated to populate the count based on their contents (as you mention above).

I don't think you need to do this (I certainly wasn't advocating doing so). The statistics state is much the same as that proposed in Rich's design; in fact you could probably just use that same information. My idea is more about where and how you use the information. All you need is something associated with each index that says "not much point looking here if you're after something specific; move along, look somewhere else instead." This is much the same information as "don't use a high scan limit here."

In the short term, we are looking for a way to improve performance for specific search filters that cannot be modified on the client side (for whatever reason) while leaving the index file format exactly as it is. I still feel that there is potentially great value in keeping a count of ids per key, so we can optimize things on the server side automatically without the need for complex index configuration on the administrator's part. I think we should consider this as an additional future enhancement.

I'm saying the same thing.
Keeping a cardinality count per key is way more than I'm proposing, and I'm not sure how useful that would be anyway, unless you want to do OLAP in the DS ;)

We have the cardinality of the key in old-idl, and this makes some searches fast where parts of the filter are allids.

I'm late in the discussion, but I think Rich's proposal is very promising to address all the problems related to allids in new-idl.

We could then eventually rework filter ordering based on these configurations. Right now we only have a filter ordering based on index type, and we try to postpone >= or similar filters because they are known to be costly, but this could be more elaborate.

An alternative would be to have some kind of index lookup caching. In the example in ticket 47474 the filter is (&(|(objectClass=organizationalPerson)(objectClass=inetOrgPerson)(objectClass=organization)(objectClass=organizationalUnit)(objectClass=groupOfNames)(objectClass=groupOfUniqueNames)(objectClass=group))(c3sUserID=EndUser078458)), and probably only the c3sUserID=x part will change. If we cache the result for the (|(objectClass=... part, then even if it is expensive, it would be computed only once.

Thanks everyone for the comments. I have added Noriko's suggestion: http://port389.org/wiki/Design/Fine_Grained_ID_List_Size

David, Ludwig: does the current design address your concerns, and/or provide the necessary first step for further refinements?

Yes; the topic of filter reordering or caching could be looked at independently. Just one concern about the syntax:

nsIndexIDListScanLimit: maxsize[:indextype][:flag[,flag...]][:value[,value...]]

Since everything is optional, how do you decide, in nsIndexIDListScanLimit: 6:eq:AND, whether AND is a value or a flag? And as it defines limits for specific keys, could the attribute name reflect this, e.g. nsIndexKeyIDListScanLimit or nsIndexKeyScanLimit or ... ?

Thanks - yes, it is ambiguous.
I think it may have to use keyword=value, so something like this:

nsIndexIDListScanLimit: limit=NNN [type=eq[,sub]] [flags=AND[,OR]] [values=val[,val...]]

That should be easy to parse for both humans and machines. For values, we will have to figure out a way to have escapes (e.g. if a value contains a comma or an escape character). I was thinking of using LDAP escapes (e.g. \, or \032).

They should be treated as in filters and normalized; in the config it should be the string representation according to the attributetype.

Hi, I was wondering if this configuration attribute, defined at the index level, could not also be implemented at the bind-base level.

It could be - it would be more difficult to do - you would have to have the nsIndexIDListScanLimit attribute specified in the user entry, and it would have to specify the attribute type, e.g.

dn: uid=admin,
nsIndexIDListScanLimit: limit= attr=objectclass type=eq value=inetOrgPerson

Or perhaps a new attribute - nsIndexIDListScanLimit should not be operational for use in nsIndex, but should be operational for use in a user entry. If an application usually binds with a given entry, it could use its own limitations put for example into operational
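To illustrate why the keyword=value form avoids the ambiguity of the positional 6:eq:AND form, here is a minimal parser sketch for the proposed syntax. This is hypothetical illustration code, not from 389-ds, and it deliberately omits LDAP-escape handling in values:

```python
def parse_scanlimit(config_value):
    """Parse 'limit=NNN [type=...] [flags=...] [values=...]' into a dict.

    Every token names its own keyword, so AND can only be a flag when it
    is written as flags=AND; it can never be mistaken for a value.
    """
    allowed = {'limit', 'type', 'flags', 'values'}
    parsed = {}
    for token in config_value.split():
        key, sep, val = token.partition('=')
        if not sep or key not in allowed:
            raise ValueError('unrecognized token: %r' % token)
        # NOTE: a real parser would have to honor LDAP escapes (e.g. \,)
        # before splitting values on commas; this sketch skips that.
        parsed[key] = int(val) if key == 'limit' else val.split(',')
    if 'limit' not in parsed:
        raise ValueError('limit is required')
    return parsed

print(parse_scanlimit('limit=6 type=eq flags=AND'))
# {'limit': 6, 'type': ['eq'], 'flags': ['AND']}
```

With the positional syntax, the parser for `6:eq:AND` would need extra rules to decide the role of AND; here the role is explicit in every token.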
[389-devel] Please review: Ticket #47455 - valgrind - value mem leaks, uninit mem usage
My earlier fix broke slapi-nis https://fedorahosted.org/389/attachment/ticket/47455/0001-Ticket-47455-valgrind-value-mem-leaks-uninit-mem-usa.2.patch
[389-devel] Please review: fix mem leak in admldapBuildInfoSSL when there is no password
From 41a05f031694a46786ffbd29c61b122fe2d56e3b Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmegg...@redhat.com>
Date: Tue, 13 Aug 2013 10:05:11 -0600
Subject: [PATCH 1/3] fix mem leak in admldapBuildInfoSSL when there is no password

---
 lib/libadmsslutil/admsslutil.c | 1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/lib/libadmsslutil/admsslutil.c b/lib/libadmsslutil/admsslutil.c
index 7ea9eb0..a86c37c 100644
--- a/lib/libadmsslutil/admsslutil.c
+++ b/lib/libadmsslutil/admsslutil.c
@@ -106,6 +106,7 @@ admldapBuildInfoSSL(AdmldapInfo info, int *errorcode)
     } else { /* no password means just punt rather than do anon bind */
         /* this mimics the same logic in admldapBuildInfoCbk() */
+        ldap_unbind_ext(ld, NULL, NULL);
         *errorcode = ADMUTIL_LDAP_ERR;
         return 1; /* have to return true here to mimic admldapBuildInfoCbk() */
     }
--
1.7.1
[389-devel] Please review: Ticket #47413 389-admin fails to build with latest httpd
https://fedorahosted.org/389/attachment/ticket/47413/0001-Ticket-47413-389-admin-fails-to-build-with-latest-ht.patch
[389-devel] Please review: another compiler warning for F20 admin
From 1d7282d6a4c3e773419dcf4cde592ce0b04edd93 Mon Sep 17 00:00:00 2001
From: Rich Megginson <rmegg...@redhat.com>
Date: Fri, 16 Aug 2013 10:51:55 -0600
Subject: [PATCH 2/3] compiler warning - ldif_read_record lineno type depends on openldap version

---
 lib/libdsa/dsalib_confs.c | 16 ++--
 1 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/lib/libdsa/dsalib_confs.c b/lib/libdsa/dsalib_confs.c
index 36d9356..b4a1f4c 100644
--- a/lib/libdsa/dsalib_confs.c
+++ b/lib/libdsa/dsalib_confs.c
@@ -38,6 +38,18 @@
 #include <nspr.h>
 #include <plstr.h>
 
+/* ldif_read_record lineno argument type depends on openldap version */
+#if defined(USE_OPENLDAP)
+#include <ldap_features.h>
+#if LDAP_VENDOR_VERSION >= 20434 /* changed in 2.4.34 */
+typedef unsigned long int ldif_record_lineno_t;
+#else
+typedef int ldif_record_lineno_t;
+#endif
+#else
+typedef int ldif_record_lineno_t;
+#endif
+
 int
 dsalib_ldif_parse_line(
     char *line,
@@ -75,11 +87,11 @@ ds_get_conf_from_file(FILE *conf)
     int listsize = 0;
     char **conf_list = NULL;
     char *entry = 0;
-    int lineno = 0;
-    int i = 0;
 #if defined(USE_OPENLDAP)
     int buflen = 0;
 #endif
+    ldif_record_lineno_t lineno;
+    int i = 0;
 
 #if defined(USE_OPENLDAP)
     while (ldif_read_record(conf, &lineno, &entry, &buflen)) {
--
1.7.1
[389-devel] Please review: Redux: Ticket #47455 - valgrind - value mem leaks, uninit mem usage
I found some more leaks and errors. With this patch, valgrind of the mmr/usn/modify test on both masters is clean. https://fedorahosted.org/389/attachment/ticket/47455/0001-Ticket-47455-valgrind-value-mem-leaks-uninit-mem-usa.patch
[389-devel] Please review: take 3: Ticket #47455 - valgrind - value mem leaks, uninit mem usage
https://fedorahosted.org/389/attachment/ticket/47455/0001-Ticket-47455-valgrind-value-mem-leaks-uninit-mem-usa.patch
[389-devel] Please review: Ticket #47448 - Segfault in 389-ds-base-1.3.1.4-1.fc19 when setting up FreeIPA replication
https://fedorahosted.org/389/attachment/ticket/47448/0001-Ticket-47448-Segfault-in-389-ds-base-1.3.1.4-1.fc19-.patch
[389-devel] Please review: Ticket #47455 - valgrind - value mem leaks, uninit mem usage
https://fedorahosted.org/389/attachment/ticket/47455/slapd.vg.22729 https://fedorahosted.org/389/attachment/ticket/47455/0001-Ticket-47455-valgrind-value-mem-leaks-uninit-mem-usa.patch
[389-devel] Please review: Ticket 47427 - Overflow in nsslapd-disk-monitoring-threshold
https://fedorahosted.org/389/attachment/ticket/47427/0001-Ticket-47427-Overflow-in-nsslapd-disk-monitoring-thr.4.patch
[389-devel] Please review: Ticket #47378 - fix recent compiler warnings
https://fedorahosted.org/389/attachment/ticket/47378/0001-Ticket-47378-fix-recent-compiler-warnings.patch
[389-devel] Please review: Ticket #47377 - make listen backlog size configurable
https://fedorahosted.org/389/attachment/ticket/47377/0001-Ticket-47377-make-listen-backlog-size-configurable.patch
[389-devel] Please review: Ticket #47359 - new ldap connections can block ldaps and ldapi connections
https://fedorahosted.org/389/attachment/ticket/47359/0001-Ticket-47359-new-ldap-connections-can-block-ldaps-an.patch
[389-devel] Please review: take 3: Ticket #47299 - allow cmdline scripts to work with non-root user
https://fedorahosted.org/389/attachment/ticket/47299/0001-Ticket-47299-allow-cmdline-scripts-to-work-with-non-.3.patch