Re: Orphaning 'hub' (the git wrapper for Github)

2020-09-29 Thread Rich Megginson

On 9/29/20 8:55 AM, Stephen Gallagher wrote:

Upstream is slowly dying out, since the official Github CLI[1] was
announced. The latest upstream release doesn't pass the test suite on
Golang 1.15[2] and I don't have the time to spare these days, so I'm
going to orphan it and let someone else take over.

Is there a plan to package `gh` in Fedora?



[1] https://cli.github.com/
[2] https://github.com/github/hub/issues/2616
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org



[389-devel] Re: Future of nunc-stans

2019-10-22 Thread Rich Megginson

On 10/22/19 8:28 PM, William Brown wrote:



I think turbo mode was to try and shortcut returning to the conntable and then blocking 
on the connection poll, because the earlier locking strategies weren't as good. I 
think there is still some value in turbo "for now", but if we can bring in 
libevent its value diminishes, because we become event driven rather than poll driven.


"turbo mode" means "keep reading from this socket as quickly as possible until you 
get EAGAIN/EWOULDBLOCK" i.e. keep reading from the socket as fast as possible as long as there 
is data immediately available.
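
The read-until-EAGAIN behaviour described above can be sketched with a non-blocking pipe standing in for a client socket. This is an illustrative Python sketch, not the actual 389-ds C connection code; the `drain` helper name is made up here.

```python
import os

def drain(fd, bufsize=4096):
    """Keep reading from a non-blocking fd until EAGAIN/EWOULDBLOCK,
    i.e. as long as data is immediately available ("turbo mode")."""
    chunks = []
    while True:
        try:
            data = os.read(fd, bufsize)
        except BlockingIOError:   # EAGAIN/EWOULDBLOCK: nothing buffered right now
            break
        if not data:              # EOF: peer closed the connection
            break
        chunks.append(data)
    return b"".join(chunks)

# demo: a non-blocking pipe stands in for a client socket
r, w = os.pipe()
os.set_blocking(r, False)
os.write(w, b"hello " * 3)
print(drain(r))   # -> b'hello hello hello '
```

The trade-off discussed in the thread is that this loop keeps one worker thread pinned to one busy connection instead of returning it to the poll set.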


Yep that's how I understood it - it's trying to prevent a longer delay until 
it's poll()-ed again.


This is very useful for replication consumers, especially during online init, 
when the supplier is feeding you data as fast as possible.  Otherwise, its 
usefulness is limited to applications where you have a single client hammering 
you with requests, of which test/stress clients form a significant percentage.


Don't you know though, micro-optimising for benchmarks is the new and hip trend.

Joking aside, there probably are situations for now where it's still useful, 
but if we can bring in libevent and be event driven rather than using poll() we 
shouldn't have to worry too much.

Another option is when we hit EAGAIN/EWOULDBLOCK we move the task back to the 
slapi work q rather than re-waiting on it in the poll phase.


+1





—
Sincerely,

William Brown

Senior Software Engineer, 389 Directory Server
SUSE Labs
___
389-devel mailing list -- 389-devel@lists.fedoraproject.org
To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-devel@lists.fedoraproject.org




[389-devel] Re: Future of nunc-stans

2019-10-08 Thread Rich Megginson

On 10/8/19 4:55 PM, William Brown wrote:

Hi everyone,

In our previous catch up (about 4/5 weeks ago when I was visiting Matus/Simon), 
we talked about nunc-stans and getting it at least cleaned up and into the code 
base.

I've been looking at it again, and really thinking about it and reflecting on 
it and I have a lot of questions and ideas now.

The main question is *why* do we want it merged?

Is it performance? Recently I provided a patch that yielded an approximately ~30% 
speed-up in overall server throughput just by changing our existing 
connection code.
Is it features? What features are we wanting from this? We have no complaints 
about our current threading model and thread allocations.
Is it maximum number of connections? We can always change the conntable to a 
better datastructure that would help scale this number higher (which would also 
yield a performance gain).


It is mostly about the c10k problem, trying to figure out a way to use 
epoll, via an event framework like libevent, libev, or libtevent, but in 
a multi-threaded way (at the time none of those were really thread safe, 
or suitable for use in the way we do multi-threading in 389).


It wasn't about performance, although I hoped that using lock-free data 
structures might solve some of the performance issues around thread 
contention, and perhaps using a "proper" event framework might give us 
some performance boost e.g. the idle thread processing using libevent 
timeouts.  I think that using poll() is never going to scale as well as 
epoll() in some cases e.g. lots of concurrent connections, no matter 
what sort of datastructure you use for the conntable.
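
The epoll-scales-better point can be illustrated with Python's selectors module, which uses epoll on Linux; the names and fds here are illustrative, not 389-ds code.

```python
import os
import selectors

# With an event-driven API (epoll underneath selectors.DefaultSelector on
# Linux) the kernel keeps the interest list: each wait returns only the
# ready fds, so the cost tracks activity rather than total connection
# count, unlike poll(), which rescans the whole fd array on every call.
sel = selectors.DefaultSelector()
r, w = os.pipe()                      # a pipe stands in for a client connection
sel.register(r, selectors.EVENT_READ, data="conn-1")

os.write(w, b"ping")
for key, events in sel.select(timeout=1.0):   # only ready fds come back
    print(key.data, os.read(key.fd, 16))      # -> conn-1 b'ping'

sel.unregister(r)
sel.close()
```

With 10,000 registered but idle connections, `sel.select()` still returns only the handful that are ready, which is the c10k argument in miniature.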


As far as features goes - it would be nice to give plugins the ability 
to inject event requests, get timeout events, using the same framework 
as the main server engine.





The more I have looked at the code, I guess with time and experience, the more 
hesitant I am to actually commit to merging it. It was designed by people who 
did not understand low-level concurrency issues and memory architectures of 
systems,


I resemble that remark.  I suppose you could "turn off" the lock-free 
code and use mutexes.



so it's had a huge number of (difficult and subtle) unsafety issues. And while 
most of those are fixed, what it does is duplicate the connection structure 
from core 389,


It was supposed to eventually replace the connection code.


leading to weird solutions like lock sharing and having to use monitors and 
more. We've tried a few times to push forward with this, but each time we end 
up with a lot of complexity and fragility.


So I'm currently thinking a better idea is to step back, re-evaluate what the 
problem is we are trying to solve, then solve *that*.

The question now is "what is the concern that ns would solve". From knowing 
that, we can make a plan and approach it more constructively, I think.


I agree.  There are probably better ways to solve the problems now.



At the end of the day, I'm questioning if we should just rm -r src/nunc-stans 
and rethink this whole approach - there are just too many architectural flaws 
and limitations in ns that are causing us headaches.

Ideas and thoughts?

--
Sincerely,

William
___
389-devel mailing list -- 389-devel@lists.fedoraproject.org
To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-devel@lists.fedoraproject.org




[389-devel] Re: Porting Perl scripts

2019-06-24 Thread Rich Megginson

On 6/24/19 10:00 AM, Mark Reynolds wrote:



On 6/24/19 11:46 AM, Simon Pichugin wrote:

Hi team,
I am working on porting our admin Perl scripts to Python CLI.
Please, check the list and share your opinion:

- cl-dump.pl - dumps and decodes changelog. Is it used often (if at all)?
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#cl_dump.pl_Dump_and_decode_changelog

This is used often actually, and is a good debugging tool.   I think it just 
creates a task, so it should be ported to CLI (added to replication CLI sub 
commands)

- logconv.pl - parses and analyses the access logs. Pretty big one; is it a 
priority? How many people use it?
   issue is created - https://pagure.io/389-ds-base/issue/50283
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#logconv_pl

Does not need to be ported as it's a standalone tool



Would be great to eliminate perl altogether . . . but this one will be tricky 
to port to python . . .



- migrate.pl - which migration scenarios do we plan to support?
   Do we deprecate old ones? Do we need the script?
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#migrate-ds.pl

This script is obsolete IMHO

- ns_accountstatus.pl, ns_inactivate.pl, ns_activate.pl - the issue is
   discussed here - https://pagure.io/389-ds-base/issue/50206
   I think we should extend status at least. Also, William put there some
   of his thoughts. What do you think, guys? Will we refactor
   (kinda deprecate) some "account lock" as William is proposing?
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#ldif2db.pl_Import-ns_accountstatus.pl_Establish_account_status
I will update the ticket, but we need the same functionality of the ns_* tools, 
especially the new status work that went into ns_accountstatus.pl - that all 
came from customer escalations, so we must not lose that functionality.

- syntax-validate.pl - it probably will go to the 'healthcheck' tool
   issue is created - https://pagure.io/389-ds-base/issue/50173
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#syntax-validate.pl

Yes

- repl_monitor.pl - should we make it a part of 'healthcheck' too?
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#repl_monitor.pl_Monitor_replication_status

Yes

Thanks,
Simon



___
389-devel mailing list -- 389-devel@lists.fedoraproject.org
To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-devel@lists.fedoraproject.org




[389-devel] Re: Logging future direction and ideas.

2019-05-10 Thread Rich Megginson

On 5/9/19 9:13 PM, William Brown wrote:

Hi all,

So I think it's time for me to write some logging code to improve the 
situation. Relevant links before we start:

https://pagure.io/389-ds-base/issue/49415
http://www.port389.org/docs/389ds/design/logging-performance-improvement.html
https://pagure.io/389-ds-base/issue/50350
https://pagure.io/389-ds-base/issue/50361


All of these links touch on issues around logging, and I think they all combine 
to create three important points:

* The performance of logging should be improved
* The amount of detail (granularity) and information in logs should improve
* The structure of the log content should be improved to aid interaction 
(possibly even machine parsable)

I will turn this into a design document, but there are some questions I would 
like input on as part of this process, to help set the direction and the tasks 
to achieve it.

-- Should our logs as they exist today, continue to exist?

I think that my view on this is "no". I think if we make something better, we 
have little need to continue to support our legacy interfaces. Of course, this would be a 
large change and it may not sit comfortably with people.

A large part of this thinking is that the "new" log interface I want to add is 
focused on *operations* rather than auditing accesses or changes, or looking at 
errors. The information in the current access/audit/error logs would largely be melded
into a single operation log, and then with tools like logconv, we
could parse and extract information that would behave the same way as 
access/error/audit.

At the same time - I can see how people *may* want a "realtime" audit of operations as 
they occur (i.e. the access log), but even today this is limited by having to "wait" 
for operations to complete.

In a crash scenario, we would be able to still view the logs that are queued, 
so I think there are not so many concerns about losing information in these 
cases (in fact we'd probably have more).

-- What should the operation log look like?

I think it should be structured, and should be whole-units of information, 
related to a single operation, i.e. only at the conclusion of the operation is it
logged (thus the async!). It should support arbitrary, nested timers, and would 
*not* support log levels - it's a detailed log of the process each query goes 
through.

An example could be something like:

[timestamp] - [conn=id op=id] - start operation
[timestamp] - [conn=id op=id] - start time = time ...
[timestamp] - [conn=id op=id] - started internal search '(some=filter)'
[timestamp] - [conn=id op=id parentop=id] - start nested operation
[timestamp] - [conn=id op=id parentop=id] - start time = time ...
...
[timestamp] - [conn=id op=id parentop=id] - end time = time...
[timestamp] - [conn=id op=id parentop=id] - duration = diff end - start
[timestamp] - [conn=id op=id parentop=id] - end nested operation - result -> ...
[timestamp] - [conn=id op=id] - ended internal search '(some=filter)'
...
[timestamp] - [conn=id op=id] - end time = time
[timestamp] - [conn=id op=id] - duration = diff end - start


Due to the structured, blocked nature there would be no interleaving of 
operation messages; therefore the log would appear as:

[timestamp] - [conn=00 op=00] - start operation
[timestamp] - [conn=00 op=00] - start time = time ...
[timestamp] - [conn=00 op=00] - started internal search '(some=filter)'
[timestamp] - [conn=00 op=00 parentop=01] - start nested operation
[timestamp] - [conn=00 op=00 parentop=01] - start time = time ...
...
[timestamp] - [conn=00 op=00 parentop=01] - end time = time...
[timestamp] - [conn=00 op=00 parentop=01] - duration = diff end - start
[timestamp] - [conn=00 op=00 parentop=01] - end nested operation - result -> ...
[timestamp] - [conn=00 op=00] - ended internal search '(some=filter)'
...
[timestamp] - [conn=00 op=00] - end time = time
[timestamp] - [conn=00 op=00] - duration = diff end - start
[timestamp] - [conn=22 op=00] - start operation
[timestamp] - [conn=22 op=00] - start time = time ...
[timestamp] - [conn=22 op=00] - started internal search '(some=filter)'
[timestamp] - [conn=22 op=00 parentop=01] - start nested operation
[timestamp] - [conn=22 op=00 parentop=01] - start time = time ...
...
[timestamp] - [conn=22 op=00 parentop=01] - end time = time...
[timestamp] - [conn=22 op=00 parentop=01] - duration = diff end - start
[timestamp] - [conn=22 op=00 parentop=01] - end nested operation - result -> ...
[timestamp] - [conn=22 op=00] - ended internal search '(some=filter)'
...
[timestamp] - [conn=22 op=00] - end time = time
[timestamp] - [conn=22 op=00] - duration = diff end - start
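
As a rough illustration of the "machine parsable" goal, lines in roughly the shape shown above could be grouped per operation logconv-style. The regex and field names below are adapted from the example and are a sketch, not a proposed tool.

```python
import re
from collections import defaultdict

# Toy parser: group log lines by (conn, op) so an operation's whole
# lifecycle can be pulled out as one unit.
pat = re.compile(
    r"\[(?P<ts>[^\]]+)\] - \[conn=(?P<conn>\d+) op=(?P<op>\d+)[^\]]*\] - (?P<msg>.*)"
)

log = """\
[t0] - [conn=00 op=00] - start operation
[t1] - [conn=22 op=00] - start operation
[t2] - [conn=00 op=00] - duration = 2ms
"""

ops = defaultdict(list)
for line in log.splitlines():
    m = pat.match(line)
    if m:
        ops[(m["conn"], m["op"])].append(m["msg"])

print(ops[("00", "00")])   # -> ['start operation', 'duration = 2ms']
```

Because each message carries its conn/op ids, the same grouping works whether or not operations are written out interleaved.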

An alternate method for structuring could be a machine readable format like 
json:

{
 'timestamp': 'time',
 'duration': ,
 'bind': 'dn of who initiated operation',
 'events': [
 'debug': 'msg',
 'internal_search': {
  'timestamp': 'time',
  'duration': ,
  

[389-devel] Re: [discuss] Entry cache and backend txn plugin problems

2019-02-26 Thread Rich Megginson

On 2/26/19 4:26 PM, William Brown wrote:



On 26 Feb 2019, at 18:32, Ludwig Krispenz  wrote:

Hi, I need a bit of time to read the docs and clear my thoughts, but one 
comment below
On 02/25/2019 01:49 AM, William Brown wrote:

On 23 Feb 2019, at 02:46, Mark Reynolds  wrote:

I want to start a brief discussion about a major problem we have with backend 
transaction plugins and the entry caches.  I'm finding that when we get into a
nested state of be txn plugins and one of the later plugins that is called 
fails then while we don't commit the disk changes (they are aborted/rolled 
back) we DO keep the entry cache changes!

For example, a modrdn operation triggers the referential integrity plugin which 
renames the member attribute in some group and changes that group's entry cache 
entry, but then later on the memberOf plugin fails for some reason.  The 
database transaction is aborted, but the entry cache changes that RI plugin did 
are still present :-(  I have also found other entry cache issues with modrdn 
and BE TXN plugins, and we know of other currently non-reproducible entry cache 
crashes as well related to mishandling of cache entries after failed operations.

It's time to rework how we use the entry cache.  We basically need a 
transaction style caching mechanism - we should not commit any entry cache 
changes until the original operation is fully successful. Unfortunately the way 
the entry cache is currently designed and used it will be a major change to try 
to change it.

William wrote up this doc: 
http://www.port389.org/docs/389ds/design/cache_redesign.html

But this also does not currently cover the nested plugin scenario either (not 
yet).  I don't know how difficult it would be to implement William's proposal, 
or how difficult it would be to incorporate the txn style caching into his 
design.  What kind of time frame could this even be implemented in?  William 
what are your thoughts?

I like coffee? How cool are planes? My thoughts are simple :)

I think there is a pretty simple mental simplification we can make here though. 
Nested transactions “don’t really exist”. We just have *recursive* operations 
inside of one transaction.

Once reframed like that, the entire situation becomes simpler. We have one 
thread in a write transaction that can have recursive/batched operations as 
required, which means that either “all operations succeed” or “none do”. 
Really, this is the behaviour we want anyway, and it’s the transaction model of 
LMDB and other kv stores that we could consider (wired tiger, sled in the 
future).

I think the recursive/nested transactions on the database level are not the 
problem; we do this correctly already: either all changes become 
persistent or none do.
What we do not manage is the modifications we make in parallel on in-memory 
structures like the entry cache. Changes to the EC are not managed by any txn, 
and I do not see how any of the database txn models would help; they do not 
know about the EC and cannot abort its changes.
We would need to incorporate the EC into a generic txn model, or have a way to 
flag EC entries as garbage if a txn is aborted.

The issue is we allow parallel writes, which breaks the consistency guarantees 
of the EC anyway. LMDB won't allow parallel writes (it's single-writer with 
concurrent parallel readers), and most other modern kv stores take this 
approach too, so we should be remodelling our transactions to match this IMO. 
It will make the process of how we reason about the EC much much simpler I 
think.



Some sort of in-memory data structure with fast lookup and transactional 
semantics is needed: modify operations are stored as MVCC/COW, so each read of 
the database with a given txn handle sees its own view of the EC; a txn commit 
updates the parent txn's EC view (or the global EC view if there is no parent) 
from the copy; a txn abort deletes the txn's copy of the EC.  A quick Google 
search turns up several hits.  I'm not sure if the B+Tree proposed at 
http://www.port389.org/docs/389ds/design/cache_redesign.html has transactional 
semantics, or if such code could be added to its implementation.
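
A toy sketch of that COW/txn-view idea, in illustrative Python only; this is not a proposal for the actual entry cache implementation, and the class and method names are made up.

```python
class TxnCache:
    """Toy copy-on-write cache: each txn gets its own view; commit publishes
    the copy to the parent (or global) view, abort just drops the copy."""
    def __init__(self, parent=None):
        self._data = {} if parent is None else dict(parent._data)
        self._parent = parent

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value

    def begin(self):
        return TxnCache(parent=self)   # child txn starts from a copy

    def commit(self):
        self._parent._data = self._data   # publish this txn's view

cache = TxnCache()
cache.put("cn=group", {"member": ["a"]})

txn = cache.begin()
txn.put("cn=group", {"member": ["a", "b"]})   # plugin modifies its EC copy
# ...a later plugin fails: "abort" by dropping txn; the global view is intact
print(cache.get("cn=group"))   # -> {'member': ['a']}

txn2 = cache.begin()
txn2.put("cn=group", {"member": ["a", "b"]})
txn2.commit()                  # whole operation succeeded: publish the view
print(cache.get("cn=group"))   # -> {'member': ['a', 'b']}
```

In this model the referential-integrity scenario from the thread becomes safe by construction: an aborted operation's EC changes never reach the global view.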


With LMDB, if we could make the on-disk entry representation the same as the 
in-memory entry representation, then we could use LMDB as the entry cache too - 
the database would be the entry cache as well.






If William's design is too huge of a change that will take too long to safely implement 
then perhaps we need to look into revising the existing cache design where we use 
"cache_add_tentative" style functions and only apply them at the end of the op. 
 This is also not a trivial change.

It’s pretty massive as a change - if we want to do it right. I’d say we need:

* development and testing of a MVCC/COW cache implementation (proof that it 
really really works transactionally)
* allow “disable/disconnect” of the entry cache, but with the higher level 
txn’s so that we can prove the txn semantics are correct
* re-architect our transaction calls so 

Re: packaging ruby dependencies

2017-10-25 Thread Rich Megginson

On 10/25/2017 01:43 PM, nicolas.mail...@laposte.net wrote:

Hi Jason,

Packaging deps is a virtuous circle: it makes the work of the next person who 
needs them for other software easier, which increases the chance other software 
is packaged, which increases the value of the distro as a whole, its 
attractiveness, and the probability that someone will eventually do something 
useful for you (not limited to co-maintaining your packages). It's a long term 
virtuous investment circle.

Not packaging deps is less work in the short term, especially when you're among 
the first to arrive on an underdeveloped part of the distro, but has no 
positive externalities (and potentially negative ones as it can be interpreted 
as a vote of no confidence in the distro, discouraging others).

Believing in long term effects made Red Hat. Free software is the ultimate 
long-term investment. It's a lot of work to make software available in a form 
that may be consumed by others, without any guarantee someone will ever return 
anything useful. Yet it is successful, because together people are stronger 
than alone. Many entities that didn't understand this tried to outcompete Red 
Hat by focusing on actions with immediate returns, and skimping on long shot 
investments like sharing stuff with others. They had limited success in 
attracting long-term customers, in building communities, in getting others to 
cooperate.

We all like to receive, but to receive you need to give first.


I can't argue with that.

What I can do is present my painful experience.

I want to use fluentd (a medium sized ruby application) + a half dozen 
or so plugins packaged as separate gems.


This puts me on the hook to be the maintainer in perpetuity of 90+ ruby 
RPM packages (for all of the build time and run time dependencies).  Not 
what I had in mind.


This wouldn't be so bad if there were some sort of automation that would 
constantly scour rubygems.org for updates and automatically update and 
build the rpms.




Regards,


___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: packaging ruby dependencies

2017-10-25 Thread Rich Megginson

On 10/25/2017 09:49 AM, William Moreno wrote:



2017-10-25 9:38 GMT-06:00 Sandro Bonazzola:




2017-10-25 16:54 GMT+02:00 Jason Brooks:

Hi all --

As part of a documentation project I'm working on with the Fedora
Atomic WG, I started packaging asciibinder[1] with the intention of
getting the package into Fedora. Along the way[2], I encountered a
bunch of required, unpackaged dependencies, which would also have to
be added to Fedora.

[1] http://asciibinder.org/
[2]

https://copr.fedorainfracloud.org/coprs/jasonbrooks/asciibinder/packages/



It has me wondering whether packaging these gems as rpms is
worthwhile, especially since we'd end up running asciibinder in a
container, anyway.

What are people's thoughts on the value of packaging gems -- is it
worthwhile, is it somehow UnFedora to not bother to package them?



Also consider that:

1 - you are sure that there are no broken dependencies.
2 - when a new version of ruby is available, those gems will be 
verified to work in the next mass rebuild.
3 - all available tests are checked in every build.
4 - koschei will inform you of dependency changes.
5 - there will be a stable software stack per release.


Are you making these points in favor of, or against, creating rpm 
packages of gems?





___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org




Re: journal-triggerd, interest, alternatives?

2017-02-15 Thread Rich Megginson

On 02/15/2017 11:31 AM, Martin Langhoff wrote:

On Wed, Feb 15, 2017 at 11:19 AM, Rich Megginson <rmegg...@redhat.com> wrote:

Probably most Fedora users will use a general purpose tool like rsyslog
(already in Fedora) or fluentd/logstash to read events from journald and do
custom triggers.

thanks for the info! It seems to me that journal-triggerd fits a
different use case from the tools mentioned so far, so I'll keep
chipping at it :-)

journal-triggerd is a tiny utility written in C, meant for
local/standalone use, more general purpose than fail2ban (which I
use), but otherwise in a similar "simple to install, small footprint,
for single node" space.


I guess if you want a very small tool written for a very specific 
purpose, then that fits the bill.


If you don't mind using rsyslog, it will do what you want, and much 
more, and it's already in Fedora.




I've used logstash/kibana (fluentd seems similar), and those are log
aggregators, with a much more involved setup, geared for big traffic.

cheers,



martin


___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: journal-triggerd, interest, alternatives?

2017-02-15 Thread Rich Megginson

On 02/15/2017 07:23 AM, Martin Langhoff wrote:

My applications largely log to system loggers. Looking around for
something that triggers an email when certain log entries appear in
system logs (ie: python stacktraces), I got just one hit,
journal-triggerd. It is not in Fedora.

https://github.com/jjk-jacky/journal-triggerd

Have I missed anything? Seems like the kind of tool we'd have already,
as we've had systemd/journald for a while.


Probably most Fedora users will use a general purpose tool like rsyslog 
(already in Fedora) or fluentd/logstash to read events from journald and 
do custom triggers.




There's a Mageia package, I'll take a look at its spec file, might
prepare one for consideration for Fedora...


cheers,



martin


___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


[389-devel] Re: Design Doc: Automatic server tuning by default

2016-11-07 Thread Rich Megginson

On 11/06/2016 04:07 PM, William Brown wrote:

On Fri, 2016-11-04 at 12:07 +0100, Ludwig Krispenz wrote:

On 11/04/2016 06:51 AM, William Brown wrote:

http://www.port389.org/docs/389ds/design/autotuning.html

I would like to hear discussion on this topic.

thread number:
   independent of the number of cpus I would have a default minimum number of
threads,

What do you think would be a good minimum? With too many threads per CPU,
we can cause an overhead in context switching that is not efficient.


Even if the threads are unused, or mostly idle?




your test result for reduced thread number is with clients quickly
handling responses and short operations.
   But if some threads are serving lazy clients or do database access and
have to wait, you can quickly run out of threads handling new ops

Mmm this is true. Nunc-Stans helps a bit here, but not completely.


In this case, where there are a lot of mostly idle clients that want to 
maintain an open connection, nunc-stans helps a great deal, both because 
epoll is much better than a giant poll() array, and because libevent 
maintains a sorted idle connection list for you.




I wonder if something like 16 or 24 is a good "minimum",
and then if we detect more CPUs we start to scale up.
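
The "floor plus scale-up" rule being discussed could be sketched as follows; the floor of 24 is just one of the numbers floated in the thread, not a decided default, and `tune_threads` is a made-up name.

```python
import os

def tune_threads(cpus=None, floor=24):
    """Illustrative autotuning rule: never go below a floor of worker
    threads (mostly-idle threads are cheap, and lazy clients or blocking
    database access can tie threads up), scale with CPU count above it."""
    cpus = cpus or os.cpu_count() or 1
    return max(floor, cpus)

print(tune_threads(cpus=4))    # -> 24 (the floor wins on small boxes)
print(tune_threads(cpus=64))   # -> 64 (scales up on large boxes)
```

This captures both sides of the exchange: Ludwig's minimum regardless of CPU count, and William's concern about context-switch overhead from far more threads than CPUs.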


entry cache:
you should not only take the available memory into account but also the
size of the database; it doesn't make sense to blow up the cache and its
associated data (e.g. hashtables) for a small database just because the
memory is there.

Well, the cache size is "how much we *could* use", not "how much we will
use". So setting a cache size of 20GB for a 10MB database doesn't
matter, as we'll still only use ~10MB of memory.

The inverse of this, is that if we did set cachesize on database size,
what happens with a large online bulkload? We would need to retune the
database cache size, which means a restart of the application. Not
something that IPA/Admins want to hear. I think it's safer to just have
the higher number.




___
389-devel mailing list -- 389-devel@lists.fedoraproject.org
To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org




[389-devel] Re: Close of 48241, let's not support bad crypto

2016-10-03 Thread Rich Megginson

On 10/03/2016 09:34 PM, William Brown wrote:

On Mon, 2016-10-03 at 21:26 -0600, Rich Megginson wrote:

On 10/03/2016 08:58 PM, William Brown wrote:

Hi,

I want to close #48241 [0] as "wontfix". I do not believe that it's
appropriate to provide SHA3 as a password hashing algorithm.

The SHA3 algorithm is designed to be fast and cryptographically secure.
Its target usage is for signatures and verification of these in a rapid
manner.

The fact that this algorithm is fast and could be implemented in
hardware is the reason it's not appropriate for password hashing.
Passwords should be hashed with a slow algorithm, and in the future, an
algorithm that is CPU and memory hard. This means that in the (hopefully
unlikely) case of password hash leak or dump from ldap that the attacker
must spend a huge amount of resources to brute force or attack any
password that we are storing in the system.

If the crypto/security team is ok with not supporting SHA3 for
passwords, works for me.

Who would be a point of contact to ask this?


Nikos Mavrogiannopoulos <nmavr...@redhat.com>


As a result, I would like to make this ticket "wontfix" with an
explanation of why. I think it's better for us to pursue #397 [1].
PBKDF2 is a CPU-hard algorithm, and scrypt is both CPU- and memory-hard.
That is the direction we should be going (ASAP).

Thanks,


[0] https://fedorahosted.org/389/ticket/48241
[1] https://fedorahosted.org/389/ticket/397



___
389-devel mailing list -- 389-devel@lists.fedoraproject.org
To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org



[389-devel] Re: Close of 48241, let's not support bad crypto

2016-10-03 Thread Rich Megginson

On 10/03/2016 08:58 PM, William Brown wrote:

Hi,

I want to close #48241 [0] as "wontfix". I do not believe that it's
appropriate to provide SHA3 as a password hashing algorithm.

The SHA3 algorithm is designed to be fast and cryptographically secure.
Its target usage is signatures and rapid verification of them.

The fact that this algorithm is fast, and could be implemented in
hardware, is the reason it's not appropriate for password hashing.
Passwords should be hashed with a slow algorithm and, in the future, an
algorithm that is CPU and memory hard. This means that in the (hopefully
unlikely) case of a password hash leak or dump from LDAP, the attacker
must spend a huge amount of resources to brute-force or otherwise attack
any password that we are storing in the system.


If the crypto/security team is ok with not supporting SHA3 for 
passwords, works for me.




As a result, I would like to make this ticket "wontfix" with an
explanation of why. I think it's better for us to pursue #397 [1].
PBKDF2 is a CPU-hard algorithm, and scrypt is both CPU- and memory-hard.
This is the direction we should be going (ASAP).

Thanks,


[0] https://fedorahosted.org/389/ticket/48241
[1] https://fedorahosted.org/389/ticket/397





[EPEL-devel] Re: nodejs update

2016-09-08 Thread Rich Megginson

On 09/08/2016 11:27 AM, Stephen Gallagher wrote:

On 08/22/2016 11:23 AM, Stephen Gallagher wrote:

On 08/11/2016 07:43 AM, Stephen Gallagher wrote:

On 08/11/2016 05:16 AM, Zuzana Svetlikova wrote:

Hi!

As some of you may know, the nodejs package present in
EPEL is pretty outdated. The current v0.10 that we have will
go EOL in October, and npm (the package manager) is already
unmaintained.

Currently, upstream's plan is to have two versions of Long
Term Support (LTS) at once, one in active development and one
in maintenance mode.
Currently active is v4, which is switching to maintenance in
April, and v6, which is switching to LTS in October.
This is also the reason why we would like to skip v4, although
both will get security updates. Nodejs v6 also comes with
newer npm and v8 (which might best be bundled, as it is in
Fedora and Software Collections) (v8 might concern ruby and
database maintainers, but old v8 package still remains in
the repo).

There was also an idea to have both LTS versions in repo,
but we're not quite sure, how we'd do it and if it's even a
good idea.

Also, another thing is whether it is worth updating every year
to the new LTS, or updating only after the current one goes EOL.
According to the guidelines, I'd say it's the latter, but it's
not exactly how node development works, and some feedback from
users on this would be nice, because I have none.


tl;dr Need to update nodejs, but can't decide if v4 or v6,
v4: will update sooner, shorter support (2018-04-01)
v6: longer support (2019-04-01), *might* break more things,
 won't be in stable sooner than mid-October if everything
 goes well

FYI, I think this tl;dr missed explaining why v6 won't be in stable until
mid-October. What Zuzana and I discussed on another list is that the Node.js v6
schedule has it going into LTS mode on the same day that 0.10.x reaches EOL.
However, v6 is already out and available. The major thing that changes at that
point is just that from then on, they commit to adding no more major features
(as I understand it). This is the best moment for us to switch over to it.

However, in the meantime we will probably want to be carrying 6.x in
updates-testing for at least a month prior to declaring it stable (with
autokarma disabled) with wide announcements about the impending upgrade. This
will be safe to do since Node.js 6.x has already reached a point where no
backwards-incompatible changes are allowed in, so we can start the migration
process early.


OK, as we stated before, we really need to get Node.js 6.x into the
updates-testing repository soon. We mentioned that we wanted it to sit there for
at least a month before we cut over, and "at least a month" means "by next week"
since the cut over is planned for 2016-10-01.

I'm putting together a COPR right now as a first pass at this upgrade:

https://copr.fedorainfracloud.org/coprs/g/nodejs-sig/nodejs-epel/

I've run into the following blocker issues:

* We cannot jump to 6.x in EPEL 6 easily at this time, because upstream strictly
requires GCC 4.8 or later and we only have 4.4 in EPEL 6. It might be possible
to resolve this with SCLs, but I am no expert there. Zuzana?

* Node.js 4.x and 6.x both *strictly* require functionality from OpenSSL 1.0.2
and cannot run (or indeed build) against OpenSSL 1.0.1. Currently, both EPEL 6
and EPEL 7 have 1.0.1 in their buildroots. I am not aware of any solution (SCL
or otherwise) for linking EPEL to a newer version of OpenSSL.

The OpenSSL 1.0.2 problem is a significant one; we cannot build against the
bundled copy of OpenSSL because it includes patented algorithms that are not
acceptable for inclusion in Fedora. We also cannot trivially backport Fedora's
OpenSSL 1.0.2 packages because EPEL forbids upgrading packages provided by the
base RHEL/CentOS repositories.


Right now, the only thing I can think of would be for someone to build a
parallel-installable OpenSSL 1.0.2 package for EPEL 6 and EPEL 7 (similar to the
openssl101e package available for EPEL 5) and patch our specfile to be able to
work with that instead.

This is a task I'm not anxious to embark upon personally; there is too much
overhead in maintaining a fork of OpenSSL to make me comfortable.

How shall we proceed?



OK, I spent far too much of today attempting to solve this problem. I got fairly
far into it, but at this point I have run out of time to work on it for the near
future.

What I have been trying to do:

I decided that the most expedient approach for EPEL 7 right now would be to
attempt to build OpenSSL statically into Node.js. We cannot do that with the
copy that upstream carries due to certain patents, so I decided to see if I
could script up something that would pull the source of the OpenSSL package from
Fedora Rawhide, drop it into the Node.js source tree and allow us to build it.

This sounds simple in theory, but it turns out that it's going to require a fair
bit of mucking about with the gyp build that Node.js uses. I've made some
headway on it, 

[389-devel] Re: Sign compare checking

2016-08-29 Thread Rich Megginson

On 08/28/2016 11:13 PM, William Brown wrote:

So either, this is a bug in the way openldap uses the ber_len_t type, we
have a mistake in our logic, or something else hokey is going on.

I would like to update this to:

if ( (tag != LBER_END_OF_SEQORSET) && (len == 0) && (*fstr != NULL) )

Or even:

if ( (tag != LBER_END_OF_SEQORSET) && (*fstr != NULL) )

What do you think of this assessment given the ber_len_t type?

Looks like it's intentional by the openldap team. There are some other
areas for this problem. Specifically:

int ber_printf(BerElement *ber, const char *fmt, ...);

lber.h:79: #define LBER_ERROR ((ber_tag_t) -1)

We check if (ber_printf(...) != LBER_ERROR)

Of course, we can't satisfy either. We can't cast LBER_ERROR from
uint -> int without changing its value, and we can't cast the
output of ber_printf from int -> uint, again, without potentially
changing its value. So it seems it may be impossible to satisfy
gcc's -Wsign-compare type checking against the openldap library.

For now, I may just avoid these in my fixes, as it seems like a whole
set of landmines I want to avoid ...
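The signed/unsigned mismatch above can be illustrated outside of C. The sketch below assumes ber_tag_t is an unsigned long on an LP64 platform (an assumption for illustration, not something the openldap headers guarantee everywhere). Note that in actual C code the usual arithmetic conversions promote the signed return value to unsigned, so the comparison still behaves at runtime, which is why gcc only warns rather than errors.

```python
import ctypes

# (ber_tag_t) -1, as in openldap's LBER_ERROR, assuming ber_tag_t is an
# unsigned long (LP64). The cast wraps -1 to the maximum unsigned value.
LBER_ERROR = ctypes.c_ulong(-1).value

rc = -1  # ber_printf() returns a signed int, with -1 signalling an error

# Python has no C-style integer promotion, so the value change that
# -Wsign-compare warns about is directly visible here:
assert rc != LBER_ERROR

# Converting the return value to the same unsigned width first matches:
assert ctypes.c_ulong(rc).value == LBER_ERROR
```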


Part of the problem is that we wanted to support being able to use both 
mozldap and openldap, without too much "helper" code/macros/#ifdef 
MOZLDAP/etc.  It looks as though this is a place where we need to have 
some sort of helper.


(as for why we still support mozldap - we still need an ldap c sdk that 
supports NSS for crypto until we can fix that in the server. Once we 
change 389 so that it can use openldap with openssl/gnutls for crypto, 
we should consider deprecating support for mozldap.)






--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/389-devel@lists.fedoraproject.org





[EPEL-devel] Re: nodejs update

2016-08-25 Thread Rich Megginson

On 08/11/2016 05:43 AM, Stephen Gallagher wrote:

On 08/11/2016 05:16 AM, Zuzana Svetlikova wrote:

Hi!

As some of you may know, the nodejs package present in
EPEL is pretty outdated. The current v0.10 that we have will
go EOL in October, and npm (the package manager) is already
unmaintained.

Currently, upstream's plan is to have two versions of Long
Term Support (LTS) at once, one in active development and one
in maintenance mode.
Currently active is v4, which is switching to maintenance in
April, and v6, which is switching to LTS in October.
This is also the reason why we would like to skip v4, although
both will get security updates. Nodejs v6 also comes with
newer npm and v8 (which might best be bundled, as it is in
Fedora and Software Collections) (v8 might concern ruby and
database maintainers, but old v8 package still remains in
the repo).

There was also an idea to have both LTS versions in repo,
but we're not quite sure, how we'd do it and if it's even a
good idea.

Also, another thing is whether it is worth updating every year
to the new LTS, or updating only after the current one goes EOL.
According to the guidelines, I'd say it's the latter, but it's
not exactly how node development works, and some feedback from
users on this would be nice, because I have none.


tl;dr Need to update nodejs, but can't decide if v4 or v6,
v4: will update sooner, shorter support (2018-04-01)
v6: longer support (2019-04-01), *might* break more things,
 won't be in stable sooner than mid-October if everything
 goes well

FYI, I think this tl;dr missed explaining why v6 won't be in stable until
mid-October. What Zuzana and I discussed on another list is that the Node.js v6
schedule has it going into LTS mode on the same day that 0.10.x reaches EOL.
However, v6 is already out and available. The major thing that changes at that
point is just that from then on, they commit to adding no more major features
(as I understand it). This is the best moment for us to switch over to it.

However, in the meantime we will probably want to be carrying 6.x in
updates-testing for at least a month prior to declaring it stable (with
autokarma disabled) with wide announcements about the impending upgrade. This
will be safe to do since Node.js 6.x has already reached a point where no
backwards-incompatible changes are allowed in, so we can start the migration
process early.


How does EPEL deal with the fact that nodejs won't work with openssl 
1.0.1?  For CentOS we have a patch that allows nodejs 4.x to build with 
openssl 1.0.1 in EL7.  Are you using a similar patch?  Do you know if 
the same patch will work with nodejs 6.x?






Also need feedback from users.


I hope I didn't forget anything important.

Regards

Zuzka
Node.js SIG





___
epel-devel mailing list
epel-devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/epel-devel@lists.fedoraproject.org





[389-devel] Re: Please review: 48951 dsconf and dsadm foundations

2016-08-22 Thread Rich Megginson

On 08/22/2016 05:23 PM, William Brown wrote:

On Sun, 2016-08-21 at 21:33 -0600, Rich Megginson wrote:

On 08/21/2016 09:02 PM, William Brown wrote:

Anything that is yum, systemd command, etc. is ansible. Anything about
installing an instance or 389 specific we do.

I think that is an arbitrary line of demarcation.  ansible can be used
for a lot more than that.

Yes it can. But I don't have infinite time, and neither does the team.
Lets get something to work first, then we can grow what ansible is able
to integrate with. Lets design our code to be able to be integrate with
ansible, but draw some basic lines on things we shouldn't duplicate and
then remove in the future. This is why I want to draw the line that
start/stop of the server, and certain remote admin tasks aren't part of
the scope here.



Saying this, in a way I'm not a fan of this also. Because we are doing
behind the scenes magic, rather than simple, implicit tasks. What
happens if someone crons this? What happens? We lose the intent of the
admin in some cases.

I think the principle should be "make it simple to do the easy things -
make it possible to do the difficult things".  In this case, if I am an
admin running a cli, I think it should "do the right thing".  If I'm
setting up a cron job, I should be able to force it to use offline mode
or whatever - it is easy to keep track of extra cli arguments if I'm
automating something vs. running interactively on the command line.

I agree with that principle, and is actually one of the guides I am
following in my design.

I think that here, we have a differing view of simple. My interpretation
is.

My idea of simple is "each task should do one specific thing, and do it
well". you have db2ldif and db2ldif_task. Each one just does that one
simple thing. The intent of the admin is clear at the moment they hit
enter.

Not if they don't know what is meant by "_task".  It might as well be
".pl" to most admins.

Most of the admins I've encountered say "I just want to get an ldif dump
from the server - I have no idea what is the difference between db2ldif
and db2ldif.pl."  I think they will say the same thing about "db2ldif"
vs. "db2ldif_task".

I was thinking about this, this morning, and I think I have come to
agree with you. Lets make this "you want to get from A to B, and we work
out how to get there". Similar to ansible, which probably lends well to
use using ansible in the future for things.


Your idea of simple is "intuitive simple" for the admin, where
behaviours are inferred from running application state. The admin says
"how I want you to act" and the computer resolves the path to get there.

And - if the admin knows the tool, because the admin has learned by
experience, progressive disclosure, or RTFM, the admin can explicitly
specify the exact modes of operation using command line flags.  Using
the tool simply is easy, using the tool in an advanced fashion is possible.
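The "you want to get from A to B, and we work out how to get there" behaviour being discussed can be sketched as a small dispatcher: auto-detect the mode by default, but honour an explicit flag so a cron job can preserve the admin's intent. All helper names here (instance_is_running and the two export functions) are hypothetical illustrations, not lib389 API.

```python
def instance_is_running(instance):
    # Hypothetical check; a real tool might probe the PID file or LDAPI socket.
    return instance.get("running", False)

def db2ldif_online(instance):
    # Hypothetical: ask the running server to perform an export task.
    return "online-export"

def db2ldif_offline(instance):
    # Hypothetical: read the database directly while the server is stopped.
    return "offline-export"

def export_ldif(instance, mode="auto"):
    """One user-facing export command that does the right thing by default."""
    if mode == "auto":
        mode = "online" if instance_is_running(instance) else "offline"
    if mode == "online":
        return db2ldif_online(instance)
    if mode == "offline":
        return db2ldif_offline(instance)
    raise ValueError("unknown mode: %s" % mode)
```

An interactive admin just runs the export and it works either way; automation can pass mode="offline" explicitly, so no intent is lost.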

I think the intent of the tool should be clear without huge amounts of
experience and rtfm. We have a huge usability and barrier to entry
problem in DS, and if we don't make changes to lower that, we will
become irrelevant. We need to make it easier to use, while retaining
every piece of advanced functionality that our experienced users
expect :) (I think we agree on this point though)


One day we will need to make a decision on which way to go with these
tools, and which path we follow, but again, for now it's open. Of
course, I am going to argue for the former, because that is the
construction of my experience. Reality is that I've seen a lot of
production systems get messed up because what seemed intuitive to the
programmer, was not the intent of the admin. We are basically having the
"boeing vs airbus" debate. Boeing has autopilots and computer
assistance, but believes the pilot is always right and will give up
control even if the pilot is going to do something the computer
disagrees with. Airbus assumes the computer is always right, and will
actively take control away from the pilot if they are going to do
something the computer disagrees with. It's about what's right: The
program? Or the human intent? And that question has never been answered.

I think the discussion doesn't fall exactly on the "boeing vs airbus"
axis, but perhaps isn't entirely orthogonal either.

As said above, I think maybe we should go down the "programmer is right"
idea, but with the ability for the sysadmin to take over if needed.


+1 - I think you've got the right idea.








[389-devel] Re: Please review: 48951 dsconf and dsadm foundations

2016-08-21 Thread Rich Megginson

On 08/21/2016 07:56 PM, William Brown wrote:

On Sun, 2016-08-21 at 19:44 -0600, Rich Megginson wrote:

On 08/21/2016 05:28 PM, William Brown wrote:

On Fri, 2016-08-19 at 11:21 +0200, Ludwig Krispenz wrote:

Hi William,

On 08/19/2016 02:22 AM, William Brown wrote:

On Wed, 2016-08-17 at 14:53 +1000, William Brown wrote:

https://fedorahosted.org/389/ticket/48951

https://fedorahosted.org/389/attachment/ticket/48951/0001-Ticket-48951-dsadm-and-dsconf-base-files.patch
https://fedorahosted.org/389/attachment/ticket/48951/0002-Ticket-48951-dsadm-and-dsconf-refactor-installer-cod.patch
https://fedorahosted.org/389/attachment/ticket/48951/0003-Ticket-48951-dsadm-and-dsconf-Installer-options-mana.patch
https://fedorahosted.org/389/attachment/ticket/48951/0004-Ticket-48951-dsadm-and-dsconf-Ability-to-unit-test-t.patch
https://fedorahosted.org/389/attachment/ticket/48951/0005-Ticket-48951-dsadm-and-dsconf-Backend-management-and.patch



As a follow up, here is a design / example document

http://www.port389.org/docs/389ds/design/dsadm-dsconf.html

thanks for this work, it is looking great and is something we were
really missing.

But of course I have some comments  (and I know I am late).
- The naming dsadm and dsconf, and the split of tasks between them, is
the same as in Sun/Oracle DSEE, and even if there is probably no legal
restriction to use them; I'd prefer to have own names for our own tools.

Fair enough. There is nothing saying these names are stuck in stone
right now so if we have better ideas we can change it.

I will however say that any command name, should not start with numbers
(ie 389), and that it should generally be fast to type, easy to remember
and less than 8 chars long if possible.

What about "adm389" and "conf389"?

Yeah, those could work.


- I'm not convinced of splitting the tasks into two utilities, you will
have different actions and options for the different
resources/subcommands anyway, so you could have one for all.

The issue is around connection to the server, and whether it needs to be
made or not. The command in the code is:

dsadm:
command:
action

dsconf:
connect to DS
command
action

So dsconf takes advantage of all the commands being remote, so it shares
common connection code. If we were to make the tools "one" we would need
to make a decorator or something to repeat this, and there are some
other issues there with the way that the argparse library works.

I think this is an arbitrary distinction - needing a connection or not -
but other projects use similar "admin client" vs. "more general use
client" e.g. OpenShift has "oadm" vs. "oc".  If this is a pattern that
admins are used to, we just need to be consistent in applying that pattern.




Also, I think, the goal should be to make all actions available locally
and remotely; the start/stop/install should be possible remotely via rsh
or another mechanism as long as the utilities are available on the
target machine, so I propose one dsmanage or 389manage

dsmanage is an okay name, but remote start/stop is not an easy task.

At that point, you are looking at needing to ssh, manage the acceptance
of keys, you have to know the remote server ds prefix, you need to ssh
as root (bad) or manage sudo (annoying).

We already have the ability to remote stop/start/restart the server,
with admin server at least.

Not with systemd we don't. systemd + selinux has broken that for a stack
of our products, and at the moment, we are publishing release notes that
these don't work in certain cases. And rightly so, ds should not have
the rights to touch system services in the way we were doing, it's a
huge security risk.

To make it work we need to do dbus and polkit magic, and the amount of
motivation I have to give about this problem is low, especially when
tools like ansible do it for us, much better.


You need to potentially manage
selinux, systemd etc. It gets really complicated, really fast, and at
that point I'm going to turn around and say "no, just use ansible if you
want to remote manage things".

Lets keep these tools simple as we can, and let things like ansible
which is designed for remote tasks, do their job.

Right, but it will take a lot of work to determine what should be done
in ansible vs. specialized tool.

Not really. An admin will know "okay, if I want to start/stop services I
write action: service state=enabled dirsrv@instance". They will also
know "well I want to reconfigure plugins on DS, I use conf389/dsconf".

Anything that is yum, systemd command, etc. is ansible. Anything about
installing an instance or 389 specific we do.


I think that is an arbitrary line of demarcation.  ansible can be used 
for a lot more than that.





A better strategy is that we can potentially write a lib389 ansible
module in the future allowing us to playbook tasks for DS.

I wo

[389-devel] Re: Please review: 48951 dsconf and dsadm foundations

2016-08-21 Thread Rich Megginson

On 08/21/2016 05:28 PM, William Brown wrote:

On Fri, 2016-08-19 at 11:21 +0200, Ludwig Krispenz wrote:

Hi William,

On 08/19/2016 02:22 AM, William Brown wrote:

On Wed, 2016-08-17 at 14:53 +1000, William Brown wrote:

https://fedorahosted.org/389/ticket/48951

https://fedorahosted.org/389/attachment/ticket/48951/0001-Ticket-48951-dsadm-and-dsconf-base-files.patch
https://fedorahosted.org/389/attachment/ticket/48951/0002-Ticket-48951-dsadm-and-dsconf-refactor-installer-cod.patch
https://fedorahosted.org/389/attachment/ticket/48951/0003-Ticket-48951-dsadm-and-dsconf-Installer-options-mana.patch
https://fedorahosted.org/389/attachment/ticket/48951/0004-Ticket-48951-dsadm-and-dsconf-Ability-to-unit-test-t.patch
https://fedorahosted.org/389/attachment/ticket/48951/0005-Ticket-48951-dsadm-and-dsconf-Backend-management-and.patch



As a follow up, here is a design / example document

http://www.port389.org/docs/389ds/design/dsadm-dsconf.html

thanks for this work, it is looking great and is something we were
really missing.

But of course I have some comments  (and I know I am late).
- The naming dsadm and dsconf, and the split of tasks between them, is
the same as in Sun/Oracle DSEE, and even if there is probably no legal
restriction to use them; I'd prefer to have own names for our own tools.

Fair enough. There is nothing saying these names are stuck in stone
right now so if we have better ideas we can change it.

I will however say that any command name, should not start with numbers
(ie 389), and that it should generally be fast to type, easy to remember
and less than 8 chars long if possible.


What about "adm389" and "conf389"?




- I'm not convinced of splitting the tasks into two utilities, you will
have different actions and options for the different
resources/subcommands anyway, so you could have one for all.

The issue is around connection to the server, and whether it needs to be
made or not. The command in the code is:

dsadm:
command:
action

dsconf:
connect to DS
command
action

So dsconf takes advantage of all the commands being remote, so it shares
common connection code. If we were to make the tools "one" we would need
to make a decorator or something to repeat this, and there are some
other issues there with the way that the argparse library works.


I think this is an arbitrary distinction - needing a connection or not - 
but other projects use similar "admin client" vs. "more general use 
client" e.g. OpenShift has "oadm" vs. "oc".  If this is a pattern that 
admins are used to, we just need to be consistent in applying that pattern.






Also, I think, the goal should be to make all actions available locally
and remotely; the start/stop/install should be possible remotely via rsh
or another mechanism as long as the utilities are available on the
target machine, so I propose one dsmanage or 389manage

dsmanage is an okay name, but remote start/stop is not an easy task.

At that point, you are looking at needing to ssh, manage the acceptance
of keys, you have to know the remote server ds prefix, you need to ssh
as root (bad) or manage sudo (annoying).


We already have the ability to remote stop/start/restart the server, 
with admin server at least.



You need to potentially manage
selinux, systemd etc. It gets really complicated, really fast, and at
that point I'm going to turn around and say "no, just use ansible if you
want to remote manage things".

Lets keep these tools simple as we can, and let things like ansible
which is designed for remote tasks, do their job.


Right, but it will take a lot of work to determine what should be done 
in ansible vs. specialized tool.




A better strategy is that we can potentially write a lib389 ansible
module in the future allowing us to playbook tasks for DS.


I would like to see ansible playbooks for 389.  Ansible is python, so we 
can leverage python-ldap/lib389 instead of having to fork/exec 
ldapsearch/ldapmodify.
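An Ansible module is indeed just a Python program: it receives its parameters as JSON and must emit a JSON result on stdout, so a lib389-backed module is plausible. The sketch below shows only the JSON plumbing; ensure_backend is a stub, and a real module would import lib389 to do the actual work (and use Ansible's argument-file convention rather than a raw string).

```python
import json

def ensure_backend(args):
    # Stub for illustration: a real module would use lib389 to bind to the
    # instance and create the backend for args["suffix"] if it is missing.
    return {"changed": True, "suffix": args.get("suffix")}

def run_module(raw_args):
    """Parse JSON arguments, do the work, print the JSON result Ansible expects."""
    args = json.loads(raw_args)
    result = ensure_backend(args)
    print(json.dumps(result))
    return result
```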





This is why I kept them separate, because I wanted to have simple,
isolated domains in the commands for actions, that let us know clearly
what we are doing. It's still an open discussion though.


If this is a common patterns that admins are used to, then we should 
consider it.





- could this be made interactive ? run the command, providing some or
none options and then have a shell like env
dsmanage
  >>> help
.. connect
.. create-x
  >>> connect -h 
... replica-enable 

In the current form, no. However, the way I have written it, we should
be able to pretty easily replace the command line framework on front and
drop in something that does allow interactive commands like this. I was
thinking:

https://github.com/Datera/configshell

This is already in EL, as it's part of the targetcli application.


Think MVC - just make sure you can change the View.  I tried to do this 
with setup-ds.pl - make it possible to "plug in" a different "UI".




[389-devel] Re: Logging performance improvement

2016-06-30 Thread Rich Megginson

On 06/30/2016 08:14 PM, William Brown wrote:

On Thu, 2016-06-30 at 20:01 -0600, Rich Megginson wrote:

On 06/30/2016 07:52 PM, William Brown wrote:

Hi,

I've been thinking about this for a while, so I decided to dump my
thoughts to a document. I think I won't get to implementing this for a
while, but it would really help our server performance.

http://www.port389.org/docs/389ds/design/logging-performance-improvement.html

Looks good.  Can we quantify the current log overhead?

Sure, I could probably sit down and work out a way to benchmark
this.

But without the alternative being written, hard to say. I could always
patch out logging and drop the lock in a hacked build so we can show
what "without logging contention" looks like?


That's only one part of it - you'd have to figure out some way to get 
rid of the overhead of the formatting and flushing in the operation 
threads too.


I suppose you could just write it and see what happens.







[389-devel] Re: Logging performance improvement

2016-06-30 Thread Rich Megginson

On 06/30/2016 07:52 PM, William Brown wrote:

Hi,

I've been thinking about this for a while, so I decided to dump my
thoughts to a document. I think I won't get to implementing this for a
while, but it would really help our server performance.

http://www.port389.org/docs/389ds/design/logging-performance-improvement.html


Looks good.  Can we quantify the current log overhead?
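The design doc's core idea, keeping formatting and disk I/O out of the operation threads and handing raw events to a single writer, can be sketched with a queue and one thread. Class and method names here are illustrative, not from the actual proposal; quantifying the overhead would mean benchmarking this against formatting-under-lock in the operation threads.

```python
import queue
import threading

class AsyncLog:
    """Sketch: operation threads enqueue raw tuples; one writer formats and writes."""

    _STOP = object()  # shutdown sentinel (illustrative)

    def __init__(self, sink):
        self._q = queue.Queue()
        self._sink = sink  # any callable taking one formatted line
        self._writer = threading.Thread(target=self._drain, daemon=True)
        self._writer.start()

    def log(self, op, conn_id, msg):
        # Called from operation threads: no formatting, no disk I/O here.
        self._q.put((op, conn_id, msg))

    def _drain(self):
        # Single writer thread: formatting and output happen off the hot path.
        while True:
            item = self._q.get()
            if item is self._STOP:
                break
            op, conn_id, msg = item
            self._sink("conn=%d op=%s %s" % (conn_id, op, msg))

    def close(self):
        # Flush everything queued so far, then stop the writer.
        self._q.put(self._STOP)
        self._writer.join()
```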






Re: jwm

2016-05-31 Thread Rich Megginson

On 05/31/2016 11:56 AM, Dennis Gilmore wrote:

On Tuesday, May 31, 2016 2:31:04 PM CDT Bernardo Sulzbach wrote:

On 05/31/2016 01:59 PM, Rich Megginson wrote:

On 05/31/2016 10:42 AM, Bernardo Sulzbach wrote:

On 05/31/2016 01:39 PM, Michael Catanzaro wrote:

On Sun, 2016-05-29 at 17:17 -0400, Stephen John Smoogen wrote:

They usually have a 60 hour a week job

I hope this isn't accurate...?

I didn't write about it myself, but was left wondering anyways. Do RH
programmers usually work 60 hours per week? "On average", full time
means 40 to 44 hours around the world. I've even seen 30 hours being
called full time in some job postings.

It depends primarily on what country you live in.  In the US, for
salaried (as opposed to hourly) programmers, the pay is based off of a
45 hour work week e.g. 8am - 5pm Monday through Friday, lunch included
(i.e. you are paid to eat lunch).  Of course, this is strictly for
accounting purposes - hardly any salaried programmers work these hours,
and most programmers would say "well, I'm more or less working all the
time - I get great ideas for solving problems while I'm sleeping and
dreaming, in the shower, driving to work, on the bus, etc.", and those
hours aren't strictly accounted for.

  From experience, in Brazil it would either be 40 (the same you wrote,
but lunch is not paid for) or 44 (+ 4 hours in Saturday mornings).

I've worked on hourly rates, and unless you get a "change the background
color to black and text to red" task you are also going to do a
substantial amount of work when you are not "working", and these hours
are also not paid for.

I think that Michael and I were wondering whether RH programmers were
getting 60 paid hours, not thinking about work at least 60 hours per
week. This would mean an average of 12 "office" hours (supposing they do
not work on weekends) per day. Which seems pretty aggressive to most
professionals I've come across if they are going to sit through that in
a supervised office.

I see you write from an @redhat.com address. Are you saying that all
US-based RedHat developers get 45 hour work weeks or less? I'm talking
about what the papers say, not the actual amount of work.
--
devel mailing list
devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/devel@lists.fedoraproject.org

in the US it is 40 hours, you are not paid for your lunch hour.


Dennis is correct, I stand corrected.



Dennis




Re: jwm

2016-05-31 Thread Rich Megginson

On 05/31/2016 11:31 AM, Bernardo Sulzbach wrote:

On 05/31/2016 01:59 PM, Rich Megginson wrote:

On 05/31/2016 10:42 AM, Bernardo Sulzbach wrote:

On 05/31/2016 01:39 PM, Michael Catanzaro wrote:

On Sun, 2016-05-29 at 17:17 -0400, Stephen John Smoogen wrote:

They usually have a 60 hour a week job


I hope this isn't accurate...?



I didn't write about it myself, but was left wondering anyways. Do RH
programmers usually work 60 hours per week? "On average", full time
means 40 to 44 hours around the world. I've even seen 30 hours being
called full time in some job postings.


It depends primarily on what country you live in.  In the US, for
salaried (as opposed to hourly) programmers, the pay is based off of a
45 hour work week e.g. 8am - 5pm Monday through Friday, lunch included
(i.e. you are paid to eat lunch).  Of course, this is strictly for
accounting purposes - hardly any salaried programmers work these hours,
and most programmers would say "well, I'm more or less working all the
time - I get great ideas for solving problems while I'm sleeping and
dreaming, in the shower, driving to work, on the bus, etc.", and those
hours aren't strictly accounted for.



From experience, in Brazil it would either be 40 (the same you wrote, 
but lunch is not paid for) or 44 (+ 4 hours in Saturday mornings).


I've worked on hourly rates, and unless you get a "change the 
background color to black and text to red" task you are also going to 
do a substantial amount of work when you are not "working", and these 
hours are also not paid for.


I think that Michael and I were wondering whether RH programmers were 
getting 60 paid hours, not thinking about work at least 60 hours per week.



This would mean an average of 12 "office" hours (supposing they do not 
work on weekends) per day. Which seems pretty aggressive to most 
professionals I've come across if they are going to sit through that 
in a supervised office.


I see you write from an @redhat.com address. Are you saying that all 
US-based Red Hat developers get 45-hour work weeks or less? I'm talking 
about what the papers say, not the actual amount of work.


AFAIK the accounting system accounts for salaried developers working 45 
hours per week.





Re: jwm

2016-05-31 Thread Rich Megginson

On 05/31/2016 10:42 AM, Bernardo Sulzbach wrote:

On 05/31/2016 01:39 PM, Michael Catanzaro wrote:

On Sun, 2016-05-29 at 17:17 -0400, Stephen John Smoogen wrote:

They usually have a 60 hour a week job


I hope this isn't accurate...?



I didn't write about it myself, but was left wondering anyways. Do RH 
programmers usually work 60 hours per week? "On average", full time 
means 40 to 44 hours around the world. I've even seen 30 hours being 
called full time in some job postings.


It depends primarily on what country you live in.  In the US, for 
salaried (as opposed to hourly) programmers, the pay is based off of a 
45 hour work week e.g. 8am - 5pm Monday through Friday, lunch included 
(i.e. you are paid to eat lunch).  Of course, this is strictly for 
accounting purposes - hardly any salaried programmers work these hours, 
and most programmers would say "well, I'm more or less working all the 
time - I get great ideas for solving problems while I'm sleeping and 
dreaming, in the shower, driving to work, on the bus, etc.", and those 
hours aren't strictly accounted for.





Re: [389-devel] Please review: [389 Project] #48257: Fix coverity issues - 08/24/2015

2015-11-06 Thread Rich Megginson

On 11/05/2015 05:09 PM, Noriko Hosoi wrote:

https://fedorahosted.org/389/ticket/48257

https://fedorahosted.org/389/attachment/ticket/48257/0001-Ticket-48257-Fix-coverity-issues-08-24-2015.patch 




Once this ticket is closed, is it okay to respin nunc_stans which is 
going to be version 0.1.6?


Yes.  After every "batch" of commits to nunc-stans the version should be 
bumped, where "batch" can be a single commit if no other commits are 
planned for the immediate future.




Current: rpm/389-ds-base.spec.in:%global nunc_stans_ver 0.1.5

--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel



Re: Cloning bugs: Just Don't Do It

2015-11-03 Thread Rich Megginson

On 11/03/2015 09:13 AM, Adam Williamson wrote:

You see that 'Clone' button in Bugzilla? You, yes you, with your cursor
hovering over it?

Don't do it! It's a trap.

Cloning a bug is almost never actually what you want to do. When you
clone a bug, all of the following are transferred to the new bug:

1. CCs
2. Description and comments, as one big ugly block as the new bug's
description
3. Pretty much all the metadata: whiteboard, keywords, tags,
dependencies. This includes stuff like blocker metadata, which is
almost never appropriate
4. External bug references
5. All sorts of other goddamn stuff

In my experience, you almost *never* actually wanted all of that.
Unless you really want a 2,000-line 'Description' which includes 50
comments and is entirely unreadable, everyone CCed on the old bug CCed
on the new bug, and all the metadata the same - just don't hit the
Clone button. Create a new bug and copy/paste anything relevant into
it.

In particular, Red Hat people, for the love of all that's holy, please
try not to clone Fedora bugs to RHEL unless it's really necessary! RHEL
bugs generate a metric assload of bureaucratic change emails that
Fedora contributors are almost never interested in. And no-one actually
likes trying to read those huge, unreadable clone bug Descriptions:
it's way, way nicer to create a new bug and cleanly summarize
whatever's actually relevant from the parent bug's description /
comments into the new one.

https://bugzilla.redhat.com/show_bug.cgi?id=1277621

Re: [389-devel] Please comment: [389 Project] #48285: The dirsrv user/group should be created in rpm %pre, and ideally with fixed uid/gid

2015-10-21 Thread Rich Megginson

On 10/21/2015 12:20 PM, Noriko Hosoi wrote:
Thanks to William for reviewing the patch.  I'm going to push it. 
But before doing so, I have a question regarding the autogen files.


The proposed patch requires rerunning autogen.sh and pushing the generated 
files to git.  My current env has automake 1.15 and it generates 
large diffs as attached to this email.

-# Makefile.in generated by automake 1.13.4 from Makefile.am.
+# Makefile.in generated by automake 1.15 from Makefile.am.

Is it okay to push the attached patch 
0002-Ticket-48285-The-dirsrv-user-group-should-be-created.patch to git 
or do we prefer to keep the diff minimal by running autogen on the host 
having the same version of automake (1.13.4)?


We should confirm that the generated configure script runs on el7.



Thanks,
--noriko

On 10/20/2015 05:48 PM, Noriko Hosoi wrote:

https://fedorahosted.org/389/ticket/48285

https://fedorahosted.org/389/attachment/ticket/48285/0001-Ticket-48285-The-dirsrv-user-group-should-be-created.patch 


git patch file (master) -- revised

If these users and groups exist on the system:

/etc/passwd:xdirsrv:x:389:389:389-ds-base:/usr/share/dirsrv:/sbin/nologin 

/etc/passwd:dirsrvy:x:390:390:389-ds-base:/usr/share/dirsrv:/sbin/nologin 


/etc/group:xdirsrv:x:389:
/etc/group:dirsrvy:x:390:

This pair is supposed to be generated:

/etc/passwd:dirsrv:x:391:391:389-ds-base:/usr/share/dirsrv:/sbin/nologin
/etc/group:dirsrv:x:391:
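The selection behavior Noriko describes (389 and 390 already taken, so the new dirsrv user/group pair lands on 391) can be sketched in Python. `first_free_id` is a hypothetical helper for illustration, not code from the patch or the %pre scriptlet:

```python
import grp
import pwd

def first_free_id(start=389):
    """Return the first id >= start that is free as both a uid and a gid,
    mirroring how ids 389 and 390 are skipped when they are already taken."""
    used_uids = {p.pw_uid for p in pwd.getpwall()}
    used_gids = {g.gr_gid for g in grp.getgrall()}
    candidate = start
    while candidate in used_uids or candidate in used_gids:
        candidate += 1
    return candidate

# On the example system above, 389 and 390 are occupied by xdirsrv and
# dirsrvy, so this would return 391 for the dirsrv user/group pair.
free_id = first_free_id()
```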



Re: [389-devel] [lib389] Deref control advice needed

2015-09-02 Thread Rich Megginson

On 09/02/2015 10:35 AM, thierry bordaz wrote:

On 08/27/2015 02:31 AM, Rich Megginson wrote:

On 08/26/2015 03:28 AM, William Brown wrote:

In relation to ticket 47757, I have started work on a deref control for
Noriko.
The idea is to get it working in lib389, then get it upstreamed into pyldap.

At this point it's all done, except that the actual request control doesn't
appear to work. Could one of the lib389 / ldap python experts cast their eye
over this and let me know where I've gone wrong?

I have improved this, but am having issues with the asn1spec for ber decoding.

I have attached the updated patch, but specifically the issue is in _controls.py

I would appreciate if anyone could take a look at this, and let me know if there
is something I have missed.


Not sure, but here is some code I did without using pyasn:
https://github.com/richm/scripts/blob/master/derefctrl.py
This is quite old by now, and is probably bit rotted with respect to 
python-ldap and python3.




Old!! But it worked like a charm for me. I just had to make this modification 
because of a change in python-ldap, IIRC.


OK.  But I would rather use William's version which is based on pyasn1 - 
it hurts my brain to hand code BER . . .



diff derefctrl.py /tmp/derefctrl_orig.py
0a1
>
151,152c152
< self.criticality,self.derefspeclist,self.entry =
criticality,derefspeclist or [],None
<
#LDAPControl.__init__(self,DerefCtrl.controlType,criticality,derefspeclist)
---
>
LDAPControl.__init__(self,DerefCtrl.controlType,criticality,derefspeclist)
154c154
< def encodeControlValue(self):
---
> def encodeControlValue(self,value):
156c156
< for (derefattr,attrs) in self.derefspeclist:
---
> for (derefattr,attrs) in value:



"""
  controlValue ::= SEQUENCE OF derefRes DerefRes

  DerefRes ::= SEQUENCE {
  derefAttr   AttributeDescription,
  derefValLDAPDN,
  attrVals[0] PartialAttributeList OPTIONAL }

  PartialAttributeList ::= SEQUENCE OF
 partialAttribute PartialAttribute
"""

class DerefRes(univ.Sequence):
 componentType = namedtype.NamedTypes(
 namedtype.NamedType('derefAttr', AttributeDescription()),
 namedtype.NamedType('derefVal', LDAPDN()),
 namedtype.OptionalNamedType('attrVals', PartialAttributeList()),
 )

class DerefResultControlValue(univ.SequenceOf):
 componentType = DerefRes()





 def decodeControlValue(self,encodedControlValue):
 self.entry = {}
 #decodedValue,_ =
decoder.decode(encodedControlValue,asn1Spec=DerefResultControlValue())
 # Gets the error: TagSet(Tag(tagClass=0, tagFormat=32, tagId=16),
Tag(tagClass=128, tagFormat=32, tagId=0)) not in asn1Spec:
{TagSet(Tag(tagClass=0, tagFormat=32, tagId=16)): PartialAttributeList()}/{}
 decodedValue,_ = decoder.decode(encodedControlValue)
 print(decodedValue.prettyPrint())
 # Pretty print yields
 #Sequence:  <-- Sequence of
 # =Sequence:  <-- derefRes
 #  =uniqueMember <-- derefAttr
 #  =uid=test,dc=example,dc=com <-- derefVal
 #  =Sequence: <-- attrVals
 #   =uid
 #   =Set:
 #=test
 # For now, while asn1spec is sad, we'll just rely on it being well formed
 # However, this isn't good, as without the asn1spec, we seem to actually be dropping values
 for result in decodedValue:
 derefAttr, derefVal, _ = result
 self.entry[str(derefAttr)] = str(derefVal)
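The decode error quoted above (a context tag `tagClass=128 ... tagId=0` not found in the asn1Spec) comes from the `[0]` on `attrVals` in the ASN.1: the pyasn1 component must carry that context tag explicitly. A hedged sketch of how the spec could be declared so that `asn1Spec` decoding works; the `OctetString`-based leaf types here are stand-ins, not the real lib389 definitions:

```python
from pyasn1.codec.ber import decoder, encoder
from pyasn1.type import namedtype, tag, univ

# Stand-in leaf types; real code would use proper LDAP string types.
class AttributeDescription(univ.OctetString):
    pass

class LDAPDN(univ.OctetString):
    pass

class PartialAttribute(univ.Sequence):
    componentType = namedtype.NamedTypes(
        namedtype.NamedType('type', univ.OctetString()),
        namedtype.NamedType('vals', univ.SetOf(componentType=univ.OctetString())),
    )

class PartialAttributeList(univ.SequenceOf):
    componentType = PartialAttribute()

class DerefRes(univ.Sequence):
    componentType = namedtype.NamedTypes(
        namedtype.NamedType('derefAttr', AttributeDescription()),
        namedtype.NamedType('derefVal', LDAPDN()),
        # attrVals is tagged [0] in the spec, so give pyasn1 the context
        # tag explicitly; without it, decoding with asn1Spec fails as above.
        namedtype.OptionalNamedType('attrVals', PartialAttributeList().subtype(
            implicitTag=tag.Tag(tag.tagClassContext, tag.tagFormatConstructed, 0))),
    )

class DerefResultControlValue(univ.SequenceOf):
    componentType = DerefRes()

# Round-trip one DerefRes to check that the spec-driven decode works.
res = DerefRes()
res.setComponentByName('derefAttr', 'uniqueMember')
res.setComponentByName('derefVal', 'uid=test,dc=example,dc=com')
value = DerefResultControlValue()
value.setComponentByPosition(0, res)
decoded, rest = decoder.decode(encoder.encode(value),
                               asn1Spec=DerefResultControlValue())
# decoded[0] is the DerefRes with named components instead of "no-name".
```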




Re: [389-devel] [lib389] Deref control advice needed

2015-08-26 Thread Rich Megginson

On 08/26/2015 03:28 AM, William Brown wrote:

In relation to ticket 47757, I have started work on a deref control for
Noriko.
The idea is to get it working in lib389, then get it upstreamed into pyldap.

At this point it's all done, except that the actual request control doesn't
appear to work. Could one of the lib389 / ldap python experts cast their eye
over this and let me know where I've gone wrong?

I have improved this, but am having issues with the asn1spec for ber decoding.

I have attached the updated patch, but specifically the issue is in _controls.py

I would appreciate if anyone could take a look at this, and let me know if there
is something I have missed.


Not sure, but here is some code I did without using pyasn:
https://github.com/richm/scripts/blob/master/derefctrl.py
This is quite old by now, and is probably bit rotted with respect to 
python-ldap and python3.





  controlValue ::= SEQUENCE OF derefRes DerefRes

  DerefRes ::= SEQUENCE {
  derefAttr   AttributeDescription,
  derefValLDAPDN,
  attrVals[0] PartialAttributeList OPTIONAL }

  PartialAttributeList ::= SEQUENCE OF
 partialAttribute PartialAttribute


class DerefRes(univ.Sequence):
 componentType = namedtype.NamedTypes(
 namedtype.NamedType('derefAttr', AttributeDescription()),
 namedtype.NamedType('derefVal', LDAPDN()),
 namedtype.OptionalNamedType('attrVals', PartialAttributeList()),
 )

class DerefResultControlValue(univ.SequenceOf):
 componentType = DerefRes()





 def decodeControlValue(self,encodedControlValue):
 self.entry = {}
 #decodedValue,_ =
decoder.decode(encodedControlValue,asn1Spec=DerefResultControlValue())
 # Gets the error: TagSet(Tag(tagClass=0, tagFormat=32, tagId=16),
Tag(tagClass=128, tagFormat=32, tagId=0)) not in asn1Spec:
{TagSet(Tag(tagClass=0, tagFormat=32, tagId=16)): PartialAttributeList()}/{}
 decodedValue,_ = decoder.decode(encodedControlValue)
 print(decodedValue.prettyPrint())
 # Pretty print yields
 #Sequence:  -- Sequence of
 # no-name=Sequence:  -- derefRes
 #  no-name=uniqueMember -- derefAttr
 #  no-name=uid=test,dc=example,dc=com -- derefVal
 #  no-name=Sequence: -- attrVals
 #   no-name=uid
 #   no-name=Set:
 #no-name=test
 # For now, while asn1spec is sad, we'll just rely on it being well formed
 # However, this isn't good, as without the asn1spec, we seem to actually be dropping values
 for result in decodedValue:
 derefAttr, derefVal, _ = result
 self.entry[str(derefAttr)] = str(derefVal)




Re: [389-devel] Review of plugin code

2015-08-07 Thread Rich Megginson

On 08/07/2015 05:18 PM, William Brown wrote:

On Thu, 2015-08-06 at 14:25 -0700, Noriko Hosoi wrote:

Hi William,

Very interesting plug-in!

Thanks. As a plugin, its value is quite useless due to the nsDS5ReplicaType
flags. But it's a nice simple exercise to get ones head around how the plugin
architecture works from scratch. It's one thing to patch a plugin, compared to
writing one from nothing.


Regarding betxn plug-in, it is for putting the entire operation -- the
primary update + associated updates by the enabled plug-ins -- in one
transaction.  By doing so, the entire updates are committed to the DB if
and only if all of the updates are successful. Otherwise, all of them
are rolled back.  That guarantees there will be no inconsistency among
entries.

Okay, so if I can be a pain, how does betxn handle reads? Do reads come from
within the transaction?


Yes.


Or is there a way to read from the database outside the
transaction.

Say for example:

begin
add some object Y
read Y
commit

Does read Y see the object within the transaction?


Yes.


Is there a way to make the
search happen so that it occurs outside the transaction, IE it doesn't see Y?


Not a nested search operation.  A nested search operation will always 
use the parent/context transaction.
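The read-visibility rules Rich describes can be illustrated with any transactional store. This is an analogy using Python's sqlite3, not 389-ds code: the same-connection read stands in for a nested search using the parent transaction, and the second connection stands in for a read outside it.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Writer: begin a transaction and add object Y without committing.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE entries (dn TEXT)")
writer.execute("BEGIN")
writer.execute("INSERT INTO entries VALUES ('uid=y,dc=example,dc=com')")

# A read on the same connection (= a nested search using the parent
# transaction) sees the uncommitted object Y.
assert writer.execute("SELECT COUNT(*) FROM entries").fetchall()[0][0] == 1

# A separate connection (= a read outside the transaction) does not see Y yet.
reader = sqlite3.connect(path)
assert reader.execute("SELECT COUNT(*) FROM entries").fetchall()[0][0] == 0

writer.execute("COMMIT")
```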






In that sense, your read-only plug-in is not a good example for betxn
since it does not do any updates. :)  Considering the purpose of the
read-only plug-in, invoking it at the pre-op timing (before the
transaction) would be the best.

Very true! I kind of knew what betxn did, but I wanted to confirm more
completely in my mind. So I think what my read-only plugin does at the moment
works quite nicely then outside of betxn.

Is there a piece of documentation (perhaps the plugin guide) that lists the
order in which these operations are called?


Not sure, but in general it is:

incoming operation from client
front end processing
preoperation
call backend
bepreoperation
start transaction
betxnpreoperation
do operation in the database
betxnpostoperation
end transaction
bepostoperation
return from backend
send result to client
postoperation




Since MEP requires the updates on the DB, it's supposed to be called in
betxn.  That way, what was done in the MEP plug-in is committed or
rolled back together with the primary updates.

Makes sense.


The toughest part is the deadlock prevention.  At the start transaction,
it holds a DB lock.  And most plug-ins maintain its own mutex to protect
its resource.  It'd easily cause deadlock situation especially when
multiple plug-ins are enabled (which is common :). So, please be careful
not to acquire/free locks in the wrong order...

Of course. This is always an issue in multi-threaded code and anything with
locking. Stress tests are probably good to find these deadlocks, no?


Yes.  There is some code in dblayer.c that will stress the transaction 
code by locking/unlocking many db pages concurrently with external 
operations.

https://git.fedorahosted.org/cgit/389/ds.git/tree/ldap/servers/slapd/back-ldbm/dblayer.c#n210
https://git.fedorahosted.org/cgit/389/ds.git/tree/ldap/servers/slapd/back-ldbm/dblayer.c#n4131
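The lock-ordering discipline Noriko warns about can be shown with a tiny Python sketch (an illustration, not server code): as long as every path that needs both the transaction lock and a plugin's mutex acquires them in the same order, the classic two-lock deadlock cannot form.

```python
import threading

db_lock = threading.Lock()      # stands in for the backend transaction lock
plugin_lock = threading.Lock()  # stands in for a plugin's private mutex

# Rule: any path needing both locks takes db_lock first, plugin_lock second.
def backend_operation():
    with db_lock:            # transaction started
        with plugin_lock:    # betxn plugin touches its own state
            pass

def plugin_internal_work():
    # Needs only its own mutex; it must NOT grab db_lock while holding it,
    # or the acquisition order would be inverted and deadlock becomes possible.
    with plugin_lock:
        pass

threads = [threading.Thread(target=backend_operation) for _ in range(4)]
threads += [threading.Thread(target=plugin_internal_work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join(timeout=5)
```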




About your commented out code in read_only.c, I guess you copied the
part from mep.c and are wondering what it is for?
There are various type of plug-ins.

 $ egrep nsslapd-pluginType dse.ldif | sort | uniq
 nsslapd-pluginType: accesscontrol
 nsslapd-pluginType: bepreoperation
 nsslapd-pluginType: betxnpostoperation
 nsslapd-pluginType: betxnpreoperation
 nsslapd-pluginType: database
 nsslapd-pluginType: extendedop
 nsslapd-pluginType: internalpreoperation
 nsslapd-pluginType: matchingRule
 nsslapd-pluginType: object
 nsslapd-pluginType: preoperation
 nsslapd-pluginType: pwdstoragescheme
 nsslapd-pluginType: reverpwdstoragescheme
 nsslapd-pluginType: syntax

The reason why slapi_register_plugin and slapi_register_plugin_ext were
implemented was:

 /*
   * Allows a plugin to register a plugin.
   * This was added so that 'object' plugins could register all
   * the plugin interfaces that it supports.
   */

On the other hand, MEP has this type.

 nsslapd-pluginType: betxnpreoperation

The type is not object, but the MEP plug-in is implemented as having
the type.  Originally, it might have been object...  Then, we
introduced the support for betxn.  To make the transition to betxn
smooth, we put in code to check whether betxn is in the type.  If it is,
as in betxnpreoperation, the plug-in is called in betxn; otherwise it is
called outside of the transaction.  Having the switch in the
configuration, we could go back to the original position without
rebuilding the plug-in.

Since we do not go back to pre-betxn era, the switch may not be too
important.  But keeping it would be a good idea for the consistency with
the other plug-ins.

Does this answer your question?  Please feel free to let us know if it
does not.

That answers some of 

[389-devel] Please review: Ticket #48224 - redux 2 - logconv.pl should handle *.tar.xz, *.txz, *.xz log files

2015-07-20 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/48224/0001-Ticket-48224-redux-2-logconv.pl-should-handle-.tar.x.patch
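logconv.pl itself is Perl; the idea behind the patch — transparently reading plain, `.xz`, and `.tar.xz`/`.txz` access logs — can be sketched in Python. `open_log` is a hypothetical helper for illustration, not part of the patch:

```python
import lzma
import tarfile

def open_log(path):
    """Yield decoded text lines from a plain, .xz, or .tar.xz/.txz log file."""
    if path.endswith((".tar.xz", ".txz")):
        # A compressed tarball may hold several rotated logs; walk them all.
        with tarfile.open(path, mode="r:xz") as tar:
            for member in tar.getmembers():
                if member.isfile():
                    for raw in tar.extractfile(member):
                        yield raw.decode("utf-8", "replace")
    elif path.endswith(".xz"):
        # Single xz-compressed log: lzma handles it directly in text mode.
        with lzma.open(path, "rt") as fh:
            yield from fh
    else:
        with open(path) as fh:
            yield from fh
```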

[389-devel] Please review: Ticket #48224 - logconv.pl should handle *.tar.xz, *.txz, *.xz log files

2015-07-13 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/48224/0001-Ticket-48224-logconv.pl-should-handle-.tar.xz-.txz-..patch

[389-devel] 389 performance testing tools

2015-05-15 Thread Rich Megginson
No readme yet, but here are the scripts: 
https://github.com/richm/389-perf-test


[389-devel] Please review: nunc-stans: Ticket #33 - coverity - 13178 Explicit null dereferenced

2015-05-12 Thread Rich Megginson

https://fedorahosted.org/nunc-stans/attachment/ticket/33/0001-Ticket-33-coverity-13178-Explicit-null-dereferenced.patch

[389-devel] Please review: nunc-stans: Tickets 34-38 - coverity 13179-13183 - Dereference before NULL check

2015-05-12 Thread Rich Megginson

https://fedorahosted.org/nunc-stans/attachment/ticket/34/0001-Tickets-34-38-coverity-13179-13183-Dereference-befor.patch

[389-devel] Please review: Ticket #48122 nunc-stans FD leak

2015-05-11 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/48122/0001-Ticket-48122-nunc-stans-FD-leak.2.patch

[389-devel] Take 2: Please review: Ticket #48178 add config param to enable nunc-stans

2015-05-01 Thread Rich Megginson

Fixed problem with previous patch.

https://fedorahosted.org/389/attachment/ticket/48178/0001-Ticket-48178-add-config-param-to-enable-nunc-stans.patch

[389-devel] Please review: Ticket #48178 add config param to enable nunc-stans

2015-04-30 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/48178/0001-Ticket-48178-add-config-param-to-enable-nunc-stans.patch

Re: [389-devel] [389-users] GUI console and Kerberos

2015-03-12 Thread Rich Megginson

On 03/11/2015 11:54 AM, Paul Robert Marino wrote:

Hey every one
I have a question. I know at least once in the past I set up the admin
console so it could utilize Kerberos passwords, based on a howto I
found once which, after I changed jobs, I could never find again.

today I was looking for something else and I saw a mention on the site
about httpd needing to be compiled with http auth support.
well I did a little digging and I found this file
/etc/dirsrv/admin-serv/admserv.conf

in that file I found a lot of entries that look like this

LocationMatch /*/[tT]asks/[Cc]onfiguration/*
   AuthUserFile /etc/dirsrv/admin-serv/admpw
   AuthType basic
   AuthName Admin Server
   Require valid-user
   AdminSDK on
   ADMCgiBinDir /usr/lib64/dirsrv/cgi-bin
   NESCompatEnv on
   Options +ExecCGI
   Order allow,deny
   Allow from all
/LocationMatch


when I checked /etc/dirsrv/admin-serv/admpw sure enough I found the
Password hash for the admin user.

So my question is, before I waste time experimenting, could it possibly
be as simple as changing the auth type to kerberos
http://modauthkerb.sourceforge.net/configure.html


I don't know.  I don't think anyone has ever tried it.


keep in mind my Kerberos Servers do not use LDAP as the backend.
--
389 users mailing list
389-us...@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users



[389-devel] Please review #2: Ticket #48105 create tests - tests subdir - test server/client programs - test scripts

2015-03-09 Thread Rich Megginson

I found a bug with my previous patch.

https://fedorahosted.org/389/attachment/ticket/48105/0001-Ticket-48105-create-tests-tests-subdir-test-server-c.2.patch

[389-devel] Please review: Ticket #48105 create tests - tests subdir - test server/client programs - test scripts

2015-03-06 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/48105/0001-Ticket-48105-create-tests-tests-subdir-test-server-c.patch

[389-devel] Please review: Ticket #48106 create code doc with doxygen

2015-03-04 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/48106/0001-Ticket-48106-create-code-doc-with-doxygen.patch

Re: [389-devel] No rule to make target dbmon.sh

2015-02-26 Thread Rich Megginson

On 02/26/2015 04:31 PM, William wrote:

On latest master I am getting:

make[1]: *** No rule to make target `ldap/admin/src/scripts/dbmon.sh',
needed by `all-am'.  Stop.

Did a git clean -f -x -d followed by autoreconf -i; ./configure
--with-openldap --prefix=/srv --enable-debug


Not sure, but you should not need to run autoreconf unless you are 
changing one of the autoconf files.  There is an autogen.sh script for 
this purpose instead of using autoreconf directly.




What am I missing?




Re: Removing (or trying to) BerkeleyDB from Fedora

2015-01-08 Thread Rich Megginson

On 01/08/2015 06:56 AM, Jan Staněk wrote:

Hi guys,
as the new BerkeleyDB 6.x has a more restrictive license than the
previous versions (AGPLv3 vs. LGPLv2), and due to that many projects
cannot use it, perhaps it is time to get rid of it from Fedora for good
- or at least trim down the list of packages dependent on it as much as
possible.

The topic of BerkeleyDB v6 in Fedora was already discussed at this list
[1], and it turned out that peaceful cooperation of multiple libdb
versions in system is very problematic. As some packages cannot use
newer versions, we are basically stuck with v5 - unless we get rid of it
altogether or find another solution.

I already started probing which packages depend on libdb and what can
be done to remove that dependency. My findings are briefly documented
on [2] and so far it seems that with some work it could be done.

However, as I have only very hazy ideas on how some of the dependent
packages are used or why they need libdb, I would like to ask for
cooperation, ideally from the package maintainers themselves. The
information on how to remove dependency, what would need to be done in
addition to removing the dependency, or why it is a bad idea to try
drop the dependency are all valuable.

Thank you very much for any help. I welcome both edits in the wiki at
[2] in case of relatively simple solutions, and ideas, thoughts and
explanations on this mailing list.

[1] https://lists.fedoraproject.org/pipermail/devel/2014-April/197406.html
[2]
https://fedoraproject.org/wiki/User:Jstanek/Draft_-_Removing_BerkeleyDB_from_Fedora


389-ds-base relies heavily on bdb and cannot easily be switched to use 
another db backend.
We are considering moving to lmdb (http://symas.com/mdb/) but that will 
be a long and painful process . . .



[389-devel] we now have epel7 branches for fedpkg . . .

2014-11-10 Thread Rich Megginson
. . . but it looks like we are gated by rhel7.1 - waiting for TLS 1.1 
fixes/packages to show up with rhel 7.1


[389-devel] No more spam from jenkins

2014-10-21 Thread Rich Megginson
I don't know what's wrong with jenkins.  I tried to fix it, but I cannot 
figure out what the problem is.  In the meantime, I have disabled it, so 
no more spam.  Sorry for the spam.


Re: [389-devel] if you tag a release, please release a tarball too

2014-09-19 Thread Rich Megginson

On 09/19/2014 01:15 AM, Timo Aaltonen wrote:

On 19.09.2014 09:33, Timo Aaltonen wrote:

Hi

  1.3.3.3 has been tagged in git for a week now, but there's no tarball for
it. Dunno if you have scripts for the release dance, but if you do
please include the tarball build to it so it's not a manual thing to
remember every time ;)

I'll roll back to 1.3.3.2 in the meantime..

oh well, 1.3.3.2 tarball doesn't match the tag:

tarball doesn't have 55e317f2a5d8fc488e76f2b4155298a45d25 nor
0363fa49265c0c27d510064cea361eb400802548

and ldap/servers/slapd/ssl.c has a diff to the comments of the cipher
mess (from 58cb12a7b8cf9), and VERSION.sh on the tarball still has
'VERSION_PREREL=.a1' (should be gone in fefa20138b6a3a)

so I don't know where the tarball was built from, this isn't cool..


Yep, we screwed up, sorry about that.  I've just uploaded a new 1.3.3.3 
release, and the sources page with the new checksum is building.


Re: [389-devel] Please review: Fix slapi_td_plugin_lock_init prototype

2014-09-15 Thread Rich Megginson

On 09/15/2014 05:20 AM, Petr Viktorin wrote:

diff --git a/ldap/servers/slapd/slapi-plugin.h 
b/ldap/servers/slapd/slapi-plugin.h
index f1ecfe8..268e465 100644
--- a/ldap/servers/slapd/slapi-plugin.h
+++ b/ldap/servers/slapd/slapi-plugin.h
@@ -5582,7 +5582,7 @@ void slapi_td_get_val(int indexType, void **value);
  int slapi_td_dn_init(void);
  int slapi_td_set_dn(char *dn);
  void slapi_td_get_dn(char **dn);
-int slapi_td_plugin_lock_init();
+int slapi_td_plugin_lock_init(void);
  int slapi_td_set_plugin_locked(int *value);
  void slapi_td_get_plugin_locked(int **value);
  
--

Thanks - https://fedorahosted.org/389/ticket/47899

Re: [389-devel] Please review: Fix slapi_td_plugin_lock_init prototype

2014-09-15 Thread Rich Megginson

On 09/15/2014 07:28 PM, Petr Viktorin wrote:

On 09/15/2014 09:06 PM, Rich Megginson wrote:

On 09/15/2014 05:20 AM, Petr Viktorin wrote:

diff --git a/ldap/servers/slapd/slapi-plugin.h
b/ldap/servers/slapd/slapi-plugin.h
index f1ecfe8..268e465 100644
--- a/ldap/servers/slapd/slapi-plugin.h
+++ b/ldap/servers/slapd/slapi-plugin.h
@@ -5582,7 +5582,7 @@ void slapi_td_get_val(int indexType, void 
**value);

  int slapi_td_dn_init(void);
  int slapi_td_set_dn(char *dn);
  void slapi_td_get_dn(char **dn);
-int slapi_td_plugin_lock_init();
+int slapi_td_plugin_lock_init(void);
  int slapi_td_set_plugin_locked(int *value);
  void slapi_td_get_plugin_locked(int **value);
--

Thanks - https://fedorahosted.org/389/ticket/47899


Thanks.

I read the GIT Rules page on the wiki [0], which mentions patches not 
associated with a ticket. 


That is correct.  I just wanted to make sure that this did not get lost.


If all patches do need a ticket, it would be good to update it.

[0] http://www.port389.org/docs/389ds/development/git-rules.html





[389-devel] Please review: Ticket #47892 coverity defects found in 1.3.3.1

2014-09-12 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47892/0001-Ticket-47892-coverity-defects-found-in-1.3.3.1.patch

Re: [389-devel] Please review lib389: start/stop may hang indefinitely

2014-09-05 Thread Rich Megginson

On 09/05/2014 10:32 AM, thierry bordaz wrote:

On 09/05/2014 01:10 PM, thierry bordaz wrote:
Detected with testcase 47838, which defines ciphers not recognized 
during SSL init. The 47838 testcase makes the full test suite hang.




Hello,

Rich pointed out that the indentation was bad in the second part of the 
fix. I was wrongly using tabs instead of spaces.

Here is a better fix


ack



thanks
thierry



[389-devel] Please review: Redux: Ticket #47692 single valued attribute replicated ADD does not work

2014-07-10 Thread Rich Megginson

Previous fix was incomplete.
https://fedorahosted.org/389/attachment/ticket/47692/0001-Ticket-47692-single-valued-attribute-replicated-ADD-.2.patch

[389-devel] Please review: Ticket #47831 - server restart wipes out index config if there is a default index

2014-06-25 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47831/0001-Ticket-47831-server-restart-wipes-out-index-config-i.patch

Re: [389-devel] single valued attribute update resolution

2014-04-30 Thread Rich Megginson

On 04/25/2014 07:43 AM, Ludwig Krispenz wrote:
There are still scenarios where replication can lead to inconsistent 
states for single valued attributes, which I think has two reasons:
- for single valued attributes there are scenarios where modifications 
applied concurrently cannot be simply resolved without violating the 
schema
- the code to handle single valued attribute resolution is quite 
complex and has always been extended to resolve reported issues, not 
making it simpler


I tried to specify all potential scenarios which should be handled and 
what the expected consistent state should be. In parallel writing a 
test suite based on lib389 test framework to provide testcases for all 
scenarios and then test the current implementation. The doc and test 
suite can be used as a reference for a potential rework of the update 
resolution code.


Please have a look at: 
http://port389.org/wiki/Update_resolution_for_single_valued_attributes


comments, corrections, and additional requirements are welcome - the doc is 
not final :-)


Very nice!



Thanks,
Ludwig

[389-devel] Please review: Take 2: Ticket #47772 empty modify returns LDAP_INVALID_DN_SYNTAX

2014-04-11 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47772/0001-Ticket-47772-empty-modify-returns-LDAP_INVALID_DN_SY.2.patch

[389-devel] Please review: Ticket #47772 empty modify returns LDAP_INVALID_DN_SYNTAX

2014-04-09 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47772/0001-Ticket-47772-empty-modify-returns-LDAP_INVALID_DN_SY.patch

[389-devel] Please review: Ticket #47774 mem leak in do_search - rawbase not freed upon certain errors

2014-04-09 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47774/0001-Ticket-47774-mem-leak-in-do_search-rawbase-not-freed.patch

Re: [389-devel] [389-users] Source directory is now list-able

2014-04-08 Thread Rich Megginson

On 04/08/2014 02:24 PM, Timo Aaltonen wrote:

On 07.04.2014 21:52, Rich Megginson wrote:

http://port389.org/sources is now open and list-able.  The default sort
order is latest first.  The http://port389.org/wiki/Source page has been
updated with this link.

\o/

many thanks for this :)




Sure, it was about time we did this :P
Please let us know if there are any issues, or suggested improvements.  
My apache-fu is not good, perhaps there are some nice mod_autoindex 
hacks . . .


[389-devel] Source directory is now list-able

2014-04-07 Thread Rich Megginson
http://port389.org/sources is now open and list-able.  The default sort 
order is latest first.  The http://port389.org/wiki/Source page has been 
updated with this link.


Re: [389-devel] [389-users] git repo / tarball issues

2014-04-04 Thread Rich Megginson

On 04/04/2014 10:55 AM, Timo Aaltonen wrote:

On 04.04.2014 19:42, Noriko Hosoi wrote:

Hi Timo,

Timo Aaltonen wrote:

1) 389-ds-console 1.2.7 has no tarball though it was tagged for release
in Sep'12
You can download the tar ball from here now.
http://port389.org/sources/389-ds-console-1.2.7.tar.bz2

Cool, thanks. It's a broken tarball though, you forgot '/' after the
version..

Sorry.  I've fixed it...  Could you please try it one more time?

Yup, it's fine now.


Also, you still need some way to fix the process of how these links get
to the webpage too :)

Yeah, that's what I thought, too.  I searched for an existing page on
http://directory.fedoraproject.org, but I could not find it.

Rich, could there be a good place to put the link(s)?

you probably mean this?

http://directory.fedoraproject.org/wiki/Source


I think Timo (and probably other people who monitor the source tarballs) 
would like to have a URL to a directory containing the sources, rather 
than have to have the URL of the file.  Then we could just push files to 
that directory, and he and others could just monitor that directory for 
new files.





Now I see that you have a separate 389-announce list where only the
stable releases get announced.. maybe send those to 389-users too?

All right.  I will do so from the next time.  Thanks for your suggestion!

great, thanks!





[389-devel] Please review: Ticket #47492 - PassSync removes User must change password flag on the Windows side

2014-04-03 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47492/0001-Ticket-47492-PassSync-removes-User-must-change-passw.3.patch

[389-devel] Please review: Ticket #47492 - PassSync removes User must change password flag on the Windows side

2014-04-03 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47492/0001-Ticket-47492-PassSync-removes-User-must-change-passw.3.patch

Re: [389-devel] Changes for

2014-04-01 Thread Rich Megginson

On 04/01/2014 07:34 AM, Carsten Grzemba wrote:

Hi Rich,

this breaks the current implementation for posix-winsync:

Bug 716980 - winsync uses old AD entry if new one not found
https://bugzilla.redhat.com/show_bug.cgi?id=716980
Resolves: bug 716980
Bug Description: winsync uses old AD entry if new one not found 
Reviewed by: nhosoi (Thanks!)

Branch: master
Fix Description: Clear out the old raw_entry before doing the search. 
This will leave a NULL in the raw entry. winsync plugins will need to 
handle a NULL for the raw_entry and/or ad_entry.


At the moment posix_winsync_pre_ds_mod_user_cb returns immediately on 
raw_entry == NULL.

How should the plugin handle the NULL for raw_entry?


Not sure.  Please reopen that ticket.  If it broke posix-winsync, it is 
likely to break other winsync plugins (e.g. ipa winsync).




Carsten



Re: [389-devel] strcasestr vs. PL_strcasestr

2014-03-07 Thread Rich Megginson

On 03/07/2014 07:15 AM, Carsten Grzemba wrote:

Hi Nathan,

is there a special reason why you used strcasestr in the ACL plugin 
instead of PL_strcasestr?

It is causing me some headaches building 1.3.2 on Solaris 10.
https://git.fedorahosted.org/cgit/389/ds.git/commit/ldap/servers/plugins/acl?id=95214606df95deb1cf9a30044fe64f780c030b34


No, there is no reason to use strcasestr.  We should be using 
PL_strcasestr for portability.




Carsten


Am 06.03.14 schrieb *Noriko Hosoi * nho...@redhat.com:

https://fedorahosted.org/389/ticket/47731

https://fedorahosted.org/389/attachment/ticket/47731/0001-Ticket-47731-A-tombstone-entry-is-deleted-by-ldapdel.patch

Description: A tombstone deletion by ldapdelete op from client is
supposed to fail. The failure from SLAPI_PLUGIN_BETXNPOSTOPERATION was
ignored in 389-ds-base-1.2.11 plugin_call_func and it was not passed to
the backend to abort. This patch added the check in the same way as in
389-ds-base-1.3.1 and newer.




Re: [389-devel] Design review: Access control on entries specified in MODDN operation (ticket 47553)

2014-02-25 Thread Rich Megginson

On 02/25/2014 07:24 AM, thierry bordaz wrote:

On 02/24/2014 10:47 PM, Noriko Hosoi wrote:

Rich Megginson wrote:

On 02/24/2014 09:00 AM, thierry bordaz wrote:

Hello,

The IPA team filed this ticket:
https://fedorahosted.org/389/ticket/47553.

It requires an ACI improvement so that during a MODDN a given
user is only allowed to move an entry from one specified part
of the DIT to another specified part of the DIT, without the
need to grant the ADD permission.

Here is the design of what could be implemented to support this
need
http://port389.org/wiki/Access_control_on_trees_specified_in_MODDN_operation

regards
thierry



Since this is not related to any Red Hat internal or customer 
information, we should move this discussion to the 389-devel list.



Hi Thierry,

Your design looks good.  A minor question: the doc does not mention 
deny.  For instance, in your example DIT, can I allow 
moddn_to and moddn_from on the top dc=example,dc=com and deny 
them on cn=tests?  Then I could move an entry between cn=accounts 
and staging, but not to/from cn=tests?  Or is deny not supposed to 
be used there?


Thanks,
--noriko


Hi Noriko,

Thanks for looking at the document. You are right, I failed to 
document how a 'DENY' aci would work.


 I updated the design at 
http://port389.org/wiki/Access_control_on_trees_specified_in_MODDN_operation#ACI_allow.2Fdeny_rights 
to indicate how DENY rights could be used.


By default if there is no ACI granting 'allow', the operation is 
rejected. So in that case, without ACI applicable on 'cn=tests', MODDN 
to/from 'cn=tests' will not be authorized.
Adding a DENY to target 'cn=tests' would also work but I think it is 
not required.


In the example I added, the 'ALLOW' right is granted to a tree 
(cn=accounts,SUFFIX) except to a subtree of it 
(cn=except,cn=accounts,SUFFIX)


So in order to do a MODDN operation, you need both the moddn_from aci 
and moddn_to aci?


For example:

dn: dc=example,dc=com
aci: (target="ldap:///cn=staging,dc=example,dc=com")(version 3.0; acl "MODDN from"; allow (moddn_from) userdn="ldap:///uid=admin_accounts,dc=example,dc=com";)

If I only have this aci, will it allow anything?  That is, if I don't 
have a (moddn_to) aci somewhere, will this (moddn_from) aci allow me to 
move anything?




regards
thierry






Re: [389-devel] Design review: Access control on entries specified in MODDN operation (ticket 47553)

2014-02-25 Thread Rich Megginson

On 02/25/2014 07:42 AM, thierry bordaz wrote:

On 02/25/2014 03:34 PM, Rich Megginson wrote:

On 02/25/2014 07:24 AM, thierry bordaz wrote:

On 02/24/2014 10:47 PM, Noriko Hosoi wrote:

Rich Megginson wrote:

On 02/24/2014 09:00 AM, thierry bordaz wrote:

Hello,

The IPA team filed this ticket:
https://fedorahosted.org/389/ticket/47553.

It requires an ACI improvement so that during a MODDN a given
user is only allowed to move an entry from one specified part
of the DIT to another specified part of the DIT, without
the need to grant the ADD permission.

Here is the design of what could be implemented to support
this need
http://port389.org/wiki/Access_control_on_trees_specified_in_MODDN_operation

regards
thierry



Since this is not related to any Red Hat internal or customer 
information, we should move this discussion to the 389-devel list.



Hi Thierry,

Your design looks good.  A minor question: the doc does not 
mention deny.  For instance, in your example DIT, can I 
allow moddn_to and moddn_from on the top dc=example,dc=com 
and deny them on cn=tests?  Then I could move an entry between 
cn=accounts and staging, but not to/from cn=tests?  Or is deny 
not supposed to be used there?


Thanks,
--noriko


Hi Noriko,

Thanks for looking at the document. You are right, I failed to 
document how a 'DENY' aci would work.


 I updated the design at 
http://port389.org/wiki/Access_control_on_trees_specified_in_MODDN_operation#ACI_allow.2Fdeny_rights 
to indicate how DENY rights could be used.


By default if there is no ACI granting 'allow', the operation is 
rejected. So in that case, without ACI applicable on 'cn=tests', 
MODDN to/from 'cn=tests' will not be authorized.
Adding a DENY to target 'cn=tests' would also work but I think it is 
not required.


In the example I added, the 'ALLOW' right is granted to a tree 
(cn=accounts,SUFFIX) except to a subtree of it 
(cn=except,cn=accounts,SUFFIX)


So in order to do a MODDN operation, you need both the moddn_from aci 
and moddn_to aci?


For example:

dn: dc=example,dc=com
aci: (target="ldap:///cn=staging,dc=example,dc=com")(version 3.0; acl "MODDN from"; allow (moddn_from) userdn="ldap:///uid=admin_accounts,dc=example,dc=com";)

If I only have this aci, will it allow anything?  That is, if I don't 
have a (moddn_to) aci somewhere, will this (moddn_from) aci allow me 
to move anything?


Yes it will allow you to do a MODDN if you are granted the 'ADD' right 
on the new superior entry.


I think this double ACI can be an issue as freeipa was hoping to use a 
single ACI. But I have not found a solution to grant move (to/from) in 
a single aci syntax.


I think it is very important to specify both the source and the 
destination of a MODDN operation.  I don't think this will be possible 
in all cases without having 2 target DNs in a single ACI statement.






regards
thierry






Re: [389-devel] Design review: Access control on entries specified in MODDN operation (ticket 47553)

2014-02-25 Thread Rich Megginson

On 02/25/2014 08:28 AM, thierry bordaz wrote:

On 02/25/2014 04:17 PM, Rich Megginson wrote:

On 02/25/2014 08:14 AM, thierry bordaz wrote:

On 02/25/2014 03:46 PM, Rich Megginson wrote:

On 02/25/2014 07:42 AM, thierry bordaz wrote:

On 02/25/2014 03:34 PM, Rich Megginson wrote:

On 02/25/2014 07:24 AM, thierry bordaz wrote:

On 02/24/2014 10:47 PM, Noriko Hosoi wrote:

Rich Megginson wrote:

On 02/24/2014 09:00 AM, thierry bordaz wrote:

Hello,

The IPA team filed this ticket:
https://fedorahosted.org/389/ticket/47553.

It requires an ACI improvement so that during a MODDN a
given user is only allowed to move an entry from one
specified part of the DIT to another specified part of
the DIT, without the need to grant the ADD permission.

Here is the design of what could be implemented to
support this need
http://port389.org/wiki/Access_control_on_trees_specified_in_MODDN_operation

regards
thierry



Since this is not related to any Red Hat internal or customer 
information, we should move this discussion to the 389-devel list.



Hi Thierry,

Your design looks good.  A minor question: the doc does not 
mention deny.  For instance, in your example DIT, can I 
allow moddn_to and moddn_from on the top 
dc=example,dc=com and deny them on cn=tests?  Then I could 
move an entry between cn=accounts and staging, but not to/from 
cn=tests?  Or is deny not supposed to be used there?


Thanks,
--noriko


Hi Noriko,

Thanks for looking at the document. You are right, I 
failed to document how a 'DENY' aci would work.


 I updated the design at 
http://port389.org/wiki/Access_control_on_trees_specified_in_MODDN_operation#ACI_allow.2Fdeny_rights 
to indicate how DENY rights could be used.


By default if there is no ACI granting 'allow', the operation is 
rejected. So in that case, without ACI applicable on 'cn=tests', 
MODDN to/from 'cn=tests' will not be authorized.
Adding a DENY to target 'cn=tests' would also work but I think 
it is not required.


In the example I added, the 'ALLOW' right is granted to a tree 
(cn=accounts,SUFFIX) except to a subtree of it 
(cn=except,cn=accounts,SUFFIX)


So in order to do a MODDN operation, you need both the moddn_from 
aci and moddn_to aci?


For example:

dn: dc=example,dc=com
aci: (target="ldap:///cn=staging,dc=example,dc=com")(version 3.0; acl "MODDN from"; allow (moddn_from) userdn="ldap:///uid=admin_accounts,dc=example,dc=com";)

If I only have this aci, will it allow anything?  That is, if I 
don't have a (moddn_to) aci somewhere, will this (moddn_from) aci 
allow me to move anything?


Yes it will allow you to do a MODDN if you are granted the 'ADD' 
right on the new superior entry.


I think this double ACI can be an issue as freeipa was hoping to 
use a single ACI. But I have not found a solution to grant move 
(to/from) in a single aci syntax.


I think it is very important to specify both the source and the 
destination of a MODDN operation.  I don't think this will be 
possible in all cases without having 2 target DNs in a single ACI 
statement.


My concern is that if we have something like :

aci: target_rule(version 3.0; acl "MODDN control"; allow (moddn_to, moddn_from) bind_rule;)

and 'target_rule' defines two DNs, then moddn_to/from are granted 
for both DNs, so in our case the user would be allowed to move an 
entry staging → accounts but also accounts → staging.


Right.  It is necessary to be able to specify moddn_from=DN1 
moddn_to=DN2


Ok yes it would work.

Now I am unsure of the benefit of having a single aci with that new 
'target_rule' syntax compared to two acis with the current syntax. I can 
imagine a performance gain in terms of aci scanning and evaluation, but 
wonder if there is another benefit.


One problem with having two acis is referential integrity - keeping the 
pairs in sync with other changes.  Having to keep track of two acis is 
much more than twice as difficult as keeping track of a single aci.


I can appreciate that it will be very difficult to change the aci syntax 
in such a way as to support two target clauses in a single aci.  And, it 
might not be sufficient to simply have


aci: (target_from="ldap:///dn_from")(target_to="ldap:///dn_to")...

although I'm not sure if any of the other target keywords are applicable 
here - like targetattr, targetfilter, targattrfilter, etc.




I sent the design pointer to freeipa-devel as well; I'm sure I will get 
some comments on that :-)












regards
thierry






Re: [389-devel] Design review: Access control on entries specified in MODDN operation (ticket 47553)

2014-02-24 Thread Rich Megginson

On 02/24/2014 02:47 PM, Noriko Hosoi wrote:

Rich Megginson wrote:

On 02/24/2014 09:00 AM, thierry bordaz wrote:

Hello,

The IPA team filed this ticket:
https://fedorahosted.org/389/ticket/47553.

It requires an ACI improvement so that during a MODDN a given
user is only allowed to move an entry from one specified part of
the DIT to another specified part of the DIT, without the
need to grant the ADD permission.

Here is the design of what could be implemented to support this
need
http://port389.org/wiki/Access_control_on_trees_specified_in_MODDN_operation

regards
thierry



Since this is not related to any Red Hat internal or customer 
information, we should move this discussion to the 389-devel list.



Hi Thierry,

Your design looks good.  A minor question: the doc does not mention 
deny.  For instance, in your example DIT, can I allow 
moddn_to and moddn_from on the top dc=example,dc=com and deny 
them on cn=tests?  Then I could move an entry between cn=accounts and 
staging, but not to/from cn=tests?  Or is deny not supposed to 
be used there?


In which entry do you set these ACIs?

Do you set
aci: (target="ldap:///cn=staging,dc=example,dc=com")(version 3.0; acl "MODDN from"; allow (moddn_from) userdn="ldap:///uid=admin_accounts,dc=example,dc=com";)
in the cn=accounts,dc=example,dc=com entry?

Do you set
aci: (target="ldap:///cn=accounts,dc=example,dc=com")(version 3.0; acl "MODDN to"; allow (moddn_to) userdn="ldap:///uid=admin_accounts,dc=example,dc=com";)
in the cn=staging,dc=example,dc=com entry?



Thanks,
--noriko





Re: [389-devel] mmr.pl deprecated?

2014-02-11 Thread Rich Megginson

On 02/11/2014 08:16 AM, Paul Robert Marino wrote:

I would have no problem doing that.
The more I look at it I may just use it as a template for creating a
new set of Perl based replication management tools.


Part of the reason for dropping mmr.pl is Perl :P

We have a new project for creating a management framework, including 
replication management, in python.

http://port389.org/wiki/Upstream_test_framework
It started out as the basis for a test framework, but it is now separate 
(lib389).


If you are planning to do something from scratch, and you can hack 
python, I suggest you take a look.






I've also been thinking of some other tools I would like to make in addition.
For example, I would like to make a password integrity check hook
script for Heimdal Kerberos which would utilize 389 server's password
change functionality. That way 389 server can manage the password
policy, and programs which don't use SASL but instead use the
user's password field for authentication can function without
having to put the Kerberos database in the LDAP server.

I'll send out an email to the user list once I create the GitHub repo.

On Mon, Feb 10, 2014 at 11:09 AM, Rich Megginson rmegg...@redhat.com wrote:

On 02/09/2014 02:29 PM, Paul Robert Marino wrote:

I just noticed on the wiki that it says mmr.pl is deprecated because
it is too buggy and has no maintainer.


There is no dedicated source code repo for it.
There is no place to file bugs/tickets against it.
There is no one to fix the bugs/tickets.



I'm just curious if there is a punch list of the bugs I might be
willing to tackle them if so.


Not that I know of, just various emails/irc messages over the years.



I've always found it to be a useful tool,
and I prefer not to click my mouse a hundred times when a command-line
script could do the job.
I'm not saying I would definitely be willing to become the maintainer
on a long term basis because I have too many project on my plate as it
is, just that I would be willing to take some time to do any updates
and bug fixes as I have time and possibly be willing to act as an
temporary maintainer for a brief period of time.

The code is fairly straightforward, and at the least I could make the
option parsing a lot more robust and the error messages far more
helpful.
The coding style is outdated and could probably use prototypes on the
subroutines.
Furthermore, it might be useful to accept a config file and/or
environment variables for some of the information; for example, I hate
putting passwords on the command line.
I can also see that there are a lot more options that would be nice to
be able to tune.
But if there is a bug list I would be happy to spend some time on it
and see how many I could run through in a short period of time.
A quick search of bugzilla.redhat.com didn't seem to show anything as
far as I could tell.


It would probably be easiest to make a github repo for mmr.pl and use that
for tracking changes, bugs/tickets, documentation.



Re: [389-devel] mmr.pl deprecated?

2014-02-11 Thread Rich Megginson

On 02/11/2014 08:45 AM, Paul Robert Marino wrote:

Sorry, I'm a Perl programmer, and a damn good one :-).
I never drank the Python Kool-Aid and I don't think I ever will.
I fundamentally dislike the language; it's too fragile in the name of
enforcing better code formatting practices. Frankly, I've seen many a
Python script that was just as bad and ugly as the worst Perl scripts
I've seen.
Perl has served me well even though it's not as popular right now.


Ok, that's fine.  If you're willing to do the work, I have no problem 
with your choice of language.









On Tue, Feb 11, 2014 at 10:27 AM, Rich Megginson rmegg...@redhat.com wrote:

On 02/11/2014 08:16 AM, Paul Robert Marino wrote:

I would have no problem doing that.
The more I look at it I may just use it as a template for creating a
new set of Perl based replication management tools.


Part of the reason for dropping mmr.pl is Perl :P

We have a new project for creating a management framework, including
replication management, in python.
http://port389.org/wiki/Upstream_test_framework
It started out as the basis for a test framework, but it is now separate
(lib389).

If you are planning to do something from scratch, and you can hack python, I
suggest you take a look.





I've also been thinking of some other tools I would like to make in
addition.
For example, I would like to make a password integrity check hook
script for Heimdal Kerberos which would utilize 389 server's password
change functionality. That way 389 server can manage the password
policy, and programs which don't use SASL but instead use the
user's password field for authentication can function without
having to put the Kerberos database in the LDAP server.

I'll send out an email to the user list once I create the GitHub repo.

On Mon, Feb 10, 2014 at 11:09 AM, Rich Megginson rmegg...@redhat.com
wrote:

On 02/09/2014 02:29 PM, Paul Robert Marino wrote:

I just noticed on the wiki that it says mmr.pl is deprecated because
it is too buggy and has no maintainer.


There is no dedicated source code repo for it.
There is no place to file bugs/tickets against it.
There is no one to fix the bugs/tickets.



I'm just curious if there is a punch list of the bugs I might be
willing to tackle them if so.


Not that I know of, just various emails/irc messages over the years.



I've always found it to be a useful tool,
and I prefer not to click my mouse a hundred times when a command-line
script could do the job.
I'm not saying I would definitely be willing to become the maintainer
on a long term basis because I have too many project on my plate as it
is, just that I would be willing to take some time to do any updates
and bug fixes as I have time and possibly be willing to act as an
temporary maintainer for a brief period of time.

The code is fairly straightforward, and at the least I could make the
option parsing a lot more robust and the error messages far more
helpful.
The coding style is outdated and could probably use prototypes on the
subroutines.
Furthermore, it might be useful to accept a config file and/or
environment variables for some of the information; for example, I hate
putting passwords on the command line.
I can also see that there are a lot more options that would be nice to
be able to tune.
But if there is a bug list I would be happy to spend some time on it
and see how many I could run through in a short period of time.
A quick search of bugzilla.redhat.com didn't seem to show anything as
far as I could tell.


It would probably be easiest to make a github repo for mmr.pl and use
that
for tracking changes, bugs/tickets, documentation.



Re: [389-devel] Please review test cases update with new modules

2014-02-11 Thread Rich Megginson

On 02/11/2014 09:55 AM, thierry bordaz wrote:
Some lib389 routines moved or their names changed (schema, tasks, index 
and plugins):



ack




--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel



Re: [389-devel] mmr.pl deprecated?

2014-02-10 Thread Rich Megginson

On 02/09/2014 02:29 PM, Paul Robert Marino wrote:

I just noticed on the wiki that it says mmr.pl is deprecated because
it is too buggy and has no maintainer.


There is no dedicated source code repo for it.
There is no place to file bugs/tickets against it.
There is no one to fix the bugs/tickets.



I'm just curious if there is a punch list of the bugs I might be
willing to tackle them if so.


Not that I know of, just various emails/irc messages over the years.


I've always found it to be a useful tool,
and I prefer not to click my mouse a hundred times when a command line
script could do the job.
I'm not saying I would definitely be willing to become the maintainer
on a long-term basis, because I have too many projects on my plate as it
is, just that I would be willing to take some time to do any updates
and bug fixes as I have time, and possibly be willing to act as a
temporary maintainer for a brief period of time.

The code is fairly straightforward, and at the least I could make the
option parsing a lot more robust and the error messages far more
helpful.
The coding style is outdated and could probably use prototypes on the
subroutines.
Furthermore, it might be useful to accept a config file and/or
environment variables for some of the information; for example, I hate
putting passwords on the command line.
I can also see that there are a lot more options that it would be nice
to be able to tune.
But if there is a bug list, I would be happy to spend some time on it
and see how many I could run through in a short period of time.
A quick search of bugzilla.redhat.com didn't seem to show anything as
far as I could tell.


It would probably be easiest to make a github repo for mmr.pl and use 
that for tracking changes, bugs/tickets, documentation.
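The config-file/environment-variable idea for keeping passwords off the command line could look something like the following sketch (mmr.pl itself is Perl; this is just an illustration in Python, and the `MMR_BINDPW` variable name and `[replication]` section are hypothetical):

```python
import os
import configparser

def get_bind_password(config_path=None, env_var="MMR_BINDPW"):
    """Resolve the replication bind password without putting it on the
    command line: prefer an environment variable, then a config file."""
    # 1. Environment variable wins if set.
    pw = os.environ.get(env_var)
    if pw:
        return pw
    # 2. Fall back to an INI-style config file, e.g.:
    #    [replication]
    #    bindpw = secret
    if config_path and os.path.exists(config_path):
        cfg = configparser.ConfigParser()
        cfg.read(config_path)
        return cfg.get("replication", "bindpw", fallback=None)
    return None
```

Either way the password never shows up in `ps` output, which is the point of the complaint above.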






[389-devel] Please review: Ticket #47692 single valued attribute replicated ADD does not work

2014-02-07 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47692/0001-Ticket-47692-single-valued-attribute-replicated-ADD-.patch

Re: [389-devel] 389-ds-base: /bin/sh scripts should use . instead of source

2014-01-24 Thread Rich Megginson

On 01/24/2014 04:13 AM, Roberto Polli wrote:

Hi @all,

iirc /bin/sh scripts should use . instead of source (see
http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#dot
)

To use source you should change the interpreter to be /bin/bash

lib389@rpolli:~/workspaces/389-ds-base/ds/ldap/admin/src/scripts$ git diff .
diff --git a/ldap/admin/src/scripts/ldif2db.in
b/ldap/admin/src/scripts/ldif2db.in
index ce15349..fb24863 100755
--- a/ldap/admin/src/scripts/ldif2db.in
+++ b/ldap/admin/src/scripts/ldif2db.in
@@ -1,6 +1,6 @@
  #!/bin/sh
  
-source @datadir@/@package_name@/data/DSSharedLib

+. @datadir@/@package_name@/data/DSSharedLib
  
  libpath_add @libdir@/@package_name@/

  libpath_add @nss_libdir@


https://fedorahosted.org/389/ticket/47511

Re: [389-devel] plugin PRE_ENTRY_FN scope

2014-01-17 Thread Rich Megginson

On 01/17/2014 11:20 AM, Deas, Jim wrote:

Should I be able to use SLAPI_SEARCH_ATTR to view attributes about to be 
returned to the client in PRE_ENTRY_FN?


Yes, looks like it, but you have to be prepared for the case where a 
client does not specify a search attribute list - in this case, the 
client is asking for all non-operational attributes in the entry.



  Can I start a new search inside PRE_ENTRY_FN to find values needed to augment 
the existing attributes being returned?


Yes, that should work.  However, doing this parsing and internal search 
for every single entry returned might be a big performance hit.  You 
might want to examine the SLAPI_SEARCH_ATTRS and do the internal search 
in a SLAPI_PLUGIN_PRE_SEARCH_FN, then store the results in the operation 
(in an operation extension), then just use those results in your 
PRE_ENTRY_FN.  This is what the deref plugin does.
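The control flow described here — do the expensive internal search once per operation in the pre-search callback, stash the result in an operation extension, and reuse it for every entry returned — can be modeled with stubs. The names below (`Operation`, `pre_search`, `pre_entry`, `internal_search`) are illustrative Python stand-ins, not the actual slapi C API:

```python
class Operation:
    """Stand-in for a slapi operation; 'extension' plays the role of the
    operation extension where a plugin stores per-operation data."""
    def __init__(self, search_attrs):
        self.search_attrs = search_attrs
        self.extension = {}

LOOKUPS = 0  # counts simulated internal searches

def internal_search(attrs):
    # Stand-in for the expensive slapi internal search.
    global LOOKUPS
    LOOKUPS += 1
    return {a: "resolved-" + a for a in attrs}

def pre_search(op):
    # Expensive work happens once, before any entries are sent.
    op.extension["resolved"] = internal_search(op.search_attrs)

def pre_entry(op, entry):
    # Cheap per-entry work: just reuse the cached per-operation results.
    entry["augmented"] = op.extension["resolved"]
    return entry

op = Operation(["memberUid"])
pre_search(op)
entries = [pre_entry(op, {"dn": "uid=%d" % i}) for i in range(3)]
```

However many entries the search returns, the internal lookup runs only once per operation, which is the performance win Rich describes.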







[389-devel] Please review: Ticket #47675 logconv errors when search has invalid bind dn

2014-01-16 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47675/0001-Ticket-47675-logconv-errors-when-search-has-invalid-.patch

Re: [389-devel] plugin problem using slapi_entry_attr_find

2014-01-16 Thread Rich Megginson
Another bug in your code.  The argument for SLAPI_SEARCH_ATTRS should be
the address of a char **, e.g.:


{
    char **attrs;
    int ii = 0;
    ...
    if (slapi_pblock_get(pb, SLAPI_SEARCH_ATTRS, &attrs) != 0) return (-1);
    for (ii = 0; attrs && attrs[ii]; ++ii) {
        slapi_log_error(SLAPI_LOG_PLUGIN, "my plugin",
                        "search attr %d is %s\n", ii, attrs[ii]);
    }
    ...

In your plugin entry and in your plugin config you specify which types 
of operations (bind, search, add, etc.) your plugin will handle.
E.g. a SLAPI_PLUGIN_PRE_BIND_FN will be called at the pre-operation 
stage of a BIND operation.


Each type of plugin will have possibly different pblock parameters 
available.  So, for example, if you use the same function as both a bind 
preop and a search preop - when called as a bind preop, the 
SLAPI_SEARCH_ATTRS will not be available.


If you want to use the same function for different op types, declare 
different functions for each op type, then call your common function 
with the op type, like this:


int
bind_preop(Slapi_PBlock *pb) {
return common_function(SLAPI_PLUGIN_PRE_BIND_FN, pb);
}

int
search_preop(Slapi_PBlock *pb) {
return common_function(SLAPI_PLUGIN_PRE_SEARCH_FN, pb);
}
...

int
common_function(int type, Slapi_PBlock *pb) {
...
if (type == SLAPI_PLUGIN_PRE_BIND_FN) {
   do some bind specific action
} else if (type == SLAPI_PLUGIN_PRE_SEARCH_FN) {
   do some search specific action
}
...

On 01/16/2014 03:02 PM, Deas, Jim wrote:


On further review, it appears that the line in question will crash 
Dirsrv on some requests from PAM or even 389-Console, but not when 
searching groups via ldapsearch.


Should there be a statement that determines what type of query 
triggered the preop_result, so I know whether it's proper to look for 
attributes?


*From:*389-devel-boun...@lists.fedoraproject.org 
[mailto:389-devel-boun...@lists.fedoraproject.org] *On Behalf Of *Rich 
Megginson

*Sent:* Thursday, January 16, 2014 11:29 AM
*To:* 389 Directory server developer discussion.
*Subject:* Re: [389-devel] plugin problem using slapi_entry_attr_find

On 01/16/2014 11:39 AM, Deas, Jim wrote:

My bet, a rookie mistake. Am I forgetting to init a pointer etc???

Adding  the line surrounded by **  in this routine makes
dirsrv unstable and crashes it after a few queries.

/* Registered preop_result routine */

int gnest_preop_results( Slapi_PBlock *pb){

Slapi_Entry *e;

Slapi_Attr  **a;

This should be Slapi_Attr *a;

If (slapi_pblock_get( pb, SLAPI_SEARCH_ATTRS, e) !=0 )return (-1);

/* This line makes the server unstable and crashes it 
after one or two queries */


If(slapi_entry_attr_find(e, “memberUid”,a) == 0) 
slapi_log_error(SLAPI_LOG_PLUGIN, “gnest preop”,”memberUid  found in 
record);


/**/

Return (0);

}

*JD*










Re: [389-devel] plugin problem using slapi_entry_attr_find

2014-01-16 Thread Rich Megginson

On 01/16/2014 04:49 PM, Deas, Jim wrote:

I caught the pointer issue after my last post. I misunderstood the process, 
thinking I had to download a master pblock and then use a reference from it for 
obtaining values.
I am trying to intercept queries that are returning posix group information and 
make dynamic changes in the memberUid list returned to the client.
The purpose is to create a single nested group layer handled on the dirsrv side 
so existing Linux PAM systems do not need modification to use simple nested 
groups.

*In the database, I have memberUid values preceded by '@' to designate them 
as group entries instead of people, i.e. memberUid = 
betty,fred,joan,@accountants for three users plus all users who are part of 
the group accountants.

Process:
Capture results
Internal search for any @* as additional groups
Remove @* and add found subgroups memberUid's to the existing results


Ok.  Then you will want to use a SLAPI_PLUGIN_PRE_ENTRY_FN plugin. I 
would suggest taking a look at the deref plugin code, which does 
something very similar - just before an entry is to be returned, the 
deref plugin adds some extra data to the entry to be returned. deref 
defines two plugin functions - a SLAPI_PLUGIN_PRE_SEARCH_FN and a 
SLAPI_PLUGIN_PRE_ENTRY_FN.  It is the latter that does the work of 
adding the extra data to the entry to be returned to the user.









-Original Message-
From: 389-devel-boun...@lists.fedoraproject.org 
[mailto:389-devel-boun...@lists.fedoraproject.org] On Behalf Of Nathan Kinder
Sent: Thursday, January 16, 2014 3:35 PM
To: 389 Directory server developer discussion.
Subject: Re: [389-devel] plugin problem using slapi_entry_attr_find

On 01/16/2014 03:14 PM, Deas, Jim wrote:

Rich,

Thanks. I actually did have the address of operator on the code. Both
the init and config are defining only a couple of specific functions
(start_fn, pre_results_fn,pre_abandon_fn) one function defined for each.

The one I am testing is  preop_results() which does trigger, works as
you suggested below, but crashes when adding a call to
slapi_entry_attr_find() for many but not all remote inquiries.

In the code you shared, you are setting e with this call:

   slapi_pblock_get( pb, SLAPI_SEARCH_ATTRS, e)

The issue here is that e is a Slapi_Entry, but SLAPI_SEARCH_ATTRS doesn't retrieve a 
Slapi_Entry from the pblock.  This means e is incorrect at this point (it will likely 
have bad pointer values if you look into it in gdb).

When you call slapi_entry_attr_find(), it is likely trying to dereference some 
of these bad pointer values, which leads to the crash.
  You need to pass a valid Slapi_Entry to this function (if you even need this 
function).

What exactly are you trying to have your plug-in do?

Thanks,
-NGK

Perhaps I am going at this all wrong. What sequence should I call to
get a multi-valued attribute? In this case, a list of 'memberUid' attribute
values, while rejecting preop_results not directed at returning group information?

  


JD

  


*From:*389-devel-boun...@lists.fedoraproject.org
[mailto:389-devel-boun...@lists.fedoraproject.org] *On Behalf Of *Rich
Megginson
*Sent:* Thursday, January 16, 2014 2:25 PM
*To:* 389 Directory server developer discussion.
*Subject:* Re: [389-devel] plugin problem using slapi_entry_attr_find

  


Another bug in your code.  The argument for SLAPI_SEARCH_ATTRS should
be the address of a char **, e.g.:

{
    char **attrs;
    int ii = 0;
    ...
    if (slapi_pblock_get(pb, SLAPI_SEARCH_ATTRS, &attrs) != 0) return (-1);
    for (ii = 0; attrs && attrs[ii]; ++ii) {
        slapi_log_error(SLAPI_LOG_PLUGIN, "my plugin",
                        "search attr %d is %s\n", ii, attrs[ii]);
    }
    ...

In your plugin entry and in your plugin config you specify which types
of operations (bind, search, add, etc.) your plugin will handle.
E.g. a SLAPI_PLUGIN_PRE_BIND_FN will be called at the pre-operation
stage of a BIND operation.

Each type of plugin will have possibly different pblock parameters
available.  So, for example, if you use the same function as both a
bind preop and a search preop - when called as a bind preop, the
SLAPI_SEARCH_ATTRS will not be available.

If you want to use the same function for different op types, declare
different functions for each op type, then call your common function
with the op type, like this:

int
bind_preop(Slapi_PBlock *pb) {
 return common_function(SLAPI_PLUGIN_PRE_BIND_FN, pb); }

int
search_preop(Slapi_PBlock *pb) {
 return common_function(SLAPI_PLUGIN_PRE_SEARCH_FN, pb); } ...

int
common_function(int type, Slapi_PBlock *pb) {
 ...
 if (type == SLAPI_PLUGIN_PRE_BIND_FN) {
do some bind specific action
 } else if (type == SLAPI_PLUGIN_PRE_SEARCH_FN) {
do some search specific action
 }
 ...

On 01/16/2014 03:02 PM, Deas, Jim wrote:

 On further review it appears that the line in question will crash
 Dirsrv on some request from PAM or even 389-Console

Re: [389-devel] ISO 8601 parser

2014-01-06 Thread Rich Megginson

On 12/23/2013 11:06 AM, Nathaniel McCallum wrote:

https://www.redhat.com/archives/freeipa-devel/2013-December/msg00229.html

389ds may be interested in the ISO 8601 parser contained in the patch.
It offers two main advantages over the one already contained in the 389ds
tree:
1. It is *far* more flexible in what it can parse.
2. It is thoroughly tested (currently ~15k tests).

Thanks!

https://fedorahosted.org/389/ticket/47658

[389-devel] Please review: lib389 - various cleanups

2013-12-18 Thread Rich Megginson

https://fedorahosted.org/389/ticket/47643

Re: [389-devel] Regarding Microsoft announcemnt: MD5 deprecation on Microsoft Windows

2013-12-17 Thread Rich Megginson

Hi,

As you may have noticed, Microsoft has announced the deprecation of the 
MD5 hashing algorithm for the Microsoft Root Certificate Program. Is there 
any impact on us from this decision with regard to the WinSync part?



No.


Can somebody give me some insight into this?

Regards,

Jyoti




[389-devel] Please review: Ticket #47631 objectclass may, must lists skip rest of objectclass once first is found in sup

2013-12-16 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47631/0001-Ticket-47631-objectclass-may-must-lists-skip-rest-of.patch

[389-devel] Please review: take 2: Ticket #47623 fix memleak caused by 47347

2013-12-10 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47623/0001-Ticket-47623-fix-memleak-caused-by-47347.patch

Re: FTBFS if -Werror=format-security flag is used

2013-12-09 Thread Rich Megginson

On 12/09/2013 03:33 PM, Przemek Klosowski wrote:

On 12/06/2013 09:21 AM, Ralf Corsepius wrote:


printf(string) is legitimate C; forcing printf("%s", string) is just 
silly.


My apologies for being repetitive, but the original point is that 
printf(string) is insecure unless you can guarantee that you control 
'string' now and forever. Also, "%s" is the format for printing 
strings, so I just can't agree that coding printf("%s", string) is silly.


Silly is not the right word.  printf("%s", string) is inefficient. In 
this case, it would be better to use puts/fputs.







-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct

[389-devel] Please review: Ticket #47623 fix memleak caused by 47347

2013-12-09 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47623/0001-Ticket-47623-fix-memleak-caused-by-47347.patch

Re: [389-devel] Please review: various lib389 cleanups

2013-11-21 Thread Rich Megginson

On 11/21/2013 08:36 AM, thierry bordaz wrote:

On 11/21/2013 04:22 PM, Rich Megginson wrote:

On 11/21/2013 01:58 AM, thierry bordaz wrote:

Hi,

The changes look  very good.

I have a question regarding start/stop in the Replica class. Why don't 
you use the functions self.agreement.schedule(agmdn, 
interval='start') and self.agreement.schedule(agmdn, interval='stop')?


I will change it to use schedule().



About the function 'agreement_dn(basedn, other)': why not put it 
into the Agreement class?
Note that it uses the 'agreements' function, which is in Replica, but I 
would expect it to be in the Agreement class as well (renamed to 'list'?).


Yes.  I think I will change the names also, to pause() and resume().

I think the following methods should be moved to the Agreement class: 
check_init, start_and_wait, wait_init, start_async, keep_in_sync, 
agreement_dn (rename to "dn"), agreements (rename to "list"). (Also, is 
it a problem that we have a method named "list", the same as a 
Python built-in?)


Hi Rich,

Yes that is good. When fixing 
https://fedorahosted.org/389/ticket/47590 I noticed that with the new 
Agreement class these functions also needed to be moved. I opened 
https://fedorahosted.org/389/ticket/47600 for that. 'dn' and 'list' 
are very good.


When eclipse was complaining with names (function or variable) same as 
built-in keyword, I tried to change the name. Now I like 'list' name 
and it was not a problem with self.replica.list(), so I would vote to 
use 'list'.

New patch attached



regards
thierry




Regards
thierry

On 11/21/2013 03:21 AM, Rich Megginson wrote:












From ff533c70b2b8d31116588aae1c36d457d7cf8697 Mon Sep 17 00:00:00 2001
From: Rich Megginson rmegg...@redhat.com
Date: Thu, 21 Nov 2013 09:33:23 -0700
Subject: [PATCH 8/9] move stop and restart to agreement.pause and agreement.unpause

---
 lib389/brooker.py |   85 +++-
 1 files changed, 38 insertions(+), 47 deletions(-)

diff --git a/lib389/brooker.py b/lib389/brooker.py
index da3d0c2..935f5dc 100644
--- a/lib389/brooker.py
+++ b/lib389/brooker.py
@@ -25,7 +25,8 @@ from lib389._replication import RUV
 from lib389._entry import FormatDict
 
 class Agreement(object):
-ALWAYS = None
+ALWAYS = '-2359 0123456'
+NEVER = '2358-2359 0'
 
 proxied_methods = 'search_s getEntry'.split()
 
@@ -110,22 +111,18 @@ class Agreement(object):
 
 
 
-def schedule(self, agmtdn, interval='start'):
+def schedule(self, agmtdn, interval=ALWAYS):
"""Schedule the replication agreement
 @param agmtdn - DN of the replica agreement
 @param interval - in the form 
-- 'ALWAYS'
-- 'NEVER'
+- Agreement.ALWAYS
+- Agreement.NEVER
 - or 'HHMM-HHMM D+' With D=[0123456]+
@raise ValueError - if interval is not valid
"""
 
 # check the validity of the interval
-if str(interval).lower() == 'start':
-interval = '-2359 0123456'
-elif str(interval).lower == 'never':
-interval = '2358-2359 0'
-else:
+if interval != Agreement.ALWAYS and interval != Agreement.NEVER:
 self._check_interval(interval)
 
 # Check if the replica agreement exists
@@ -421,12 +418,42 @@ class Agreement(object):
self.log.info("Starting total init %s" % entry.dn)
 mod = [(ldap.MOD_ADD, 'nsds5BeginReplicaRefresh', 'start')]
 self.conn.modify_s(entry.dn, mod)
+
+def pause(self, agmtdn, interval=NEVER):
+"""Pause this replication agreement.  This replication agreement
+will send no more changes.  Use the resume() method to unpause.
+@param agmtdn - agreement dn
+@param interval - (default NEVER) replication schedule to use
+"""
+self.log.info("Pausing replication %s" % agmtdn)
+mod = [(
+ldap.MOD_REPLACE, 'nsds5ReplicaEnabled', ['off'])]
+try:
+self.conn.modify_s(agmtdn, mod)
+except LDAPError, e:
+# before 1.2.11, no support for nsds5ReplicaEnabled
+# use schedule hack
+self.schedule(interval)
+
+def resume(self, agmtdn, interval=ALWAYS):
+"""Resume a paused replication agreement, paused with the pause() method.
+@param agmtdn  - agreement dn
+@param interval - (default ALWAYS) replication schedule to use
+"""
+self.log.info("Resuming replication %s" % agmtdn)
+mod = [(
+ldap.MOD_REPLACE, 'nsds5ReplicaEnabled', ['on'])]
+try:
+self.conn.modify_s(agmtdn, mod)
+except LDAPError, e:
+# before 1.2.11, no support for nsds5ReplicaEnabled
+# use schedule hack

Re: [389-devel] Please review: various lib389 cleanups

2013-11-21 Thread Rich Megginson

On 11/21/2013 01:58 AM, thierry bordaz wrote:

Hi,

The changes look  very good.

I have a question regarding start/stop in the Replica class. Why don't 
you use the functions self.agreement.schedule(agmdn, interval='start') 
and self.agreement.schedule(agmdn, interval='stop')?


I will change it to use schedule().



About the function 'agreement_dn(basedn, other)': why not put it 
into the Agreement class?
Note that it uses the 'agreements' function, which is in Replica, but I 
would expect it to be in the Agreement class as well (renamed to 'list'?).


Yes.  I think I will change the names also, to pause() and resume().

I think the following methods should be moved to the Agreement class: 
check_init, start_and_wait, wait_init, start_async, keep_in_sync, 
agreement_dn (rename to "dn"), agreements (rename to "list"). (Also, is it 
a problem that we have a method named "list", the same as a 
Python built-in?)




Regards
thierry

On 11/21/2013 03:21 AM, Rich Megginson wrote:









Re: [389-devel] Please review: various lib389 cleanups

2013-11-21 Thread Rich Megginson

On 11/21/2013 09:38 AM, thierry bordaz wrote:

On 11/21/2013 05:34 PM, Rich Megginson wrote:

On 11/21/2013 08:36 AM, thierry bordaz wrote:

On 11/21/2013 04:22 PM, Rich Megginson wrote:

On 11/21/2013 01:58 AM, thierry bordaz wrote:

Hi,

The changes look  very good.

I have a question regarding start/stop in Replica class. Why do 
not you use the function self.agreement.schedule(agmdn, 
interval='start') and self.agreement.schedule(agmdn, 
interval='stop') ?


I will change it to use schedule().



about the function 'agreement_dn(basedn, other)' why not putting 
it into the Agreement class ?
Note that it uses the functions 'agreements' that is Replica but I 
would expect it to be in Agreement class as well (renamed in 
'list' ?).


Yes.  I think I will change the names also, to pause() and resume().

I think the following methods should be moved to the Agreement 
class: check_init start_and_wait wait_init start_async keep_in_sync 
agreement_dn (rename to dn) agreements (rename to list) (also, 
is it a problem that we have a method name list that is the same 
as a python keyword/built-in?)


Hi Rich,

Yes that is good. When fixing 
https://fedorahosted.org/389/ticket/47590 I noticed that with the 
new Agreement class these functions also needed to be moved. I 
opened https://fedorahosted.org/389/ticket/47600 for that. 'dn' and 
'list' are very good.


When eclipse was complaining with names (function or variable) same 
as built-in keyword, I tried to change the name. Now I like 'list' 
name and it was not a problem with self.replica.list(), so I would 
vote to use 'list'.

New patch attached


Yes, pause/resume & always/never are good names.
ack.


Thanks.

Pushed:
To ssh://git.fedorahosted.org/git/389/lib389.git
   2a6593c..693e668  master -> master



regards
thierry




regards
thierry




Regards
thierry

On 11/21/2013 03:21 AM, Rich Megginson wrote:

















[389-devel] Please review: Ticket #434 admin-serv logs filling with admserv_host_ip_check: ap_get_remote_host could not resolve ip address

2013-11-20 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/434/0001-Ticket-434-admin-serv-logs-filling-with-admserv_host.patch

[389-devel] Please review: Ticket #47478 No groups file? error restarting Admin server

2013-11-20 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47478/0001-Ticket-47478-No-groups-file-error-restarting-Admin-s.patch

[389-devel] Please review: Ticket #47300 [RFE] remove-ds-admin.pl: redesign the behaviour

2013-11-20 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47300/0001-Ticket-47300-RFE-remove-ds-admin.pl-redesign-the-beh.patch

Re: [389-devel] Ticket #47384 (plugin library path validation) and out-of-tree modules

2013-11-19 Thread Rich Megginson

On 11/19/2013 09:44 AM, Nalin Dahyabhai wrote:

Hi, everyone.

I was recently adding a couple of changes to slapi-nis, and when I went
to run its self-tests, some of the tests that modify the plugin entry
started failing with LDAP_UNWILLING_TO_PERFORM.  I tracked the denial
down to validation code that was added as part of ticket #47384.

While the tests don't modify the nsslapd-pluginPath attribute (the
entry's added to dse.ldif before the server starts up), some make other
changes to the plugin entry, and when they attempt that,
check_plugin_path() rejects the modify request.

The checks that were added, which ensure that plugins are only loaded
from the server's plugin directory, make it kind of difficult to run
tests using the copies of plugins in my build tree.

The language in the ticket description's pretty firm that this isn't
going to be changed, and while I can _probably_ work around it on my
end, I figured I'd ask here before going down that route:  is there room
to expand this check to a whitelist, a search path, or some other method
that could be used to provide for my use case?

Sure.  Please file a ticket.  We can figure out some way to hack this 
for testing.  What would you suggest?




Thanks,

Nalin



Re: [389-devel] Ticket #47384 (plugin library path validation) and out-of-tree modules

2013-11-19 Thread Rich Megginson

On 11/19/2013 11:06 AM, Nalin Dahyabhai wrote:

On Tue, Nov 19, 2013 at 10:05:13AM -0700, Rich Megginson wrote:

On 11/19/2013 09:44 AM, Nalin Dahyabhai wrote:

The language in the ticket description's pretty firm that this isn't
going to be changed, and while I can _probably_ work around it on my
end, I figured I'd ask here before going down that route:  is there room
to expand this check to a whitelist, a search path, or some other method
that could be used to provide for my use case?

Sure.  Please file a ticket.  We can figure out some way to hack
this for testing.  What would you suggest?

Great!  I've opened ticket #47601 for this, and we can continue there if
you like.

Yes.

In case there's more to discuss on the list, here are the
options that come to mind:
* When checking a modify request, only check the nsslapd-pluginPath
   value if it shows up in the mods list.
* Add a run-time-configurable whitelist of acceptable locations.
* Replace the check with logic to go ahead and try loading the module,
   unloading it if the load succeeds.

I haven't tried any of these, but I think any of them would be enough.

Thanks,

Nalin



[389-devel] Please review: Ticket #47585 Replication Failures related to skipped entries due to cleaned rids

2013-11-11 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47585/0001-Ticket-47585-Replication-Failures-related-to-skipped.patch

Re: [389-devel] lib389: cleanup __init__

2013-10-31 Thread Rich Megginson

On 10/31/2013 09:58 AM, Roberto Polli wrote:

Hi @all,

I started investigating mocking with fakeldap, and it seems an easy and
viable way of adding unittests.

A main issue is the DSAdmin.__init__ complexity.

I thought - a long time ago actually - to remove from DSAdmin all cached
references to backends, suffixes and configuration.

If we want to add a cache layer, we can do it afterward, with a proper
cache pattern.
Part of the complexity is due to trying to keep data across a restart - 
that is, if you call stop() then start(), you want start() to 
automatically re-establish the connection - to do that, you need to 
store the credentials.
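The credential-caching idea can be sketched as follows. The `DSConn` class and its methods are illustrative stand-ins, not the actual DSAdmin class; the point is only that storing the bind credentials lets start() re-bind after a stop():

```python
class DSConn:
    """Keeps bind credentials so restart() can re-establish the
    connection transparently, as DSAdmin does across stop()/start()."""
    def __init__(self, binddn, bindpw):
        self.binddn = binddn
        self.bindpw = bindpw   # cached so start() can re-bind later
        self.bound = False

    def start(self):
        # simulate re-binding with the stored credentials
        self.bound = True
        return (self.binddn, self.bindpw)

    def stop(self):
        self.bound = False

    def restart(self):
        # works without the caller re-supplying credentials
        self.stop()
        return self.start()

conn = DSConn("cn=directory manager", "password")
conn.start()
creds = conn.restart()
```

Without the cached credentials, every stop()/start() cycle would have to thread the bind DN and password back through the caller.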




Let me know + Peace,
R.



Re: [389-devel] lib389: cleanup __init__

2013-10-31 Thread Rich Megginson

On 10/31/2013 10:35 AM, Roberto Polli wrote:

Hi Rich,

On Thursday 31 October 2013 10:32:13 Rich Megginson wrote:

I thought - a long time ago actually - to remove from DSAdmin all cached
references to backends, suffixes and configuration.

Part of the complexity is due to trying to keep data across a restart

I agree with credential caching - credentials should be quite immutable - and I was
talking about __initPart2() which is called by __init__.

Are all the __initPart2() attributes essential?


No.  You could do lazy evaluation of those fields.  For example, 
instead of having a .dbdir field, have a .getdbdir() member that would 
do an ldapsearch if .dbdir is None.
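The lazy-evaluation idea above can be sketched like this. The `getdbdir` name comes from the message; `_search_dbdir` is a hypothetical stand-in for the ldapsearch against the server config:

```python
class DSAdminSketch:
    def __init__(self):
        self.dbdir = None     # deliberately NOT fetched at __init__ time
        self.searches = 0     # count simulated ldapsearch calls

    def _search_dbdir(self):
        # stand-in for the real ldapsearch against cn=config
        self.searches += 1
        return "/var/lib/dirsrv/slapd-example/db"

    def getdbdir(self):
        """Fetch and cache dbdir only on first use."""
        if self.dbdir is None:
            self.dbdir = self._search_dbdir()
        return self.dbdir

ds = DSAdminSketch()
first = ds.getdbdir()
second = ds.getdbdir()
```

Construction stays cheap (nothing is fetched), and repeated calls hit the cache, so the ldapsearch happens at most once.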




Peace,
R:




Re: [389-devel] Please review lib389 ticket 47575: add test case for ticket47560

2013-10-30 Thread Rich Megginson

On 10/30/2013 10:47 AM, thierry bordaz wrote:

Hello,

This ticket implements a test case and proposes a layout for the CI
tests in 389-ds.
The basic idea is to put CI tests under:
head/dirsrvtests/
tickets/
standalone_test.py
m1c1_test.py
m2_c1_test.py
...


Does "tickets" in this case mean tickets for issues in the 389 trac?



testsuites/
acl_test.py
replication_test.py
...

For example, test_standalone.py would set up a standalone topology
and will contain all ticket test cases that are applicable to a
standalone topology.

https://fedorahosted.org/389/attachment/ticket/47575/0001-Ticket-47575-CI-test-add-test-case-for-ticket47560.patch


So we would just keep adding tests to the single file 
standalone_test.py, every time we add a test for a trac ticket that 
deals with a standalone server?




regards
thierry



Re: [389-devel] Please review lib389 ticket 47578: removal of 'sudo' and absolute path in lib389

2013-10-30 Thread Rich Megginson

On 10/30/2013 02:09 PM, thierry bordaz wrote:

On 10/30/2013 07:56 PM, Jan Rusnacko wrote:

Hello Thierry,

layout OK.

As for the tests - instead of reinventing the wheel by defining a class Test_standAlone
to set up the instance, use a py.test fixture.

Also, you should not force setup, test, teardown execution for each test by
specifying sub-methods for each test. The testing framework (py.test) should be
doing that. I think this will make your tests fail badly if some exception
occurs - if _test_ticket47560_setup raises an exception, it will propagate back to
py.test and the cleanup method will never be executed for that ticket.

Hi Jan,

thanks for your comments.
py.test will call, for each test_ticketxxx:

setup
test_ticketxxx
teardown


setup will create the instance if it does not already exist, else it 
provides a dirsrv handle to it.
teardown will remove the instance if test_ticketxxx was not able to 
properly clean it up (i.e. if test_ticketxxx raised the clean_please flag).
test_ticketxxx starts its execution with clean_please=true (set by 
setup); if it succeeds, it sets clean_please=false - unless it fails to 
properly clean up the instance, in which case it sets clean_please to true.


What would be the advantages of making setup/teardown sub-methods of 
test_ticketxxx?
At least a small drawback is that the person who writes 
test_ticketxxx will have to write them, instead of using the 
standard ones.


Can we have the test framework call a default setup/teardown method if 
not provided by the test?
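One way to get such a default without each test writing its own is a base class whose setup_method/teardown_method py.test invokes automatically, with a test class overriding them only when it must. A minimal sketch (assumed structure, not the actual lib389 harness):

```python
# Sketch only: the instance create/remove calls are elided; the
# clean_please flag mirrors the protocol described above.
class DefaultTopologyTest:
    def setup_method(self, method):
        # Default setup: (re)create or reuse the standalone instance,
        # and assume the test will leave it dirty until proven otherwise.
        self.clean_please = True

    def teardown_method(self, method):
        # Default teardown: remove the instance only if the test
        # flagged that it could not clean up after itself.
        if self.clean_please:
            pass  # remove/reset the instance here

class TestTicketXYZ(DefaultTopologyTest):
    # Inherits the default setup/teardown; no boilerplate needed.
    def test_ticket_xyz(self):
        # ... test body against the instance ...
        self.clean_please = False  # success: instance left clean
```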





Also, I believe each ticket should have its own file which contains one or more
testcases. I think that would reasonably group relevant things together.


Right, that was my first intention. But then I had this idea of grouping 
the tickets per deployment module.

I don't know if it is a good idea, but it seems to be confusing :-[

On 10/30/2013 05:57 PM, thierry bordaz wrote:

https://fedorahosted.org/389/attachment/ticket/47578/0001-Ticket-47578-CI-tests-removal-of-sudo-and-absolute-p.patch




Re: [389-devel] Please review lib389 ticket 47575: add test case for ticket47560

2013-10-30 Thread Rich Megginson

On 10/30/2013 12:12 PM, thierry bordaz wrote:

On 10/30/2013 06:59 PM, Rich Megginson wrote:

On 10/30/2013 10:47 AM, thierry bordaz wrote:

Hello,

This ticket implements a test case and proposes a layout for the
CI tests in 389-ds.
The basic idea is to put CI tests under:
head/dirsrvtests/
    tickets/
        standalone_test.py
        m1c1_test.py
        m2_c1_test.py
        ...


Does "tickets" in this case mean tickets for issues in the 389 trac?

Yes, in my mind this directory would contain test cases for 389 tickets.


File or directory?  I don't understand - is standalone_test.py supposed 
to be a real ticket?  Or will the tickets directory contain files like 
ticket47424.py, ticket47332.py, etc.?






    testsuites/
        acl_test.py
        replication_test.py
        ...

For example, test_standalone.py would set up a standalone
topology and would contain all ticket test cases that are
applicable to a standalone topology.

https://fedorahosted.org/389/attachment/ticket/47575/0001-Ticket-47575-CI-test-add-test-case-for-ticket47560.patch


So we would just keep adding tests to the single file 
standalone_test.py, every time we add a test for a trac ticket that 
deals with a standalone server?
Yes, if we have a test case for a ticket_xyz, we may add a new class 
method


class Test_standAlone(object):
    def setup(self):
        ...

    def teardown(self):
        ...

    def test_ticket_xyz(self):
        def _test_ticket_xyz_setup():
            # initialization of test case ticket xyz
            ...

        def _test_ticket_xyz_teardown():
            # cleanup for test case ticket xyz
            ...

        _test_ticket_xyz_setup()
        # test case
        _test_ticket_xyz_teardown()

    def test_ticket_abc(self):
        ...

    def test_final(self):
        # triggers the cleanup of the standalone instance


This won't be in a separate file called ticketXYZ.py?











regards
thierry









Re: [389-devel] Proof of concept: mocking DS in lib389

2013-10-28 Thread Rich Megginson

On 10/26/2013 12:49 AM, Jan Rusnacko wrote:

On 10/25/2013 11:00 PM, Rich Megginson wrote:

On 10/25/2013 01:36 PM, Jan Rusnacko wrote:

Hello Roberto and Thierry,

as I promised, I am sending you proof-of-concept code that demonstrates how
we can mock DS in unit tests for library functions (see attachment). You can run
the tests just by executing py.test in the tests directory.

Only 3 files are of interest here:

lib389/dsmodules/repl.py - this is a Python module with functions - they expect
a DS instance as the first argument. Since they are functions, not methods, I can
just mock DS and pass the fake one as the first argument to them in unit tests.

tests/test_dsmodules/conftest.py - this file contains definition of mock DS
class along with py.test fixture, that returns it.

tests/test_dsmodules/test_repl.py - this contains unit tests for functions from
repl.py.

What I do is quite simple - I override the ldapadd, ldapdelete, .. methods of the mock DS
class, so that instead of sending commands to a real DS instance, they just store
the data in the 'dit' dictionary (which represents the content stored in DS). This way,
I can check that when I call e.g. the function enable_changelog(..), in the end DS
will have the correct changelog entry.

To put it very bluntly - the enable_changelog(..) function just adds the correct
changelog entry to whatever is passed to it as the first argument. In unit
tests, that is the mock DS; otherwise it would be the real DS class that sends real LDAP
commands to a real DS instance behind it.

def test_add_repl_manager(fake_ds_inst_with_repl):
    ds_inst = fake_ds_inst_with_repl
    ds_inst.repl.add_repl_manager("cn=replication manager, cn=config",
                                  "Secret123")
    assert ds_inst.dit["cn=replication manager, cn=config"]["userPassword"] == \
        "Secret123"
    assert ds_inst.dit["cn=replication manager, cn=config"]["nsIdleTimeout"] == 0
    assert ds_inst.dit["cn=replication manager, cn=config"]["cn"] == \
        "replication manager"
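The mock DS class that a test like this assumes can be sketched as follows (hypothetical names; the real conftest.py mock and the DSModuleProxy wiring differ):

```python
# Sketch only: a fake instance whose ldapadd records entries in a
# local 'dit' dict instead of making a real LDAP ADD request.
class FakeDSInstance:
    def __init__(self):
        # dn -> {attr: value}; stands in for the server content
        self.dit = {}

    def ldapadd(self, dn, attrs):
        # Record the would-be ADD locally so assertions can inspect it.
        self.dit[dn] = dict(attrs)

def add_repl_manager(ds, dn, password):
    # Library-style function under test: it only calls ds.ldapadd,
    # so it works the same against the mock and a real instance.
    ds.ldapadd(dn, {"cn": "replication manager",
                    "userPassword": password,
                    "nsIdleTimeout": "0"})

ds = FakeDSInstance()
add_repl_manager(ds, "cn=replication manager,cn=config", "Secret123")
assert ds.dit["cn=replication manager,cn=config"]["userPassword"] == "Secret123"
```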

If you are using a real directory server instance, doing add_repl_manager() is
going to make a real LDAP ADD request, right?

Correct. If you pass a DS with a real ldapadd method that makes real requests, it's
going to use that.

Will it still update the ds_inst.dit dict?

ds_inst.dit is updated in the mocked ldapadd. So with the real ldapadd, no.

Wouldn't you have to do a real LDAP Search request to get the
actual values?

Yes, correct. ds_inst.dit[] .. call is specific to mocked DS.

But you are right - I could add a fake ldapsearch method that would return
entries from the 'dit' dictionary and use that to retrieve entries from the mocked DS.


Because, otherwise, you have separate tests for mock DS and real DS?  Or 
perhaps I'm missing something?



Now I can successfully test that enable_changelog really works, without going
to the trouble of defining DSInstance or ldap calls at all. Also, I believe this
approach would work for 95% of all functions in lib389. Another benefit is that
unit tests are much faster than on a real DS instance.

Sidenote: even though everything is defined as functions in the separate namespace of the 'repl'
module, at runtime they can be used as normal methods of the class
DSInstance. That is handled by DSModuleProxy. We already went through this, but
not with Roberto.

Hopefully, now with some code in our hands, we will be able to understand each
other on this 'mocking' issue and come to conclusions more quickly.

Let me know what you think.

Thank you,
Jan


