On 22.5.2013 21:58, Simo Sorce wrote:
On Wed, 2013-05-22 at 17:01 +0200, Petr Spacek wrote:
Wow, it is pretty slow.
Yeah this is what I expected, crypto is not really fast.
[...]
The simplest way to mitigate the problem with slow start-up is (a rough sketch follows the list):
1) Store signed version of the zone on the server's file system.
2) Load signed version from disk during start up.
3) In the background, do full zone reload+resign.
4) Switch old and new zones when it is done.
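
For illustration only, a toy model of this sequence; every name below is a hypothetical stand-in, not BIND 9 or bind-dyndb-ldap code. The point is just that queries are answered from whichever snapshot the server currently holds, while the expensive re-sign happens on a second copy that is then swapped in:

/*
 * Toy model of the proposed start-up sequence (steps 1-4 above).
 * All names are hypothetical stand-ins, not BIND 9 / bind-dyndb-ldap code.
 */
#include <stdio.h>
#include <stdlib.h>

struct zone_snapshot {
        const char *origin;
        int freshly_signed;     /* 0 = old dump from disk, 1 = re-signed from LDAP */
};

static struct zone_snapshot *current;   /* snapshot the server answers from */

static struct zone_snapshot *
snapshot_new(const char *origin, int fresh)
{
        struct zone_snapshot *z = calloc(1, sizeof(*z));
        z->origin = origin;
        z->freshly_signed = fresh;
        return z;
}

int
main(void)
{
        /* 1) + 2) serve the signed dump stored on disk immediately */
        current = snapshot_new("example.com.", 0);
        printf("serving %s (freshly_signed=%d)\n", current->origin, current->freshly_signed);

        /* 3) in the background: full reload from LDAP + re-sign (the slow crypto) */
        struct zone_snapshot *fresh = snapshot_new("example.com.", 1);

        /* 4) switch old and new zones when the re-sign is done */
        struct zone_snapshot *old = current;
        current = fresh;
        free(old);
        printf("serving %s (freshly_signed=%d)\n", current->origin, current->freshly_signed);
        return 0;
}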

Maybe instead of 3/4 we can do something that requires less computation.
We can take the list of records in the zone, and load the list of
records from LDAP.

Here we also set up the persistent search, but we lock it so any update is
queued until we are done with the main resync task.
(We could also temporarily refuse DNS updates, I guess.)

We cross check to find which records have been changed, which have been
removed, and which have been added.
Discard all the records that are unchanged (I assume the vast majority)
and then proceed to delete/modify/add the difference.

This would save a large amount of computation at every startup. Even if it
runs in the background, the main issue here is not just time, but the fact
that the CPU was pegged at 98% for so long.
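
For what it's worth, a simplified model of this compare-and-delta step (hypothetical record type and helper names, not actual bind-dyndb-ldap or BIND code): assuming both the local snapshot and the LDAP dump are sorted by owner name and record type, a single merge-walk classifies every record as unchanged, added, removed, or modified.

/*
 * Simplified model of the re-sync: merge-walk two lists of records
 * (the local snapshot and the LDAP dump), both sorted by owner name and
 * type, and emit only the differences.  The record structure and helpers
 * are hypothetical stand-ins.
 */
#include <stdio.h>
#include <string.h>

struct rec {
        const char *name;       /* owner name          */
        const char *type;       /* e.g. "A", "MX"      */
        const char *rdata;      /* textual record data */
};

/* order records by (name, type) so both lists can be walked in one pass */
static int
keycmp(const struct rec *a, const struct rec *b)
{
        int c = strcmp(a->name, b->name);
        return (c != 0) ? c : strcmp(a->type, b->type);
}

static void
diff(const struct rec *local, size_t nlocal, const struct rec *ldap, size_t nldap)
{
        size_t i = 0, j = 0;

        while (i < nlocal || j < nldap) {
                int c;
                if (i == nlocal)
                        c = 1;          /* local list exhausted */
                else if (j == nldap)
                        c = -1;         /* LDAP list exhausted  */
                else
                        c = keycmp(&local[i], &ldap[j]);

                if (c < 0) {            /* only local: deleted in LDAP */
                        printf("DEL %s %s\n", local[i].name, local[i].type);
                        i++;
                } else if (c > 0) {     /* only in LDAP: newly added */
                        printf("ADD %s %s %s\n", ldap[j].name, ldap[j].type, ldap[j].rdata);
                        j++;
                } else {                /* in both: modified or unchanged */
                        if (strcmp(local[i].rdata, ldap[j].rdata) != 0)
                                printf("MOD %s %s -> %s\n", local[i].name,
                                       local[i].type, ldap[j].rdata);
                        /* unchanged records (the vast majority) are skipped */
                        i++;
                        j++;
                }
        }
}

int
main(void)
{
        const struct rec local[] = {
                { "a.example.com.", "A", "192.0.2.1" },
                { "b.example.com.", "A", "192.0.2.2" },
        };
        const struct rec ldap[] = {
                { "a.example.com.", "A", "192.0.2.1" },   /* unchanged */
                { "b.example.com.", "A", "192.0.2.99" },  /* modified  */
                { "c.example.com.", "A", "192.0.2.3" },   /* added     */
        };

        diff(local, 2, ldap, 3);
        return 0;
}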

It will consume some computing power during start-up, but the implementation
should be really simple. (BIND can naturally save and load zones :-))

I do not think the above would be much more difficult, and it could save
quite a lot of computing if done in the right order and within a BIND
database transaction, I guess.

It sounds doable, I agree. Naturally, I plan to start with a 'naive'/'in-memory only' implementation and add optimizations once the 'naive' part works.

The idea is that _location is dynamic though, isn't it?
[...]
This is how I understood the design. Is it correct? If it is, then the value
is static from the server's point of view. The 'dynamics' are a result of the
client moving, because the client asks different servers for an answer.

Uhmm true, so we could simply store all the fields from within the
plugin so that RBTDB can sign them too.

I think my only concern is if the client can ever load some data from
one server and then some other data from another and find mismatching
signatures.
I didn't find any note about cross-checks between DNS servers. IMHO it doesn't matter, as long as the signature matches the public key in the zone.

I think that some degree of inconsistency is a natural part of DNS. Typically, all changes are propagated from the 'master'/'root of the tree topology' through multiple levels of slaves to the 'leaf slaves'.

Signatures contain timestamps and are periodically re-computed (on the order of weeks), and it takes some time to propagate new signatures through the whole tree.
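
To illustrate why mismatching signatures between servers are harmless to a validating client: the only time-related check is whether 'now' falls inside the RRSIG's inception/expiration window, so signatures generated at different times on different servers are all acceptable as long as each one verifies against the zone's DNSKEY. A simplified sketch (real code must compare the 32-bit timestamps with RFC 1982 serial-number arithmetic):

/*
 * Simplified illustration: a validator accepts an RRSIG whose validity
 * window covers the current time, regardless of which server produced it.
 * Plain comparison is used here for brevity; real implementations use
 * RFC 1982 serial-number arithmetic on the 32-bit timestamps.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

struct rrsig_window {
        uint32_t inception;     /* signature inception, seconds since epoch  */
        uint32_t expiration;    /* signature expiration, seconds since epoch */
};

static bool
rrsig_time_ok(const struct rrsig_window *w, uint32_t now)
{
        return now >= w->inception && now <= w->expiration;
}

int
main(void)
{
        uint32_t now = (uint32_t)time(NULL);
        /* two hypothetical signatures of the same RRset from two servers */
        struct rrsig_window a = { now - 3 * 86400, now + 11 * 86400 };
        struct rrsig_window b = { now - 1 * 86400, now + 13 * 86400 };

        printf("signature A acceptable: %d\n", rrsig_time_ok(&a, now));
        printf("signature B acceptable: %d\n", rrsig_time_ok(&b, now));
        return 0;
}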

What changes are going to be required in bind-dyndb-ldap to use RBTDB
from BIND? Do we have interfaces already? Or will it require
additional changes to the glue code we currently use to load our plugin
into BIND?

I have some proof-of-concept code. AFAIK no changes to public interfaces are
necessary.

There are 40 functions each database driver has to implement. Currently, we
have our own implementation for most of them, and some of them are NULL because
they are required only for DNSSEC.

The typical change from our implementation to the native one looks like this:

static isc_result_t
find(dns_db_t *db, dns_name_t *name, dns_dbversion_t *version,
       dns_rdatatype_t type, unsigned int options, isc_stdtime_t now,
       dns_dbnode_t **nodep, dns_name_t *foundname, dns_rdataset_t *rdataset,
       dns_rdataset_t *sigrdataset)
{
- [next 200 lines of our code]
+       return dns_db_find(ldapdb->rbtdb, name, version, type, options, now,
+                          nodep, foundname, rdataset, sigrdataset);
}

Most of the work is about understanding how the native database works.

I assume rbtdb is now pretty stable and semantic changes are quite
unlikely?

BIND (with our patches) has a defined interface for database backends. Both bind-dyndb-ldap and RBTDB implement this interface, so a semantic change is very, very unlikely.

The plan is to use the 'public' RBTDB interface to avoid any contact between bind-dyndb-ldap and the 'internal knobs' in RBTDB.

At the moment I'm able to load data from LDAP and push it to the native
database, except for the zone serial. It definitely needs more investigation, but
it seems doable.

Well, if we store the data in the DB permanently and synchronize at
startup, I guess the serial problem vanishes completely? (Assuming we
use timestamp-based serials.)
Yes, basically we don't need to write it back to LDAP at all. The behaviour should be the same as with the current implementation.
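
For illustration, a timestamp-based serial can simply be the current Unix time truncated to 32 bits; it grows on its own as long as the clock moves forward, so nothing has to be written back to LDAP. A minimal sketch (not the actual bind-dyndb-ldap code):

/*
 * Sketch of a timestamp-based SOA serial: the current Unix time truncated
 * to 32 bits.  It grows with every change as long as the clock moves
 * forward, so no counter has to be stored back in LDAP.  Wrap-around and
 * comparisons follow RFC 1982 serial-number arithmetic.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint32_t
timestamp_serial(void)
{
        return (uint32_t)time(NULL);
}

int
main(void)
{
        printf("SOA serial: %u\n", timestamp_serial());
        return 0;
}

(Two updates within the same second would need an extra bump, but that is a detail.)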

<sarcasm>
Do you want to go back to the 'light side of the Force'? Then we should start with
designing some LDAP->nsupdate gateway and use that for zone maintenance. It
doesn't solve adding/reconfiguring zones at run-time, but that could be
handled by some stand-alone daemon with an abstraction layer in the proper place.
</sarcasm>

Well, the problem is the loading of zones; that is why nsupdate can't be
used. We'd have to dump zones on the fly at restart, pile up
nsupdates if BIND is not available, and then handle the case where
nsupdate fails for some reason and BIND and LDAP get out of sync.

Yes, it is definitely not a simple task, but IMHO it could work. Some glue
logic specific to a particular DNS server will be required in any case (for
zone addition/removal/reconfiguration - in BIND's case some tooling around
the 'rndc' tool), but most of the 'synchronization logic' can be done in a
generic way.

I can imagine that such a 'synchronization daemon' could be simpler than the
current code (323,780 bytes; 11,909 lines of C).

No, trust me, synchronization is always fraught with nasty corner cases
and things getting out of sync. In fact, I think we should also run the
re-sync I am asking for at load time at regular intervals, so that even
if the internal database and LDAP go out of sync, we fix that every X
hours by performing a smart re-sync (which hopefully will normally
simply consist of comparing two snapshots (internal and LDAP) and finding
that they are basically in sync).
I agree that some periodic re-synchronization would be handy even for the current solution.


It looks like we agree on nearly all points (I apologize if I overlooked something). I will prepare a design document for the transition to RBTDB and then another design document for the DNSSEC implementation.

It would also mean that nsupdates made by clients would not be reported back to
LDAP.
I don't agree. DNS has incremental zone transfers (RFC 1995) and a change
notification mechanism (RFC 1996). A standards-compliant DNS server can send a
notification about changes in the zone, and the (hypothetical) 'synchronization
daemon' can read the changes via IXFR.

And we are reimplementing half of the protocols again :)
We don't have to implement the LDAP protocol because we have the OpenLDAP libraries, and there are libraries for DNS too. Why would we have to implement DNS? :-)

Anyway, it doesn't matter for now; I'm not pushing towards the 'synchronization daemon' at the moment.

--
Petr^2 Spacek
