On 21.5.2013 20:30, Simo Sorce wrote:
On Tue, 2013-05-21 at 18:32 +0200, Petr Spacek wrote:
Hello,

I found that we (probably) misunderstood each other. A sky-high level
overview of the proposal follows:

NO CHANGE:
1) LDAP stores all *unsigned* data.

2)
NO CHANGE:
a) bind-dyndb-ldap *on each server* fetches all unsigned data from LDAP and
stores it in an *in-memory* database (we do this now)

THE DIFFERENCE:
b) All data will be stored in BIND's native RBT-database (RBTDB) instead of
our own in-memory database.

NEW PIECES:
3)
Mechanisms implemented in BIND's RBTDB will do DNSSEC signing etc. for us. The
BIND feature is called 'in-line signing' and it can do all key/signature
maintenance for us, including periodic zone re-signing.


The whole point of this proposal is code reuse. I'm trying to avoid
re-inventing the wheel.

Note that the DNSSEC implementation in BIND has ~ 150 kiB of C code, and the
stand-alone signing utilities add another ~ 200 kiB (~ 7000 lines). I really
don't want to rewrite all of that when there is no good reason to.

Further comments are in-line.

Ok putting some numbers on this topic really helps, thanks!

More inline.

[..]

I haven't seen any reasoning from you why letting Bind do this work is
a better idea.
Simply put: because all the code is already in BIND (the feature is called
'in-line signing', as I mentioned above).

I actually see some security reasons why putting this into a DS plugin
can have quite some advantages instead. Have you considered doing this
It could improve the security a bit, I agree. But I don't think it is such a
big advantage. BIND already has all the facilities for key material handling,
so the only thing we have to solve is how to distribute keys from LDAP to a
running BIND.

Well it would mean sticking the keys in LDAP and letting Bind pull them
from there based on ACIs ...
The main issue would be changes in keys, but with the persistent search
I guess that's also not a huge deal.

A zone can be signed with multiple keys at the same time, so key rotation is not a problem. Each signature contains a key ID (key tag).
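
For illustration, each RRSIG carries the key tag of the key that produced it, so a validator can match signatures to DNSKEYs even while two keys are live during a rollover. A hypothetical signed RRset (names, dates, and key tags are invented; signature data is elided):

www.example.net. 3600 IN A     192.0.2.1
www.example.net. 3600 IN RRSIG A 8 3 3600 20130620000000 20130521000000 12345 example.net. <signature by key 12345>
www.example.net. 3600 IN RRSIG A 8 3 3600 20130620000000 20130521000000 54321 example.net. <signature by key 54321>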

work in a DS plugin at all ? If you have considered and discarded the idea,
can you say why ?
1) It would require pulling ~ 200 kiB (~ 7000 lines) of DNSSEC signing code
into 389.

2) It would require pulling a 'text -> DNS wire format' parser into 389 (because
our LDAP stores plain-text data but the signing process works with DNS wire
format).

3) It simplifies bind-dyndb-ldap, but we still need to re-implement the DNS search
algorithm, which has to take DNSSEC oddities into account. (Note that the DNS search
algorithm is part of the database implementation. Bugs/limitations in our
implementation are the reason why wildcard records are not supported...)

4) I'm not sure how it will work with replication. How do we ensure that a new
record will not appear in the zone until the signature for the associated RRset
is (re)computed by DS? (BIND has a transaction mechanism built into the internal RBTDB.)

389 DS has internal transactions, which is why I was thinking of doing the
signatures on any change coming into LDAP (direct or via replication),
within the transaction.

The point is that you *can* do changes at run-time, but you need to know about
the changes as soon as possible, because each change requires a significant
amount of work (and magic/mana :-).

It opens up a lot of opportunities for race conditions.

Yes, I am really concerned about the race conditions of course, however
I really wonder whether doing the signing in Bind is really a good idea.
We need to synchronize these signatures to all masters, right ?
No, because signatures are computed and stored only in memory - and forgotten
after BIND shutdown. Yes, it requires re-computing on each load, which is
definitely a disadvantage.

Ok I definitely need numbers here.
Can you do a test with a normal, text-based Bind zone with 10k entries
and see how much time it takes to re-sign everything ?

I suspect that will be way too much, so we will have the added problem
of having to maintain a local cache in order to be able to restart Bind
and have it actually serve results in a reasonable time w/o killing the
machine completely.

Right, it is a good idea. I never tried a really big zone (for some reason).

Command: /usr/bin/time dnssec-signzone -n 1 -o example.net example.net
Signing was limited to a single core (parameter -n 1).

Unsigned zone: 327 285 bytes, ~ 10 000 A records and several other records
Signed zone: 10 847 688 bytes
Results:
38.28user 0.09system 0:38.80elapsed 98%CPU (0avgtext+0avgdata 18032maxresident)k
0inputs+21200outputs (0major+4646minor)pagefaults 0swaps

Wow, it is pretty slow.

CPU: Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz
Operating memory: 4 GB of DDR3 @ 1333 MHz

The simplest way to mitigate the slow start-up problem is:
1) Store signed version of the zone on the server's file system.
2) Load signed version from disk during start up.
3) In the background, do full zone reload+resign.
4) Switch old and new zones when it is done.

It will consume some computing power during start-up, but the implementation should be really simple. (BIND can naturally save and load zones :-))
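
A minimal sketch of that start-up sequence, assuming the plugin keeps a local dump file. Every ldapdb_* name and the ldapdb_instance_t type below are hypothetical placeholders; only isc_result_t/ISC_R_SUCCESS come from BIND's libisc:

#include <isc/result.h>

/* Hypothetical plugin instance type and helpers; only the four steps
 * from the list above are meant literally. */
typedef struct ldapdb_instance ldapdb_instance_t;

static isc_result_t ldapdb_load_local_dump(ldapdb_instance_t *inst);
static isc_result_t ldapdb_reload_from_ldap_async(ldapdb_instance_t *inst);
static void ldapdb_log(ldapdb_instance_t *inst, const char *msg);

static isc_result_t
ldapdb_startup(ldapdb_instance_t *inst) {
        isc_result_t result;

        /* Steps 1+2: serve the signed copy dumped to disk on the last
         * shutdown, if one exists. */
        result = ldapdb_load_local_dump(inst);
        if (result != ISC_R_SUCCESS)
                ldapdb_log(inst, "no usable local dump, starting cold");

        /* Step 3: rebuild the zone from LDAP and re-sign it in the
         * background while the old copy keeps answering queries. */
        result = ldapdb_reload_from_ldap_async(inst);
        if (result != ISC_R_SUCCESS)
                return (result);

        /* Step 4: the background reload swaps in the freshly signed
         * database atomically when it finishes. */
        return (ISC_R_SUCCESS);
}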

Doesn't that mean we need to store this data back in LDAP ?
No, only 'normal' DNS updates containing unsigned data will be written back to
LDAP. RRSIG and NSEC records will never reach LDAP.

That means more round-trips before the data ends up being usable, and we
do not have transactions in LDAP, so I am worried that doing the signing
in Bind may not be the best way to go.
I'm proposing to re-use BIND's transaction mechanism, built into its internal
database implementation.

=> It should be possible to save the old database to disk (during BIND shutdown
or periodically) and re-use this old database during server startup. I.e. the
server will start replying immediately from the 'old' database and then switch
to the new database when the dump from LDAP is finished.


This looks like an advantage ? Why is it a disadvantage ?
It was mentioned as a 'proposed remedy' for the disadvantage above.

I think having dual authoritative data sources may not be a good thing.
Consistency is a reason why I want to make persistent search mandatory.

A persistent search does not guarantee consistency, only that updates are
sent to you as soon as they come in; you may still end up with bugs in
the implementation where you do not catch something and get out of date
with respect to the actual data in LDAP.

IMHO persistent storage could save the day if LDAP is down for some reason.
Old data in DNS are much better than no data in DNS.

Depends how OLD :-)
But maybe we can bake some timeouts in there and simply dump any cached
data if it really is too old.
Yes, it can be configurable: "Stop responding if synchronization with LDAP has been failing for at least xxxx seconds."
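
A minimal sketch of such a check, assuming the plugin tracks the time of the last successful synchronization (the function and parameter names are made up; only the libc calls are real):

#include <stdbool.h>
#include <time.h>

/* Refuse to answer queries when LDAP synchronization has been broken
 * for longer than the configured limit ('xxxx seconds' above). */
static bool
ldap_data_usable(time_t last_successful_sync, time_t sync_timeout) {
        return ((time(NULL) - last_successful_sync) <= sync_timeout);
}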

=> As a side effect, BIND can start even if the connection to the LDAP server is
down - this can improve infrastructure resiliency a lot!

Same as above ?
The same here; it was mentioned as a 'proposed remedy' for the disadvantage above.

When it comes to DNSSEC, starting w/o LDAP may just mean that you have
different signatures for the same records on different masters. Is that
'legal' according to DNSSEC ?
1) You will have the same signatures as long as the records in LDAP and the
saved copy of the database (on disk) are equal.

Well given an IPA infrastructure uses Dynamic Updates I expect data to
change frequently enough that if you have an outage that lasts more than
a handful of minutes the data in the saved copy will not match the data
in LDAP.
I agree, that is definitely true, but I think that the most important pieces are the NS, SRV and A records for servers. They do not change that often.

IMHO admins would be happier with 100 records out of 10 000 being out of date while the infrastructure works than with no records at all (and a broken infrastructure).

Again, there can be some LDAP synchronization timeout, and the DNS server can stop responding to queries when synchronization is lost for a longer time.

2) I didn't find any new limitation imposed by DNSSEC. AFAIK some
inconsistency between servers is a normal state in DNS, because zone transfers
take some time and the tree structure has many levels.

But in the Bind case they assume a single-master model, so they never
have inconsistencies, right ?
But in our case we have multiple masters, so I wonder if a client is
going to have issues ...
Are we guaranteed that if 2 servers have the exact same view they
generate the exact same signatures ? If that is the case then maybe we
are ok.

The problems arise when data *in a single database* (i.e. on one server) are
inconsistent (e.g. a signature != the data in the unsigned records). BIND solves
this with its built-in transaction mechanisms.
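
For reference, BIND's RBTDB exposes these transactions as database 'versions': readers keep using the old version until a new one is committed. dns_db_newversion() and dns_db_closeversion() are real BIND 9 APIs; the actual rdataset updates are elided in this sketch:

#include <dns/db.h>
#include <isc/boolean.h>

/* Change records and their RRSIGs inside one database version, so a
 * reader can never see a signature that does not match the data. */
static isc_result_t
update_atomically(dns_db_t *db) {
        dns_dbversion_t *ver = NULL;
        isc_result_t result;

        result = dns_db_newversion(db, &ver);   /* open the 'transaction' */
        if (result != ISC_R_SUCCESS)
                return (result);

        /* ... dns_db_addrdataset() calls for data + RRSIGs would go here ... */

        dns_db_closeversion(db, &ver, ISC_TRUE);        /* commit atomically */
        return (ISC_R_SUCCESS);
}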

Understood.

== Uncertain effects ==
- Memory consumption will change, but I'm not sure in which direction.
- SOA serial number maintenance is an open question.

Why is the SOA serial a problem ?
It simply needs more investigation. BIND's RBTDB maintains the SOA serial
internally (it is intertwined with transactions in the DB), so the write-back to
LDAP could be a very delicate operation.

It means all masters will often be out of sync, and this is not very good.
I don't think so. BIND can use timestamp-based serials in exactly the same way as
we do. The only problem is how to implement the 'read from internal DB' -> 'write to
LDAP' operation. It still needs more investigation.

Well if we use timestamp-based serials ... why do we need to write
anything back ? :-) We can just let a DS plugin fill the serial in the
SOA with a timestamp, just for schema compatibility purposes, and just
assume bind has the same or a close enough serial internally.

You are right. The value in LDAP is mostly 'historical'. It is used only as a 'starting value', i.e. the serial value used during a BIND (re)start.
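
For completeness, a timestamp-based serial is trivial to generate: the SOA serial is an unsigned 32-bit value compared with RFC 1982 serial arithmetic, so a plain UNIX timestamp fits. A sketch (the function name is made up):

#include <stdint.h>
#include <time.h>

/* Timestamp-based SOA serial: strictly increasing as long as updates
 * are at least one second apart. */
static uint32_t
soa_serial_now(void) {
        return ((uint32_t)time(NULL));
}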

The decision whether persistent search is a 'requirement' or not will have a
significant impact on the design, so I will write the design document when this
decision is made.

I would like to know more details about the reasons before I can usefully 
comment.

I forgot one more 'Uncertain effect':
- Support for dynamically generated '_location' records will be a big
adventure. It probably means no change from the state without persistent
search :-) After basic exploration it seems doable, but it is still a bit uncertain.

I need more info here, does it mean you have to store _location records
when they are generated ?
I tend towards doing _location record generation during zone load, so everything
will be prepared when a query comes in. As a benefit, it will allow zone transfers
even for signed zones. This still needs more investigation.

The idea is that _location is dynamic though, isn't it ?
The value seems to be 'dynamic', but only from the client's point of view. AFAIK there are three options:
1) _location is configured for a particular client statically in LDAP.
2) Each individual server has its own default value for _location (for clients without explicit configuration).
3) Each individual server can be configured to override all values in _location with one fixed value, i.e. all clients (e.g. in a bandwidth-constrained location) will use only the local server.

This is how I understood the design. Is it correct? If it is, then the value is static from the server's point of view. The 'dynamics' are a result of a moving client, because the client asks different servers for an answer.

Anyway what if we do not sign _location records ?
Will DNSSEC-compliant clients fail in that case ?
I'm not 100% sure, but I see two problems:

1) It seems that opt-out is allowed only for delegation points (NS records belonging to sub-domains).
http://tools.ietf.org/html/rfc5155#section-6

2) Opt-out allows an attacker to insert unsigned data into the replies.
http://www.stanford.edu/~jcm/papers/dnssec_ndss10.pdf section 3.4

Anyway, I don't think that it is necessary.

Maybe we can use the internal bind database
just for the _location "zone" ?
I don't think that it is possible.

If _location.client-a and _location.client-b reside in a single database,
then client-a and client-b have to reside in the same database. (The reason is
that _location.client-a and _location.client-b do not have an immediate common
ancestor.)

Uhmm right, nvm.

My personal conclusion is that re-using BIND's backend will save a huge
amount of work, code to maintain, and bugs.

I can see that, unfortunately I fear it will make multi-master a lot
more difficult at the same time. And given we do want to have
multi-master properties we need to analyze that problem more carefully.
I agree. It is a delicate change and we should not hurry.

Also by welding ourselves to internal Bind infrastructure too much, it
will make it a lot more difficult for us to change the DNS
infrastructure. Bind10 will be completely different internally, and we
may simply decide to even not use bind10 at all and use a completely
different engine going forward. So I am quite wary of welding ourselves
even more to bind 9 internals.
Ehm ... how to say that ... 'too late'. I wasn't around when the DNS design was
made, so I don't know all the reasons behind the decision, but IMHO we use
completely non-standard/obscure hacks all the time.

The proposal above doesn't extend our dependency on BIND, because we already
depend on BIND9 *completely*.

Not really, all the data is currently in LDAP, all we need is to write a
plugin for a different server and start serving data that way.

However if DNSSEC is handled with bind, then rewriting the plugin will
not be sufficient as we do not have the data in LDAP anymore, we also
need to figure out that part.
Ah okay, now I got what you meant. I didn't imagine that the 'write a plugin for a different server' part would be the simple piece :-)

I am not against your proposal because of this, just pointing out.

It is about replacing our own internal database
implementation (buggy, incomplete, non-standard-compliant) with the code from
the original BIND (which is at least standard compliant).

Understood.

What changes are going to be required in bind-dyndb-ldap to use RBTDB
from Bind ? Do we have interfaces already ? Or will it require
additional changes to the glue code we currently use to load our plugin
into bind ?

I have some proof-of-concept code. AFAIK no changes to public interfaces are necessary.

There are 40 functions each database driver has to implement. Currently, we have our own implementation for most of them, and some of them are NULL because they are required only for DNSSEC.

The typical change from our implementation to the native one looks like this:

static isc_result_t
find(dns_db_t *db, dns_name_t *name, dns_dbversion_t *version,
     dns_rdatatype_t type, unsigned int options, isc_stdtime_t now,
     dns_dbnode_t **nodep, dns_name_t *foundname, dns_rdataset_t *rdataset,
     dns_rdataset_t *sigrdataset)
{
- [next 200 lines of our code]
+       return dns_db_find(ldapdb->rbtdb, name, version, type, options, now,
+                          nodep, foundname, rdataset, sigrdataset);
}

Most of the work is about understanding how the native database works.

At the moment I'm able to load data from LDAP and push it to the native database, except for the zone serial. It definitely needs more investigation, but it seems doable.

<sarcasm>
Do you want to go back to the 'light side of the force'? Then we should start with
designing some LDAP->nsupdate gateway and use that for zone maintenance. It
doesn't solve adding/reconfiguring of zones at run-time, but that could be
handled by some stand-alone daemon with an abstraction layer at the proper place.
</sarcasm>

Well the problem is loading of zones, that is why nsupdate can't be
used, we'd have to dump zones on the fly at restart and pile up
nsupdates if bind is not available, and then handle the case where for
some reason nsupdate fails and bind and LDAP get out of sync.

Yes, it is definitely not a simple task, but IMHO it could work. Some glue logic specific to a particular DNS server will be required in any case (for zone addition/removal/reconfiguration; in BIND's case some tooling around the 'rndc' tool), but most of the 'synchronization logic' can be done in a generic way.

I can imagine that such a 'synchronization daemon' could be simpler than the current code (323 780 bytes; 11 909 lines of C).

Also would mean nsupdates made by clients would not be reported back to
LDAP.
I don't agree. DNS has incremental zone transfers (RFC 1995) and a change notification mechanism (RFC 1996). A standards-compliant DNS server can send notifications about changes in the zone, and the (hypothetical) 'synchronization daemon' can read the changes via IXFR.

The way from LDAP to DNS is not simple, that is definitely true ...
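
For illustration, the incremental transfer such a daemon would perform can be tried by hand with dig (a real tool; the server address and the 'last seen' serial below are made up):

dig @192.0.2.53 example.net. IXFR=2013052101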

Using nsupdate was considered, it just is not feasible.

We should reconsider it during the migration from BIND9 to something else :-)


Have a nice day.

--
Petr^2 Spacek
