Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-09-02 Thread Petr Spacek

On 12.8.2013 14:30, Loris Santamaria wrote:

El vie, 09-08-2013 a las 16:22 +0200, Petr Spacek escribió:

On 9.8.2013 15:12, Rob Crittenden wrote:

Simo Sorce wrote:

On Fri, 2013-08-09 at 10:42 +0200, Petr Spacek wrote:

On 23.7.2013 10:55, Petr Spacek wrote:

On 19.7.2013 19:55, Simo Sorce wrote:

I will reply to the rest of the message later if necessary, still
digesting some of your answers, but I wanted to address the following
first.

On Fri, 2013-07-19 at 18:29 +0200, Petr Spacek wrote:


The most important question at the moment is "What can we postpone? How
fragile can it be for shipping it as part of Fedora 20? Could we declare
DNSSEC support as technology preview / 'don't use it for anything
serious'?"


Until we figure out proper management in LDAP we will be a bit stuck, esp.
if we want to consider using the 'something' that stores keys instead of
storing them straight in LDAP.

So maybe we can start with allowing just one server to do DNSSEC and
source keys from files for now ?


The problem is that DNSSEC deployment *on a single domain* is 'all or nothing':
all DNS servers have to support DNSSEC, otherwise validation on the client side
can fail randomly.

Note that the *parent* zone indicates that a particular child zone is secured
with DNSSEC by sending a DS (delegation signer) record to the client.
Validation will fail if the client receives a DS record from the parent but no
signatures are present in the data from the 'child' zone itself.

This prevents downgrade (DNSSEC -> plain DNS) attacks.

As a result, we have only two options: one DNS server with DNSSEC enabled, or
an arbitrary number of DNS servers without DNSSEC, which is very unfortunate.


as soon as we have that working we should also have clearer plans about
how we manage keys in LDAP (or elsewhere).


Dmitri, Martin and I discussed this proposal in person and the new plan is:
- Elect one super-master which will handle key generation (as we do with
special CA certificates)


I guess we can start this way, but how do you determine which one is
master ?

How do we select the 'super-master' for CA certificates? I would re-use the
same logic (for now).


I do not really like having all these 'super roles'; it's brittle, and
admins will be confused, which means one day their whole infrastructure
will be down because the keys have expired and all the clients will
refuse to communicate with anything.


AFAIU keys don't expire; rather, there is a rollover process. The problem would
be that if the server that controlled the rollover went away, the keys would
never roll, leaving you potentially exposed.

In DNSSEC it could be a problem. Each signature contains a validity interval and
validation will fail when it expires. In practice this means that DNS will stop
working if the keys are not rotated in time. (Multiple keys can co-exist, so the
roll-over process can be started e.g. a month before the current key really
expires.)


I think it is ok as a first implementation, but I think this *must not*
be the final state. We can and must do better than this.

I definitely agree. IMHO the basic problem is the same or very similar for
DNSSEC key generation & CA certificates, so we should solve both problems at
once - one day.

I mean - we need to coordinate key & cert maintenance between multiple masters
somehow - and this will be the common problem for CA & DNSSEC.


You could implement a protocol where each master has a day of the week
or the month where it checks if there are any pending keys or CA
certificates to renew and tries to do the job. The next day it is another
master's turn to do the same job, and so on.

Every master is identified by a unique nsDS5ReplicaId, which could be
used as a vector to generate an ordered list of masters. If you have
masters with nsDS5ReplicaId 5,34,35,45 you can say that the one with
nsDS5ReplicaId 5 is master number one, the next is master number two and
so on.

On the first day of the month it is master number one's turn to check for any
pending key and CA certificate renewal issues and to do the renewal. On the
second day of the month it is master number two's turn to do the same.
So if a master was down, the job will be done the next day by the next
master.

The cycle will repeat every 'number of masters' days, in the example
every four days.


It is an interesting idea... but I think it could be fragile and create
some serious problems.


Please see and reply to e-mail in this thread:
https://www.redhat.com/archives/freeipa-devel/2013-September/msg00015.html

Thank you for your time & contribution!

--
Petr^2 Spacek



Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-09-02 Thread Petr Spacek

On 9.8.2013 16:22, Petr Spacek wrote:

On 9.8.2013 15:12, Rob Crittenden wrote:

Simo Sorce wrote:

On Fri, 2013-08-09 at 10:42 +0200, Petr Spacek wrote:

On 23.7.2013 10:55, Petr Spacek wrote:

On 19.7.2013 19:55, Simo Sorce wrote:

I will reply to the rest of the message later if necessary, still
digesting some of your answers, but I wanted to address the following
first.

On Fri, 2013-07-19 at 18:29 +0200, Petr Spacek wrote:


The most important question at the moment is "What can we postpone? How
fragile can it be for shipping it as part of Fedora 20? Could we declare
DNSSEC support as technology preview / 'don't use it for anything
serious'?"


Until we figure out proper management in LDAP we will be a bit stuck, esp.
if we want to consider using the 'something' that stores keys instead of
storing them straight in LDAP.

So maybe we can start with allowing just one server to do DNSSEC and
source keys from files for now ?


The problem is that DNSSEC deployment *on a single domain* is 'all or
nothing': all DNS servers have to support DNSSEC, otherwise validation
on the client side can fail randomly.

Note that the *parent* zone indicates that a particular child zone is secured
with DNSSEC by sending a DS (delegation signer) record to the client.
Validation will fail if the client receives a DS record from the parent but no
signatures are present in the data from the 'child' zone itself.

This prevents downgrade (DNSSEC -> plain DNS) attacks.

As a result, we have only two options: one DNS server with DNSSEC enabled, or
an arbitrary number of DNS servers without DNSSEC, which is very unfortunate.


as soon as we have that working we should also have clearer plans about
how we manage keys in LDAP (or elsewhere).


Dmitri, Martin and I discussed this proposal in person and the new plan is:
- Elect one super-master which will handle key generation (as we do with
special CA certificates)


I guess we can start this way, but how do you determine which one is
master ?

How do we select the 'super-master' for CA certificates? I would re-use the
same logic (for now).


I do not really like having all these 'super roles'; it's brittle, and
admins will be confused, which means one day their whole infrastructure
will be down because the keys have expired and all the clients will
refuse to communicate with anything.


AFAIU keys don't expire; rather, there is a rollover process. The problem would
be that if the server that controlled the rollover went away, the keys would
never roll, leaving you potentially exposed.

In DNSSEC it could be a problem. Each signature contains a validity interval and
validation will fail when it expires. In practice this means that DNS will stop
working if the keys are not rotated in time. (Multiple keys can co-exist, so the
roll-over process can be started e.g. a month before the current key really
expires.)


I think it is ok as a first implementation, but I think this *must not*
be the final state. We can and must do better than this.

I definitely agree. IMHO the basic problem is the same or very similar for
DNSSEC key generation & CA certificates, so we should solve both problems at
once - one day.

I mean - we need to coordinate key & cert maintenance between multiple masters
somehow - and this will be the common problem for CA & DNSSEC.


- Store generated DNSSEC keys in LDAP
- Encrypt stored keys with 'DNSSEC master key' shared by all servers


ok.


- Derive 'DNSSEC master key' from 'Kerberos master key' during server
install/upgrade and store it somewhere on the filesystem (as the Kerberos
master key, on each IPA server)


The Kerberos master key is not stored on disk; furthermore it could
change, so if you derive it at install time and install a replica after

Interesting. The master key is stored in the krbMKey attribute in
cn=REALM,cn=kerberos,dc=your,dc=domain; I didn't know that.


it was changed, everything will break. I think we need to store the key
in LDAP, encrypted, and dump it to disk when a new one is generated.

I agree.


Aside, DNSSEC uses pub/private key crypto so this would be a special
'master key' used exclusively to encrypt keys in LDAP ?

That was the original intention - generate a new 'DNSSEC master key'/'DNSSEC
wrapping key' and let named+certmonger/oddjob play with it.


- Consider certmonger or oddjob as key generation triggers


I do not understand this comment.

I mean: How hard would it be to extend certmonger/oddjob to take care of
DNSSEC key maintenance?


He is trying to automate the key rollover. I don't think certmonger will work
as it is designed for X.509 certs. Are you proposing an additional attribute
to schedule the rollover? I thought that it was a good idea to have some
flexibility here to prevent DoS attacks timed to the rollover.

It definitely requires some changes in certmonger; I'm just exploring various
possibilities.


I think that we should add one new thing - a 'salt' - used for the Kerberos
master key -> DNSSEC master key derivation. It would allow us to re-generate
the DNSSEC master key as necessary without a change in the Kerberos master key.

Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-08-12 Thread Martin Kosek
On 08/09/2013 04:13 PM, Anthony Messina wrote:
 On Friday, August 09, 2013 08:49:29 AM Simo Sorce wrote:
 Dmitri, Martin and I discussed this proposal in person and the new
 plan is: - Elect one super-master which will handle key generation (as
 we do with special CA certificates)
 
 I guess we can start this way, but how do you determine which one is 
 master ? I do not really like having all these 'super roles'; it's
 brittle and admins will be confused, which means one day their whole
 infrastructure will be down because the keys have expired and all the
 clients will refuse to communicate with anything.
 
 I think it is ok as a first implementation, but I think this *must not* 
 be the final state. We can and must do better than this.
 
 I've been listening in on the DNSSEC discussion and do not mean to
 derail the course of this thread, however...
 
 From a sysadmin's perspective, I agree with Simo's comments insofar as they
 relate to not all masters being created equal.  Administratively,
 unequal masters have the potential to create single points of failure
 which may be difficult to resolve, especially on upgrade between minor
 versions and between replicas.
 
 Small-time sysadmins like myself who may only run one (maybe two) FreeIPA
  instances incur a significant amount of trouble when that already limited
  resource isn't working properly after some issue with file ownership or 
 SELinux during a yum upgrade.
 
 In addition, I realize FreeIPA probably wasn't designed with small-ish 
 installs as the target use case.  But I would argue that since FreeIPA
 *is* so unified in how it handles Kerberos, LDAP, Certificates, and DNS, it
 is a viable choice for small-timers (with the only exception being no real
 way to back up an instance without an always-on multi-master
 replica).
 
 As a user who has just completed a manual migration/upgrade to F19
 (after realizing that there really was no way to migrate/upgrade when the
 original install began on F17 2.1 on bare metal with the split slapd
 processes and Dogtag 9, through F18, to F19), I would like to see FreeIPA
 move forward but continue to deliver the above-mentioned services to the
 small-timers, who, without FreeIPA's unification, would never be able to
 manage or offer all of those services independently, like the big-timers
 might be able to.
 
 Thanks.  -A

Hello Anthony,

From your post above, I did not understand what the actual problem is with
FreeIPA vs. small-time admins. I personally think that FreeIPA is usable for
both small-timers and bigger deployments (sorry that you had to undergo the
manual migration procedure).

If you see that this is not true in some part of FreeIPA, please comment or
file tickets/RFEs/Bugzillas which we can process and act on to amend the
situation.

Thanks in advance,
Martin



Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-08-12 Thread Loris Santamaria
El vie, 09-08-2013 a las 16:22 +0200, Petr Spacek escribió:
 On 9.8.2013 15:12, Rob Crittenden wrote:
  Simo Sorce wrote:
  On Fri, 2013-08-09 at 10:42 +0200, Petr Spacek wrote:
  On 23.7.2013 10:55, Petr Spacek wrote:
  On 19.7.2013 19:55, Simo Sorce wrote:
  I will reply to the rest of the message later if necessary, still
  digesting some of your answers, but I wanted to address the following
  first.
 
  On Fri, 2013-07-19 at 18:29 +0200, Petr Spacek wrote:
 
  The most important question at the moment is "What can we postpone? How
  fragile can it be for shipping it as part of Fedora 20? Could we declare
  DNSSEC support as technology preview / 'don't use it for anything
  serious'?"
 
  Until we figure out proper management in LDAP we will be a bit stuck, esp.
  if we want to consider using the 'something' that stores keys instead of
  storing them straight in LDAP.
 
  So maybe we can start with allowing just one server to do DNSSEC and
  source keys from files for now ?
 
  The problem is that DNSSEC deployment *on a single domain* is 'all or 
  nothing': all DNS servers have to support DNSSEC, otherwise validation
  on the client side can fail randomly.
 
  Note that the *parent* zone indicates that a particular child zone is 
  secured with DNSSEC by sending a DS (delegation signer) record to the client.
  Validation will fail if the client receives a DS record from the parent
  but no signatures are present in the data from the 'child' zone itself.
 
  This prevents downgrade (DNSSEC -> plain DNS) attacks.
 
  As a result, we have only two options: one DNS server with DNSSEC 
  enabled, or an arbitrary number of DNS servers without DNSSEC, which is
  very unfortunate.
 
  as soon as we have that working we should also have clearer plans about
  how we manage keys in LDAP (or elsewhere).
 
  Dmitri, Martin and I discussed this proposal in person and the new plan 
  is:
  - Elect one super-master which will handle key generation (as we do with
  special CA certificates)
 
  I guess we can start this way, but how do you determine which one is
  master ?
 How do we select the 'super-master' for CA certificates? I would re-use the 
 same logic (for now).
 
  I do not really like having all these 'super roles'; it's brittle and
  admins will be confused, which means one day their whole infrastructure
  will be down because the keys have expired and all the clients will
  refuse to communicate with anything.
 
  AFAIU keys don't expire; rather, there is a rollover process. The problem 
  would be that if the server that controlled the rollover went away, the
  keys would never roll, leaving you potentially exposed.
 In DNSSEC it could be a problem. Each signature contains a validity interval 
 and validation will fail when it expires. In practice this means that DNS will 
 stop working if the keys are not rotated in time. (Multiple keys can co-exist, 
 so the roll-over process can be started e.g. a month before the current key 
 really expires.)
 
  I think it is ok as a first implementation, but I think this *must not*
  be the final state. We can and must do better than this.
 I definitely agree. IMHO the basic problem is the same or very similar for 
 DNSSEC key generation & CA certificates, so we should solve both problems at 
 once - one day.
 
 I mean - we need to coordinate key & cert maintenance between multiple 
 masters somehow - and this will be the common problem for CA & DNSSEC.

You could implement a protocol where each master has a day of the week
or the month where it checks if there are any pending keys or CA
certificates to renew and tries to do the job. The next day it is another
master's turn to do the same job, and so on.

Every master is identified by a unique nsDS5ReplicaId, which could be
used as a vector to generate an ordered list of masters. If you have
masters with nsDS5ReplicaId 5,34,35,45 you can say that the one with
nsDS5ReplicaId 5 is master number one, the next is master number two and
so on.

On the first day of the month it is master number one's turn to check for any
pending key and CA certificate renewal issues and to do the renewal. On the
second day of the month it is master number two's turn to do the same.
So if a master was down, the job will be done the next day by the next
master.

The cycle will repeat every 'number of masters' days, in the example
every four days.
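
For illustration, a minimal Python sketch of this rotation (the helper name
and the day-of-month convention are illustrative, not an actual IPA API):

from datetime import date

def renewal_master(replica_ids, today=None):
    """Return the nsDS5ReplicaId whose turn it is to run renewals today.

    Masters are ordered by replica ID and duty rotates daily, so with
    masters 5, 34, 35, 45 the cycle repeats every four days and a master
    that was down is covered by the next one the following day.
    """
    today = today or date.today()
    ordered = sorted(replica_ids)
    return ordered[(today.day - 1) % len(ordered)]

print(renewal_master([5, 34, 35, 45]))  # e.g. 5 on the 1st, 34 on the 2nd, ...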


-- 
Loris Santamaria   linux user #70506   xmpp:lo...@lgs.com.ve
Links Global Services, C.A.http://www.lgs.com.ve
Tel: 0286 952.06.87  Cel: 0414 095.00.10  sip:1...@lgs.com.ve

If I'd asked my customers what they wanted, they'd have said
a faster horse - Henry Ford



Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-08-12 Thread Anthony Messina
On Monday, August 12, 2013 09:34:19 AM Martin Kosek wrote:
 On 08/09/2013 04:13 PM, Anthony Messina wrote:
  On Friday, August 09, 2013 08:49:29 AM Simo Sorce wrote:
  Dmitri, Martin and I discussed this proposal in person and the new
  plan is: - Elect one super-master which will handle key generation (as
  we do with special CA certificates)
 
  
 
  I guess we can start this way, but how do you determine which one is 
  master ? I do not really like having all these 'super roles'; it's
  brittle and admins will be confused, which means one day their whole
  infrastructure will be down because the keys have expired and all the
  clients will refuse to communicate with anything.
 
  
 
  I think it is ok as a first implementation, but I think this *must not* 
  be the final state. We can and must do better than this.
 
  
 
  I've been listening in on the DNSSEC discussion and do not mean to
  derail the course of this thread, however...
 
  
 
  From a sysadmin's perspective, I agree with Simo's comments insofar as they
  relate to not all masters being created equal.  Administratively,
  unequal masters have the potential to create single points of failure
  which may be difficult to resolve, especially on upgrade between minor
  versions and between replicas.
 
  
 
  Small-time sysadmins like myself who may only run one (maybe two) FreeIPA
 
   instances incur a significant amount of trouble when that already limited
   resource isn't working properly after some issue with file ownership or 
 
  SELinux during a yum upgrade.
 
  
 
  In addition, I realize FreeIPA probably wasn't designed with small-ish 
  installs as the target use case.  But I would argue that since FreeIPA
  *is* so unified in how it handles Kerberos, LDAP, Certificates, and DNS, it
  is a viable choice for small-timers (with the only exception being no real
  way to back up an instance without an always-on multi-master
  replica).
 
  
 
  As a user who has just completed a manual migration/upgrade to F19
  (after realizing that there really was no way to migrate/upgrade when the
  original install began on F17 2.1 on bare metal with the split slapd
  processes and Dogtag 9, through F18, to F19), I would like to see FreeIPA
  move forward but continue to deliver the above-mentioned services to the
  small-timers, who, without FreeIPA's unification, would never be able to
  manage or offer all of those services independently, like the big-timers
  might be able to.
 
  
 
  Thanks.  -A
 
 Hello Anthony,
 
 From your post above, I did not understand what the actual problem is with
 FreeIPA vs. small-time admins. I personally think that FreeIPA is usable for
 both small-timers and bigger deployments (sorry that you had to undergo the
 manual migration procedure).
 
 If you see that this is not true in some part of FreeIPA, please comment or
 file tickets/RFEs/Bugzillas which we can process and act on to amend the
 situation.
 
 Thanks in advance,
 Martin

Martin, I *do* think FreeIPA is an excellent choice for small-time admins, 
especially with the increased effort on improving documentation, the upcoming 
ipa-client-advise tool, and the ipa-backup/restore tools.  I merely wanted to 
state that 1) I agreed with Simo's comments, and point out that 2) unequal 
masters with regard to DNSSEC have the potential to be a single point of 
failure and an area of concern for small-time admins who may, for example, 
already be coping with the (albeit solid) recommendations to run multiple 
concurrent masters in virtualized environments with duplicates of those 
environments available simply for testing upgrades (a fair amount of 
administrative overhead for a small-timer).

In short, I was voicing a sysadmin's opinion that FreeIPA should continue to 
evolve in a way that supports small-time admins as well.  I do not think there 
is a problem with FreeIPA vs. small-time admins and am hoping it stays that 
way.

As far as the manual migration...  This was likely an issue with documentation 
and/or release notes: 2.2 said it was ok to upgrade from 2.1, 3.0 said it was 
ok to upgrade from 2.2, etc.  This is likely all true, unless your original 
2.2 was based on a 2.1 with Dogtag 9.  At that point in time, I had one FreeIPA 
master on bare metal.  I have since upgraded my infrastructure and started 
over to have two FreeIPA masters in VMs hosted on separate machines.  
Hopefully, this amount of redundancy will afford me some upgrade protection 
for the future.

Thanks again.  -A

-- 
Anthony - http://messinet.com - http://messinet.com/~amessina/gallery
8F89 5E72 8DF0 BCF0 10BE 9967 92DC 35DC B001 4A4E



Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-08-09 Thread Petr Spacek

On 23.7.2013 10:55, Petr Spacek wrote:

On 19.7.2013 19:55, Simo Sorce wrote:

I will reply to the rest of the message later if necessary, still
digesting some of your answers, but I wanted to address the following
first.

On Fri, 2013-07-19 at 18:29 +0200, Petr Spacek wrote:


The most important question at the moment is "What can we postpone? How
fragile can it be for shipping it as part of Fedora 20? Could we declare
DNSSEC support as technology preview / 'don't use it for anything
serious'?"


Until we figure out proper management in LDAP we will be a bit stuck, esp.
if we want to consider using the 'something' that stores keys instead of
storing them straight in LDAP.

So maybe we can start with allowing just one server to do DNSSEC and
source keys from files for now ?


The problem is that DNSSEC deployment *on a single domain* is 'all or nothing':
all DNS servers have to support DNSSEC, otherwise validation on the client side
can fail randomly.

Note that the *parent* zone indicates that a particular child zone is secured
with DNSSEC by sending a DS (delegation signer) record to the client.
Validation will fail if the client receives a DS record from the parent but no
signatures are present in the data from the 'child' zone itself.

This prevents downgrade (DNSSEC -> plain DNS) attacks.

As a result, we have only two options: one DNS server with DNSSEC enabled, or
an arbitrary number of DNS servers without DNSSEC, which is very unfortunate.


as soon as we have that working we should also have clearer plans about
how we manage keys in LDAP (or elsewhere).


Dmitri, Martin and I discussed this proposal in person and the new plan is:
- Elect one super-master which will handle key generation (as we do with 
special CA certificates)

- Store generated DNSSEC keys in LDAP
- Encrypt stored keys with 'DNSSEC master key' shared by all servers
- Derive 'DNSSEC master key' from 'Kerberos master key' during server 
install/upgrade and store it somewhere on the filesystem (as the Kerberos 
master key, on each IPA server)

- Consider certmonger or oddjob as key generation triggers

I think that we should add one new thing - a 'salt' - used for the Kerberos 
master key -> DNSSEC master key derivation. It would allow us to re-generate 
the DNSSEC master key as necessary without a change in the Kerberos master key.



Does it make sense? Does anybody have any ideas/recommendations on which 
libraries we should use for key derivation and key material en/decryption?


Thank you for your time!

--
Petr^2 Spacek



Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-08-09 Thread Simo Sorce
On Fri, 2013-08-09 at 10:42 +0200, Petr Spacek wrote:
 On 23.7.2013 10:55, Petr Spacek wrote:
  On 19.7.2013 19:55, Simo Sorce wrote:
  I will reply to the rest of the message later if necessary, still
  digesting some of your answers, but I wanted to address the following
  first.
 
  On Fri, 2013-07-19 at 18:29 +0200, Petr Spacek wrote:
 
  The most important question at the moment is "What can we postpone? How
  fragile can it be for shipping it as part of Fedora 20? Could we declare
  DNSSEC support as technology preview / 'don't use it for anything
  serious'?"
 
  Until we figure out proper management in LDAP we will be a bit stuck, esp.
  if we want to consider using the 'something' that stores keys instead of
  storing them straight in LDAP.
 
  So maybe we can start with allowing just one server to do DNSSEC and
  source keys from files for now ?
 
  The problem is that DNSSEC deployment *on a single domain* is 'all or 
  nothing': all DNS servers have to support DNSSEC, otherwise validation
  on the client side can fail randomly.
 
  Note that the *parent* zone indicates that a particular child zone is secured
  with DNSSEC by sending a DS (delegation signer) record to the client. 
  Validation will fail if the client receives a DS record from the parent
  but no signatures are present in the data from the 'child' zone itself.
 
  This prevents downgrade (DNSSEC -> plain DNS) attacks.
 
  As a result, we have only two options: one DNS server with DNSSEC enabled, or
  an arbitrary number of DNS servers without DNSSEC, which is very unfortunate.
 
  as soon as we have that working we should also have clearer plans about
  how we manage keys in LDAP (or elsewhere).
 
 Dmitri, Martin and I discussed this proposal in person and the new plan is:
 - Elect one super-master which will handle key generation (as we do with 
 special CA certificates)

I guess we can start this way, but how do you determine which one is
master ?
I do not really like having all these 'super roles'; it's brittle, and
admins will be confused, which means one day their whole infrastructure
will be down because the keys have expired and all the clients will
refuse to communicate with anything.

I think it is ok as a first implementation, but I think this *must not*
be the final state. We can and must do better than this.

 - Store generated DNSSEC keys in LDAP
 - Encrypt stored keys with 'DNSSEC master key' shared by all servers

ok.

 - Derive 'DNSSEC master key' from 'Kerberos master key' during server 
 install/upgrade and store it somewhere on the filesystem (as the Kerberos 
 master key, on each IPA server)

The Kerberos master key is not stored on disk; furthermore it could
change, so if you derive it at install time and install a replica after
it was changed, everything will break. I think we need to store the key
in LDAP, encrypted, and dump it to disk when a new one is generated.

Aside, DNSSEC uses pub/private key crypto so this would be a special
'master key' used exclusively to encrypt keys in LDAP ?

 - Consider certmonger or oddjob as key generation triggers

I do not understand this comment.

 I think that we should add one new thing - a 'salt' - used for the Kerberos 
 master key -> DNSSEC master key derivation. It would allow us to re-generate 
 the DNSSEC master key as necessary without a change in the Kerberos master key.

Salts are not necessary; HKDF from a cryptographically random key does
not require one.
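
A minimal sketch of such a derivation with HKDF, using the Python
'cryptography' library for illustration (the 'info' label and key length
are assumptions, not what IPA ships):

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_dnssec_master_key(krb_master_key: bytes) -> bytes:
    """Derive a 32-byte DNSSEC wrapping key from the Kerberos master key."""
    hkdf = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,                      # optional when the input key is random
        info=b"IPA DNSSEC master key",  # illustrative context label
        backend=default_backend(),
    )
    return hkdf.derive(krb_master_key)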

 Does it make sense? Does anybody have any ideas/recommendations on which 
 libraries we should use for key derivation and key material en/decryption?

openssl/nss - I already have all the basic code we need for that.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York



Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-08-09 Thread Rob Crittenden

Simo Sorce wrote:

On Fri, 2013-08-09 at 10:42 +0200, Petr Spacek wrote:

On 23.7.2013 10:55, Petr Spacek wrote:

On 19.7.2013 19:55, Simo Sorce wrote:

I will reply to the rest of the message later if necessary, still
digesting some of your answers, but I wanted to address the following
first.

On Fri, 2013-07-19 at 18:29 +0200, Petr Spacek wrote:


The most important question at the moment is "What can we postpone? How
fragile can it be for shipping it as part of Fedora 20? Could we declare
DNSSEC support as technology preview / 'don't use it for anything
serious'?"


Until we figure out proper management in LDAP we will be a bit stuck, esp.
if we want to consider using the 'something' that stores keys instead of
storing them straight in LDAP.

So maybe we can start with allowing just one server to do DNSSEC and
source keys from files for now ?


The problem is that DNSSEC deployment *on a single domain* is 'all or nothing':
all DNS servers have to support DNSSEC, otherwise validation on the client side
can fail randomly.

Note that the *parent* zone indicates that a particular child zone is secured
with DNSSEC by sending a DS (delegation signer) record to the client.
Validation will fail if the client receives a DS record from the parent but no
signatures are present in the data from the 'child' zone itself.

This prevents downgrade (DNSSEC -> plain DNS) attacks.

As a result, we have only two options: one DNS server with DNSSEC enabled, or
an arbitrary number of DNS servers without DNSSEC, which is very unfortunate.


as soon as we have that working we should also have clearer plans about
how we manage keys in LDAP (or elsewhere).


Dmitri, Martin and I discussed this proposal in person and the new plan is:
- Elect one super-master which will handle key generation (as we do with
special CA certificates)


I guess we can start this way, but how do you determine which one is
master ?
I do not really like having all these 'super roles'; it's brittle, and
admins will be confused, which means one day their whole infrastructure
will be down because the keys have expired and all the clients will
refuse to communicate with anything.


AFAIU keys don't expire; rather, there is a rollover process. The problem 
would be that if the server that controlled the rollover went away, the keys 
would never roll, leaving you potentially exposed.



I think it is ok as a first implementation, but I think this *must not*
be the final state. We can and must do better than this.


- Store generated DNSSEC keys in LDAP
- Encrypt stored keys with 'DNSSEC master key' shared by all servers


ok.


- Derive 'DNSSEC master key' from 'Kerberos master key' during server
install/upgrade and store it somewhere on the filesystem (as the Kerberos
master key, on each IPA server)


The Kerberos master key is not stored on disk; furthermore it could
change, so if you derive it at install time and install a replica after
it was changed, everything will break. I think we need to store the key
in LDAP, encrypted, and dump it to disk when a new one is generated.

Aside, DNSSEC uses pub/private key crypto so this would be a special
'master key' used exclusively to encrypt keys in LDAP ?


- Consider certmonger or oddjob as key generation triggers


I do not understand this comment.


He is trying to automate the key rollover. I don't think certmonger will 
work as it is designed for X.509 certs. Are you proposing an additional 
attribute to schedule the rollover? I thought that it was a good idea to 
have some flexibility here to prevent DoS attacks timed to the rollover.
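
One possible shape of that flexibility, sketched in Python: randomize the
rollover start within a window so an attacker cannot predict it, while still
starting well before the signatures expire (the parameters are made up):

import random
from datetime import datetime, timedelta

def schedule_rollover(not_after: datetime, lead_days=30, jitter_days=7):
    """Pick a randomized rollover start inside a window before expiry."""
    base = not_after - timedelta(days=lead_days)
    return base + timedelta(seconds=random.randint(0, jitter_days * 86400))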



I think that we should add one new thing - a 'salt' - used for the Kerberos
master key -> DNSSEC master key derivation. It would allow us to re-generate
the DNSSEC master key as necessary without a change in the Kerberos master key.


Salts are not necessary; HKDF from a cryptographically random key does
not require one.


Does it make sense? Does anybody have any ideas/recommendations on which
libraries we should use for key derivation and key material en/decryption?


openssl/nss - I already have all the basic code we need for that.


I prefer the procedure just outlined in 
https://www.redhat.com/archives/freeipa-devel/2013-August/msg00089.html 
which just calls dnssec-keygen rather than trying to roll your own. I 
don't know what derivation really buys you.
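
A sketch of that approach - shelling out to BIND's dnssec-keygen and reading
the generated files back (e.g. for import into LDAP); the directory layout
and function name are illustrative:

import subprocess
from pathlib import Path

def generate_zone_keys(zone, directory="."):
    """Generate a KSK and a ZSK for `zone` with dnssec-keygen.

    dnssec-keygen prints the base name of the generated key pair
    (e.g. Kexample.com.+008+12345); the .key file holds the public
    DNSKEY record and the .private file the private key material.
    """
    def keygen(extra):
        out = subprocess.run(
            ["dnssec-keygen", "-a", "RSASHA256", "-b", "2048",
             "-K", directory] + extra + [zone],
            check=True, capture_output=True, text=True)
        return out.stdout.strip()

    ksk = keygen(["-f", "KSK"])  # key-signing key (flags=257)
    zsk = keygen([])             # zone-signing key (flags=256)
    return {name: (Path(directory, name + ".key").read_text(),
                   Path(directory, name + ".private").read_text())
            for name in (ksk, zsk)}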


rob



Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-08-09 Thread Anthony Messina
On Friday, August 09, 2013 08:49:29 AM Simo Sorce wrote:
  Dmitri, Martin and I discussed this proposal in person and the new plan
  is: - Elect one super-master which will handle key generation (as we do
  with special CA certificates)
 
 I guess we can start this way, but how do you determine which one is
 master ?
 I do not really like having all these 'super roles'; it's brittle and
 admins will be confused, which means one day their whole infrastructure
 will be down because the keys have expired and all the clients will
 refuse to communicate with anything.
 
 I think it is ok as a first implementation, but I think this *must not*
 be the final state. We can and must do better than this.

I've been listening in on the DNSSEC discussion and do not mean to derail 
the course of this thread, however...

From a sysadmin's perspective, I agree with Simo's comments insofar as they 
relate to not all masters being created equal.  Administratively, unequal 
masters have the potential to create single points of failure which may be 
difficult to resolve, especially on upgrade between minor versions and between 
replicas.

Small-time sysadmins like myself who may only run one (maybe two) FreeIPA 
instances incur a significant amount of trouble when that already limited 
resource isn't working properly after some issue with file ownership or 
SELinux during a yum upgrade.

In addition, I realize FreeIPA probably wasn't designed with small-ish 
installs as the target use case.  But I would argue that since FreeIPA *is* so 
unified in how it handles Kerberos, LDAP, Certificates, and DNS, it is a viable 
choice for small-timers (with the only exception being no real way to back 
up an instance without an always-on multi-master replica).

As a user who has just completed a manual migration/upgrade to F19 (after 
realizing that there really was no way to migrate/upgrade when the original 
install began on F17 2.1 on bare metal with the split slapd processes and 
Dogtag 9, through F18, to F19), I would like to see FreeIPA move forward but 
continue to deliver the above-mentioned services to the small-timers, who, 
without FreeIPA's unification, would never be able to manage or offer all of 
those services independently, like the big-timers might be able to.

Thanks.  -A

-- 
Anthony - http://messinet.com - http://messinet.com/~amessina/gallery
8F89 5E72 8DF0 BCF0 10BE 9967 92DC 35DC B001 4A4E



Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-08-09 Thread Petr Spacek

On 9.8.2013 15:12, Rob Crittenden wrote:

Simo Sorce wrote:

On Fri, 2013-08-09 at 10:42 +0200, Petr Spacek wrote:

On 23.7.2013 10:55, Petr Spacek wrote:

On 19.7.2013 19:55, Simo Sorce wrote:

I will reply to the rest of the message later if necessary, still
digesting some of your answers, but I wanted to address the following
first.

On Fri, 2013-07-19 at 18:29 +0200, Petr Spacek wrote:


The most important question at the moment is "What can we postpone? How
fragile can it be for shipping it as part of Fedora 20? Could we declare
DNSSEC support as technology preview / 'don't use it for anything
serious'?"


Until we figure out proper management in LDAP we will be a bit stuck, esp.
if we want to consider using the 'something' that stores keys instead of
storing them straight in LDAP.

So maybe we can start with allowing just one server to do DNSSEC and
source keys from files for now ?


The problem is that DNSSEC deployment *on a single domain* is 'all or nothing':
all DNS servers have to support DNSSEC, otherwise validation on the client
side can fail randomly.

Note that the *parent* zone indicates that a particular child zone is secured
with DNSSEC by sending a DS (delegation signer) record to the client.
Validation will fail if the client receives a DS record from the parent but no
signatures are present in the data from the 'child' zone itself.

This prevents downgrade (DNSSEC -> plain DNS) attacks.

As a result, we have only two options: one DNS server with DNSSEC enabled, or
an arbitrary number of DNS servers without DNSSEC, which is very unfortunate.


as soon as we have that working we should also have clearer plans about
how we manage keys in LDAP (or elsewhere).


Dmitri, Martin and I discussed this proposal in person and the new plan is:
- Elect one super-master which will handle key generation (as we do with
special CA certificates)


I guess we can start this way, but how do you determine which one is
master ?
How do we select the 'super-master' for CA certificates? I would re-use the 
same logic (for now).



I do not really like having all these 'super roles'; it's brittle, and
admins will be confused, which means one day their whole infrastructure
will be down because the keys have expired and all the clients will
refuse to communicate with anything.


AFAIU keys don't expire; rather, there is a rollover process. The problem would
be that if the server that controlled the rollover went away, the keys would
never roll, leaving you potentially exposed.
In DNSSEC it could be a problem. Each signature contains a validity interval 
and validation will fail when it expires. In practice this means that DNS will 
stop working if the keys are not rotated in time. (Multiple keys can co-exist, 
so the roll-over process can be started e.g. a month before the current key 
really expires.)



I think it is ok as a first implementation, but I think this *must not*
be the final state. We can and must do better than this.
I definitely agree. IMHO the basic problem is the same or very similar for 
DNSSEC key generation & CA certificates, so we should solve both problems at 
once - one day.


I mean - we need to coordinate key & cert maintenance between multiple masters 
somehow - and this will be the common problem for CA & DNSSEC.



- Store generated DNSSEC keys in LDAP
- Encrypt stored keys with 'DNSSEC master key' shared by all servers


ok.


- Derive 'DNSSEC master key' from 'Kerberos master key' during server
install/upgrade and store it somewhere on the filesystem (as the Kerberos
master key, on each IPA server)


The Kerberos master key is not stored on disk; furthermore it could
change, so if you derive it at install time and install a replica after
Interesting. The master key is stored in the krbMKey attribute in 
cn=REALM,cn=kerberos,dc=your,dc=domain; I didn't know that.



it was changed, everything will break. I think we need to store the key
in LDAP, encrypted, and dump it to disk when a new one is generated.

I agree.


Aside, DNSSEC uses pub/private key crypto so this would be a special
'master key' used exclusively to encrypt keys in LDAP ?
That was the original intention - generate a new 'DNSSEC master key'/'DNSSEC 
wrapping key' and let named+certmonger/oddjob play with it.



- Consider certmonger or oddjob as key generation triggers


I do not understand this comment.
I mean: How hard would it be to extend certmonger/oddjob to take care of 
DNSSEC key maintenance?



He is trying to automate the key rollover. I don't think certmonger will work
as it is designed for X.509 certs. Are you proposing an additional attribute
to schedule the rollover? I thought that it was a good idea to have some
flexibility here to prevent DoS attacks timed to the rollover.
It definitely requires some changes in certmonger; I'm just exploring various 
possibilities.



I think that we should add one new thing - a 'salt' - used for the Kerberos
master key -> DNSSEC master key derivation. It would allow us to re-generate
the DNSSEC master key as necessary without a change in the Kerberos master key.

Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-07-23 Thread Petr Spacek

On 19.7.2013 19:55, Simo Sorce wrote:

I will reply to the rest of the message later if necessary, still
digesting some of your answers, but I wanted to address the following
first.

On Fri, 2013-07-19 at 18:29 +0200, Petr Spacek wrote:


The most important question at the moment is "What can we postpone? How
fragile can it be for shipping it as part of Fedora 20? Could we declare
DNSSEC support as technology preview / 'don't use it for anything
serious'?"


Until we figure out proper management in LDAP we will be a bit stuck, esp.
if we want to consider using the 'something' that stores keys instead of
storing them straight in LDAP.

So maybe we can start with allowing just one server to do DNSSEC and
source keys from files for now ?


The problem is that DNSSEC deployment *on a single domain* is 'all or nothing': 
all DNS servers have to support DNSSEC, otherwise validation on the client side 
can fail randomly.


Note that the *parent* zone indicates that a particular child zone is secured 
with DNSSEC by sending a DS (delegation signer) record to the client. 
Validation will fail if the client receives a DS record from the parent but no 
signatures are present in the data from the 'child' zone itself.


This prevents downgrade (DNSSEC -> plain DNS) attacks.

As a result, we have only two options: one DNS server with DNSSEC enabled, or 
an arbitrary number of DNS servers without DNSSEC, which is very unfortunate.



as soon as we have that working we should also have clearer plans about
how we manage keys in LDAP (or elsewhere).


--
Petr^2 Spacek



Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-07-19 Thread Simo Sorce
I will reply to the rest of the message later if necessary, still
digesting some of your answers, but I wanted to address the following
first.

On Fri, 2013-07-19 at 18:29 +0200, Petr Spacek wrote:
 
 The most important question at the moment is "What can we postpone? How 
 fragile can it be for shipping it as part of Fedora 20? Could we declare 
 DNSSEC support as technology preview / 'don't use it for anything
 serious'?"

Until we figure out proper management in LDAP we will be a bit stuck, esp.
if we want to consider using the 'something' that stores keys instead of
storing them straight in LDAP.

So maybe we can start with allowing just one server to do DNSSEC and
source keys from files for now ?

as soon as we have that working we should also have clearer plans about
how we manage keys in LDAP (or elsewhere).

Simo.
 
-- 
Simo Sorce * Red Hat, Inc * New York



Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-07-17 Thread Simo Sorce
On Tue, 2013-07-16 at 17:15 +0200, Petr Spacek wrote:
 On 15.7.2013 21:07, Simo Sorce wrote:
  Is there any place I can read about the format and requirements of these
  files ?
 There is no single format, because it is algorithm-dependent. See below.
 AFAIK it is not something supported by OpenSSL, but I could be wrong.

Thanks for attaching examples, it helps.

  KSK has to be rolled over manually because it requires changes in parent 
  zone.
  (It could be automated for sub-zones if their parent zone is also managed 
  by
  the same IPA server.)
 
  Is there any provision for using DNSSEC with private DNS deployments ?
 Yes, there is. DNSSEC supports 'Islands of Security' [Terminology]: DNS 
 resolvers can be configured with 'trust anchors' explicitly. E.g. 'trust 
 domain example.com only if it is signed by /this/ key, use the root key for 
 the rest of the Internet', etc.
 
 [Terminology] http://tools.ietf.org/html/rfc4033#section-2

This means clients would have to be configured to explicitly trust a
specific key for a zone, right? How hard would it be for us to configure
IPA clients this way, assuming by then we have a DNSSEC-aware resolver we
can configure on them?
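
For illustration, this is roughly what an explicitly configured trust anchor
looks like on a client, here via the python-unbound bindings (the DNSKEY data
is a placeholder, not a real key):

import unbound

ctx = unbound.ub_ctx()
# Trust this zone only if it validates against /this/ key (island of security).
ctx.add_ta("example.com. IN DNSKEY 257 3 8 AwEAAc...placeholder...")

status, result = ctx.resolve("www.example.com", unbound.RR_TYPE_A,
                             unbound.RR_CLASS_IN)
if status == 0:
    print("validated:", result.secure, "bogus:", result.bogus)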

  Or is this going to make sense only for IPA deployments that have valid
  delegation from the public DNS system ?
 
  Hmmm I guess that as long as the KSK in the 'parent' zone is imported
  properly a private deployment of corp.myzone.com using the KSK of
  myzone.com will work just fine even if corp.myzone.com is not actually
  delegated but is a private DNS tree ?
  Or is that incorrect ?
 
 AFAIK there *has to be* delegation via a DS record [Delegation Signer, DS] from 
 the parent, but IMHO it could work if only the public key for internal zones 
 is published (without any delegation to internal name servers etc.). I didn't 
 try it, so 'here be dragons'.

Are there test zones/keys that can be used to experiment ?

[..]

 Initial key generation is closely related to the question of how we should
 handle (periodic) key regeneration (e.g. generate a new ZSK each month).
 
  We only really need to generate (or import) the KSK of the parent zone,
 It seems that there is a slight misunderstanding. The KSK is the 'master key' 
 for a particular zone. This master key (KSK) signs other keys (ZSKs) and the 
 data are signed by the ZSKs.

Sorry, I expressed myself badly: I mean we only need to generate one KSK
at install time and make it available to the admin to be signed by the
upper zone admins. But then all other keys including the ZSKs can be
completely managed within IPA w/o explicit admin work if we have the
right tooling.

[..]

 No, the problem is that we need to define 'who' generates the keys.
 Remember FreeIPA is a multi-master system; we cannot have potentially
 conflicting cron jobs running on multiple servers.
 Right. It sounds like the CRL generation problem. Should we do the same for 
 DNSSEC key regeneration? I.e. select one super-master and let it handle key 
 regeneration? Or should we find some more robust solution? I'm not against 
 any of these possibilities :-)

Falling back to SPOF should be the last resort or a temporary step
during development.
I would like to avoid SPOF architectures if at all possible.
We could devise a way to automatically 'elect' a master, but have all
other DNS servers also monitor that keys are regenerated and made
available in the expected time frame and, if not, have one of the other
DNS servers try to assume the leader role.

I have some ideas here using priorities etc., but I need to let them brew
in my mind a little bit more :)
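
One way such monitoring could look, as a hedged Python sketch (the thresholds
and the rank-based back-off are assumptions, just to make the idea concrete):

def should_assume_leadership(my_id, replica_ids, key_age_seconds,
                             max_age=86400, grace=3600):
    """Decide whether this server should take over key maintenance.

    All servers watch the age of the newest key in LDAP. If the elected
    leader (lowest replica ID here) misses its deadline, each further
    server waits one more grace period before stepping in, so at most
    one server takes over at a time.
    """
    ordered = sorted(replica_ids)
    my_rank = ordered.index(my_id)  # 0 = current leader
    return key_age_seconds > max_age + my_rank * grace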

[..]

 For these reasons I think that we can define a new public key attribute in 
 the same way as the private key attribute:
  same way as private key attribute:
  attributetypes: ( x.x.x.x.x NAME 'idnsSecPublicKey' SYNTAX
  1.3.6.1.4.1.1466.115.121.1.40 SINGLE-VALUE )
 
  The resulting object class could be:
  objectClasses: ( x.x.x.x.x NAME 'idnsSecKeyPair' DESC 'DNSSEC key pair' SUP
  top STRUCTURAL MUST ( cn $ idnsSecPrivateKey $ idnsSecPublicKey ) )
 
  Will bind read these attributes ?
  Or will we have to dump these values into files via bind-dyndb-ldap for
  bind9 to read them back ?
 AFAIK they have to be in files: the private key in one file and the public key 
 in the other. I can't find any support for reading private keys from buffers.

Ok, so to summarize: we basically are going to load the private key file
into idnsSecPrivateKey and the public key file into idnsSecPublicKey as
blobs and then have bind-dyndb-ldap fetch them and save them into files
that bind can access.
This means bind-dyndb-ldap will need to grow the ability to also clean up
and synchronize the files over time. So there will need to be hooks to
regularly check all needed files are in place and obsolete ones are
deleted. Maybe we can grow a companion python helper to do this, as it
is a relatively simple task that is not performance-critical and will
be much easier to write in a scripting language than in C. But I am not
opposed to an in-daemon solution 
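
A sketch of what such a python helper might look like, using python-ldap (the
schema names follow the idnsSecKeyPair proposal above; the base DN and file
naming are illustrative):

import os
import ldap

def dump_zone_keys(uri, base_dn, key_dir):
    """Fetch key-pair entries from LDAP and write them as files for BIND,
    removing files whose keys no longer exist in LDAP."""
    conn = ldap.initialize(uri)
    conn.simple_bind_s()  # a real deployment would use SASL/GSSAPI
    entries = conn.search_s(base_dn, ldap.SCOPE_SUBTREE,
                            "(objectClass=idnsSecKeyPair)",
                            ["cn", "idnsSecPublicKey", "idnsSecPrivateKey"])
    wanted = set()
    for dn, attrs in entries:
        name = attrs["cn"][0].decode()
        for attr, suffix in (("idnsSecPublicKey", ".key"),
                             ("idnsSecPrivateKey", ".private")):
            path = os.path.join(key_dir, name + suffix)
            with open(path, "wb") as f:
                f.write(attrs[attr][0])
            wanted.add(path)
    for fname in os.listdir(key_dir):      # clean up obsolete key files
        path = os.path.join(key_dir, fname)
        if path not in wanted:
            os.remove(path)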

Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-07-16 Thread Petr Spacek

On 15.7.2013 21:07, Simo Sorce wrote:

On Mon, 2013-07-15 at 16:58 +0200, Petr Spacek wrote:

The remaining part is mostly about key management.

The following text mentions 'DNSSEC keys' many times, so I tried to summarize
how keys are used in DNSSEC. Feel free to skip it.

== DNSSEC theory ==

Each zone has *at least* two key pairs. They are called Key Signing Key (KSK,
the first key pair) and Zone Signing Key (ZSK, the second key pair).

- The *parent* zone contains a copy of the public part of the KSK.
- The zone itself contains the public part of the ZSK (and KSK).
- The client uses the public part of the KSK (obtained from the secure parent
zone) for ZSK verification.
- The ZSK is used for signing the real data in the zone (i.e. generating RRSIG
records) and for verification on the client side.

Each key and signature contains a key-id, so one zone can be signed by multiple
KSKs and ZSKs at the same time. This solves the key roll-over problem.

Each key contains this set of timestamps:
Created, Revoke - self-descriptive :-)
Publish - the public part of the key will be visible in the zone after this time
Active - new signatures with this key can be generated after this time
Inactive - new signatures with this key cannot be generated after this time
Delete - the public part of the key will be deleted from the zone after this time

NIST says [1] that the KSK should be changed roughly every 1-3 years (it
requires a change in the parent zone) and the ZSK roughly every 1-3 months.

The recommendation says [1] that a zone should have two ZSKs: one Active (used
for signature generation) and a second one only Published (ready for roll-over
in case of emergency/when the first key pair expires). This mitigates problems
with caches and stale key material during roll-over.

BIND 9 can do signature maintenance/ZSK key roll-over automatically. It needs
only keys stored in files (with proper timestamps), and all signatures will be
generated & removed when the right time passes.
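
For example, a successor ZSK with the timing metadata BIND 9 honours can be
created with dnssec-keygen's -P/-A/-I/-D options; a hedged Python sketch of a
pre-publish schedule (the intervals are illustrative):

import subprocess
from datetime import datetime, timedelta

def pre_publish_zsk(zone, lifetime_days=90, prepublish_days=30):
    """Create a ZSK that is Published a month before it becomes Active."""
    fmt = "%Y%m%d%H%M%S"
    now = datetime.utcnow()
    activate = now + timedelta(days=prepublish_days)
    inactive = activate + timedelta(days=lifetime_days)
    subprocess.run(
        ["dnssec-keygen", "-a", "RSASHA256", "-b", "1024",
         "-P", now.strftime(fmt),       # Publish: visible in the zone
         "-A", activate.strftime(fmt),  # Active: starts generating RRSIGs
         "-I", inactive.strftime(fmt),  # Inactive: stops signing
         "-D", (inactive + timedelta(days=prepublish_days)).strftime(fmt),
         zone],
        check=True)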


Is there any place I can read about the format and requirements of these
files ?
There is no single format, because it is algorithm-dependent. See below. AFAIK 
it is not something supported by OpenSSL, but I could be wrong.



The KSK has to be rolled over manually because it requires changes in the
parent zone. (It could be automated for sub-zones if their parent zone is also
managed by the same IPA server.)


Is there any provision for using DNSSEC with private DNS deployments ?
Yes, there is. DNSSEC supports 'Islands of Security' [Terminology]: DNS resolvers 
can be configured with 'trust anchors' explicitly. E.g. 'trust domain 
example.com only if it is signed by /this/ key, use the root key for the rest 
of the Internet', etc.


[Terminology] http://tools.ietf.org/html/rfc4033#section-2


Or is this going to make sense only for IPA deployments that have valid
delegation from the public DNS system ?

Hmmm I guess that as long as the KSK in the 'parent' zone is imported
properly a private deployment of corp.myzone.com using the KSK of
myzone.com will work just fine even if corp.myzone.com is not actually
delegated but is a private DNS tree ?
Or is that incorrect ?


AFAIK there *has to be* delegation via a DS record [Delegation Signer, DS] from 
the parent, but IMHO it could work if only the public key for internal zones 
is published (without any delegation to internal name servers etc.). I didn't 
try it, so 'here be dragons'.


Normally it should work this way:
. (the root zone is signed with a well-known key)
  - the root contains a DS record for com.
  - the DS record contains a hash of the public key used in the com. domain

com.
  - the DNSKEY record contains the whole public key for the domain com.
  - the key is accepted only if a hash of the key matches the DS record from
the parent (.)
  - the DS record for example.com. is stored in com.

example.com.
  - the DNSKEY record contains the whole public key for the domain example.com.
  - the key is accepted only if a hash of the key matches the DS record from
the parent (com.)


etc.

The client walks from the root zone down to the desired record and verifies 
signatures with keys obtained from DNSKEY records. The DNSKEY record itself has 
to be authenticated by the DS -> DNSKEY -> DS -> DNSKEY -> ... -> DNSKEY chain 
(from the root down to the zone).
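
One step of this chain walk can be illustrated with dnspython (2.x): the
parent's DS must match a hash of the KSK served by the child zone itself (the
zone name here is just an example):

import dns.dnssec
import dns.name
import dns.resolver

def check_delegation(zone):
    """Verify that the parent's DS matches a DNSKEY of the child zone."""
    name = dns.name.from_text(zone)
    ds_set = dns.resolver.resolve(name, "DS")       # served by the parent
    dnskeys = dns.resolver.resolve(name, "DNSKEY")  # served by the child
    for key in dnskeys:
        if not key.flags & 0x0001:   # SEP bit: flags=257 marks the KSK
            continue
        for ds in ds_set:
            algo = {1: "SHA1", 2: "SHA256"}.get(ds.digest_type)
            if algo and dns.dnssec.make_ds(name, key, algo) == ds:
                return True
    return False

print(check_delegation("ietf.org."))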


Yes, publishing the DS record creates an information leak - about the existence 
of a sub-domain - but this information leaks anyway in e-mail headers etc. IMHO 
it is much better than messing with private Trust Anchors etc.


[DS] http://tools.ietf.org/html/rfc4034#section-5



== End of DNSSEC theory ==



1) How will we handle generation of key pairs? How will externally generated
keys be imported?

Personally, I would start with 'classical' command line utilities like
dnssec-keygen etc. and extend the 'ipa' tool to import generated keys. (Import =
read keys from text files and create the appropriate attributes in LDAP.)


If you mean to do this as part of the ipa-dns-install script or an
additional script (e.g. ipa-dnssec-enable), I am fine. I am not ok with
asking admins to manually run these commands.

Okay. I meant something like extension of 'ipa' 

Re: [Freeipa-devel] DNSSEC support design considerations: key material handling

2013-07-15 Thread Petr Spacek

Hello,

the first part of this message quickly concludes the discussion about the 
database part of the DNSSEC support, and then key material handling is discussed.


I'm sorry for the wall of text.

On 27.6.2013 18:43, Simo Sorce wrote:

 * How to get a sorted list of entries from LDAP? Use LDAP
   server-side sorting? Do we have the necessary indices?
 
 We can do client-side sorting as well I guess, I do not have a strong
 opinion here. The main reason why you need ordering is to detect deleted
 records, right ?

Exactly. I realized that server-side sorting doesn't make sense because we
plan to use syncrepl, so there is nothing to sort - only the flow of
incremental updates.

Syncrepl includes notice of deletions too, right ?
Yes. The client receives a delete notification with the entryUUID, so we can 
unambiguously identify the deleted entry.


I wrote an example LDAP client and it works (against OpenLDAP :-).
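
For reference, the shape of such a client with python-ldap's syncrepl support
(modelled on the python-ldap demo; the DN and the handling are illustrative):

import ldap
from ldap.ldapobject import ReconnectLDAPObject
from ldap.syncrepl import SyncreplConsumer

class DnsTreeConsumer(ReconnectLDAPObject, SyncreplConsumer):
    """Receives adds/modifies plus entryUUID-keyed deletions."""
    cookie = None

    def syncrepl_get_cookie(self):
        return self.cookie

    def syncrepl_set_cookie(self, cookie):
        self.cookie = cookie

    def syncrepl_entry(self, dn, attrs, uuid):
        print("add/mod", uuid, dn)   # update the in-memory zone here

    def syncrepl_delete(self, uuids):
        print("delete", uuids)       # entryUUIDs identify deleted entries

conn = DnsTreeConsumer("ldap://localhost")
conn.simple_bind_s()
conn.syncrepl_search("cn=dns,dc=example,dc=com", ldap.SCOPE_SUBTREE,
                     mode="refreshAndPersist")
while conn.syncrepl_poll(all=1, timeout=30):
    pass  # the callbacks above fire as changes stream in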


  (Filesystem) cache maintenance

  Questions: How often should we save the cache from operating
 memory to disk?

A prerequisite to be able to evaluate this question: How expensive is it
to save the cache ?

My test zone contains 65535  records, 255 A records, 1 SOA + 1 NS record.

Benchmark results:
zone dump    0.5 s (to text file)
zone load    1 s (from text file)
zone delete  9 s (LOL. This is caused by implementation details of RBTDB.)

LDAP search on the whole sub-tree: 15 s


Ouch, this looks very slow, missing indexes ?
I don't see any 'notes=U' in the access log. Also, my OpenLDAP instance with 
the same DNS data can do the same search in under 2 seconds.



Is this just the search? Or is it search + zone load ?

Just the search.


Load time for bind-dyndb-ldap 3.x:  120 s


So, a reload from scratch can take many 10s of seconds on big zones; did
this test include DNSSEC signing ? Or would we need to add that on top ?
The time is for a plain load. The current code is horribly inefficient and 
generates one extra LDAP search for each update. This madness will be eliminated 
by syncrepl, so the plain load time should be cut to a much smaller value. We 
will see.


The other problem is that the current code serializes a lot of work. This will 
also be mitigated to a certain level (not completely, for now).



Originally, I planned to write a script which would compare data in LDAP with the
zone file on disk. This script could be used for debugging & automated
testing, so we can assess if the code behaves correctly and decide if we want
to implement automatic re-synchronization when necessary.


Wouldn't this script be subject to races depending at what time it is
accessing either LDAP or the file ?

Yes, it would. The script was intended for 'lab use':
1. Run the DNS server.
2. Do a big number of dynamic updates in a short time.
3. Shut down the DNS and LDAP servers.
4. Compare data in the DNS database with data in LDAP.

This could tell us how often and how many inconsistencies occur. After that we 
can make up some re-synchronization intervals etc.



The main issue here is that it is hard to know when doing a full re-sync
is necessary. And because it is expensive I am wary of doing it
automatically too often.

However, perhaps a timed event so it is done once a day is not a bad
idea.

I agree.


I think that we have sorted out the necessary changes in the storage/database part 
of the DNSSEC integration.



The remaining part is mostly about key management.

The following text mentions 'DNSSEC keys' many times, so I tried to summarize how 
keys are used in DNSSEC. Feel free to skip it.


== DNSSEC theory ==

Each zone has *at least* two key pairs. They are called Key Signing Key (KSK, 
the first key pair) and Zone Signing Key (ZSK, the second key pair).


- The *parent* zone contains a hash of the public part of the KSK (the DS record).
- The zone itself contains the public parts of the ZSK (and the KSK).
- The client uses the public part of the KSK (authenticated via the secure parent 
zone) for ZSK verification.
- The ZSK is used for signing the real data in the zone (i.e. generating RRSIG 
records) and for verification on the client side.


Each key and signature contains a key-id, so one zone can be signed by multiple 
KSKs and ZSKs at the same time. This solves the key rollover problem.
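For illustration, a made-up RRSIG record; the key tag field (12345 here) 
identifies which DNSKEY produced the signature:

example.com. 3600 IN RRSIG A 8 2 3600 (
        20130901000000 20130801000000 12345 example.com.
        <base64-encoded signature> )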


Each key contains this set of timestamps:
Created, Revoke - self-descriptive :-)
Publish - public part of the key will be visible in zone after this time
Active - new signatures with this key can be generated after this time
Inactive - new signatures with this key cannot be generated after this time
Delete - public part of the key will be deleted from the zone after this time
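For illustration, these timestamps can be set on BIND key files with 
dnssec-settime (the key file name and offsets below are made up):

Command: dnssec-settime -P now -A now -I +90d -D +120d Kexample.com.+008+12345

After this, the key is published and active immediately, stops producing new 
signatures after 90 days and is removed from the zone after 120 days.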

NIST says [1] that the KSK should be changed ~ every 1-3 years (it requires a change 
in the parent zone) and the ZSK should be changed ~ every 1-3 months.


The recommendation says [1] that a zone should have two ZSKs: one Active (used 
for signature generation) and a second only Published (ready for rollover in 
case of emergency/when the first key pair expires). This mitigates problems 
with caches and stale key material during rollover.


BIND 9 

Re: [Freeipa-devel] DNSSEC support design considerations: migration to RBTDB

2013-06-27 Thread Petr Spacek

On 21.6.2013 16:19, Simo Sorce wrote:

On Thu, 2013-06-20 at 14:30 +0200, Petr Spacek wrote:

On 23.5.2013 16:32, Simo Sorce wrote:

On Thu, 2013-05-23 at 14:35 +0200, Petr Spacek wrote:

It looks that we agree on nearly all points (I apologize if I overlooked
something). I will prepare a design document for transition to RBTDB
and then another design document for DNSSEC implementation.


The current version of the design is available at:
https://fedorahosted.org/bind-dyndb-ldap/wiki/BIND9/Design/RBTDB


Great write-up, thanks.


There are several questions inside (search for text Question, it should find
all of them). I would like to get your opinion about the problems.

Note that 389 DS team decided to implement RFC 4533 (syncrepl), so persistent
search is definitely obsolete and we can do synchronization in some clever way.



Answering inline here after quoting the questions for the doc:

  Periodical re-synchronization
 
  Questions

   * Do we still need periodical re-synchronization if 389 DS
 team implements RFC 4533 (syncrepl)? It wasn't
 considered in the initial design.

We probably do. We have to be especially careful of the case when a
replica is re-initialized. We should either automatically detect that
this is happening or change ipa-replica-manage to kick named somehow.

We also need a tool or maybe a special attribute in LDAP that is
monitored so that we can tell  bind-dyndb-ldap to do a full rebuild of
the cache on demand. This way admins can force a rebuild if they end up
noticing something wrong.
Is it acceptable to let the admin delete files & restart named manually? I 
don't want to overcomplicate things at the beginning ...



   * What about dynamic updates during re-synchronization?

Should we return a temporary error ? Or maybe just queue up the change
and apply it right after the resync operation has finished ?
Unfortunately, the only reasonable error code is SERVFAIL. It is completely up 
to the client whether it tries to do the update again or not.


I personally don't like queuing of updates because it confuses clients: the update 
is accepted by the server but the client can still see an old value (for a limited 
period of time).



   * How to get sorted list of entries from LDAP? Use LDAP
 server-side sorting? Do we have necessary indices?

We can do client side sorting as well I guess, I do not have a strong
opinion here. The main reason why you need ordering is to detect delete
records right ?
Exactly. I realized that server-side sorting doesn't make sense because we 
plan to use syncrepl, so there is nothing to sort - only the flow of 
incremental updates.


 Is there a way to mark rbtdb records as updated instead

(with a generation number) and then do a second pass on the rbtdb tree
and remove any record that was not updated with the generation number ?
There is no 'generation' number, but we can extend the auxiliary database 
(i.e. the database with the UUID->DNS name mapping) with a generation number. 
We will get the UUID along with each update from LDAP, so we can simply use 
the UUID for a database lookup.


Then we can go through the UUID database and delete all records which don't 
have generation == expected_value.



This would also allow us to keep accepting dynamic updates by simply
marking records as generation+1 so that the resync will not overwrite
records that are updated during the resync phase.
I agree. The simplest variant can solve the basic case where 1 update was 
received during re-synchronization.


Proposed (simple) solution:
1) At the beginning of re-synchronization, set curr_gen = prev_gen+1
2) For each entry in LDAP do (via syncrepl):
- Only if entry['gen'] < curr_gen:
--  Overwrite data in local RBTDB with data from LDAP
--  Overwrite entry['gen'] = curr_gen
- Else: Do nothing

In parallel:
1) Update request received from a client
2) Write new data to LDAP (syncrepl should cope with this)
3) Read UUID from LDAP (via RFC 4527 controls)
4) Write curr_gen to UUID database
5) Write data to local RBTDB
6) Reply 'update accepted' to the client

A crash at any time should not hurt: curr_gen will be incremented on restart and 
re-synchronization will be restarted.
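A minimal, self-contained sketch of that logic follows - illustrative C only, 
not the actual bind-dyndb-ldap code; the toy array stands in for the auxiliary 
UUID database and printf() stands in for the real LDAP/RBTDB operations:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_ENTRIES 1024

struct aux_entry {
    char     uuid[40];          /* entryUUID delivered by syncrepl       */
    uint64_t gen;               /* generation of the last write          */
    int      used;
};

static struct aux_entry aux[MAX_ENTRIES];
static uint64_t curr_gen;       /* incremented when a re-sync starts     */

static struct aux_entry *aux_find(const char *uuid)
{
    int free_slot = -1;
    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (aux[i].used && strcmp(aux[i].uuid, uuid) == 0)
            return &aux[i];
        if (!aux[i].used && free_slot < 0)
            free_slot = i;
    }
    if (free_slot < 0)
        return NULL;            /* toy database is full                  */
    aux[free_slot].used = 1;
    aux[free_slot].gen = 0;
    snprintf(aux[free_slot].uuid, sizeof(aux[free_slot].uuid), "%s", uuid);
    return &aux[free_slot];
}

/* Step 2) of the proposal: one entry arriving via syncrepl.             */
static void resync_entry(const char *uuid)
{
    struct aux_entry *e = aux_find(uuid);
    if (e == NULL || e->gen >= curr_gen)
        return;                 /* already touched during this re-sync   */
    printf("overwrite %s in RBTDB with data from LDAP\n", uuid);
    e->gen = curr_gen;
}

/* The parallel path: a dynamic update accepted from a client.           */
static void dynamic_update(const char *uuid)
{
    struct aux_entry *e = aux_find(uuid);
    if (e == NULL)
        return;
    printf("write %s to LDAP and to the local RBTDB\n", uuid);
    e->gen = curr_gen;          /* the re-sync will not overwrite it     */
}

/* After syncrepl finishes: delete entries not seen in this generation.  */
static void resync_sweep(void)
{
    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (aux[i].used && aux[i].gen < curr_gen) {
            printf("delete %s from RBTDB\n", aux[i].uuid);
            aux[i].used = 0;
        }
    }
}

int main(void)
{
    curr_gen++;                 /* 1) re-synchronization starts          */
    resync_entry("uuid-1");     /* overwritten from LDAP                 */
    dynamic_update("uuid-2");   /* arrives in the middle of the re-sync  */
    resync_entry("uuid-2");     /* ignored: gen is already == curr_gen   */
    resync_sweep();             /* drops entries deleted in LDAP         */
    return 0;
}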


The worst case is that the update will be stored in LDAP but the client will not 
get a reply because of the crash (i.e. the client times out).



There is a drawback: two or more successive updates to a single entry can 
create a race condition, as described at 
https://fedorahosted.org/bind-dyndb-ldap/wiki/BIND9/Design/RBTDB#Raceconditions1 .


The reason is that the generation number is not incremented each time, but only 
overwritten with the current global value (i.e. old + 1).



I don't like the other option, incrementing the generation number. It could 
create nasty corner cases during re-synchronization and when handling updates made 
directly in LDAP/by another DNS server.


It is not nice, but I think that we can live with it. The important fact is 
that 

Re: [Freeipa-devel] DNSSEC support design considerations: migration to RBTDB

2013-06-27 Thread Simo Sorce
On Thu, 2013-06-27 at 18:23 +0200, Petr Spacek wrote:
 On 21.6.2013 16:19, Simo Sorce wrote:
  On Thu, 2013-06-20 at 14:30 +0200, Petr Spacek wrote:
  On 23.5.2013 16:32, Simo Sorce wrote:
  On Thu, 2013-05-23 at 14:35 +0200, Petr Spacek wrote:
  It looks that we agree on nearly all points (I apologize if I overlooked
  something). I will prepare a design document for transition to RBTDB
  and then another design document for DNSSEC implementation.
 
  The current version of the design is available at:
  https://fedorahosted.org/bind-dyndb-ldap/wiki/BIND9/Design/RBTDB
 
  Great write-up, thanks.
 
  There are several questions inside (search for text Question, it should 
  find
  all of them). I would like to get your opinion about the problems.
 
  Note that 389 DS team decided to implement RFC 4533 (syncrepl), so 
  persistent
  search is definitely obsolete and we can do synchronization in some clever 
  way.
 
 
  Answering inline here after quoting the questions for the doc:
 
Periodical re-synchronization
   
Questions
 
 * Do we still need periodical re-synchronization if 389 DS
   team implements RFC 4533 (syncrepl)? It wasn't
   considered in the initial design.
 
  We probably do. We have to be especially careful of the case when a
  replica is re-initialized. We should either automatically detect that
  this is happening or change ipa-replica-manage to kick named somehow.
 
  We also need a tool or maybe a special attribute in LDAP that is
  monitored so that we can tell  bind-dyndb-ldap to do a full rebuild of
  the cache on demand. This way admins can force a rebuild if they end up
  noticing something wrong.
 Is it acceptable to let the admin delete files & restart named manually? I 
 don't want to overcomplicate things at the beginning ...

Sure, probably fine, we can have a tool that simply just does that for
starters, and later on we can make it do more complex things if needed.

 * What about dynamic updates during re-synchronization?
 
  Should we return a temporary error ? Or maybe just queue up the change
  and apply it right after the resync operation has finished ?
 Unfortunately, the only reasonable error code is SERVFAIL. It is completely up 
 to the client whether it tries to do the update again or not.
 
 I personally don't like queuing of updates because it confuses clients: the 
 update is accepted by the server but the client can still see an old value (for 
 a limited period of time).

Another option is to mark fields so that they are not updated with older
values, and just allow the thing to succeed.

 * How to get sorted list of entries from LDAP? Use LDAP
   server-side sorting? Do we have necessary indices?
 
  We can do client side sorting as well I guess, I do not have a strong
  opinion here. The main reason why you need ordering is to detect delete
  records right ?
 Exactly. I realized that server-side sorting doesn't make sense because we 
 plan to use syncrepl, so there is nothing to sort - only the flow of 
 incremental updates.

Syncrepl includes notice of deletions too, right ?

   Is there a way to mark rbtdb records as updated instead
  (with a generation number) and then do a second pass on the rbtdb tree
  and remove any record that was not updated with the generation number ?
 There is no 'generation' number, but we can extend the auxiliary database 
 (i.e. the database with the UUID->DNS name mapping) with a generation number. 
 We will get the UUID along with each update from LDAP, so we can simply use 
 the UUID for a database lookup.
 
 Then we can go through the UUID database and delete all records which don't 
 have generation == expected_value.

Yes, something like this should work.

  This would also allow us to keep accepting dynamic updates by simply
  marking records as generation+1 so that the resync will not overwrite
  records that are updated during the resync phase.
 I agree. The simplest variant can solve the basic case where 1 update was 
 received during re-synchronization.
 
 Proposed (simple) solution:
 1) At the beginning of re-synchronization, set curr_gen = prev_gen+1
 2) For each entry in LDAP do (via syncrepl):
 - Only if entry['gen'] < curr_gen:
 --  Overwrite data in local RBTDB with data from LDAP
 --  Overwrite entry['gen'] = curr_gen
 - Else: Do nothing
 
 In parallel:
 1) Update request received from a client
 2) Write new data to LDAP (syncrepl should cope with this)
 3) Read UUID from LDAP (via RFC 4527 controls)
 4) Write curr_gen to UUID database
 5) Write data to local RBTDB
 6) Reply 'update accepted' to the client
 
 A crash at any time should not hurt: curr_gen will be incremented on restart 
 and re-synchronization will be restarted.

Yep.

 The worst case is that update will be stored in LDAP but client will not get 
 reply because of crash (i.e. client times out).

Not a big deal. This can always happen for clients, as the 

Re: [Freeipa-devel] DNSSEC support design considerations: migration to RBTDB

2013-06-21 Thread Simo Sorce
On Thu, 2013-06-20 at 14:30 +0200, Petr Spacek wrote:
 Hello,
 
 On 23.5.2013 16:32, Simo Sorce wrote:
  On Thu, 2013-05-23 at 14:35 +0200, Petr Spacek wrote:
  It looks that we agree on nearly all points (I apologize if I overlooked
  something). I will prepare a design document for transition to RBTDB
  and then another design document for DNSSEC implementation.
 
 The current version of the design is available at:
 https://fedorahosted.org/bind-dyndb-ldap/wiki/BIND9/Design/RBTDB

Great write-up, thanks.

 There are several questions inside (search for text Question, it should 
 find 
 all of them). I would like to get your opinion about the problems.
 
 Note that 389 DS team decided to implement RFC 4533 (syncrepl), so persistent 
 search is definitely obsolete and we can do synchronization in some clever 
 way.


Answering inline here after quoting the questions for the doc:

 Periodical re-synchronization

 Questions

  * Do we still need periodical re-synchronization if 389 DS
team implements RFC 4533 (syncrepl)? It wasn't
considered in the initial design.

We probably do. We have to be especially careful of the case when a
replica is re-initialized. We should either automatically detect that
this is happening or change ipa-replica-manage to kick named somehow.

We also need a tool or maybe a special attribute in LDAP that is
monitored so that we can tell  bind-dyndb-ldap to do a full rebuild of
the cache on demand. This way admins can force a rebuild if they end up
noticing something wrong.

  * What about dynamic updates during re-synchronization?

Should we return a temporary error ? Or maybe just queue up the change
and apply it right after the resync operation has finished ?

  * How to get sorted list of entries from LDAP? Use LDAP
server-side sorting? Do we have necessary indices?

We can do client side sorting as well I guess, I do not have a strong
opinion here. The main reason why you need ordering is to detect delete
records right ? Is there a way to mark rbtdb records as updated instead
(with a generation number) and then do a second pass on the rbtdb tree
and remove any record that was not updated with the generation number ?
This would also allow us to keep accepting dynamic updates by simply
marking records as generation+1 so that the resync will not overwrite
records that are updated during the resync phase.


 (Filesystem) cache maintenance

 Questions: How often should we save the cache from operating
memory to disk?

Prerequisite to be able to evaluate this question. How expensive is it
to save the cache ? Is DNS responsive during the save or does the
operation block updates or other functionality ?

  * On shutdown only?

NACK, you are left with very stale data on crashes.

  * On start-up (after initial synchronization) and on
shutdown?

It makes sense to dump right after a big synchronization if it doesn't
add substantial operational issues. Otherwise maybe a short interval
after synchronization.

  * Periodically? How often? At the end of periodical
re-synchronization?

Periodically is probably a good idea. If I understand it correctly, it
means that it will make it possible to substantially reduce the load on
startup, as we will have less data to fetch from a syncrepl request.

  * Each N updates?

I prefer a combination of each N updates but with time limits to avoid
doing it too often.
I.e. something like every 1000 changes but not more often than every 30
minutes and not less often than every 8 hours. (Numbers completely made up and
need to be tuned based on the answer about the prerequisites question
above).
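That rule could look like this in C (a sketch only; the constants and the helper 
name are made up):

#include <stdbool.h>
#include <time.h>

#define DUMP_CHANGES      1000          /* every 1000 changes ...             */
#define DUMP_MIN_INTERVAL (30 * 60)     /* ... but not more often than 30 min */
#define DUMP_MAX_INTERVAL (8 * 60 * 60) /* ... and at least every 8 hours     */

static bool should_dump(unsigned changes_since_dump, time_t last_dump, time_t now)
{
    double elapsed = difftime(now, last_dump);

    if (elapsed >= DUMP_MAX_INTERVAL)
        return true;        /* too long since the last dump: force one        */
    if (elapsed < DUMP_MIN_INTERVAL)
        return false;       /* rate limit, even if many changes piled up      */
    return changes_since_dump >= DUMP_CHANGES;
}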

  * If N % of the database was changed? (pspacek's favorite)

The problem with using % of the database is that for very small zones you risk
getting stuff saved too often, as changing a few records quickly makes
the % big compared to the zone size. For example a zone with 50 records
has a 10% change after just 5 records are changed. Conversely a big zone
requires a huge amount of changes before the % of changes builds up,
leading potentially to dumping the database too infrequently. Example, a
zone with 10000 records means you have to get 1000 changes before you
come to the 10% mark. If dyndns updates are disabled this means the zone
may never get saved for weeks or months.
A small zone will also syncrepl quickly, so it would be useless to save
it often, while for a big zone it is better to be up to date on disk so the
syncrepl operation will cost less on startup.

Finally, N % is also hard to compute. What do you count into it ?
Only the total number of records changed ? Or do you also factor in whether the
same record is changed multiple times ?
Consider fringe cases: a zone with 1000 entries where only 1 entry is
changed 2000 times in a short period 

Re: [Freeipa-devel] DNSSEC support design considerations: migration to RBTDB

2013-06-20 Thread Petr Spacek

Hello,

On 23.5.2013 16:32, Simo Sorce wrote:

On Thu, 2013-05-23 at 14:35 +0200, Petr Spacek wrote:

It looks that we agree on nearly all points (I apologize if I overlooked
something). I will prepare a design document for transition to RBTDB
and then another design document for DNSSEC implementation.


The current version of the design is available at:
https://fedorahosted.org/bind-dyndb-ldap/wiki/BIND9/Design/RBTDB

There are several questions inside (search for text Question, it should find 
all of them). I would like to get your opinion about the problems.


Note that 389 DS team decided to implement RFC 4533 (syncrepl), so persistent 
search is definitely obsolete and we can do synchronization in some clever way.


--
Petr^2 Spacek



Re: [Freeipa-devel] DNSSEC support design considerations

2013-05-23 Thread Petr Spacek

On 22.5.2013 21:58, Simo Sorce wrote:

On Wed, 2013-05-22 at 17:01 +0200, Petr Spacek wrote:

Wow, it is pretty slow.

Yeah this is what I expected, crypto is not really fast.

[...]

The simplest way how to mitigate problem with slow start-up is:
1) Store signed version of the zone on the server's file system.
2) Load signed version from disk during start up.
3) In the background, do full zone reload+resign.
4) Switch old and new zones when it is done.


Maybe instead of 3/4 we can do something that requires less computation.
We can take the list of records in the zone, and load the list of
records from LDAP.

Here we set also the persistent search but we lock it so any update is
queued until we are done with the main resync task.
(We can temporarily also refuse DNS Updates I guess)

We cross check to find which records have been changed, which have been
removed, and which have been added.
Discard all the records that are unchanged (I assume the vast majority)
and then proceed to delete/modify/add the difference.

This would save a large amount of computation at every startup, even if
in the background the main issue here is not just time, but the fact
that you pegged the CPU to 98% for so long.


It will consume some computing power during start up, but the implementation
should be really simple. (BIND naturally can save and load zones :-))


I do not think the above would be much more difficult, and could save
quite a lot of computing if done in the right order and within a bind
database transaction I guess.


It sounds doable, I agree. Naturally, I plan to start with 'naive'/'in-memory 
only' implementation and add optimizations when the 'naive' part works.



The idea is that _location is dynamic though, isn't it ?

[...]

This is how I understood the design. Is it correct? If it is, then the value
is static from the server's point of view. The 'dynamics' is a result of the
moving client, because the client is asking different servers for an answer.


Uhmm true, so we could simply store all the fields from within the
plugin so that RBTDB can sign them too.

I think my only concern is if the client can ever load some data from
one server and then some other data from another and find mismatching
signatures.
I didn't find any note about cross-checks between DNS servers. IMHO it doesn't 
matter, as long as the signature matches the public key in the zone.


I think that some degree of inconsistency is a natural part of DNS. Typically, 
all changes are propagated from the 'master'/'root of the tree topology' 
through multiple levels of slaves to the 'leaf slaves'.


Signatures contain timestamps and are periodically re-computed (on the order of 
weeks) and it takes some time to propagate new signatures through the whole tree.



What changes are going to be required in bind-dyndb-ldap to use RBTDB
from Bind ? Do we have interfaces already ? Or will it require
additional changes to the glue code we currently use to load our plugin
into bind ?


I have some proof-of-concept code. AFAIK no changes to public interfaces are
necessary.

There are 40 functions each database driver has to implement. Currently, we
have our own implementation for most of them and some of them are NULL because
they are required only for DNSSEC.

The typical change from our implementation to the native one looks like this:

static isc_result_t
find(dns_db_t *db, dns_name_t *name, dns_dbversion_t *version,
     dns_rdatatype_t type, unsigned int options, isc_stdtime_t now,
     dns_dbnode_t **nodep, dns_name_t *foundname, dns_rdataset_t *rdataset,
     dns_rdataset_t *sigrdataset)
{
-	[next 200 lines of our code]
+	return dns_db_find(ldapdb->rbtdb, name, version, type, options, now,
+	                   nodep, foundname, rdataset, sigrdataset);
}

Most of the work is about understanding how the native database works.


I assume rbtdb is now pretty stable and semantic changes are quite
unlikely ?


BIND (with our patches) has a defined interface for database backends. Both 
bind-dyndb-ldap and RBTDB implement this interface, so a semantic change is very 
unlikely.


The plan is to use the 'public' RBTDB interface to avoid any contact between 
bind-dyndb-ldap and the 'internal knobs' in RBTDB.



At the moment I'm able to load data from LDAP and push them to the native
database except the zone serial. It definitely needs more investigation, but
it seems doable.


Well if we store the data in the db permanently and synchronize at
startup I guess the serial problem vanishes completely ? (assuming we
use timestamp based serials)
Yes, basically we don't need to write it back to LDAP at all. The behaviour 
should be the same as with the current implementation.



<sarcasm>
Do you want to go back to the 'light side of the force'? So we should start with
designing some LDAP -> nsupdate gateway and use that for zone maintenance. It
doesn't solve adding/reconfiguring of zones at run-time, but it could be
handled by some stand-alone daemon with an abstraction layer at proper 

Re: [Freeipa-devel] DNSSEC support design considerations

2013-05-23 Thread Simo Sorce
On Thu, 2013-05-23 at 14:35 +0200, Petr Spacek wrote:
 
 It looks that we agree on nearly all points (I apologize if I overlooked 
 something). I will prepare a design document for transition to RBTDB
 and then another design document for DNSSEC implementation.
 
ACK

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York



Re: [Freeipa-devel] DNSSEC support design considerations

2013-05-22 Thread Petr Spacek

On 21.5.2013 20:30, Simo Sorce wrote:

On Tue, 2013-05-21 at 18:32 +0200, Petr Spacek wrote:

Hello,

I found that we (probably) misunderstood each other. The sky-high level
overview of the proposal follows:

NO CHANGE:
1) LDAP stores all *unsigned* data.

2)
NO CHANGE:
a) bind-dyndb-ldap *on each server* fetches all unsigned data from LDAP and
stores them in an *in-memory* database (we do it now)

THE DIFFERENCE:
b) All data will be stored in BIND's native RBT-database (RBTDB) instead of
our own in-memory database.

NEW PIECES:
3)
Mechanisms implemented in BIND's RBTDB will do DNSSEC signing etc. for us. The
BIND feature is called 'in-line signing' and it can do all key/signature
maintenance for us, including periodical zone re-signing etc.


The whole point of this proposal is code reuse. I'm trying to avoid
re-inventing the wheel.

Note that the DNSSEC implementation in BIND has ~ 150 kiB of C code, and the stand-alone
signing utilities add another ~ 200 kiB of code (~ 7000 lines). I really
don't want to re-write it all again when it's not reasonable.

Further comments are in-line.


Ok putting some numbers on this topic really helps, thanks!

More inline.

[..]


I haven't seen any reasoning from you why letting Bind do this work is
a better idea.

Simply said - because all the code is already in BIND (the feature is called
'in-line signing', as I mentioned above).


I actually see some security reasons why putting this into a DS plugin
can have quite some advantages instead. Have you considered doing this

It could improve the security a bit, I agree. But I don't think that it is such a
big advantage. BIND already has all the facilities for key material handling,
so the only thing we have to solve is how to distribute keys from LDAP to
a running BIND.


Well it would mean sticking the key in ldap and letting Bind pull them
from there based on ACIs ...
The main issue would be changes in keys, but with the persistent search
I guess that's also not a huge deal.


A zone can be signed with multiple keys at the same time, so key rotation is not a 
problem. Each signature contains a key-id.



work in a DS plugin at all ? If you have and have discarded the idea,
can you say why ?

1) It would require pulling ~ 200 kiB (~ 7000 lines) of DNSSEC signing code
into 389.

2) It would require pulling a 'text -> DNS wire format' parser into 389 (because
our LDAP stores plain text data but the signing process works with DNS wire
format; see the illustration after this list).

3) It simplifies bind-dyndb-ldap, but we still need to re-implement the DNS search
algorithm which takes DNSSEC oddities into account. (Note that the DNS search
algorithm is part of the database implementation. Bugs/limitations in our
implementation are the reason why wildcard records are not supported...)

4) I'm not sure how it will work with replication. How do we ensure that a new
record will not appear in the zone until the associated RRset is (re)computed
by DS? (BIND has a transaction mechanism built into the internal RBTDB.)
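To illustrate point 2): the record below is stored in LDAP in its plain text 
(presentation) form, while the signing code operates on the equivalent wire 
form. A hand-encoded sketch (record only, no DNS message header):

LDAP stores:   www.example.com. 3600 IN A 192.0.2.1

static const unsigned char wire_a_record[] = {
    0x03, 'w', 'w', 'w',
    0x07, 'e', 'x', 'a', 'm', 'p', 'l', 'e',
    0x03, 'c', 'o', 'm', 0x00,  /* owner name as length-prefixed labels */
    0x00, 0x01,                 /* TYPE  = A (1)                        */
    0x00, 0x01,                 /* CLASS = IN (1)                       */
    0x00, 0x00, 0x0e, 0x10,     /* TTL   = 3600                         */
    0x00, 0x04,                 /* RDLENGTH = 4                         */
    0xc0, 0x00, 0x02, 0x01      /* RDATA = 192.0.2.1                    */
};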


389ds has internal transactions, which is why I was thinking to do the
signatures on any change coming into LDAP (direct or via replication),
within the transaction.


The point is that you *can* do changes at run-time, but you need to know about
the changes as soon as possible because each change requires a significant
amount of work (and magic/mana :-).

It opens a lot of opportunities for race condition problems.


Yes, I am really concerned about the race conditions of course, however
I really wonder whether doing signing in bind is really a good idea.
We need to synchronize these signatures to all masters right ?

No, because signatures are computed and stored only in memory - and forgotten
after BIND shutdown. Yes, it requires re-computing on each load; this is
definitely a disadvantage.


Ok I definitely need numbers here.
Can you do a test with a normal, text based, Bind zone with 10k entries
and see how much time it takes to re-sign everything ?

I suspect that will be way too much, so we will have the added problem
of having to maintain a local cache in order to be able to restart Bind
and have it actually serve results in a reasonable time w/o killing the
machine completely.


Right, it is a good idea. I never tried a really big zone (for some reason?).

Command: /usr/bin/time dnssec-signzone -n 1 -o example.net example.net
Signing was limited to single core (parameter -n 1).

Unsigned zone: 327 285 bytes, ~ 10 000 A records and several other records
Signed zone: 10 847 688 bytes
Results:
38.28user 0.09system 0:38.80elapsed 98%CPU (0avgtext+0avgdata 18032maxresident)k
0inputs+21200outputs (0major+4646minor)pagefaults 0swaps

Wow, it is pretty slow.

CPU: Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz
Operating memory: 4 GB of DDR3 @ 1333 MHz

The simplest way how to mitigate problem with slow start-up is:
1) Store signed version of the zone on the server's file system.
2) Load signed version from disk during start up.
3) In the background, do full zone reload+resign.
4) Switch old and new 

Re: [Freeipa-devel] DNSSEC support design considerations

2013-05-22 Thread Simo Sorce
On Wed, 2013-05-22 at 17:01 +0200, Petr Spacek wrote:
 Right, it is good idea. I never tried really big zone (for some reason?).
 
 Command: /usr/bin/time dnssec-signzone -n 1 -o example.net example.net
 Signing was limited to single core (parameter -n 1).
 
 Unsigned zone: 327 285 bytes, ~ 10 000 A records and several other records
 Signed zone: 10 847 688 bytes
 Results:
 38.28user 0.09system 0:38.80elapsed 98%CPU (0avgtext+0avgdata 
 18032maxresident)k
 0inputs+21200outputs (0major+4646minor)pagefaults 0swaps
 
 Wow, it is pretty slow.

Yeah this is what I expected, crypto is not really fast.

 CPU: Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz
 Operating memory: 4 GB of DDR3 @ 1333 MHz
 
 The simplest way how to mitigate problem with slow start-up is:
 1) Store signed version of the zone on the server's file system.
 2) Load signed version from disk during start up.
 3) In the background, do full zone reload+resign.
 4) Switch old and new zones when it is done.

Maybe instead of 3/4 we can do something that requires less computation.
We can take the list of records in the zone, and load the list of
records from LDAP.

Here we set also the persistent search but we lock it so any update is
queued until we are done with the main resync task.
(We can temporarily also refuse DNS Updates I guess)

We cross check to find which records have been changed, which have been
removed, and which have been added.
Discard all the records that are unchanged (I assume the vast majority)
and then proceed to delete/modify/add the difference.

This would save a large amount of computation at every startup, even if
in the background the main issue here is not just time, but the fact
that you pegged the CPU to 98% for so long.

 It will consume some computing power during start up, but the implementation 
 should be really simple. (BIND naturally can save and load zones :-))

I do not think the above would be much more difficult, and could save
quite a lot of computing if done in the right order and within a bind
database transaction I guess.

  Well given an IPA infrastructure uses Dynamic Updates I expect data to
  change frequently enough that if you have an outage that lasts more than
  a handful of minutes the data in the saved copy will not match the data
  in LDAP.
 I agree, that is definitely true, but I think that the most important pieces 
 are NS, SRV and A records for servers. They are not changed that often.
 
 IMHO admins would be happier to have 100 records out of 10 000 out of date 
 but a working infrastructure, than no records at all (and a broken 
 infrastructure).
 
 Again, there can be some LDAP synchronization timeout and the DNS server can stop 
 responding to queries when synchronization is lost for a longer time.

This may be a reasonable compromise.

  The idea is that _location is dynamic though, isn't it ?
 The value seems to be 'dynamic', but only from the client's point of view. AFAIK 
 there are three options:
 1) _location is configured for a particular client statically in LDAP
 2) Each individual server has its own default value for _location (for clients 
 without explicit configuration).
 3) Each individual server can be configured to override all values in 
 _location with one fixed value, i.e. all clients (e.g. in a 
 bandwidth-constrained location) will use only the local server.
 
 This is how I understood the design. Is it correct? If it is, then the value 
 is static from server's point of view. The 'dynamics' is a result of moving 
 client, because client is asking different servers for an answer.

Uhmm true, so we could simply store all the fields from within the
plugin so that RBTDB can sign them too.

I think my only concern is if the client can ever load some data from
one server and then some other data from another and find mismatching
signatures.

  Anyway what if we do not sign _location records ?
  Will DNSSEC compliant clients fail in that case ?
 I'm not 100 % sure, but I see two problems:
 
 1) It seems that opt-out is allowed only for delegation points (NS records 
 belonging to sub-domains).
 http://tools.ietf.org/html/rfc5155#section-6
 
 2) Opt-out allows an attacker to insert unsigned data in the replies.
 http://www.stanford.edu/~jcm/papers/dnssec_ndss10.pdf section 3.4

I think for location discovery this may be a problem we can accept.
But if we can avoid it we probably should.

 Anyway, I don't think that it is necessary.

ok.

  What changes are going to be required in bind-dyndb-ldap to use RBTDB
  from Bind ? Do we have interfaces already ? Or will it require
  additional changes to the glue code we currently use to load our plugin
  into bind ?
 
 I have some proof-of-concept code. AFAIK no changes to public interfaces are 
 necessary.
 
 There are 40 functions each database driver has to implement. Currently, we 
 have our own implementation for most of them and some of them are NULL 
 because they are required only for DNSSEC.
 
 The typical change from our implementation to 

Re: [Freeipa-devel] DNSSEC support design considerations

2013-05-21 Thread Petr Spacek

Hello,

I found that we (probably) misunderstood each other. The sky-high level 
overview of the proposal follows:


NO CHANGE:
1) LDAP stores all *unsigned* data.

2)
NO CHANGE:
a) bind-dyndb-ldap *on each server* fetches all unsigned data from LDAP and 
stores them in an *in-memory* database (we do it now)


THE DIFFERENCE:
b) All data will be stored in BIND's native RBT-database (RBTDB) instead of 
our own in-memory database.


NEW PIECES:
3)
Mechanisms implemented in BIND's RBTDB will do DNSSEC signing etc. for us. The 
BIND feature is called 'in-line signing' and it can do all key/signature 
maintenance for us, including periodical zone re-signing etc.



The whole point of this proposal is code reuse. I'm trying to avoid 
re-inventing the wheel.


Note that the DNSSEC implementation in BIND has ~ 150 kiB of C code, and the stand-alone 
signing utilities add another ~ 200 kiB of code (~ 7000 lines). I really 
don't want to re-write it all again when it's not reasonable.


Further comments are in-line.


On 20.5.2013 14:07, Simo Sorce wrote:

On Wed, 2013-05-15 at 17:11 +0200, Petr Spacek wrote:

On 15.5.2013 10:29, Simo Sorce wrote:

I investigated various scenarios for DNSSEC integration and I would like to
hear your opinions about proposed approach and it's effects.


The most important finding is that bind-dyndb-ldap can't support DNSSEC
without rewrite of the 'in-memory database' component.


Can you elaborate why a rewrite would be needed ? What constraints do we not 
meet ?


We have three main problems - partially with data structures and mostly with
the way we work with the 'internal database':

1) DNSSEC requires strict record ordering, i.e. each record in the database has to
have a predecessor and a successor (ordering by name and then by record data; the
canonical order is illustrated below). This can be done relatively simply, but it
requires a full dump of the database.

2) On-line record signing requires a lot of data stored
per-record+per-signature. This would require a bigger effort than point 1),
because many data structures and the respective APIs and locking protocols have to
be re-designed.

3) Our current 'internal database' acts as a 'cache', i.e. records can appear
and disappear dynamically and the 'cache' is not considered an authoritative
source of data: an LDAP search is conducted each time some data are not
found etc. The result is that the same data can disappear and then appear
again in the cache etc.

Typical update scenario, with persistent search enabled:
a) DNS UPDATE from client is received by BIND
b) New data are written to LDAP
c) DN of modified object is received via persistent search
d) All RRs under the *updated name* are discarded from the cache
-- now the cache is not consistent with data in LDAP
e) Object from LDAP is fetched by plugin
-- a query for the updated name will enforce instant cache refresh, because
we know that the cache is not authoritative
f) All RRs in the object are updated (in cache)

The problem is that the cache in intermediate states (between -- marks) can't
be used as authoritative source and will produce incorrect signatures. The
text below contains more details.

Databases in BIND have a concept of 'versions' ('transactions') which our
internal cache does not implement ... It could be solved by proper locking, of
course, but it will not be a piece of cake. We need to take care of many
parallel updates, parallel queries and parallel re-signing at the same time.

I don't say that it is impossible to implement our own backend with same
properties as BIND's database, but I don't see the value (and I can see a lot
of bugs :-).
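Regarding the strict ordering in point 1): for illustration, the canonical DNS 
name order (the example set from RFC 4034, section 6.1) sorts these owner names 
as follows:

example
a.example
yljkjljk.a.example
Z.a.example
zABC.a.EXAMPLE
z.example
\001.z.example
*.z.example
\200.z.example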


Well, we do not necessarily need all the same properties of bind's
database, only those that allow us to properly handle DNSSEC, so let's
try to uncover what those constraints are first, so I can understand why
you propose this solution as better than something else.


Fortunately, it seems
that we can drop our own implementation of the internal DNS database
(ldap_driver.c and cache.c) and re-use the database from BIND (so called
RBTDB).

I'm trying to reach Adam Tkac with the question "Why did we decide to implement
it again rather than re-use BIND's code?".


Re-use of BIND's implementation will have the following properties:


== Advantages ==
- Big part of DNSSEC implementation from BIND9 can be reused.
- Overall plugin implementation will be simpler - we can drop many lines of
our code and bugs.
- Run-time performance could be much much better.

- We will get implementation for these tickets for free:
-- #95  wildcard CNAME does NOT work
-- #64  IXFR support (IMHO this is important!)
-- #6   Cache non-existing records

And partially:
-- #7   Allow limiting of the cache


Sounds very interesting.



== Disadvantages ==
- Support for configurations without persistent search will complicate things
a lot.
-- Proposal = Make persistent search obligatory. OpenLDAP supports LDAP
SyncRepl, so it should be possible to make plugin compatible with 389 and
OpenLDAP at the same time. I would 

Re: [Freeipa-devel] DNSSEC support design considerations

2013-05-21 Thread Simo Sorce
On Tue, 2013-05-21 at 18:32 +0200, Petr Spacek wrote:
 Hello,
 
 I found that we (probably) misunderstood each other. The sky-high level 
 overview of the proposal follows:
 
 NO CHANGE:
 1) LDAP stores all *unsigned* data.
 
 2)
 NO CHANGE:
 a) bind-dyndb-ldap *on each server* fetches all unsigned data from LDAP and 
 stores them in an *in-memory* database (we do it now)
 
 THE DIFFERENCE:
 b) All data will be stored in BIND's native RBT-database (RBTDB) instead of 
 our own in-memory database.
 
 NEW PIECES:
 3)
 Mechanisms implemented in BIND's RBTDB will do DNSSEC signing etc. for us. The 
 BIND feature is called 'in-line signing' and it can do all key/signature 
 maintenance for us, including periodical zone re-signing etc.
 
 
 The whole point of this proposal is code reuse. I'm trying to avoid 
 re-inventing the wheel.
 
 Note that DNSSEC implementation in BIND has ~ 150 kiB of C code, stand-alone 
 signing utilities add another ~ 200 kiB of code (~ 7000 lines). I really 
 don't want to re-write it again when it's not reasonable.
 
 Further comments are in-line.

Ok putting some numbers on this topic really helps, thanks!

More inline.

[..]

  I haven't seen any reasoning from you why letting Bind do this work is
  a better idea.
 Simply said - because all the code is already in BIND (the feature is called 
 'in-line signing', as I mentioned above).
 
  I actually see some security reasons why putting this into a DS plugin
  can have quite some advantages instead. Have you considered doing this
 It could improve the security a bit, I agree. But I don't think that it is such a 
 big advantage. BIND already has all the facilities for key material handling, 
 so the only thing we have to solve is how to distribute keys from LDAP to 
 a running BIND.

Well it would mean sticking the key in ldap and letting Bind pull them
from there based on ACIs ...
The main issue would be changes in keys, but with the persistent search
I guess that's also not a huge deal.

  work in a DS plugin at all ? If you have and have discarded the idea,
  can you say why ?
 1) It would require pulling ~ 200 kiB (~ 7000 lines) of DNSSEC signing code 
 into 389.
 
 2) It would require pulling 'text-DNS wire format' parser into 389 (because 
 our LDAP stores plain text data but the signing process works with DNS wire 
 format).
 
 3) It simplifies bind-dyndb-ldap, but we still need to re-implement the DNS 
 search algorithm which takes DNSSEC oddities into account. (Note that the DNS search 
 algorithm is part of the database implementation. Bugs/limitations in our 
 implementation are the reason why wildcard records are not supported...)
 
 4) I'm not sure how it will work with replication. How to ensure that new 
 record will not appear in the zone until the associated RRset is (re)computed 
 by DS? (BIND has transaction mechanism built-in to the internal RBTDB.)

389ds has internal transactions, which is why I was thinking to do the
signatures on any change coming into LDAP (direct or via replication),
within the transaction.

  The point is that you *can* do changes run-time, but you need to know about
  the changes as soon as possible because each change requires significant
  amount of work (and magic/mana :-).
 
  It opens a lot of opportunities for race condition problems.
 
  Yes, I am really concerned about the race conditions of course, however
  I really wonder whether doing signing in bind is really a good idea.
  We need to synchronize these signatures to all masters right ?
 No, because signatures are computed and stored only in memory - and forgotten 
 after BIND shutdown. Yes, it requires re-computing on each load, this is 
 definitely disadvantage.

Ok I definitely need numbers here.
Can you do a test with a normal, text based, Bind zone with 10k entries
and see how much time it takes to re-sign everything ?

I suspect that will be way too much, so we will have the added problem
of having to maintain a local cache in order to be able to restart Bind
and have it actually serve results in a reasonable time w/o killing the
machine completely.

  Doesn't that mean we need to store this data back in LDAP ?
 No, only 'normal' DNS updates containing unsigned data will be written back 
 to 
 LDAP. RRSIG and NSEC records will never reach LDAP.
 
  That means more round-trips before the data ends up being usable, and we
  do not have transactions in LDAP, so I am worried that doing the signing
  in Bind may not be the best way to go.
 I'm proposing to re-use BIND's transaction mechanism built in internal 
 database implementation.
 
  = It should be possible to save old database to disk (during BIND 
  shutdown
  or
  periodically) and re-use this old database during server startup. I.e. 
  server
  will start replying immediately from 'old' database and then the server 
  will
  switch to the new database when dump from LDAP is finished.
 
 
  This looks like an advantage ? Why is it a disadvantage ?
  It was mentioned 

Re: [Freeipa-devel] DNSSEC support design considerations

2013-05-20 Thread Simo Sorce
On Wed, 2013-05-15 at 17:11 +0200, Petr Spacek wrote:
 On 15.5.2013 10:29, Simo Sorce wrote:
  I investigated various scenarios for DNSSEC integration and I would like to
  hear your opinions about proposed approach and it's effects.
 
 
  The most important finding is that bind-dyndb-ldap can't support DNSSEC
  without rewrite of the 'in-memory database' component.
 
  Can you elaborate why a rewrite would be needed ? What constraints do we not 
  meet ?
 
 We have three main problems - partially with data structures and mostly with 
 the way we work with the 'internal database':
 
 1) DNSSEC requires strict record ordering, i.e. each record in the database has to 
 have a predecessor and a successor (ordering by name and then by record data). 
 This can be done relatively simply, but it requires a full dump of the 
 database.
 
 2) On-line record signing requires a lot of data stored 
 per-record+per-signature. This would require bigger effort than point 1), 
 because many data structures and respective APIs and locking protocols have 
 to 
 be re-designed.
 
 3) Our current 'internal database' acts as a 'cache', i.e. records can appear 
 and disappear dynamically and the 'cache' is not considered as authoritative 
 source of data: LDAP search is conducted each time when some data are not 
 found etc. The result is that the same data can disappear and then appear 
 again in the cache etc.
 
 Typical update scenario, with persistent search enabled:
 a) DNS UPDATE from client is received by BIND
 b) New data are written to LDAP
 c) DN of modified object is received via persistent search
 d) All RRs under the *updated name* are discarded from the cache
 -- now the cache is not consistent with data in LDAP
 e) Object from LDAP is fetched by plugin
 -- a query for the updated name will enforce instant cache refresh, because 
 we know that the cache is not authoritative
 f) All RRs in the object are updated (in cache)
 
 The problem is that the cache in intermediate states (between -- marks) 
 can't 
 be used as authoritative source and will produce incorrect signatures. The 
 text below contains more details.
 
 Databases in BIND have a concept of 'versions' ('transactions') which our 
 internal cache does not implement ... It could be solved by proper locking, of 
 course, but it will not be a piece of cake. We need to take care of many 
 parallel updates, parallel queries and parallel re-signing at the same time.
 
 I don't say that it is impossible to implement our own backend with same 
 properties as BIND's database, but I don't see the value (and I can see a lot 
 of bugs :-).

Well, we do not necessarily need all the same properties of bind's
database, only those that allow us to properly handle DNSSEC, so let's
try to uncover what those constraints are first, so I can understand why
you propose this solution as better than something else.

  Fortunately, it seems
  that we can drop our own implementation of the internal DNS database
  (ldap_driver.c and cache.c) and re-use the database from BIND (so called
  RBTDB).
 
  I'm trying to reach Adam Tkac with the question "Why did we decide to
  implement it again rather than re-use BIND's code?".
 
 
  Re-use of BIND's implementation will have the following properties:
 
 
  == Advantages ==
  - Big part of DNSSEC implementation from BIND9 can be reused.
  - Overall plugin implementation will be simpler - we can drop many lines of
  our code and bugs.
  - Run-time performance could be much much better.
 
  - We will get implementation for these tickets for free:
  -- #95  wildcard CNAME does NOT work
  -- #64 IXFR support (IMHO this is important!)
  -- #6  Cache non-existing records
 
  And partially:
  -- #7  Allow limiting of the cache
 
  Sounds very interesting.
 
 
  == Disadvantages ==
  - Support for configurations without persistent search will complicate 
  things
  a lot.
  -- Proposal = Make persistent search obligatory. OpenLDAP supports LDAP
  SyncRepl, so it should be possible to make plugin compatible with 389 and
  OpenLDAP at the same time. I would defer this to somebody from 
  users/OpenLDAP
  community.
 
  Why the persistent search would be required ?
 As I mentioned above - you need database dump, because DNSSEC requires strict 
 name and record ordering.
 
 It is possible to do incremental changes when the 'starting snapshot' is 
 established, but it means that we need information about each particular 
 change = that is what persistent search provides.

Ok, so it is to have a complete view of the database, I assume to reduce
the number of re-computations needed for DNSSEC.

  - Data from LDAP have to be dumped to memory (or to file) before the server
  will start replying to queries.
  = This is not nice, but servers usually are not restarted often. IMHO it 
  is
  a
  good compromise between complexity and performance.
 
  I am not sure I understand what this means. Does it mean you cannot change 
  single
  cache entries on the 

Re: [Freeipa-devel] DNSSEC support design considerations

2013-05-15 Thread Simo Sorce


- Original Message -
 Hello list,
 
 I investigated various scenarios for DNSSEC integration and I would like to
 hear your opinions about proposed approach and it's effects.
 
 
 The most important finding is that bind-dyndb-ldap can't support DNSSEC
 without rewrite of the 'in-memory database' component.

Can you elaborate why a rewrite would be needed ? What constraints do we not 
meet ?

 Fortunately, it seems
 that we can drop our own implementation of the internal DNS database
 (ldap_driver.c and cache.c) and re-use the database from BIND (so called
 RBTDB).
 
  I'm trying to reach Adam Tkac with the question "Why did we decide to implement
  it again rather than re-use BIND's code?".
 
 
 Re-use of BIND's implementation will have the following properties:
 
 
 == Advantages ==
 - Big part of DNSSEC implementation from BIND9 can be reused.
 - Overall plugin implementation will be simpler - we can drop many lines of
 our code and bugs.
 - Run-time performance could be much much better.
 
 - We will get implementation for these tickets for free:
 -- #95  wildcard CNAME does NOT work
 -- #64  IXFR support (IMHO this is important!)
 -- #6 Cache non-existing records
 
 And partially:
 -- #7 Allow limiting of the cache

Sounds very interesting.


 == Disadvantages ==
 - Support for configurations without persistent search will complicate things
 a lot.
 -- Proposal = Make persistent search obligatory. OpenLDAP supports LDAP
 SyncRepl, so it should be possible to make plugin compatible with 389 and
 OpenLDAP at the same time. I would defer this to somebody from users/OpenLDAP
 community.

Why the persistent search would be required ?

 - Data from LDAP have to be dumped to memory (or to file) before the server
 will start replying to queries.
 = This is not nice, but servers usually are not restarted often. IMHO it is
 a
 good compromise between complexity and performance.

I am not sure I understand what this means. Does it mean you cannot change 
single
cache entries on the fly when a change happens in LDAP ? Or something else ?

 = It should be possible to save old database to disk (during BIND shutdown
 or
 periodically) and re-use this old database during server startup. I.e. server
 will start replying immediately from 'old' database and then the server will
 switch to the new database when dump from LDAP is finished.


This looks like an advantage ? Why is it a disadvantage ?

 = As a side effect, BIND can start even if connection to LDAP server is down
 - this can improve infrastructure resiliency a lot!

Same as above ?

 == Uncertain effects ==
 - Memory consumption will change, but I'm not sure in which direction.
 - SOA serial number maintenance is an open question.

Why is the SOA serial a problem ?

 The decision whether persistent search is a 'requirement' or not will have a
 significant impact on the design, so I will write the design document when this
 decision is made.

I would like to know more details about the reasons before I can usefully 
comment.

Thanks for the research work done so far!

Simo.

-- 
Simo Sorce * Red Hat, Inc. * New York



Re: [Freeipa-devel] DNSSEC support design considerations

2013-05-15 Thread Petr Spacek

On 15.5.2013 10:29, Simo Sorce wrote:

I investigated various scenarios for DNSSEC integration and I would like to
hear your opinions about proposed approach and it's effects.


The most important finding is that bind-dyndb-ldap can't support DNSSEC
without rewrite of the 'in-memory database' component.


Can you elaborate why a rewrite would be needed ? What constraints do we not 
meet ?


We have three main problems - partially with data structures and mostly with 
the way we work with the 'internal database':


1) DNSSEC requires strict record ordering, i.e. each record in the database has to 
have a predecessor and a successor (ordering by name and then by record data). 
This can be done relatively simply, but it requires a full dump of the database.


2) On-line record signing requires a lot of data stored 
per-record+per-signature. This would require bigger effort than point 1), 
because many data structures and respective APIs and locking protocols have to 
be re-designed.


3) Our current 'internal database' acts as a 'cache', i.e. records can appear 
and disappear dynamically and the 'cache' is not considered as authoritative 
source of data: LDAP search is conducted each time when some data are not 
found etc. The result is that the same data can disappear and then appear 
again in the cache etc.


Typical update scenario, with persistent search enabled:
a) DNS UPDATE from client is received by BIND
b) New data are written to LDAP
c) DN of modified object is received via persistent search
d) All RRs under the *updated name* are discarded from the cache
-- now the cache is not consistent with data in LDAP
e) Object from LDAP is fetched by plugin
-- a query for the updated name will enforce instant cache refresh, because 
we know that the cache is not authoritative

f) All RRs in the object are updated (in cache)

The problem is that the cache in intermediate states (between -- marks) can't 
be used as authoritative source and will produce incorrect signatures. The 
text below contains more details.


Databases in BIND have a concept of 'versions' ('transactions') which our 
internal cache does not implement ... It could be solved by proper locking, of 
course, but it will not be a piece of cake. We need to take care of many 
parallel updates, parallel queries and parallel re-signing at the same time.


I don't say that it is impossible to implement our own backend with same 
properties as BIND's database, but I don't see the value (and I can see a lot 
of bugs :-).




Fortunately, it seems
that we can drop our own implementation of the internal DNS database
(ldap_driver.c and cache.c) and re-use the database from BIND (so called
RBTDB).

I'm trying to reach Adam Tkac with the question "Why did we decide to implement
it again rather than re-use BIND's code?".


Re-use of BIND's implementation will have the following properties:


== Advantages ==
- Big part of DNSSEC implementation from BIND9 can be reused.
- Overall plugin implementation will be simpler - we can drop many lines of
our code and bugs.
- Run-time performance could be much much better.

- We will get implementation for these tickets for free:
-- #95  wildcard CNAME does NOT work
-- #64  IXFR support (IMHO this is important!)
-- #6   Cache non-existing records

And partially:
-- #7   Allow limiting of the cache


Sounds very interesting.



== Disadvantages ==
- Support for configurations without persistent search will complicate things
a lot.
-- Proposal = Make persistent search obligatory. OpenLDAP supports LDAP
SyncRepl, so it should be possible to make plugin compatible with 389 and
OpenLDAP at the same time. I would defer this to somebody from users/OpenLDAP
community.


Why the persistent search would be required ?
As I mentioned above - you need database dump, because DNSSEC requires strict 
name and record ordering.


It is possible to do incremental changes when the 'starting snapshot' is 
established, but it means that we need information about each particular 
change - that is what the persistent search provides.



- Data from LDAP have to be dumped to memory (or to file) before the server
will start replying to queries.
= This is not nice, but servers usually are not restarted often. IMHO it is
a
good compromise between complexity and performance.


I am not sure I understand what this means. Does it mean you cannot change 
single
cache entries on the fly when a change happens in LDAP ? Or something else ?

Sorry, I didn't explain this part in its full depth.

You can change everything at run-time, but there are small details which 
complicate loading of the zone and run-time changes:


1) A normal zone requires SOA + NS + A/AAAA records (for the NSes) to load. It is 
(hypothetically) possible to create an empty zone, fill it with SOA, NS and A 
records and then incrementally add the rest of the records, as illustrated below.
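For illustration, the minimal skeleton that has to be in place before the rest 
of the records can be added (names and timer values are made up):

example.com.     3600 IN SOA ns1.example.com. hostmaster.example.com. (
                          2013060101 ; serial
                          3600       ; refresh
                          900        ; retry
                          1209600    ; expire
                          300 )      ; negative TTL
example.com.     3600 IN NS ns1.example.com.
ns1.example.com. 3600 IN A  192.0.2.1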


The problem is that you need to re-implement the DNS resolution algorithm to find 
which records you need at the beginning (SOA, NS, A/AAAA) and then