Re: Allegations regarding OpenBSD IPSEC

2011-01-03 Thread Andres Perera
On Tue, Dec 14, 2010 at 5:54 PM, Theo de Raadt dera...@cvs.openbsd.org
wrote:
 I have received a mail regarding the early development of the OpenBSD
 IPSEC stack.  It is alleged that some ex-developers (and the company
 they worked for) accepted US government money to put backdoors into
 our network stack, in particular the IPSEC stack.  Around 2000-2001.

Funny how this happened right after the massive WikiLeaks releases.

It worked, though: most people fell for it.



Re: Allegations regarding OpenBSD IPSEC

2010-12-31 Thread Ray Percival
On Dec 31, 2010, at 0:57, Otto Moerbeek o...@drijf.net wrote:

 On Thu, Dec 30, 2010 at 08:41:10PM -0700, Kjell Wooding wrote:

 Note that this assumes that there is no backdoor in random(6) (or
 arc4random_uniform, which it calls) designed to prevent the source file
 with the backdoor from being selected with the above command.


 That's true. I would submit a patch, but it would require every
 developer to carry around a deck of cards, 12 dice, and a large pot full
 of numbered balls...

 I thought numbered balls only work (without a backdoor) if they are
 drawn from an *urn*.

-Otto

Only if the vicar's wife doesn't peek.



Re: Allegations regarding OpenBSD IPSEC

2010-12-30 Thread Janne Johansson
2010/12/21 Kurt Knochner cdowl...@googlemail.com

 2010/12/21 Theo de Raadt dera...@cvs.openbsd.org
  regarding the allegations about a backdoor being planted into OpenBSD,
  I did a code review myself [...]
 
  It is unfortunate that it required an allegation of this sort for
  people to get to the point where they stop blindly trusting and
  instead go audit the code

 without a 'hint' (true or fake), where would you start auditing the
 code? It's just too much.


Ted Unangst already solved that for all the potential lookers:

Quote from http://marc.info/?l=openbsd-misc&m=124413533913404&w=2
-
It's not about where you start. It's about starting anywhere. Here, watch,
it's this easy:
find /usr/src -name *.c | random 1
-

-- 
 To our sweethearts and wives.  May they never meet. -- 19th century toast



Re: Allegations regarding OpenBSD IPSEC

2010-12-30 Thread Ryan McBride
On Thu, Dec 30, 2010 at 09:38:41AM +0100, Janne Johansson wrote:
  without a 'hint' (true or fake), where would you start auditing the
  code? It's just too much.
 
 Ted Unangst already solved that for all the potential lookers:
 
 Quote from http://marc.info/?l=openbsd-misc&m=124413533913404&w=2
 -
 It's not about where you start. It's about starting anywhere. Here, watch,
 it's this easy:
 find /usr/src -name *.c | random 1
 -

Note that this assumes that there is no backdoor in random(6) (or
arc4random_uniform, which it calls) designed to prevent the source file
with the backdoor from being selected with the above command.



Re: Allegations regarding OpenBSD IPSEC

2010-12-30 Thread Kjell Wooding
 Note that this assumes that there is no backdoor in random(6) (or
 arc4random_uniform, which it calls) designed to prevent the source file
 with the backdoor from being selected with the above command.


That's true. I would submit a patch, but it would require every developer to
carry around a deck of cards, 12 dice, and a large pot full of numbered
balls...

-kj



Re: Allegations regarding OpenBSD IPSEC

2010-12-30 Thread Otto Moerbeek
On Thu, Dec 30, 2010 at 08:41:10PM -0700, Kjell Wooding wrote:

  Note that this assumes that there is no backdoor in random(6) (or
  arc4random_uniform, which it calls) designed to prevent the source file
  with the backdoor from being selected with the above command.
 
 
 That's true. I would submit a patch, but it would require every developer to
 carry around a deck of cards, 12 dice, and a large pot full of numbered
 balls...

I thought numbered balls only work (without a backdoor) if they are
drawn from an *urn*. 

-Otto



Re: Allegations regarding OpenBSD IPSEC

2010-12-24 Thread martin tarb
Theo de Raadt deraadt at cvs.openbsd.org writes:

 
  regarding the allegations about a backdoor being planted into OpenBSD, I
  did a code review myself [...]
 
 By the way...
 
 It is unfortunate that it required an allegation of this sort for
 people to get to the point where they stop blindly trusting and
 instead go audit the code
 
 But looked at from the half-glass-full side, it is refreshing to see
 people trying!
 
 
Actually, if I were designing such a backdoor, I wouldn't go for the item
that gets the highest attention and is the most difficult to crack. And
because the crypto code gets the most attention, it's highly likely it'll
be replaced every now and then with something better: backdoor gone.

I would use social engineering: challenge the IPSec stack into telling me
what I want to know.

How:
- Try to set up a connection with the server you want the keys from.
- Make that connection fail.
- The failure response would use an additional parameter in the setup
payload and answer with the info I want.

So where to look: in the state machine that initiates/sets up the IPSec
connection, especially the errors/declines it sends out. Things like:
setup failure, invalid key: (your key + additional parameter).

That would be very difficult to find in reviews (who looks at the error
notices? Reviews generally look at the main functionality).

Stack state machines tend to be tied to the protocol basics, and those
don't change, so it's very unlikely a backdoor like this would be
displaced by a better implementation, especially if the original
implementation is decent and robust.

This mechanism would need only a handful of connection setup attempts to
get everything you need to decrypt a recorded stream. No intrusions would
ever be detected, unless logging is at debug level, and who wades through
that amount of logging ...

In some situations you might need to be able to spoof the originating IP,
though having access to the network infrastructure itself would be enough.

Easy, hardly any code required, very difficult to detect, and very likely
to survive for a long period.



Re: Allegations regarding OpenBSD IPSEC

2010-12-24 Thread Otto Moerbeek
On Fri, Dec 24, 2010 at 07:27:02PM +, martin tarb wrote:

 Theo de Raadt deraadt at cvs.openbsd.org writes:
 
  
   regarding the allegations about a backdoor being planted into OpenBSD, I
   did a code review myself [...]
  
  By the way...
  
  It is unfortunate that it required an allegation of this sort for
  people to get to the point where they stop blindly trusting and
  instead go audit the code
  
  But looked at from the half-glass-full side, it is refreshing to see
  people trying!
  
  
 Actually, if I were designing such a backdoor, I wouldn't go for the item
 that gets the highest attention and is the most difficult to crack. And
 because the crypto code gets the most attention, it's highly likely it'll
 be replaced every now and then with something better: backdoor gone.
 
 I would use social engineering: challenge the IPSec stack into telling me
 what I want to know.
 
 How:
 - Try to set up a connection with the server you want the keys from.
 - Make that connection fail.
 - The failure response would use an additional parameter in the setup
 payload and answer with the info I want.
 
 So where to look: in the state machine that initiates/sets up the IPSec
 connection, especially the errors/declines it sends out. Things like:
 setup failure, invalid key: (your key + additional parameter).
 
 That would be very difficult to find in reviews (who looks at the error
 notices? Reviews generally look at the main functionality).
 
 Stack state machines tend to be tied to the protocol basics, and those
 don't change, so it's very unlikely a backdoor like this would be
 displaced by a better implementation, especially if the original
 implementation is decent and robust.
 
 This mechanism would need only a handful of connection setup attempts to
 get everything you need to decrypt a recorded stream. No intrusions would
 ever be detected, unless logging is at debug level, and who wades through
 that amount of logging ...
 
 In some situations you might need to be able to spoof the originating IP,
 though having access to the network infrastructure itself would be enough.
 
 Easy, hardly any code required, very difficult to detect, and very likely
 to survive for a long period.

Please also check what djm@ wrote in one of the first replies to Theo
original mail:

http://marc.info/?l=openbsd-tech&m=129237675106730&w=2

-Otto



Re: Allegations regarding OpenBSD IPSEC

2010-12-24 Thread martin tarb
Otto Moerbeek otto at drijf.net writes:
 Please also check what djm@ wrote in one of the first replies to Theo
 original mail:
 
 http://marc.info/?l=openbsd-tech&m=129237675106730&w=2
 
   -Otto


Yep, I did see that one, though it focuses on (intentional) bugs in the
main crypto stuff, and my suggestion is that that's not the location to
look for backdoors.

Too obvious, too complicated, too much coding required to realize
something useful, etc.

There is no need to break the crypto stuff if you can convince the IPSec
stack to send you the keys. Once you have the keys, the only thing you
have to do is decode the recorded encrypted stream. If you are the FBI,
you definitely have access to intermediate nodes; there's no need to let
one of the end-nodes generate the traffic to you. You only need the keys;
just make sure the IPSec stack tells you them when you ask for it, and
only when you ask for it with a crafted IPSec init packet.



Re: Allegations regarding OpenBSD IPSEC

2010-12-24 Thread Otto Moerbeek
On Fri, Dec 24, 2010 at 07:53:52PM +, martin tarb wrote:

 Otto Moerbeek otto at drijf.net writes:
  Please also check what djm@ wrote in one of the first replies to Theo
  original mail:
  
  http://marc.info/?l=openbsd-tech&m=129237675106730&w=2
  
  -Otto
 
 
 Yep, I did see that one, though it focuses on (intentional) bugs in the
 main crypto stuff, and my suggestion is that that's not the location to
 look for backdoors.

Huh, I quote:

So a subverted developer would probably need to work on the network stack.
I can think of a few obvious ways that they could leak plaintext or key
material:

and then Damien gives a few examples of how that could be accomplished.

 
 Too obvious, too complicated, too much coding required to realize
 something useful, etc.
 
 There is no need to break the crypto stuff if you can convince the IPSec
 stack to send you the keys. Once you have the keys, the only thing you
 have to do is decode the recorded encrypted stream. If you are the FBI,
 you definitely have access to intermediate nodes; there's no need to let
 one of the end-nodes generate the traffic to you. You only need the keys;
 just make sure the IPSec stack tells you them when you ask for it, and
 only when you ask for it with a crafted IPSec init packet.

What you describe above is one of the ways Damien mentions (as I read
it): If I was doing it, I'd try to make the reuse happen on something
like ICMP errors, so I could send error-inducing probe packets at
times I thought were interesting.

Note that the reuse of mbufs will have the effect of sending key material
to the outside.

Please elaborate on how your suggestion is different.

-Otto



Re: Allegations regarding OpenBSD IPSEC

2010-12-24 Thread martin tarb
Otto Moerbeek otto at drijf.net writes:
 Huh, I quote:
 
 So a subverted developer would probably need to work on the network stack.
 I can think of a few obvious ways that they could leak plaintext or key
 material:
 
 and then Damien gives a few examples of how that could be accomplished.
 
 What you describe above is one of the ways Damien mentions (as I read
 it): If I was doing it, I'd try to make the reuse happen on something
 like ICMP errors, so I could send error-inducing probe packets at
 times I thought were interesting 
 
 Note the reuse of mbus will have the effect of sending key material to
 the outside.
 
 Please elaborate in what respect you suggestion is different.
 
   -Otto


Yeah, the words 'network stack' and 'crafted packet' are there, though the
rest is significantly different. It's all network stack, and the ICMP
thing focuses on randomly probing for a potentially not-cleared buffer,
i.e. intentional failures in the encryption.

What I'm trying to make clear: don't focus on the encryption stuff (no
need to break that), nor on the buffers used, etc. Just look at what the
STATE MACHINE in the IPSEC network stack (or, if you want, the state
machine in the encryption setup) does, especially the handling of error
conditions. It's pretty easy to send a crafted packet and let the stack
release the keys to the one asking. So: don't look for technical bugs
like failing memory clearing or potentially insufficient entropy. Look
for a feature in the error handling, technically perfect, though with an
unwanted functionality. Such a construction would look pretty legit and
would work very well with normal, not-specifically-crafted packets.

This thread (and the message you refer to) is moving in the direction of
encryption shortcomings, and I don't think that's where a backdoor is to
be expected.



Re: Allegations regarding OpenBSD IPSEC

2010-12-23 Thread Salvador Fandiño

On 12/23/2010 06:39 AM, Marsh Ray wrote:

On 12/22/2010 03:49 PM, Clint Pachl wrote:

Salvador Fandiño wrote:


Could a random seed be patched into the kernel image at installation
time?
Admittedly this is not entropy, this is just a secret key and anyone
with access to the machine would be able to read it,


How is it different than any other installation file then?


because it is accessible *before* any filesystem is mounted, from second 
0 of the boot process.


- Salva



Re: Allegations regarding OpenBSD IPSEC

2010-12-23 Thread Clint Pachl

Salvador Fandiño wrote:

On 12/23/2010 06:39 AM, Marsh Ray wrote:

On 12/22/2010 03:49 PM, Clint Pachl wrote:

Salvador Fandiño wrote:


Could a random seed be patched into the kernel image at installation
time?
Admittedly this is not entropy, this is just a secret key and anyone
with access to the machine would be able to read it,


How is it different than any other installation file then?


because it is accessible *before* any filesystem is mounted, from 
second 0 of the boot process.




This reminds me of something.

The last time I installed FreeBSD about 5 years ago, it asked me to 
pound on the keyboard for like 60 seconds during installation (or at 
first boot, can't remember) in order to build up some randomness. I 
wonder what kind of entropy that provided?




Re: Allegations regarding OpenBSD IPSEC

2010-12-23 Thread olli hauer
On 2010-12-23 09:44, Clint Pachl wrote:
 Salvador Fandiño wrote:
 On 12/23/2010 06:39 AM, Marsh Ray wrote:
 On 12/22/2010 03:49 PM, Clint Pachl wrote:
 Salvador Fandiño wrote:

 Could a random seed be patched into the kernel image at installation
 time?
 Admittedly this is not entropy, this is just a secret key and anyone
 with access to the machine would be able to read it,

 How is it different than any other installation file then?

 because it is accessible *before* any filesystem is mounted, from second 0 of
 the boot process.


 This reminds me of something.
 
 The last time I installed FreeBSD about 5 years ago, it asked me to pound on 
 the
 keyboard for like 60 seconds during installation (or at first boot, can't
 remember) in order to build up some randomness. I wonder what kind of 
 entropy
 that provided?
 

That was only for the first time sshd started, to gather enough entropy
for the ssh key generation.

http://www.freebsd.org/cgi/cvsweb.cgi/src/etc/rc.d/sshd?rev=1.14;content-type=text%2Fplain



Re: Allegations regarding OpenBSD IPSEC

2010-12-23 Thread Otto Moerbeek
On Thu, Dec 23, 2010 at 10:43:49AM +0100, olli hauer wrote:

 On 2010-12-23 09:44, Clint Pachl wrote:
  Salvador Fandiño wrote:
  On 12/23/2010 06:39 AM, Marsh Ray wrote:
  On 12/22/2010 03:49 PM, Clint Pachl wrote:
  Salvador Fandiño wrote:
 
  Could a random seed be patched into the kernel image at installation
  time?
  Admittedly this is not entropy, this is just a secret key and anyone
  with access to the machine would be able to read it,
 
  How is it different than any other installation file then?
 
  because it is accessible *before* any filesystem is mounted, from second 0 
  of
  the boot process.
 
 
  This reminds me of something.
  
  The last time I installed FreeBSD about 5 years ago, it asked me to pound 
  on the
  keyboard for like 60 seconds during installation (or at first boot, can't
  remember) in order to build up some randomness. I wonder what kind of 
  entropy
  that provided?
  
 
 That was only for the first time sshd started, to gather enough entropy
 for the ssh key generation.
 
 http://www.freebsd.org/cgi/cvsweb.cgi/src/etc/rc.d/sshd?rev=1.14;content-type=text%2Fplain

In our case, the aim is to use the entropy collected during install
from the various entropy sources (tty, disk I/O, network I/O and more) to
generate a random seed that is saved to disk, so that the first real
boot is able to stir the random pool with it and has enough entropy
to generate good host keys.

-Otto



Re: Allegations regarding OpenBSD IPSEC

2010-12-23 Thread Kurt Knochner
2010/12/23 Clint Pachl pa...@ecentryx.com:
 The last time I installed FreeBSD about 5 years ago, it asked me to pound on
 the keyboard for like 60 seconds during installation (or at first boot,
 can't remember) in order to build up some randomness. I wonder what kind
 of entropy that provided?

Run it through a hash function and it's a good value. Patch that value
into the kernel and it's available from the start of the kernel. Then
use that value as the key for an HMAC to hash time values (and other
entropy). Do all that and you have a good seed for a PRNG:
unpredictable, different every time, different on all systems.

Regards
Kurt Knochner

http://knochner.com



Re: Allegations regarding OpenBSD IPSEC

2010-12-23 Thread Marsh Ray

On 12/23/2010 04:39 AM, Kurt Knochner wrote:

2010/12/22 Marsh Rayma...@extendedsubset.com:

In any case, generic statistical tests might detect really
horrible brokenness but they're are not the thing to certify CSRNGs
with.


Really? So, how do you certify the IMPLEMENTATION (bold, not
shouting) of a CSRNG (not the theoretical design)?


'Certify' means different things to different people of course. Most
professionals don't insist on having the implementations that they use
formally certified, but some do. For example, Firefox has a 'FIPS Mode'

https://developer.mozilla.org/en/NSS/FIPS_Mode_-_an_explanation

But I've never heard of anybody using it unless they have to.

It's a really good question: how do you prove that something is
unpredictable?

In the US, the relevant agency is NIST. They coordinate and adopt
standards for deterministic and non-deterministic pseudorandom number
generation. (There are some really fascinating documents on their site.)

NIST ran the competition which chose AES and are currently running one
to select SHA-3. They have some people who know a bit about the subject:
http://csrc.nist.gov/staff/rolodex/kelsley_john.html

NIST publishes some stuff about random generation from their statistical
engineering division:
http://itl.nist.gov/div898/pubs/ar/ar1998/node6.html
http://www.itl.nist.gov/div898/pubs/ar/ar2000/node9.html

But the computer security division covers the cryptographic side:
http://csrc.nist.gov/groups/ST/toolkit/random_number.html
http://csrc.nist.gov/groups/ST/toolkit/rng/index.html

They are careful to point out the distinction between statistical 
testing and cryptanalysis:

These tests may be useful as a first step in determining whether or
not a generator is suitable for a particular cryptographic
application. However, no set of statistical tests can absolutely
certify a generator as appropriate for usage in a particular
application, i.e., statistical testing cannot serve as a substitute
for cryptanalysis.


It looks like the FIPS standards are what cover the certification of an 
actual cryptographic module implementation.

http://csrc.nist.gov/groups/STM/cmvp/inprocess.html

So the process would involve an approved design, and you would have to
submit your implementation to a NIST-accredited Cryptographic and
Security Testing laboratory for testing.


You can probably find some war stories about that process if you search
around online.


- Marsh



Re: Allegations regarding OpenBSD IPSEC

2010-12-23 Thread Renzo
 How much did you get?
 Is it safe for the boot process to generate keys now?

If you can only read.



Re: Allegations regarding OpenBSD IPSEC

2010-12-22 Thread Otto Moerbeek
On Tue, Dec 21, 2010 at 07:45:09PM +0100, Kurt Knochner wrote:

 In libc, the rc4 state is only initialized once, at the first call of
 arc4_stir(), and then there are consecutive calls to arc4_addrandom(),
 which is the equivalent of rc4_crypt(). So there is a difference in the
 implementations. Maybe this is just due to different authors.

There's also a different purpose. See below.

 
 First question: Which one is the 'correct' implementation, as proposed in
 Applied Cryptography (hint in libc - arc4random.c)?
 Second question: Does it matter if the implementation is different than the
 one in Applied Cryptography?


Applied Cryptography only has a sketch. Details have to be filled in.

In summary, the kernel arc4 is periodically reseeded completely with
bytes from the entropy pool, while the libc arc4 is seeded once, with
bytes from the kernel arc4, at first use after process startup, and then
stirred with a sequence of random bytes obtained from the kernel after
every x bytes produced.

I can maybe guess why it is this way, but I'd like a knowledgeable
person to comment on this.

Note that the userland arc4 IS reseeded after an exec and stirred
extra in the child on fork, probably to avoid leaking key state to new
processes.

-Otto



Re: Allegations regarding OpenBSD IPSEC

2010-12-22 Thread Otto Moerbeek
On Wed, Dec 22, 2010 at 08:28:51AM +0300, Vadim Zhukov wrote:

 On 21 December 2010 G. 22:59:22 Theo de Raadt wrote:
  Go look at the function random_seed() in /usr/src/etc/rc
 
 And it's definitely worth looking... Patch below.

Believe it or not, but this diff was already circulating among developers
a few days ago.

-Otto

 
 --
   Best wishes,
 Vadim Zhukov
 
 A: Because it messes up the order in which people normally read text.
 Q: Why is top-posting such a bad thing?
 A: Top-posting.
 Q: What is the most annoying thing in e-mail?
 
 
 Index: rc
 ===================================================================
 RCS file: /cvs/src/etc/rc,v
 retrieving revision 1.345
 diff -u -p -r1.345 rc
 --- rc	8 Nov 2010 19:44:36 -0000	1.345
 +++ rc	22 Dec 2010 05:25:37 -0000
 @@ -102,14 +102,12 @@ wsconsctl_conf()
  random_seed()
  {
  	if [ -f /var/db/host.random -a X$random_seed_done = X ]; then
 -		dd if=/var/db/host.random of=/dev/urandom bs=1024 count=64 \
 -		    >/dev/null 2>&1
  		dd if=/var/db/host.random of=/dev/arandom bs=1024 count=64 \
  		    >/dev/null 2>&1
 
  		# reset seed file, so that if a shutdown-less reboot occurs,
  		# the next seed is not a repeat
 -		dd if=/dev/urandom of=/var/db/host.random bs=1024 count=64 \
 +		dd if=/dev/arandom of=/var/db/host.random bs=1024 count=64 \
  		    >/dev/null 2>&1
 
  		random_seed_done=1
 @@ -312,7 +310,7 @@ mount -s /var >/dev/null 2>&1
 
  # if there's no /var/db/host.random, make one through /dev/urandom
  if [ ! -f /var/db/host.random ]; then
 -	dd if=/dev/urandom of=/var/db/host.random bs=1024 count=64 \
 +	dd if=/dev/arandom of=/var/db/host.random bs=1024 count=64 \
  		>/dev/null 2>&1
  	chmod 600 /var/db/host.random >/dev/null 2>&1
  else



Re: Allegations regarding OpenBSD IPSEC

2010-12-22 Thread Salvador Fandiño

On 12/22/2010 01:46 AM, Theo de Raadt wrote:

2010/12/21 Theo de Raadtdera...@cvs.openbsd.org:

HANG ON.

Go look at the function random_seed() in /usr/src/etc/rc
Then look at when it is called.


so, the current state of the PRNG will be preserved during reboots.


That statement is false.


Good.


No.  You misread the code.


That gives some information about system entropy, which will be
good at all times, except for the very first boot of an
installation. See : rnd.c: randomwrite() -  add_entropy_words();


That part is true.  But what you said earlier is false.


However, arc4_stir will still be called once after every reboot.
During its first call, the value of nanotime() will be placed at the
beginning of buf, which is then being used to init the rc4 context.


What else do you think we should use?  Where do we invent entropy from
when the kernel has only been running for 0.01 of a second?


Could a random seed be patched into the kernel image at installation time?

Admittedly this is not entropy, this is just a secret key and anyone 
with access to the machine would be able to read it, but from the 
outside, especially considering that machines are not rebooted so often 
(and when they are, it is usually to update them), it would look like 
real random data.


- Salva



Re: Allegations regarding OpenBSD IPSEC

2010-12-22 Thread Kurt Knochner
2010/12/22 Theo de Raadt dera...@cvs.openbsd.org:
 Go ahead, do a FIPS check on it.  You will be doing a FIPS check on
 4096 bytes here, then a gap of unknown length, then 4096 bytes here,
 then a gap of unknown length, then 4096 bytes here, then a gap of
 unknown length, 

That's true, if one uses just /dev/arandom (as other consumers will
call arc4random() in the background as well). However, if one changes
the code of arc4random() and arc4random_buf() to emit all generated
random values, we will get the whole sequence, from the very first
byte, no matter which consumer requests data. Reading from
/dev/arandom will then generate the required amount of random values
for the statistical tests, while we can still record all values.

I'll see if I'm able to do that, just for the sake of learning
something about the internals of OpenBSD.

Do you have a hint how I could emit the random values from arc4random
in a clever way? I thought of using an internal buffer and accessing
it through sysctl or another device, e.g. /dev/randstream. The latter
looks more complicated, but will certainly teach me more about OpenBSD
internals.

Regards
Kurt Knochner

http://knochner.com/



Re: Allegations regarding OpenBSD IPSEC

2010-12-22 Thread Joachim Schipper
On Wed, Dec 22, 2010 at 04:29:59PM +0100, Kurt Knochner wrote:
 2010/12/22 Theo de Raadt dera...@cvs.openbsd.org:
  Go ahead, do a FIPS check on it.  You will be doing a FIPS check on
  4096 bytes here, then a gap of unknown length, then 4096 bytes here,
  then a gap of unknown length, then 4096 bytes here, then a gap of
  unknown length, 

 Do you have a hint, how I could emit the random values from arc4random
 in a clever way?

This isn't even remotely clever, but printf() and some base64 encoding
should work fine for a one-off experiment. There *is* a limit to how
much you can print before you fill up the dmesg; if insufficient, try
compiling with a CONFIG.MP_LARGEBUF like this:

---
include "arch/amd64/conf/GENERIC.MP"

option  MSGBUFSIZE=131072
---

You may wish to look at misc/ent.

Joachim



Re: Allegations regarding OpenBSD IPSEC

2010-12-22 Thread Eichert, Diana
-Original Message-
From: owner-t...@openbsd.org [mailto:owner-t...@openbsd.org] On Behalf Of
Joachim Schipper
Subject: Re: Allegations regarding OpenBSD IPSEC

 On Wed, Dec 22, 2010 at 04:29:59PM +0100, Kurt Knochner wrote:

 
  Do you have a hint, how I could emit the random values from arc4random
  in a clever way?

 This isn't even remotely clever, but printf() and some base64 encoding
 should work fine for a one-off experiment. There *is* a limit to how
 much you can print before you fill up the dmesg; if insufficient, try
 compiling with a CONFIG.MP_LARGEBUF like this:

 ---
 include "arch/amd64/conf/GENERIC.MP"

 option  MSGBUFSIZE=131072
 ---

 You may wish to look at misc/ent.

   Joachim

or use syslog(3) to output to your destination of choice.



Re: Allegations regarding OpenBSD IPSEC

2010-12-22 Thread Marsh Ray

On 12/22/2010 09:29 AM, Kurt Knochner wrote:


Do you have a hint, how I could emit the random values from arc4random
in a clever way? I thought of using an internal buffer and accessing
that through sysctl or another device, e.g. /dev/randstream.


You should definitely check out this page if you hadn't already:
http://www.phy.duke.edu/~rgb/General/dieharder.php
The dieharder test suite already comes with input modules for reading 
from system devices and lots of other sources.



The latter
looks more complicated, but will certainly teach me more about OpenBSD
internals.


Well if that's your goal, I think you probably need to patch the kernel 
to DMA the stuff into video RAM and offload the processing of it there. 
:-) Or something else, be creative. Try to write a backdoor


In any case, generic statistical tests might detect really horrible
brokenness, but they're not the thing to certify CSRNGs with. Somehow
people managed to run them on RC4 for years before anyone noticed that
the second byte of output was zero twice as often as it should be.


What could be really useful would be better models of the effective 
entropy contributed by kernel event classes going into the pool.


- Marsh



Re: Allegations regarding OpenBSD IPSEC

2010-12-22 Thread Clint Pachl

Salvador Fandiño wrote:

On 12/22/2010 01:46 AM, Theo de Raadt wrote:

2010/12/21 Theo de Raadtdera...@cvs.openbsd.org:

HANG ON.

Go look at the function random_seed() in /usr/src/etc/rc
Then look at when it is called.


so, the current state of the PRNG will be preserved during reboots.


That statement is false.


Good.


No.  You misread the code.


That gives some information about system entropy, which will be
good at all times, except for the very first boot of an
installation. See : rnd.c: randomwrite() -  add_entropy_words();


That part is true.  But what you said earlier is false.


However, arc4_stir will still be called once after every reboot.
During its first call, the value of nanotime() will be placed at the
beginning of buf, which is then being used to init the rc4 context.


What else do you think we should use?  Where do we invent entropy from
when the kernel has only been running for 0.01 of a second?


Could a random seed be patched into the kernel image at installation 
time?


Admittedly this is not entropy, this is just a secret key and anyone 
with access to the machine would be able to read it, but from the 
outside, especially considering that machines are not rebooted so often 
(and when they are, it is usually to update them), it would look 
like real random data.




Now that it's amateur suggestion hour (no offense Salva), I'm going to 
take a shot.


Would it be possible to use what randomness the system does have to seed 
some reader that pseudo-randomly reads arbitrary bits from the loaded 
kernel image in RAM?


This may differ per system, but doesn't uninitialized RAM start in an 
unknown state? If so, could that be added to the entropy pool if it is 
determined to be random (i.e. not initialized to zeros)?




Re: Allegations regarding OpenBSD IPSEC

2010-12-22 Thread Marsh Ray

On 12/22/2010 03:49 PM, Clint Pachl wrote:

Salvador Fandiño wrote:


Could a random seed be patched into the kernel image at installation
time?
Admittedly this is not entropy, this is just a secret key and anyone
with access to the machine would be able to read it,


How is it different than any other installation file then?


Now that it's amateur suggestion hour (no offense Salva), I'm going to
take a shot.

Would it be possible to use what randomness the system does have to seed
some reader that pseudo-randomly reads arbitrary bits from the loaded
kernel image in RAM?


Well, whatever you might read will fall into three classes:

1. Bits that are fixed in the kernel image. Obviously these don't count.
2. Bits that can go for long periods without changing.
3. Bits that change frequently. You might as well read these directly.

The challenge with (3) and especially (2) is quantifying how much 
entropy you have actually gathered. If you can't quantify it, it doesn't 
help you turn on the system faster.


This is the main reason why high speed timers are so valuable: you at 
least expect them to change on a regular basis. Of course, they're so 
regular that you have to use the uncertainty of some other event to 
query the timer.


So the timer is just a way to convert timing uncertainty into bytes, 
it's not a source of entropy itself.


If only the universe had more than one time dimension, more entropy 
could be gathered at each event. But many CPUs actually do have more 
than one and they can vary more or less independently.


For example, on i386 and amd64 there's an 'rdtsc' instruction. It has a 
very fast rate (e.g. CPU frequency) and it varies along with CPU 
frequency scaling. Even better, it's notoriously hard to get consistent 
results on multicore platforms. Unfortunately, I just read the Wikipedia 
article and they indicate that Intel CPUs are standardizing on giving it 
a fixed relationship to wall clock time.



This may differ per system, but doesn't uninitialized RAM start in an
unknown state? If so, could that be added to the entropy pool if it is
determined to be random (i.e. not initialized to zeros)?


Sure.
But it depends on all kinds of physical factors.

How much did you get?
Is it safe for the boot process to generate keys now?

Answering those questions accurately is more important than gathering 
bonus entropy that you can't be sure of.


- Marsh



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Kurt Knochner
Hi,

upfront: sorry for double posting!! Some people told me that I should send
my findings directly to the list instead of a link. Sorry if I violated the
netiquette on the list!

So, here we go again (text from the forum where I posted it).

regarding the allegations about a backdoor being planted into OpenBSD, I
did a code review myself and I believe that I've found two bugs in the PRNG
code. I'm NOT saying that this is the backdoor or even part of the backdoor.
I'm not even saying that these two bugs create a weakness in the PRNG
itself, however the two bugs just don't look good and possibly need more
investigation!!

Here we go...

OpenBSD uses arc4random() and arc4random_buf() all over the code to generate
random numbers. This code is defined in src/sys/dev/rnd.c.

Within arc4random() and arc4random_buf() the code flow is like this:

arc4random -> arc4maybeinit -> arc4_stir

arc4_stir() will be called at least every 10 minutes, as a timer is set
within arc4maybeinit() that resets the variable 'arc4random_initialized'
(see below).

 static void
 arc4maybeinit(void)
 {

 if (!arc4random_initialized) {
 #ifdef DIAGNOSTIC
 if (!rnd_attached)
 panic("arc4maybeinit: premature");
 #endif
 arc4random_initialized++;
 arc4_stir();
 /* 10 minutes, per dm@'s suggestion */
 timeout_add_sec(&arc4_timeout, 10 * 60);
 }
 }

Now, let's have a look at arc4_stir().

 arc4_stir(void)
 {
 u_int8_t buf[256];
 int len;

 nanotime((struct timespec *) buf);
 len = sizeof(buf) - sizeof(struct timespec);
 get_random_bytes(buf + sizeof (struct timespec), len);
 len += sizeof(struct timespec);

 mtx_enter(&rndlock);
 if (rndstats.arc4_nstirs > 0)
 rc4_crypt(&arc4random_state, buf, buf, sizeof(buf));

 rc4_keysetup(&arc4random_state, buf, sizeof(buf));
 arc4random_count = 0;
 rndstats.arc4_stirs += len;
 rndstats.arc4_nstirs++;

/*
 * Throw away the first N words of output, as suggested in the
 * paper "Weaknesses in the Key Scheduling Algorithm of RC4"
 * by Fluhrer, Mantin, and Shamir.  (N = 256 in our case.)
 */
rc4_skip(&arc4random_state, 256 * 4);
mtx_leave(&rndlock);

 }

This initializes the RC4 context with some random data, gathered from system
entropy, mainly by get_random_bytes().

== Bug #1

HOWEVER: Have a look at the buffer that's being used as a seed for the RC4
key setup. It's being filled with the random data, BUT at the beginning it
will be filled with just the value of nanotime().

nanotime((struct timespec *) buf);
len = sizeof(buf) - sizeof(struct timespec);
get_random_bytes(buf + sizeof (struct timespec), len);
len += sizeof(struct timespec);


So, there is a lot of effort in get_random_bytes() to get real random data
for the buffer, and then the value of nanotime() is prepended to the buffer?
That does not look right. Please consider: this buffer will be used as the key
for rc4_keysetup() and thus it should contain unrelated and unpredictable
data.

== Bug #2

The function rc4_crypt() gets called as soon as rndstats.arc4_nstirs > 0.
This will be the case whenever arc4_stir gets called the second time (by
the timer reset - see above).

if (rndstats.arc4_nstirs > 0)
rc4_crypt(&arc4random_state, buf, buf, sizeof(buf));

rc4_keysetup(&arc4random_state, buf, sizeof(buf));
arc4random_count = 0;
rndstats.arc4_stirs += len;
rndstats.arc4_nstirs++;

HOWEVER, right after the call of rc4_crypt(), we call rc4_keysetup() with
the same 'arc4random_state'. This makes the call to rc4_crypt() useless, as
the data structure will be overwritten again with the init data of the RC4
function.

AGAIN: I'm not saying that this is part of the backdoor nor that it weakens
the PRNG. HOWEVER, this does not look right and leaves me with a bad
feeling!

I think we will need some investigation into the effect of these two bugs
on PRNG quality.

Regards
Kurt Knochner

http://knochner.com/
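
The layout Kurt objects to in Bug #1 can be reproduced in userland. The sketch below uses an invented stub (`fake_get_random_bytes()` fills with a marker byte) so the split between timer-derived and pool-derived key bytes is visible; it mirrors only the buffer layout of arc4_stir(), nothing else.

```c
#include <stdint.h>
#include <string.h>
#include <time.h>

/* Invented stub standing in for the kernel's get_random_bytes(): fills the
 * region with a fixed marker so the layout is easy to inspect. */
static void fake_get_random_bytes(void *p, size_t n)
{
	memset(p, 0xAA, n);
}

/* Mirrors the key layout arc4_stir() builds: the nanotime() value sits at
 * the front of the 256-byte key, entropy-pool bytes fill the rest. */
static void build_key(uint8_t buf[256], const struct timespec *ts)
{
	memcpy(buf, ts, sizeof(*ts));
	fake_get_random_bytes(buf + sizeof(*ts), 256 - sizeof(*ts));
}
```

Only sizeof(struct timespec) of the 256 key bytes are timer-derived, which is why later replies argue the prefix is better than zeros rather than a weakness.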



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Otto Moerbeek
On Tue, Dec 21, 2010 at 05:59:33PM +0100, Kurt Knochner wrote:

 Hi,
 
 upfront: sorry for double posting!! Some people told me that I should send
 my findings directly to the list instead of a link. Sorry if I violated the
 netiquette on the list!
 
 So, here we go again (text from the forum where I posted it).
 
 regarding the allegations about a backdoor being planted into OpenBSD, I
 did a code review myself and I believe that I've found two bugs in the PRNG
 code. I'm NOT saying that this is the backdoor or even part of the backdoor.
 I'm not even saying that these two bugs create a weakness in the PRNG
 itself, however the two bugs just don't look good and possibly need more
 investigation!!
 
 Here we go...
 
 OpenBSD uses arc4random() and arc4random_buf() all over the code to generate
 random numbers. This code is defined in src/sys/dev/rnd.c.
 
 Within arc4random() and arc4random_buf() the code flow is like this:
 
 arc4random -> arc4maybeinit -> arc4_stir
 
 arc4_stir() will be called at least every 10 minutes, as a timer is set
 within arc4maybeinit() that resets the variable 'arc4random_initialized'
 (see below).
 
  static void
  arc4maybeinit(void)
  {
 
  if (!arc4random_initialized) {
  #ifdef DIAGNOSTIC
  if (!rnd_attached)
  panic("arc4maybeinit: premature");
  #endif
  arc4random_initialized++;
  arc4_stir();
  /* 10 minutes, per dm@'s suggestion */
  timeout_add_sec(&arc4_timeout, 10 * 60);
  }
  }
 
 Now, let's have a look at arc4_stir().
 
  arc4_stir(void)
  {
  u_int8_t buf[256];
  int len;
 
  nanotime((struct timespec *) buf);
  len = sizeof(buf) - sizeof(struct timespec);
  get_random_bytes(buf + sizeof (struct timespec), len);
  len += sizeof(struct timespec);
 
  mtx_enter(&rndlock);
 if (rndstats.arc4_nstirs > 0)
 rc4_crypt(&arc4random_state, buf, buf, sizeof(buf));
 
 rc4_keysetup(&arc4random_state, buf, sizeof(buf));
 arc4random_count = 0;
 rndstats.arc4_stirs += len;
  rndstats.arc4_nstirs++;
 
 /*
  * Throw away the first N words of output, as suggested in the
  * paper "Weaknesses in the Key Scheduling Algorithm of RC4"
  * by Fluhrer, Mantin, and Shamir.  (N = 256 in our case.)
  */
 rc4_skip(&arc4random_state, 256 * 4);
 mtx_leave(&rndlock);
 
  }
 
 This initializes the RC4 context with some random data, gathered from system
 entropy, mainly by get_random_bytes().
 
 == Bug #1
 
 HOWEVER: Have a look at the buffer that's being used as a seed for the RC4
 key setup. It's being filled with the random data, BUT at the beginning it
 will be filled with just the value of nanotime().
 
 nanotime((struct timespec *) buf);
 len = sizeof(buf) - sizeof(struct timespec);
 get_random_bytes(buf + sizeof (struct timespec), len);
 len += sizeof(struct timespec);
 
 
 So, there is a lot of effort in get_random_bytes() to get real random data
 for the buffer and then the value of nanotime() is prepended to the buffer?
 That does not look right. Please consider: this buffer will be used as key
 for  rc4_keysetup() and thus it should contain unrelated and unpredictable
 data.

I don't know the answer to this question, but my guess is that the
buffer is filled by nanotime() to cover the case that
get_random_bytes() does not have enough entropy available, so at least
some non-constant data is used. 

 
 == Bug #2
 
 The function rc4_crypt() gets called as soon as rndstats.arc4_nstirs > 0.
 This will be the case whenever arc4_stir gets called the second time (by
 the timer reset - see above).
 
 if (rndstats.arc4_nstirs > 0)
 rc4_crypt(&arc4random_state, buf, buf, sizeof(buf));
 
 rc4_keysetup(&arc4random_state, buf, sizeof(buf));
 arc4random_count = 0;
 rndstats.arc4_stirs += len;
 rndstats.arc4_nstirs++;
 
 HOWEVER, right after the call of rc4_crypt(), we call rc4_keysetup() with
 the same 'arc4random_state'. This makes the call to rc4_crypt() useless, as
 the data structure will be overwritten again with the init data of the RC4
 function.

rc4_crypt() changes both the state and the contents of buf, since buf is
used both as source and destination. That buf is then used by rc4_keysetup()
to create a new state. So indeed the state is overwritten, but the
contents of buf produced by rc4_crypt() are used to do that. So both
calls serve their purpose.

-Otto

 
 AGAIN: I'm not saying that this is part of the backdoor nor that it weakens
 the PRNG. HOWEVER, this does not look right and leaves some bad feeling for
 me!
 
 I think we will need some investigation on the effect of PRNG quality caused
 by these two bugs.
 
 Regards
 Kurt Knochner
 
 http://knochner.com/
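
Otto's point, that rc4_crypt() rewrites buf in place before rc4_keysetup() consumes it, can be checked with a toy textbook RC4 (written for this example; it is not the kernel's implementation).

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy textbook RC4, for illustration only. */
struct rc4 { uint8_t s[256], i, j; };

static void toy_rc4_keysetup(struct rc4 *st, const uint8_t *key, size_t len)
{
	st->i = st->j = 0;
	for (int i = 0; i < 256; i++)
		st->s[i] = (uint8_t)i;
	for (int i = 0, j = 0; i < 256; i++) {
		j = (j + st->s[i] + key[i % len]) & 0xff;
		uint8_t t = st->s[i]; st->s[i] = st->s[j]; st->s[j] = t;
	}
}

static void toy_rc4_crypt(struct rc4 *st, const uint8_t *src, uint8_t *dst,
    size_t n)
{
	for (size_t k = 0; k < n; k++) {
		st->i = (st->i + 1) & 0xff;
		st->j = (st->j + st->s[st->i]) & 0xff;
		uint8_t t = st->s[st->i];
		st->s[st->i] = st->s[st->j];
		st->s[st->j] = t;
		dst[k] = src[k] ^ st->s[(st->s[st->i] + st->s[st->j]) & 0xff];
	}
}
```

Because buf is both source and destination in the kernel's `rc4_crypt(&arc4random_state, buf, buf, sizeof(buf))`, the bytes handed to the next keysetup depend on the previous cipher state, so the crypt call is not a no-op.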



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Ted Unangst
On Tue, Dec 21, 2010 at 11:59 AM, Kurt Knochner cdowl...@googlemail.com
wrote:
 This initializes the RC4 context with some random data, gathered by system
 enthropy, that is mainly done by get_random_bytes().

 == Bug #1

 HOWEVER: Have a look at the buffer that's beeing used as a seed for the RC4
 key setup. It's beeing filled with the random data, BUT at the beginning it
 will be filled with just the value of nanotime().

Even nanotime is better than all zeros.  It's to ensure the seed
value changes at least a little, even if there are no random bytes.

if (rndstats.arc4_nstirs > 0)
rc4_crypt(&arc4random_state, buf, buf, sizeof(buf));

rc4_keysetup(&arc4random_state, buf, sizeof(buf));
arc4random_count = 0;
rndstats.arc4_stirs += len;
rndstats.arc4_nstirs++;

 HOWEVER, right after the call of rc4_crypt(), we call rc4_keysetup() with
 the same 'arc4random_state'. This makes the call to rc4_crypt() useless, as
 the data structure will be overwritten again with the init data of the RC4
 function.

buf is an input to rc4_keysetup, not an output.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Kurt Knochner
2010/12/21 Theo de Raadt dera...@cvs.openbsd.org

  regarding the allegations about a backdoor being planted into OpenBSD, I
  did a code review myself [...]

 By the way...

 It is unfortunate that it required an allegation of this sort for
 people to get to the point where they stop blindly trusting and
 instead go audit the code

without a 'hint' (true or fake), where would you start auditing the
code? It's just too much.

Now, as I have started with it, I will continue to do so, at least
with the crypto code and PRNG code. However, don't get me wrong. I'm
neither a cryptographer nor have I ever touched the openbsd code
before. I did some patching for BSDI BSD/OS (ages ago), but that's it
with my *bsd code contact.

 But looked at from the half-glass-full side, it is refreshing to see
 people trying!

:-)

BTW: iTWire mentions that two bugs have been found in the crypto
code. Where can I find details on those bugs?

http://www.itwire.com/opinion-and-analysis/open-sauce/43995-openbsd-backdoor-claims-code-audit-begins

Regards
Kurt Knochner

http://knochner.com/



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Otto Moerbeek
On Tue, Dec 21, 2010 at 07:45:09PM +0100, Kurt Knochner wrote:

 2010/12/21 Otto Moerbeek o...@drijf.net
 
   So, there is a lot of effort in get_random_bytes() to get real random
 data
   for the buffer and then the value of nanotime() is prepended to the
 buffer?
   That does not look right. Please consider: this buffer will be used as
 key
   for  rc4_keysetup() and thus it should contain unrelated and
 unpredictable
   data.
 
  I don't know the answer to this question, but my guess is that the
  buffer is filled by nanotime() to cover the case that
  get_random_bytes() does not have enough entropy available, so at least
  some non-constant data is used.
 
 get_random_bytes() calls extract_entropy() which is a loop around nbytes,
 thus get_random_bytes() will most certainly deliver enough entropy.

extract_entropy() does not guarantee the collected bytes will have
enough entropy. Think about situations very early in machine startup,
where the state of things could be predictable. 

During multi-user startup OpenBSD will stir the random pool to ensure
this does not happen, but there is a window where rnd(4) might not
have collected enough entropy yet. 

For your remarks below I'll have to check some more things; don't have
time for that now.

-Otto

 
 So, why do we need a nanotime value in front of the buffer? I'm just
 thinking about those weaknesses of RC4 if the initial key is not good/strong
 enough?
 
 http://www.rsa.com/rsalabs/node.asp?id=2009
 
 But then, this is just the initialization of the PRNG and not the
 encryption itself, so maybe it has no meaning at all.
 
   HOWEVER, right after the call of rc4_crypt(), we call rc4_keysetup()
 with
   the same 'arc4random_state'. This makes the call to rc4_crypt() useless,
 as
   the data structure will be overwritten again with the init data of the
 RC4
   function.
 
  rc4_crypt() changes both the state and the contents of buf, since buf is
  used both as source and destination. That buf is used by rc4_keysetup()
  to create a new state. So indeed the state is overwritten, but the
  contents of buf produced by rc4_crypt() is used to do that. So both
  calls serve their purpose.
 
 yes, you are right! I did not see the changes to buf in rc4_crypt(). Sorry
 for that!
 
 But still, why is it done this way?
 
 I compared the implementation of arc4_stir with the one in libc
 (src/lib/libc/crypt/arc4random.c). The implementations are somehow
 different.
 
 In libc, the rc4 state is only initialized once, at the first call of
 arc4_stir(), and then there are consecutive calls to arc4_addrandom(), which
 is the equivalent of rc4_crypt(). So, there is a difference in the
 implementation. Maybe this is just due to different authors.
 
 First question: Which one is the 'correct' implementation, as proposed in
 Applied Cryptography (hint in libc - arc4random.c)?
 Second question: Does it matter if the implementation is different than the
 one in Applied Cryptography?
 
 A last thing:
 
 From: src/lib/libc/crypt/arc4random.c
 
 arc4_stir(void)
 {
 snip
 
 /*
  * Discard early keystream, as per recommendations in:
  * http://www.wisdom.weizmann.ac.il/~itsik/RC4/Papers/Rc4_ksa.ps
  */
 for (i = 0; i < 256; i++)
 (void)arc4_getbyte();
 arc4_count = 1600000;
 }
 
 
 The first 256 Bytes will be skipped due to the mentioned paper. Similar code
 exists in rnd.c.
 
 /*
  * Throw away the first N words of output, as suggested in the
  * paper "Weaknesses in the Key Scheduling Algorithm of RC4"
  * by Fluhrer, Mantin, and Shamir.  (N = 256 in our case.)
  */
 rc4_skip(&arc4random_state, 256 * 4);
 mtx_leave(&rndlock);
 
 However here, 1024 bytes (256 * 4) will be skipped. Maybe that's just a
 misinterpretation of what a word is (byte or integer).
 
 Maybe I'm paranoid and see problems where there are none. But then, this is
 part of the crypto code and there should be no open questions about the
 implementation details.
 
 Regards
 Kurt Knochner
 
 http://knochner.com/



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Kurt Knochner
2010/12/21 Kurt Knochner cdowl...@googlemail.com:
 2.) don't forget to check if sizeof(ts) <= sizeof(buf), otherwise you
 will create a buffer overrun.

O.K. this one is not THAT critical, as buf is defined locally as
u_int8_t buf[256]. However, I tend to make those superfluous checks in
my code, just to make sure later changes won't break my logic ;-))

Regards
Kurt Knochner

http://knochner.com/



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Otto Moerbeek
On Tue, Dec 21, 2010 at 08:36:35PM +0100, Kurt Knochner wrote:

 2010/12/21 Otto Moerbeek o...@drijf.net:
  get_random_bytes() calls extract_entropy() which is a loop around nbytes,
  thus get_random_bytes() will most certainly deliver enough entropy.
 
  extract_entropy() does not guarantee the collected bytes will have
  enough entropy. Think about situations very early in machine startup,
  where the state of things could be predictable.
 
 O.K. I see... But isn't the value of nanotime() kind of predictable as well?

Yes, predictable, but different for each call.

-Otto



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Theo de Raadt
 without a 'hint' (true or fake),

Well, the allegations came without any facts pointing at specific code.

At the moment my beliefs are somewhat along these lines:

(a) NETSEC, as a company, was in that peculiar near-DC business
of accepting contracts to do security and anti-security work
from parts of the government.
(b) For context: 1999-2001 was a period where lots of US govt
departments pushed the boundaries, because crypto was moved
from DOD to Commerce so that it could be exported subject
to some limits; the result was that crypto use by private
interests was set to explode, and thus many justifications, not
just technologies, were being invented to let the US Govt
continue wiretapping (they have always been addicted to it). 
(c) Gregory Perry did work at NETSEC, and interviewed and hired Jason
just out of school; by the time Jason started working there
Perry had been evicted from the company, for reasons unknown.
(d) Jason did not work on cryptography specifically since he was
mostly a device driver author, but did touch the ipsec layer
because that layer does IPCOMP as well.  Meaning he touched the
data-flow sides of this code, not the algorithms.
(e) After Jason left, Angelos (who had been working on the ipsec stack
already for 4 years or so, for he was the ARCHITECT and primary
developer of the IPSEC stack) accepted a contract at NETSEC and
(while travelling around the world) wrote the crypto layer that
permits our ipsec stack to hand-off requests to the drivers that
Jason worked on.  That crypto layer contained the half-assed
insecure idea of half-IV that the US govt was pushing at that time.
Soon after his contract was over this was ripped out.  Soon after
this the CBC oracle problem became known as well in published
papers, and ipsec/crypto moved towards random IV generation
(probably not viable before this, since we had lacked a high-quality
speedy PRNG... arc4random).  I do not believe that either of
these two problems, or other problems not yet spotted, are a
result of clear malice.  So far the issues we are digging up are
a function of the time in history.
(f) Both Jason and Angelos wrote much code in many areas that we all
rely on.  Daily.  Outside the ipsec stack.  I forwarded the
allegation which mentions them, but I will continue to find it
hard to point my own fingers at them.  Go read my original mail
for points (a) - (c).
(g) I believe that NETSEC was probably contracted to write backdoors
as alleged.
(h) If those were written, I don't believe they made it into our
tree.  They might have been deployed as their own product.
(i) If such NETSEC projects exists, I don't know if Jason, Angelos or
others knew or participated in such NETSEC projects.
(j) If Jason and Angelos knew NETSEC was in that business, I wish
they had told me.  The project and I might have adjusted ourself
to the situation in some way; don't know exactly how.  With this
view, I do not find Jason's mail to be fully transparent.
(k) I am happy that people are taking the opportunity to audit an
important part of the tree which many had assumed -- for far too
long -- to be safe as it is.

 where would you start auditing the code? It's just too much.

Actually, it is a very small part of the tree.  If we all do our part,
it will get better.  It still won't be perfect.  It is just too big.  But
we've proven that if we start nibbling at a source tree looking for small
bugs or unclear things which need improvement, the results always eventually
pay off.  So I can't suggest any specific place to start.

 Now, as I have started with it, I will continue to do so, at least
 with the crypto code and PRNG code.

After you sent out your mail, at least 10 people went and studied this
code.  I've already found a small bug in a totally different side of
the random subsystem, and am looking at cleaning up a truly ugly function.

That is the best process we can hope for.

  But looked at from the half-glass-full side, it is refreshing to see
  people trying!
 
 :-)
 
 BTW: iTWire mentions, that two bugs have been found in the crypto
 code. Where can I find details on those bugs?
 
 http://www.itwire.com/opinion-and-analysis/open-sauce/43995-openbsd-backdoor-claims-code-audit-begins

These are the first two bugs which were found.  The first one relates
to the CBC oracle problem mentioned earlier (it got fixed by angelos
in the software crypto stack, but the same problem was ignored in all
the drivers jason maintained.  Neither Jason nor Angelos were working for
NETSEC at that time, so I think this was just an accident.  Pretty serious
accident).

CVSROOT:/cvs
Module name:src

Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Kurt Knochner
2010/12/21 Otto Moerbeek o...@drijf.net:
 Yes, predictable, but different for each call.

hm... predictable is not a good term in the domain of a PRNG.

However the time value will not be used by itself. It is part of an
encrypt operation with itself + buf and a previous RC4 state, at least
after the second call to arc4_stir.

So, maybe this has no meaning at all. However, I would recommend checking
this very thoroughly before changing any line of that code. Maybe you'll
add a weakness by removing the time value.

I would recommend doing the following, and I'm trying to do it myself
during the next few days.

1.) Rewrite arc4random() and arc4random_buf() to store all random
values from boot time until the establishment of a few IPSEC tunnels.

2.) Repeat that procedure a few times, i.e. reboot, ipsec, store,
reboot, ipsec, store, etc.

3.) Take all those pseudo random value sequences and feed them into
the NIST test suite for random values (chi-square, diehard, etc.)

4.) Repeat those steps after the removal of the time value from the code.

5.) Try to interpret the outcome of the NIST tests. Maybe other people
(real cryptographers) should help with this last step.

Regards
Kurt Knochner

http://knochner.com/
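
Step 3 of the procedure above can be sketched with a plain chi-square goodness-of-fit test over byte frequencies (a rough stand-in for one test in the NIST suite; the helper name is invented for this example). For n bytes over 256 values the statistic should land near 255, the degrees of freedom, when the input is roughly uniform.

```c
#include <stddef.h>

/* Chi-square goodness-of-fit statistic for byte frequencies against a
 * uniform distribution over 256 values. */
static double chi_square_bytes(const unsigned char *buf, size_t n)
{
	double counts[256] = { 0 };
	double expected = (double)n / 256.0, stat = 0.0;

	for (size_t i = 0; i < n; i++)
		counts[buf[i]] += 1.0;
	for (int v = 0; v < 256; v++) {
		double d = counts[v] - expected;
		stat += d * d / expected;
	}
	return stat;
}
```

As Ted points out elsewhere in the thread, passing such a test only shows good distribution, not unpredictability.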



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Theo de Raadt
 No. Unless you know something I don't, this is voodoo. To do it once might
 add something, but to do it multiple times, with strongly correlated inputs
 seems potentially dangerous. Especially since you are XORing them. Does
 anyone elsewhere in the cryptographic world do something like this?

Yes, there is one other thing that does this.

freebsd md5 crypt -- It back-feeds inner state and outer state in loop
and data-dependent ways; it is a true miracle.

No one should do that.  These cryptographic primitives are only strong
when they are used exactly as intended.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Ted Unangst
On Tue, Dec 21, 2010 at 2:54 PM, Kurt Knochner cdowl...@googlemail.com wrote:
 2.) Repeat that procedure a few times, i.e. reboot, ipsec, store,
 reboot, ipsec, store, etc.

 3.) Take all those pseudo random value sequences and feed them into
 the NIST test suite for random values (chi-square, diehard, etc.)

You are going to need a buttload of samples for your tests to have any
significance, and even then, all you've proven is that the numbers
have a good distribution, not that they are unpredictable.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Joachim Schipper
On Tue, Dec 21, 2010 at 01:33:46PM -0700, Theo de Raadt wrote:
  - Instead of XOR'ing the results of nanotime into the buffer, XOR
MD5(time), MD5(time + 1ns), MD5(time + 2ns) etc into the buffer. This
does not increase entropy, but having more-or-less uncorrelated data
in the entire buffer should make attacks more difficult.
 
 I do not understand what hashing principle you are basing this on.
 
 In essence, md5 doesn't care what is in the buffer, or where it is.
 Placing it at the front, vs massaging it in by hand... Fundamentally
 there is no difference... or is there?

This was based on the following intuition, which has very little to do
with hashing at all:

If our RC4 state is (nanotime_noise || known), an attacker may be able to
predict *most* of the RC4 state through the first couple of rounds
(until nanotime_noise sufficiently interferes with the known state).

It *seems harder* (but I'm not an expert on this kind of thing!) to
predict the first couple of rounds if nanotime_noise is hashed (which
means that you have to re-do the complete calculation for each possible
nanotime_noise, which may not necessarily be the case above), and if
this hashing is used to distribute the noise over the entire initial
state of the cipher (so that no known portion exists).

Again, though, this is just intuition, and it's not wise to trust our
intuition in this kind of thing. I actually *am* a cryptographer, but
I'm quite new at it and a mathematician specializing in a very different
area, so don't take this as gospel. (I'd be willing to spend some more
time looking into this if we consider it important.)

Joachim



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Ted Unangst
On Tue, Dec 21, 2010 at 3:04 PM, Joachim Schipper
joac...@joachimschipper.nl wrote:
 On Tue, Dec 21, 2010 at 08:27:49PM +0100, Kurt Knochner wrote:
 2010/12/21 Joachim Schipper joac...@joachimschipper.nl:
  +   get_random_bytes(buf, sizeof(buf));
 +   nanotime(&ts);
 +   for (i = 0; i < sizeof(ts); i++)
 +   buf[i] ^= ((uint8_t *) &ts)[i];

 I like the idea of using XOR. However, there are two issues:

 1.) if nanotime() was called because of not enough random data from
 get_random_bytes() then you could end up with only the value ts in
 buf. Why not use a md5 hash of the time value (better than the plain
 time value) or better just don't use the time value at all.

 New diff.

 Improvements:
 - document the why of this loop (from Otto's message)
 - Instead of XOR'ing the results of nanotime into the buffer, XOR
  MD5(time), MD5(time + 1ns), MD5(time + 2ns) etc into the buffer. This
  does not increase entropy, but having more-or-less uncorrelated data
  in the entire buffer should make attacks more difficult.

This is way overkill.  Just xor the nanotime into the random bytes,
that's all that's needed.


Joachim

 Index: ../../dev/rnd.c
 ===
 RCS file: /usr/cvs/src/src/sys/dev/rnd.c,v
 retrieving revision 1.104
 diff -u -p -r1.104 rnd.c
 --- ../../dev/rnd.c 21 Nov 2010 22:58:40 -  1.104
 +++ ../../dev/rnd.c 21 Dec 2010 20:01:02 -
 @@ -779,13 +779,29 @@ get_random_bytes(void *buf, size_t nbyte
  static void
  arc4_stir(void)
  {
 -   u_int8_t buf[256];
 -   int len;
 +   u_int8_t buf[256], md5_buf[MD5_DIGEST_LENGTH];
 +   MD5_CTX  md5_ctx;
 +   struct timespec  ts;
 +   int  i, j;

 -   nanotime((struct timespec *) buf);
 -   len = sizeof(buf) - sizeof(struct timespec);
 -   get_random_bytes(buf + sizeof (struct timespec), len);
 -   len += sizeof(struct timespec);
 +   get_random_bytes(buf, sizeof(buf));
 +
 +   /*
 +    * extract_entropy(), and thus get_random_bytes(), may not actually be
 +    * very random early in the startup sequence of some machines. This is
 +    * a desperate attempt to increase the randomness in the pool by mixing
 +    * in nanotime().
 +    */
 +   nanotime(&ts);
 +   KDASSERT(sizeof(buf) % MD5_DIGEST_LENGTH == 0);
 +   for (i = 0; i < sizeof(buf); i += MD5_DIGEST_LENGTH) {
 +   ts.tv_nsec = (ts.tv_nsec + 1) % (1000 * 1000 * 1000);
 +   MD5Init(&md5_ctx);
 +   MD5Update(&md5_ctx, (u_int8_t *) &ts, sizeof(ts));
 +   MD5Final(md5_buf, &md5_ctx);
 +   for (j = 0; j < MD5_DIGEST_LENGTH; j++)
 +   buf[i + j] ^= md5_buf[j];
 +   }

mtx_enter(&rndlock);
if (rndstats.arc4_nstirs > 0)
 @@ -793,7 +809,7 @@ arc4_stir(void)

rc4_keysetup(&arc4random_state, buf, sizeof(buf));
arc4random_count = 0;
 -   rndstats.arc4_stirs += len;
 +   rndstats.arc4_stirs += sizeof(buf);
rndstats.arc4_nstirs++;

/*



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Kjell Wooding
 It *seems harder* (but I'm not an expert on this kind of thing!) to
 predict the first couple of rounds if nanotime_noise is hashed (which
 means that you have to re-do the complete calculation for each possible
 nanotime_noise, which may not necessarily be the case above), and if
 this hashing is used to distribute the noise over the entire initial
 state of the cipher (so that no known portion exists).


Hashing wasn't my objection. Hashing 3 times with data-dependent inputs and
XORing them together was.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Bob Beck
 This was based on the following intuition, which has very little to do
 with hashing at all:



 It *seems harder* (but I'm not an expert on this kind of thing!)



 Again, though, this is just intuition,

.

Then no offense Joachim - stop suggesting it. Intuition like this is
what gets us things like the PHK md5 password scheme.

Look at it - fine, but don't make suggestions based on intuition.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Joachim Schipper
On Tue, Dec 21, 2010 at 01:33:46PM -0700, Theo de Raadt wrote:
 I do not understand what hashing principle you are basing this on.

On closer reflection, neither do I (MD5 in CTR mode? Cute, but not
necessarily a good idea). Can we just pretend I never sent that message?

Joachim



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Nicolas P. M. Legrand
On Tue, Dec 21, 2010 at 12:34:54PM -0700, Theo de Raadt wrote:
 [...] 
 Other more recent commits have come out of this as well.  Just go
 look at the Changelog ..

we're a bit late on the changelog right now, it stops on 5th of
december, gonna work on it very soon, sorry for the delay.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Kurt Knochner
2010/12/21 Ted Unangst ted.unan...@gmail.com:
 On Tue, Dec 21, 2010 at 2:54 PM, Kurt Knochner cdowl...@googlemail.com 
 wrote:
 2.) Repeat that procedure a few times, i.e. reboot, ipsec, store,
 reboot, ipsec, store, etc.

 3.) Take all those pseudo random value sequences and feed them into
 the NIST test suite for random values (chi-square, diehard, etc.)

 You are going to need a buttload of samples for your tests to have any
 significance, and even then, all you've proven is that the numbers
 have a good distribution, not that they are unpredictable.

yes, that's true. However, it's just a starting point. Do we currently
know that they have a good distribution? Is there any documented test
for the quality of the PRNG?
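For illustration, the simplest of the documented tests (the FIPS 140-1 monobit test) looks roughly like this. This is a generic sketch, not code from NIST's or anyone else's suite, and note that passing it only shows a plausible 0/1 distribution, not unpredictability, which is exactly the caveat above:

```c
#include <stddef.h>

/*
 * FIPS 140-1 monobit test: count the one-bits in a 20,000-bit
 * (2,500-byte) sample; the sample passes if the count falls in
 * the open interval (9654, 10346).  A pass only says the bits
 * are plausibly balanced -- it says nothing about predictability.
 */
int
monobit_pass(const unsigned char *buf, size_t len)
{
	size_t i;
	int b, ones = 0;

	if (len != 2500)
		return 0;
	for (i = 0; i < len; i++)
		for (b = 0; b < 8; b++)
			ones += (buf[i] >> b) & 1;
	return ones > 9654 && ones < 10346;
}
```

A buffer of repeated 0xAA bytes passes this test perfectly while being totally predictable, which is why such tests can never certify a PRNG as secure.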

Regards
Kurt Knochner

http://knochner.com



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Ted Unangst
On Tue, Dec 21, 2010 at 4:00 PM, Joachim Schipper
joac...@joachimschipper.nl wrote:
 If our RC4 state is nanotime_noiseknown, an attacker may be able to
 predict *most* of the RC4 state through the first couple of rounds
 (until nanotime_noise sufficiently interferes with the known state).

 It *seems harder* (but I'm not an expert on this kind of thing!) to
 predict the first couple of rounds if nanotime_noise is hashed (which
 means that you have to re-do the complete calculation for each possible
 nanotime_noise, which may not necessarily be the case above), and if
 this hashing is used to distribute the noise over the entire initial
 state of the cipher (so that no known portion exists).

The attacker either knows nanotime or they don't.  If they know it,
they know md5(nanotime) as well.

RC4 is weak sauce and leaks its key in the beginning, but we avoid
that by discarding, so there's no way to tell what the initial state
is except by guessing.  And guessing md5(whatever) is no harder than
guessing whatever.

The md5 step would only be helpful if the initial key to rc4 were then
also used to something *else*, meaning it had some value apart from
being the key.  But it doesn't.
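The "guessing md5(whatever) is no harder than guessing whatever" point can be sketched concretely. The hash below is a stand-in (FNV-1a rather than MD5, and none of these names come from the OpenBSD source); the argument does not depend on which hash is used:

```c
#include <stdint.h>

/* Stand-in hash (64-bit FNV-1a over the 8 bytes of x); the
 * argument works identically for MD5 or any other hash. */
static uint64_t
toyhash(uint64_t x)
{
	uint64_t h = 14695981039346656037ULL;
	int i;

	for (i = 0; i < 8; i++) {
		h ^= (x >> (8 * i)) & 0xff;
		h *= 1099511628211ULL;
	}
	return h;
}

/*
 * Attacker model: the seed is a timestamp known to lie within
 * `window` ticks of `approx`.  Whether the stored state is the
 * seed or hash(seed), recovery is the same brute-force loop over
 * at most `window` candidates -- hashing added zero bits of
 * entropy.  Returns the recovered seed, or 0 if not in range.
 */
uint64_t
recover_seed(uint64_t hashed_state, uint64_t approx, uint64_t window)
{
	uint64_t t;

	for (t = approx; t < approx + window; t++)
		if (toyhash(t) == hashed_state)
			return t;
	return 0;
}
```

Whether the state is the timestamp or its hash, the attacker's work factor is the same: the number of candidate timestamps.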



Re: Improving early randomness (was: Allegations regarding OpenBSD IPSEC)

2010-12-21 Thread Joachim Schipper
On Tue, Dec 21, 2010 at 01:24:55PM -0700, Kjell Wooding wrote:
 MD5(time), MD5(time + 1ns), MD5(time + 2ns) etc into the buffer. This
  does not increase entropy, but having more-or-less uncorrelated data
  in the entire buffer should make attacks more difficult.
 
 No. Unless you know something I don't, This is voodoo. To do it once might
 add something, but to do it multiple times, with strongly correlated inputs
 seems potentially dangerous. Especially since you are XORing them. Does
 anyone elsewhere in the cryptographic world do something like this?
 
 Can you prove there are no statistical weaknesses in MD5 for such inputs?

Note, as has been pointed out to me, that the kernel only relies on the
entropy of nanotime() until we can get some actual data in, i.e. for a
*very* short time. Thus, this whole discussion is probably moot.

Of course I can't prove that MD5 works, but there *is* some actual
reasoning behind the code I sent:

- random XOR anything_uncorrelated is random, so this shouldn't hurt;
- the output of MD5(time) and MD5(time + 1ns) should look very
  different for (practical) hash functions. To the best of my knowledge,
  no vulnerabilities *of this kind* are known in MD5;
- spreading the entropy over the entire key should be preferable to
  concentrating it in a few bits.

That said, the last "should" is not a very strong argument.

I'm not aware of what others do; certainly, no cryptographer will be happy
with a PRNG seeded by a timestamp, so this is not exactly best practice
(probably the best we can do at that time, though).
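Concretely, the proposal amounts to something like the following sketch (with a stand-in 64-bit hash in place of MD5; these names are illustrative, not from the kernel). As noted, the result is still a deterministic function of t, so nothing here adds entropy; it only avoids leaving a raw, known-format timestamp in one region of the state:

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for MD5: a 64-bit FNV-1a of the input value. */
static uint64_t
mixhash(uint64_t x)
{
	uint64_t h = 14695981039346656037ULL;
	int i;

	for (i = 0; i < 8; i++) {
		h ^= (x >> (8 * i)) & 0xff;
		h *= 1099511628211ULL;
	}
	return h;
}

/*
 * XOR H(t), H(t+1), ... across the whole buffer, so that no
 * region of the initial cipher state holds the raw timestamp.
 * Everything XORed in is still a deterministic function of t,
 * so no entropy is gained -- the point conceded above.
 */
void
spread_time(unsigned char *buf, size_t len, uint64_t t)
{
	size_t i;

	for (i = 0; i < len; i++) {
		uint64_t h = mixhash(t + i / 8);
		buf[i] ^= (h >> (8 * (i % 8))) & 0xff;
	}
}
```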

Joachim



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Jason Wright
On Tue, Dec 21, 2010 at 2:30 PM, Kurt Knochner cdowl...@googlemail.comwrote:


 yes, that's true. However, it's just a starting point. Do we currently
 know that they have a good distribution? Is there any documented test
 for the quality of the PRNG?

 Sam from FreeBSD imported my rng tester and hooked it up to FreeBSD's
port of OCF.  It basically just implements the old FIPS tests, but might
save someone some time.

http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/dev/rndtest/

--Jason L. Wright



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Theo de Raadt
 2010/12/21 Theo de Raadt dera...@cvs.openbsd.org:
  HANG ON.
 
  Go look at the function random_seed() in /usr/src/etc/rc
  Then look at when it is called.
 
 so, the current state of the PRNG will be preserved during reboots.

That statement is false.

 Good.

No.  You misread the code.

 That gives some information about system entropy, which will be
 good at all times, except for the very first boot of an
 installation. See rnd.c: randomwrite() -> add_entropy_words();

That part is true.  But what you said earlier is false.

 However, arc4_stir will still be called once after every reboot.
 During its first call, the value of nanotime() will be placed at the
 beginning of buf, which is then being used to init the rc4 context.

What else do you think we should use?  Where do we invent entropy from
when the kernel has only been running for 0.01 of a second?

 So, at first glance it looks like using the value of nanotime() in
 arc4_stir is not necessary at all, as there will always be enough
 system entropy.

False.

On some architectures, some entropy might have been fetched.

On some architectures, the system clock might have been read with enough
accuracy and random time advancement to provide some unknown.

On MOST architectures, the above two are true.

On some they are not.

Soon after mounting, /etc/rc will load a bucketload more entropy (even
on the first boot, I should add, since even the installation process
generates that file).

 At least I would XOR the value of nanotime() to buf,
 instead of just prepending it. MD5 and the like does not seem to be
 necessary, as buf will always contain some good random data.

XOR it?  Why?

Please provide a citation regarding the benefit of XOR'ing feed data
before passing it into MD5 for the purpose of PRNG folding.  Note,
this is the first stage PRNG, and that a second stage kernel-use PRNG
is built on top of the first one, and that a third stage
per-process PRNG is built on top of that.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Theo de Raadt
 On Tue, Dec 21, 2010 at 4:30 PM, Kurt Knochner cdowl...@googlemail.com 
 wrote:
  yes, that's true. However, it's just a starting point. Do we currently
  know that they have a good distribution? Is there any documented test
  for the quality of the PRNG?
 
 You can analyze the numbers coming out of /dev/arandom if you like,
 but the scheme basically depends on the security of rc4, which is
 still widely used.  I realize this is proof by assertion, but if you
 could decode an rc4 stream, that'd be a big deal.

I am so sad.

8 years after the fact, people still forget that our kernel rc4 stream
is cut up among hundreds of consumers.

Go ahead, do a FIPS check on it.  You will be doing a FIPS check on
4096 bytes here, then a gap of unknown length, then 4096 bytes here,
then a gap of unknown length, then 4096 bytes here, then a gap of
unknown length, 

After sharing a single pie with 200 people, you are using statistics
to claim it had no strawberries on it.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Theo de Raadt
 2010/12/21 Kurt Knochner cdowl...@googlemail.com:
  instead of just prepending it. MD5 and the like does not seem to be
  necessary, as buf will always contain some good random data.
 
 I wanted to say: get_random_bytes() will always return enough good
 random values.

That is completely irrelevant because get_random_bytes() is only used
as the *source material* for a RC4-based PRNG.

WE HAVE THREE LAYERS OF PRNG.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Theo de Raadt
 Is there any documented test for the quality of the PRNG?

Are you talking about our use of MD5, or our use of RC4?

If you are talking about our RC4, then there is; I will put it this
way: If our use of RC4 in this exactly-how-a-stream-cipher-works way
is bad, then every other use on this planet of steam ciphers is bad,
and very broken.  We are relying on the base concept.

The idea is that you can initialize a stream cipher with near-crap and
it will work OK for the way we are using it.

If the MD5 stuff we generate is crap, we are still probably more than
OK compared to everyone because we are going further, and doing the
slice/dice everyone-shares on the RC4 output.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Kurt Knochner
2010/12/22 Theo de Raadt dera...@cvs.openbsd.org:
 Is there any documented test for the quality of the PRNG?

 Are you talking about our use of MD5, or our use of RC4?

RC4.

 If you are talking about our RC4, then there is; I will put it this
 way: If our use of RC4 in this exactly-how-a-stream-cipher-works way
 is bad, then every other use on this planet of steam ciphers is bad,
 and very broken.  We are relying on the base concept.

I was just asking if the implementation of the RC4 based PRNG is done
correctly and if there has been a test of the quality of the PRNG
output. It just looked strange to me to seed the algorithm of the
PRNG with a plain time value, though it's just a few bytes at the
beginning of a larger block of data. So, if you believe the
implementation of the PRNG is correct, there is no need to further
analyze this issue.

 The idea is that you can initialize a stream cipher with near-crap and
 it will work OK for the way we are using it.

Right.

 If the MD5 stuff we generate is crap, we are still probably more than
 OK compared to everyone because we are going further, and doing the
 slice/dice everyone-shares on the RC4 output.

I did not say that anything you generate is crap.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Theo de Raadt
 2010/12/22 Theo de Raadt dera...@cvs.openbsd.org:
  2010/12/21 Kurt Knochner cdowl...@googlemail.com:
   instead of just prepending it. MD5 and the like does not seem to be
   necessary, as buf will always contain some good random data.
 
  I wanted to say: get_random_bytes() will always return enough good
  random values.
 
  That is completely irrelevant because get_random_bytes() is only used
  as the *source material* for a RC4-based PRNG.
 
  WE HAVE THREE LAYERS OF PRNG.
 
 so, you are saying that the use of nanotime() in arc4_stir() is irrelevant?
 
 That would be a result I can accept, as I already said: It could mean nothing.

12 to 16 bytes of kind-of-known but not really known data are mixed with
256 - (12 to 16) bytes of data to form the initial state of RC4, which is
then filtered by dropping the first 256 or 256*4 bytes of data as written
in the best paper that exists today.

Is it relevant?
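For reference, the construction being described - build the RC4 state from the mixed buffer, then discard the early keystream whose biases have been analyzed in the literature - is generically known as RC4-drop[n]. A self-contained sketch, not the actual arc4_stir()/arc4random code:

```c
#include <stddef.h>

struct rc4 {
	unsigned char s[256];
	unsigned char i, j;
};

/* Standard RC4 key schedule (KSA). */
void
rc4_keysetup(struct rc4 *ctx, const unsigned char *key, size_t klen)
{
	int n;
	unsigned char j = 0, t;

	for (n = 0; n < 256; n++)
		ctx->s[n] = n;
	for (n = 0; n < 256; n++) {
		j += ctx->s[n] + key[n % klen];
		t = ctx->s[n];
		ctx->s[n] = ctx->s[j];
		ctx->s[j] = t;
	}
	ctx->i = ctx->j = 0;
}

/* One byte of RC4 keystream (PRGA). */
unsigned char
rc4_byte(struct rc4 *ctx)
{
	unsigned char t;

	ctx->i++;
	ctx->j += ctx->s[ctx->i];
	t = ctx->s[ctx->i];
	ctx->s[ctx->i] = ctx->s[ctx->j];
	ctx->s[ctx->j] = t;
	return ctx->s[(unsigned char)(ctx->s[ctx->i] + ctx->s[ctx->j])];
}

/*
 * RC4-drop[n]: key-schedule, then discard the first n keystream
 * bytes, since the early output carries key-schedule biases --
 * the "filtering" step described above.
 */
void
rc4_init_drop(struct rc4 *ctx, const unsigned char *key, size_t klen, int n)
{
	rc4_keysetup(ctx, key, klen);
	while (n-- > 0)
		(void)rc4_byte(ctx);
}
```

For the classic test key "Key" the keystream begins EB 9F 77 81 ...; with n = 2 the first visible byte is 0x77. Dropping bytes hides the biased prefix; it does not add entropy.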



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Kurt Knochner
2010/12/22 Theo de Raadt dera...@cvs.openbsd.org:
 12 to 16 bytes of kind-of-known but not really known data are mixed with
 256 - (12 to 16) bytes of data to form the initial state of RC4, which is
 then filtered by dropping the first 256 or 256*4 bytes of data as written
 in the best paper that exists today.

 Is it relevant?

It's up to you to make that decision. You know the code better than
anybody else.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Matt Connor

.. steam ciphers is bad ...


Steam has much more entropy than a pseudo-number generator, in which 
case our implementation is obsolete.


-Matt



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Kurt Knochner
2010/12/22 Theo de Raadt dera...@cvs.openbsd.org:
 so, the current state of the PRNG will be preserved during reboots.

 That statement is false.

you're right. As you posted in the other thread, the output of the
PRNG is saved during shutdown and that file is loaded as entropy data
during startup.

 No.  You misread the code.

I understood the code, just my description of the process was not
correct (detailed enough).

 However, arc4_stir will still be called once after every reboot.
 During its first call, the value of nanotime() will be placed at the
 beginning of buf, which is then being used to init the rc4 context.

 What else do you think we should use?

I don't know. I just wanted to discuss a possible issue. That's all...

 Where do we invent entropy from when the kernel has only
 been running for 0.01 of a second?

OK, where do you need random bytes during that state of the kernel?
All locations where arc4random* is called in the kernel are these:

src/sys/dev/ic/if_wi.c:         sc->wi_icv = arc4random();
src/sys/dev/ic/if_wi_hostap.c:  arc4random();
src/sys/dev/ic/rt2860.c:        uint32_t val = arc4random();
src/sys/dev/softraid_crypto.c:  arc4random_buf(sd->mds.mdd_crypto.scr_key,
src/sys/dev/softraid_crypto.c:  arc4random_buf(sd->mds.mdd_crypto.scr_maskkey,
src/sys/dev/usb/if_uath.c:      iv = (ic->ic_iv != 0) ? ic->ic_iv : arc4random();
src/sys/dev/usb/ehci.c:         /* XXX prevent panics at boot by not using arc4random */
src/sys/dev/usb/ehci.c:         islot = EHCI_IQHIDX(lev, arc4random());
src/sys/dev/pci/ubsec.c:        arc4random_buf(ses->ses_iv, sizeof(ses->ses_iv));
src/sys/dev/pci/safe.c:         arc4random_buf(ses->ses_iv, sizeof(ses->ses_iv));
src/sys/dev/pci/noct.c:         arc4random_buf(iv, sizeof(iv));
src/sys/dev/pci/if_iwi.c:       arc4random_buf(data, sizeof data);
src/sys/dev/pci/if_ix.c:        arc4random_buf(random, sizeof(random));
src/sys/dev/pci/hifn7751.c:     arc4random_buf(ses->hs_iv,
src/sys/dev/softraid.c:         arc4random_buf(uuid->sui_id, sizeof(uuid->sui_id));

Those in dev/pci are about initializing hardware encryption devices.

The rest of the calls (to the level I checked), will need at least the
root filesystem to load some config data and then init some stuff
(i.e. WEP key generation, etc.).

So, until the filesystem is mounted, there is no need for arc4random()
in the kernel. After the filesystem has been mounted the entropy data
will be loaded from the file. If this is true, where is the need for
the time value in arc4_stir()??

Maybe I'm wrong. If so, please direct me to the code that needs
arc4random() before the filesystem has been mounted, maybe EXCEPT the
hardware crypto devices. Most certainly those drivers don't need
arc4random during kernel init either.

 So, at the first glance it looks like using the value of nanotime() in
 At least I would XOR the value of nanotime() to buf,
 instead of just prepending it. MD5 and the like does not seem to be
 necessary, as buf will always contain some good random data.

 XOR it?  Why?

To fold the plain time value into some other random data returned by
get_random_bytes. If it's a bad idea to stir or fold data that
way, why is MD5 used in extract_entropy() to achieve the same goal?

 Please provide a citation regarding the benefit of XOR'ing feed data
 before passing it into MD5 for the purpose of PRNG folding.

I did not say that. I said that XORing the time value with the data
of get_random_bytes() is probably sufficient and that MD5 would not be
required.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Theo de Raadt
  Where do we invent entropy from when the kernel has only
  been running for 0.01 of a second?
 
 OK, where do you need random bytes during that state of the kernel?
 All locations where arc4random* is called in the kernel are these:

[list of 16]

Unfortunately it looks like you missed a hundred or more.

 The rest of the calls (to the level I checked), will need at least the
 root filesystem to load some config data and then init some stuff
 (i.e. WEP key generation, etc.).

No, there is much more than that.  Processes get started and
initialize their libc-based prng's, as well as other state, including
address space randomization, stack biasing, etc etc.

 So, until the filesystem is mounted, there is no need for arc4random()
 in the kernel.

Totally false.

 After the filesystem has been mounted the entropy data
 will be loaded from the file. If this is true, where is the need for
 the time value in arc4_stir()??

You must not be reading the same code I am.

 Maybe I'm wrong. If so, please direct me to the code that needs
 arc4random() before the filesystem has been mounted

Your approach is wrong.

 I did not say that. I said that XORing the time value with the data
 of get_random_bytes() is probably sufficient and that MD5 would not be
 required.

The MD5 is required.



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Kurt Knochner
2010/12/22 Theo de Raadt dera...@cvs.openbsd.org:
  Where do we invent entropy from when the kernel has only
  been running for 0.01 of a second?

 OK, where do you need random bytes during that state of the kernel?
 All locations where arc4random* is called in the kernel are these:

 [list of 16]

 Unfortunately it looks like you missed a hundred or more.

Damn, you're right. It seems my grep pattern was initialized in the
wrong way (maybe not enough entropy from the user) :-))

 No, there is much more than that.  Processes get started and
 initialize their libc-based prng's, as well as other state, including
 address space randomization, stack biasing, etc etc.

After adjusting my grep pattern, I found several more locations. A lot
of those need the filesystem. However at least one (for sure much
more) is indeed calling arc4random while there is no filesystem
mounted.

So, just forget my theory!

 So, until the filesystem is mounted, there is no need for arc4random()
 in the kernel.

 Totally false.

True (that it's false).

So, I guess the discussion about the use of nanotime() is finished, as
there is common agreement that it has no influence on the PRNG,
right?



Re: Allegations regarding OpenBSD IPSEC

2010-12-21 Thread Martin Toft
On Wed, Dec 22, 2010 at 08:28:51AM +0300, Vadim Zhukov wrote:
  # if there's no /var/db/host.random, make one through /dev/urandom
                                                             ^
  if [ ! -f /var/db/host.random ]; then
 -	dd if=/dev/urandom of=/var/db/host.random bs=1024 count=64 \
 +	dd if=/dev/arandom of=/var/db/host.random bs=1024 count=64 \
 	        ^
 	    >/dev/null 2>&1
 	chmod 600 /var/db/host.random >/dev/null 2>&1
  else



Re: Allegations regarding OpenBSD IPSEC

2010-12-17 Thread Pawel Veselov
On Thu, Dec 16, 2010 at 3:30 PM, Marc Espie es...@nerim.net wrote:
 I'm not going to comment on the mail itself, but I've seen a lot of incredibly
 dubious articles on the net over the last few days.

 - use your brains, people. Just because a guy does say so doesn't mean there's
 a backdoor. Ever heard about FUD ?

 - of course OpenBSD is going to check. Geeez!! what do you think ?

I'm really sorry to pitch in here, but...

The centerpiece of this thread, besides technical details of how/whether to
prove/disprove the so-called accusations, seems to be an argument on
whether Perry's purely FUD'ing, promoting his company/pages, creating
the buzz, or whether his words should be taken for their face value.

I have to say that Perry here is credited with one thing he actually did not
do -- publish this to the world. There has been talk of ulterior motives here,
but for any of these motives, Perry had to know or pretty damn well guessed
that the second thing Theo (hi, Theo) would do to his email was to publish it.
Would you plan anything based on a predicted behavior of a person you
haven't communicated with in 10 years?

This is not to point a finger at Theo for creating all this commotion, of course;
this commotion can, however, be an unintended accident, but the fact that
it came from Theo gave it a lot of credibility.

[skipped]



Re: Allegations regarding OpenBSD IPSEC

2010-12-17 Thread Kevin Chadwick
Does anyone know if there was an ultimate outcome to the investigation
of side channels supposedly put into DSA by the NSA?



Re: Allegations regarding OpenBSD IPSEC

2010-12-17 Thread Theo de Raadt
 On Thu, Dec 16, 2010 at 3:30 PM, Marc Espie es...@nerim.net wrote:
  I'm not going to comment on the mail itself, but I've seen a lot of incredibly
  dubious articles on the net over the last few days.
 
  - use your brains, people. Just because a guy does say so doesn't mean there's
  a backdoor. Ever heard about FUD ?
 
  - of course OpenBSD is going to check. Geeez!! what do you think ?
 
 I'm really sorry to pitch in here, but...
 
 The centerpiece of this thread, besides technical details of how/whether to
 prove/disprove the so-called accusations, seems to be an argument on
 whether Perry's purely FUD'ing, promoting his company/pages, creating
 the buzz, or whether his words should be taken for their face value.

As for promoting his company, someone yesterday showed me this:

http://www.sunbiz.org/scripts/ficidet.exe?action=DETREG&docnum=G09000158184&rdocnum=G09000158184

Look at the line marked Status.

 I have to say that Perry here is credited with one thing he actually did not
  do -- publish this to the world. There has been talk of ulterior motives here,
 but for any of these motives, Perry had to know or pretty damn well guessed
 that  the second thing Theo (hi, Theo) would do to his email was to publish 
 it.
 Would you plan anything based on a predicted behavior of a person you
 haven't communicated with in 10 years?
 
 This is not to point finger at Theo for creating all this commotion, of 
 course;
 this commotion can, however, be, an unintended accident, but the fact that
 it came from Theo gave it a lot of credibility.

Whoa, wait a second here.  If you think I gave it credibility, you
need to go back and read my words again.  I called it an allegation,
and I stick with that.  I was extremely careful with my words, and you
are wrong to interpret them as you do.



Re: Allegations regarding OpenBSD IPSEC

2010-12-17 Thread Marc Espie
On Fri, Dec 17, 2010 at 08:59:13AM -0700, Theo de Raadt wrote:
  This is not to point finger at Theo for creating all this commotion, of 
  course;
  this commotion can, however, be, an unintended accident, but the fact that
  it came from Theo gave it a lot of credibility.
 
 Whoa, wait a second here.  If you think I gave it credibility, you
 need to go back and read my words again.  I called it an allegation,
 and I stick with that.  I was extremely careful with my words, and you
 are wrong to interpret them as you do.

Theo, it's hopeless. Kids these days. Can't read, can't code.

If you write anything, you can be certain they will take it out of context.
They don't understand what a context is.

Heck, they will use the excuse that they're not native speakers to say
they misunderstood.

I mean, why should they make the effort? It's so much easier to take a rumor
out of context, not verify the source, not verify what it says and run
with it.

There's NEVER an excuse for mediocrity.  I'm not a native speaker, Theo
isn't either.  That's not a good reason for not understanding/not writing
English.

That's the same with code, just because you learnt to program with a bad
crowd is no excuse for most of the linux and java code out there. ;-)



Re: Allegations regarding OpenBSD IPSEC

2010-12-17 Thread Pawel Veselov
On Fri, Dec 17, 2010 at 7:59 AM, Theo de Raadt dera...@cvs.openbsd.org
wrote:

[skipped]

  I have to say that Perry here is credited with one thing he actually did not
  do -- publish this to the world. There has been talk of ulterior motives here,
  but for any of these motives, Perry had to know or pretty damn well guessed
  that the second thing Theo (hi, Theo) would do to his email was to publish it.
  Would you plan anything based on a predicted behavior of a person you
  haven't communicated with in 10 years?
 
  This is not to point a finger at Theo for creating all this commotion, of course;
  this commotion can, however, be an unintended accident, but the fact that
  it came from Theo gave it a lot of credibility.

 Whoa, wait a second here.  If you think I gave it credibility, you
 need to go back and read my words again.  I called it an allegation,
 and I stick with that.  I was extremely careful with my words, and you
 are wrong to interpret them as you do.

Look, if somebody like me posted something like this here, it would be just
plain dismissed. If Perry posted his email here, he'd just be under fire to
show some or any proof. The reason this was so widely picked up
and generated so much flame and buzz, is because you posted it here.
It's an unfortunate consequence of a right action, really. I'm not even
remotely saying that you intended to give it weight, or that you
should've swept it under the rug.



Re: Allegations regarding OpenBSD IPSEC

2010-12-17 Thread Theo de Raadt
 On Fri, Dec 17, 2010 at 7:59 AM, Theo de Raadt dera...@cvs.openbsd.org wrote:
 
 [skipped]
 
   I have to say that Perry here is credited with one thing he actually did not
   do -- publish this to the world. There has been talk of ulterior motives here,
   but for any of these motives, Perry had to know or pretty damn well guessed
   that the second thing Theo (hi, Theo) would do to his email was to publish it.
   Would you plan anything based on a predicted behavior of a person you
   haven't communicated with in 10 years?
  
   This is not to point a finger at Theo for creating all this commotion, of course;
   this commotion can, however, be an unintended accident, but the fact that
   it came from Theo gave it a lot of credibility.
  
  Whoa, wait a second here.  If you think I gave it credibility, you
  need to go back and read my words again.  I called it an allegation,
  and I stick with that.  I was extremely careful with my words, and you
  are wrong to interpret them as you do.
 
 Look, if somebody like me posted something like this here, it would be just
 plain dismissed.

If that is the case -- that people would dismiss it automatically --
then the community is really stupid.  You are almost arguing that that
is the way it should be.

Allegation or not, code should always be checked, and re-checked, and
re-checked.

What I am seeing is that we have a ridiculously upside-down trust
model -- Trust the developers.

We never asked for people to trust us.  We might have earned some in
some people's eyes, but if so it has always been false, even before
this.  People should trust what they test, but the world has become
incredibly lazy.

We build this stuff by trusting each other as friends, and that is
done on an international level.  If anything, the layers and volume of
trust involved in software development should decrease trust. Oh
right, let's hear some of that many eyes crap again.  My favorite
part of the many eyes argument is how few bugs were found by the two
eyes of Eric (the originator of the statement).  All the many eyes are
apparently attached to a lot of hands that type lots of words about
many eyes, and never actually audit code.

If anything, the collaborative model we use should _decrease_ trust,
except, well, unless you compare it to the other model -- corporate
software -- where they don't even start from any position of trust.
There you are trusting the money, here you are trusting people I've
never met.

 If Perry posted his email here, he'd just be under fire to
 show some or any proof.

OK, so I post it, and then no one asks him for proof, now it suddenly
has more strength?  I am so bloody disappointed in the community that
uses our stuff.

 The reason this was so widely picked up
 and generated so much flame and buzz, is because you posted it here.

How dismal.

 It's an unfortunate consequence of a right action, really. I'm not even
 remotely saying that you intended to give it weight, or that you
 should've swept it under the rug.

What a dismal world view.



Re: Allegations regarding OpenBSD IPSEC

2010-12-17 Thread Maxim Bourmistrov
Theo,
this thread is DEAD. Drop it.

No one believes in backdoors planted into OpenBSD.

I see commits - you dig all over the place.
If a backdoor existed, then it is gone because of this digging.

Without proof it's just plain BS.

P.S.
I lost interest a while ago.





Re: Allegations regarding OpenBSD IPSEC

2010-12-17 Thread Daniel E. Hassler
I agree with Marc - it's hopeless. We live in a world where spin is
king. Anything you say can and will be twisted against you.


On 12/17/10 9:39 AM, Marc Espie wrote:

On Fri, Dec 17, 2010 at 08:59:13AM -0700, Theo de Raadt wrote:

This is not to point finger at Theo for creating all this commotion, of course;
this commotion can, however, be, an unintended accident, but the fact that
it came from Theo gave it a lot of credibility.

Whoa, wait a second here.  If you think I gave it credibility, you
need to go back and read my words again.  I called it an allegation,
and I stick with that.  I was extremely careful with my words, and you
are wrong to interpret them as you do.

Theo, it's hopeless. Kids these days. Can't read, can't code.

If you write anything, you can be certain they will take it out of context.
They don't understand what a context is.

Heck, they will use the excuse that they're not native speakers to say
they misunderstood.

I mean, why should they make the effort ? it's so easier to take a rumor
out of context, not verify the source, not verify what it says and run
with it.

There's NEVER an excuse for mediocrity.  I'm not a native speaker, Theo
isn't either.  That's not a good reason for not understanding/not writing
english.

That's the same with code, just because you learnt to program with a bad
crowd is no excuse for most of the Linux and Java code out there. ;-)




Re: Allegations regarding OpenBSD IPSEC

2010-12-17 Thread Siju George
On Fri, Dec 17, 2010 at 11:39 PM, Pawel Veselov pawel.vese...@gmail.com wrote:

  Whoa, wait a second here.  If you think I gave it credibility, you
  need to go back and read my words again.  I called it an allegation,
  and I stick with that.  I was extremely careful with my words, and you
  are wrong to interpret them as you do.

 Look, if somebody like me posted something like this here, it would be just
 plain dismissed.


So it is good that Theo posted it here.
He is serious about this allegation, and serious about proving whether
it is true or false.
He has opened the invitation to anyone in order to achieve the objective.
And he and others are doing the needful, the outcome of which you will
be able to see in a couple of days or more.

 If Perry posted his email here, he'd just be under fire to
 show some or any proof.


Well, maybe Theo does not feel the urge to push responsibility away onto others.
Being the project leader, he is doing just what a responsible and
accountable person would do.

The reason this was so widely picked up
 and generated so much flame and buzz, is because you posted it here.


So would you prefer he kept it secret?

 It's an unfortunate consequence of a right action, really. I'm not even
 remotely saying that you intended to give it weight, or that you
 should've swept it under the rug.


Then what are you trying to say?

thanks

--Siju



Re: Allegations regarding OpenBSD IPSEC

2010-12-16 Thread Joachim Schipper
On Wed, Dec 15, 2010 at 07:04:27PM +, Kevin Chadwick wrote:
 Jason L. Wright ja...@thought.net wrote:
 I cannot fathom his motivation for writing such falsehood

 The real work on OCF did not begin in earnest until February 2000.
 
 I can't see how this gives you credibility but maybe the people who
 worked with you at the time can understand how your evidence supports
 what you say.

While the whole thing is most likely FUD, Perry did say

  Jason Wright and several other developers were responsible for those
  backdoors, and you would be well advised to review any and all code
  commits by Wright as well as the other developers he worked with
  originating from NETSEC.

so it's not like Jason is the only one.

Joachim



Re: Allegations regarding OpenBSD IPSEC

2010-12-16 Thread Marc Espie
I'm not going to comment on the mail itself, but I've seen a lot of incredibly
dubious articles on the net over the last few days.

- use your brains, people. Just because a guy says so doesn't mean there's
a backdoor.  Ever heard of FUD?

- of course OpenBSD is going to check. Geeez!! what do you think ?

- why would OpenBSD be in trouble? Where do you think *all the other IPsec
implementations* come from? (Hint: 10 years ago, what was the USofA view on
cryptography exports? Where is OpenBSD based? Second hint: Canada != USofA).

- why would the FBI only target OpenBSD? If there's a backdoor in OpenBSD,
which hosts some of the most paranoid open source developers alive, what do
you think is the likelihood that similar backdoors exist in, say, Windows, or
MacOS, or Linux (check where their darn IPsec code comes from, damn it!)?


I know that a lot of the guys reading tech@ are intelligent enough to *know*
all the rather obvious things I'm stating here, but it's looking like a lot
of stupid, stupid web sites are using this as their *only* source of
information, and do not engage their brains: if you read French, go check
http://www.macgeneration.com/news/voir/180982/un-systeme-espion-du-fbi-dans-openbsd
and be amazed at how clueless those writers are.

Just on the off chance that those idiots will read this, and realize how
stupid their generalizations are. Theo was careful enough to state facts,
and I'm a huge fan of what he's done (he's decided to go fully open with
this, which was a tough decision).
I don't see why this would impact OpenBSD negatively without affecting any
other OS... especially until we actually get proof...



Re: Allegations regarding OpenBSD IPSEC

2010-12-16 Thread Brandon Mercer
I about talked myself out of believing that this happened after explaining
this to a cow-orker today. They were quite surprised I'd buy into something
this speculative and far-fetched at all. After listening to him generalize
it back to me it seems even sillier.
Brandon
On Dec 16, 2010 6:34 PM, Marc Espie es...@nerim.net wrote:
 I'm not going to comment on the mail itself, but I've seen a lot of
incredibly
 dubious articles on the net over the last few days.

 - use your brains, people. Just because a guy says so doesn't mean there's
 a backdoor. Ever heard of FUD?

 - of course OpenBSD is going to check. Geeez!! what do you think ?

 - why would OpenBSD be in trouble? Where do you think *all the other IPsec
 implementations* come from? (Hint: 10 years ago, what was the USofA view on
 cryptography exports? Where is OpenBSD based? Second hint: Canada != USofA).

 - why would the FBI only target OpenBSD? If there's a backdoor in OpenBSD,
 which hosts some of the most paranoid open source developers alive, what do
 you think is the likelihood that similar backdoors exist in, say, Windows, or
 MacOS, or Linux (check where their darn IPsec code comes from, damn it!)?


 I know that a lot of the guys reading tech@ are intelligent enough to *know*
 all the rather obvious things I'm stating here, but it's looking like a lot
 of stupid, stupid web sites are using this as their *only* source of
 information, and do not engage their brains: if you read French, go check

http://www.macgeneration.com/news/voir/180982/un-systeme-espion-du-fbi-dans-openbsd
 and be amazed at how clueless those writers are.

 Just on the off chance that those idiots will read this, and realize how
 stupid their generalizations are. Theo was careful enough to state facts,
 and I'm a huge fan of what he's done (he's decided to go fully open with
 this, which was a tough decision).
 I don't see why this would impact OpenBSD negatively without affecting any
 other OS... especially until we actually get proof...



Re: Allegations regarding OpenBSD IPSEC

2010-12-16 Thread Rod Whitworth
On Fri, 17 Dec 2010 00:30:27 +0100, Marc Espie wrote:

 if you read french, go check
http://www.macgeneration.com/news/voir/180982/un-systeme-espion-du-fbi-dans-openbsd
and be amazed at how clueless those writers are.

Gee, even the google page translation makes it clearer than my rusty
français (à mon école secondaire de trop nombreuses années il y a).

Thanks for the laughs, Marc.
*** NOTE *** Please DO NOT CC me. I am subscribed to the list.
Mail to the sender address that does not originate at the list server is 
tarpitted. The reply-to: address is provided for those who feel compelled to 
reply off list. Thank you.

Rod/
---
This life is not the real thing.
It is not even in Beta.
If it was, then OpenBSD would already have a man page for it.



Re: Allegations regarding OpenBSD IPSEC

2010-12-16 Thread (private) HKS
On Thu, Dec 16, 2010 at 4:47 AM, Joachim Schipper
joac...@joachimschipper.nl wrote:
 On Wed, Dec 15, 2010 at 07:04:27PM +, Kevin Chadwick wrote:
 Jason L. Wright ja...@thought.net wrote:
 I cannot fathom his motivation for writing such falsehood

 The real work on OCF did not begin in earnest until February 2000.

 I can't see how this gives you credibility but maybe the people who
 worked with you at the time can understand how your evidence supports
 what you say.

 While the whole thing is most likely FUD, Perry did say

  Jason Wright and several other developers were responsible for those
  backdoors, and you would be well advised to review any and all code
  commits by Wright as well as the other developers he worked with
  originating from NETSEC.

 so it's not like Jason is the only one.

Joachim




OpenBSD is a great product, but y'all are too easily trolled.

His NDA with the FBI *expired* so he 1) discloses information that's
privileged at the very least and a political stick of dynamite at
worst, 2) discloses it in a private forum to an individual known for
his transparency and total lack of tact, 3) doesn't bother contacting
anyone in the press about it, 4) claims to know various other pundits
are on the FBI payroll, and 5) claims that the FBI deliberately
compromised an open source project in order to spy on its parent
organization and other government agencies.

Here's a tip: when a government organization works with private
contractors to help them spy on other government organizations, those
NDAs don't fucking expire.

Jesus.



Re: Allegations regarding OpenBSD IPSEC

2010-12-16 Thread SJP Lists
On Friday, 17 December 2010, (private) HKS hks.priv...@gmail.com wrote:
 On Thu, Dec 16, 2010 at 4:47 AM, Joachim Schipper
 joac...@joachimschipper.nl wrote:
 On Wed, Dec 15, 2010 at 07:04:27PM +, Kevin Chadwick wrote:
 Jason L. Wright ja...@thought.net wrote:
 I cannot fathom his motivation for writing such falsehood

 The real work on OCF did not begin in earnest until February 2000.

 I can't see how this gives you credibility but maybe the people who
 worked with you at the time can understand how your evidence supports
 what you say.

 While the whole thing is most likely FUD, Perry did say

  Jason Wright and several other developers were responsible for those
  backdoors, and you would be well advised to review any and all code
  commits by Wright as well as the other developers he worked with
  originating from NETSEC.

 so it's not like Jason is the only one.

Joachim




 OpenBSD is a great product, but y'all are too easily trolled.

 His NDA with the FBI *expired* so he 1) discloses information that's
 privileged at the very least and a political stick of dynamite at
 worst, 2) discloses it in a private forum to an individual known for
 his transparency and total lack of tact, 3) doesn't bother contacting
 anyone in the press about it, 4) claims to know various other pundits
 are on the FBI payroll, and 5) claims that the FBI deliberately
 compromised an open source project in order to spy on its parent
 organization and other government agencies.

 Here's a tip: when a government organization works with private
 contractors to help them spy on other government organizations, those
 NDAs don't fucking expire.

 Jesus.

That is what I would expect.

From memory, in my part of the world, if you did this sort of work for
an intelligence agency, your role and work are kept secret until 40
years *after* your death.



Re: Allegations regarding OpenBSD IPSEC

2010-12-16 Thread Theo de Raadt
 I about talked myself out of believing that this happened after explaining
 this to a cow-orker today. They were quite surprised I'd buy into something
 this speculative and far-fetched at all. After listening to him generalize
 it back to me it seems even sillier.

I think you are totally misreading espie.

It is an allegation in a world where we audit whether there is an
allegation or not.

If I read you right, what you are saying can be simplified to this:

Because this is an allegation, we need not audit.  Hey, let's post
instead!

I am sorry, but even if you don't mean it exactly like that, what you
said will be interpreted by many people to mean that.  What I see you
say above is ridiculous.

You can keep interpreting things so simplistically, but some of us
are not saying much because we are studying and re-learning the
workings of the ipsec and crypto layers, because that is what we
do.



Re: Allegations regarding OpenBSD IPSEC

2010-12-15 Thread Gregory Edigarov
On Wed, 15 Dec 2010 07:48:46 +0100
Otto Moerbeek o...@drijf.net wrote:

 On Tue, Dec 14, 2010 at 10:26:44PM -0500, Brandon Mercer wrote:
 
  If this type of thing really did happen and this actually is going
  on something as simple as systrace or dtrace would have found it
  correct? Surely folks have monitored and audited the actual
  function and traffic that goes across the wire... conversely amd
  has a debugger that'll get you access to more goodies than you
  could imagine and just recently I discovered a similar debugger
  on the wifi chip on my phone. Guess its better it doesn't work
  anyhow ;)
 
 It's generally impossible to see from a datastream if it leaks key
 data.  It can be pretty damn hard to verify code to show it does not
 leak key data

I think if it leaks data, it must leak it to somewhere, i.e. there must
be a server somewhere, and this server must have an IP.
So if you look at your traffic and find an IP other than the IP
of your server, you will know where the leak goes.

just my 0.5 cents

   -Otto
 
  Brandon
  On Dec 14, 2010 8:33 PM, Damien Miller d...@mindrot.org wrote:
   On Tue, 14 Dec 2010, Bob Beck wrote:
  
   I wonder a lot about the motives of the original sender sending
   that
  message.
  
   Ignoring motive, and looking at opportunity:
  
   We have never allowed US citizens or foreign citizens working in
   the US to hack on crypto code (Niels Provos used to make trips to
   Canada to develop OpenSSH for this reason), so direct
   interference in the crypto code is unlikely. It would also be
   fairly obvious - the crypto code works as pretty basic block
   transform API, and there aren't many places where one could
    smuggle key bytes out. We always used arc4random() for generating
   random numbers when we needed them, so deliberate biases of key
   material, etc would be quite visible.
  
   So a subverted developer would probably need to work on the
   network stack. I can think of a few obvious ways that they could
   leak plaintext or key material:
  
   1. Ensure that key bytes somehow wind up as padding. This would
   be pretty obvious, since current IPsec standards require
   deterministic padding. Our legacy random padding uses
   arc4random_buf().
  
   2. Arrange for particular structures to be adjacent to
   interesting data, like raw or scheduled keys and accidentally
   copy too much.
  
   3. Arrange for mbufs that previously contained plaintext or other
   interesting material to be accidentally reused. This seems to
   me the most likely avenue, and there have been bugs of this type
   found before. It's a pretty common mistake, so it is attractive
   for deniability, but it seems difficult to make this a reliable
   exploit. If I was doing it, I'd try to make the reuse happen on
   something like ICMP errors, so I could send error-inducing probe
   packets at times I thought were interesting :)
  
   4. Introduce timing side-channel leaks. These weren't widely
   talked about back in 2000 (at least not in the public domain),
   but have been well researched in the years since then. We have
   already introduced countermeasures against the obvious memcmp()
   leaks using timingsafe_bcmp(), but more subtle leaks could still
   remain.
  
   If anyone is concerned that a backdoor may exist and is keen to
   audit the network stack, then these are the places I'd recommend
   starting from.
  
   -d
 


-- 
With best regards,
Gregory Edigarov



Re: Allegations regarding OpenBSD IPSEC

2010-12-15 Thread Brandon Mercer
Unless of course someone was capturing the entire stream as it traversed the
internet and then simply extracted the keys later on.
On Dec 15, 2010 5:22 AM, Gregory Edigarov g...@bestnet.kharkov.ua wrote:
 On Wed, 15 Dec 2010 07:48:46 +0100
 Otto Moerbeek o...@drijf.net wrote:

 On Tue, Dec 14, 2010 at 10:26:44PM -0500, Brandon Mercer wrote:

  If this type of thing really did happen and this actually is going
  on something as simple as systrace or dtrace would have found it
  correct? Surely folks have monitored and audited the actual
  function and traffic that goes across the wire... conversely amd
  has a debugger that'll get you access to more goodies than you
  could imagine and just recently I discovered a similar debugger
  on the wifi chip on my phone. Guess its better it doesn't work
  anyhow ;)

 It's generally impossible to see from a datastream if it leaks key
 data. It can be pretty damn hard to verify code to show it does not
 leak key data

  I think if it leaks data, it must leak it to somewhere, i.e. there must
  be a server somewhere, and this server must have an IP.
  So if you look at your traffic and find an IP other than the IP
  of your server, you will know where the leak goes.

 just my 0.5 cents

 -Otto

  Brandon
  On Dec 14, 2010 8:33 PM, Damien Miller d...@mindrot.org wrote:
   On Tue, 14 Dec 2010, Bob Beck wrote:
  
   I wonder a lot about the motives of the original sender sending
   that
  message.
  
   Ignoring motive, and looking at opportunity:
  
   We have never allowed US citizens or foreign citizens working in
   the US to hack on crypto code (Niels Provos used to make trips to
   Canada to develop OpenSSH for this reason), so direct
   interference in the crypto code is unlikely. It would also be
   fairly obvious - the crypto code works as pretty basic block
   transform API, and there aren't many places where one could
    smuggle key bytes out. We always used arc4random() for generating
   random numbers when we needed them, so deliberate biases of key
   material, etc would be quite visible.
  
   So a subverted developer would probably need to work on the
   network stack. I can think of a few obvious ways that they could
   leak plaintext or key material:
  
   1. Ensure that key bytes somehow wind up as padding. This would
   be pretty obvious, since current IPsec standards require
   deterministic padding. Our legacy random padding uses
   arc4random_buf().
  
   2. Arrange for particular structures to be adjacent to
   interesting data, like raw or scheduled keys and accidentally
   copy too much.
  
   3. Arrange for mbufs that previously contained plaintext or other
   interesting material to be accidentally reused. This seems to
   me the most likely avenue, and there have been bugs of this type
   found before. It's a pretty common mistake, so it is attractive
   for deniability, but it seems difficult to make this a reliable
   exploit. If I was doing it, I'd try to make the reuse happen on
   something like ICMP errors, so I could send error-inducing probe
   packets at times I thought were interesting :)
  
   4. Introduce timing side-channel leaks. These weren't widely
   talked about back in 2000 (at least not in the public domain),
   but have been well researched in the years since then. We have
   already introduced countermeasures against the obvious memcmp()
   leaks using timingsafe_bcmp(), but more subtle leaks could still
   remain.
  
   If anyone is concerned that a backdoor may exist and is keen to
   audit the network stack, then these are the places I'd recommend
   starting from.
  
   -d



 --
 With best regards,
 Gregory Edigarov



Re: Allegations regarding OpenBSD IPSEC

2010-12-15 Thread Stuart Henderson
On 2010/12/15 12:20, Gregory Edigarov wrote:
 On Wed, 15 Dec 2010 07:48:46 +0100
 Otto Moerbeek o...@drijf.net wrote:
 
  On Tue, Dec 14, 2010 at 10:26:44PM -0500, Brandon Mercer wrote:
  
   If this type of thing really did happen and this actually is going
   on something as simple as systrace or dtrace would have found it
   correct? Surely folks have monitored and audited the actual
   function and traffic that goes across the wire... conversely amd

I think you misunderstand what systrace does.

   has a debugger that'll get you access to more goodies than you
   could imagine and just recently I discovered a similar debugger
   on the wifi chip on my phone. Guess its better it doesn't work
   anyhow ;)
  
  It's generally impossible to see from a datastream if it leaks key
  data.  It can be pretty damn hard to verify code to show it does not
  leak key data
 
 I think if it leaks data, it must leak data somewhere, i.e. there must
 be a server somewhere, and this server must have an ip.
 so if you look at your traffic, and you will find an ip other then ip
 of your server, you will know where the leak goes.
 
 just my 0.5 cents

That's not necessary, key data can be leaked in or alongside the
encrypted datastream itself, there's no need to send it anywhere. 
And it doesn't have to be a whole key, just something that makes
cryptanalysis simpler.

*If there's something there*. Remember these are still just
allegations at this stage.



Re: Allegations regarding OpenBSD IPSEC

2010-12-15 Thread Jason L. Wright
Subject: Allegations regarding OpenBSD IPSEC

Every urban legend is made more real by the inclusion of real names,
dates, and times. Gregory Perry's email falls into this category.  I
cannot fathom his motivation for writing such falsehood (delusions
of grandeur or a self-promotion attempt perhaps?)

I will state clearly that I did not add backdoors to the OpenBSD
operating system or the OpenBSD crypto framework (OCF). The code I
touched during that work relates mostly to device drivers to support
the framework. I don't believe I ever touched isakmpd or photurisd
(userland key management programs), and I rarely touched the ipsec
internals (cryptodev and cryptosoft, yes).  However, I welcome an
audit of everything I committed to OpenBSD's tree.

I demand an apology from Greg Perry (cc'd) for this accusation.  Do
not use my name to add credibility to your cloak and dagger fairy
tales.

I will point out that Greg did not even work at NETSEC while the OCF
development was going on.  Before January of 2000 Greg had left NETSEC.
The timeline for my involvement with IPSec can be clearly demonstrated
by looking at the revision history of:
src/sys/dev/pci/hifn7751.c (Dec 15, 1999)
src/sys/crypto/cryptosoft.c (March 2000)
The real work on OCF did not begin in earnest until February 2000.

Theo, a bit of warning would have been nice (an hour even... especially
since you had the allegations on Dec 11, 2010 and did not post them
until Dec 14, 2010).  The first notice I got was an email from a
friend at 6pm (MST) on Dec 14, 2010 with a link to the already posted
message.

So, keep my name out of the rumor mill.  It is a baseless accusation
the reason for which I cannot understand.

--Jason L. Wright



Re: Allegations regarding OpenBSD IPSEC

2010-12-15 Thread Peter N. M. Hansteen
The IPSEC allegations have produced a flurry of blog posts and
suchlike, mostly just rehashing the contents of Theo's original
message.  However, I've found two followups that are interesting for
their own separate reasons:

in http://blogs.csoonline.com/1296/an_fbi_backdoor_in_openbsd , there
appears to be some additional verbiage from Gregory Perry, but IMHO it
does not really add much in the way of useful information.

The other item,

http://maycontaintracesofbolts.blogspot.com/2010/12/openbsd-ipsec-backdoor-allegations.html

is quite a bit more interesting, since it's a public challenge (with a
cash bounty) to come up with actual evidence of backdoor code in the
relevant parts of OpenBSD.  There have been offers to match the original
3 * USD 100 bounty, so with a little more circulation the bounty could
turn into a good amount.

I would say the second post here deserves more attention; if you
agree, please make that URL visible via whatever news sites you can
think of (yup, it's in the /. submissions queue).

- Peter
-- 
Peter N. M. Hansteen, member of the first RFC 1149 implementation team
http://bsdly.blogspot.com/ http://www.bsdly.net/ http://www.nuug.no/
Remember to set the evil bit on all malicious network traffic
delilah spamd[29949]: 85.152.224.147: disconnected after 42673 seconds.



Re: Allegations regarding OpenBSD IPSEC

2010-12-15 Thread patrick keshishian
On Wed, Dec 15, 2010 at 11:33 AM, Peter N. M. Hansteen pe...@bsdly.net
wrote:
 The IPSEC allegations have produced a flurry of blog posts and
 suchlike, mostly just rehashing the contents of Theo's original
 message.  However, I've found two followups that are interesting for
 their own separate reasons:

 in http://blogs.csoonline.com/1296/an_fbi_backdoor_in_openbsd , there
  appears to be some additional verbiage from Gregory Perry, but IMHO it
 does not really add much in the way of useful information.

 The other item,


 http://maycontaintracesofbolts.blogspot.com/2010/12/openbsd-ipsec-backdoor-allegations.html

 is quite a bit more interesting, since it's a public challenge (with a
 cash bounty) to come up with actual evidence of backdoor code in the
  relevant parts of OpenBSD.  There have been offers to match the original
  3 * USD 100 bounty, so with a little more circulation the bounty could
 turn into a good amount.

It is easy to shoot one's mouth off like that about the bounty offered,
given the ridiculously constrained conditions the bounty is offered
under. He might as well have offered a million USD. No one will be able
to prove this under these restrictions.

--patrick



Re: Allegations regarding OpenBSD IPSEC

2010-12-15 Thread Peter N. M. Hansteen
patrick keshishian pkesh...@gmail.com writes:
 It is easy to shoot one's mouth off like that about bounty offered,
 given the ridiculously constrained conditions the bounty is offered
 under. He might as well have offered a million USD. No one will be able to
 prove this under these restrictions.

I won't get into a discussion about DES' stated requirements, but I do
think it's a good-faith effort.  Then again, as Jason Dixon points out in
his blog http://obfuscurity.com/2010/12/Updates-on-the-OpenBSD-IPsec-Gossip ,
making a donation to the OpenBSD project is likely to give you more bang
for the buck.

- Peter
-- 
Peter N. M. Hansteen, member of the first RFC 1149 implementation team
http://bsdly.blogspot.com/ http://www.bsdly.net/ http://www.nuug.no/
Remember to set the evil bit on all malicious network traffic
delilah spamd[29949]: 85.152.224.147: disconnected after 42673 seconds.



Re: Allegations regarding OpenBSD IPSEC

2010-12-15 Thread Ted Unangst
On Wed, Dec 15, 2010 at 3:36 PM, Damien Miller d...@mindrot.org wrote:
 On Wed, 15 Dec 2010, patrick keshishian wrote:

 It is easy to shoot one's mouth off like that about bounty offered,
 given the ridiculously constrained conditions the bounty is offered
  under. He might as well have offered a million USD. No one will be able to
 prove this under these restrictions.

 His conditions aren't ridiculously constrained, they seem to be pretty
  much appropriate for the allegations.

The requirement that the bug still be exploitable in the current code
is a little much.  A hidden side channel might possibly be quite
fragile and easily disarmed by accident without fixing the underlying
flaw, but that wouldn't invalidate the allegation.  That part did read
a lot like hedging the bet.

An exploit like this that only worked pre-4.4 (to pick a random older
release for example) would still be very valuable.



Re: Allegations regarding OpenBSD IPSEC

2010-12-15 Thread Kevin Chadwick
On Wed, 15 Dec 2010 10:27:31 -0800
Jason L. Wright ja...@thought.net wrote:

 I
 cannot fathom his motivation for writing such falsehood (delusions
 of grandeur or a self-promotion attempt perhaps?)

Perhaps,

Promote his domains rank in google or the facebook link? (Does anyone
know if he always puts facebook links in mails)

Wants IPSEC audited for some reason?

Divert devs attention from something else?

If it's one of these reasons or any other ulterior motive, then that's
just despicable.

However, NDAs often last for 10 years which either adds weight to
the well-thought-out urban myth theory or to the possibility that it may be
the truth.

The real work on OCF did not begin in earnest until February 2000.

I can't see how this gives you credibility but maybe the people who
worked with you at the time can understand how your evidence supports
what you say.



Re: Allegations regarding OpenBSD IPSEC

2010-12-15 Thread patrick keshishian
On Wed, Dec 15, 2010 at 12:36 PM, Damien Miller d...@mindrot.org wrote:
 On Wed, 15 Dec 2010, patrick keshishian wrote:

 It is easy to shoot one's mouth off like that about bounty offered,
 given the ridiculously constrained conditions the bounty is offered
  under. He might as well have offered a million USD. No one will be able to
 prove this under these restrictions.

 His conditions aren't ridiculously constrained, they seem to be pretty
  much appropriate for the allegations.

seriously?

# - that the OpenBSD Crypto Framework contains vulnerabilities
#   which can be exploited by an eavesdropper to recover plaintext
#   from an IPSec stream,

There is a big assumption about the alleged backdoor or
leak; i.e., that it is used to directly extract plaintext
out of an IPSEC stream. OK. Maybe reasonable.

# - that these vulnerabilities can be traced directly to code
#   submitted by Jason Wright and / or other developers linked
#   to Perry, and

Do they really have to be linked back to Perry? Is that
really the important factor in the alleged backdoor's
existence?

# - that the nature of these vulnerabilities is such that there
#   is reason to suspect, independently of Perry's allegations,
#   that they were inserted intentionally-for instance, if the
#   surrounding code is unnecessarily awkward or obfuscated and
#   the obvious and straightforward alternative would either not
#   be vulnerable or be immediately recognizable as vulnerable

Oh, so the alleged backdoor if present _must_ be in
the form of obfuscated code. Okay...


# - Finally, I pledge USD 100 to the first person to present
#   convincing evidence showing that a government agency
#   successfully planted a backdoor in a security-critical
#   portion of the Linux kernel.

So not only does one have to find the alleged backdoor, but
also link its author to a government agency .. via
what, I wonder: a payroll stub, a signed contract, a confession?
OK, maybe not too unreasonable, but it still gives the blogger a nice
loophole to recant on his bounty.

# - In all three cases, the vulnerability must still be present
#   and exploitable when the evidence is assembled and presented
#   to the affected parties. Allowances will be made for the
#   responsible disclosure process.

Must still exist? So proving that at some point the
alleged backdoor existed and was placed in there by
an FBI/NSA pawn isn't good enough, but the alleged
backdoor must still exist. Nice...

# - Exploitability must be demonstrated, not theorized.

Ahh... must be demonstrated. So not only do you need
to show there is an alleged leak, but you must also
know the means by which the NSA or FBI intended to
use the alleged leak.

But OK.
--patrick



Re: Allegations regarding OpenBSD IPSEC

2010-12-15 Thread Tobias Weingartner
On Wednesday, December 15, Kevin Chadwick wrote:
 The real work on OCF did not begin in earnest until February 2000.
 
 I can't see how this gives you credibility but maybe the people who
 worked with you at the time can understand how your evidence supports
 what you say.

I've known Jason for quite a while, and nothing has ever
let me believe that I should question his character, motives
or otherwise make me believe he was not a straightforward
and honest person.

I think even in the USA a person is INNOCENT, until PROVEN
guilty.  So in this case, you're the one that is out of
line.  You're the one the onus of proof is on.  Jason has
no need to give you evidence.

Quite frankly, dragging Jason (or anyone else) through the
mud in this fashion is completely disgusting, deplorable,
and stinks.  This will be the last I say on this subject.

--Toby.



Re: Allegations regarding OpenBSD IPSEC

2010-12-15 Thread Kevin Chadwick
On Wed, 15 Dec 2010 14:57:24 -0700
Tobias Weingartner weing...@tepid.org wrote:

  So in this case, you're the one that is out of
 line.

If you're talking to me then I tried to make it clear that I was sitting
on the fence. I was going to go further but then figured that would be
leaning in one direction. I certainly wouldn't want to offend anyone I
don't know but I'm not going to defend them or help their case if I
don't know whether they're guilty or not either.

If you're putting evidence forward, then logic dictates that the same
reasoning applies: it doesn't clear you unquestionably unless it
proves something, which is why I asked if it did. Don't get me started
about law, because it's more about money than justice, and please
don't read between my lines.

For what it's worth, my opinion (which is irrelevant, being based on
next to no evidence) is that Jason is likely the one telling the
truth, and I'm sure the people in the community who count to him will
have a better idea than me.

My intention was not to drag anyone through the mud but only help people
get to the truth, sorry if it also seemed like that to anyone else. If
he's wrongly accused for financial gain then that is truly terrible.



Allegations regarding OpenBSD IPSEC

2010-12-14 Thread Theo de Raadt
I have received a mail regarding the early development of the OpenBSD
IPSEC stack.  It is alleged that some ex-developers (and the company
they worked for) accepted US government money to put backdoors into
our network stack, in particular the IPSEC stack.  Around 2000-2001.

Since we had the first IPSEC stack available for free, large parts of
the code are now found in many other projects/products.  Over 10
years, the IPSEC code has gone through many changes and fixes, so it
is unclear what the true impact of these allegations are.

The mail came in privately from a person I have not talked to for
nearly 10 years.  I refuse to become part of such a conspiracy, and
will not be talking to Gregory Perry about this.  Therefore I am
making it public so that
(a) those who use the code can audit it for these problems,
(b) those that are angry at the story can take other actions,
(c) if it is not true, those who are being accused can defend themselves.

Of course I don't like it when my private mail is forwarded.  However
the little ethic of a private mail being forwarded is much smaller
than the big ethic of government paying companies to pay open source
developers (a member of a community-of-friends) to insert
privacy-invading holes in software.



From: Gregory Perry gregory.pe...@govirtual.tv
To: dera...@openbsd.org dera...@openbsd.org
Subject: OpenBSD Crypto Framework
Thread-Topic: OpenBSD Crypto Framework
Thread-Index: AcuZjuF6cT4gcSmqQv+Fo3/+2m80eg==
Date: Sat, 11 Dec 2010 23:55:25 +
Message-ID: 
8d3222f9eb68474da381831a120b1023019ac...@mbx021-e2-nj-5.exch021.domain.local
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Status: RO

Hello Theo,

Long time no talk.  If you will recall, a while back I was the CTO at
NETSEC and arranged funding and donations for the OpenBSD Crypto
Framework.  At that same time I also did some consulting for the FBI,
for their GSA Technical Support Center, which was a cryptologic
reverse engineering project aimed at backdooring and implementing key
escrow mechanisms for smart card and other hardware-based computing
technologies.

My NDA with the FBI has recently expired, and I wanted to make you
aware of the fact that the FBI implemented a number of backdoors and
side channel key leaking mechanisms into the OCF, for the express
purpose of monitoring the site to site VPN encryption system
implemented by EOUSA, the parent organization to the FBI.  Jason
Wright and several other developers were responsible for those
backdoors, and you would be well advised to review any and all code
commits by Wright as well as the other developers he worked with
originating from NETSEC.

This is also probably the reason why you lost your DARPA funding, they
more than likely caught wind of the fact that those backdoors were
present and didn't want to create any derivative products based upon
the same.

This is also why several inside FBI folks have been recently
advocating the use of OpenBSD for VPN and firewalling implementations
in virtualized environments; for example, Scott Lowe is a well
respected author in virtualization circles who also happens to be on
the FBI payroll, and who has also recently published several tutorials
for the use of OpenBSD VMs in enterprise VMware vSphere deployments.

Merry Christmas...

Gregory Perry
Chief Executive Officer
GoVirtual Education

VMware Training Products & Services

540-645-6955 x111 (local)
866-354-7369 x111 (toll free)
540-931-9099 (mobile)
877-648-0555 (fax)

http://www.facebook.com/GregoryVPerry
http://www.facebook.com/GoVirtual



Re: Allegations regarding OpenBSD IPSEC

2010-12-14 Thread Bob Beck
I wonder a lot about the motives of the original sender sending that message.

Is it simply a way to spread FUD and discredit openbsd?
Is it a personal gripe with the accused?
Is it an attempt to manipulate what is used in the market?
Is it outright lies?
Is it outright truth and genuine altruism?

While I suspect we'll never know completely for sure, it raises
interesting questions. Is it genuine? Partially genuine? How much
truth is in there? If it's true, how much of this mattered, and has it
since been fixed? (That code went through a lot of fixes since that time.)

Of course, in these days of binary-only blob drivers, I don't think
the government needs to resort to this sort of tactic. Those nice
binary-only drivers everyone loves running for video and wireless will
ensure that there are nice places in your favorite Open Source project
that can be quietly co-opted by government organizations and given
access to your entire kernel. No need to be subtle.



Re: Allegations regarding OpenBSD IPSEC

2010-12-14 Thread Damien Miller
On Tue, 14 Dec 2010, Bob Beck wrote:

 I wonder a lot about the motives of the original sender sending that message.

Ignoring motive, and looking at opportunity:

We have never allowed US citizens or foreign citizens working in the US
to hack on crypto code (Niels Provos used to make trips to Canada to
develop OpenSSH for this reason), so direct interference in the crypto
code is unlikely. It would also be fairly obvious - the crypto code
works as a pretty basic block-transform API, and there aren't many places
where one could smuggle key bytes out. We always used arc4random() for
generating random numbers when we needed them, so deliberate biases of
key material, etc would be quite visible.

So a subverted developer would probably need to work on the network stack.
I can think of a few obvious ways that they could leak plaintext or key
material:

1. Ensure that key bytes somehow wind up as padding. This would be pretty
   obvious, since current IPsec standards require deterministic padding.
   Our legacy random padding uses arc4random_buf().

2. Arrange for particular structures to be adjacent to interesting data,
   like raw or scheduled keys and accidentally copy too much. 

3. Arrange for mbufs that previously contained plaintext or other
   interesting material to be accidentally reused. This seems to me the
   most likely avenue, and there have been bugs of this type found before.
   It's a pretty common mistake, so it is attractive for deniability, but
   it seems difficult to make this a reliable exploit. If I was doing it,
   I'd try to make the reuse happen on something like ICMP errors, so I
   could send error-inducing probe packets at times I thought were
   interesting :)

4. Introduce timing side-channel leaks. These weren't widely talked about
   back in 2000 (at least not in the public domain), but have been well
   researched in the years since then. We have already introduced
   countermeasures against the obvious memcmp() leaks using
   timingsafe_bcmp(), but more subtle leaks could still remain.

If anyone is concerned that a backdoor may exist and is keen to audit the
network stack, then these are the places I'd recommend starting from.

-d



Re: Allegations regarding OpenBSD IPSEC

2010-12-14 Thread Brandon Mercer
If this type of thing really did happen and this actually is going on,
something as simple as systrace or dtrace would have found it, correct?
Surely folks have monitored and audited the actual functions and the
traffic that goes across the wire... Conversely, AMD has a debugger
that'll get you access to more goodies than you could imagine, and just
recently I discovered a similar debugger on the wifi chip on my phone.
Guess it's better it doesn't work anyhow ;)
Brandon
On Dec 14, 2010 8:33 PM, Damien Miller d...@mindrot.org wrote:
 [...]



Re: Allegations regarding OpenBSD IPSEC

2010-12-14 Thread Otto Moerbeek
On Tue, Dec 14, 2010 at 10:26:44PM -0500, Brandon Mercer wrote:

 If this type of thing really did happen and this actually is going on
 something as simple as systrace or dtrace would have found it correct?
 Surely folks have monitored and audited the actual function and traffic that
 goes across the wire... conversely amd has a debugger that'll get you
 access to more goodies than you could imagine and just recently I discovered
 a similar debugger on the wifi chip on my phone. Guess its better it
 doesn't work anyhow ;)

It's generally impossible to see from a datastream whether it leaks
key data.  And it can be pretty damn hard to verify code to show that
it does not leak key data.

-Otto

 [...]