Re: Five Theses on Security Protocols

2010-08-01 Thread Guus Sliepen
On Sun, Aug 01, 2010 at 11:20:51PM +1200, Peter Gutmann wrote:

 But, if you query an online database, how do you authenticate its answer? If
 you use a key or an SSL certificate for that, I see a chicken-and-egg problem.
 
 What's your threat model?

My threat model is practice.

I assume Perry meant that you have some pre-established trust relationship
with the online database. However, I do not see myself having many of those.
Yes, my browser comes preloaded with a set of root certificates, but Verisign
is as much a third party to me as any SSL-protected website I want to visit.

Anyway, suppose we do all trust Verisign. Then everybody needs its public key
on their computers to safely communicate with it. How is this public key
distributed? Just like those preloaded root certs in the browser? What if their
key gets compromised? How do we revoke that key and get a new one? We still
have all the same problems with the public key of our root of trust as we have
with long-lived certificates. Perry says we should do online checks in such a
case. So which online database can tell us if Verisign's public key is still
good? Do we need multiple trusted online databases that can vouch for each
other, and hope not all of them fail simultaneously?

Another issue with online verification is the increase in traffic. Would
Verisign like being queried for a significant fraction of all the SSL
connections made by all users in the world?

-- 
Met vriendelijke groet / with kind regards,
  Guus Sliepen g...@sliepen.org


signature.asc
Description: Digital signature


Re: Gutmann Soundwave Therapy

2008-02-01 Thread Guus Sliepen
On Thu, Jan 31, 2008 at 03:46:47PM -0500, Thor Lancelot Simon wrote:

 On Thu, Jan 31, 2008 at 04:07:03PM +0100, Guus Sliepen wrote:
  
  Peter sent us his write-up via private email a few days before he
  posted it to this list (which got it on Slashdot). I had little time to
  think about the issues he mentioned before his write-up became public.
  When it did, I (and others too) felt attacked in a cruel way. Peter
  ignored all the reasons *why* we used the kind of crypto we did at
  that moment, compared it to a very high standard, and made it feel like
  everything we didn't do, or didn't do as well as SSL, made our crypto
  worthless.
 
 There is no valid reason to ship snake oil cryptography (at any moment).
 
 There is no standard but a high standard which is appropriate for
 comparison.
 
 Since SSL was already available, there was no excuse to do anything
 worse.

Please understand the following:

I am not defending the use of our less-than-SSL crypto in tinc. But
there are reasons why we implemented it the way we did at that time. It
doesn't matter whether those reasons were good or bad. Ignoring the
*why*, attacking others by pointing out everything they do wrong from
your perspective (even when that perspective is perfectly right), and
then finishing off the way Peter did, which is easily perceived as an
insult on the receiving end, does not encourage others to fix the
problems; it just puts them on the defensive.

Are you out to help others, or just to look down on them? If it's the
former, then please help others accept your help by formulating things
in a friendlier way (although a patch with a fix would soften things up
as well). If it's the latter, please continue just as you are doing
now.

Now some (good and/or bad) reasons why we ended up with our
lesser-than-SSL crypto, in no particular order:

- SSL was not perceived at that time as a solution for our problem.
- We were application writers, not security specialists. We had to
  encrypt traffic, and we did the best we could with the knowledge we
  had at that time.
- I had read Schneier's Applied Cryptography from cover to cover a few
  times. It made me feel I knew everything about crypto. Even Bruce
  admits he thought at the time he had put everything a programmer
  needed to know about crypto in that book. It doesn't mention SSL.
- We needed to tunnel data over UDP, with UDP semantics. SSL requires a
  reliable stream. Therefore, we had to use something other than SSL to
  tunnel data.
- It was fun to come up with a full duplex authentication scheme using
  RSA. More fun than using someone else's stuff.
- Because we could.
- We were Free Software developers who did it in our spare time for fun,
  we were not a company that sells it as one of its products.

 It seems that you still don't understand those things, or you would not
 complain about them even at this far removed date.  How unfortunate.

It seems that you haven't read the rest of my email, or you would not
have written that sentence. I am enlightened now :)

-- 
Met vriendelijke groet / with kind regards,
  Guus Sliepen [EMAIL PROTECTED]




Re: Gutmann Soundwave Therapy

2008-01-31 Thread Guus Sliepen
On Tue, Jan 29, 2008 at 12:26:21PM -0500, Perry E. Metzger wrote:

 Clearly, more people need to know about Gutmann Soundwave Therapy.
 
 Ivan Krstić [EMAIL PROTECTED] writes:
[...]
  [0] Last paragraph, http://diswww.mit.edu/bloom-picayune/crypto/14238
 
 As it turns out, the central image of Peter's post was popularized
 earlier*.
 
 However, Peter clearly said this first in a security context, and I
 hope that the term Gutmann Soundwave Therapy spreads widely within
 our field as a way of ridiculing the desire to invent your own crypto
 algorithms and protocols. When it gets to the point where salesmen are
 vaguely aware of the phrase and fear it, we will know we have done our
 job successfully.

As one of the main developers of tinc, I have been at the receiving end
of Gutmann's therapy, or drive-by shooting as I experienced it at that
time.

Peter sent us his write-up via private email a few days before he
posted it to this list (which got it on Slashdot). I had little time to
think about the issues he mentioned before his write-up became public.
When it did, I (and others too) felt attacked in a cruel way. Peter
ignored all the reasons *why* we used the kind of crypto we did at
that moment, compared it to a very high standard, and made it feel like
everything we didn't do, or didn't do as well as SSL, made our crypto
worthless.

Some other people sent us security reviews of tinc as well, Jerome
Etienne for example. With them, we never had that feeling of being
attacked; the conversations we had with them encouraged us to improve
tinc.

Peter's write-up was the reason I subscribed to this cryptography
mailing list. After a while, the anger and hurt feelings I had
disappeared, and I knew that Peter was right in his arguments. Nowadays
I can look at Peter's write-up more objectively and see that it is not
as ad hominem as it felt back then, although the whole soundwave
paragraph still sounds very childish ;)

If tinc 2.0 ever comes out (unfortunately I don't have a lot of time
to work on it these days), it will probably use the GnuTLS library and
authenticate and connect daemons with TLS. For performance reasons, you
want to tunnel network packets via UDP instead of TCP, so hopefully
there will be a working DTLS implementation by then.

I hope that in the future, if you see an application doing something
wrong, you don't immediately give the developers the soundwave therapy.
Be a little bit more gentle and try to find out why it was written that
way in the first place. It will create a lot more understanding and
willingness from the developers to fix the problems.

Also, from experimenting with a version of tinc that uses TLS, I can
tell you that it is not the perfect solution for our problem. The main
issue I see with SSL and TLS is with the credentials. Both X.509 and
OpenPGP are focussed on URLs or email addresses. It is not clear to me
how to store other information (like which subnets a node on the VPN is
authorised to use) in such credentials in a nice way, other than
shoehorning it into a CN (X.509) or uid (OpenPGP) field. Certificate
chain verification is something that often goes wrong; some SSL libraries do
not offer that functionality, or only do it when an application
explicitly requests it. With OpenPGP you can have a web of trust, but
how do you make use of it in an automated way? I expect that the next
round of penis-shaped soundwave therapy will not be focussed on
whether or not an application uses SSL, but on how it (mis)uses SSL.

-- 
Met vriendelijke groet / with kind regards,
  Guus Sliepen [EMAIL PROTECTED]




Re: World's most powerful supercomputer goes online

2007-09-02 Thread Guus Sliepen
On Sat, Sep 01, 2007 at 03:46:45PM +1200, Peter Gutmann wrote:

 I feel I should add a followup to the earlier post, this was implied by the
 rhetorical question about what the LINPACK performance of a botnet is, but
 I'll make it explicit here:
 
 The standard benchmark for supercomputers is the LINPACK linear-algebra
 mathematical benchmark.  Now in practice the LINPACK performance of a botnet
 is likely to be nowhere near that of a specially-designed supercomputer, since
 it's more a distributed grid than a monolithic system.  On the other hand bot-
 herders are unlikely to care much about the linear algebra performance of
 their botnet since it doesn't represent the workload of any of the tasks that
 such a system would be used for.

Another interesting use may be data hiding. The botnet software could
store information in RAM (never on disk), and replicate it to other
nodes. If one node goes down, other nodes will still have the
information. If one node detects that virus scanners or forensic tools
are being used, it can easily wipe the information from RAM or just
reboot the machine without fear that the information would really be
lost.

Experience with tinc (a VPN daemon with peer-to-peer like architecture,
which replicates certain information to all daemons in a single VPN),
showed that even in a network with only 20 nodes, it is extremely hard
to get rid of information.  You either need to shut down all daemons at
the same time to make sure all state is lost, or modify the software to
allow explicit deletion of certain information. With more than 1 million
nodes it will be even harder to delete data.
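As a toy illustration of why replicated state is so sticky (a hypothetical
gossip model for the sake of the argument, not tinc's actual protocol):

```python
# Hypothetical gossip model: every node holds a dict of entries and
# pulls missing entries from its peers. Wiping one node's RAM does not
# remove an entry from the network; the next sync restores it.

class Node:
    def __init__(self):
        self.state = {}

    def sync_from(self, peer):
        # merge everything the peer knows into our own state
        self.state.update(peer.state)

nodes = [Node() for _ in range(20)]
nodes[0].state["secret"] = "payload kept only in RAM"

# one full gossip round: every node pulls from every other node
for a in nodes:
    for b in nodes:
        a.sync_from(b)

nodes[5].state.clear()        # "forensic wipe" of a single node
nodes[5].sync_from(nodes[6])  # the very next sync brings the data back
print("secret" in nodes[5].state)  # prints: True
```

To actually delete the entry you would have to clear all twenty nodes
within a single sync interval, which is exactly the difficulty described
above.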

-- 
Met vriendelijke groet / with kind regards,
  Guus Sliepen [EMAIL PROTECTED]




Re: long-term GPG signing key

2006-01-17 Thread Guus Sliepen
On Sat, Jan 14, 2006 at 12:30:25PM -0700, Anne & Lynn Wheeler wrote:

 Guus Sliepen wrote:
  By default, GPG creates a signing key and an encryption key. The signing
  key is used both for signing other keys (including self-signing your own
  keys), and for signing documents (like emails). However, it is possible
  to split the signing key into a master key that you only use to sign
  other keys, and a key dedicated to signing documents. You can revoke the
  latter key and create a new one whenever you want, the master key is
  still valid. Also, when people sign your key, they sign your master key,
  not the subkeys. The signatures you accumulated will also still be
  valid. You can also keep the master key safely tucked away on an old
  laptop that you keep in a safe, and only export the subkeys to your
  workstation. That way the master key is very safe.

 as in previous post ... i assert that fundamental digital signature
 verification is an authentication operation
 http://www.garlic.com/~lynn/aadsm22.htm#5 long-term GPG signing keys
 
 and doesn't (by itself) carry with it characteristics of human
 signature, read, understood, approves, agrees, and/or authorizes.

It depends on how it is used. For example, when I sent this email, I
typed in the passphrase of my PGP key, authorising GnuPG to create a
signature for this email. This comes very close to a human signature: I
read, understood, and approved the contents of this email.

If asymmetric cryptography is used to automatically sign a credit card
transaction without the user having to do more than click a button,
then I agree that in that situation the digital signature is not the
same as a human signature.

[...]
 it is when you start equating private keys with certification and truth
 characteristics that you move into a completely different risk and
 threat domain.

I don't equate private keys with that. I do equate signatures made with
those keys with that.

 the other foray into embellishing private keys and digital signatures
 with human signature type characteristics was the non-repudiation
 activity. however, it is now commonly accepted that to embellish
 digital signatures with non-repudiation attributes requires a whole lot
 of additional business processes ... not the simple operation of
 generating an authentication digital signature.
[...]
 the corollary is that digitally signed certificates and
 private keys embellished with certification and truth characteristics
 become less and less meaningful.

That is probably true, but in the meantime Travis still wants to know
how to create a PGP key with the properties he wishes for.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: long-term GPG signing key

2006-01-13 Thread Guus Sliepen
On Tue, Jan 10, 2006 at 03:28:49AM -0600, Travis H. wrote:

 I'd like to make a long-term key for signing communication keys using
 GPG and I'm wondering what the current recommendation is for such.  I
 remember a problem with Elgamal signing keys and I'm under the
 impression that the 1024 bit strength provided by p in the DSA is not
 sufficiently strong when compared to my encryption keys, which are
 typically at least 4096-bit D/H, which I typically use for a year.
 
 The whole reason I'm using a signing key is that I have numerous older
 keys which have now expired and so the signatures on them are
 worthless.  I don't attend many keysigning parties so it's hard to
 make the system work without collecting signatures over a long period
 on some very high strength key.  Also, I'd like to use the signing key
 as a kind of identity, not tied to any particular email address, and
 only used to sign communication keys, which *are* tied to a email
 address and have shorter expiration times.
 
 Does anyone have any suggestions on how to do this, or suggestions to
 the effect that I should be doing something else?

By default, GPG creates a signing key and an encryption key. The signing
key is used both for signing other keys (including self-signing your own
keys), and for signing documents (like emails). However, it is possible
to split the signing key into a master key that you only use to sign
other keys, and a key dedicated to signing documents. You can revoke the
latter key and create a new one whenever you want, the master key is
still valid. Also, when people sign your key, they sign your master key,
not the subkeys. The signatures you accumulated will also still be
valid. You can also keep the master key safely tucked away on an old
laptop that you keep in a safe, and only export the subkeys to your
workstation. That way the master key is very safe.

About keys being tied to email addresses (uids): you can create a uid
with just your name, no email address, and if you like a comment with
your birthday or passport number in it. Let people sign that uid.
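The master-key-plus-subkey setup described above can be sketched with a
modern GnuPG (2.1 or later; the flags are real, but the name and key sizes
are just examples). The master key is certify-only, so it is used solely
to sign other keys and its own subkeys, while a separate signing subkey
handles documents and can be rotated at will:

```shell
# Sketch: certify-only master key plus a one-year signing subkey,
# in a throwaway keyring so nothing touches the real one.
set -e
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# master key with usage "cert" only (signs keys, not documents)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Travis Example' rsa3072 cert never

# fingerprint of the new master key
FPR=$(gpg --list-keys --with-colons | awk -F: '/^fpr/{print $10; exit}')

# document-signing subkey, valid one year; revoke/replace it freely,
# signatures collected on the master key stay valid
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-add-key "$FPR" rsa3072 sign 1y

gpg --list-keys
```

On the workstation you would then import only the output of
`gpg --export-secret-subkeys`, so the master secret key never leaves the
laptop in the safe.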

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: The future of security

2004-05-31 Thread Guus Sliepen
On Sun, May 30, 2004 at 12:36:53PM -0700, bear wrote:

 The bigger problem is that webs of trust don't work.
 They're a fine idea, but the fact is that nobody keeps
 track of the individual trust relationships or who signed
 a key;  few people even bother to find out whether there's
 a path of signers that leads from them to another person,
 or whether the path has some reasonably small distance.

PGP keys are used extensively in the Debian community; new developers
are only accepted if their PGP key has been signed by another Debian
developer, so that there is always a trust path from one developer to
any other. Some important things, like uploading new packages or
submitting votes, will only be accepted by the automated services if
everything is properly signed.

There is a strong incentive in this community to have a signed PGP
key: if you didn't have one, you couldn't do anything. In other areas
there just is no incentive for having such a thing... like email, which
works even if you don't sign it.

 I have not yet seen an example of reputation favoring
 one person over another in a web of trust model; it looks
 like people can't be bothered to keep track of the trust
 relationships or reputations within the web.

I think that's because the tools are lacking. GnuPG can determine
trust paths, but you have to manually assign trust levels to certain
keys and update the trustdb (which takes an awfully long time). If it
worked a bit faster and determined and showed trust paths out of the
box, I think PGP's web of trust model would be used a lot more.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: Open Source (was Simple SSL/TLS - Some Questions)

2003-10-09 Thread Guus Sliepen
On Thu, Oct 09, 2003 at 09:42:18AM -0400, Perry E. Metzger wrote:

  If you want a VPN that road warriors can use, you have to do it with
  IP-over-TCP. Nothing else survives NAT and aggressive firewalling, not even
  Microsoft PPTP.
 
 Unfortunately, IP over TCP has very bad properties. TCP stacks figure
 out what the maximum bandwidth they can send is by increasing the
 transmission rate until they get drops, and then backing off. However,
 the underlying TCP carrying the IP packets is a reliable,
 retransmitting service, so there will never be any drops seen by the
 overlayed TCP sessions. You end up with really ugly problems, in
 short.
 
 Port-forwarded TCP sessions, a la ssh, work a lot better.

If you run your VPN over TCP, the VPN daemon knows that every packet it
sends to the other side of the connection will arrive eventually. You
can then do proxy-ACK, which essentially means you automatically do
port forwarding for all TCP sessions on the virtual network
interface.

Still, not only is TCP-over-TCP a problem, anything realtime over TCP
(like VoIP, games, streaming video) suffers from it.

SCTP (RFC 2960) looks like a solution, although I don't know of any
NATs that support it, and although some platforms already have some
support for it in their kernels, I don't think it's possible to write a
user-space application using SCTP yet.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: Simple SSL/TLS - Some Questions

2003-10-03 Thread Guus Sliepen
On Fri, Oct 03, 2003 at 05:55:25PM +0100, Jill Ramonsky wrote:

 It's worth summing up the design goals here, so nobody gets confused. 
 Trouble is, I haven't figured out what they should all be. The main 
 point of confusion/contention right now seems to be: (1) should it be
 in C or C++? (2) should it support SSL or TLS or both?

If the applications have to interact with legacy systems, then they'd
need SSL...

 Regarding the choice of language, I think I would want this library (or 
 toolkit, or whatever) to be somehow different from OpenSSL - otherwise 
 what's the point? I mean ... this may be a dumb question, but ... if 
 people want C, can they not use the existing OpenSSL? Or is it simply 
 that OpenSSL is too complicated to use, so a simpler than OpenSSL C 
 version is required. What I mean is, I don't want to duplicate effort. 
 That seems dumb.

OpenSSL is very large, and although the API is pretty consistent and
easy to work with, the SSL part of it looks complicated anyway. Another
thing that is very annoying about OpenSSL is its license (and this has
probably been an incentive to create GnuTLS).

 My inclination is still to go with C++, and figure out 
 a way of turning it into C later if necessary ... but if majority 
 opinion says otherwise I'll reconsider.

Well, as long as your library has a decent C interface, I wouldn't mind
if it was written in C, C++, Haskell or something even stranger.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: Monoculture

2003-10-01 Thread Guus Sliepen
On Wed, Oct 01, 2003 at 02:34:23PM -0400, Ian Grigg wrote:

 Don Davis wrote:
 
  note that customers aren't usually dissatisfied with
  the crypto protocols per se;  they just want the
  protocol's implementation to meet their needs exactly,
  without extra baggage of flexibility, configuration
  complexity, and bulk.
[...]
 Including extra functionality means that they have
 to understand it, they have to agree with its choices,
 they have to follow the rules in using it, and have
 to pay the costs.  If they can ditch the stuff they
 don't want, that means they are generally much safer
 in making simple statements about the security model
 that they have left.

You clearly formulated what we are doing! We want to keep our crypto as
simple and to the point as necessary for tinc. We also want to
understand it ourselves. Implementing our own authentication protocol
helps us do all that.

Uhm, before getting flamed again: by "our own", I don't mean we think
we necessarily have to implement something different from all the
existing protocols. We just want to understand it so well, and be so
comfortable with it, that we can implement it ourselves.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-30 Thread Guus Sliepen
On Mon, Sep 29, 2003 at 11:54:20AM -0700, Eric Rescorla wrote:

  Well, all existing authentication schemes do what they are supposed to do,
  that's not the problem. We just want one that is as simple as possible
  (so we can understand it better and implement it more easily), and which
  is efficient (both speed and bandwidth).
 
 In what way is your protocol either simpler or more efficient
 than, say, JFK or the TLS skeleton?

Compared with JFK: http://www.crypto.com/papers/jfk-ccs.pdf section 2.2
shows a lot of keys, IDs, derivatives of keys, random numbers and hashes
of various combinations of the previous, 3 public key encryptions and 2
symmetric cipher encryptions and HMACs. I do not consider that simple.

Compared with the entire TLS protocol it is much simpler, compared with
just the handshake protocol it is about as simple and probably just as
efficient, but as I said earlier, I want to get rid of the client/server
distinction.

 Again, it's important to distinguish between learning experiences
 and deployed protocols. I agree that it's worthwhile to try
 to do new protocols and let other people analyze them as
 a learning experience. But that's different from putting
 a not fully analyzed protocol into a deployed system.
[...]
 Well, I'd start by doing a back of the envelope performance
 analysis. If that doesn't show that your approach is better,
 then I'm not sure why you would wish to pursue it as a
 deployed solution.

I will not repeat our motivations again. Please don't bother arguing
about this.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: New authentication protocol, was Re: Tinc's response to 'Linux's answer to MS-PPTP'

2003-09-30 Thread Guus Sliepen
On Mon, Sep 29, 2003 at 09:51:20AM -0700, Bill Stewart wrote:

  =Step 1:
  Exchange ID messages. An ID message contains the name of the tinc
  daemon which sends it, the protocol version it uses, and various
  options (like which cipher and digest algorithm it wants to use).
 
 By name of the tinc daemon, do you mean identification information?
 That data should be encrypted, and therefore in step 2.
 (Alternatively, if you just mean tincd version 1.2.3.4, that's fine.)

No, identification information. But still, it's just a name, not a
public key or certificate. It is only used by the receiver to choose
which public key (or certificate etc) to use in Step 2. This information
does not have to be encrypted, it has just as much meaning as the IP
address the sender has.

  Step 2:
  Exchange METAKEY messages. The METAKEY message contains the public part
  of a key used in a Diffie-Hellman key exchange.  This message is
  encrypted using RSA with OAEP padding, using the public key of the
  intended recipient.
 
 You can't encrypt the DH keyparts using RSA unless you first exchange
 RSA public key information, which the server can't do without knowing
 who the client is (the client presumably knows who the server is,
 so you _could_ have the client send the key encrypted to annoy MITMs.)

With tinc, public keys are never exchanged during authentication, they
are known beforehand. And again, there is no distinction between a
client and a server, it is peer to peer.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-29 Thread Guus Sliepen
On Mon, Sep 29, 2003 at 07:53:29AM -0700, Eric Rescorla wrote:

 I'm trying to figure out why you want to invent a new authentication
 protocol rather than just going back to the literature and ripping
 off one of the many skeletons that already exist (

Several reasons. Because it's fun, because we learn more from doing it
ourselves (we learn from our mistakes too), because we want something
that fits our needs. We could've just grabbed one from the shelf, but
then we could also have grabbed IPsec or PPP-over-SSH from the shelf,
instead of writing our own VPN daemon. However, we wanted something
different.

 STS,

If you mean station-to-station protocol, then actually that is pretty
much what we are doing now, except for encrypting instead of signing
using RSA.

 JFK, IKE, SKEME, SIGMA, etc.).

And I just ripped TLS from the list.

 That would save people from the trouble of having to analyze the
 details of your new protoocl.

Several people on this list have already demonstrated that they are
very willing to analyse new protocols. Also, I don't *expect* you to do
so; if you don't want to, just ignore me.

 Why are you using RSA encryption to authenticate your DH rather
 than using RSA signature?

If we use RSA encryption, then both sides know their messages can only
be received by the intended recipient. If we use RSA signing, then both
sides know the messages they receive can only come from the assumed
sender. For the purpose of tinc's authentication protocol, I don't see
the difference, but...

 Now, the attacker chooses 0 as his DH public. This makes ZZ always
 equal to zero, no matter what the peer's DH key is.

I think you mean it is equal to 1 (X^0 is always 1). This is the first
time I've heard of this attack; I had never thought of it myself. In
that case I see the point of signing instead of encrypting.
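(Note: as the follow-up message elsewhere in this thread concedes, the
X^0 objection is a red herring. The attack sets the *public value*, i.e.
the base, to 0, and 0^y mod p = 0 for any secret exponent y >= 1. A quick
check with toy numbers, nowhere near real DH sizes:)

```python
import secrets

# Illustrative parameters only; real DH uses a large safe prime.
p = 2**127 - 1   # a Mersenne prime, standing in for the DH modulus

y = secrets.randbelow(p - 2) + 1   # honest peer's secret exponent (>= 1)
attacker_public = 0                # attacker's bogus DH "public value"

# The honest peer raises the received public value to its secret
# exponent; with base 0 the result is 0 no matter what y is.
ZZ = pow(attacker_public, y, p)
print(ZZ)  # prints: 0
```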

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-29 Thread Guus Sliepen
On Mon, Sep 29, 2003 at 05:57:46PM +0200, Guus Sliepen wrote:

  Now, the attacker chooses 0 as his DH public. This makes ZZ always
  equal to zero, no matter what the peer's DH key is.
 
 I think you mean it is equal to 1 (X^0 is always 1).

Whoops, stupid me. Please ignore that.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-28 Thread Guus Sliepen
On Sat, Sep 27, 2003 at 07:58:14PM +0100, M Taylor wrote:

 Perhaps a HMAC per chunk, rather than per the payload of a single UDP
 datagram. I suspect per every 5 UDP datagrams, roughly ~7000 bytes of 
 payload may work. This will increase latency.

That would not work either. It would have the same problems as a packet
that has been split into 5 fragments: if one of the fragments gets
lost, the whole packet will be discarded. Fragment reassembly is also
not completely trivial; in the past there have been simple DoS attacks
against various operating systems that did not implement IP fragment
reassembly correctly.

Each UDP packet must stand on its own, just like the network packet that has
been encapsulated within it.
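A per-datagram MAC is cheap to sketch (hypothetical framing, not tinc's
actual wire format): each packet carries its own sequence number and
HMAC, so verification never depends on any other packet, and a lost or
corrupted datagram affects only itself:

```python
import hashlib
import hmac
import os
from typing import Optional

key = os.urandom(32)  # assumption: shared MAC key from the handshake

def protect(seq: int, payload: bytes) -> bytes:
    # frame = 4-byte sequence number || payload || 32-byte HMAC-SHA256
    header = seq.to_bytes(4, "big")
    tag = hmac.new(key, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def verify(packet: bytes) -> Optional[bytes]:
    # return the payload if the tag checks out, None otherwise
    header, payload, tag = packet[:4], packet[4:-32], packet[-32:]
    expected = hmac.new(key, header + payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

pkt = protect(1, b"encapsulated IP packet")
print(verify(pkt) == b"encapsulated IP packet")   # prints: True
tampered = pkt[:-1] + bytes([pkt[-1] ^ 1])        # flip one tag bit
print(verify(tampered))                           # prints: None
```

The 32-byte-per-packet overhead is the price of exactly this
independence between datagrams.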

 This should be redone from scratch, I would look at either using
 Diffie Hellman Key Exchange combined with digital signatures or the updated
 Needham Schroeder Public Key Protocol. Exchange two symmetric keys,
 one used for bulk data encryption, the other used for the HMAC
 authentication. 

I think I prefer the Diffie-Hellman key exchange; the Needham-Schroeder
public key protocol needs more round trips and one more RSA
encryption/decryption step.

 I expect this is a reference to Why TCP Over TCP Is A Bad Idea
 http://sites.inka.de/~bigred/devel/tcp-tcp.html

Yes.

 If Guus Sliepen and Ivo Timmermans are willing to seriously rethink their
 high tolerance for unnecessary weakness, I think tinc 2.0 could end up being
 a secure piece of software. I hope Guus and Ivo circulate their version 2.0 
 protocol before they do any coding, so that any remaining flaws can be easily 
 fixed in the paper design without changing a single line of code, saving time 
 and effort.

Those are the first encouraging words I've heard since Peter Gutmann's
writeup was posted on Slashdot, thank you! We do plan to get rid of all
the weaknesses, and once we know what we want and we have a draft, I'll
post it in this mailing list.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-27 Thread Guus Sliepen
 How long was it between the inducement of this idea and its release?

About half a year, I guess. But we didn't design the current protocol
as a replacement for SSL. In fact, we replaced something that was worse
than the current protocol.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Tinc's response to Linux's answer to MS-PPTP

2003-09-26 Thread Guus Sliepen
Hello Peter Gutmann and others,

Because of its appearance on this mailing list and the Slashdot posting
about Linux's answer to MS-PPTP, and in the tinc users' interest, we
have created a section about the current security issues in tinc, which
currently contains a response to Peter Gutmann's writeup:

http://tinc.nl.linux.org/security

I want to emphasize for the cryptography community here that certain
tradeoffs have been made between security and efficiency in tinc. So
please read the response as an explanation of why we think we need to
do (or used to do) it this way, not as a claim that tinc is as secure
as anything else. Comments are welcome.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]

