Simson Garfinkel analyses Skype - Open Society Institute

2005-01-09 Thread Ian G
Voice Over Internet Protocol and Skype Security
Simson L. Garfinkel
January 7, 2005
With the increased deployment of high-speed (broadband) Internet 
connectivity, a growing number of businesses and individuals are using 
the Internet for voice telephony, a technique known as Voice over 
Internet Protocol (VoIP). With a VoIP system, two people can speak with 
each other by using headsets and microphones connected directly to their 
computers.

Skype is a proprietary VoIP system developed by Skype Technologies S.A. 
Like the popular KaZaA file-trading system, Skype is based on 
peer-to-peer technology: instead of transmitting all voice calls through 
a central server, as some VoIP services do (Vonage, for example), Skype 
clients seek out and find other Skype clients, then build from these 
connections a network that can be used to search for other users and 
send them messages.

Is Skype secure? How does its security compare with that of conventional 
telephone calls, or of other VoIP-based systems? In this article 
commissioned by OSI's Information Program, Simson Garfinkel, an expert 
on Internet security and networking issues, looks at the security 
properties of key importance for civil society organizations relying on 
Skype for voice communications.

http://www.soros.org/initiatives/information/articles_publications/articles/security_20050107/OSI_Skype5.pdf
--
News and views on what matters in finance+crypto:
   http://financialcryptography.com/
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: entropy depletion (was: SSL/TLS passive sniffing)

2005-01-09 Thread Ian G
William Allen Simpson wrote:
There are already other worthy comments in the thread(s).

This is a great post.  One can't stress enough
that programmers need programming guidance,
not arcane information theoretic concepts.
We are using
computational devices, and therefore computational infeasibility is the
standard that we must meet.  We _NEED_ unpredictability rather than
pure entropy.

By this, do you mean that /dev/*random should deliver
unpredictability, and /dev/entropy should deliver ...
pure entropy?
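The distinction can be made concrete in code: a short seed of real entropy can be stretched computationally into an effectively unlimited stream of unpredictable bits. A minimal sketch of the idea (illustration only, a toy counter-mode construction, not a vetted DRBG design like Yarrow):

```python
import hashlib

def prng_stream(seed: bytes, nbytes: int) -> bytes:
    # Toy counter-mode generator: hashes seed||counter repeatedly.
    # The output is computationally unpredictable to anyone without the
    # seed, yet contains no more "pure entropy" than the seed itself.
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

# 16 bytes of real entropy suffice to key kilobytes of unpredictable output.
stream = prng_stream(b"0123456789abcdef", 4096)
print(len(stream))  # 4096
```

Computational infeasibility of inverting the hash, not entropy accounting, is what protects the output here.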
So, here are my handy practical guidelines:
(1) As Metzger so wisely points out, the implementations of /dev/random,
/dev/urandom, etc. require careful auditing.  Folks have a tendency to
improve things over time, without a firm understanding of the
underlying requirements.

Right, but in the big picture, this is one of those
frequently omitted steps.  Why?  Coders don't have
time to acquire the knowledge or to incorporate
all the theory of RNG in, and as much of today's
software is based on open source, it is becoming the
baseline that no theoretical foundation is required
in order to do that work.  Whereas before, companies
could (or would) make a pretence at such a foundation; today,
it is acceptable to say that you've read the Yarrow
paper and are therefore qualified.
I don't think this is a bad thing, I'd rather have a
crappy /dev/random than none at all.  But if we
are to improve the auditing, etc, what we would
need is information on just _what that means_.
E.g., a sort of webtrust-CA list of steps to take
in checking that the implementation meets the
desiderata.
(2) The non-blocking nature of /dev/urandom is misunderstood.  In fact,
/dev/urandom should block while it doesn't have enough entropy to reach
its secure state.  Once it reaches that state, there is no future need
to block.

If that's the definition that we like then we should
create that definition, get it written in stone, and
start clubbing people with it (*).
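For what it's worth, that "block only until first seeded, never again" semantics is exactly what the Linux getrandom(2) system call later standardized; Python exposes it as os.getrandom (sketch assumes Linux >= 3.17 and Python >= 3.6):

```python
import os

# getrandom() blocks only until the kernel's pool is first initialized,
# and never blocks again afterwards -- the definition proposed above.
key = os.getrandom(32)
print(len(key))  # 32

# GRND_NONBLOCK makes it fail fast instead of blocking before seeding:
try:
    early = os.getrandom(32, os.GRND_NONBLOCK)
except BlockingIOError:
    print("entropy pool not yet initialized")
```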
(2A) Of course, periodically refreshing the secure state is a good
thing, to overcome any possible deficiencies or cycles in the PRNG.

As long as this doesn't affect definition (2) then it
matters not.  At the level of the definition, that is,
and this note belongs in the implementation notes
as do (2B), (2C).
(2B) I like Yarrow.  I was lucky enough to be there when it was first
presented.  I'm biased, as I'd come to many of the same conclusions,
and the strong rationale confirmed my own earlier ad hoc designs.

(2C) Unfortunately, Ted Ts'o basically announced to this list and
others that he didn't like Yarrow (Sun, 15 Aug 1999 23:46:19 -0400).  Of
course, since Ted was also a proponent of 40-bit DES keying, that depth
of analysis leads me to distrust anything else he does.  I don't know
whether the Linux implementation of /dev/{u}random was ever fixed.

( LOL... Being a proponent of 40-bit myself, I wouldn't
be so distrusting.  I'd hope he was just pointing out
that 40-bits is way stronger than the vast majority
of traffic out there;  that which we talked about here
is buried in the noise level when it comes to real effects
on security simply because it's so rare. )
(3) User programs (and virtually all system programs) should use
/dev/urandom, or its various equivalents.
(4) Communications programs should NEVER access /dev/random.  Leaking
known bits from /dev/random might compromise other internal state.
Indeed, /dev/random should probably have been named /dev/entropy in the
first place, and never used other than by entropy analysis programs in
a research context.

I certainly agree that overloading the term 'random'
has caused a lot of confusion.  And, I think it's an
excellent idea to abandon hope in that area, and
concentrate on terms that are useful.
If we can define an entropy device and present
that definition, then there is a chance that the
implementors of devices in Unixen will follow that
lead.  But entropy needs to be strongly defined in
practical programming terms, along with random
and potentially urandom, taking care to avoid
such crypto-academic notions as information-theoretic
arguments and entropy reduction.

(4A) Programs must be audited to ensure that they do not use
/dev/random improperly.
(4B) Accesses to /dev/random should be logged.
I'm confused by this aggressive containment of the
entropy/random device.  I'm assuming here that
/dev/random is the entropy device (better renamed
as /dev/entropy) and Urandom is the real good PRNG
which doesn't block post-good-state.
If I take out 1000 bits from the *entropy* device, what
difference does it make to the state?  It has no state,
other than a collection of unused entropy bits, which
aren't really state, because there is no relationship
from one bit to any other bit.  By definition.  They get
depleted, and more gets collected, which by definition
are unrelated.
Why then restrict it to non-communications usages?
What does it matter if an SSH daemon leaks bits used
in its *own* key generation if those bits can never be
used for any other purpose?
Re: entropy depletion (was: SSL/TLS passive sniffing)

2005-01-09 Thread Taral
On Sat, Jan 08, 2005 at 10:46:17AM +0800, Enzo Michelangeli wrote:
 But that was precisely my initial position: that the insight on the
 internal state (which I saw, by definition, as the loss of entropy by the
 generator) that we gain from one bit of output is much smaller than one
 full bit. 

I think this last bit is untrue. You will find that the expected number
of states of the PRNG after extracting one bit of randomness is half of
the number of states you had before, thus resulting in one bit of
entropy loss.
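A toy simulation makes the state-halving argument concrete (hypothetical model: a PRNG with 2^16 equally likely internal states, each emitting one fixed output bit):

```python
import random

random.seed(1)
N = 1 << 16
# Assign each internal state a (fixed, attacker-known) output bit.
output_bit = {s: random.getrandbits(1) for s in range(N)}

# After observing one output bit, only the consistent states remain --
# on average, half of them, i.e. one bit of entropy about the state.
observed = 1
consistent = [s for s in range(N) if output_bit[s] == observed]
print(len(consistent) / N)  # close to 0.5
```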

-- 
Taral [EMAIL PROTECTED]
This message is digitally signed. Please PGP encrypt mail to me.
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?




Re: entropy depletion

2005-01-09 Thread William Allen Simpson
Ian G wrote:
(4A) Programs must be audited to ensure that they do not use
/dev/random improperly.
(4B) Accesses to /dev/random should be logged.
I'm confused by this aggressive containment of the
entropy/random device.  I'm assuming here that
/dev/random is the entropy device (better renamed
as /dev/entropy) and Urandom is the real good PRNG
which doesn't block post-good-state.
Yes, that's my assumption (and practice for many years).
If I take out 1000 bits from the *entropy* device, what
difference does it make to the state?  It has no state,
other than a collection of unused entropy bits, which
aren't really state, because there is no relationship
from one bit to any other bit.  By definition.  They get
depleted, and more gets collected, which by definition
are unrelated.
If we could actually get such devices, that would be good. 

In the real world, /dev/random is an emulated entropy device.  It hopes
to pick up bits and pieces of entropy and mashes them together.  In
common implementations, it fakes a guess of the current level of
entropy accumulated, and blocks when depleted. 

If there really were no relation to the previous output -- that is, a
_perfect_ lack of information about the underlying mechanism, such as
the argument that Hawking radiation conveys no information out of
black holes -- then it would never need to block, and there would
never have been a need for /dev/urandom!
(Much smarter people than I have been arguing about the information
theoretic principles of entropy in areas of physics and mathematics
for a very long time.) 

All I know is that it's really hard to get non-externally-observable
sources of entropy in embedded systems such as routers, my long-time
area of endeavor.  I'm happy to add in externally observable sources
such as communications checksums and timing, as long as they can be
mixed in unpredictable ways with the internal sources, to produce the
emulated entropy device.
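The mixing step can be sketched as a simple hash ratchet (a minimal illustration of the idea, not Yarrow itself; the class and method names here are mine):

```python
import hashlib, os, time

class EntropyPool:
    # Fold every observed event into a hash chain, so that no single
    # weak or attacker-observable source fully determines the output.
    def __init__(self):
        self._state = b"\x00" * 32

    def mix(self, event: bytes) -> None:
        self._state = hashlib.sha256(self._state + event).digest()

    def extract(self, n: int = 32) -> bytes:
        out = hashlib.sha256(b"output" + self._state).digest()[:n]
        self.mix(b"ratchet")   # advance state so outputs don't repeat
        return out

pool = EntropyPool()
pool.mix(time.time_ns().to_bytes(8, "big"))  # externally observable timing
pool.mix(os.urandom(16))                     # internal source, where available
print(len(pool.extract()))  # 32
```

An observer who knows some of the mixed events still has to guess the rest to reconstruct the state.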
Because it blocks, it is a critical resource, and should be logged.
After all, a malicious user might be grabbing all the entropy as a
denial of service attack.
Also, a malicious user might be monitoring the resource, looking for
cases where the output isn't actually very random.  In my experience,
rather a lot of supposed sources of entropy aren't very good.

Why then restrict it to non-communications usages?
Because we are starting from the postulate that observation of the
output could (however remotely) give away information about the
underlying state of the entropy generator(s).
What does it matter if an SSH daemon leaks bits used
in its *own* key generation if those bits can never be
used for any other purpose?
I was thinking about cookies and magic numbers, generally transmitted
verbatim.  However, since we have a ready source of non-blocking keying
material in /dev/urandom, it seems better to use that instead of
the blocking critical resource.
--
William Allen Simpson
   Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32


Entropy and PRNGs

2005-01-09 Thread David Wagner
John Denker writes:
Ben Laurie wrote:
 http://www.apache-ssl.org/randomness.pdf

I just took a look at the first couple of pages.
IMHO it has much room for improvement.

I guess I have to take exception.  I disagree.  I think Ben Laurie's
paper is quite good.  I thought your criticisms missed some of the points
he was trying to make (these points are subtle, so this is completely
understandable).  Presumably his paper could be criticized as not clear
enough, since it seems it did not convey those points adequately, but
I don't think his paper is inaccurate.  I'll respond point-by-point.

*) For instance, on page 2 it says

 I can have a great source of entropy, but if an attacker knows
 all about that source, then it gives me no unpredictability at
 all.

That's absurd.  If it has no unpredictability, it has no entropy,
according to any reasonable definition of entropy, including the
somewhat loose definition on page 1.

Actually, I think Ben got it right.  Entropy depends on context.
The attacker might have extra context that allows him to narrow down
the possible values of the randomness samples.

For instance, imagine if we use packet inter-arrival times (measured down
to the nanosecond) as our randomness source.  From the point of view of
an outsider, there might be a lot of entropy in these times, perhaps tens
of bits.  However, from the point of view of an attacker who can eavesdrop
on our local area network, there might be very little or no entropy.

This is the difference between unconditional and conditional entropy that
Ben was trying to introduce.  In information-theoretic notation, H(X)
vs H(X|Y).  Let X = packet inter-arrival time, and Y = everything seen by
a local eavesdropper, and you will see that H(X|Y) can be much smaller
than H(X).  Indeed, we can have H(X|Y) = 0 even if H(X) is very large.
This is Ben's point, and it is a good one.
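The gap between H(X) and H(X|Y) is easy to exhibit in code with a toy distribution (the inter-arrival values below are made up for illustration):

```python
from math import log2

def H(dist):
    # Shannon entropy in bits of a probability distribution
    # given as {value: probability}.
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# X: packet inter-arrival time, four equally likely values -> H(X) = 2 bits.
pX = {t: 0.25 for t in (10, 20, 30, 40)}
print(H(pX))   # 2.0

# A LAN eavesdropper's view Y reveals X exactly, so conditioned on each
# observed Y the distribution of X is a point mass with zero entropy:
# H(X|Y) = sum_y p(y) * H(X | Y=y) = 0, even though H(X) = 2.
HXgY = sum(0.25 * H({x: 1.0}) for x in pX)
print(HXgY)    # 0.0
```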

*) Later on page 2 it says:

 Cryptographers want conditional entropy, but for UUIDs, and
 other applications, what is needed is unconditional entropy.

First of all, you can't throw around the term conditional
entropy without saying what it's conditioned on.

Conditioned on everything known to the attacker, of course.

Also importantly, for UUIDs no entropy is required at all.
A globally-accessible counter has no entropy whatsoever, and
suffices to solve the UUID problem.

A counter is fine as long as there is only one machine in the universe
that will ever assign UUIDs.  However, if you want to do distributed
generation of UUIDs, then counters are insufficient because there is no
way to prevent overlap of two machines' counter spaces.

Perhaps what Ben should have said is that:
* Unconditional entropy is sufficient for UUIDs;
  conditional entropy is not needed.
* For centrally-assigned UUIDs, even unconditional entropy is unnecessary;
  a centrally-managed counter is fine.
* For distributed, unsynchronized assignment of UUIDs, unconditional
  entropy appears to be necessary and sufficient.
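The distributed case is easy to demonstrate (uuid4 draws 122 random bits per identifier):

```python
import uuid

# Two unsynchronized machines using bare counters collide at once:
machine_a = list(range(5))
machine_b = list(range(5))
print(set(machine_a) & set(machine_b))  # {0, 1, 2, 3, 4} -- total overlap

# With 122 random bits per ID (uuid4), uncoordinated machines collide
# only with astronomically small probability -- unconditional entropy
# doing the work, no central counter needed:
a = {uuid.uuid4() for _ in range(1000)}
b = {uuid.uuid4() for _ in range(1000)}
print(len(a & b))  # 0
```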

*) At the bottom of page 2 it says:

 Well, for many purposes, the system time has plenty of
 unconditional entropy, because it is usually different when
 used to generate different UUIDs.

No, the system time has no entropy whatsoever, not by any
reasonable definition of entropy.

Ok, this seems like a fair criticism.

*) On page 4, I find the name trueprng to be an oxymoron.
The P in PRNG stands for Pseudo, which for thousands of
years has meant false, i.e. the opposite of true.

Another reasonable point.  Perhaps truerng would be a better name, then?



Safecracking for the computer scientist

2005-01-09 Thread Matt Blaze
I've been thinking for a while about the relationship between the
human-scale security systems used to protect the physical world and
the cryptologic and software systems that protect the electronic
world.  I'm increasingly convinced that these areas have far more
in common than we might initially think, and that each can be
strengthened by applying lessons from the other.
I've started writing down much of what I've learned about a
particularly interesting area of high-end human-scale security --
safes and vaults.  A draft survey of safe security from a CS
viewpoint, Safecracking for the computer scientist, is at:
http://www.crypto.com/papers/safelocks.pdf
This is a big file -- about 2.5MB -- and is heavily illustrated.
This is the same paper that was slashdotted last weekend, but I
figured some here may not have seen it and may enjoy it.
-matt


Re: The Reader of Gentlemen's Mail, by David Kahn

2005-01-09 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], Bill Stewart writes:
My wife was channel-surfing and ran across David Kahn talking about his 
recent book
The Reader of Gentlemen's Mail: Herbert O. Yardley and the Birth of 
American Codebreaking.

ISBN 0300098464 , Yale University Press, March 2004

Amazon's page has a couple of good detailed reviews
http://www.amazon.com/exec/obidos/ASIN/0300098464/qid=1105254301/sr=2-1/ref=pd_ka_b_2_1/102-1630364-0272149


I have the book.  For the student of the history of cryptography, it's 
worth reading.  For the less dedicated, it's less worthwhile.  It's not 
The Codebreakers; it's not The Code Book; other than the title 
quote (and I assume most readers of this list know the story behind 
it), there are no major historical insights.

The most important insight, other than Yardley's personality, is what 
he was and wasn't as a cryptanalyst.  The capsule summary is that he 
was *not* a cryptanalytic superstar.  In that, he was in no way a peer 
of or a competitor to Friedman.  His primary ability was as a manager 
and entrepreneur -- he could sell the notion of a Black Chamber (with 
the notorious exception of his failure with Stimson), and he could 
recruit good (but not always great) people.  But he never adapted 
technically.  His forte was codes -- he knew how to create them and how
to crack them.  But the world's cryptanalytic services were also 
learning how to crack them with great regularity; that, as much as 
greater ease of use, was behind the widespread adoption of machine 
cryptography (Enigma, M-209, Typex, Purple, etc.) during the interwar
period.  Yardley never adapted and hence he (and his organizations) 
became technologically obsolete.

One of the reviews on Amazon.com noted skeptically Kahn's claim that 
Friedman was jealous of Yardley's success with women.  I have no idea 
if that's true, though moralistic revulsion may be closer.  But I 
wonder if the root of the personal antagonism may be more that of the 
technocrat for the manager...

--Prof. Steven M. Bellovin, http://www.cs.columbia.edu/~smb


