[Cryptography] Vulnerabilities (in theory and in practice) in P25 two-way radios

2011-08-10 Thread Matt Blaze
Our Usenix Security paper (by Sandy Clark, Travis Goodspeed, Perry Metzger,
Zachary Wasserman, Kevin Xu, and me) on vulnerabilities in the P25 two-way
radio system (used by public safety agencies in the US and elsewhere) is out today.

See

   http://www.crypto.com/papers/p25sec.pdf

for the paper (pdf format) and

   http://www.crypto.com/p25

for a summary of mitigations.

-matt

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Czech intel agency allegedly offered tax free cash to local crypto vendor to incorporate defects

2010-09-27 Thread Matt Blaze
I don't know anything beyond this news story, but interesting...

http://www.praguemonitor.com/2010/09/14/mfd-bis-offers-tax-free-money-encryption-system
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: SHA-1 collisions now at 2^{52}?

2009-05-02 Thread Matt Blaze


On May 2, 2009, at 5:53, Peter Gutmann wrote:


Perry E. Metzger pe...@piermont.com writes:

Greg Rose g...@qualcomm.com writes:
It already wasn't theoretical... if you know what I mean. The writing
has been on the wall since Wang's attacks four years ago.

Sure, but this should light a fire under people for things like TLS 1.2.

Why?

Seriously, what threat does this pose to TLS 1.1 (which uses HMAC-SHA1 and
SHA-1/MD5 dual hashes)?  Do you think the phishers will even notice this as
they sort their multi-gigabyte databases of stolen credentials?

[snip]

I must admit I don't understand this line of reasoning (not to pick
on Perry, Greg, or Peter, all of whom have a high level of
crypto-clue and who certainly understand protocol design).

The serious concern here seems to me not to be that this particular
weakness is a "last straw" wedge that enables some practical attack
against some particular protocol -- maybe it is and maybe it isn't.
What worries me is that SHA-1 has been demonstrated not to have a
property -- infeasibility of finding collisions -- that protocol designers
might have relied on it for.

Security proofs become invalid when an underlying assumption is
shown to be invalid, which is what has happened here to many
fielded protocols that use SHA-1. Some of these protocols may well
still be secure in practice even under degraded assumptions, but to
find out, we'd have to analyze them again.  And that's a non-trivial
task that as far as I know has not been done yet (perhaps I'm wrong
and it has).  "They'll never figure out how to exploit it" is not,
sadly, a security proof.

Any attack that violates basic properties of a crypto primitive
is a serious problem for anyone relying on it, pretty much by
definition.
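To make the broken property concrete, here's a toy sketch (my own illustration, not anything from the result under discussion): a generic birthday search against SHA-1 truncated to 32 bits, which finds a collision in roughly 2^16 trials. Against full 160-bit SHA-1 the same generic search costs about 2^80 work; the significance of a 2^52 attack is that it beats that generic bound by exploiting structure in the function itself.

```python
import hashlib
import itertools

def weak_hash(data: bytes) -> bytes:
    """SHA-1 truncated to 32 bits, so collisions are cheap to find."""
    return hashlib.sha1(data).digest()[:4]

def find_collision():
    """Generic birthday search: ~2^(n/2) trials for an n-bit hash."""
    seen = {}
    for i in itertools.count():
        msg = b"message %d" % i
        digest = weak_hash(msg)
        if digest in seen:
            return seen[digest], msg  # two distinct messages, same digest
        seen[digest] = msg

m1, m2 = find_collision()
assert m1 != m2 and weak_hash(m1) == weak_hash(m2)
print("collision:", m1, m2)
```

A protocol that treats weak_hash(m) as a unique identifier for m is broken by exactly this kind of pair, whether or not anyone has yet written down the exploit.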

-matt



Domestic surveillance and warrantless wiretaps

2008-12-28 Thread Matt Blaze

Like many people, I found last week's Newsweek cover
piece, revealing Thomas M. Tamm as the principal source
for James Risen and Eric Lichtblau's 2005 NY Times story
that broke the warrantless wiretap story, to be a riveting
read.

But I actually found a sidebar to the story even more
interesting. That story talks about the now famous 2004
incident at Ashcroft's hospital bed in which several
top DoJ officials threatened to resign. It turns out
that was not about warrantless content collection,
but rather about the wholesale collection of call
records:

  http://www.newsweek.com/id/174602/output/print

This story raises a number of new -- and ultimately
quite disturbing -- questions about the nature of the
wiretap program and the extent of its reach into the
domestic communication of innocent Americans.  In
particular, put together with other reports about the
program, it seems to corroborate claims that telcos
(including my alma mater AT&T) provided the NSA with
wholesale access to domestic call detail records, and
that top DoJ officials worried seriously that this
violated the law.

I discuss the implications of this in more detail on
my blog; perhaps some here will find it interesting:

http://www.crypto.com/blog/metatapping

-matt



Re: road toll transponder hacked

2008-08-26 Thread Matt Blaze


On Aug 26, 2008, at 10:15, [EMAIL PROTECTED] wrote:

On Tue, Aug 26, 2008 at 9:24 AM, Perry E. Metzger
[EMAIL PROTECTED] wrote:

http://www.technologyreview.com/Infotech/21301/?a=f

From the article: "other toll systems, like E-Z Pass and I-Pass, need
to be looked at too"

A couple years ago I got a letter from E-Z Pass a few days after I
used my transponder in my new car without registering my new car. They
gave me a grace period to register before making me pay some sort of
penalty.

So, I believe, at least for E-Z Pass, the attack would have to include
cloning the license plate and pictures may still be available whenever
a victim realizes they have been charged for trips they did not take.



I believe that's correct.  In fact, the plate recognition technology they
use seems to be good enough to make the transponder itself redundant.
I know several people with E-Z Pass who disconnected the internal
battery of their transponder (out of concern that there might be
hidden readers around town that track vehicles at places other than
toll gates).   Even with dead transponders, their accounts are still
charged accurately when they pass toll gates.  (The sign displays "EZ Pass
not read" or some such thing, but the account is debited within a day
or two anyway).

-matt



Security by restraining order

2008-08-13 Thread Matt Blaze
The EFF yesterday filed a letter from a number of academic security researchers
urging the judge in the MIT CharlieCard case to reverse the restraining
order.  It can be found on the EFF's case page, at
   http://www.eff.org/cases/mbta-v-anderson/

As a security researcher (and one of the signers of the letter to the
judge), I was particularly struck by the ironic -- and very unfortunate --
message that the court order sends to our community:  it's safer to
irresponsibly blindside users and vendors by publishing about
vulnerabilities without warning them first (thus denying them the
opportunity to seek a pre-publication gag order).

Surely that's not what the court or the MBTA seek to encourage here.


I blog a bit more about this at
  http://www.crypto.com/blog/security_through_restraining_orders/

-matt





On Aug 13, 2008, at 3:58, David Farber wrote:


clipped from Steve Bellovin's blog --
The MBTA versus (Student) Security Researchers
12 August 2008

As I'm sure many of you have heard, the MBTA (Massachusetts Bay  
Transportation Authority) has a very insecure fare payment system.  
Some students at MIT, working under the supervision of Ron Rivest —  
yes, that Ron Rivest, the R in RSA — found many flaws and planned  
a presentation at DEFCON on it. The MBTA sought and received an  
injunction barring the presentation, but not only were the slides  
already distributed, the MBTA's court filing included a confidential  
report prepared by the students with more details than were in the  
talk...


The Electronic Frontier Foundation is appealing the judge's order,  
and rightly so. Not only is this sort of prior restraint blatantly  
unconstitutional, it's bad public policy: we need this sort of  
security research to help us build better systems. I and a number of  
other computer scientists have signed a letter supporting the  
appeal. You can find the complete EFF web page on the case here.


djf --- Here's the letter:

http://www.eff.org/files/filenode/MBTA_v_Anderson/letter081208.pdf

The rest of the case files are here:
http://www.eff.org/cases/mbta-v-anderson




Surveillance, secrecy, and ebay

2008-07-26 Thread Matt Blaze

One of the less-discussed risks of widespread surveillance is
not just the abuse or misuse of intercepted content and meta-
data by the government, but its accidental disclosure. As
more and more private data gets collected, and as it sits
around for longer and longer, it becomes inevitable that some
of it will end up in surprising places.  No malice is required;
it's practically impossible to avoid.  And this is not merely
a hypothetical concern.  Case in point:

I recently indulged myself with a used Nagra SNST tape
recorder, a beautifully-engineered miniature reel-to-reel
device that was especially popular with law enforcement and
intelligence agencies from the 70's to the 90's.  (Hey, I'm an
old-school geek -- I like gadgets.)

The recorder came with a tape reel, which I had assumed
was blank or erased. But a couple of days ago, I decided to
double check just to be sure.  To my surprise, the tape
wasn't blank at all.  It contained a recording of a wired
confidential informant being sent out to buy drugs on behalf
of a state police agency in 1996.

The recording was pretty innocuous and boring, to be honest
(the deal never happened, and most of the tape is the sound
of a car being driven to the buy location).  But there was
a disturbing element: the tape contained the full names of both
the suspect and the supposedly confidential informant!

I've got an MP3 of the tape on my blog.  The names of the
hapless informant and suspect have been muted out in the name
of good sense:
  http://www.crypto.com/blog/watching_the_watchers_via_ebay/

Unfortunately, this is hardly an isolated incident; this sort of
inadvertent disclosure of sensitive information -- stuff that
could cause people real harm -- happens all the time.  And law
enforcement agencies can be among the most careless offenders.  A
couple of years ago, when my grad students and I were studying
telephone wiretaps and were buying up surplus law enforcement
wiretapping gear, we were disturbed to discover that almost none
of the equipment we bought had been sanitized before being sold
off.  Pen registers bought from several different agencies (on
ebay and other places) generally were delivered in the state in
which they were last used, configured complete with suspects'
telephone numbers and call detail records.

None of this should be terribly surprising.  It's becoming harder
and harder to destroy data, even when it's as carefully controlled
as confidential legal evidence. Aside from copies and backups made
in the normal course of business, there's the problem of obsolete
media in obsolete equipment; there may be no telling what
information is on that old PC being sent to the dump, where it
might end up, or who might eventually read it.   More secure storage
practices -- particularly transparent encryption -- can help here,
but they won't make the problem go away entirely.   Once sensitive
or personal data is captured, it stays around forever, and the
longer it does, the more likely it is that it will end up somewhere
unexpected.  This is yet another reason why everyone should be
concerned about large-scale surveillance of the kind recently
authorized by Congress; it's simply unrealistic to expect that the
personal information collected will remain confidential for very
long.

-matt



Security and Human Behavior workshop

2008-07-02 Thread Matt Blaze

There was a terrific interdisciplinary workshop this week at MIT on
security and human behavior.  Organized by Ross Anderson and
Bruce Schneier, the idea was to bring together security researchers
from diverse fields who don't normally talk with each other: computing,
psychology, economics, criminology, sociology, etc.

There weren't any new earth-shattering research results presented; rather
the idea was to inspire security thinkers who should be learning from each
other (but often don't) to reach outside their own disciplines.  There's a
lot to learn.

Bruce's comments, with links to an agenda and position papers, are on his
blog at:
   http://www.schneier.com/blog/archives/2008/06/security_and_hu.html

Ross liveblogged the sessions at:
   http://www.lightbluetouchpaper.org/2008/06/30/security-psychology/

I recorded most of the sessions, for those who enjoy listening to hours
of noisy, out-of-context audio of events they didn't attend.  MP3s can
be found at:
   http://www.crypto.com/blog/shb08

-matt




Re: How far is the NSA ahead of the public crypto community?

2008-05-09 Thread Matt Blaze


On May 8, 2008, at 19:08, Leichter, Jerry wrote:
An interesting datapoint I've always had on this question:  Back in 1975
or so, a mathematician I knew (actually, he was a friend's PhD advisor)
left academia to go work for the NSA.  Obviously, he couldn't say
anything at all about what he would be doing.

The guy's specialty was algebraic geometry - a hot field at the time.
This is the area of mathematics that studied elliptic curves many years
before anyone realized they had any application to cryptography.  In
fact, it would be years before anyone on the outside could make any
kind of guess about what in the world the NSA would want a specialist
in algebraic geometry to do.  At the time, it was one of the purest
of the pure fields.



I've heard similar recollections of mathematicians from improbably
abstract specialties being eagerly taken in by NSA, throughout the
cold war.   I've also heard it said that at one time NSA was the
US's single largest employer of math PhDs.  I don't know if that
was actually true, but it certainly seems plausible.

But it's also important to remember that crypto isn't the only
area of the NSA mission that benefits from mathematical expertise.
I suspect that while many of these NSA math PhDs were indeed doing
cryptomathematics, a large fraction were (and are) working on
other SIGINT problems such as signal processing, databases and
searching, coding theory, machine learning, and so on.  Some of the
(non-crypto) problems here seem rather specific to the NSA's domain,
and so don't likely have an advanced civilian research community
competing with them the way academic crypto does today.

A couple of the papers from the 1970's hint (in redacted form,
frustratingly) that the NSA then had large-scale automatic systems
for intercepting and processing Morse code signals from large
blocks of radio spectrum, which implies some pretty advanced
(for that era) signal processing and computing, crypto aside.

-matt



How far is the NSA ahead of the public crypto community?

2008-05-08 Thread Matt Blaze
During the 1980's and 1990's crypto wars, an occasional topic of speculation
was just how much the NSA was ahead of the open/public/academic cryptography
research community in cryptanalysis and cipher design.  We wondered (and
still wonder) whether the NSA was merely a strong center of expertise, a bit
ahead of the rest of us by virtue of their focused mission and culture, or
whether they were more of a crypto-mathematical superpower, possessing
amazing techniques that effectively demolish every cipher in the public
domain.

For those of us in the unclassified world, there has been relatively little
evidence to go on beyond the occasional tantalizing technical nugget, and
even those have been hardly uniform in their message.  The impressively
well-engineered resistance of DES to differential cryptanalysis (apparently
called the "tickle attack" on the inside years before Biham and Shamir's
result) and the narrow -- but apparently solid -- resistance of Skipjack to
various new attacks suggest a remarkably sophisticated set of decades-old
cipher design and analysis tools that the civilian world is only beginning
to catch up with.  On the other hand, there have been blunders, like the
early problems with SHA and the protocol weaknesses in Clipper, that
suggest that the NSA's crypto toolkit might not be all that much sharper
than ours after all.

Anyway, there's now a bit more fuel for speculation.  The latest batch of
(still partly redacted) publicly-released NSA technical and historical
publications includes several policy papers from the 1990's that touch on
NSA's dominance over crypto in the face of an increasingly sophisticated
public research community (among other factors).  One of the most
interesting (if frustratingly censored) new documents to address this
point is "Third Party Nations: Partners and Targets" from Winter 1989:
http://www.nsa.gov/public/third_part_nations.pdf

This paper discusses the pros and cons (from the NSA's perspective) of
sharing cryptologic technology with other countries.  The specifics
(presumably naming the countries concerned) are all redacted, but what
remains is a hypothetical dialog between "liberal" (pro-sharing) and
"conservative" (anti-sharing) internal viewpoints.  Page 8 of the PDF
(marked as page 17) addresses the general spread of cryptographic
expertise.  Interestingly, both the liberal and the conservative sides
acknowledge the rapid development of public cryptographic expertise, and
this was back in 1989.  The conservative argument relied here not on the
NSA's better crypto-mathematics (an advantage that they seemed to believe
was shrinking), but rather on the large gap between the theory and actual
deployment in the non-NSA world (a problem that we here have long
recognized).

Anyway, this isn't big news, since it's essentially what most of us have
suspected all along, but this is the earliest document I'm aware of from
inside the NSA to explicitly address the question.

Personally, I suspect the NSA does have a large advantage in SIGINT
technologies, but in those areas, like demodulation of unknown signals,
for which there's less of a civilian research interest.  The vibrant
crypto research community, on the other hand, has probably evolved to
the point of being a serious competitor to NSA.

On a side note, I've also been enjoying filling in some of the redacted
gaps in the various technical papers.  I was particularly delighted
to discover a fun little paper on safecracking (an analysis of the
keyspaces of safe locks), which was very similar to part of a survey I
published a few years ago.   I discuss what's likely in some of the
redacted material from that paper in a recent blog post at
   http://www.crypto.com/blog/nsa_safecracking/
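To give a flavor of the kind of keyspace analysis involved (the parameters below are assumptions for the sake of the sketch, not figures from either paper): a three-wheel combination lock with 100 dial graduations nominally has 100^3 = 1,000,000 settings, but mechanical dialing tolerance merges adjacent positions, and a forbidden band on the last wheel (assumed here) shrinks the space further.

```python
def effective_keyspace(graduations=100, wheels=3, tolerance=1, forbidden=20):
    """Estimate distinct combinations after accounting for dialing tolerance.

    A tolerance of +/-1 graduation means positions fewer than 3 apart are
    indistinguishable, so only ~graduations/3 settings per wheel are
    distinct.  'forbidden' models a band on the last wheel (width assumed)
    that manufacturers exclude to avoid the opening cam.
    """
    distinct = graduations // (2 * tolerance + 1)       # distinct per-wheel settings
    last = distinct - forbidden // (2 * tolerance + 1)  # last wheel, minus the band
    return distinct ** (wheels - 1) * last

nominal = 100 ** 3
effective = effective_keyspace()
print(nominal, effective)  # effective is only a few percent of nominal
```

The point of analyses like this is that the dialable keyspace a safecracker must search is far smaller than the number printed on the dial would suggest.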

-matt



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Matt Blaze

Nonsense. Total nonsense. A half-decent reverse engineer does not
need the source code and can easily determine the exact operation of
all the security-related components from the compiled executables,
extracted ROM/EPROM code or reversed FPGA/ASIC layout


I'm glad to know that you have managed to disprove Rice's
Theorem. Could you explain to us how you did it? I suspect there's an
ACM Turing Award awaiting you.

Being slightly less sarcastic for the moment, I'm sure that a good
reverse engineer can figure out approximately what a program does by
looking at the binaries and approximately what an ASIC does given
good equipment to get the layout. What you can't do, full stop, is
know that there are no unexpected security related behaviors in the
hardware or software. That's just not possible.




In particular, while it's certainly true that an expert can often discover
unexpected security-related behavior by careful examination of source
(or object) code, the absence of such a discovery, no matter how
expert the examination, is no guarantee of anything, for general software
and hardware designs.

And on a slight tangent, this is why it was only with great reluctance that
I agreed to participate in the top-to-bottom voting system reviews
conducted last year by California and Ohio.  If flaws were found (as they
were), that would tell us that there were flaws.  But if no flaws had
been found, that would tell us nothing about whether any such flaws were
present.  It might just have been that we were bad at our job, that the
flaws were subtle, or that something prevented us from noticing them.  Or
maybe there really are no flaws. There'd be no way to know for sure.

I ultimately decided to participate because I suspected that it was likely,
based on the immaturity of the software and the apparent lack of security
engineering in the design process for these systems, that we would find
vulnerabilities.  But what happens when those are fixed?  Should we then
conclude that the system is now secure?  Or should we ask another set
of experts to take another look?

After some number of iterations of this cycle, the experts might stop
finding vulnerabilities.  What can we conclude at that point?

It's a difficult question, but the word "guarantee" almost certainly
does not belong in the answer (unless preceded by the word "no").

-matt




Rewriting the cryptography debate

2008-03-15 Thread Matt Blaze

So I recently re-read Lawrence Wright's controversial piece in the
New Yorker profiling Director of National Intelligence Mike McConnell.
(http://www.newyorker.com/reporting/2008/01/21/080121fa_fact_wright)
While the piece's glimpses into the administration's attitudes toward
torture and warrantless wiretaps have gotten much attention, I was
particularly struck by this paragraph:

 In the nineties, new encryption software that could protect telephone
 conversations, faxes, and e-mails from unwarranted monitoring was coming
 on the market, but the programs could also block entirely legal efforts
 to eavesdrop on criminals or potential terrorists. Under McConnell's
 direction, the N.S.A. developed a sophisticated device, the Clipper Chip,
 with a superior ability to encrypt any electronic transmission; it also
 allowed law-enforcement officials, given the proper authority, to decipher
 and eavesdrop on the encrypted communications of others. Privacy advocates
 criticized the device, though, and the Clipper was abandoned by 1996. "They
 convinced the folks on the Hill that they couldn't trust the government to
 do what it said it was going to do," Richard Wilhelm, who was in charge of
 information warfare under McConnell, says.

This seems to me a significant re-writing of history, and the Wilhelm quote
a particularly disingenuous interpretation of recent events.  In fact,
Clipper died on the vine due to technical problems that rendered it
ineffective for its intended purpose (to say nothing of the extravagance
of being implemented in an expensive tamper-resistant ASIC).  And key
escrow and crypto export controls died (in 2000) not from an act of
Congress (which never actually voted on any cryptography legislation), but
from unilateral action within the executive branch.  In 2004, the Bush
administration further liberalized the crypto export control policies of
the previous administration, which I believe had (and still have) strong
bipartisan support.

While Clipper certainly was a lightning rod for criticism on privacy
grounds, the changes in policy that eventually occurred can hardly be
attributed to some sort of frightened capitulation to an out-of-control
privacy lobby, as the quote implies.

I blog a bit more about this at http://www.crypto.com/blog/mcconnell_clipper/


-matt



Re: Failure of PKI in messaging

2007-02-12 Thread Matt Blaze

I'm all for email encryption and signatures, but I don't see
how this would help very much against today's phishing attacks,
at least not without a much better trust management interface in
email clients (of a kind much better than anything that currently
exists in web browsers).

Otherwise the phishers could just sign their email messages with
valid, certified email keys (that don't belong to the bank)
the same way their decoy web traffic is sometimes signed with
valid, certified SSL keys (that don't belong to the bank).

And even if this problem were solved, most customers still
wouldn't know not to trust unsigned messages purporting
to be from their bank.

-matt

On Feb 12, 2007, at 16:43, James A. Donald wrote:


 --
Obviously financial institutions should sign their
messages to their customers, to prevent phishing.  The
only such signatures I have ever seen use gpg and come
from niche players.

I have heard that the reason no one signs using PKI is
that lots of email clients throw up panic dialogs when
they get such a message, and at best they present an
opaque, incomprehensible, and useless interface.  Has
anyone done marketing studies to see why banks and
massively phished organizations do not sign their
messages to their customers?

 --digsig
  James A. Donald
  6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
  BwrcLrYHszR0syC9LdVrjxAionyxVDwbtJq8Xu2q
  4ky71ODjPeHF5TC4pnkktFaLHEOfFN4fY8JEyqnfn



Intuitive cryptography that's also practical and secure.

2007-01-26 Thread Matt Blaze

I was surprised to discover that one of James Randi's million dollar
paranormal challenges is protected by a remarkably weak (dictionary-based)
commitment scheme that is easily reversed and that suffers from
collisions. For details, see my blog entry about it:
   http://www.crypto.com/blog/psychic_cryptanalysis/

I had hoped to be able to suggest a better scheme to Randi (e.g., one
based on a published, scrutinized bit commitment protocol).  Unfortunately
I don't know of any that meets all his requirements, the most important
(aside from security) being that his audience (non-cryptographers
who believe in magic) be able to understand and have confidence in it.
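For reference, the sort of published construction alluded to above can be sketched in a few lines (a standard salted-hash commitment; this is my own sketch, and its hiding property rests on the hash behaving like a random oracle -- exactly the kind of assumption a non-cryptographer audience would have to take on faith):

```python
import hashlib
import secrets

def commit(message: bytes):
    """Commit to a message: publish the digest, keep (nonce, message) secret."""
    nonce = secrets.token_bytes(32)  # 256-bit random salt defeats dictionary reversal
    digest = hashlib.sha256(nonce + message).hexdigest()
    return digest, nonce

def verify(digest: str, nonce: bytes, message: bytes) -> bool:
    """Open the commitment by revealing the nonce and message."""
    return hashlib.sha256(nonce + message).hexdigest() == digest

d, n = commit(b"the prediction")
assert verify(d, n, b"the prediction")
assert not verify(d, n, b"a different prediction")
```

The long random nonce is what distinguishes this from a dictionary-based scheme: without it, anyone can enumerate likely messages and reverse the commitment.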

It occurs to me that the lack of secure, practical crypto primitives and
protocols that are intuitively clear to ordinary people may be why
cryptography has had so little impact on an even more important problem
than psychic debunking, namely electronic voting. I think intuitive
cryptography is a very important open problem for our field.

-matt



Re: Creativity and security

2006-03-28 Thread Matt Blaze


On Mar 26, 2006, at 22:07, Joseph Ashwood wrote:

- Original Message - From: J. Bruce Fields [EMAIL PROTECTED]
Subject: Re: Creativity and security

On Fri, Mar 24, 2006 at 06:47:07PM -, Dave Korn wrote:
  IOW, unless we're talking about a corrupt employee with a photographic
memory and telescopic eyes,

Tiny cameras are pretty cheap these days, aren't they?  The employee
would be taking more of a risk at that point though, I guess.


The one I find scarier is the US restaurant method of handling
cards. For those of you unfamiliar with it, I hand my card to the
waiter/waitress, the card disappears behind a wall for a couple of
minutes, and my receipt comes back for me to sign along with my card.
Just to see if anyone would notice, I actually did this experiment
with a (trusted) friend who works at a small upscale restaurant. I
ate, she took my card in the back, and without hiding anything or
saying what she was doing she took out her cellphone, snapped a
picture, then processed everything as usual. The transaction did
not take noticeably longer than usual, and the picture was very clear;
in short, if I hadn't known she was doing this back there I would
never have known. Even at a high-end restaurant where there are
more employees than clients, no one paid enough attention in the
back to notice. If it wasn't a trusted friend doing this I
would've been very worried.

   Joe



Heh, that's marvelous.

I touched briefly on the awfulness of restaurant payment protocols in my
2004 paper from the Cambridge Protocols Workshop, which you may enjoy:

   M. Blaze. Toward a broader view of security protocols.
   12th Cambridge International Workshop on Security Protocols.
   Cambridge, UK. April 2004.

   http://www.crypto.com/papers/humancambridgepreproc.pdf

-matt




Re: serious threat models

2006-02-04 Thread Matt Blaze

Yes, it's not at all clear from these stories just what was
going on or how high-tech the attack would have to be. What does
"diverting to a prepaid mobile" mean?  Here's a possibility:
they social-engineered or otherwise compromised the target account
to assign it a new telephone number and forward the old number
to a prepaid account they control.  The interceptor box acts
as a man in the middle that receives calls at this prepaid account
and forwards them back to the target's new number (all the
while recording the content).

Such an arrangement would allow interception of incoming calls (but
not outgoing calls, unless they managed to get those forwarded
as well somehow -- perhaps there's a GSM feature that can do that,
too).  Cumbersome, but it has the advantage to the attacker of not
requiring any custom software or features on the switch or
cryptanalysis of the over-the-air interface, just garden-variety
subscriber account compromise and cobbling together a couple of
off-the-shelf GSM handsets.
-matt

On Feb 3, 2006, at 4:15, Jaap-Henk Hoepman wrote:



I wondered about that too. Do commonly used mobile phone switches have
built-in functionality to divert (or rather split) calls to another phone?
Could this be done using phone conference facilities? Or could you easily
use lawful interception functionality...? In other words, could it be done
by reconfiguring the switch?  Or would it require more drastic changes
(software/hardware) to the switch (which makes the number of people that
could actually do this much smaller...)

Jaap-Henk
(who should have paid more attention to phone switches when he worked at
a telco... but everybody did internet there then ;-)

On Thu, 02 Feb 2006 21:28:31 -0500 Steven M. Bellovin
[EMAIL PROTECTED] writes:
I hate to play clipping service, but this story is too important not to
mention.  Many top Greek officials, including the Prime Minister, and
the U.S. embassy had their mobile phones tapped.  What makes this
interesting is how it was done: software was installed on the switch
that diverted calls to a prepaid phone.  Think about who could manage
that.

http://www.guardian.co.uk/mobile/article/0,,1701298,00.html
http://www.globetechnology.com/servlet/story/RTGAM.20060202.wcelltap0202/BNStory/International/



--Steven M. Bellovin, http://www.cs.columbia.edu/~smb








--
Jaap-Henk Hoepman   |  I've got sunshine in my pockets
Dept. of Computer Science   |  Brought it back to spray the day
Radboud University Nijmegen |Gry Rocket
(w) www.cs.ru.nl/~jhh   |  (m) [EMAIL PROTECTED]
(t) +31 24 36 52710/53132   |  (f) +31 24 3653137







Re: Webcast of crypto rump session this year!

2005-08-17 Thread Matt Blaze

And for those who didn't catch this bit on the webcast (or in person):

The Bletchley Park Trust wants to sell off the building that houses
the Colossus rebuild and turn it into housing.

Another group, the Bletchley Park Heritage (run by, among others,
the amazingly interesting Tony Sale) hopes to buy the land and
preserve and expand the project.

For those not familiar with it, the Colossus rebuild is a remarkable
engineering effort to reconstruct what was arguably the world's first
electronic computer.

They BADLY need cash.

http://www.bletchleyparkheritage.org.uk/


On Aug 16, 2005, at 14:34, james hughes wrote:

For those interested, testing will begin at 3:30pm PDT. For more
information, see

http://www.iacr.org/conferences/crypto2005/rump.html

The program is now available
http://www.iacr.org/conferences/crypto2005/C05rump.pdf

Please feel free to forward this to other security lists. I believe
it will be an interesting rump session.


Thanks

jim


On Aug 14, 2005, at 5:42 PM, james hughes wrote:



I now have new and good news.

There _WILL_ be a webcast of this year's Crypto which will  
commence at 7pm this Tuesday (Aug 16th).


Please watch http://www.iacr.org which will be posted as soon as  
further information is known!


Please feel free to cross post this message to other cryptography  
related lists!


Enjoy!

jim



On Aug 12, 2005, at 9:07 AM, Mads Rasmussen wrote:






Does anyone know whether there will be webcasts from this year's
Crypto conference?


--
Mads Rasmussen
Security Consultant
Open Communications Security
+55 11 3345 2525



 
-


















-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Weaknesses in RFID-based transponders

2005-01-29 Thread Matt Blaze
A group of computer scientists at Johns Hopkins and RSA Labs
is reporting practical attacks against the TI Digital Signature
Transponder RFID chip, which is used, among other things, to
secure many automotive transponder ignition keys and the
SpeedPass payment system.  Their paper is available at

   http://www.rfidanalysis.org

The results are also mentioned in today's New York Times, at

   http://www.nytimes.com/2005/01/29/national/29key.html

Aside from the practical significance of this work (a thief
may be able to copy your ignition immobilizer and payment
transponder from a short distance away without your knowledge
or cooperation), it nicely illustrates yet again the increasing
convergence of cryptology, computer security and physical security,
as well as the importance of exposing any security technology to
scrutiny before it is fielded.

From a cursory scan of the paper, it appears that these attacks
could have been easily avoided had the designers of the system
followed well known, widely accepted computer security practices
such as the use of well-scrutinized algorithms and, most importantly,
not depending on easily discovered secrets.  Unfortunately, as
this work demonstrates, many designers of both computer and
physical security systems have yet to take these principles
seriously.
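
To put rough numbers on "easily discovered secrets": the transponder
studied here uses a proprietary cipher with only a 40-bit key.  A
back-of-envelope sketch in Python (the search rate below is an
assumed, illustrative figure, not one from the paper):

```python
# Why a 40-bit key is an "easily discovered" secret.
# The 2**24 keys/second rate is an assumption for modest hardware,
# not a measurement from the paper.

keyspace = 2 ** 40               # about 1.1 trillion candidate keys
rate = 2 ** 24                   # assumed keys tried per second, per device
worst_case_s = keyspace // rate  # exhaustive search, worst case

assert worst_case_s == 2 ** 16   # 65,536 seconds
print(f"about {worst_case_s / 3600:.1f} hours on one device")
```

A single cheap device exhausts the keyspace in under a day, and a
small cluster of them does it in minutes; the key length alone is
enough to make the secret "easily discovered."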
-matt
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Safecracking for the computer scientist

2005-01-09 Thread Matt Blaze
I've been thinking for a while about the relationship between the
human-scale security systems used to protect the physical world and
the cryptologic and software systems that protect the electronic
world.  I'm increasingly convinced that these areas have far more
in common than we might initially think, and that each can be
strengthened by applying lessons from the other.
I've started writing down much of what I've learned about a
particularly interesting area of high-end human-scale security --
safes and vaults.  A draft survey of safe security from a CS
viewpoint, Safecracking for the computer scientist, is at:
http://www.crypto.com/papers/safelocks.pdf
This is a big file -- about 2.5MB -- and is heavily illustrated.
This is the same paper that was slashdotted last weekend, but I
figured some here may not have seen it and may enjoy it.
-matt
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Question on the state of the security industry (second half not necessarily on topic)

2004-07-09 Thread Matt Blaze
On Jul 3, 2004, at 14:22, Dave Howe wrote:
Well if nothing else, it is impossible for my bank to send me anything 
I would believe via email now

To take this even slightly more on-topic - does anyone here have a 
bank capable of authenticating themselves to you when they ring you?
I have had four phone calls from my bank this year, all of which start 
out by asking me to identify myself to them. When I point out that 
they must know who I am - as they just phoned me - and that I have no 
way of knowing who they are, they are completely lost (probably takes 
them away from the little paper script pinned to their desk)

Last month I had a rather good experience with American Express
in this regard.  I recently moved and had ordered something
to be shipped to my new address (this was before I changed my
billing address with AMEX).  Apparently the merchant had Amex
verify the transaction, and so AMEX called me.
Naturally, I asked how I was supposed to know it was really them
calling.  Without missing a beat, the caller invited me to hang
up and call back the number on the back of my card, which I did.
After the usual exchange of information to establish my identity,
I was transferred to the right department, and ended up speaking with
the same person who had originally called me(!).
After confirming the validity of the transaction in question, I
asked how many people are as suspicious as I was in asking for
confirmation that it's really AMEX calling.  He said not many,
but a significant enough number that they're ready to handle it
routinely when it happens (he also congratulated me for my
diligence).
It's nice that they have a procedure for this, but it's still a
mixed success for security against the theft of sensitive personal
information.  People like me (us?) remain the exception rather
than the rule, and while it's comforting that the standard procedures
accommodate us, the vast majority of people appear to happily give any
information requested to whoever calls them.  And when banks and
credit card issuers make calls requesting sensitive information
as part of their routine operations, they're training their customers
to engage in exactly the same behavior that they should be trying to
discourage.
Perhaps a better procedure would be to always simply ask the customer
to call back the known, trusted contact number (e.g., as printed on
the card), and never ask for any personal or sensitive information
in an unsolicited call.  They could widely advertise that this is
always the procedure and ask customers to be alert for any caller
who deviates from it.
-matt
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-30 Thread Matt Blaze
I wrote:
 For some recent relevant papers, see the ACM-CCS '02 paper my colleagues
 and I wrote on our JFK protocol (http://www.crypto.com/papers/jfk-ccs.ppt),
...
But of course I meant the url to be
http://www.crypto.com/papers/jfk-ccs.pdf

I don't know what I could have been thinking; I don't use the
program that produces files with that extension unless a gun is
pointed to my head.

-matt



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Monoculture

2003-09-30 Thread Matt Blaze
 I imagine the Plumbers & Electricians Union must have used similar
 arguments to enclose the business to themselves, and keep out unlicensed
 newcomers.  No longer acceptable indeed.  Too much competition boys?


Rich,

Oh come on.  Are you willfully misinterpreting what I wrote, or
did you honestly believe that that was my intent?

No one - at least certainly not I - suggests that people shouldn't
be allowed to invent whatever new protocols they want or that some
union card be required in order to do so.  However, we've learned
a lot in recent years about how to design such protocols, and we've
seen intuitively obviously secure protocols turn out to be badly
flawed when more advanced analysis techniques and security models
are applied against them.

Yes, the standards against which newly proposed protocols are measured
have increased in recent years: we've reached a point where it is
practical for the potential users of many types of security protocols
to demand solid analysis of their properties against rather stringent
security models.  It is no longer sufficient, if one hopes to have
a new protocol taken seriously, for designers to simply throw a proposal
over the wall to users and analysts and hope that if the analysts
don't find something wrong with it the users will adopt it.  Now
it is possible - and necessary - to be both a protocol designer and
analyst at the same time.  This is a good thing - it means we've made
progress.  Finally we can now look at practical protocols more
systematically and mathematically instead of just hoping that we
didn't miss certain big classes of attack.  (We're not done, of course,
and we're a long way from discovering a generally useful way to look
at an arbitrary protocol and tell if it's secure).

Fortunately, there's no dark art being protected here.  The literature
is open and freely available, and it's taught in schools.  And unlike
the guilds you allude to, anyone is free to participate.  But if they
expect to be taken seriously, they should learn the field first.

I'd encourage the designer of the protocol who asked the original question
to learn the field.  Unfortunately, he's going about it a sub-optimally.
Instead of just designing a protocol and hoping to get others to throw
darts at it (or bless it), he might have better luck (and learn far
more) by looking at the recent literature of protocol design and analysis
and trying to emulate the analysis and design process of other protocols
when designing his own.  Then when he throws it over the wall to the rest
of the world, the question would be not "is my protocol any good?" but
rather "are my arguments convincing and sufficient?"

I suppose some people will always take an anti-intellectual attitude
toward this and congratulate themselves about how those eggheads who
write those papers with the funny math in them don't know everything to
excuse their own ignorance of the subject.  People like that with
an interest in physics and engineering tend to invent a lot of
perpetual motion machines, and spend a lot of effort fending off
the vast establishment conspiracy that seeks to suppress their
brilliant work.  (We've long seen such people in cipher design, but
they seem to have ignored protocols for the most part, I guess
because protocols are less visible and sexy).

Rich, I know you're a smart guy with great familiarity (and
contributions to) the field, and I know you're not a kook, but
your comment sure would have set off my kook alarm if I didn't
know you personally.

 
 Who on this list just wrote a report on the dangers of Monoculture?
 
 Rich Schroeppel   [EMAIL PROTECTED]
 (Who still likes new things.)

Me too.

-matt



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Monoculture

2003-09-30 Thread Matt Blaze
Perry writes:
 
 Richard Schroeppel [EMAIL PROTECTED] writes:
 (Responding to the chorus of protocol professionals saying please do
  not roll your own)
  I imagine the Plumbers & Electricians Union must have used similar
  arguments to enclose the business to themselves, and keep out unlicensed
  newcomers.  No longer acceptable indeed.  Too much competition boys?
 
...
 
  Who on this list just wrote a report on the dangers of Monoculture?
 
 I did. Dependence on a single system is indeed a problem. However, one
 must understand the nature of the problem, not diversify blindly.
 
 Some companies are said to require that multiple high level executives
 cannot ride on the same plane flight, for fear of losing too many of
 them simultaneously. That is a way of avoiding certain kinds of
 risk. However, I know of no company that suggests that some of those
 executives fly in rickety planes that have never been safety tested
 and were built by squirrels using only pine cones. That does not reduce
 risk.
 

Speaking of plumbers and electricians, it occurs to me that while
it would be very difficult to find pipe fittings designed without
taking into account static and dynamic analysis or electric wiring
designed without benefit of resistance or insulation breakdown tests
(basic requirements for pipes and wires that nonetheless require
fairly advanced knowledge to understand properly), equipping a house
with such materials might actually end up being safe.  The inevitable
fire might be extinguished by the equally inevitable flood.

-matt



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-29 Thread Matt Blaze
EKR writes:
 I'm trying to figure out why you want to invent a new authentication
 protocol rather than just going back to the literature and ripping
 off one of the many skeletons that already exist (STS, JFK, IKE,
 SKEME, SIGMA, etc.). That would save people from the trouble
 of having to analyze the details of your new protoocl.

Indeed.

It's also worth pointing out that the standards for authentication /
key exchange / key agreement protocols (and the techniques for
attacking them) have improved over the last few years, to the point
that if you want your protocol to have any chance of being taken
seriously, you'd better have both a clear statement of why your
protocol is an improvement over those in the existing literature,
and some kind of proof of security under an appropriate model.

Key agreement turns out to be a surprisingly hard problem, especially
in any context that's to be used in a real protocol.  (For evidence of
this, you need look no further than the fact that research papers on
the subject are still being written and published in competitive
conferences and journals).  Even defining the security model under
which such protocols should be analyzed is a hard problem and the subject
of current research.

It is probably no longer acceptable, as it was just a few years ago,
to throw together an ad-hoc authentication or key agreement protocol
based on informal obvious security properties, without a strong
proof of security and a clear statement of the model under which the
security holds.
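
As a toy illustration of the kind of "intuitively obvious" reasoning
that fails here: unauthenticated Diffie-Hellman never sends a secret
over the wire, which sounds safe, yet it falls immediately to an
active man-in-the-middle.  (Textbook-sized parameters below for
readability; this sketch is purely illustrative, not a real protocol.)

```python
import secrets

P, G = 23, 5  # toy group; a real deployment needs a large safe prime

def keypair():
    x = secrets.randbelow(P - 2) + 1  # private exponent in [1, P-2]
    return x, pow(G, x, P)            # (private, public)

# Honest run: both sides derive the same shared key.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
assert pow(b_pub, a_priv, P) == pow(a_pub, b_priv, P)

# Active attack: Mallory replaces each public value with her own.
m_priv, m_pub = keypair()
k_alice = pow(m_pub, a_priv, P)    # Alice's "shared key with Bob"
k_mallory = pow(a_pub, m_priv, P)  # is actually shared with Mallory
assert k_alice == k_mallory
print("unauthenticated DH: the MITM shares a key with each endpoint")
```

This is exactly the class of attack that authenticated designs such as
STS, JFK, and SIGMA are built to rule out, and that an informal
"obviously secure" argument never surfaces.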

For some recent relevant papers, see the ACM-CCS '02 paper my colleagues
and I wrote on our JFK protocol (http://www.crypto.com/papers/jfk-ccs.ppt),
and Ran Canetti and Hugo Krawczyk's several recent papers on the design
and analysis of various IPSEC key exchange protocols (especially their
CRYPTO'02 paper).

-matt

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


USENIX Security '04 Call for Papers

2003-09-06 Thread Matt Blaze
USENIX SECURITY '04 - AUGUST 9-13, 2004 - SAN DIEGO, CA
 
CALL FOR PAPERS

The USENIX Security Symposium brings together researchers,
practitioners, system administrators, system programmers, and others
interested in the latest advances in security of computer systems.
The 13th USENIX Security Symposium will be held August 9-13, 2004, in
San Diego, CA.

If you are working on any practical aspects of security or
applications of cryptography, the program committee encourages you to
submit a paper.  Submissions are due at 23h59 (Pacific time) January
25, 2004.  The symposium will span five days: Two days of tutorials
will be followed by a two and one half day technical program, which
will include refereed papers, invited talks, Work-in-Progress reports,
panel discussions, and Birds-of-a-Feather sessions.


IMPORTANT DATES

Submissions Due: 25 January 2004
Notification to Authors: 31 March 2004
Camera-Ready Papers Due: 18 May 2004


ORGANIZERS

Program Chair:
Matt Blaze, AT&T / University of Pennsylvania

Program Committee:
Bill Aiello, AT&T Labs - Research
Tina Bird, Stanford University
Drew Dean, SRI International
Carl Ellison, Microsoft
Eu-Jin Goh, Stanford University
Sotiris Ioannidis, University of Pennsylvania
Angelos Keromytis, Columbia University
Patrick McDaniel, AT&T Labs - Research
Adrian Perrig, Carnegie-Mellon University
Niels Provos, Google
Greg Rose, Qualcomm
Sean Smith, Dartmouth College
Leendert van Doorn, IBM Research
Paul van Oorschot, Carleton University
Dave Wagner, University of California, Berkeley
Rebecca Wright, Stevens Institute of Technology 

Invited Talks Co-Chairs:
Vern Paxson, ICSI
Avi Rubin, Johns Hopkins University


SYMPOSIUM TOPICS

Refereed paper submissions are solicited in all areas relating
to systems and network security, including: 

* Adaptive security and system management 
* Analysis of malicious code 
* Analysis of network and security protocols 
* Applications of cryptographic techniques 
* Attacks against networks and machines 
* Authentication and authorization of users, systems,
  and applications 
* Automated tools for source code analysis 
* Denial-of-service attacks and countermeasures
* File and filesystem security 
* Firewall technologies 
* Intrusion detection 
* Privacy preserving (and compromising) systems 
* Public key infrastructure 
* Rights management and copyright protection 
* Security in heterogeneous and large-scale environments 
* Security of agents and mobile code 
* Security of Internet voting systems 
* Techniques for developing secure systems 
* World Wide Web security 

Note that USENIX Security is primarily a systems security conference.
Papers whose contributions are primarily in the area of new
cryptographic algorithms or protocols, cryptanalysis, electronic
commerce primitives, etc., may not be appropriate for this conference.


REFEREED PAPERS & AWARDS 

Papers that have been formally reviewed and accepted will be presented
during the symposium and published in the symposium proceedings. The
proceedings will be distributed to attendees and, following the
conference, will be available online to USENIX members and for
purchase.

One author per accepted paper is offered a $200 discount against the
registration fee; USENIX will waive the fee for presenters for whom
the fee would present a hardship.

Awards may be given at the conference for the best overall paper and
for the best paper that is primarily the work of a student.


TUTORIALS, INVITED TALKS, PANELS, WIPS, AND BOFS 

In addition to the refereed papers and the keynote presentation,
the technical program will include tutorials, invited talks, panel
discussions, a Work-in-Progress session (WiPs), and Birds-of-a-Feather
Sessions. You are invited to make suggestions regarding topics or
speakers in any of these sessions via email to the contacts listed
below or to the program chair at [EMAIL PROTECTED]

Tutorials

Tutorials for both technical staff and managers will provide
immediately useful, practical information on topics such as local
and network security precautions, what cryptography can and cannot
do, security mechanisms and policies, firewalls, and monitoring
systems. If you are interested in proposing a tutorial or suggesting
a topic, contact the USENIX Tutorial Coordinator, Dan Klein, by
email to [EMAIL PROTECTED]

Invited Talks

There will be several outstanding invited talks in parallel with the
refereed papers. Please submit topic suggestions and talk proposals
via email to [EMAIL PROTECTED]

Panel Discussions

The technical sessions may include topical panel discussions.
Please send topic suggestions and proposals to [EMAIL PROTECTED]

Work-in-Progress Reports (WiPs)

The last session of the symposium will be a Works-in-Progress
session. This session will consist of short presentations about
work-in-progress