Re: [cryptography] Merkle Signature Scheme is the most secure signature scheme possible for general-purpose use

2010-09-03 Thread Marsh Ray

On 09/03/2010 03:45 AM, Ben Laurie wrote:


That's the whole point - a hash function used on an arbitrary message
produces one of its possible outputs. Feed that hash back in and it
produces one of a subset of its possible outputs. Each time you do this,
you lose a little entropy (I can't remember how much, but I do remember
David Wagner explaining it to me when I discovered this for myself quite
a few years ago).


I found this to be interesting:
Danilo Gligoroski, Vlastimil Klima: "Practical consequences of the 
aberration of narrow-pipe hash designs from ideal random functions", 
IACR ePrint Report 2010/384.

http://eprint.iacr.org/2010/384.pdf

The theoretical loss is -log2(1 - 1/e), about 0.66 bits of entropy per
log2(N) additional iterations.

This assumes that there is no systematic correlation between the hash 
input and the calculation of the output, which is not really a good 
assumption for the MDs and SHAs in current use. They accept, process, 
and output vectors of 32- or 64-bit words, even preserving their order 
to some extent. So it seems reasonable to expect that, to the extent 
these actual functions differ from an ideal random function, they could 
easily have the kind of systematic bias that repeated iteration would 
amplify.


I played with some simulations using randomly-generated mappings; the 
observed value would at times wander above 1.0 bits of entropy per log2(N).
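
For anyone who wants to try the same kind of experiment, here is a minimal
sketch in that spirit (my own toy, not the simulation above): build one
random mapping on 2^16 points, iterate it, and watch the image - and hence
the maximum possible entropy - shrink.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N 65536u

int main(void)
{
    static unsigned f[N];
    static unsigned char live[N], next[N];
    unsigned i, iter, count;

    srand(1);
    for (i = 0; i < N; i++)     /* one random mapping on N points */
        f[i] = (((unsigned)rand() << 8) ^ (unsigned)rand()) % N;

    memset(live, 1, N);         /* iteration 0: the full range is reachable */
    for (iter = 1; iter <= 1024; iter++) {
        memset(next, 0, N);
        for (i = 0; i < N; i++)
            if (live[i])
                next[f[i]] = 1; /* image of the current set under f */
        memcpy(live, next, N);
        if ((iter & (iter - 1)) == 0) {   /* report at powers of two */
            for (count = 0, i = 0; i < N; i++)
                count += live[i];
            printf("iter %4u: %5u reachable, %.2f bits lost\n",
                   iter, count, 16.0 - log2((double)count));
        }
    }
    return 0;
}

After one iteration the image should hold about (1 - 1/e) of the points,
i.e. roughly 0.66 bits lost, drifting toward about one bit per doubling of
the iteration count.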


It seems like this entropy loss could be largely eliminated by hashing 
the previous two intermediate results on each iteration instead of just 
one. But this basically amounts to widening the data path, so perhaps it 
would be cheating for the purposes of this discussion.
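
A sketch of what that wider chaining might look like, with SHA-256 standing
in for the hash (my illustration of the idea, not a vetted construction):

#include <openssl/sha.h>
#include <string.h>

/* h_{i+1} = H(h_{i-1} || h_i): each output is n bits, but the
   internal state carried between iterations is 2n bits */
void iterate2(unsigned char h0[32], unsigned char h1[32], int rounds)
{
    unsigned char buf[64], out[32];

    while (rounds--) {
        memcpy(buf, h0, 32);
        memcpy(buf + 32, h1, 32);
        SHA256(buf, sizeof buf, out);
        memcpy(h0, h1, 32);
        memcpy(h1, out, 32);
    }
}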


- Marsh


Re: [cryptography] storage systems as one-way protocols

2010-10-05 Thread Marsh Ray

On 10/05/2010 02:04 PM, travis+ml-rbcryptogra...@subspacefield.org wrote:

I don't know if anyone else noticed this but...

Storage systems are basically a subclass of protocols; they're
unidirectional (with no acknowledgements).  IOW, you're sending
messages to yourself at some (future) point in space-time.

The recipient cannot respond, so is necessarily unauthenticated.

However, the converse is not true; the sender can apply a MAC
to the data to assure the recipient it has not been altered.


Only if he can also transmit something else in such a way that the 
recipient can know it was not altered. This something else could 
be a shared secret for a MAC or a public key. It could be much smaller, 
and there may be many fewer of them than messages, but you still have to 
be able to transmit something that the recipient trusts to verify the 
sender's identity. So it almost becomes as much a data compression 
scheme as anything else.


If it's a shared secret, then it obviously has to stay secret. If you 
have a way to communicate such a shared secret, then you might as well 
just send a hash of the message secretly.
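
To make the shape of that concrete, a minimal sketch of the
sender-to-future-self MAC (my illustration, using OpenSSL's one-shot HMAC):
the only thing that must arrive intact and secret is the 32-byte key; given
that, any amount of stored data can be verified.

#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

int verify_stored(const unsigned char key[32],
                  const unsigned char *data, size_t len,
                  const unsigned char expected_tag[32])
{
    unsigned char tag[32];
    unsigned int taglen = sizeof tag;

    HMAC(EVP_sha256(), key, 32, data, len, tag, &taglen);
    /* offline verification, so plain memcmp is acceptable here */
    return taglen == 32 && memcmp(tag, expected_tag, 32) == 0;
}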



Q: Do any storage cryptosystems do this?
How do they manage the metadata?

Since it is a non-interactive protocol with no recipient
authentication, anyone may be the recipient, and subject it to an
attack, which is necessarily passive and offline.


Or they could modify the data and trick the recipient. Or make a third 
party believe incorrect things about the storing party. The attacker 
might cause data loss to go undetected for a period of time. Or they 
could cause the legitimate party to bear the costs and/or risks 
associated with storing the attacker's data. The attacker could make the 
storing party act as a subliminal communications channel. Perhaps the 
attacker could cause the future recipient to leak information about the 
circumstances of the data's use.



Q: What design criteria does this imply, compared to our standard
bi-directional protocols?


As mentioned, mutual authentication is out. No dynamic protocol 
negotiation. Forward secrecy doesn't apply. The sender will have no way 
to confirm that the message was successfully received.



Q: What is the analog of a replay attack in the storage crypto
context?


An attacker could possibly make the recipient receive duplicate 
messages, or replace one message with another. He could obtain the 
shared secret as it was used to decrypt/verify one message and use it 
against others.
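
One conventional countermeasure, sketched here as an assumption rather than
anything a particular product does: mix each record's logical position and a
generation counter into the MAC input, so a record can't be duplicated at
another address or silently rolled back to an older version (the verifier
must still remember the current generation).

#include <stdint.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

void record_tag(const unsigned char key[32],
                uint64_t block_index, uint64_t generation,
                const unsigned char *data, size_t len,
                unsigned char tag[32])
{
    HMAC_CTX *ctx = HMAC_CTX_new();   /* OpenSSL 1.1+ API */
    unsigned int taglen = 32;

    HMAC_Init_ex(ctx, key, 32, EVP_sha256(), NULL);
    /* a real format would pin down the byte encoding of these fields */
    HMAC_Update(ctx, (const unsigned char *)&block_index, sizeof block_index);
    HMAC_Update(ctx, (const unsigned char *)&generation, sizeof generation);
    HMAC_Update(ctx, data, len);
    HMAC_Final(ctx, tag, &taglen);
    HMAC_CTX_free(ctx);
}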



Does it have something to do with not maintaining
positive control of your storage media at all times?


No, though that may have other benefits.


In summary, it's very much like email encryption a la GnuPG.

It may be further simplified, in that the recipient and sender are
generally the same person.


That may be exactly what the recipient is trying to prove: "I am the one 
who sent this to myself." So we should be careful not to assume it as an 
inherent property.



In LUKS, we may have several passphrases that unlock the storage key
(which is merely what I call key indirection, or a K-E-K).

Q: What is the meaning of this, if we recast this as a protocol?


If it's just "any one of M keys is sufficient to decrypt", you might 
consider that analogous to multicast, but otherwise it doesn't seem 
special. If it's an N-of-M keys scheme, then it could get more 
interesting.
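
For concreteness, a minimal sketch of the K-E-K indirection itself (my
illustration; LUKS's actual on-disk format, KDF, and parameters differ): one
random storage key is wrapped separately under each passphrase-derived key,
so unlocking any single slot yields the same storage key, and a passphrase
can be changed without re-encrypting the volume.

#include <string.h>
#include <openssl/evp.h>

/* derive a slot's key-encryption-key from its passphrase, then
   AES-256-CBC-encrypt the storage key into that slot */
int wrap_slot(const char *pass,
              const unsigned char salt[16], const unsigned char iv[16],
              const unsigned char storage_key[32],
              unsigned char wrapped[48])
{
    unsigned char kek[32];
    EVP_CIPHER_CTX *ctx;
    int n1 = 0, n2 = 0, ok;

    if (!PKCS5_PBKDF2_HMAC_SHA1(pass, (int)strlen(pass),
                                salt, 16, 100000, 32, kek))
        return 0;
    ctx = EVP_CIPHER_CTX_new();
    ok = EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, kek, iv)
      && EVP_EncryptUpdate(ctx, wrapped, &n1, storage_key, 32)
      && EVP_EncryptFinal_ex(ctx, wrapped + n1, &n2);
    EVP_CIPHER_CTX_free(ctx);
    return ok;
}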



In some cases, the storage crypto may also encrypt the storage key
to the pubkey for the enterprise, for key recovery reasons.
Q: Are there other applications of PK in storage crypto?


Perhaps steganography and watermarking which use PK might fit your 
definition?


- Marsh


Re: [cryptography] Tahoe-LAFS developers' statement on backdoors

2010-10-06 Thread Marsh Ray

On 10/06/2010 06:42 PM, silky wrote:


The core Tahoe developers promise never to change Tahoe-LAFS to
facilitate government access to data stored or transmitted by it. Even
if it were desirable to facilitate such access—which it is not—we
believe it would not be technically feasible to do so without severely
compromising Tahoe-LAFS' security against other attackers. [...]


You guys are my heroes.


How will you stand by this if it becomes illegal not to comply though?


As an American software developer myself, I guess I need to consider 
this too. I could imagine a US open source developer might choose to:


1. Quit developing security software and take up a new line of work, 
say, selling 0-days to the Russian Business Network. This is probably 
what much of the US data security industry will be reduced to, since 
obviously no one will want to buy backdoored data security products and 
services from US companies anymore (well, except outsourcers audited for 
conformance to US government procurement standards).


E.g. MIT Kerberos and Heimdal:
http://en.wikipedia.org/wiki/Kerberos_%28protocol%29#History_and_development

The term "non-US" will once again be the universally recognized mark of 
effective cryptography. It's really a win-win for the former Eastern 
Bloc, as they'll gain a huge market as US purchasers begin obtaining 
their critical data security products from them.


Remember when the best stuff always seemed to come from ftp.cs.hut.fi?

2. Comply by forking the codebase to a new "Backdoored-Tahoe-LAFS" 
(which of course nobody would ever use). Commit code to that repository 
and the free world could pull your patches out of it, if they want to. 
Of course, as a developer your source code management overhead would be 
twice everyone else's. So you'd probably be doing the small, menial 
tasks and end up marginalized as the direction of new development gets 
set overseas.


3. Emigrate to England where they apparently have other methods of 
cryptanalysis.


4. Adopt a cool hacker alias (e.g. Bobby Tables) for all your 
development work. Dress like someone from The Matrix, and add the 
glasses-nose-mustache disguise for good measure. Send all your email 
through spam relays, and originate all your network traffic from 
sympathetic human rights activist offices in China. Be sure to obtain 
all your development software from warez sites too.


5. Protest the law, loudly and publicly. Become too well-known to 
prosecute for offenses of questionable constitutionality, grab headlines 
whenever possible. Get yourself accused of criminally deviant behavior 
by multiple Swedish women simultaneously, then un-suspected, then 
arrested in absentia, then re-suspected, and so on.


6. Quietly continue developing secure software and services and be 
subject to selective prosecution according to how the political winds 
blow in the future.


Welcome back to the bad-old-days.

Except this time, it's cloud-based services, too.

- Marsh


Re: [cryptography] NSA's position in the dominance stakes

2010-11-18 Thread Marsh Ray

On 11/18/2010 04:21 PM, Adam Back wrote:

So a serious question: is there a software company friendly jurisdiction?


As weird as it sounds, most politicians seem to think of patents as 
business friendly and lump them together under the nebulous concept of 
"intellectual property", which they consider a fundamental principle of 
our economy.



(Where software and algorithm patents do not exist under law?)

If patent trolls can patent all sorts of wheels and abuse the US and other
jurisdictions flawed patent system, maybe one can gain business
advantage by
incorporating a software company in a jurisdiction where you are immune to
that.


If you sell into the US you can be sued for infringing US patents. If 
your customer uses your product and they sell into the US, then they can 
be sued. There are probably other ways you could be sued too. You'd have 
to get an opinion from lawyers specializing in international IP, but they 
usually talk in terms of "minimizing risk" rather than "do this and you 
cannot be sued."



Big companies do such structuring games all the time for tax or other
advantages. eg many of the big software companies IP is owned by an Irish
subsidiary and then licensed back. Ireland has a special even lower than
normal (normal is 12.5%) tax on IP licensing profits. Tada instant corp tax
manipulation tool, even for US sales - adjust IP licensing fees such that
retained profit in high tax country is close to zero.


Big companies address this problem by having a stockpile of patents 
themselves. Every large established tech company has a set of patents 
they can use against any other company in their industry. It's the 
game-theoretic equilibrium of mutual-assured destruction all over again, 
except this time it's lawyers rather than cockroaches that stand to 
inherit the aftermath.


Large established companies tend to benefit from this situation since it 
tends to put newer smaller companies at a huge disadvantage.


But the stability of this game breaks down when you have a patent 
holding company that doesn't actually ship any products. (I don't 
consider Certicom to be in this category.) They have nothing to lose. 
They are the equivalent of a non-state actor with a small nuclear arsenal. 
They can cause you a world of pain and have no obvious home address at 
which you can retaliate. Thus IP holding companies are vilified as 
"trolls" by the established nuclear club, except when they are created 
by the companies themselves to manage a patent pool for some 
multi-vendor standard like HDTV or RFID.


Nobody understands this better than RIM (who now owns Certicom). They 
went through hell fighting some claims, even to the point of threatened 
Blackberry service shutdowns. Most likely, the Certicom patents were 
attractive to RIM because it felt it needed to grow its arsenal.


So patents may add value to a tech company's balance sheet under "IP", 
but not in the sense that you're going to haul Sony into some Texas 
courthouse and have the judge order them to pay you license fees for 
every Blu-ray they make. Sure, it could happen, but it doesn't seem like 
a very wise fight to pick. We might expect different behavior now that 
RIM owns them.


Note that none of this has anything whatsoever to do with promoting the 
progress of science and the useful arts.


But what's really sad is that this baloney has affected my ability to 
sit down and write a computer program that basically does pure math and 
give the resulting system away or use it to produce value.


- Marsh


Re: [cryptography] philosophical question about strengths and attacks at impossible levels

2010-11-24 Thread Marsh Ray

On 11/24/2010 02:11 PM, coderman wrote:

On Wed, Nov 24, 2010 at 2:49 AM, Marsh Ray <ma...@extendedsubset.com> wrote:

(that's the abridged version. this is actually more complicated than
many assume, and i've written my own egd's in the past to meet need.)


Ya.


How does this feature interact with virtualization?


for virtual guests you have a different type of egd that communicates
with host egd / host entropy pool to feed guest pool in similar
manner.  you typically can't use these entropy sources in both host
and guest concurrently.


So are you saying it is or it isn't Cloud-Compliant?

J/K  :-)

Quick! Get a patent on gathering entropy in a cloud computing 
environment. :-P



How hard is it to define such a thing in standard chip design tools? I
imagine many tools will complain loudly about nondeterministic states.


it can be as simple as a pair of fast, free-wheeling oscillators
sampled by a slower one.


What frequency are these oscillators? Does it change with voltage? 
Temperature? External RF sources? Other (possibly malicious) activity on 
the chip? How much does it vary with manufacturing process or across 
individual samples? Too much? Too little?


Can they be measured externally? Why not?

What's the GCD of their frequencies?
Can they interact (e.g., over the power bus)? What prevents them from 
drifting a bit and synchronizing to a nearby fixed ratio?
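
A toy software model of the construction (no claim of modeling real silicon;
mine, for illustration): two free-running oscillators with a jitter term,
sampled by a slower clock. Playing with the constants makes some of the
failure modes above visible - push the periods toward a small rational
ratio, or shrink the jitter, and the output bias climbs.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double t1 = 0.0, t2 = 0.0;   /* accumulated cycles of each oscillator */
    int i, ones = 0, samples = 100000;

    srand(12345);
    for (i = 0; i < samples; i++) {
        /* advance both fast oscillators by one slow-clock period;
           the constant is the frequency ratio, the rand() term is jitter */
        t1 += 17.31 + 0.05 * rand() / RAND_MAX;
        t2 += 23.97 + 0.05 * rand() / RAND_MAX;
        /* sample each square-wave output (cycle count mod 2) and combine */
        ones += ((long)t1 ^ (long)t2) & 1;
    }
    printf("ones: %d / %d (ideal: ~50%%)\n", ones, samples);
    return 0;
}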



What sources of entropy are available to the chip
designer that are not also available to a software EGD?


physical processes :)


Many chips have some A/D inputs, some have thermometers, etc. Almost all 
have some external hardware interrupts and reasonably fast clocked 
internal counters. Given all that, it's hard to explain how cosmic radio 
noise is more of a physical process than the timing of network packets.
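
The software EGD being compared against is essentially this (a minimal
sketch, not any particular kernel's code): fold a high-resolution timestamp
into a hashed pool every time an unpredictable external event fires.

#include <string.h>
#include <time.h>
#include <openssl/sha.h>

static unsigned char pool[SHA256_DIGEST_LENGTH];

/* call on each interrupt / packet arrival / keystroke */
void stir_event(void)
{
    struct timespec ts;
    unsigned char buf[sizeof pool + sizeof ts];

    clock_gettime(CLOCK_MONOTONIC, &ts);  /* ns-resolution event time */
    memcpy(buf, pool, sizeof pool);
    memcpy(buf + sizeof pool, &ts, sizeof ts);
    SHA256(buf, sizeof buf, pool);        /* pool = H(pool || timestamp) */
}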



How many customers would choose your chip instead of the other brand because
of this? Is it worth the risk inherent in any new feature?


you shouldn't have to choose. if every core had well designed hw
entropy sources it would be a given.  the lack of this is the source
of my lamentation...


I think hardware engineers these days are used to modular CPUs (e.g. 
ARM) where they can throw out whatever they don't need. If the 
application designer doesn't see a need for it, it'll get thrown out. If 
app designers did commonly want hw entropy, it would be available in 
more chips today.



How do you market it? How do you keep it from being marketed as something
that it isn't?


no idea. i have yet to find effective responses to combat the
exagnorance from marketing incurred when technical meets prostitution.
  if you solve this let me know *grin*


Make the feature sound really unappealing?


If it turned out to be weak, would you have to recall the chips? How about
products containing it?
This sucker got baked into a lot of smart meters, or so I hear:
http://travisgoodspeed.blogspot.com/2009/12/prng-vulnerability-of-z-stack-zigbee.html


yup. that's one example of how not to do entropy!
(sadly, there are many more examples out there... :(



Of course, the answer may still be that it's better to have an instruction
for it than not. But the advantages are subtle and hard to quantify, whereas
the costs, complexity, and risks of adding it are measurable.


agreed. i still think this is better to have than not, particularly
for headless server configuration and plentiful key generation
requirements. however, all of your concerns are valid and it is indeed
a tricky endeavor to do correctly.


You have clearly thought about this a lot and have good answers. That 
was really the point of my questions: even if it's easy to get random 
behavior from silicon, it's still a nontrivial engineering project that 
must compete with other projects for scarce development resources.


In the end it's hard to convince the unconverted that you have something 
meaningfully better than what you could get from a pure software 
approach (interrupt timing, etc).


It seems like the main selling point for an entropy pool in dedicated 
silicon is that you might be able to retain some entropy across reboots 
(in flash or capacitor-backed ram) without exposing it to external 
observation. The feature now becomes shorter wakeup time (which 
everybody can relate to) rather than more unpredictable numbers (when 
most people are satisfied with the current methods).
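
The familiar software analogue is the seed file, sketched here only to make
the mechanism concrete (my illustration, not any particular implementation);
the point of doing it in dedicated silicon is precisely that this file sits
on disk where it can be observed.

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

static unsigned char pool[SHA256_DIGEST_LENGTH];  /* the in-memory pool */

void load_saved_seed(const char *path)
{
    unsigned char seed[32], buf[sizeof pool + sizeof seed];
    FILE *fp = fopen(path, "r+b");

    if (!fp)
        return;
    if (fread(seed, 1, sizeof seed, fp) == sizeof seed) {
        memcpy(buf, pool, sizeof pool);
        memcpy(buf + sizeof pool, seed, sizeof seed);
        SHA256(buf, sizeof buf, pool);    /* mix in, never replace */
    }
    memset(seed, 0, sizeof seed);         /* burn the seed on disk so it */
    rewind(fp);                           /* can't be credited twice     */
    fwrite(seed, 1, sizeof seed, fp);
    fclose(fp);
}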


Crypto enthusiasts seem to have a particular fascination with entropy 
gathering and PRNGs for some reason. Perhaps that's because it appears to 
be a relatively easy thing to experiment with, and quite practical 
to make something more or less impossible to break. Most of the time we 
spend our efforts trying to eliminate the effects of entropy in our 
systems; it's fun to think about the opposite for a change.


- Marsh


Re: [cryptography] Generating passphrases from fingerprints

2010-12-04 Thread Marsh Ray

On 12/04/2010 03:08 PM, Jens Kubieziel wrote:

Hi,

recently I had a discussion about biometric data. The following problem
occured:
Assume someone wants to register at a website. He swipes his finger over
his fingerprint reader. The reader generates a strong passphrase from the
fingerprint and other data (hostname of the targeted site, user name,
etc.) and creates a strong password. This will be the user's login
password. Every time the user wants to log in again, he swipes his finger
over the reader, the password is generated again and sent to the site.

We were not sure if it is possible to generate the same passphrase again
and again.


Even if it were...what do you do when it gets compromised?

How would the site (or the user) ever change it?

How would the site prove to the user that registering there didn't 
delegate to the site the authority to use that fingerprint credential at 
any other site using the same technology?



Does anyone know if such systems exist?


The local parks and rec dept went with a fingerprint scanner. They 
insisted on taking the fingerprints of everyone who joined the community 
pool, adults and children alike. They said people wouldn't need member 
ID cards any more because fingerprints didn't change as you age. They 
went back to the old system within a season or two.



Will generating the
passphrase work?


There's probably a way to generate something unguessable and repeatable 
out of a fingerprint.
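
The repeatable-derivation half is the easy part, sketched here under the
(generous) assumption that the reader can extract a stable template from
each swipe - in practice that takes a fuzzy-extractor-style error-correction
step, which is the hard part:

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* site password = HMAC(stable template, hostname ":" username) */
void derive_password(const unsigned char *tmpl, size_t tmpl_len,
                     const char *host, const char *user,
                     unsigned char out[32])
{
    char label[512];
    unsigned int n = 32;

    snprintf(label, sizeof label, "%s:%s", host, user);
    HMAC(EVP_sha256(), tmpl, (int)tmpl_len,
         (const unsigned char *)label, strlen(label), out, &n);
}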


A passphrase also needs to be secret, though, and fingerprints are not. I 
printed my fingers onto at least a half-dozen glasses yesterday and gave 
them out to untrusted wait staff without a second thought.


Your system would need a way to make them secret.


I'd glad to hear some opinions about this.


The technology to do something with fingerprint auth has existed for 
decades. There's no inherent reason it should be any more difficult than 
a simple credit card reader terminal is. But for a variety of reasons, 
fingerprint auth hasn't exactly taken off:


1. Fingerprints aren't secrets and it's relatively easy to forge them.

2. They're hard as heck to revoke or change, and you're only issued a 
few of them at birth. This is why they're better for catching criminals 
than for authenticating logins.


3. They're difficult to bind to a specific transaction from the user's 
point of view. A customer might be willing to sign a check and give you 
money. But they'd be reluctant to sign a blank piece of paper (carte 
blanche).


So it seems like they have very similar biometric properties to 
old-fashioned signatures made by hand with a physical stylus, and they 
suck as a form of authentication. We have signatures on checks and 
signature pads for credit card transactions, but they're more of a ritual 
than an actual authentication. I can't remember the last time anyone 
actually looked at my signature, and I have never heard of anyone's 
transaction being rejected for a bad match.


Additionally, in America at least, having one's fingerprint taken is 
associated with being arrested for a crime. For example, IIRC, there 
were some laws requiring police to delete prints of people if they were 
later not convicted. So fingerprinting is tantamount to accusing someone 
of being a criminal suspect. Not something a merchant wants to do to a 
customer at the point they have the credit card out in their hand.


- Marsh


Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-14 Thread Marsh Ray

On 12/14/2010 09:11 PM, Rayservers wrote:


Moral: never depend on only one network security layer, and write and verify
your own crypto. Recall Debian and OpenSSL.


I think it's too early to draw conclusions from this.

I spent a good bit of time going through a bunch of the OpenBSD CVS 
history for the IPsec code and the developers implicated. I didn't see 
any smoking guns right away, though there are a few possible leads. I 
don't know enough about that codebase, nor do I have a time-slice view 
of it handy.


But if you look at the dates on the emails, Theo spent a few days on it 
before he forwarded it. Perhaps he would have prepared a patch before 
disclosing it if he'd found anything.


Something about this doesn't add up and I don't think we're seeing the 
real story emerge yet. The USG seems to be completely off its rocker 
right now reacting to Wikileaks and I wonder if that has something to do 
with the timing of this.


- Marsh


Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-15 Thread Marsh Ray

On 12/15/2010 01:38 AM, Peter Gutmann wrote:


This is one of those things where those who know the truth won't be able to
talk about it, and those who can openly talk about it don't know the truth.
Having pointed out that distinction, I'll now talk about it :-).  It violates
the principle of least surprise: why on earth would the FBI show their hand in
violating the integrity of an OSS product,


Note that everyone official, if it's even real, has maintained plausible 
deniability here.


But at least some of the details check out - I mean, the stormy 
affair between OpenBSD and DARPA isn't exactly a secret.



especially something of such
relatively low value when, even in 2000/2001, the real crypto action was in
OpenSSH?


That was my first thought too: OpenBSD IPsec?! They sure know how to 
pick 'em!


But the guy did implicate the general crypto framework. Searching around 
for various identifiers, it looks like pieces of that code have ended up 
_everywhere_.

E.g.:

https://dev.openwrt.org/browser/trunk/target/linux/generic/files/crypto/ocf/cryptodev.c

Connecting unsubstantiated rumor with unrelated speculation, this post 
is dated the day before the Perry email. Basically it suggests there was 
some connection between Wikileaks and BSD, but it's hard to tell the 
degree to which the author is serious.

http://blather.michaelwlucas.com/?p=443


My guess is that this arose from one of two things:

1. Someone seriously got their wires crossed (knotted, more like it).


I have no idea if this is relevant:
http://www.bop.gov/iloc2/InmateFinderServlet?Transaction=IDSearch&needingMoreList=false&IDType=IRN&IDNumber=61547-065&x=98&y=17

No mention of Mr. Perry being CTO here about the time this was alleged 
to have occurred:

http://web.archive.org/web/2816024434/www.netsec.net/management.html


2. Someone has it in for OpenBSD (or Theo), and a spooky backdoor conspiracy
would be an ideal vehicle for it.


You mean he might have made somebody angry?! :-O


I'm going for (1).


Or even (3) somebody was bored over the holidays and got carried away 
with exaggerated memories of past grandeur.


Still, with the accusations he's throwing around, I imagine a few people 
who have professional reputations to uphold may be considering a call to 
their lawyers.


- Marsh


Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-15 Thread Marsh Ray

On 12/15/2010 02:31 AM, Jon Callas wrote:

But this way,
the slur has been made in a way that is impossible to discuss. I
think evidence is called for, or failing that, an actual description
of the flaw.



Hot off the presses. Haven't yet decided how much this counts as 
information. But he does come closer to naming source files.


- Marsh



http://blogs.csoonline.com/1296/an_fbi_backdoor_in_openbsd


The OCF was a target for side channel key leaking mechanisms, as well
as pf (the stateful inspection packet filter), in addition to the
gigabit Ethernet driver stack for the OpenBSD operating system; all
of those projects NETSEC donated engineers and equipment for,
including the first revision of the OCF hardware acceleration
framework based on the HiFN line of crypto accelerators.

The project involved was the GSA Technical Support Center, a circa
1999 joint research and development project between the FBI and the
NSA; the technologies we developed were Multi Level Security controls
for case collaboration between the NSA and the FBI due to the Posse
Commitatus Act, although in reality those controls were only there
for show as the intended facility did in fact host both FBI and NSA
in the same building.

We were tasked with proposing various methods used to reverse
engineer smart card technologies, including Piranha techniques for
stripping organic materials from smart cards and other embedded
systems used for key material storage, so that the gates could be
analyzed with Scanning Electron and Scanning Tunneling Microscopy.
We also developed proposals for distributed brute force key cracking
systems used for DES/3DES cryptanalysis, in addition to other methods
for side channel leaking and covert backdoors in firmware-based
systems.  Some of these projects were spun off into other sub
projects, JTAG analysis components etc.  I left NETSEC in 2000 to
start another venture, I had some fairly significant concerns with
many aspects of these projects, and I was the lead architect for the
site-to-site VPN project developed for Executive Office for United
States Attorneys, which was a statically keyed VPN system used at
235+ US Attorney locations and which later proved to have been
backdoored by the FBI so that they could recover (potentially) grand
jury information from various US Attorney sites across the United
States and abroad.  The person I reported to at EOSUA was Zal Azmi,
who was later appointed to Chief Information Officer of the FBI by
George W. Bush, and who was chosen to lead portions of the EOUSA VPN
project based upon his previous experience with the Marines (prior to
that, Zal was a mujahideen for Usama bin Laden in their fight against
the Soviets, he speaks fluent Farsi and worked on various incursions
with the CIA as a linguist both pre and post 911, prior to his tenure
at the FBI as CIO and head of the FBI’s Sentinel case management
system with Lockheed).  After I left NETSEC, I ended up becoming the
recipient of a FISA-sanctioned investigation, presumably so that I
would not talk about those various projects; my NDA recently expired
so I am free to talk about whatever I wish.

Here is one of the articles I was quoted in from the NY Times that
touches on the encryption export issue:

In reality, the Clinton administration was very quietly working
behind the scenes to embed backdoors in many areas of technology as a
counter to their supposed relaxation of the Department of Commerce
encryption export regulations – and this was all pre-911 stuff as
well, where the walls between the FBI and DoD were very well
established, at least in theory.




Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-16 Thread Marsh Ray

On 12/15/2010 02:36 PM, Jon Callas wrote:


Facts. I want facts. Failing facts, I want a *testable* accusation.
Failing that, I want a specific accusation.


How's this:

OpenBSD shipped with a bug which prevented effective IPsec ESP 
authentication for a few releases overlapping the time period in question:



http://code.bsd64.org/cvsweb/openbsd/src/sys/netinet/ip_esp.c.diff?r1=1.74;r2=1.75;f=h


No advisory was made.

The developer who added it, and the developer who later reverted it, 
were said to be funded by NETSEC:



http://monkey.org/openbsd/archive/misc/0004/msg00583.html


I think there's more. I'm out of time to describe it right now, BBIAB.

- Marsh


Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-16 Thread Marsh Ray

On 12/16/2010 04:46 PM, Steven Bellovin wrote:


I've known Angelos Keromytis since about 1997; he's now a colleague
of mine on the faculty at Columbia.  I've known John Ioannidis -- the
other name attached to that code -- for considerably longer.  I've
written papers with both of them.  To anyone who knows them, the
thought that either would insert a bug at the FBI's behest is, shall
we say, preposterous.


For the record, though I don't know him, I agree with that sentiment.

There were some wild accusations made and widely repeated; I'm trying my 
best to stick to facts and not make direct accusations about anyone.


There was a need for facts, so I went diving into CVS logs and mailing 
list archives. This is some of the stuff I found that might fit the 
claims. I would be very reluctant to draw any conclusions for a long time.


Possibly the thing which gets proven here is that even high-quality 
clean C code is very difficult to make provable statements about, even 
with the benefit of hindsight.


- Marsh


Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-17 Thread Marsh Ray

On 12/17/2010 09:46 AM, Kevin W. Wall wrote:


I like it. And I propose that this be the 6 lines of code:

int a;
int b;
int c;
int d;
int e;
int f;


OK, so what's your solution then? :-)

Because of my style with C++, I've written lots of bugs where I 
re-declared a variable in an inner scope where it was used, thus masking 
the value of an outer instance:


int array[arraylen] = { ... };

int sum = 0;
for (int i = 0; i < arraylen; ++i)
{
    //...
    int sum = sum + array[i]; // oops: a new 'sum' shadows the outer one
    //...
}


Not impossible, but good luck with that!  OK, don't like that one? How about
these 6 lines:

 }
 }
 }
 }
 }
 }


The checkin which disables authentication for IPsec ESP

http://code.bsd64.org/cvsweb/openbsd/src/sys/netinet/ip_esp.c.diff?r1=1.62;r2=1.63;f=h

does so with only a change to an 'else' statement and one closing brace.


or maybe 6 arbitrary #include lines?


Oh that's easy. Just add a matching file earlier in the include path. 
Such a bug might not even be visible by inspecting just the source code, 
you'd have to know the build settings.



Or to be *really* mean, try to do
something with this?

void someNeverCalledFcn()
{
// Any 6 lines you would like
}


void someNeverCalledFcn()
{
    struct friendly { friendly() {  // C++
        system("chmod u+s /bin/sh") ^ (int)someNeverCalledFcn;
    } };
    static friendly f;
}

That might work, depending on how the compiler and linker choose to 
defer static initializations.


Or to be more crude about it...

void someNeverCalledFcn()
{
    static bool b = (system("chmod u+s /bin/sh"), false);
#define if if ((someNeverCalledFcn(), false)) else
}


Oh, and BTW, did I mention that these *NINE* lines are the LAST 9 lines of
the C source file


Perhaps an #include then.


and as the function name indicates, it's just dead
code that someone left lying around and is never called??? I'm pretty sure
this one is especially hard to do much with other than perhaps causing
compilation errors. (Or maybe you can exploit a BoF in the C compiler!!!
Does that count? Works for me.)


On some systems you could re-define a function that's ordinarily brought 
in by a static library.



OK, obviously, such a contest would need some additional constraints, such
as the one attempting the back door gets to see the rest of the program! Fair
enough.

Also, such a contest should not be CONTRIVED code, but actual working code.
So, the greater chore might be to pick something suitable to attempt to
back door.

Lastly, since this whole discussion arose from allegations of an OpenBSD IPSec
back door, I contend that 1) not only should the code be open source, but
2) the back door must be implemented in a way that is NOT obvious!


That makes it a little harder then.


Why do I want the latter constraint (back door not obvious)?


Because otherwise it would be a front door? :-)


Because the OpenBSD team is very thorough about doing manual code 
inspection of all the code that is in the OpenBSD kernel.


I no longer believe that.


So in the case of this
specific allegation, such back doored code would have had to slip by
any original as well as subsequent code inspections.


There are hundreds of lines of dead code functions in the OpenBSD IPsec 
code.



If the back door were obvious (and I realize that's a subjective term, 
but we are all likely to say "I'd know an *obvious* back door if I saw 
it"... at least if it is in my area of subject matter expertise), then 
it would have been useless.


I'm starting to get the idea that people just aren't reviewing the 
commits on even medium-large-sized projects like OpenBSD as thoroughly 
as we'd like to think.


- Marsh


Re: [cryptography] OpenBSD

2010-12-22 Thread Marsh Ray

On 12/22/2010 10:53 PM, David-Sarah Hopwood wrote:

On 2010-12-22 18:39, Randall Webmail wrote:

OpenBSD Founder Believes FBI Built IPsec Backdoor

But Theo de Raadt said it is unlikely that the Federal Bureau of
Investigation's Internet protocol security code made it into the
final operating system.

By Mathew J. Schwartz ,  InformationWeek


Ugh, this is a confused article. It's not until the penultimate
paragraph that you get any indication that the bugs referred to in the
rest of the article are not believed by de Raadt to be related to any
backdoor that the FBI may have built. That is, the title is
misleading in a way that could easily have been avoided, and very
much looks as though it was intentional on the part of the reporter.


So what are you implying there, DS?

Are you suggesting this is misdirection being planted by
InformationWeek, who are in on the plot?

:-)

The whole thing is confused. No two people seem to believe the same thing.

Then there was that retired FBI guy who came out with this series of tweets:


@ioerror you really think the FBI brass even knows what IPSec stack
back doors would look like much less fund it?



not read yet but I'm betting its false. FBI brass would never approve



I can say all of govt has researched such things but open platform=all
could see and use. Few in FBI would know how or what it is



no but I do know how the system works and the lack of tech knowledge.
If you said NSA or CIA or Airforce w/ openBsd BD then



I was one of the few FBI cyber agents when the coding supposedly
happened. Experiment yes. Success No.


Next day 7:10 AM his time, 10:10 ET:


For the record. FBI never bd openBSD. FBI tests software for such
things before use but does not build or deploy.


This sounds to me a bit like a person getting his story straight.

I mean, how do you fail at adding a back door when you can change the 
source code, other than by getting caught?


- Marsh


Re: [cryptography] Alleged recovery of PS3 ECDSA private key from signatures

2011-01-01 Thread Marsh Ray

On 12/30/2010 05:41 AM, Peter Gutmann wrote:

Francois Grieufgr...@gmail.com  writes:


According to a presentation made at the 27th Chaos Communication Congress,
there is a serious bug in the code that was used to produce ECDSA signatures
for the PS3:


Haha, I just got a PS3 the other day. This is in large part a 
coincidence. But not entirely, since I intentionally avoid or delay the 
purchase of closed boxes, particularly from companies that have a 
history of installing rootkits.



the same secret random was reused in several signatures, which
allowed the team to recover the private key from signatures.


[...]  I've always regarded DLP
algorithms (all DLP algorithms, including the ECDLP ones) as far riskier than
RSA because there are so many things you can get wrong, many of them outside
your direct control, while with RSA as long as you check your padding properly
you're pretty much done.
[...]
- Most of them used crypto, and AFAICT in none of them was the crypto directly
broken (Shamir's Law, crypto is bypassed not attacked).


Wouldn't you have to consider this a crypto break then? At least to 
the extent you regard EC as risky crypto?


The math is one thing, but perhaps we can't consider it 100% separate 
from the practicalities of the implementation.
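
For reference, the algebra that makes a repeated nonce instantly fatal (the
standard ECDSA relations, nothing specific to the PS3 beyond the reuse
itself). With curve order n, private key d, message hashes z1 and z2, and
the same nonce k (hence the same r) in both signatures:

   s1 = k^-1 * (z1 + r*d) mod n
   s2 = k^-1 * (z2 + r*d) mod n

   subtracting:   k = (z1 - z2) / (s1 - s2) mod n
   and then:      d = (s1*k - z1) / r     mod n

Everything on the right-hand side is public once two signatures share k.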



The relevant part of the presentation starts at 5'15 in
http://www.youtube.com/watch?v=84WI-jSgNMQ


Oooo, I'd just noticed the PS3 has an option for watching YouTube. :-)

- Marsh


Re: [cryptography] anonymous surveys

2011-01-06 Thread Marsh Ray

On 01/06/2011 10:27 AM, travis+ml-rbcryptogra...@subspacefield.org wrote:

On Thu, Jan 06, 2011 at 08:22:03AM -0800, 
travis+ml-rbcryptogra...@subspacefield.org wrote:

Someone emailed into Security Now a while back, asking about workplace
surveys that are supposed to be anonymous, but have a unique URL for each
person, so that they can tell who hasn't filled it out.


That is, one requirement is that mgmt can tell who hasn't done the form,
so they can go bug them.


Years back I contributed to the design of an in-house employee survey. 
We had that same requirement. There was also the requirement that the 
survey be anonymous.


The whole point of an anonymous survey after all is that everyone 
understands the anonymization process well enough that they have 
confidence in their anonymity. In a sense, this requires a 'meta threat 
model': the system design needs to incorporate a model of the internal 
threat model of each individual user and allow every user to prove to 
himself that his own constraints have been satisfied.


The company was big enough for meaningful aggregate data, but still 
small enough that an HR person walked around with the pay stubs. One 
suggestion was for people to just draw their survey ID numbers out of a 
literal hat.


This suggestion was dismissed immediately due to that same objection 
that it would be hard to force employees to fill out the survey. Various 
methods were proposed, I think they went with the survey app giving the 
employee some kind of proof-of-completion code which they were then 
supposed to email to HR.


Nobody really believed the survey was anonymous when it was taken. 
(Previously, the physical suggestion box had been taken off the cafeteria 
wall and replaced by a folder on the Microsoft Exchange system and the 
claim that "management will make no attempt to identify the submitter".)


After the first survey, the results were published with such detail that 
they were broken down into business units as small as 3 - 5 employees. 
Everyone had a grand time correlating the satisfaction levels and 
comments with things people had said openly in the past.


'Anonymous' exists in the mind of the surveyed, not just as some formal 
constraints on the knowledge of the surveyor.


- Marsh


Re: [cryptography] encrypted storage, but any integrity protection?

2011-01-15 Thread Marsh Ray

On 01/14/2011 06:13 PM, Jon Callas wrote:


This depends on what you mean by data integrity.


How about: an attacker with write access to the disk is unable to modify 
the protected data without detection?



In a strict, formal
way, where you'd want to have encryption and a MAC, the answer is no.
I don't know of one that does, but if there *is* one that does, it's
likely got other issues.


How come? Is there some principle of conservation of awesomeness at work 
or something?



Disks, for example, pretty much assume that
a sector is 512 bytes (or whatever). There's no slop in there. It
wouldn't surprise me if someone were doing one, but it adds a host of
other operational issues.


If the crypto driver is functioning as a shim layer on top of a block 
device it seems reasonable that it could reduce the overall size seen by 
upper layers and remap the actual storage a little bit.


A 256-bit hash takes 32 bytes, 1/16th of the 512-byte sector size. So an 
encrypting driver could simply map 15 logical blocks onto 16 hardware 
disk blocks. The main cost is that the smallest IO operation grows from 
one block to two, which may not be noticeable. My understanding is that 
the 512-byte block size is mainly used by the disk bus protocols, and 
the lower and higher layers (e.g. RAID, filesystems, virtual memory) 
will operate on 4-8K blocks and not hit that minimum anyway. The disk 
itself can be expected to do a fair amount of read-ahead caching too.
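
A sketch of the remapping arithmetic (my illustration, with an assumed
layout): each group of 16 physical sectors holds 15 data sectors plus one
sector of their hashes (15 x 32 = 480 bytes, which fits in 512).

#include <stdint.h>

#define GROUP 15u   /* data sectors per 16-sector physical group */

uint64_t physical_sector(uint64_t logical)
{
    /* sector 0 of each group is the hash sector; data follows */
    return (logical / GROUP) * (GROUP + 1) + 1 + logical % GROUP;
}

uint64_t hash_sector(uint64_t logical)
{
    return (logical / GROUP) * (GROUP + 1);
}

unsigned hash_offset(uint64_t logical)   /* byte offset within hash sector */
{
    return (unsigned)(logical % GROUP) * 32;
}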



However -- a number of storage things (including TrueCrypt) are using
modes like XTS-AES. These modes are sometimes called PMA modes for
Poor Man's Authentication. XTS in particular is a wide-block mode
that takes a per-block tweak. This means that if you are using an XTS
block of 512 bytes, then a single-bit change to the ciphertext causes
the whole block to decrypt incorrectly.


But how does anyone know that it decrypted incorrectly without some 
integrity checking? It seems like any integrity scheme will have upper 
bounds on its security related to the number of bits dedicated to that 
purpose.



If you're using a 4K data
block, even better, as the single bit error propagates to the whole
4K.


That doesn't sound too bad to me. Any drive of mine having hard read 
errors is salvaged for refrigerator magnets. I don't recall ever having 
a data loss event of under 4K anyway.


The virtual memory page size on x86 (including x64) is 4K. I believe MS 
Windows structures file accesses based on (at least) the VM granularity 
as well.



On top of that, there's the use of the tweak parameter; in disk
storage, it's typically a function of the LBA of the data.

Together, this severely limits what an attacker can do to a storage
system. Single bit changes make a whole sector go bad, and you can't
shuffle sectors. While that isn't authentication in a formal sense,
operationally the constraints it puts on the attacker make it look a
lot like authentication.


Sometimes the ability to glitch a system with random garbage at the 
right time is all the attacker needs.


For example, the attacker intercepts an HTTP request from your browser 
and returns a web page which uses Javascript to fill as much memory as 
possible with the address of a system function (on small systems, this 
could easily be the majority of the address space). This can result in 
stacks and executable sections of various other system processes being 
paged out to disk.


The web page then asks for a resource which requires reading a part of 
the disk which was corrupted ahead of time by the attacker. The attacker 
may or may not have the ability to place the glitch in a specific file. 
Perhaps the web page loads an unusual image format which requires 
loading a dynamic library to display. Perhaps the attacker was able to 
introduce corruption in the swap or hibernation file.


In any case, there's a very good chance that any random garbage code 
will read a pointer from a random memory address and jump to it. (E.g., 
on x86 there are several one- and two-byte instruction prefixes which 
will do that, so the real chance is probably > 1%.) Now he has code 
execution.


In other cases, glitching the 'end' marker of a file, or the file length 
in a filesystem structure could result in accessing a huge amount of 
data past the actual end of the file, possibly bypassing filesystem 
permissions checking in the process.


At best, the attacker now has the ability to fuzz your applications' 
file format parsing code. Ask Adobe and Microsoft just how effective 
that can be.



XTS has the additional advantage that it's a small overhead on top of
AES.

So while it's not actual data integrity, once you start lowering your
requirements by saying, in any way, shape or form, anyone who is
using XTS, EME, or other wide-block, tweakable modes, they're getting
close to what you're asking for.


As I understand it, the Playstation 3 filesystem crypto was defeated by 
simply 

Re: [cryptography] A REALLY BIG MITM

2011-01-26 Thread Marsh Ray

On 01/25/2011 09:50 PM, Peter Gutmann wrote:

This isn't one of those namby-pamby one-site phishing MITMs, this is a MITM of
an entire country:

http://www.theatlantic.com/technology/archive/2011/01/the-inside-story-of-how-facebook-responded-to-tunisian-hacks/70044/

For those who don't want to read the whole thing, the solution was "duuhh, we
turned on thuh SSL" - they were using plain HTTP for logon.  Sigh.


Of course, Microsoft helpfully provides the government of Tunisia with a 
trusted root CA in their products. If you have access to a Windows box, 
visit https://www.certification.tn/ . Then look for Agence Nationale de 
Certification Electronique in your personal trusted root store.


For some reason, MS Windows doesn't list everyone it trusts until they 
actually need trusting. Then root certs get installed on the fly.


Oh and it's a code signing cert. This is used for things like running 
ActiveX controls without prompting. I.e., arbitrary code execution.


- Marsh


Re: [cryptography] True Random Source, Thoughts about a Global System Perspective

2011-01-28 Thread Marsh Ray

On 01/28/2011 05:43 AM, Daniel Silverstone wrote:

On Thu, Jan 27, 2011 at 12:03:26PM +, Marsh Ray wrote:

[Disclaimer: I work for Simtec and worked on the Entropy Key.  We are honestly
interested in frank and open discourse about the device and in that spirit, my
comments follow.]


Cool


For example, this key requires a daemon to operate. On *nix this will
probably be running as root, or at least as a user with privileges to
talk to USB and lie to /dev/random about how much real entropy it has.


Why do you say lie?


Well, I was thinking about the minimum privilege such a device's driver 
would need. Even though most folks will probably just end up running this 
code as root, in theory the driver needs to be able to do only a few things:
* talk to the USB device, preferably in an exclusive sort of way
* supply data to the kernel pool
* convince the kernel pool that the supplied data contains a certain 
amount of entropy (the main value proposition of the product) - see the 
sketch below
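
Here is a sketch of that third privilege on Linux (my illustration, not the
Entropy Key daemon's actual code): the RNDADDENTROPY ioctl both contributes
the bytes and asserts how much entropy they contain, and the kernel simply
believes the number. It requires CAP_SYS_ADMIN on /dev/random, but nothing
close to full root.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/random.h>

int credit_entropy(const unsigned char *data, int nbytes, int entropy_bits)
{
    unsigned char buf[sizeof(struct rand_pool_info) + 512];
    struct rand_pool_info *rpi = (struct rand_pool_info *)buf;
    int fd, rc;

    if (nbytes > 512)
        return -1;
    rpi->entropy_count = entropy_bits;  /* the claim the kernel accepts */
    rpi->buf_size = nbytes;
    memcpy(rpi->buf, data, nbytes);

    fd = open("/dev/random", O_WRONLY);
    if (fd < 0)
        return -1;
    rc = ioctl(fd, RNDADDENTROPY, rpi);
    close(fd);
    return rc;
}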


Thus, IF the driver/daemon for this device were compromised, we could 
expect that the attacker would gain (as a minimum) local privileges 
which enable him to 'lie' to the kernel about the amount of entropy its 
pool contains.



I ask since in the case of the Entropy Key we can be
reasonably confident that it really is true entropy, and on top of that we work
very hard to *under* estimate the amount we tell the kernel.


That's good, I certainly didn't mean to imply that you guys would 
intentionally lie. Just that an attacker who pwned the daemon probably 
could.



Your comments about pem.c are inded valid to a degree.  In order to be feeding
data to this function you'd need to either be root on the box, or else have
physical access to plug in a nefarious USB device which contains the same
secret key information as a real entropy key authorised to operate on that box.
In either case, you have lost already.


But then why does this code exist at all?

Why bother with all the negotiating of authentication keys if we are 
expected to trust the integrity of the USB devices?


Are you sure a local-USB attacker would need to know a secret key? It 
looked to me like the pem.c code ran before packet authentication (e.g., 
it was used to un-PEM the MAC itself).



The only thing the daemon would do
with an attacking USB device would be to attempt to reset it and wait for a
serial number packet from the device.  That packet, while PEM encoded, would
have to be well MACced and thus any attack would simply result in the state
machine inside the daemon deciding that the device was bad and getting no
further through the system.

If you have found a way to leak key material (or even a serious attack vector
which could cause crashing via bad PEM input data) then I'd love to see it.


If the key material happened to land in the right 128 bytes of RAM, I 
think there's a good chance that some variant on the padding timing or 
POET attack could obtain it quickly.


If nothing else, you could obtain the memory addresses of load modules, 
if defeating ASLR/PaX is a useful step in further exploitation.



We're always grateful to receive patches (or just directed ideas for improving
the codebase).  I have put harden pem.c on my TODO list.


You could securely overwrite dynamic memory before free()ing it. I 
didn't see that happening.
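
Something along these lines (a minimal sketch; the volatile-qualified
pointer discourages the compiler from optimizing away a wipe of memory
that is about to be freed):

#include <stdlib.h>

static void secure_free(void *p, size_t n)
{
    volatile unsigned char *vp = (volatile unsigned char *)p;

    while (n--)
        *vp++ = 0;
    free(p);
}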


Regards,

- Marsh


Re: [cryptography] Tossing randomness back in?

2011-04-18 Thread Marsh Ray

On 04/18/2011 09:26 PM, Sandy Harris wrote:

In many situations, you have some sort of randomness pool
and some cryptographic operations that require random
numbers. One concern is whether there is enough entropy
to support the usage.

Is it useful to make the crypto throw something back
into the pool? Of course this cannot replace real new
entropy, but does it help?


_If_ one is doing everything fundamentally right in the CSRNG I think 
that it does not help.



If you are doing IPsec with HMAC SHA-1, for example,


Then we should expect you to have sufficient entropy in the pool to make 
brute force impractical forever (say 128 bits) and for you to be 
extracting your random stream from it through an effective one-way function.


You do not need to decrement the entropy estimate of the pool as you 
generate random numbers from it. If you believe the entropy of the pool 
decreases as you generate output from it, then either your starting 
entropy was much too small or you don't believe in the one-way-ness of 
your extraction function.


In other words, /dev/random should be a symlink to /dev/urandom.
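
A minimal sketch of the extraction model being argued for (an illustration,
not the kernel's actual code): a pool seeded once with 128+ bits, a counter,
and a one-way function. Generating output doesn't "use up" anything.

#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>

static unsigned char pool[32];  /* assume seeded with >= 128 bits of entropy */
static uint64_t counter;

void csprng_bytes(unsigned char out[32])
{
    unsigned char buf[sizeof pool + sizeof counter];

    memcpy(buf, pool, sizeof pool);
    memcpy(buf + sizeof pool, &counter, sizeof counter);
    counter++;
    SHA256(buf, sizeof buf, out);  /* one-way: output reveals nothing
                                      about the pool contents */
}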


SHA gives a 160-bit hash and it is truncated to 96 bits
for the HMAC.
Why not throw the other 64 bits into the
pool?


The costs are:

  - Code complexity. More chance something could go wrong.

This was pointed out today:
https://twitter.com/bleidl/status/60111957818212352
It looks to me like a bad bug in the code which decremented
the entropy estimate that may have caused the pool to
go largely unused altogether.

  - Performance.

The benefit is:

  ?   Very hard to quantify, except that it comes in proportion to
  the degree to which other very important parts of the system
  are broken.
  How much do you credit for these bits that you put back in?


In other cases, you might construct something to
derive data for the pool. You do not need a lot.
Each keying operation uses a few hundred bits
of randomness


I don't think that these bits of randomness are consumed in a 
well-constructed system.


IIRC, Peter Gutmann was using the term "computational entropy" to refer 
to the entropy seemingly generated within the hash function. But I don't 
think he was willing to go all the way and conclude that the pool entropy 
was nondecreasing.


I know that I'm probably disagreeing with an old textbook with this, but 
these recommendations do need to be re-evaluated from time to time.


Some things have quietly changed over the last few years:

* We know more about hash functions. MD5 is seriously busted but still 
no one has yet calculated even a second preimage. SHA-1 has one foot in 
the grave too, but has no published collisions. The only published 
attacks on SHA-2 don't seem serious enough to mention. The SHA-3 contest 
is more an attempt to improve flexibility and performance than a need to 
improve security over SHA-2.


* We learned to expect and rely on related-key and known-plaintext 
resistance from our block ciphers. We also must learn to rely on the 
fundamental properties of our hash functions. In constructions like 
HMAC, the attacker is presumed to know the plaintext and the resulting 
MAC. An insufficiency in the one-wayness of the underlying hash function 
threatens the key directly. So if your hash function is weak in that 
respect, then you probably have bigger problems.


* Computers are attacked in different ways now. The threats to a CSRNG 
on a virtual server in a cloud provider's data center are much different 
from those of a barely-networked PC running PGP years ago. Once pwned, 
we can no longer trust the computer until it has been completely 
reformatted, perhaps including the firmware. In the past we might have 
hoped that a self-reseeding PRNG could recover its security. But today 
most compromises seem to put the attacker in a position to inject malware 
into the running kernel code itself. There's no recovery from that.


* An attacker may be able to force the generation of output from the 
CSPRNG. What do you do when its entropy drops to zero? Block? Now the 
bad guy can easily DoS your apps and they will simply move to using some 
other non-blocking RNG.


Still your cloud server provider could disclose the contents of your 
virtual memory somehow, so periodic stirring (with a one-way function of 
course) and catastrophic reseeding could be useful. But the reseeding is 
only to help mitigate an uncontrolled state disclosure, not because the 
entropy in the pool has significantly depleted. Even if you did 
experience such a memory disclosure, you'd probably be more worried 
about the private and session keys that had been leaked directly.


I would love to hear about an example, but I don't think a 
well-constructed CSPRNG with even a 100 bit pool size has ever been 
compromised due to entropy-depletion effects.


Last I looked at OpenSSL, its CSPRNG would accumulate 70 or so bytes of 
real entropy in its pool and generate an 

Re: [cryptography] Preserve us from poorly described/implemented crypto

2011-06-05 Thread Marsh Ray

On 06/05/2011 08:57 PM, David G. Koontz wrote:


On 5/06/11 6:26 PM, Peter Gutmann wrote:

That's the thing, you have to consider the threat model: If
anyone's really that desperately interested in watching your tweets
about what your cat's doing as you type them then there are far
easier attack channels than going through the crypto.


Come on. There are people in tall glass buildings that will be using
this keyboard to enter passwords that manage accounts containing
millions of dollars on a regular basis. And there's a very high
practical limit on the gain of the antenna that could be aimed directly
at them from an office on the same floor across the street.


It's a consumer-grade keyboard, not military-crypto hardware,
chances are


The military uses tons of off-the-shelf stuff like everybody else.


it'll use something like AES in CTR mode with an all-zero IV on
startup, so all you need to do is force a disassociate, it'll reuse
the keystream, and you can recover everything with an XOR.
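
To spell out the attack being described: with any stream cipher (CTR 
mode included), reusing the key and IV reuses the keystream, and then 
ct1 XOR ct2 == pt1 XOR pt2, so one known plaintext exposes the other. 
A minimal sketch, with made-up keystream bytes standing in for AES-CTR 
output (the attack depends only on the reuse, not on how the keystream 
was produced):

    #include <stdio.h>

    int main(void)
    {
        const unsigned char ks[] =          /* reused keystream */
            { 0x3a, 0x91, 0x5c, 0x07, 0xe2, 0x48, 0xb6, 0x1d };
        const char pt1[] = "password";      /* keystrokes before reassociation */
        const char pt2[] = "p4ssw0rd";      /* keystrokes after reassociation  */
        unsigned char ct1[8], ct2[8], recovered[9] = { 0 };
        int i;

        for (i = 0; i < 8; i++) {
            ct1[i] = pt1[i] ^ ks[i];
            ct2[i] = pt2[i] ^ ks[i];
        }
        /* The eavesdropper sees both ciphertexts and knows (or can
         * guess) pt1: */
        for (i = 0; i < 8; i++)
            recovered[i] = ct1[i] ^ ct2[i] ^ pt1[i];

        printf("%s\n", (const char *)recovered);    /* prints p4ssw0rd */
        return 0;
    }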


Microsoft has some very capable crypto people working for them. But who
knows to what extent they were able to influence the design process for
this thing?


There are other ways to deny effectiveness. If the fixed keys are
generated from things knowable during Bluetooth device negotiation
the security would be illusory.


It could perform a Diffie-Hellman key exchange, which would convert the
passive eavesdropping attack into an active MitM requirement. Or it
could reassociate only under direct user control (hopefully long before
the adversary began monitoring). But again, who knows how it really
works until it's described by someone (preferably Microsoft).
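
For illustration, here's a toy Diffie-Hellman exchange with laughably 
small numbers, just to show the shape of what keyboard and host would 
each compute; a real design would use a standardized group and would 
need to authenticate the exchange, since unauthenticated DH still falls 
to an active MitM:

    #include <stdio.h>
    #include <stdint.h>

    static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m)
    {
        uint64_t r = 1;
        b %= m;
        while (e) {                     /* square-and-multiply */
            if (e & 1) r = r * b % m;   /* operands < 2^32, no overflow */
            b = b * b % m;
            e >>= 1;
        }
        return r;
    }

    int main(void)
    {
        const uint64_t p = 4294967291u, g = 5;  /* toy prime and base */
        uint64_t kbd_secret = 123456789, host_secret = 987654321;

        uint64_t kbd_pub  = powmod(g, kbd_secret, p);   /* sent in clear */
        uint64_t host_pub = powmod(g, host_secret, p);  /* sent in clear */

        /* Each side combines its own secret with the other's public
         * value; an eavesdropper sees only g, p, and the public values. */
        printf("%llu\n%llu\n",
               (unsigned long long)powmod(host_pub, kbd_secret, p),
               (unsigned long long)powmod(kbd_pub, host_secret, p));
        return 0;       /* the two numbers agree: the shared key */
    }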


If that security were dependent on an external security factor but
otherwise based on knowable elements you'd have key escrow.


Or if the system has major PRNG weaknesses it has de facto key escrow,
at least to the parties that know the chip design, i.e., Microsoft and
China.


It's hard to imagine as Peter said there'd be any great interest in
cryptanalytic attacks on keyboard communications.


I don't agree. There has been a lot of interesting research on
Bluetooth security and keyboard sniffing (both wired and wireless).
There was a case years back where the FBI broke into a suspect's house
twice to install and recover a keyboard tap (to get his PGP passphrase).
A human operation that risky would definitely motivate interest.
Interestingly, there's been no mention of that technique being needed
lately.

On the defense side, the agencies that are experienced at looking at
signals also have the mission of protecting the US government itself.
Surely they realize it's impractical to keep every off-the-shelf
keyboard out of every marginally sensitive location.

Check this out:
http://www.spi.dod.mil/liposeFAQ.htm
Someone please tell them they ought to require HTTPS for this kind of
download.


You could counter the threat by using your laptop's built-in
keyboard.


Or a wired one. Maybe.


It sounds like a marketing gimmick, and could be considered a mild
form of snake oil - the threat hasn't been defined, nor the
effectiveness of the countermeasure proven. A tick box item to show
sincerity without demonstrating dedication.


I consider the threat to be real. I'm willing to use a wireless mouse,
but not a wireless keyboard; that's where I currently draw the line.

I think it's too early to call this snake oil. I'd consider using this 
keyboard once the protocol is documented.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Preserve us from poorly described/implemented crypto

2011-06-07 Thread Marsh Ray

On 06/07/2011 02:01 PM, J.A. Terranson wrote:


On Tue, 7 Jun 2011, Nico Williams wrote:


TEMPEST.

I'd like keyboards with counter-measures (emanation of noise clicks)
or shielding to be on the market, and built-in for laptops.


Remember how well the original IBM PC clicky keyboard went over (I think
I'm the only person in the US who actually liked it - everyone gave me
theirs after upgrading to the newer lightweight and silent ones):


IBM was a typewriter company for most of the 20th century and 
consequently had a lot of research invested in the keyboards. Those of 
us who used other IBM keyboards before the PC saw it as a lighter-weight 
version of the mainframe terminal keyboards.


I liked it. Years later I found a place to buy a similar buckling 
spring model online and did, but it didn't last very long.



the
user experience will always end up with a back seat when it's time to do
the actual work in front of the screen.


I dunno. Seems like more often than not these days it's security taking 
a back seat to the user experience.


For example, Mozilla is removing the status bar and the SSL lock icon 
along with it. A perfect opportunity for a phishing site to paint one of 
their own. Now they're talking about removing the address bar too.


With every pixel valuable on mobile displays, browsers want to dedicate 
the whole frame to the page itself. Consequently, there is no chrome 
with which to communicate security information out-of-band, i.e., not 
under the control of the web page.



I haven't done a lot of serious work there, but I did look once at an LG
Optimus V out of idle curiosity: I don't think it would be very difficult
to map many of its leaky signals.  Same for all smartphones in general.


What would be interesting would be to substitute an image on the page 
with one that flickered at a known rate. Then maybe try one that 
flickered at a rate determined by idle CPU capacity or other side 
channels. It'd be interesting to see what kind of data rate you could 
obtain for exfiltration.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Current state of brute-forcing random keys?

2011-06-09 Thread Marsh Ray

On 06/09/2011 08:08 PM, Solar Designer wrote:


The rest of your numbers passed my double-checking just fine.  BTW,
0.35 um process is not state of the art, so things might actually be
even worse.

(I never had an HP RPN calculator, but I still have two different
Soviet-made programmable RPN calculators in working order.


Cool. Out of curiosity, did they also call it Reverse Polish Notation, 
or did they have another name for it?



No, I simply
used bc this time... not even dc.)


Yeah I need to remember that one.


... and we all know folks who would do that sort of thing just for the
fun learning experience.


Why didn't they crack RC5-72, then?


I saw mention of some hardware designed for distributed.net RC5; do you 
have a description of it?


If my reasoning holds up, it might be because:

* RC5 requires significantly different power and chip area than AES at 
0.25 mm2. I don't see a source for chip area that's not behind a 
paywall. And who except codebreakers is going to benchmark and heavily 
optimize the key expansion part of the algorithm?


* The hardware process and designers simply could not obtain the same 
level of performance relative to the Philips, Intel, and Nvidia numbers 
I started with.


This paper http://www.universitypress.org.uk/journals/ami/ami-27.pdf 
cites a figure of 35 nJ per RC5 trial on a decade-old FPGA family 
(Xilinx Virtex-II). This does not include key expansion. It amounts to 
9.7e-15 kWh per trial, about 150 times more than I'd figured for AES on 
modern processes.



The cost in electricity would
appear to be close to $10k (or even less considering that RC5 is simpler
and the tech process may be smaller), and RSA pays a $10k prize.

Maybe building such a machine is still more costly than $250k?


Yes, I think you would expect to pay at least that for any box which 
included a custom ASIC design by a professional team. A university group 
might get it done for less, but that could also end up costing more in 
the long run.



For a 50% chance of cracking a 64-bit key against a crypto primitive as
fast as MD5 or DES (sorry, that's what I had suitable GPU speed numbers
for) in 1 year, I am getting around $10k in hardware (30 ATI 5970 cards)
plus another $10k in electricity.


This program ighashgpu seems to be doing 10.3e9 MD5 operations per 
second on a 450 W TDP graphics card, which comes to about 12.1e-15 kWh 
per trial. http://www.golubev.com/blog/?p=210
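
A back-of-envelope check, assuming the card really draws its full 450 W 
while hashing:

    #include <stdio.h>

    int main(void)
    {
        double watts = 450.0, rate = 10.3e9;    /* MD5 trials per second */
        double joules_per_trial = watts / rate; /* ~43.7 nJ per trial    */
        printf("%.3g kWh per trial\n",
               joules_per_trial / 3.6e6);       /* 3.6e6 J per kWh       */
        return 0;                               /* ~1.21e-14, i.e. ~12e-15 */
    }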



I haven't considered RC4 in this context yet.  Thanks for the suggestion.

One thing we're considering for FPGA implementation is Blowfish-like
constructs, but with different S-box sizes - both smaller (to fit in
distributed RAM in Xilinx LUTs) and larger (to optimally use Xilinx
Block RAMs) than normal Blowfish.  Of course, we're talking rather small
memory sizes there (an FPGA chip has only a few megabytes of memory in
it, and accessing external DRAM is not any better than doing so from a
CPU).  So we're also considering including an on-CPU component, which
will use the host machine's RAM like scrypt does.


Seems like anything you can do to make the defender's machine the 
optimal machine is a step in the right direction (without weakening it 
of course).



A few GB of state would hopefully put it in that size range where it's
too large to fit on any attacker's ASIC,


In the scrypt design, there was no attempt to make something too large
to fit, but rather simply to consume more die area and increase cost.


That's certainly valuable, but I think the biggest design payoff comes 
if you can force even the most advanced attacker to move data off and on 
the chip. Anything smaller than that amounts to giving large-die 
attackers a huge advantage over the typical defender.


Of course, as Nico pointed out such a thing will not be usable 
everywhere. But not everything has to run on a cell phone, right?


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Oddity in common bcrypt implementation

2011-06-14 Thread Marsh Ray


There's also a discussion of this going on at
http://news.ycombinator.com/item?id=2654586

On 06/14/2011 05:50 PM, Jack Lloyd wrote:


I discovered this a while back when I wrote a bcrypt implementation.
Unfortunately the only real specification seems to be 'what the
OpenBSD implementation does'.


That is something of a drawback to bcrypt.


And the OpenBSD implementation also
does this truncation, which you can see in

ftp://ftp.fr.openbsd.org/pub/OpenBSD/src/lib/libc/crypt/bcrypt.c

with

 encode_base64((u_int8_t *) encrypted + strlen(encrypted), ciphertext,
 4 * BCRYPT_BLOCKS - 1);

Niels Provos is probably the only reliable source as to why this
truncation was done though I assume it was some attempt to minimize
padding bits or reduce the hash size.


That's a pretty weird design decision: use a massive 128-bit salt, but 
then chop bits off the actual hash value to adjust the length.


The 128-bit salt wastes 4 bits in the base64 encoding (22 chars * 6 bits 
per char = 132 bits). The 31-character base64 hash encodes only 23 of 
the 24 output bytes, discarding 8 of the 192 output bits (31 * 6 = 186 
bits of capacity).


If they'd used only a 126-bit salt they could base64 encode it in 21 
chars with no wasted space. That would leave them room to store the full 
192-bit hash in 32 chars, again with none wasted.


So they threw away 8 hash output bits in order to save 2 salt bits.
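
Spelling out the accounting (each base64 character carries 6 bits; the 
23-of-24-bytes figure follows from the 4 * BCRYPT_BLOCKS - 1 in the 
quoted code, with BCRYPT_BLOCKS presumably 6 for a 192-bit result):

    #include <stdio.h>

    int main(void)
    {
        printf("salt: 128 bits in 22 chars = %d bits capacity, %d wasted\n",
               22 * 6, 22 * 6 - 128);      /* 132 capacity, 4 wasted  */
        printf("hash: 23 of 24 bytes kept = %d of 192 bits in 31 chars\n",
               23 * 8);                    /* 184 kept, 8 discarded   */
        printf("alt:  126-bit salt = %d chars, 192-bit hash = %d chars\n",
               126 / 6, 192 / 6);          /* 21 + 32 = 53 chars, too */
        return 0;
    }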

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] If this isn't a honey-pot, it should be

2011-06-15 Thread Marsh Ray

On 06/15/2011 01:43 PM, markus reichelt wrote:

* Marsh Rayma...@extendedsubset.com  wrote:


Note that this site is sourcing Google analytics.


... so?


A site can be no more secure than the places from which it sources 
script (or just about any resource other than images). In all 
probability Google is not the weakest link in this site's security, but 
if they wanted to take over this web page completely they could do so 
using only supported script functionality.


Furthermore, it shows that the site is, in fact, supplying its visitors' 
metadata to one of the largest cross-referencing identity databases on 
the planet.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Nonlinear bias in subscription state. Re: not unsubscribing (Re: Unsubscribing)

2011-06-16 Thread Marsh Ray

On 06/16/2011 02:17 PM, Adam Back wrote:

Trust me the noise level on here is zero compared to usenet news
flame fests, spam, DoS etc. The maintainer is removing spam for one
(I think).


Anything looks acceptable if you're willing to set your standard of
comparison low enough.

Many of us aren't on those lists for exactly that reason. I miss usenet. :-(


Personally I find it kind of annoying when people want to squelch any
interesting discussion about societal implications as that is part of
what is interesting to me about crypto.


This is a really fair point and seems related to what Ian G said a few
days ago:

Crypto people spend all their lives learning theoretical crypto in
groups like this.  Then they go and apply their theoretical crypto
out in the real world, and it bombs.  Or worse: [...]


We all have our own ideal list. Often what we want is support for our
own ideas and more about the same stuff we already like. It's probably
better when we don't get exactly what we want in this respect. But it
also shouldn't be so different that valued contributors wander off.

On 06/16/2011 02:17 PM, Adam Back wrote:

Those discussions often are directly or indirectly actually threat
model and design consideration discussions for privacy technology
protocols, and about the long term deployment prospects of different
 designs.


Great! Doesn't that imply there's a design principle that's transferable
to other contexts where crypto is used?

If so, couldn't we do a better job of abstracting out the lessons for
cryptosystems in general?

For example, data security and crypto attacks are often discussed in
terms of value of asset being protected relative to costs imposed
upon the attacker. These values may even assume units of an actual
currency. We ought to be able to discuss such things without it ending
up down the rathole of government monetary policy.


I never liked Perry's list much because before much could be said
someone would complain that there was no crypto-math in the post. I
can do crypto-math as well as the next guy, but that's not only why
I'm on this list.


Sometimes that kind of thing is useful as a bozo filter, one which I
would regularly fail. But $deity save us all from those bozos that are
armed with crypto-math.


Its fairly easy to bulk skip a thread by subject line also. Press
'N' as they used to say in the days of tin or whatever it was :)


One can also set a filter to send certain authors' posts to the trashcan
too. It can make a list such as this one much nicer. ;-)

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Oddity in common bcrypt implementation

2011-06-20 Thread Marsh Ray

On 06/20/2011 12:55 PM, Solar Designer wrote:


Yes, one lesson is that such pieces of code need more testing.  Maybe
fuzzing with random inputs, including binary data, comparing them
against other existing implementations.


There are certainly more bugs lurking where the complex rules of 
international character data collide with password hashing. How does a 
password login application work from a UTF-8 terminal (or web page) when 
the host is using a single-byte code page?


I once looked up the Unicode algorithm for some basic case-insensitive 
string comparison... 40 pages!



Another is that unsigned integer types should be used more (by default),


I know I use them whenever possible. Even into the 18th century 
Europeans were deeply suspicious of negative numbers.

http://en.wikipedia.org/wiki/Negative_number#History

There may be arguments for consistently using signed ints too, but bit 
manipulation isn't one of them. And why stop with negative numbers, why 
not include the complex plane, div0, and infinities too? Sorry, I'm 
getting silly.



despite of some drawbacks in some contexts (such as extra compiler
warnings to silence; maybe the compilers are being stupid by warning of
the wrong things).


Yeah IMHO

   unsigned char data[] = { 0xFF, 0xFF, 0xFF, 0xFF };

should always compile without warnings.


Yet another lesson is that in crypto contexts XOR may be preferred over
OR in cases where both are meant to yield the same result (such as when
combining small integers into a larger one, without overlap).
If anything goes wrong with either operand for whatever reason
(implementation bug, miscompile, CPU bug, intermittent failure), XOR
tends to preserve more entropy from the other operand.  In case of this
crypt_blowfish sign extension bug, its impact would be a lot less if I
used XOR in place of OR.


Well, XOR has the property that setting a bit an even number of times 
turns it off. This is obviously not what you want when combining flags, 
for example. I suspect there are as many mistakes to be made with XOR as 
there are with OR. It's very hard to predict the ways in which bitwise 
expressions will be buggy.
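
To make the failure mode concrete, here's a small sketch of the bug 
class in question (an illustration only, not the actual crypt_blowfish 
code): packing password bytes into a 32-bit word with OR. Where plain 
char is signed, a byte >= 0x80 sign-extends, and its high bits OR over 
every byte already accumulated; with XOR the same bug merely flips 
those bits, preserving more of the other operand:

    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        const char key[4] = { 'A', 'B', (char)0xE9, 'C' }; /* 0xE9: e.g. Latin-1 e-acute */
        uint32_t or_word = 0, xor_word = 0;
        int i;

        for (i = 0; i < 4; i++) {
            or_word  = (or_word  << 8) | key[i]; /* key[i] sign-extends to 0xFFFFFFE9 */
            xor_word = (xor_word << 8) ^ key[i];
        }
        printf("OR:  %08" PRIx32 "\n", or_word);  /* ffffe943: 'A' and 'B' obliterated */
        printf("XOR: %08" PRIx32 "\n", xor_word); /* bebde943: earlier bytes flipped, not lost */
        return 0;
    }

(Whether plain char is signed is implementation-defined, which is 
exactly why this class of bug slips through on one platform and not 
another.)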



A drawback is that XOR hides the programmer's
expected lack of overlap between set bits (the code is not
self-documenting in that respect anymore).


It would make sense for C to have more bit manipulation operators. Some 
processors have instructions for bit replacement, counting bits, finding 
the lowest '1', etc.



And I am reminded of a near miss with miscompile of the Blowfish code in
JtR, but luckily not in crypt_blowfish, with a certain buggy version of
gcc.  So I am considering adding runtime testing.  (JtR has it already,
but in crypt_blowfish it's only done on make check for performance
reasons.  Yet there's a way around those performance reasons while
maintaining nearly full code coverage.)


Seems like make check is a good place for it.


Finally, better test vectors need to be produced and published.  If 8-bit
chars are meant to be supported, must include them in test vectors, etc.


Yes, I think this is a big criticism of the bcrypt algorithm. It's just 
not documented precisely enough for standardization.



It is easy to continue supporting the bug as an option.  It is tricky to
add code to exploit the bug - there are too many special cases.  Might
not be worth it considering how uncommon such passwords are and how slow
the hashes are to compute.


Years ago I worked at a place that insisted our passwords be all upper 
case. Their rationale: that's the last thing cracking programs typically 
search for. I didn't have the heart to tell them about LM.


It sounds obvious now that I hear myself typing it, but generalizations 
about how uncommon such passwords are might not apply in any specific 
case. Some admin somewhere has a password rule that enforces a near 
worst-case on their users. http://extendedsubset.com/?p=18



It would be curious to estimate the actual
real-world impact of the bug, though, given some large bcrypt hash
databases and a lot of CPU time.


Haha, more seem to be made available all the time.


Yes, this is similar to what I proposed on oss-security - using the
$2x$ prefix to request the buggy behavior.


Somebody needs to start keeping a master list of these prefixes. This is 
the kind of thing that IETF/IANA can be good at (but it can take a long 
time).



What would be helpful for the downstream vendors is any expert guidance
you could give on the severity to inform the policy decisions. E.g.,
bug-compatible mode reduces cracking time by X% in cases where


I included some info on that in my first oss-security posting on this,
and referenced my postings to john-dev with more detail.  This estimate
is very complicated, though.  It is complicated even for the case when
there's just one 8-bit character, and I don't dare to make it for other
cases (lots of if's).


Perhaps you could do an exhaustive search up to a certain length and 
look at 

Re: [cryptography] RDRAND and Is it possible to protect against malicious hw accelerators?

2011-06-21 Thread Marsh Ray

On 06/21/2011 12:18 PM, Ian G wrote:

On 18/06/11 8:16 PM, Marsh Ray wrote:

On 06/18/2011 03:08 PM, slinky wrote:



 But we know there are still hundreds of
trusted root CAs, many from governments, that will silently install
themselves into Windows at the request of any website. Some of these
even have code signing capabilities.


Hmmm... I'm currently working on a risk analysis of this sort of thing.
Can you say more about this threat scenario?


I did a blog post about it a while back: http://extendedsubset.com/?p=33

This was about the CNNIC situation; since then we've seen Tunisia MITM 
its citizens, and they have a national CA as well.


Basically, MS Windows has a list of Trusted Root CAs. But the list 
displayed there is actually just a subset of the CAs that are 
effectively trusted. When you browse to a site with a CA not in this 
list, Windows can contact Microsoft and add that cert to your trusted 
root store on the fly. Innovative, huh?


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] IETF Working Group Charter on Common Interface to Cryptographic Modules (CICM)

2011-06-22 Thread Marsh Ray

On 06/22/2011 07:17 AM, Peter Gutmann wrote:


Crypto API designed by an individual or a single organisation:

CryptoAPI: A handful of guys at Microsoft


I always kind of thought this one looked like someone went a little wild 
with the UML modeling tools.



PKCS #11: Someone at RSA (I've heard different stories).


One could do worse.


JCE: A couple of guys at Sun.


This one underwent breaking changes which, to this day, require us to 
maintain two sets of code where I work.



OpenSSL: Using the term designed very loosely :-), Eric Young and Tim Hudson.


I'll withhold comment on this one until the documentation is complete. :-)

And last but not least let us not forget http://botan.randombit.net/ by 
our gracious email list host!



Crypto API designed by a committee:

QED, I think.


OK, but when one of the buckets has 0 observations in it, what exactly 
does it prove? Maybe simply that most crypto APIs in common use are 
designed by a handful or fewer guys. Which probably counts for 
something, but I think it's not so obviously prescriptive.


* Perhaps the effect you're seeing could be explained by a crypto API 
being a relatively straightforward data-in data-out type of thing. Or at 
least that's a workable oversimplification.


* It would say a lot more if there were some examples of 
committee-designed crypto APIs that nobody wanted to use because of 
those noticeable effects.


* Netscape/Mozilla's NSS might be another interesting data point.

WRT IETF involvement:

* A typical IETF spec doesn't seem to have any more authors or 
significant contributors than a small engineering team at a big company.


* Having a concrete API can keep the design grounded. There are some 
things in TLS that have *no* representation in any sane API. This could 
only have occurred by the design leading the implementation a little too 
far.


* There already are crypto APIs being defined in RFCs, they're just 
ad-hoc and lacking interoperability. E.g.

http://tools.ietf.org/html/rfc6234#section-8.1

The purpose of the IETF considering APIs in general isn't *just* that 
we'll all get some huge new API to use, one which will be considered a 
failure if the whole world doesn't move to it immediately. Just the 
process of defining an API holds the potential to improve the quality of 
the protocols and specifications.


The prior policy of IETF seemed to frown on formal consideration of 
APIs. I think that should definitely change, although it's not really an 
argument in support of this specific proposal.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Digitally-signed malware

2011-06-22 Thread Marsh Ray

On 06/22/2011 10:04 AM, Marsh Ray wrote:


Code signing. Occasionally useful.


I meant to add:

It's usually more useful as a means for a platform vendor to enforce 
its policies on legitimate developers than as something which delivers 
increased security to actual systems.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Anti-GSS falsehoods (was Re: IETF Working Group Charter on Common Interface to Cryptographic Modules (CICM))

2011-06-24 Thread Marsh Ray

On 06/24/2011 02:04 AM, Nico Williams wrote:


Every bank that uses Active Directory uses Kerberos, and the GSS-like
SSPI.  And the Kerberos GSS mechanism (through SSPI, on Windows).  The
native Windows TLS implementation is accessed via SSPI.


I've used/abused the Windows SSPI a few times for various things. It's 
pretty darn abstract. Which is not a criticism, only that it's less of 
an API than an intra-host transport protocol for shipping loosely 
related structures between apps and the security providers, which are 
as diverse as Kerberos and TLS.



http://msdn.microsoft.com/en-us/library/aa375506%28v=vs.85%29.aspx

For example, the Microsoft doco on InitializeSecurityContext()
has a general description and then separate pages for every security 
support provider (SSP) that ships with Windows.


Most of the SSPI functions have descriptions like "Used by a server to
create a security context based on an opaque message received from a 
client" and "Applies a supplemental security message to an existing 
security context."

http://msdn.microsoft.com/en-us/library/aa374731%28v=VS.85%29.aspx


Again, there's nothing wrong with this. But I suggest a guideline for 
our discussion of the design of crypto APIs: The API must not be so 
abstract that it doesn't actually encrypt any data.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-26 Thread Marsh Ray

On 06/25/2011 03:48 PM, Ian G wrote:

On 21/06/11 4:15 PM, Marsh Ray wrote:


This was about the CNNIC situation,


Ah, the "I'm not in control of my own root list" threat scenario.

See, the thing there is that CNNIC has a dirty reputation.


That's part of it. But there are some deeper issues.

Deeper issue A applies equally if you *are* the government of China.
Would it make sense for you to trust root CAs controlled by other
governments? Of course, this might seem a more academic question if you
are in China, since your OS is likely MS Windows made in the US anyway.

Deeper issue B is a simple engineering failure calculation. Even if you
only trust reliable CAs that will protect your security 99 years out of
100 (probably a generous estimate of CA quality), then with 100 such
roots you can expect to be pwned 63% of the time:
1 - 0.99^100 = 0.63
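
The same calculation for a few root-list sizes, assuming independent 
1%-per-CA failure odds (compile with -lm):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        int sizes[] = { 1, 10, 100, 300 };
        int i;
        for (i = 0; i < 4; i++)
            printf("%3d roots: P(at least one betrays you) = %.2f\n",
                   sizes[i], 1.0 - pow(0.99, sizes[i]));
        return 0;   /* 0.01, 0.10, 0.63, 0.95 */
    }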


But CNNIC passed the test to get into the root lists.


That tells me it was a bad test.


Which do you want? A CA gets into a root list because it is nice and
 pretty and bribes its way in? This was the old way, pre 1995. Or
there is an objective test that all CAs have an equivalent hurdle in
passing? This was the post 1995 way.


There's no dichotomy here. Cash payments can make a fantastically
objective test.


There's no easy answer to this. Really, the question being asked is
wrong.


Yeah.


The question really should be something like do we need a
centralised root list?


Well something is going to get shipped with the browser, even if it's
something small and just used to bootstrap the more general system.

How about these questions:
When is a centralized root list necessary and when can it be avoided?
How can the quality of root CAs be improved?
How can the number of root CAs be reduced in general?
How can the number of root CAs be reduced in specific situations?

and most importantly:
How can we give the people who need it the skills and information needed
to assess the security of their connection?


This is the geek's realisation that they cannot control their list of
 trusted CAs.


It's more prosaic than you make it sound.

When engineers sit down to review the security of a real-world product,
often with sharp people from the customer's side present, occasionally
someone thinks to ask the question: "OK, so supposing there are no
killer defects in the implementation, and all the crypto works as
expected, who has keys to the damn thing?"

If the product's implementation relies on SSL/TLS (e.g., has a
management port with a web interface), then be prepared to have this
conversation.

To me this is a validation of the cypherpunks' foresight in taking the
attack model at face value. What was once considered spy-fantasy
paranoia by many is, in reality, a textbook engineering calculation
after all.


Their judgement is undermined, as MS Windows' root list has gone the
next step to dynamic control, which means that the users' ability to
verify the root is undermined a bit more by not having an ability to
stop the future dynamic enhancements.


You can go to Add/remove Windows Components (or whatever they call it
these days) and remove the Automatic Certificate Update feature. But
if you do this you need to be prepared to troubleshoot some pretty
mysterious breakages many months later after you've forgotten about it.


In practice, if we assume a centralised root list, this is probably
the better result.


Maybe sometimes. But when?

This is very hard to quantify because it's all theoretical until the
instant that the client software tries to make a connection to a
specific server and receives a specific certificate from the next-hop
router. Does the client software accept the connection or fail it and
tell the user that they're possibly being attacked?

From a UI designer's perspective, this is as close to a "launch the
nuclear missiles" moment as they're ever likely to encounter, because
showing the scary page to a browser user instead of the page they
requested probably seems pretty much like the end of the world to these
people.

Here's an example of some thinking by UI design types. It's obviously
biased, but it confirms my own biased experience :-) so I'll link it:
http://www.reddit.com/r/pics/comments/hvuhg/apple_why/c1yuah6


It works quite simply: 1 billion users don't check the root list, at
 all. They rely entirely on the ueber-CA to generate a good root
list.


Isn't this basically the system we have now with the browser vendor
acting as the ueber-CA?


A tiny fraction of that number (under 1 million, or 0.1%) know about
 something called a root list, something perversely called trust
bits, and the ability to fiddle those bits. They do that, and imagine
that they have achieved some higher level of security. But, this
technique has difficulty establishing itself as anything more than a
placebo.

Any model that offers a security feature to a trivially tiny
minority, to the expense of the dominant majority, is daft.


Heh. Unless the dominant

Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-26 Thread Marsh Ray

On 06/26/2011 01:13 PM, The Fungi wrote:

On Sun, Jun 26, 2011 at 12:26:40PM -0500, Marsh Ray wrote: [...]

Now maybe it's different for ISP core router admins, but the
existence of this product strongly implies that at least some
admins are connecting to their router with their web browser over
HTTPS and typing in the same password that they use via SSH.

[...]

Valid point, but flawed example. Managing these things day in and day
out, I can tell you this is the first thing any experienced admin
disables when initially configuring the device.


But what about all the other admins? :-)

You're probably right today: the guys running the core routers are some
of the best. This web management thing seems to be targeted at
small/medium non-ISP businesses.

But what about after a few more rounds of IT people graduate from
courses and certification programs which now divert time from the old
command-line stuff to teach the new web management functionality?

What if functionality gets released for which there is no command-line
interface?

What about all the other datacenter gear plugging into trusted segments?

What about the other makes of routers? Well, Juniper, that is.
Hmmm...

http://www.juniper.net/us/en/products-services/software/network-management-software/j-web/



http://www.redelijkheid.com/blog/2011/3/11/configure-ssl-certificate-for-juniper-j-web-interface.html
By default, the J-Web interface (GUI for the Juniper SRX firewalls)
has SSL enabled. Like most devices with SSL out-of-the-box, the
protection is based on a self-signed certificate. Self-signed
certificates are easy (they come basically out-of-the-box), but they
tend to nag you every time you connect to the GUI. So, it's time to
install a proper certificate.


OK, good, so this guy is going to make a cert for his router! He even 
shows you how to use the subject alternative name to make it so you can 
connect to it via the raw IP address 192.168.1.254!


Anyone else see any problems with that? :-)


http://www.instantssl.com/ssl-certificate-products/ssl/ssl-certificate-intranetssl.html
Intranet SSL Certificates allow you to secure internal servers with SSL issued 
to
either a Full Server Name or a Private IP Address. [...]
Trusted by all popular browsers.


Comodo to the rescue! I wonder how many people they'll be willing to 
sell the same IP address to.


On 06/26/2011 01:13 PM, The Fungi wrote:

If your admin is managing your routers with a Web interface, SSL MitM
is the *least* of your worries, honestly.


:-)

It's only the least of your worries until somebody gets around to
exploiting it, at which point it may be the greatest of your worries.

A lot of systems are set up with RADIUS/TACACS centralized
authentication. In these cases there are many admins with access to many
routers and other pieces of equipment. The bad guy only needs to
convince the high-level admin to use his password once on the
least-important piece of equipment.

A self-propagating router MitM would make for a very interesting and
scary worm. Hopefully such a thing would first start out on some small
home routers and give time to raise awareness for those with login
credentials on the big ones.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-26 Thread Marsh Ray

On 06/26/2011 05:58 PM, Ian G wrote:


On 26/06/11 5:50 AM, Ralph Holz wrote:

- you don't want to hurt the CAs too badly if you are a vendor


Vendors spend all day long talking internally and with other vendors.
Consequently, they tend to forget who holds the real money.

For most healthy vendors in a market economy, that's the customers. 
Browsers, however, seem to live on a planet without the usual market forces.


In the case of Mozilla, 97% of their revenue comes from royalties

http://www.mozilla.org/foundation/documents/mf-2009-audited-financial-statement.pdf

of which 86% is one contract. It's a safe bet that's probably Google.
That contract is said to expire in November, and Google now makes a 
competing browser.


Google seems to care more about actual security than Mozilla. Last I 
checked Mozilla didn't even bother to sign all the addons for their own 
package system, whereas we see Google doing things like pinning their 
own certs in the Chrome codebase.


Maybe that's because Google actually runs services that people use (e.g. 
Gmail).



- it still means researchers won't get the numbers they need. And
the circle closes - no numbers, no facts, no improvements, other
than those subjectively perceived.


OK. So we need to show why researchers can benefit us with those
numbers :)


Because having a system that's credibly secure will increase
adoption among organizations with money.

You can't credibly claim to defend against earthquakes while keeping 
seismic resiliency data secret.



(IMHO, the point is nothing to do with researchers. It's all to do
with reputation. It's the only tool we have. So disclosure as a blunt
weapon might work.)


Nothing undermines credibility and trust like public denials and secrecy.

CAs seem to think they can act like nuclear power plant operators or 
something. But NPPs at least produce electric power! On the other hand, 
every additional trusted root beyond the necessary minimum represents 
pure risk.


The general public and those who defend networks understand the need to 
take active network attacks seriously far more than they did just a year 
or two ago.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Oddity in common bcrypt implementation

2011-06-28 Thread Marsh Ray

On 06/27/2011 06:30 PM, Sampo Syreeni wrote:

On 2011-06-20, Marsh Ray wrote:


I once looked up the Unicode algorithm for some basic case
insensitive string comparison... 40 pages!


Isn't that precisely why e.g. Peter Gutmann once wrote against the
canonicalization (in the Unicode context, normalization) that ISO
derived crypto protocols do, in favour of the "bytes are bytes" approach
that PGP/GPG takes?


Yes, but in most actual systems the strings are going to get handled. 
It's more a question of whether or not your protocol specification 
defines the format it's expecting.


Humans tend not to define text very precisely, and computers don't work 
with it directly anyway; they only work with encoded representations of 
text as character data. Even a simple accented character in a word or 
name can be represented in several different ways.


Many devs (particularly Unixers :-) in the US, AU, and NZ have gotten 
away with the 7-bit ASCII assumption for a long time, but most of the 
rest of the world has to deal with locales, code pages, and multi-byte 
encodings. This seemed to allow older IETF protocol specs to often get 
away without a rigorous treatment of character data encoding issues. 
(I suspect one factor in the lead of the English-speaking world in the 
development of 20th century computers and protocols is that we could 
get by with one of the smallest character sets.)


Let's say you're writing a piece of code like:

if (strcmp(username, "root") == 0)
{
    // avoid doing something insecure with root privs
}

The logic of this example is probably broken in important ways but the 
point remains: sometimes we need to compare usernames for equality in 
contexts that have security implications. You can only claim "bytes are 
bytes" up until the point that the customer says they have a directory 
server which compares usernames case insensitively.


For most things verbatim binary is the right choice. However, a 
password or pass phrase is specifically character data which is the 
result of a user input method.



If you want to do crypto, just do crypto on the bits/bytes. If you
really have to, you can tag the intended format for forensic purposes
and sign your intent. But don't meddle with your given bits.
Canonicalization/normalization is simply too hard to do right or even to
analyse to have much place in protocol design.


Consider RADIUS.

The first RFC http://tools.ietf.org/html/rfc2058#section-5.2
says nothing about the encoding of the character data of the password 
field, it just treats it as a series of octets. So what do you do when 
implementing RADIUS on an OS that gives user input to your application 
with UTF-16LE encoding? If you don't meddle with your given bits and 
just pass them on to the protocol layer, they are almost guaranteed to 
be non-interoperable.


Later RFCs http://tools.ietf.org/html/rfc2865
have added in most places "It is recommended that the message contain 
UTF-8 encoded 10646 characters." I think this is a really practical 
middle ground. Interestingly, it doesn't say this for the password 
field, likely because the authors figured it would break some existing 
underspecified behavior.


So exactly which characters are allowed in passwords and how are they to 
be represented for interoperable RADIUS implementations? I have no idea, 
and I help maintain one!


Consequently, we can hardly blame users for not using special characters 
in their passwords.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Oddity in common bcrypt implementation

2011-06-28 Thread Marsh Ray

On 06/28/2011 10:36 AM, Ian G wrote:

On 28/06/11 11:25 AM, Nico Williams wrote:


The most immediate problem for many users w.r.t. non-ASCII in
passwords is not the likelihood of interop problems but the
heterogeneity of input methods and input method selection in login
screens, password input fields in apps and browsers, and so on, as
well as the fact that they can't see the password they are typing to
confirm that the input method is working correctly.


This particular security idea came from terminal laboratories in the
1970s and 1980s where annoying folk would look over your shoulder to
read your password as you typed it.


Hardcopy terminals were common even into the 80s. Obviously you don't 
want the password lying around on printouts.


Even worse, some terminals couldn't disable the local echo as characters 
were typed. The best the host could do for password entry was to 
backspace and overprint a bunch of characters on the printout beforehand 
to obscure it.



The assumption of people looking over your shoulder is well past its
use-by date.


+1

Perhaps someday our systems will be secure enough that shoulder-surfing 
is a problem worth worrying about again.



Oddly enough
mobiles are ahead of other systems here in that they show the user the
*last/current* character of any passwords they are entering.


Don't forget, the person in the room with you may have a 5 megapixel 
video camera in their shirt pocket with a view of your keyboard.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Oddity in common bcrypt implementation

2011-06-28 Thread Marsh Ray

On 06/28/2011 12:01 PM, Paul Hoffman wrote:

And this discussion of ASCII and internationalization has what to do
with cryptography, asks the person on the list is who is probably
most capable of arguing about it but won't? [1]


It's highly relevant to the implementation of cryptographic systems, as 
Nico mentioned, because interoperability depends on it and the nature of 
cryptographic authentication systems tends to obscure the problems.


Sometimes security vulnerabilities result. The old LM (LanMan) password 
hashing scheme uppercased everything for no good reason. Perhaps they 
did it out of the desire to avoid issues with accented lower-case 
characters.


Look at these test vectors for PBKDF2:
http://tools.ietf.org/html/rfc6070

None of them have the high bit set on any password character! Seems like 
there was a recent bcrypt implementation issue that escaped notice for a 
long time due to test vectors having this same property, and some 
cryptographically weak credentials were issued as a result.


1 of 8 bits of the key material is strongly biased towards 0. This loss 
of entropy is especially significant when the entirety of the input is 
limited to 8 or so chars, as is common.


Wow, this sounds a lot like the way 64-bit DES was weakened to 56 bits.
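
The arithmetic behind the comparison: an 8-character all-ASCII password 
spans at most 56 unknown bits, just as a 64-bit DES key has 56 once the 
8 parity bits are fixed:

    #include <stdio.h>

    int main(void)
    {
        int chars = 8, bits = 8, biased = 1;   /* high bit ~always 0 in ASCII */
        printf("%d of %d key bits effective\n",
               chars * (bits - biased), chars * bits);  /* 56 of 64 */
        return 0;
    }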

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Oddity in common bcrypt implementation

2011-06-28 Thread Marsh Ray

On 06/28/2011 12:48 PM, Steven Bellovin wrote:

Wow, this sounds a lot like the way 64-bit DES was weakened to 56 bits.


It wasn't weakened -- parity bits were rather important circa 1974.
(One should always think about the technology of the time.)


It's a very reasonable-sounding explanation, particularly at the time. 
http://en.wikipedia.org/wiki/Robbed_bit_signaling is even still used for 
things like T-1 lines.


But somehow the system managed to handle 64-bit plaintexts and 64-bit 
ciphertexts. Why would they need to shorten the key? Of the three 
different data types it would be the thing that was LEAST often sent 
across serial communications lines needing parity.


If error correction was needed on the key for some kind of cryptographic 
security reasons, then 8 bits would hardly seem to be enough.


What am I missing here?


The
initial and final permutations were rightly denounced as cryptographically
irrelevant (though it isn't clear that that would be true in a secret
design; the British had a lot of trouble until they figured out the
static keyboard map of the Enigma), but they weren't there for
cryptographic reasons; rather, they were an artifact of a
serial/parallel conversion.


Interesting.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Oddity in common bcrypt implementation

2011-06-28 Thread Marsh Ray

On 06/28/2011 02:09 PM, Sampo Syreeni wrote:


But a case-insensitive password compare?!? For some reason I don't
think anybody would want to go there, and that almost everybody would
want the system to rather fail safe than to do anything but pass
around (type-tagged) bits. I mean, would anybody really like a spell
checker in their ATM?


http://mattjezorek.com/articles/security-researchers-need-to-research

First lets quickly discuss the problem with the AOL Passwords they
are case insensitive, truncated to 8 characters and required to be at
least 6. There is more detailed information on The Washington Post by
@briankrebs . This is a problem and makes weak passwords no mater how
complex one wants to make it (or thinks that it is). AOL is acting as
an identity service to iTunes, HuffPo, Meebo, anyone who uses the AOL
OpenAuth or OpenID services and any instant messaging client. This
both compounds the issue and spreads the risk around to everyone.



On 06/28/2011 02:09 PM, Sampo Syreeni wrote:

Any system that dedicates forty pages worth of text to string
comparison doesn't have those attributes. It doesn't promote security
proper, but rather bloated software, difficulties with
interoperability, unsecure workarounds and even plain security
through obscurity.

As a case in point, the Unicode normalization tables have changed
numerous times in the past, and they aren't even the whole story.
True, after some pressure from crypto folks they finally fixed the
normalization target at something like v3.2 or whathaveyou. But then
 that too will in time lead to a whole bulk of special cases and
other nastiness, which then promotes versioning difficulties, code
that is too lengthy to debug properly, and diversion of resources
from sound security engineering towards what I'm tempted to call
politically correct software engineering.


I agree. The thing is borderline unusable unless you can leverage the 
resources of an Apple, Adobe, IBM, or Microsoft on just the text handling.



I mean, you've certainly already seen what happened in the IETF IDN WG
wrt DNS phishing... If I ever saw a kluge, attempts at homograph
elimination (a form of normalization) is that.


Speaking of DNS and crypto protocols, have you seen ICANN's plan to 
register custom gTLDs?


That's right - public internet DNS names without a dot in them. Talk 
about violating your fundamental assumptions.


How is X.509 PKI going to interact with this?


Passwords aren't text in the normal sense. Precisely because they
should be the only thing human keyed crypto should depend on for
security. As for the rest of the text... Tag it and bag it as-is. At
least the original intent can then be uncovered forensically, if
need be. Unlike if you go around twiddling your bits on the way.


Yes, I used to develop print spooling software and always regretted it 
when we deviated from this strategy.



You can only claim bytes are bytes up until the point that the
customer says they have a directory server which compares usernames
case insensitively.


If there's a security implication, you should then probably fail safe
and wait for the software vendor to fix the possible
interoperability bug.


Ideally. In practice, sometimes the string-matching code doesn't know 
which direction of failure is the 'safer' one.


You can't simply file a bug report and have a Microsoft, Novell, or Sun 
change (or maybe even document) the fundamental behavior of their 
directory server.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] OFF LIST Re: Oddity in common bcrypt implementation

2011-06-29 Thread Marsh Ray

On 06/29/2011 04:01 AM, Ian G wrote:


Or, talking about non-crypto security techniques like passwords is
punishment for mucking up the general deployment of better crypto
techniques.


Nice. :-)

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] OFF LIST Re: Oddity in common bcrypt implementation

2011-06-29 Thread Marsh Ray


Well I guess that wasn't off list after all.

It's still nice tho. :-)


On 06/29/2011 09:40 AM, Marsh Ray wrote:

On 06/29/2011 04:01 AM, Ian G wrote:


Or, talking about non-crypto security techniques like passwords is
punishment for mucking up the general deployment of better crypto
techniques.


Nice. :-)

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Oddity in common bcrypt implementation

2011-06-29 Thread Marsh Ray

On 06/29/2011 06:49 AM, Peter Gutmann wrote:


So far I've had exactly zero complaints about i18n or c18n-based password
issues.

[Pause]

Yup, just counted them again, definitely zero.  Turns out that most of the
time when people are entering their passwords to, for example, unlock a
private key, they don't have it spread across multiple totally dissimilar
systems.


Well I work on an implementation of the RADIUS thing as previously 
described. It's got a ton of users, some even in Asian countries, using 
it to interoperate with other vendors' products.


I don't recall many users having password issues with character sets 
either. But I also know I could probably sit down and construct a broken 
case rather quickly.


Nevertheless, if someone does report an unexplained issue we might ask 
if there are any weird, special characters in their password. (Actually, 
it's more complex than that. We reiterate that we would never ask them 
for their password but hint that special characters might be a source of 
problems.)


So this suggests probably some combination of:

1. We picked the right encoding transformation logic. We receive the 
credentials via RADIUS and usually validate them against the Windows API 
which accepts UTF-16LE. IIRC we interpret the RADIUS credentials as what 
Windows calls "ANSI" for this. (A sketch of that conversion follows 
after this list.)


2. Admins who configure these systems in other markets have learned how 
to adjust their various systems for their local encodings in ways that 
never required our support. Perhaps from past experience they are 
reluctant to ask us simple ASCII Americans for help troubleshooting this 
type of issue.


3. Users everywhere choose very simple ASCII passwords and are reluctant 
to report issues with special characters all the way up to us vendors.
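
The sketch promised under point 1 (Windows-only; buffer sizes and error 
handling simplified, and LogonUserW merely stands in for whatever 
validation call is actually used):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Bytes as they arrived in the RADIUS packet; 0xE4 would be
         * a-umlaut if the host's "ANSI" code page is CP-1252. */
        const char radius_pw[] = "p\xE4ssword";
        WCHAR wide[64];

        int n = MultiByteToWideChar(CP_ACP, MB_ERR_INVALID_CHARS,
                                    radius_pw, -1, wide, 64);
        if (n == 0) {
            /* Probably the sender used some other encoding -- the
             * interop problem in a nutshell. */
            fprintf(stderr, "conversion failed: %lu\n", GetLastError());
            return 1;
        }
        /* hand `wide` to the UTF-16LE API, e.g. LogonUserW() */
        return 0;
    }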


Right now we're giving Solar Designer several bits of entropy for free. 
If we could solve the 'high bit' problem, it could be a significant 
increase in effective security for a lot of people.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Oddity in common bcrypt implementation

2011-06-29 Thread Marsh Ray

On 06/29/2011 05:41 PM, Jeffrey Walton wrote:

 From my interop-ing experience with Windows, Linux, and Apple (plus
their mobile devices), I found the best choice for password
interoperability was UTF8, not UTF16.


I use UTF-8 whenever possible, too.

Just to be clear here, the native OS Win32 API that must be used in some 
configurations accepts UTF-16LE passwords for authentication. That's not 
my choice.


Neither is it my choice what encoding the remote endpoint happens to be 
using. It doesn't even tell me.


My code simply has to convert between them in the least-broken manner 
possible.


The realities of crypto authentication protocol implementation mean I 
can't log the decrypted password for debugging or ask the user about it 
either. I actually added a heuristic that counts the number of typical 
characters and logs a message to the effect of "hmm, looks like this 
thing may not have decoded properly, maybe the shared secret isn't 
correct." That little diagnostic has proven quite helpful at times.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Is there a cryptanalyst in the house?

2011-06-29 Thread Marsh Ray


There's a new and improved botnet around that's got the tech press all 
a-flutter.


http://www.securelist.com/en/analysis/204792180/TDL4_Top_Bot :

The ‘indestructible’ botnet: Encrypted network connections

One of the key changes in TDL-4 compared to previous versions is an
updated algorithm encrypting the protocol used for communication
between infected computers and botnet command and control servers.
The cybercriminals replaced RC4 with their own encryption algorithm
using XOR swaps and operations.


I think we can predict how this will end...maybe?

It's a curious phrase, "using XOR swaps and operations", like something 
has been left out. Was it "XOR, swaps, and AND operations" fixed by an 
overzealous word processor? It could mean swaps implemented with XOR 
and other XOR operations (a big difference). Or it could be something 
redacted (like parts of some images in the article).
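
For reference, the classic XOR swap trick the report may be alluding to, 
which exchanges two words without a temporary. Whatever the details, a 
cipher built only from XORs and swaps is linear, and linear ciphers fall 
to elementary algebra once a few known plaintext/ciphertext pairs are in 
hand:

    #include <stdio.h>

    int main(void)
    {
        unsigned a = 0xDEADBEEF, b = 0xCAFEF00D;
        a ^= b;     /* a = a ^ b           */
        b ^= a;     /* b = b ^ (a^b) = a   */
        a ^= b;     /* a = (a^b) ^ a = b   */
        printf("a=%08x b=%08x\n", a, b);    /* values exchanged */
        return 0;
    }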


Perhaps it's a more established algorithm that these researchers didn't 
recognize.


In any case, if anyone is looking for an analysis project you might see 
what you could do with it. A successful break of this algorithm could 
earn you a hearty 'thank you' from 4.5 million infected PC owners. 
Perhaps we could collaborate on the list.


I don't have a code sample right now but I could ask around. Shouldn't 
be too hard to find with that many copies around.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Bitcoin observation

2011-07-05 Thread Marsh Ray

On 07/05/2011 08:07 PM, Taral wrote:

On Tue, Jul 5, 2011 at 3:53 AM, Adam Backa...@cypherspace.org  wrote:

I dont think you can prove you have destroyed a bitcoin, neither your own
bitcoin, nor someone else's.  To destroy it you would have to prove you
deleted the coin private key, and you could always have an offline backup.


Actually, it's pretty easy to create a transaction whose outputs are
obviously unsatisfiable.


So this suggests the attacker who pwned Mt. Gox was probably doing it 
"for the lulz", as they say. (Or maybe they didn't know about this 
property.)


Next time they might just take all the bitcoins held in escrow by the 
exchange and transfer them to /dev/null.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Bitcoin observation

2011-07-07 Thread Marsh Ray

On 07/07/2011 04:10 PM, Nico Williams wrote:


In some (most?) public key cryptosystems it's possible to prove that a
valid public key has a corresponding private key (that is, there
exists a valid private key for which the given public key *is* the
public key).  That's used for public key validation.  It's not
possible, however, to prove that the private key still exists.


But is it possible to sneak in invalid keys? What if, say, in an RSA 
system you were to later reveal that modulus n was the product of more 
than two primes? (I forget the name of this attack.)


What if you did this after a long dependency chain of cleared 
transactions had built up on the security of this key?


Not saying that Bitcoin specifically is vulnerable here, just that there 
are usually several ways to poison the well on these interdependent systems.


Often the crypto is meant to defend against attackers with the expected 
motivations (e.g. double-spending the coins). The recent rise in 
sophisticated "for the lulz"-motivated attacks is likely to catch some 
systems off-guard.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] preventing protocol failings

2011-07-12 Thread Marsh Ray

On 07/12/2011 04:24 PM, Zooko O'Whielacronx wrote:

On Tue, Jul 12, 2011 at 11:10 AM, Hill, Bradbh...@paypal-inc.com
wrote:


I have found that when H3 meets deployment and use, the reality
too often becomes: Something's gotta give.  We haven't yet found
a way to hide enough of the complexity of security to make it
free, and this inevitably causes conflicts with goals like
adoption.


This is an excellent objection. I think this shows that most crypto
systems have bad usability in their key management (SSL, PGP). People
don't use such systems if they can help it, and when they do they
often use them wrong.


But the entire purpose of securing a system is to deny access to the
protected resource. In the case of systems susceptible to potential
phishing attacks, we even require that the user themselves be the one to
decline access to the system!

Everyone here knows about the inherent security-functionality tradeoff.
I think it's such a law of nature that any control must present at least
some cost to the legitimate user in order to provide any effective
security. However, we can sometimes greatly optimize this tradeoff and
provide the best tools for admins to manage the system's point on it.

Hoping to find security for free somewhere is akin to looking for free
energy. The search may be greatly educational or produce very useful
related discoveries, but at the end of the day the laws of
thermodynamics are likely to remain satisfied.

Those looking for no-cost or extremely low-cost security either don't
place a high value on the protected resource or believe, given the
options they have imagined, that they may profit more by the system
being in the less secure state. Sometimes they haven't factored all the 
options into their cost-benefit analysis. Sometimes it never occurs to 
them that the cost of a security failure can be much, much greater than 
the nominal value of the thing being protected (ask Sony).


It was once said that nuclear physics would provide electric power that
was too cheap to meter, i.e., they might not even bother sending you a
utility bill. Obviously that didn't happen. If your device's power
requirements don't justify power from the nuke plant the better question
might be how to make the battery-based options as painless as possible.
Toys used to always come "batteries not included". Now toys often
include a battery, but the batteries don't seem to have gotten much
better. Toy companies probably found that a potential customer being
able to press the button in the store display was worth the cost of a
bulk-rate battery.

So even if you're a web site just selling advertising and your users'
personal information, security is a feature that attracts and retains
users, specifically those who value their _own_ stuff. (Hint hint: this
is the kind with money to spend with your advertisers.) Smart people
value their own time most of all and would find it a major pain to have
to put everything back in order after some kind of compromise. Google
knows exactly what they're doing when they do serious security audits
and deploy multiple factors of authentication even for their free Gmail
users. This difference in mindset is why Hotmail and Yahoo! are now
also-rans.

I hope there was a coherent point in all of that somewhere :-) I know
I'm preaching to the choir but Brad seemed to be asking for arguments of
this sort.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] preventing protocol failings

2011-07-13 Thread Marsh Ray

On 07/13/2011 01:01 AM, Ian G wrote:

On 13/07/11 9:25 AM, Marsh Ray wrote:


But the entire purpose of securing a system is to deny access to
the protected resource.


And that's why it doesn't work; we end up denying access to the
protected resource.


Denying to the attacker - good.

Denying to the legitimate user - unfortunately unavoidable some of the
time. The main purpose of authentication is to decide if the party is,
in fact, the legitimate user. So that process can't presume the outcome
in the interest of user experience.

I mis-type my password a significant percentage of the time. Of course I
know it's me but it would be absurd for the system to still log me in.
Me being denied access is a "bad user experience"(TM) (especially compared
to a system with no login authentication at all) but it's also necessary
for security.

However, a scheme which allowed me to log in with N correct password
characters out of M could still be quite strong (with good choices for N
and M) but it would allow for tuning out the bad user experiences to the
degree allowed by the situation-specific security requirements.
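
A minimal sketch of what such a rule might look like, and how to price 
the tradeoff -- my own illustration, with the caveat that the verifier 
needs something richer than a single hash of the whole password in 
order to evaluate closeness:

    from math import comb, log2   # comb requires Python 3.8+

    def close_enough(attempt, password, n):
        # Accept when at least n of the m aligned characters match.
        return (len(attempt) == len(password) and
                sum(a == b for a, b in zip(attempt, password)) >= n)

    def slack_bits(m, n, alphabet):
        # log2 of how many strings one password now accepts: the
        # security given up in exchange for the usability.
        accepted = sum(comb(m, k) * (alphabet - 1) ** k
                       for k in range(m - n + 1))
        return log2(accepted)

    # e.g. a 12-char password over 62 symbols, requiring 10 correct,
    # gives up about 18 bits of an ~71-bit keyspace:
    print(slack_bits(12, 10, 62))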


Security is just another function of business, it's not special.


I disagree, I think it depends entirely on the business. Quite often
there are multiple parties involved with very divergent interests.


The purpose of security is to improve the profitability of the
resource.


Often the purpose is to reduce existential risks.

I think it's such a law of nature that any control must present at
least some cost to the legitimate user in order to provide any
effective security. However, we can sometimes greatly optimize this
tradeoff and provide the best tools for admins to manage the
system's point on it.


Not at all. I view this as hubris from those struggling to make
security work from a technical pov, from within the box. Once you
start to learn the business and the human interactions, you are
looking outside your techie box. From the business, you discover
many interesting things that allow you to transfer the info needed to
make the security look free.


Well, you're right, except that it's not so much hubris as it is being
aware of one's limitations. The more general-purpose the protocol or
library is that you're working on, the less you can know about the
scenarios in which it will eventually be deployed.

You can't even take for granted that there even is a business or
primarily financial interest on either endpoint. The endpoints needing
to securely communicate may be a citizen and their government, an
activist and a human rights organization, a soldier and his weapons
system, or a patient and their embedded drug pump.


A couple of examples: Skype works because people transfer their
introductions first over other channels, hey, my handle is bobbob,
and then secondly over the packet network. It works because it uses
the humans to do what they do naturally.


Yeah, it's a big win when the users can bring their pre-established
relationships to bootstrap the secure authentication. This is the way
the Main St. district worked in small towns - you knew the hardware
store guy, you knew the barber, etc. Even if not, an unfamiliar business
wouldn't be around long without the blessing of the mayor and town cop.

But this is the exact opposite model that Netscape (and friends) used
for ecommerce back in the early 90s. They recognized that the key
property necessary to enable the ecommerce explosion was for users to
feel comfortable doing business with merchants with which they had no
prior relationship at all. In order for this to happen there needed to
be a trusted introducer system and the CA system was born. This system
sucks eggs for many things for which it is used, but it is an undeniable
success at its core business goal: the lock icon has convinced users
that it's safe enough to enter their credit card info on line.


2nd. When I built a secure payment system, I was able to construct a
complete end-to-end public infrastructure without central points of
trust (like with CAs). And I was able to do it completely. The
reasons is that the start of the conversation was always a. from
person to person, and b. concerning a financial instrument. So the
financial instrument was turned into a contract with embedded crypto
keys. Alice hands Bob the contract, and his software then bootstraps
to fully secured comms.


Ask yourself if just maybe you picked one of the easier problems to
solve? One where the rules and the parties' motivations were all
well-understood in advance?


No, it's much simpler than that: denying someone security because
they don't push the right buttons is still denying them security.


I don't understand. Are you speaking of denying them access to the
protected resource, or are you saying they are denied some nebulous form
of security in general?


The summed benefit of internet security protocols typically goes up
with the number of users, not with the reduction of flaws. The
techie

[cryptography] PuTTY 0.61 (ssh-keys only and EKE for web too (Re: preventing protocol failings))

2011-07-13 Thread Marsh Ray


I normally wouldn't post about any old software release, but with the 
recent discussion of SSH and authentication these release notes from 
PuTTY seem appropriate.


- Marsh

http://lists.tartarus.org/pipermail/putty-announce/2011/16.html

It's been more than four years since 0.60 was released, and we've had
quite a lot of email asking if PuTTY was still under development, and
occasionally asking if we were even still alive. Well, we are, and it
has been! Sorry about the long wait.

New features in 0.61 include:

 - Support for SSH-2 authentication using GSSAPI, on both Windows and
   Unix. Users in a Kerberos realm should now be able to use their
   existing Kerberos single sign-on in their PuTTY SSH connections.
   (While this has been successfully deployed in several realms, some
   small gaps are known to exist in this functionality, and we would
   welcome further testing and advice from Kerberos experts.)



 - On Windows: PuTTY's X11 forwarding can now authenticate with the
   local X server, if you point it at an X authority file where it can
   find the authentication details. So you can now use Windows PuTTY
   with X forwarding and not have to open your X server up to all
   connections from localhost.



 - A small but important feature: you can now manually tell PuTTY the
   name of the host you expect to end up talking to, in cases where
   that differs from where it's physically connecting to (e.g. when
   port forwarding). If you do this, the host key will be looked up
   and cached under the former name.



 - Assorted optimisation and speedup work. SSH key exchange should be
   faster by about a factor of three compared to 0.60; SSH-2
   connections are started up in a way that reduces the number of
   network round trip delays; SSH-2 window management has also been
   revised to reduce round trip delays during any large-volume data
   transfer (including port forwardings as well as SFTP/SCP).



 - Support for OpenSSH's security-tweaked form of SSH compression (so
   PuTTY can now use compression again when talking to modern OpenSSH
   servers).

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] ssh-keys only and EKE for web too (Re: preventing protocol failings)

2011-07-13 Thread Marsh Ray

On 07/13/2011 01:33 PM, Jeffrey Walton wrote:


I believe Mozilla is [in]directly supported by Google. Mozilla has
made so much money, they nearly lost their tax exempt status:
http://tech.slashdot.org/story/08/11/20/1327240/IRS-Looking-at-GoogleMozilla-Relationship.


Mozilla has a lot of cash in the bank and it gets a large majority of 
its revenue from its contract with Google.



I was also talking with a fellow who told me NSS is owned by Red Hat.
While NSS is open source, the validated module is proprietary.  I don't
use NSS (and have no need to interop with the library), so I never
looked into the relationship.


Google, Mozilla, and Red Hat all employ people who maintain NSS.

They're nice folks, just look them up in the source tree and email them 
if you have any questions.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] OTR and deniability

2011-07-15 Thread Marsh Ray

On 07/13/2011 09:37 PM, Ai Weiwei wrote:

Hello list,

Recently, Wired published material on their website which are claimed
to be logs of instant message conversations between Bradley Manning
and Adrian Lamo in that infamous case. [1] I have only casually
skimmed them, but did notice the following two lines:

(12:24:15 PM) bradass87 has not been authenticated yet. You should
authenticate this buddy. (12:24:15 PM) Unverified conversation with
bradass87 started.

I'm sure most of you will be familiar; this is evidence that a
technology known as Off-the-Record Messaging (OTR) [2] was used in
the course of these alleged conversations.

I apologize if this is off topic or seems trivial, but I think a
public discussion of the merits (or lack thereof) of these alleged
logs from a technical perspective would be interesting.


I think so too, if only to understand how the crypto turns out to be 
largely irrelevant once again.


There's very little data available. Is there anything other than what's 
been published by Wired?



The exact
implications of the technology may not be very well known beyond this
list. I have carbon copied this message to the defense in the case
accordingly.

If I understand correctly, OTR provides deniability, which means that
these alleged logs cannot be proven authentic. In fact, the OTR
software is distributed with program code which makes falsifying such
logs trivial. Is this correct?

On a related note, a strange message to Hacker News at about that
time [3] seems to now have found a context. Not to mention talk of
compromised PGP keys: the prosecution witness created a new key
pair June 2, 2010 (after 6 months with no keys for that email address
-- why precisely then?), and replaced these a day less than one month
later -- citing previous key physically compromised. [4]


http://news.ycombinator.com/item?id=1410158
That would be consistent with Lamo hinting to his peeps that his 
computer was taken by investigators. But his advice for others to 
regenerate their own private keys shows that either he himself doesn't 
understand the cryptographic properties of these protocols or he 
believes some other keys have been compromised too.



Note the
arrest in the case occurred in between these two events, with
encrypted emails purportedly having been received in the meantime:
[5]

Lamo told me that Manning first emailed him on May 20 ...

What do you think? First the prosecution witness turns out less than
credible, [6] now the key piece of evidence is mathematically
provably useless...




[1] http://www.wired.com/threatlevel/2011/07/manning-lamo-logs/ [2]
http://www.cypherpunks.ca/otr/ [3]
http://news.ycombinator.com/item?id=1410158 [4]
http://pgp.mit.edu:11371/pks/lookup?search=adrian+lamoop=vindexfingerprint=on



[5] http://www.salon.com/news/opinion/glenn_greenwald/2010/06/18/wikileaks

[6] http://www.google.com/search?q=lamo+drugs
___ cryptography mailing
list cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] OTR and deniability

2011-07-16 Thread Marsh Ray

On 07/15/2011 11:21 PM, Ian Goldberg wrote:


Just to be clear: there are _no_ OTR-related mathematical points or
issues here.  The logs were in plain text.  OTR has nothing at all to do
with their deniability.


It's a good bet the entirety of the informant's PC was acquired for 
computer forensic analysis, as well as every PC Manning is known to have 
touched. There's a good chance some traffic data was retained from the 
network where Manning allegedly did the chatting and data transfer.


Sure the logs we see are in plain text, but that's almost certainly not 
all the data in play. Deniability may yet still depend on OTR and its 
implementation.


Note that the logs indicate the parties were unauthenticated and the 
connection was bouncing. Was this a man-in-the-middle interception? Does 
the protocol and implementation issue a message to the user when an 
unauthenticated identity changes its key?


- Marsh

http://www.wired.com/threatlevel/2011/07/manning-lamo-logs#m765


(01:37:03 AM) bradass87 has signed on.

(01:37:51 AM) bradass87: no no… im at FOB hammer (re: green zone); persona is 
killing the fuck out of me at this point… =L

(01:37:51 AM) i...@adrianlamo.com AUTO-REPLY: I’m not here right now

(01:37:55 AM) Error setting up private conversation: Malformed message received

(01:37:55 AM) We received an unreadable encrypted message from bradass87.

(01:37:58 AM) bradass87: [resent] HTMLno no… im at FOB hammer (re: green 
zone); persona is killing the fuck out of me at this point… =L

(01:38:07 AM) bradass87 has ended his/her private conversation with you; you 
should do the same.

(01:38:18 AM) Error setting up private conversation: Malformed message received

(01:38:20 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:38:30 AM) Error setting up private conversation: Malformed message received

(01:38:33 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:38:43 AM) Error setting up private conversation: Malformed message received

(01:38:46 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:38:57 AM) Error setting up private conversation: Malformed message received

(01:38:59 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:39:10 AM) Error setting up private conversation: Malformed message received

(01:39:13 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:39:22 AM) Error setting up private conversation: Malformed message received

(01:39:25 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:39:36 AM) Error setting up private conversation: Malformed message received

(01:39:39 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:39:49 AM) Error setting up private conversation: Malformed message received

(01:39:52 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:40:02 AM) Error setting up private conversation: Malformed message received

(01:40:04 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:40:15 AM) Error setting up private conversation: Malformed message received

(01:40:18 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:40:30 AM) Error setting up private conversation: Malformed message received

(01:40:31 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:40:41 AM) Error setting up private conversation: Malformed message received

(01:40:45 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:40:54 AM) Error setting up private conversation: Malformed message received

(01:40:57 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:41:08 AM) Error setting up private conversation: Malformed message received

(01:41:10 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:41:21 AM) Error setting up private conversation: Malformed message received

(01:41:23 AM) The encrypted message received from bradass87 is unreadable, as 
you are not currently communicating privately.

(01:41:37 AM) Error setting up private conversation: Malformed message received

(01:41:50 AM) Error setting up private conversation: Malformed message received

(01:41:52 AM) The encrypted message received from bradass87 is unreadable, as 
you are not 

Re: [cryptography] bitcoin scalability to high transaction rates

2011-07-20 Thread Marsh Ray

On 07/20/2011 08:24 AM, Ian G wrote:


Yes, sure, but:

1. we are talking about high frequency trading here, and speed is the
first, second and third rule. Each trade could be making 10k++ and up,
which buys you a lot of leeches.

Basically, you have to get the trade down to the cost of a packet, delay
and two secret key ops. Indeed, if you can measure the delay of the
secret key op, we might be encouraged to pre-calculate shared PRNG
streams so as to speed up the encrypt/decrypt cycle.


I once spoke with some engineers who built and run one of those 
high-speed electronic trading networks/exchanges. Their time to match 
trades was something like 50 microseconds. Their serious members 
colocated their trading systems in their datacenter because it was so 
critical to eliminate the propagation delay.


I guess I don't see the need to do bitcoin crypto transactions at that 
speed any more than the other high-speed exchanges need to rapidly move 
stock certificates, hard cash, or perform ACH/EFTs.



(Gee I wonder if I should file a patent on that idea :P )


Maybe you could be the next Certicom!   ^_^


This and other aspects of high frequency trading forces a credit
exposure to the trades, which requires someone to step in and control
that credit.


But the term "high speed electronic exchange" seems to mean exactly 
this, almost by definition.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] [OT] -gate (Re: An appropriate image from Diginotar)

2011-09-02 Thread Marsh Ray

On 09/02/2011 10:29 AM, Harald Hanche-Olsen wrote:


The -gate suffix is getting tiresome, actually. I tend to agree with this:

   http://www.ajr.org/article.asp?id=5106

   Ever since a certain third-rate burglary in Washington, D.C., many
   years ago, journalists have insisted on sticking the suffix gate
   onto every scandal that erupts. Big or little, significant or silly,
   real or faux – doesn't matter. It gets gated.

   This has been an annoying practice for years. It's knee-jerk. It's
   easy. It's boring. [...]


I think it's kind of cute, but yeah, overused.

You probably have to be an American over a certain age to understand the 
-gate thing, but it's not really about the burglary. I was a little 
kid, but I remember Watergate on TV. It really was quite traumatic. My 
perception is that, coming at the end of the Vietnam war, it was the 
final nail in the coffin for the larger-than-life confidence Americans 
had in their officials and political institutions post-1930s. It remains 
the only time a President has resigned from office.


Come to think of it, the -gate syntax is probably a lot more appropriate 
for the CA break-in than most of the other times it's used.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] *.google.com certificate issued by DigiNotar

2011-09-02 Thread Marsh Ray

On 09/02/2011 12:55 PM, coderman wrote:


the next escalation will be sploiting private keys out of hardware
security modules presumed impervious to such attacks.

given the quality of HSM firmwares they're lucky cost is somewhat a
prohibiting factor for attackers.

authority in the wild, not just certs. :P


Why would they need to?

What's the difference between a private key in the wild and a pwned CA 
that, even months after a break-in and audit, doesn't revoke or even know 
what it signed?


(This is a serious question)

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] OT: Dutch Government: Websites' Safety Not Guaranteed

2011-09-03 Thread Marsh Ray

On 09/03/2011 06:13 PM, Jeffrey Walton wrote:

http://abcnews.go.com/Technology/wireStory?id=14441405

The Dutch government said Saturday it cannot guarantee the security
of its own websites, days after the private company it uses to
authenticate them admitted it was hacked. An official also said the
government was taking over the company's operations.

The announcement affects millions of people who use the Netherlands'
government's online services and rely on the authenticator,
DigiNotar, to confirm they are visiting the correct sites. To date,
however there have been no reports of anyone's identity being stolen
 or security otherwise breached.


Sadly, this is completely wrong and misses the point entirely.

NO ONE can guarantee the security of ANY websites and gov.nl is no more
affected in this respect than anyone else under the current system.

However, on the website authentication system we'll get the next time we
update our client software, gov.nl and the other ~500 websites with
certs from DigiNotar will have to update a file or two on their servers.
I also hear of some government PKI system that will probably need to be
rekeyed from scratch.

Honestly, I don't feel too bad for them for their nepotistic
relationship with the hometown CA. Of all the CAs in the
world to get pwned to teach us a lesson, the server admins (collectively)
could have gotten it a lot worse than this one.

My concern is for the users who are actively getting MitM'd with this
thing. This isn't just about the convenience and economic importance of
the Dutch paying their taxes online Monday. There are folks in the world
relying on this technology to (literally) keep their ass out of the
torture chamber.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Diginotar broken arrow as a tour-de-force of PKI fail

2011-09-05 Thread Marsh Ray


Preliminary report on-line:


http://www.rijksoverheid.nl/documenten-en-publicaties/rapporten/2011/09/05/fox-it-operation-black-tulip.html




- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Diginotar Lessons Learned (long)

2011-09-07 Thread Marsh Ray

On 09/07/2011 10:00 AM, Peter Gutmann wrote:

Ian Gi...@iang.org  writes:


Hence, the well-known race-to-the-bottom, which is a big factor in DigiNotar.


Actually I'm not sure that DigiNotar was the bottom, since they seem to have
been somewhat careful about the certs they issued.  The bottom is the cert
vending machines that will issue a cert to absolutely anyone, verified only by
Ben Franklin.  There are still plenty of those left.


Wasn't Extended Validation with its special green URL widget supposed 
to be exactly this user-observable difference that would allow the 
better CAs to differentiate themselves in the market?


DigiNotar was EV.

Do we need then a whole spectrum of Super Validation, Hyper 
Validation, and Ludicrous Validation to address the ridiculous 
deficiencies found in these current pwned EV CAs?


I think I know the answer to that. It won't help to add another 9 or two 
to the reliability statistic of some CAs because the system itself is 
structurally unsound.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] GlobalSign temporarily ceases issuance of all certificates

2011-09-07 Thread Marsh Ray

On 09/07/2011 02:34 PM, Fredrik Henbjork wrote:

http://www.globalsign.com/company/press/090611-security-response.html
 This whole mess just gets better and better...


What's interesting is how the attacker simply doesn't fit the expected
motivations that SSL cert-based PKI was ever sold as defending against.

The attacker says a lot of things, but I find this interesting:

http://pastebin.com/GkKUhu35

P.S. In wikipedia of SSL, it should be added for future that I caused
to remove SSL or CA system security model, I have a special idea for
private communication via browsers which could be used instead


He wants credit for saving the world from PKI!

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] After the dust settles -- what happens next? (v. Long)

2011-09-11 Thread Marsh Ray

On 09/11/2011 07:26 PM, Paul Hoffman wrote:

Some of us observe a third, more likely
approach: nothing significant happens due to this event. The
collapse of faith is only among the security folks whose faith was
never there in the first place. A week after the event, who was
talking about it other than folks on these lists and lists like
them?


The 300,000+ Iranians who were actively attacked and now have to change 
their passwords and are wondering if they'd said anything in Gmail to get 
them arrested and interrogated.


The unknown numbers of Chinese (and people in other countries) who were 
hoping a US product like Gmail could provide a censorship-free email 
service.


The Dutch IT people who have to replace the ~58,000 certs issued by 
DigiNotar PKIoverheid CA.

http://www.techworld.com.au/article/400068/dutch_government_struggles_deal_diginotar_hack/


The management at Google who are likely scared as hell that the 
webmasters and security auditors of the 50% of major sites that source 
Javascript from https://google-analytics.com/ will realize that they 
would have been pwned too (and possibly been obligated to report it) had 
the attacker issued a cert for that. Who else thinks he probably will 
next time?


The people responsible for security at Amazon, PayPal, every other big 
retailer and the financial services companies that handle high-value 
accounts.


The governments and government contractors who depend on SSL VPNs with 
an in-band second factor of auth (like hardware token codes) to secure 
their remote access.


The attacker himself: https://twitter.com/#!/ichsunx2

The people who've generated the 367,772 views (so far) of Comodohacker's 
Pastebin texts:

http://pastebin.com/u/ComodoHacker

Slashdot and their bazillion subscribers are still talking about it as 
of yesterday:

http://it.slashdot.org/story/11/09/10/2129239/GlobalSign-Web-Server-Hacked-But-Not-CA

Who isn't talking about it really?

The full damage is not even out yet. This thing is just getting started.

Despite rumors to the contrary, there are, in fact, a great many 
influential people who do give a shit about the actual effective 
security delivered by SSL/TLS (beyond its ability to add an air of 
confidence to consumers' $50-liability-limit credit card transactions).


This time is not like the previous "SSL is broken again, ho hum" bugs.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] PKI - and the threat model is ...?

2011-09-12 Thread Marsh Ray

On 09/12/2011 01:45 PM, M.R. wrote:

The system is not expected to protect individual
liberty, life or limb, nor is it expected to protect high-value
monetary transactions, intellectual property assets, state secrets
or critical civic infrastructure operations.


It never was, and yet, it is asked to do that routinely today.

This is where threat modeling falls flat.

The more generally useful a communications facility you develop, 
the less knowledge and control the engineer has about the conditions 
under which it will be used.


SSL/TLS is very general and very useful. We can place very little 
restriction on how it is deployed.


It will be used wherever it works and feels secure. More and more 
firewalls seem to be proxying port 80 and passing port 443. So it will 
continue to be used a lot.


Few app layer protocol designers will say "this really wasn't part of 
the SSL/TLS threat model, we should use something else." Most will say 
"this is readily available and is used by critical infrastructure and 
transactions of far greater value than ours."


It needs to be as secure as possible, but I freely admit that I don't 
know what that means.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Let's go back to the beginning on this

2011-09-13 Thread Marsh Ray

On 09/13/2011 01:31 PM, Seth David Schoen wrote:

An example from yesterday was

https://www.senate.gov/

which had a valid cert a while ago and then recently stopped.  (Their
HTTPS support was reported to us as working on June 29; according to
Perspectives, the most recent change apparently happened on September 9.)


They got hacked by LulzSec back in June, their web software was ancient 
like a time capsule. IIRC, there were a lot of subject-alt names on that 
shared-IP certificate. No doubt the private key was compromised.


It probably took this long to reissue and re-deploy all the sites.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Let's go back to the beginning on this

2011-09-14 Thread Marsh Ray

On 09/14/2011 09:34 PM, Arshad Noor wrote:

On 9/14/2011 2:52 PM, Seth David Schoen wrote:

Arshad Noor writes:


I'm not sure I understand why it would be helpful to know all (or any)
intermediate CA ahead of time. If you trust the self-signed Root CA,
then, by definition, you've decided to trust everything that CA (and
subordinate CA) issues, with the exception of revoked certificates.


You keep using this word, I do not think it means what you think it means.

'Trust' does not mean everything the trusted party does is somehow put 
beyond all questioning by definition.



Technically - and legally (if the Certificate Policy and contracts
were written up properly) - when a self-signed Root CA issues a
Subordinate CA cert, they are delegating the issuance of certificates
to the Subordinate CA operator, to be issued ONLY in accordance
with a CP that both parties have agreed to. The SubCA cannot,
legally, exceed the bounds of the self-signed Root CA's CP in any
manner that introduces more risk to the Relying Party. These are
legal obligations placed on the operator of the SubCA.


Yes, and this system sucks. It is a complete joke.

It is no doubt a great consolation to the Dutch and Iranians to know 
that there is a contract somewhere being breached among Comodo and their 
resellers and DigiNotar and some software vendors.


Are the RPs even a party to that contract?


Can a SubCA operator violate the legal terms from a technical point
of view? Of course; people break the law all the time in business,
it appears.


A loose web of computer law contracts among hundreds of international 
business and government entities is not a foundation on which to build a 
strong system for data security. Just the fact that they allow this 
unrestricted delegation of authority (in the form of sub-CAs) means that 
they're even crappy contracts to begin with.



However, an RP must assess this risk before trusting a self-signed
Root CA's certificate. If you believe there is uncertainty, then
don't trust the Root CA.


Yes, that's what this conversation has been about. Finding ways to 
reduce this ridiculous hyperinflation of trust going around in general, 
and specific parts of it quickly in emergencies.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Let's go back to the beginning on this

2011-09-15 Thread Marsh Ray

On 09/15/2011 12:15 PM, Ian G wrote:


Trust in a CA might be more like 99%.

Now, if we have a 1% untrustworthy rating for a CA, what happens when
we have 100 CAs?

Well, untrust is additive (at least). We require to trust all the
CAs. So we have a 100% untrustworthy rating for any system of 100 CAs
or more.


But that gets nonsensical when you add the 101st CA.

The CAs can each fail on you independently. Each one is a potential
weakest link in the chain that the Relying Party's security hangs from.
So their reliability statistics multiply:

one CA:   0.99  = 99% reliability
two CAs:  0.99*0.99 = 98% reliability
100 CAs:  0.99**100 = 37% reliability

I don't know many people who would consider a critical system that is
only 37% reliable to be meaningfully better than 100% untrustworthy
though.


The empirical numbers show that: out of 60 or so CAs and 600 sub-CAs,
around 4 were breached by that one attacker.


It's not believable that the breaches we've recently heard about are the
only times a commonly trusted root CA or one of their sub-CAs has
acted in a way that reduced the effective security of a Relying Party.


So, what to do? When the entire system is untrustworthy, at some
modelled level?


We'll figure something out, but it will take time.


Do we try harder, Sarbanes-Oxley style?


Even if you were to implement better controls to append a few more '9's
to the reliability statistic of every CA, the exponential decay in the
reliability as experienced by the RP will still dominate. The current
structure (of having more than a handful of trusted roots) simply cannot
be made to produce a system that is credibly secure for more than
low-value liability-limited transactions.


Or, stop using the word trust?


Yes.

The word 'trust' meant something useful when it was used to describe a
web of trust between PGP-using cypherpunks. Those models seem to work
best when the parties are intelligent actors who each understand the model!

But the word has a lot of overloaded meanings. I met a man who had just
completed his PhD dissertation on the meanings of this word 'trust'.
Worst of all, normal people seem to use it as something of the opposite
meaning of what is meant by cryptographic trust.

The problem we're facing today with the browser-based HTTPS system has
some important differences, too. Nearly all other authentication systems
are designed primarily for the purpose of an expertly implemented server
authenticating a user (like password logins) or of a conscientious users
authenticating a user (like PGP). If authenticating the server is done
at all, it is usually considered a detail or an afterthought to resist
man-in-the-middle attacks rather than something fundamental. The HTTPS
system is actually a very rare and difficult case: a completely amateur 
user is expected to strongly authenticate the identity of the server, 
with assistance from his client software.



Or?


Zooko said something the other day that has really stuck with me. I
can't get it out of my head, I hope he will give us a post to explain it
further:

https://twitter.com/zooko/status/108347877872500737
I find the word "trust" confuses more than it communicates. Try Mark S.
Miller's "relies on" instead!

The Relying Party (the browser user) *relies on* all CAs in his client
software's trusted root store. What does he rely on them to do? He
relies on them to refuse to issue a cert that could reduce his effective
security. When a CA issues sub-CAs, the user relies on them too.

It's a far more precise statement of the reality, one which saves us
from the fallacy of "the user trusts the CA, therefore whatever the CA
issues is trusted by definition and the user has no place to complain."

Is this user's reliance dependency transitive? - Yes, obviously.

Is trust transitive? - Deep philosophical discussion.

I think it also makes more clear the absurdity of the present situation.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Another data point on SSL trusted root CA reliability (S Korea)

2011-09-17 Thread Marsh Ray


Been seeing Twitter from @ralphholz, @KevinSMcArthur, and @eddy_nigg 
about some goofy certs surfacing in S Korea with CA=true.


via Reddit http://www.reddit.com/tb/kj25j
http://english.hani.co.kr/arti/english_edition/e_national/496473.html

It's not entirely clear that a trusted CA cert is being used in this 
attack; however, the article comes to the conclusion that HTTPS 
application data is being decrypted, so that's the most plausible assumption.


Quoting extensively here because I don't have a sense of how long The 
Hankyoreh keeps their English language text around.


- Marsh


NIS admits to packet tapping Gmail By Noh Hyung-woong 

It has come to light that the National Intelligence Service has been
using a technique known as “packet tapping” to spy on emails sent and
received using Gmail, Google’s email service. This is expected to
have a significant impact, as it proves that not even Gmail,
previously a popular “cyber safe haven” because of its reputation for
high levels of security, is safe from tapping.

The NIS itself disclosed that Gmail tapping was taking place in the
process of responding to a constitutional appeal filed by 52-year-old
former teacher Kim Hyeong-geun, who was the object of packet tapping,
in March this year.

As part of written responses submitted recently to the Constitutional
Court, the NIS stated, “Mr. Kim was taking measures to avoid
detection by investigation agencies, such as using a foreign mail
service [Gmail] and mail accounts in his parents’ names, and deleting
emails immediately after receiving or sending them. We therefore made
the judgment that gathering evidence through a conventional search
and seizure would be difficult, and conducted packet tapping.”

The NIS went on to explain, “[Some Korean citizens] systematically
attempt so-called ‘cyber asylum,’ in ways such as using foreign mail
services (Gmail, Hotmail) that lie beyond the boundaries of Korea‘s
investigative authority, making packet tapping an inevitable measure
for dealing with this.”

The NIS asserted the need to tap Gmail when applying to a court of
law for permission to also use communication restriction measures
[packet tapping]. The court, too, accepted the NIS’s request at the
time and granted permission for packet tapping.

Unlike normal communication tapping methods, packet tapping is a
technology that allows a real-time view of all content coming and
going via the Internet. It opens all packets of a designated user
that are transmitted via the Internet. This was impossible in the
early days of the Internet, but monitoring and vetting of desired
information only from among huge amounts of packet information became
possible with the development of “deep packet inspection” technology.
Deep packet inspection technology is used not only for censorship,
but also in marketing such as custom advertising on Gmail and
Facebook.

The fact that the NIS taps Gmail, which uses HTTP Secure, a
communication protocol with reinforced security, means that it
possesses the technology to decrypt data packets transmitted via
Internet lines after intercepting them.

“Gmail has been using an encrypted protocol since 2009, when it was
revealed that Chinese security services had been tapping it,” said
one official from a software security company. “Technologically,
decrypting it is known to be almost impossible. If it turns out to be
true [that the NIS has been packet tapping], this could turn into an
international controversy.”

“The revelation of the possibility that Gmail may have been tapped is
truly shocking,” said Jang Yeo-gyeong, an activist at Jinbo.net. “It
has shown once again that the secrets of people’s private lives can
be totally violated.” Lawyer Lee Gwang-cheol of MINBYUN-Lawyers for a
Democratic Society, who has taken on Kim’s case, said, “I think it is
surprising, and perhaps even good, that the NIS itself has revealed
that it uses packet tapping on Gmail. I hope the Constitutional Court
will use this appeal hearing to decide upon legitimate boundaries for
investigations, given that the actual circumstances of the NIS’s
packet tapping have not been clearly revealed.”

Please direct questions or comments to [englishh...@hani.co.kr]


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Math corrections [was: Let's go back to the beginning on this]

2011-09-17 Thread Marsh Ray

On 09/17/2011 11:59 PM, Arshad Noor wrote:


The real problem, however, is not the number of signers or the length
of the cert-chain; its the quality of the certificate manufacturing
process.


No, you have it exactly backwards.

It really is the fact that there are hundreds of links in the chain and
that the failure of any single weak link results in the failure of the
system as a whole. When the number of CAs is large like it is, it
becomes impossible to make all the CAs reliable enough (give them
enough nines of reliability) to end up with an acceptable level of
security.

On 09/15/2011 06:32 PM, d...@geer.org wrote:


The source of risk is dependence, perhaps especially dependence on
expectations of system state.


This is an extreme example of that principle.

Your insecurity gets exponentially worse with the number of
independent CAs.

Something this analysis doesn't capture probably even causes it to
understate the problem: CAs aren't failing randomly like earthquakes.
Intelligent attackers are choosing the easiest ones to breach. In other
cases, the CAs themselves will willfully sell you out!

Now you may be a law-and-order type fellow who believes that lawful
intercept is a magnificent tool in the glorious war on whatever. But if
so, you have to realize that on the global internet, your own systems
are just as vulnerable to a lawfully executed court order gleefully
issued by your adversary (as if they'd even bother with the paperwork).

And don't let anybody tell you that it will be hard for him to pull off 
an active attack on the internet, because in normal circumstances it 
just isn't.


It was demoed at DefCon 16:
http://www.wired.com/threatlevel/2008/08/how-to-intercep/
http://blog.wired.com/27bstroke6/2008/08/revealed-the-in.html

In the case of Kapela and Pilosov’s interception attack, Martin
Brown of Renesys analyzed that incident and found that within 80
seconds after Kapela and Pilosov had sent their prefix
advertisement to hijack DefCon’s traffic, 94 percent of the peers
from whom Renesys collects routing traffic had received the
advertisement and begun to route DefCon traffic to the eavesdroppers’
network in New York.


Yep, that's right. IP routes are agreed on based on the honor system.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] The Government and Trusted Third Party

2011-09-18 Thread Marsh Ray

On 09/18/2011 05:32 AM, Jeffrey Walton wrote:


The one thing I cannot palate: [many] folks in Iran had a
preexisting relationship with Google. For an Iranian to read his/her
email via Gmail only required two parties - the person who wants to
do the reading and the Gmail service. Why was a third party
involved?


This is a good question and it's the starting point of some of the
proposed solutions being floated (e.g. pinning).

I think the answer comes from the realm of ordinary software
engineering: state.
(no, not State like the government, let's not get sidetracked here :-)

The entire concept of a preexisting relationship adds new state to the
client endpoint (the web browser). This might seem like a small thing,
but it really isn't. To the extent a solution built on this
observation is effective, this state is also security critical.

Now that we have security critical state in the user's web browser, it
adds a lot of complication to the user interface.

* A user may change hardware or reinstall the software, so now you need
a mechanism to back it up and restore it, perhaps across vendors.
Otherwise, the user's security actually regresses when they switch to a
brand-new, clean and more secure PC.

* The state probably needs to be private since it contains browsing history.

* The state may become corrupted, either maliciously or accidentally.
This could be as common as cert warnings are today. So now the users
need a method to:
  - wipe out the state entirely ("clear the cache and cookies")
  - delete entries selectively (e.g., look through the cookies for the
site and all the affiliate sites serving resources into the page)
  - bypass the errors manually ("continue using the site anyway")
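
To illustrate how little code the happy path takes -- and how all the 
difficulty lives in the list above -- here's a bare-bones 
trust-on-first-use pin store in Python (a sketch only; the fingerprint 
scheme and names are my own arbitrary choices):

    import hashlib, ssl

    pins = {}   # hostname -> certificate fingerprint: the new state

    def check(host, port=443):
        pem = ssl.get_server_certificate((host, port))
        fp = hashlib.sha256(pem.encode()).hexdigest()
        if host not in pins:
            pins[host] = fp          # first visit: remember the cert
            return "pinned"
        if pins[host] == fp:
            return "ok"
        return "MISMATCH"            # key change or MitM -- ask the user?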

My personal view is that there's still probably a useful feature there
if these issues can be overcome with some luck and heroically elegant
software engineering.

But if you're someone who believes that users always thoughtlessly
bypass security warnings today, then you might see this feature as
another "damn the torpedoes, full speed ahead" button that users press
when actually under attack.

Note that out of over 300,000 IP addresses that made OCSP queries for
fraudulent DigiNotar certs, there was only one user who had the presence
of mind to ask about it on a help forum:
https://www.google.com/support/forum/p/gmail/thread?tid=2da6158b094b225ahl=en

This man deserves a medal and a place in history.
Shall we make a Wikipedia page for him?
Would the editors understand why he is noteworthy?

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Math corrections

2011-09-18 Thread Marsh Ray

On 09/18/2011 12:50 PM, Arshad Noor wrote:

On 09/17/2011 10:37 PM, Marsh Ray wrote:


It really is the fact that there are hundreds of links in the chain and
that the failure of any single weak link results in the failure of the
system as a whole.


I'm afraid we will remain in disagreement on this. I do not view the
failure of a single CA as a failure of PKI, no more than I see the
crash of a single airplane as an indictment of air-travel.


The crash of a single airplane only affects the passengers on that one 
airplane (and occasionally a few unlucky folks on the ground). It does 
not kill everyone on all airplanes.


But the failure of *any* single CA allows a successful attack on *every* 
user connecting to *every* https website. (Except, of course, Chrome 
users connecting to Google sites because it has special logic to avoid 
reliance on PKI).



Are there weaknesses in PKI? Undoubtedly! But, there are failures
in every ecosystem. The intelligent response to certificate
manufacturing and distribution weaknesses is to improve the quality
of the ecosystem - not throw the baby out with the bath-water.


OK, nothing's perfect, let's turn the equation around then.

What's the minimum level of reliability you would consider acceptable?

Usually crypto-systems hold themselves to a pretty high standard 
(certainly higher than the underlying transport), but we can pick 
anything. Let's look for a definition of high quality from non-secure 
systems and manufacturing.


Five-nines of availability is a pretty common goal for conventional 
telecommunications systems. It translates to about 5 minutes of downtime 
per year. It's similar to the Six Sigma quality initiative for 
manufacturing processes: one in which 99.99966% of the products 
manufactured are statistically expected to be free of defects (3.4 
defects per million).

http://en.wikipedia.org/wiki/High_availability
http://en.wikipedia.org/wiki/Six_Sigma

Let's try it. What number raised to the 150th power (150 being an 
estimate for the number of CAs trusted in current browser PKI) will 
give the security of our communications similar reliability to the phone 
company or a quality manufacturing process?


   I.e.,  r**150 = 0.99999

In Python, pow(0.99999, 1.0/150.0) returns approximately
0.9999999333, i.e. each individual CA would have to be 99.999993%
reliable.

That's *seven* nines of reliability required of a service that necessarily 
involves the interaction of both automated and human processes. You are 
just not going to get there no matter how much ISO 27001 you throw at 
the problem.


Yet you will have to require at least that of *every single* trusted 
root CA in order for the security of this 150-CA scheme to reach a 
similar level of reliability as the public telephone system did back in the 
20th century.
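
The same arithmetic for a range of CA counts, as a quick Python sketch 
(still assuming independent failures, as above):

    # Per-CA reliability needed for the whole system to reach five
    # nines, given n independently trusted CAs.
    target = 0.99999
    for n_cas in (1, 10, 150, 600):
        print(n_cas, target ** (1.0 / n_cas))
    # One CA needs five nines; 150 CAs need roughly seven nines each.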


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Math corrections

2011-09-19 Thread Marsh Ray

On 09/18/2011 11:48 PM, Arshad Noor wrote:

On 09/18/2011 01:12 PM, Marsh Ray wrote:


But the failure of *any* single CA allows a successful attack on *every*
user connecting to *every* https website.


Would you care to explain this in more detail, Marsh?

Please feel free to frame your explanation as if you were
explaining this to a 6-year old.


No.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] SSL is not broken by design

2011-09-19 Thread Marsh Ray

On 09/19/2011 10:53 AM, Andy Steingruebl wrote:

You know what else fails at fighting phishing?

- The locks on my car door


Hmmm, what would a phishing attack on your car door locks look like?

Perhaps someone could replace your car one night with a very 
similar-looking one, then when you're ready to leave your house in the 
morning you insert your key and it takes an impression of it.


Ideally the impostor car would fool you long enough for you to drive to 
work in it. When you were ready to leave work, both cars would be gone.



- The fence surrounding my house


That would take some creativity. Perhaps a good job interview question.


- The full disk encryption on my laptop


The evil maid!



/snark

SSL wasn't designed to stop phishing, if sites don't deploy it with
mutual-auth it can't possibly do so.


I'd love to be proven wrong, but even with client cert mutual auth there 
are probably some attacks there on modern browsers.



 Saying it is a failure because
it doesn't stop that ignores the problem it is designed to solve, or
at least some it could credibly claim to solve.

SSH doesn't solve phishing either.  Is it a total failure also?  I
don't think so.


I love SSH and think it's a great protocol. But to be honest, we have to 
admit that it would be far worse than SSL at the 
no-prior-relationship ecommerce bootstrapping problem.



SSL is used for a lot more than HTTPS.  Any proposal to fix it
*must* take that into account.


Thank you for repeating this.

Browser-based HTTPS is certainly the most visible, but not at all the 
only use case for SSL/TLS. Many uses of SSL/TLS don't even rely on this 
house-of-cards PKI constructed by the CA/Browser Forum.


IMHO, as far as crypto protocols go the TLS protocol itself is pretty 
solid as long as the endpoints restrict themselves to negotiating the 
right options.


On that note, there's a little more info coming out on the Duong-Rizzo 
attack:

http://threatpost.com/en_us/blogs/new-attack-breaks-confidentiality-model-ssl-allows-theft-encrypted-cookies-091611


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] DigiNotar SSL Hack Diagram | Cyber Chatter

2011-09-20 Thread Marsh Ray

On 09/20/2011 03:21 PM, Jeffrey Walton wrote:


Google's smart phone position
(http://code.google.com/p/cyanogenmod/issues/detail?id=4260): Why
would we remove the root certificate?  DigiNotar hasn't been revoked
as a CA... MITM attacks are pretty rare. (Sep 1, 2011). On Sept 2,
2011 the issue was closed. On Sept 10, 2011 they took partial action
(apparently, the project maintainers were getting tired of folks
re-opening the issue).


Those are the Cyanogen guys. Android modders. I heard that one of them 
was employed by Samsung, but AFAICT they are not speaking for Google.


People who run Cyanogen will probably get the fix faster than I will get 
one for my non-modded Google Nexus S phone though. :-/


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] PFS questions (was SSL *was* broken by design)

2011-10-03 Thread Marsh Ray

On 10/02/2011 03:38 AM, Peter Gutmann wrote:

Sandy Harrissandyinch...@gmail.com  writes:


What on Earth were the arguments against it? I'd have thought PFS was a
complete no-brainer.


Two things, it's computationally very expensive, and most people have no idea
what PFS is.


There's been one significant improvement since the 90s: even the typical 
MS Windows IT guy today will have at least played with Wireshark and may 
have even set up certificates for something. I find an easy way to 
explain PFS is that someone who gets a Wireshark capture won't be able to 
decrypt it EVEN IF they somehow later get the private key to the certificate.
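
Here's the idea in miniature -- a toy finite-field Diffie-Hellman 
sketch in Python with unsafely small parameters, just to show where the 
forward secrecy comes from:

    import secrets

    p = 2**127 - 1   # toy prime; real DHE uses vetted, larger groups
    g = 5

    a = secrets.randbelow(p - 2) + 2    # client's ephemeral secret
    b = secrets.randbelow(p - 2) + 2    # server's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)   # all that Wireshark ever sees
    assert pow(B, a, p) == pow(A, b, p) # the shared session secret

    # The certificate's long-term key only *signs* the ephemeral B.
    # Once a and b are wiped after the handshake, a later theft of the
    # signing key recovers nothing from the capture: it holds only g^a
    # and g^b, and the DH problem stands between the attacker and the
    # session key.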


There's an increasing awareness of data loss issues right now. I wonder 
if DHE ciphersuites will become recognized as a best practice?


At the risk of feeding the conspiracy angle, I note that there is only 
one stream cipher for SSL/TLS (RC4). All the others in common use are 
CBC modes, with that same predictable IV weakness as IPsec (i.e. BEAST). 
There are no DHE cipher suites defined for RC4. So if you want PFS, you 
have to accept predictable IVs. If you want resistance to BEAST, you 
have to give up PFS.
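
To make that concrete, here is a minimal client-side sketch using the 
stock Java JSSE API (the suite names are standard JSSE identifiers; the 
host is a placeholder). The enabled-suite ordering is where the 
PFS-versus-BEAST choice gets expressed:

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class PfsPreference {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory f = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket s = (SSLSocket) f.createSocket("example.com", 443)) {
            // Pick your poison: DHE gives PFS but only with CBC suites
            // (the predictable-IV issue); RC4 avoids CBC but has no DHE option.
            s.setEnabledCipherSuites(new String[] {
                "TLS_DHE_RSA_WITH_AES_128_CBC_SHA",
                "SSL_RSA_WITH_RC4_128_SHA"
            });
            s.startHandshake();
        }
    }
}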


Personally, I don't interpret this as anything more than the IETF 
process and some vendor biases back in the 90s. But it shows that 
designing for this concept of 'agility' is important, in particular for 
reasons you don't know at the time.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] PFS questions (was SSL *was* broken by design)

2011-10-05 Thread Marsh Ray

On 10/05/2011 07:57 AM, ianG wrote:


This thread originated in a state-led attack on google and 4 CAs
(minimum) with one bankruptcy, one state's government certificates being
replaced, measured cert uses (MITMs?) in the thousands.


Just for the record, the Fox-IT Interim Report September 5, 2011 
DigiNotar Certificate Authority breach 'Operation Black Tulip'

https://bugzilla.mozilla.org/attachment.cgi?id=558368 states that:

Around 300.000 unique requesting IPs to google.com have been identified.

Which would seem to represent a good lower bound on the number of users 
actually attacked.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] For discussion: MECAI: Mutually Endorsing CA Infrastructure

2011-10-21 Thread Marsh Ray

On 10/21/2011 08:09 AM, Kai Engert wrote:

This is an idea how we could improve today's world of PKI, OCSP,
CA's.

https://kuix.de/mecai/


This is great. We need these kinds of ideas.


Review, thoughts and reports of flaws welcome.


OK, this is a serious thought, not just a flippant remark:

Why would CAs want to act as VAs, and more importantly, why would they
want to revoke their vouching?

CAs seem to put a lot of emphasis on structured legal
agreements/contracts. Surely they have such agreements in place when
they cross-sign each other, so they would likely want them for this VA
system. Contracts are enforced primarily by legal action with courts and
lawyers and this adds very concrete risks and expenses even in the
clearest of cases. On the other hand, declining to stop vouching for a
partner CA experiencing some moderate problems (e.g. some compromised
resellers issued fraudulent certs that were eventually revoked) seems
associated with purely abstract risks (e.g. loss of confidence in the
system as a whole).

CAs are not the Relying Parties (i.e., users) and they're not even the
software vendors to the RPs (like Mozilla). It's not clear to me if they
feel the RPs are actually party to these contracts or to what extent
they otherwise consider themselves liable to the RPs. But I suspect that
the CAs themselves would be at least as reluctant to eliminate one of
their fellow members as a vendor of client software.

So is providing the CAs collectively with a tool to more efficiently
reinforce or remove endorsements amongst themselves going to result in a
substantial improvement over the system we have now?

Or would the cost of new infrastructure be better spent on something
else like, say, a more robust mechanism for informing users about
software security updates?

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] HMAC over messages digest vs messages

2011-11-02 Thread Marsh Ray

On 11/02/2011 02:33 PM, Jack Lloyd wrote:


It seems like it would be harder (or at least not easier) to find a
collision or preimage for HMAC with an unknown key than a collision or
preimage for an unkeyed hash, so using HMAC(H(m)) allows for an avenue
of attack that HMAC(m) would not, namely finding an inner collision
(or preimage) on H.


That also goes for length extension attacks, something that HMAC is 
sometimes used specifically to prevent.


HMAC(k, m) is much better than HMAC(k, H(m)).
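
A toy demonstration of why, with H truncated to 24 bits so a birthday 
collision is findable in a few thousand tries (illustration only; a real 
MD5 collision takes more work but the principle is identical). Any inner 
collision on H immediately yields a MAC forgery for HMAC(k, H(m)) with no 
knowledge of k:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class InnerCollision {
    // Toy hash: MD5 truncated to 3 bytes, weak enough to collide by brute force.
    static byte[] toyH(byte[] m) throws Exception {
        return Arrays.copyOf(MessageDigest.getInstance("MD5").digest(m), 3);
    }

    public static void main(String[] args) throws Exception {
        Map<String, Integer> seen = new HashMap<>();
        int m1 = 0, m2 = 0;
        for (int i = 0; ; i++) {   // birthday search, roughly 2^12 tries
            Integer prev = seen.put(Arrays.toString(toyH(Integer.toString(i).getBytes())), i);
            if (prev != null) { m1 = prev; m2 = i; break; }
        }
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec("secret-key".getBytes(), "HmacSHA256"));
        byte[] t1 = mac.doFinal(toyH(Integer.toString(m1).getBytes()));
        byte[] t2 = mac.doFinal(toyH(Integer.toString(m2).getBytes()));
        // The tag for m1 verifies the different message m2: a forgery found
        // without ever attacking the keyed HMAC itself.
        System.out.println(Arrays.equals(t1, t2)); // prints: true
    }
}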

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Declassified NSA Tech Journals

2011-11-27 Thread Marsh Ray


Came across this on Reddit:

Declassified NSA Tech Journals
http://www.nsa.gov/public_info/declass/tech_journals.shtml

It all looks so interesting it's hard to know where to start.

- Marsh

* Emergency Destruction of Documents - April 1956 - Vol. I, No. 1
* Development of Automatic Telegraph Switching Systems - July 1957 - 
Vol. II, No. 3

* Chatter Patterns: A Last Resort - October 1957 - Vol. II, No. 4
* Introduction to Traffic Analysis - April 1958 - Vol. III, No. 2
* Signals from Outer Space - April 1958 - Vol. III, No. 2
* Science and Cryptology - July 1958 - Vol. III, No. 3
* Net Reconstruction - A Basic Step in Traffic Analysis - July 1958 - 
Vol. III, No. 3
* Weather; its Role in Communications Intelligence - July 1958 - Vol. 
III, No. 3

* A New Concept in Computing - December 1958 - Vol. III, No. 4
* About NSA - January 1959 - Vol. IV, No. 1
* Antipodal Propagation - January 1959 - Vol. IV, No. 1
* Data Transmission Over Telephone Circuits - January 1959 - Vol. IV, No. 1
* Soviet Science and Technology: Present Levels and Future Prospects - 
January 1959 - Vol. IV, No. 1

* Cryptanalysis in The German Air Force - April 1959 - Vol. IV, No. 2
* The Special Felix System - April 1959 - Vol. IV, No. 2
* Intercept of USSR Missile Transmissions - July 1959 - Vol. IV, No. 3
* A Program for Correcting Spelling Errors - October 1959 - Vol. IV, No. 4
* COMINT Satellites - A Space Problem- October 1959 - Vol. IV, No. 4
* The Borders of Cryptology - October 1959 - Vol. IV, No. 4
* Did Aleksandr Popov Invent Radio? - January 1960 - Vol. V, No. 1
* Bayes Marches On - January 1960 - Vol. V, No. 1
* Book Review: Lost Languages - Fall 1960 - Vol. V, Nos. 3 & 4
* The Tunny Machine and Its Solution - Spring 1961 - Vol. VI, No. 2
* The GEE System I - Fall 1961, Vol. VI, No. 4
* Book Review: Lincos, Design of a Language for Cosmic Intercourse, Part 1 -
  Winter 1962 - Vol. VII, No. 1
* A Cryptologic Fairy Tale - Spring 1962 - Vol. VII, No. 2
* Aristocrat - An Intelligence Test for Computers - Spring 1962 - Vol. 
VII, No. 2

* Why Analog Computation? - Summer 1962 - Vol. VII, No. 3
* German Agent Systems of World War II - Summer 1962 - Vol. VII, No. 3
* The GEE System - V - Fall 1962 - Vol. VII, No. 4
* How to Visualize a Matrix - Summer 1963 - Vol. VIII, No. 3
* Book Review: Pearl Harbor: Warning and Decision - Winter 1963 - Vol. 
VIII, No. 1
* Soviet Communications Journals as Sources of Intelligence - August 
1964 - Vol. IX, No. 3
* Use of Bayes Factors With a Composite Hypothesis - Fall 1964 - Vol. 
IX, No. 4

* A List of Properties of Bayes-Turing Factors - Spring 1965 - Vol. X, No. 2
* A Boer War Cipher - Summer 1965 - Vol. X, No. 3 and Fall 1965 - Vol. 
X, No. 4

* Something May Rub Off! - Winter 1965 - Vol. X, No. 1
* Time Is - Time Was - Time Is Past Computes for Intelligence - Winter 
1965 - Vol. X, No. 1

* The Apparent Paradox of Bayes Factors - Winter 1965 - Vol. X, No. 1
* Extraterrestrial Intelligence - Spring 1966 - Vol. XI, No. 2
* Some Reminiscences - Summer 1966 - Vol. XI, No. 3
* Communications with Extraterrestrial Intelligence - Winter 1966 - Vol. 
XI, No. 1
* The Voynich Manuscript: The Most Mysterious Manuscript in the World 
- Summer 1967 - Vol. XII, No. 3

* Weather or Not - Encrypted? - Fall 1967 - Vol. XII, No. 4
* The Library and the User - Spring 1968 - Vol. XIII, No. 2
* Mokusatsu: One Word, Two Lessons - Fall 1968 - Vol. XIII, No. 4
* Key to The Extraterrestrial Messages - Winter 1969 - Vol. XIV, No. 1
* Curiosa Scriptorum Sericorum: To Write But Not to Communicate - Summer 
1971 - Vol. XVI, No. 3
* Multiple Hypothesis Testing and the Bayes Factor - Summer 1971 - Vol. 
XVI, No. 3

* The Rosetta Stone and Its Decipherment - Winter 1971 - Vol. XVI, No. 1
* Writing Efficient FORTRAN - Spring 1972 - Vol. IX, No. 1
* The Strength of the Bayes Score - Winter 1972 - Vol. XVII, No. 1
* Q.E.D. - 2 Hours, 41 Minutes - Fall 1973, Vol. XVIII, No. 4
* Rochford's Cipher: A Discovery in Confederate Cryptography - Fall 
1973, Vol. XVIII, No. 4
* Earliest Applications of the Computer at NSA - Winter 1973 - Vol. 
XVIII, No. 1

* Addendum to A Cryptologic Fairy Tale - Winter 1973 - Vol. XVIII, No. 1
* Some Principles of Cryptographic Security - Summer 1974 - Vol. XIX, No. 3
* Selected SIGINT Intelligence Highlights - Fall 1974 - Vol. XIX, No. 4
* Der Fall WICHER: German Knowledge of Polish Success on ENIGMA - Spring 
1975 - Vol. XX, No. 2

* A Personal Contribution to the Bombe Story - Fall 1975 - Vol. XX, No. 4
* Spacecraft Passenger Television from Laika to Gagarin - Spring 1976 - 
Vol. XXI, No. 2

* The Voynich Manuscript Revisited - Summer 1976 - Vol. XXI, No. 3
* An Application of Cluster Analysis and Multidimensional Scaling to the 
Question of Hands and Languages in the Voynich Manuscript - Summer 
1978 - Vol. XXIII, No. 3
* An Application of PTAH to the Voynich Manuscript - Spring 1979 - Vol. 
XXIV, No. 2

* German Radio Intelligence - Fall 1980 - Vol. XXV, No. 4

Re: [cryptography] Non-governmental exploitation of crypto flaws?

2011-11-27 Thread Marsh Ray

Steven Bellovin <s...@cs.columbia.edu> wrote:

Does anyone know of any (verifiable) examples of non-government
enemies exploiting flaws in cryptography?  I'm looking for
real-world attacks on short key lengths, bad ciphers, faulty
protocols, etc., by parties other than governments and militaries.
I'm not interested in academic attacks


Here are some ideas. I can probably run down some specific details and 
references if you need them:


* Cases of breached databases where the passwords were hashed and maybe 
salted, but with an insufficient work factor enabling dictionary attacks.


* NTLMv1/MSCHAPv1 dictionary attacks.

* NTLMv2/MSCHAPv2 credentials forwarding/reflection attacks.

* Here's an example of RSA-512 certificates being factored and used to 
sign malware:

http://blog.fox-it.com/2011/11/21/rsa-512-certificates-abused-in-the-wild/



On 11/27/2011 02:23 PM, Landon Hurley wrote:

GSM and the Kaos club expert would be a good example.


...and non-academic researchers would seem to be an important category.

* There's the fail0verflow break of the specific use of
ECC in the Sony PlayStation 3.
http://www.theregister.co.uk/2010/12/30/ps3_jailbreak_hack/

The copy protection industry would seem fertile ground for this sort of 
example.



So would the recent $200 hardware break of hdmi encryption.


* http://aktuell.ruhr-uni-bochum.de/pm2011/pm00386.html.en
As I read it the HDMI master key was leaked, perhaps by an insider, in 
2010. The $200 hardware was basically an implementation of the protocol 
using that key.


* Last but not least, there's DeCSS. The DVD consortium was dumb enough 
to distribute the decryption key in a software player where it could be 
examined so maybe it's not a crypto break like you're looking for. On 
the other hand, having a single symmetric key for a mass-produced 
consumer distribution channel certainly counts as a faulty protocol.



-- I want to be able to give real-world advice -- nor am I looking

for yet another long thread on the evils and frailties of PKI.


Say, anyone looked at the Bitcoin prices lately? :-)

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] 512-bit certs used in attack

2011-11-27 Thread Marsh Ray

On 11/27/2011 09:57 PM, Peter Gutmann wrote:

That's an example of *claims* of 512-bit keys being factored, with
the thinking being everyone knows 512-bit keys are weak, the certs
used 512-bit keys, therefore they must have got them by factoring.


Yeah. It seems like an important point.

http://technet.microsoft.com/en-us/security/advisory/2641690

There is no indication that any certificates were issued
fraudulently. Instead, cryptographically weak keys have allowed some
of the certificates to be duplicated and used in a fraudulent
manner.


On 11/27/2011 09:57 PM, Peter Gutmann wrote:

Unfortunately this doesn't explain how they got the 1024-bit and
longer keys that were also used in the attack.


Is that true? I haven't seen this reported. Link?


http://blog.fox-it.com/2011/11/21/rsa-512-certificates-abused-in-the-wild/

if you looked at the 4 certificates initially found, it was easy to
determine that all were 512bit RSA and used on HTTPS websites, which
 were still up at the time of writing. Later during our investigation
 we encountered 5 more certificates which also were used to
successfully sign malware throughout 2011 by the same attacker, all
512 bit RSA.

..in the Q&A section...

From all the signed executables we found related to this attack all
were exactly signed with a 512 bit RSA certificate and Mikko Hyponnen
stated during the closing keynote that the certificate in Malaysia
was explicitly not stolen.


Possibly this is built on some assumptions, but it seems to be the
simplest explanation for the data. I.e., how many ways are there for an
attacker with the goal of stealing certs to use in an attack and end up
getting caught with nine 512 bit ones?

Here are the possibilities I can come up with:

* Attacker actually obtained a representative sample of many certs, but
chose to use only the 512 bit ones for some unknown reason.

* Attacker compromised a little sub-CA in Malaysia which for some
unknown reason retained the private keys for 512 bit certs but not
better ones.

* Microsoft's statement that There is no indication that any
certificates were issued fraudulently does not accurately reflect the
reality and the attacker was able to get the sub-CA to issue fraudulent
certs, but for some reason only 512 bit ones.

* There actually were the expected proportion of 1024 bit certs used in 
the attack, but F-Secure/Mikko Hyponnen and Fox-IT have an institutional 
bias that causes them to miss observing them, not connect them to this 
attack, or not accurately report them.


-vs-

* Attacker used a known quantity of CPU time to factor some of the 512
bit RSA certs he found via SSL observatory or his own scan.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Non-governmental exploitation of crypto flaws?

2011-11-28 Thread Marsh Ray

On 11/28/2011 04:56 PM, Steven Bellovin wrote:


I'm writing something where part of the advice is don't buy snake
oil crypto, get the good stuff.  By good I mean well-accepted
algorithms (not proprietary for extra security!), and protocols
that have received serious analysis.  I also want to exclude
too-short keys.



But -- honesty requires that I define the threat model.  We *know*
why NSA wanted short keys in the 1990s, but most folks are not being
 targeted bypick your favorite SIGINT agency, and hence don't have
a major worry.


But where's the evidence of that claim?

AFAICT there is evidence of widespread wiretapping in the world. From
extra equipment closets in AT&T buildings to Carnivore AKA Omnivore
NSA programs. That's to say nothing of someone traveling
internationally. If you are a tech, aerospace, or military company in
the West, you should expect state-sponsored adversaries to rattle
your doorknobs on a regular basis.

Furthermore, some of the largest distributed supercomputers in the world
are botnets or on-line game systems now. The days of Western
intelligence agencies having unambiguously greater brute-force
capabilities than The Bad Guys^TM are drawing to a close. The
purported RSA factorization is a sign of that.


So -- is there a real threat that people have to worry about?  The TI
example is a good one, since it's fully verified.


Funny, that one sounds to me like a failed model. This idea of keeping
secrets locked in a plastic box while simultaneously selling it to
millions of consumers has failed every time it has been tried.


The claim has been made in the foxit blog, but as noted it's not
verified, merely asserted.


If we can't get clarification, perhaps we can obtain some samples of the
malware and confirm it ourselves.


WEP?  Again, we all know how bad it is, but has it really been used?
 Evidence?


Yes, WEP was a confirmed vector in the Gonzales TJX hack:

http://www.jwgoerlich.us/blogengine/post/2009/09/02/TJ-Maxx-security-incident-timeline.aspx


http://en.wikipedia.org/wiki/TJX_Companies#Computer_systems_intrusion

 number of affected customers had reached 45.7 million [9] and has

prompted credit bureaus to seek legislation requiring retailers to
be responsible for compromised customer information saved in their
systems. In addition to credit card numbers, personal information
such as social security numbers and driver's license numbers from
451,000 customers were downloaded by the intruders. The breach was
possible due to a non-secure wireless network in one of the stores.




Is anyone using BEAST?


Not to my knowledge.


Did anyone use the TLS renegotiate vulnerability?


I have spoken with pentesters who have used it successfully. Not on your 
typical web site.


And it's still out there.
For example, the Ultra High Secure Password Generator:
https://www.grc.com/passwords.htm

Every one is completely random (maximum entropy) without any pattern,
and the cryptographically-strong pseudo random number generator we
use guarantees that no similar strings will ever be produced again.
Also, because this page will only allow itself to be displayed over a
snoop-proof and proxy-proof high-security SSL connection, and it is
marked as having expired back in 1999, this page which was custom
generated just now for you will not be cached or visible to anyone
else.


Qualys reports that site as vulnerable to CVE-2009-3555 (it accepts
unsolicited insecure TLS renegotiation) and gives it a grade D overall:
https://www.ssllabs.com/ssldb/analyze.html?d=grc.com


A lot of the console and DRM breaks were flaws in the concept, rather
than the crypto.


I agree there's such a thing as proper and improper crypto. But it
also seems a bit unhelpful to draw the boundaries so carefully that the
commonly broken stuff is subsequently defined out of bounds. If you
divorce it completely from actual usable implementations, people will
find the advice so impractical that they will be susceptible to the very
snake oil we wish to denounce.


Password guessing doesn't count...


How about dictionary attacks and rainbow tables then?

I heard it stated somewhere that an Apple product was using PBKDF2 with
a work factor of 1. Does that count?

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Non-governmental exploitation of crypto flaws?

2011-11-28 Thread Marsh Ray

On 11/28/2011 05:58 PM, Marsh Ray wrote:


I heard it stated somewhere that an Apple product was using PBKDF2
with a work factor of 1. Does that count?


Follow-up.

It was Blackberry, not Apple:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2010-3741


Vulnerability Summary for CVE-2010-3741 Original release
date:10/05/2010 Last revised:07/19/2011 Source: US-CERT/NIST
Overview

The offline backup mechanism in Research In Motion (RIM) BlackBerry
Desktop Software uses single-iteration PBKDF2, which makes it easier
for local users to decrypt a .ipd file via a brute-force attack.



http://www.infoworld.com/t/mobile-device-management/you-can-no-longer-rely-encryption-protect-blackberry-436

 [Elcomsoft]

In short, standard key-derivation function, PBKDF2, is used in a
very strange way, to say the least. Where Apple has used 2,000
iterations in iOS 3.x, and 10,000 iterations in iOS 4.x, BlackBerry
uses only one.


Via http://en.wikipedia.org/wiki/PBKDF2#BlackBerry_vulnerability .
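
For reference, here is what the work factor amounts to in code, as a 
sketch using the stock Java PBKDF2 provider (parameter values are 
illustrative). The third argument is the iteration count: at 1, each 
attacker guess costs a single PRF evaluation; at the 10,000 quoted above 
for iOS 4.x, each guess costs 10,000.

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class Kdf {
    // Derive a 256-bit key; 'iterations' is the tunable work factor.
    static byte[] derive(char[] password, byte[] salt, int iterations) throws Exception {
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        return f.generateSecret(new PBEKeySpec(password, salt, iterations, 256)).getEncoded();
    }
}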


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Non-governmental exploitation of crypto flaws?

2011-11-28 Thread Marsh Ray

On 11/28/2011 06:52 PM, Steven Bellovin wrote:


On Nov 28, 2011, at 6:58 PM, Marsh Ray wrote:


On 11/28/2011 04:56 PM, Steven Bellovin wrote:


I'm writing something where part of the advice is don't buy snake
oil crypto, get the good stuff.  By good I mean well-accepted
algorithms (not proprietary for extra security!), and protocols
that have received serious analysis.  I also want to exclude
too-short keys.



But -- honesty requires that I define the threat model.  We *know*
why NSA wanted short keys in the 1990s, but most folks are not being
targeted bypick your favorite SIGINT agency, and hence don't have
a major worry.


But where's the evidence of that claim?


For which claim?  That most folks aren't being targeted by major SIGINT
agencies?  I suspect that it's the converse that needs proving.


Is there a distinction being made here? How fine is it?

Targeted may imply that someone has your name on a finite sized list 
somewhere.


On the other hand, some percentage of your traffic (or metadata about 
it) is likely being intercepted, archived, and indexed for later 
searching. We know Google, Facebook, and every sleazy ad server network 
on the internet does this. We know Syria does this, their BlueCoat logs 
were uploaded the other day. We know the US government believes in 
warrantless wiretapping and has at least one wiring closet in US telcos.


We could call this non-targeted surveillance. But given the searching 
and retrieval capabilities today (e.g., Palantir's glowing review in the 
WSJ the other day), is this still a useful distinction?


Just asking questions out loud here.


If you are a tech, aerospace, or military company in
the West, you would should expect state-sponsored adversaries to rattle
your doorknobs on a regular basis.


Right.  And if you manufacture paper clips or sell real estate, you're
not in that category.


One would certainly think so.

But surely the Malaysian Agricultural Research and Development Institute 
did not realize it was painting a target on itself when some IT staffer 
requested the code signing flag be set on their cert request for 
anjungnet.mardi.gov.my.

( http://www.f-secure.com/weblog/archives/2269.html )


I do note that none of the news stories about cyberattacks from China have
mentioned crypto.  Either it's not part of the attack -- my guess -- or
Someone doesn't want attention called to weak crypto.


With all the vulnerable Adobe client software out there they probably 
have more hack targets than they can possibly handle.



Funny, that one sounds to me like a failed model. This idea of keeping
secrets locked in a plastic box while simultaneously selling it to
millions of consumers has failed every time it has been tried.


I don't follow.  TI put a public key into their devices, and used the
private key to sign updates.


Yes that makes more sense then.


That's a perfectly valid way to use
digital signatures, even if I think their threat model was preposterous.
If they had used 1024-bit keys it wouldn't have been an issue.


Right, it likely would have fallen to some other issue.


If we can't get clarification, perhaps we can obtain some samples of the
malware and confirm it ourselves.


How?  Private keys are private keys; the fact that they exist somewhere
says nothing about how they were obtained.


The question remaining in my mind was: was this batch of signed malware 
found in the wild by F-Secure really signed with a set of exclusively 
512 bit keys?


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Auditable CAs

2011-11-29 Thread Marsh Ray

On 11/27/2011 03:00 PM, Ben Laurie wrote:

Given the recent discussion on Sovereign Keys I thought people might
 be interested in a related, but less ambitious, idea Adam Langley
and I have been kicking around:
http://www.links.org/files/CertificateAuthorityTransparencyandAuditability.pdf.


Some questions and first impressions.


Firstly, every publicly visible certificate should be published in a
publicly auditable certificate log


Isn't this something of a truism?
What constitutes a publicly visible cert?

Certs that are on public servers today are likely to be logged in SSL
Observatory and by other crawlers. Certs that are not on servers can
still be used to attack secure communications without warning.

Perhaps the relevant property is certs issued by a browser-trusted CA
or subordinate regardless of their visibility.

Which brings us right to the goal:

to make it impossible (or at least very difficult) for a Certificate
Authority (CA) to issue a certificate for a domain without the
knowledge of the owner of that domain.


Why would CAs sign up for this plan? Of what advantage is it for them?

CAs are currently engaged in the practice of selling sub-CAs. This plan
would require auditing of sub-CAs as well in order to be effective.

Yet CAs today refuse to disclose even *the number* of sub-CAs they have
issued, much less to whom they have issued them, much less the specific
certs that those sub-CAs have issued.

My impression is that some of the sub-CAs are used for purposes like
deep-inspecting firewalls around large corporate and governmental
networks. (At least, that's one of the few halfway-legitimate arguments
I can think of for their existence.) These systems issue new certs
on-the-fly as outgoing connections request new websites. In such a case
the firewall would have to be updated to accommodate the audit log
requirement, but that's a solvable problem. The maybe-impossible problem
to solve is that this explicitly public audit log now represents a major
information leak from the company that's very concerned about its security.

If CAs were willing to give up the sub-CA business, they could do it
today and we'd all be much better off. On the other hand, if they are
unwilling to give it up or impose public log requirements upon it, it
represents a big challenge to this proposal.

Google/Mozilla could perhaps give the CAs an ultimatum: adopt this or we
de-list you (or equivocate about your sites' trustworthiness in our
browser UI). But what if the CAs in the CA/B Forum collectively decide
to call their bluff? Like in some European-style electoral process, the
minority browser vendors, (e.g. Microsoft), start to look pretty
important here.

And of course, not all of the trusted root CAs are in it for the money.
Some of them are self-declared (or thinly-veiled) government agencies.
They likely issue a lot of internal certs and would prefer not to share
their internal DNS with the world. But who knows? Maybe they'd be
willing, at this point, to trade some capabilities for better security
overall.

So I think the namespace scoping part of the proposal (allow an
intermediate CA to create private certificates within a subdomain)
would be essential to its adoption. But, again, scoping could have been
implemented for CAs a long time ago if the will existed to do it.

Perhaps I'm just re-stating the obvious: change is likely to be opposed
by those who benefit from the status quo.

Distributing the log data in a way that is simultaneously authenticated, 
highly-available, bounded in its latency of updates, low-bandwidth, and 
does not represent a privacy leak is likely to be an interesting 
engineering challenge. (In comparison, consider the minor privacy leak 
caused by the much simpler HSTS scheme.)


But I would really like to see something like this adopted because I
think a public audit log would be a great improvement to security and
would go a long way toward restoring the public trust.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Auditable CAs

2011-11-30 Thread Marsh Ray

On 11/30/2011 05:24 AM, Ben Laurie wrote:

On Wed, Nov 30, 2011 at 1:18 AM, Marsh Ray <ma...@extendedsubset.com>
 wrote:


Perhaps the relevant property is certs issued by a browser-trusted
CA or subordinate regardless of their visibility.


If they are not visible, why would we care whether they are in the
log or not?


I guess I don't understand what you mean by 'publicly visible certs' in
the sentence:

Firstly, every publicly visible certificate should be published in a
publicly auditable certificate log.


Perhaps you define this category of publicly visible certs as certs
which display without warnings on default-configured browsers when
presented by the correct site.

Which today is the same set as certs issued by a browser-trusted CA or
sub-CA and this set makes up the great majority of secure sites people
visit. Server operators generally don't buy certs from CAs for any
reason other than to ensure their site will display without cert errors
in default-configured browsers. (Of course they may have a deeper
appreciation for the security model as a whole).

So it basically amounts to all certs that a default-configured browser
should accept which is approximately all certs issued by
browser-trusted CAs or sub-CAs today, i.e. valid certs.

On the other hand, one could interpret this category of publicly
visible certs as certs visible to the public, i.e., certs served by
legitimate servers on routable IPs located via public DNS. But this
interpretation would be much weaker (and I don't think that's what you
mean).

Do I have this right?


CAs do not need to sign up to the plan.


The title of the email is Auditable CAs, so I started with the
impression that some part of this plan involved auditing as a property
of the CA itself. But this doesn't seem to be the right interpretation.

The goal is said to make it impossible (or at least very difficult) for
a Certificate Authority (CA) to issue a certificate for a domain without
the knowledge of the owner of that domain and each certificate issued
must be accompanied by an audit proof.

But the proposal does nothing _directly_ to prevent a CA from issuing a
cert, right? And since browsers aren't logging the certs as they find
them, this doesn't inform the owner of the domain either.

Instead it seems to be a hoped-for effect: default-configured
browsers will raise hell if they are presented with a non-logged cert,
and CAs will feel compelled to go along with the audit logging.


What do you mean by auditing of sub-CAs?


If sub-CA-issued certs are to continue to display without warnings by
default-configured browsers, then they'll have to put the certs they
issue in the logs too, right?


The maybe-impossible problem to solve is that this explicitly
public audit log now represents a major information leak from the
company that's very concerned about its security.


What information does it leak that is not already leaked?


Wouldn't they have to put the certs they sign in the public log? They
don't have to do this today.


Internal certs are not a problem, as mentioned in the paper.


It would probably be worth describing how internal logs for internal CAs
would work. It could be rather simple, like the audit proof identifies
the specific log in a manner that makes it straightforward to publish
via an internal network.

I am liking this plan.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] really sub-CAs for MitM deep packet inspectors? (Re: Auditable CAs)

2011-12-01 Thread Marsh Ray

On 12/01/2011 11:09 AM, Ben Laurie wrote:

On Thu, Dec 1, 2011 at 4:56 PM, Marsh Ray <ma...@extendedsubset.com>
wrote:

http://www.prnewswire.com/news-releases/geotrust-launches-georoot-allows-organizations-with-their-own-certificate-authority-ca-to-chain-to-geotrusts-ubiquitous-public-root-54048807.html


They appear to actually be selling sub-RA functionality, but very
hard to tell from the press release.

Bottom line: I'm going to believe this one someone displays a cert
chain.



Translated:


GeoRoot is only available for internal use, and organizations must
meet certain eligibility requirements, [...]  compliance guidelines,
and hardware security specifications.

  


Organizations must maintain a list Certificate Revocation List (CRL)

  ^^


for all certificates issued by the company.

   


But don't worry,  Mozilla has a checklist for sub-CAs!

https://wiki.mozilla.org/CA:SubordinateCA_checklist





Terminology

The following terminology will be used in this wiki page regarding
subordinate CAs.

Third-Party: The subordinate CA is operated by a third party external
to the root CA organization; and/or an external third party may
directly cause the issuance of a certificate within the CA
hierarchy.



Third-party private (or enterprise) subordinate CAs: This is the case
where a commercial CA has enterprise customers who want to operate
their own CAs for internal purposes, e.g., to issue SSL server
certificates to systems running intranet applications, to issue
individual SSL client certificates for employees or contractors for
use in authenticating to such applications, and so on.

* These sub-CAs are not functioning as public CAs, so typical Mozilla
users would not encounter certificates issued by these sub-CAs in


s/would/should/


their normal activities.
* For these sub-CAs we need assurance that
they are not going to start functioning as public CAs.


As Dan would say, security comes from the absence of the potential 
for this type of surprise.


This is not security, this is reliance.


Currently the
only assurances available for this case it to ensure that these third
parties are required to follow practices that satisfy the Mozilla CA
Certificate Policy, and that these third parties are under an
acceptable audit regime.


Promises, promises.


o In Bug #394919 NSS is being updated to
apply dNSName constraints to the CN, in addition to the SANs.
o We
plan to update our policy to require CAs to constrain third-party
private (or enterprise) subordinate CAs so they can only issue
certificates within a specified domain. See section 4.2.1.10 of RFC
5280.


Someday.

To be fair to Mozilla, at least they're the ones with an open policy 
about it. I didn't find such a policy for the other popular web clients 
(I may not have looked hard enough).


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Digest comparison algorithm

2011-12-01 Thread Marsh Ray

On 12/01/2011 04:37 PM, Jerrie Union wrote:


public boolean check(byte[] digest, byte[] secret) {
  byte[] hash = md5(secret);

  if (digest.length != hash.length) {
    return false;
  }

  for (int i = 0; i < digest.length; i++) {
    if (digest[i] != hash[i]) {
      return false;  // early exit: running time reveals how many bytes matched
    }
  }
  return true;
}

I’m wondering, if it’s running as some authenticated server application, if
it should be considered as resistant to time attacks nowadays.


Not resistant. It's a timing oracle. Very dangerous.


I’m aware that’s
not a good practice, but I’m not clear if I should consider it as exploitable 
over the
network (on both intranet and internet scenarios).


Nate Lawson has some great resources on his blog.
http://rdist.root.org/2010/07/19/exploiting-remote-timing-attacks/

Further research in 2007 showed that differences as small as 20 
microseconds over the Internet and 100 nanoseconds over the LAN could be 
distinguished with about 1000 samples.


For example,

http://rdist.root.org/2009/05/28/timing-attack-in-google-keyczar-library/


A lot depends on the specifics of course. For example, can the attacker 
supply the digest directly? A lot of message authentication schemes 
seem to involve that type of thing (e.g., using HMAC instead of plain MD5).


Or perhaps the attacker supplies the 'secret', as in a 
password-validation routine. (Of course that's not the only problem in 
this routine for doing password validation). The attacker could supply 
various passwords. He knows the MD5s of the values he supplies. The 
timing comparison tells him how many bytes of the hash he has correct.


Although it would be difficult for him to do a full primary preimage 
attack on MD5 itself needed to extract the full hash value via timing, 
he probably would not have to. He just needs to work out the first few 
bytes of the hash value to enable an offline dictionary attack. E.g. 
Just by learning the first two bytes he can eliminate 65535/65536ths of 
the possible passwords.
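
A sketch of that offline step (hypothetical attacker code, assuming the 
timing oracle has already leaked the first two digest bytes): only about 
1 in 65536 dictionary words survives the filter.

import java.security.MessageDigest;
import java.util.List;
import java.util.stream.Collectors;

public class PrefixFilter {
    static List<String> survivors(List<String> dictionary, byte b0, byte b1) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        return dictionary.stream().filter(w -> {
            byte[] h = md5.digest(w.getBytes());  // digest() also resets md5
            return h[0] == b0 && h[1] == b1;      // keep only matching prefixes
        }).collect(Collectors.toList());
    }
}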



I would like to run some tests, but I’m not sure if I should follow some 
specific
approach. Anyone has done some research recently?


I pointed this out as a potential problem in Tor.
https://trac.torproject.org/projects/tor/ticket/3122
They promptly fixed it
https://gitweb.torproject.org/tor.git/history/HEAD:/src/common/di_ops.c
and did some timing statistical tests on their data-independent memcmp() 
implementation. NickM links to some timing test code in one of the 
comments (not in Java though).


The right approach is to find a well-tested timing-independent library 
for your platforms and use it. Inspect the generated code to be sure it 
does what you're expecting (compilers can be surprisingly clever at 
optimizing things you want to be slow).
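
For comparison, a minimal data-independent version of the loop above (a 
sketch; the same caveat about inspecting the generated code applies):

public static boolean constantTimeEquals(byte[] a, byte[] b) {
  if (a.length != b.length) {
    return false;            // length is not a secret here
  }
  int diff = 0;
  for (int i = 0; i < a.length; i++) {
    diff |= a[i] ^ b[i];     // accumulate differences; no early exit
  }
  return diff == 0;
}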


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Digest comparison algorithm

2011-12-01 Thread Marsh Ray

On 12/01/2011 10:15 PM, Solar Designer wrote:

On Thu, Dec 01, 2011 at 09:15:05PM -0600, Marsh Ray wrote:

When you can evaluate MD5 at 5.6 GH/s, accessing even a straight lookup
table in main memory is probably a slowdown.


Yes, but those very high speeds are throughput for large numbers of
hashes to compute in parallel.  If you don't yet have a large enough
number of inputs to hash (that is, if you have an algorithm with
conditional branching), then you'd achieve a lower speed.


Either way, it's overkill for finding candidate passwords for H[1], 
H[2], and probably H[3] and H[4]. (If the password even holds out that 
long).



http://whitepixel.zorinaq.com is probably the fastest single MD5 hash
cracker.  This one tests 33.1 billion of passwords per second against a
raw MD5 hash on 4 x AMD Radeon HD 5970 (8 GPUs).  Of course, the
passwords being tested are not arbitrary (e.g., you can't just feed a
wordlist to such a cracker), although the character set is configurable.


Where would you find a wordlist to keep it busy for more than a 
millisecond anyway?



1. Already discussed: implement constant-time comparisons by using XORs
and ORs.


Talking with people who work closely with code generation convinced me 
that it's essential to examine the generated code. A compiler might 
recognize and exploit the opportunity for early loop termination.



2. Pass both strings to compare through an HMAC with a secret.  If one
of the strings is a secret, then that secret may be reused for this HMAC
as well.


http://www.isecpartners.com/blog/2011/2/18/double-hmac-verification.html

It may be relevant that in this case it isn't specified which of the two 
parameters 'digest' and 'secret' are unknown to the attacker.
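
A sketch of that double-HMAC idea (standard Java APIs; the 32-byte nonce 
key size is my assumption): both inputs are re-MACed under a fresh random 
key before comparing, so any timing leak in the final compare reveals 
only values the attacker can neither predict nor control.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class DoubleHmac {
    static boolean verify(byte[] expected, byte[] actual) throws Exception {
        byte[] nonceKey = new byte[32];
        new SecureRandom().nextBytes(nonceKey);  // fresh key per comparison
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(nonceKey, "HmacSHA256"));
        byte[] a = mac.doFinal(expected);        // doFinal resets the Mac
        byte[] b = mac.doFinal(actual);
        return Arrays.equals(a, b);              // naive compare is now safe(r)
    }
}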



It'd be curious to explore how much entropy in the salt is needed for
this.  Are 12-bit salts of traditional DES-based crypt(3) sufficient
against remote timing attacks or not?


Let's assume crypt(3) returns a string which is compared against the 
expected value using strcmp(), and the salted hash is formed of hex 
digits like:


%crypt(3)%SSS%HHHHHHHHHHHHHHHH%

SSS - 12 bit salt
HHH - 64 bit value from DES-like function

(I know it uses $ and some form of base-64 in practice, but the relevant 
factor is that the salt comes before the hash value, and everything else 
before H[0] is fixed and known to the attacker.)


The attacker generates, say, 4096 random passwords and accurately times 
their evaluation. If there isn't too much jitter on the network (or the 
local machine), and his timing measurements are accurate enough, he will 
observe the timings grouping into two clusters:


1. The largest cluster will represent the case where H[0] fails the 
comparison in strcmp().


2. The second cluster will be on the order of a few machine cycles 
longer,  representing times that H[0] compared successfully. This 
cluster will be approximately 256 times smaller than the first. With 
4096 trials the expectation is that this cluster will contain about 16 
members.


Now that he has a fuzzy idea of which passwords succeed in matching 
H[0], he evaluates this set for all 4096 possible salt values. There 
will be only one salt value that produces the same H[0] for all of these 
passwords. It's possible that some of his values crept into the wrong 
cluster, but these can be readily ignored if, say, 10 of the 16 produce 
a match. 16*4096 is not very much work at all, and most of it can be 
skipped.


So if his timing data is any good, he has learned the salt and can 
quickly verify it with some confirming tests. The attacker proceeds to 
work out the remainder of the password hash as before.
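
In code, the offline confirmation step might look like the following 
sketch. desCryptFirstByte() is a hypothetical stand-in for "first output 
character of the DES-based hash under this salt"; it is not a real 
library call.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

abstract class SaltRecovery {
    abstract byte desCryptFirstByte(int salt, String password); // hypothetical

    int recoverSalt(String[] timingMatched, int threshold) {
        for (int salt = 0; salt < 4096; salt++) {
            Map<Byte, Integer> agree = new HashMap<>();
            for (String pw : timingMatched) {
                agree.merge(desCryptFirstByte(salt, pw), 1, Integer::sum);
            }
            // e.g. 10 of 16 passwords agreeing on H[0] singles out the true salt
            if (Collections.max(agree.values()) >= threshold) {
                return salt;
            }
        }
        return -1; // timing data was too noisy
    }
}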


Conclusion: Salts placed at the beginning of the password string must 
contain sufficient entropy to resist offline brute-force in order to 
provide mitigation against timing attacks. It may be better to place 
them at the end of the password hash string.



 (Assuming that these salts are
otherwise perfect.)  They appear to have been sufficient in practice so
far (I haven't heard of anyone mounting such an attack), but there's
room for some research and testing here (likely proving that slightly
larger salts or constant-time comparisons are desirable for this).


(*_*);;;

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Digest comparison algorithm

2011-12-01 Thread Marsh Ray

On 12/02/2011 01:21 AM, Marsh Ray wrote:


Out of a set of 4096 (salt values) random functions each mapping

{ 1...256 }  ->  { 0...255 }
  samples        H[0] values

how many would we expect to have all samples map to the same value,
i.e., have a codomain size of 1 ?


s/codomain/image/

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] DTLS implementation attack?

2011-12-06 Thread Marsh Ray


Anyone have any more info on this?

Even just a CVE or 'fixed in' version would be helpful.

http://www.isoc.org/isoc/conferences/ndss/12/program.shtml#1a

Plaintext-Recovery Attacks Against Datagram TLS

Kenneth Paterson and Nadhem Alfardan We describe an efficient and
full plaintext recovery attack against the OpenSSL implementation of
DTLS, and an efficient, partial plaintext recovery attack against the
GnuTLS implementation of DTLS. We discuss the reasons why these
implementations are insecure, drawing lessons for secure protocol
design and implementation in general.


Thanks,

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-07 Thread Marsh Ray


   [Really this is to the list, not so much Jon specifically]

On 12/07/2011 02:10 PM, Jon Callas wrote:


Let's figure out what we're trying to accomplish; after that, we
can try to figure out how to do it.


I think that's the central problem we're dealing with. There are scads
of mechanism and little policy.

I also don't think we're going to agree on what policy should be,
except within limited contexts.


We've discussed CAs, PKI, liability, policy, etc.

But conspicuously absent in this discussion has been the Relying Party
(i.e., the end user) and their software vendor.

As weird as this sounds, the RP is the party with the ultimate control.
With the notable exception of DRM, it is the end user and the software
they selected to operate on their behalf who takes the bits from various
sources, drops them into this Rube Goldberg contraption, turns the
crank, and receives a slip of paper as output. At that point, it is up
to the user (in coordination with their software vendor) to behave
differently according to their interpretation of the result.

So I would like to differ a little bit with this statement:

On 12/07/2011 01:34 PM, ianG wrote:

Revocation's purpose is one and only one thing:  to backstop the
liability to the CA.


Maybe that's how its design was originally motivated, but a facility
like revocation *is* precisely what users and their software vendors
make of it.

For example:

* There are operating systems that can and do apply regular updates on
root CAs and CRLs as part of their recommended regular patch channel.

* Microsoft implemented effectively CA pinning for certain Windows code
updates quite some time ago.

* A clueful Gmail user detected the otherwise-valid Iranian MitM cert
because Google implemented effectively CA pinning in Chrome, at least
for its own sites.

* Walled-garden app stores and DRM. Sure we all hate it and it's a
largely different threat model, but it's an example of something.

These examples have one thing in common: it is possible that something
can be widely deployed that's more effectively secure than we have now.

Yes, there will be difficulties. No, it will never be perfect. But boy
is there ever an opportunity for improvement.

It may upset some apple carts however, in particular one of my
favorites. It's called: Wow PKI is really busted, let's make popcorn
and watch the slow motion train wreck play out on the tubes.

But I find this especially ridiculous because I know for a fact that
there are people on this list who are working for and directly advising
every part of PKI: the big browsers, other client software vendors,
secure websites, CAs, cypherpunks, academic cryptographers, end users,
you name it!

Moxie gets this, his convergence proposal talk has > 33K views on
YouTube and he just sold his company to Twitter. What's up with that, hmm?

Google gets this, they have multiple proposals and implementation
projects going on for enhancements in this area. And they'll
nonchalantly deploy something into Chrome in some future unnumbered
update, Mozilla will follow soon after, and then the spec will be
submitted to IETF for copy editing.

CAs we will have with us always, but the current semantics of PKI
validation (trusted roots and spotty revocation checking) are on their
way out the door. Some products will rise, some will fall, some vendors
will feel some pressure, and yes even some users will get educated about
security in the process.

So, will you be making a contribution to the solution?

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-07 Thread Marsh Ray


On 12/07/2011 07:01 PM, lodewijk andré de la porte wrote:

I figured it'd be effective to create a security awareness group
figuring the most prominent (and only effective) way to show people
security is a priority is by placing a simple marking, something like
 this site isn't safe!


I thought the international symbol for that was already agreed upon:
goatse.cx


On 12/07/2011 07:13 PM, lodewijk andré de la porte wrote:

I'm afraid signing software is multiple levels of bullocks. Imagine a
 user just clicking yes when something states Unsigned software, do
you really want to install?.


You're just thinking of a few code signing schemes that you have direct 
experience with.


Apple's iPhone app store code signing is far more effective for example.


Imagine someone working at either a
software or a signing company. Imagine someone owning a little bitty
software company that's perfectly legitimate and also uses the key to
sign some of his malware.


His own malware? With his own certificate? How dumb can he be?


Software signing isn't usable for regular end users, experienced
users already have hashes to establish integrity up to a certain
level, gurus and security professionals compile from source instead
of trusting some binary. And yes that does exclude hidden-source
software, it's the only sensible thing to do if you don't want trust
but real security!


A scandal broke just the other day when http://download.cnet.com/ was 
found to be trojaning downloaded executables in their custom download 
manager wrapper. Just to be helpful, this wrapper would change your home 
page to Microsoft, change your search engine to Bing, and install a 
browser toolbar that did lord knows what other helpful stuff if you were 
dumb enough to click the Yes please install the helpful thing I 
downloaded button. After they find their PC filled with crapware, users 
likely attribute it to the poor unsuspecting developer of the legitimate 
application they'd intended to download.


Even the simplest code signing mechanism at least prevents application 
installers from being corrupted by commercial distribution channels like 
that. But only IF enough users were given a security justification for 
insisting on a valid signature on the installers would CNET recognize 
that that kind of sleazy practice would harm their brand.



http://download.cnet.com/8301-2007_4-57338809-12/a-note-from-sean-regarding-the-download.com-installer/


MS Windows 8 is said to be introducing an app store distribution channel.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-07 Thread Marsh Ray

On 12/07/2011 08:12 PM, lodewijk andré de la porte wrote:

I'm afraid far more effective just doesn't cut it. Android has
install .APK from third party sources which you'll engage whenever you
install an APK without using the market, trusted or not.


That's why I didn't use Android as an example.

I said Apple's iPhone app store code signing is far more effective.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-08 Thread Marsh Ray

On 12/08/2011 09:16 AM, Darren J Moffat wrote:

On 12/07/11 14:42, William Whyte wrote:

Well, I think the theoretically correct answer is that you *should*...
these days all the installers can be available online, after all.


Except when the installer CD you need is the one for the network driver
on the new machine without which you can't get online !


There are systems that aren't online, and there are systems that 
shouldn't be online for good reasons. For example the power grid.


If we consistently neglect this scenario, then if the Internet ever 
suffers more than a brief outage we could find ourselves rebuilding 
society from the iron age.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] OpenDNS

2011-12-08 Thread Marsh Ray

On 12/08/2011 01:09 PM, jd.cypherpunks wrote:

David Ulevitch is rolling out OpenDNS http://david.ulevitch.com/
What do you think?


I assume you're talking about their new DNSCrypt application.

They seem to be saying it's an implementation of DJB's DNSCurve protocol.
https://twitter.com/#!/davidu/status/144213491736248320

Some source code is here.
https://github.com/opendns/dnscrypt-proxy
AFAICT this is for a proxy to (guess who) OpenDNS only at this point.
I don't know if they're planning to release code for the resolver side. 
It may be intended for use with OpenDNS only.


The code is pretty clean looking, to the point of being sterile. No 
author attribution or even source code comments.


I haven't come across any protocol documentation. It looks pretty 
simple, mostly just encrypting the DNS packets as messages with NaCl 
cryptobox http://nacl.cr.yp.to/box.html .


Of course, the details matter and I haven't looked into it thoroughly.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-21 Thread Marsh Ray

On 12/21/2011 04:24 PM, Michael Nelson wrote:


Somewhat related: The IEEE is asking for proposals to develop and
operate a CA as a part of their Taggant System.  This involves
signing to validate the usage of packers (compressing executables).
Packers can make it hard for anti-virus programs to spot malware.

Does this strike you as impractical?


Yes.


It seems obvious to me that it will be a wasted effort.


Well the people involved are not dumb, right? They know the capabilities 
of malware as well as most anyone.


Here's an overview of the proposed system:

http://standards.ieee.org/news/2011/icsg_software.html

Today malware uses code-obfuscation techniques to disguise the
malicious actions they take on your machine. As it is a hard problem
to overcome such protection technologies, computer-security tools
have started to become suspicious if an application uses code
protection. As a result, even legitimate content-protection
technology—generally put in place to either control application usage
or protect intellectual property (IP) from exposure or tampering—can
lead to false-positive detections. This forces technology vendors,
such as software publishers, to make a decision between security and
software accessibility. Joining the IEEE Software Taggant System
enables SafeNet to provide our customers a way to enjoy the benefits
of the proven IP protection without the risk of triggering a negative
response from common malware protection tools.


So my interpretation of what they're essentially saying is this:

There are mostly three categories of software that need to modify 
executable memory pages:


A. Operating system loaders. EXEs and DLLs are things AV companies 
already scan. These modules can be code-signed today (and we all know 
that signed code is safe code).


B. The legitimate code obfuscation systems currently for IP protection 
and DRM.


C. Malware, which today uses code polymorphism (unpackers) to evade 
signature-based detection.


When today's host based antimalware systems see the code modifications 
happening, it doesn't have an easy way to distinguish category B from 
category C. So these researchers propose to move category B applications 
into category A (under the threat of risk of triggering a negative 
response from common malware protection tools) and thereby emulate the 
success of the operating system-based code signing systems.


Here's a classic article on the topic. In this case, the OS executable 
loader itself is used as the unpacker:

http://uninformed.org/index.cgi?v=6&a=3&p=2

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] folded SHA1 vs HMAC for entropy extraction

2012-01-05 Thread Marsh Ray

On 01/05/2012 03:46 PM, Thor Lancelot Simon wrote:

I am asking whether the
use of HMAC with two different, well known keys, one for each purpose,
is better or worse than using the folded output of a single SHA
invocation for one purpose and the unfolded output of that same
invocation for the other.


But you don't need HMAC for this; HMAC's properties are evaluated for 
authentication.


What this usage needs is a tweakable one-way compression function. Like, 
say, a hash function with a different fixed input prefix for each 
operation. Having your tweak values a fixed size is a good idea.
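
A minimal sketch of that idea, assuming SHA-256 and two made-up 
fixed-size (8-byte) tweak strings; neither string comes from any standard:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class TweakedHash {
    static byte[] tweaked(String tweak, byte[] input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(tweak.getBytes(StandardCharsets.US_ASCII)); // fixed-size prefix
        return md.digest(input);
    }
    // e.g. tweaked("EXTRACT1", pool) for the RNG output path,
    //      tweaked("RESEED01", pool) for updating the internal state.
}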


HMAC is doing something similar, but using the secret key as the prefix. 
It expands the secret to the same size as the hash function's input 
block (usually 512 bits). Having them take up a whole input block might 
improve performance a little in some implementations because the 
intermediate state you have to store is smaller and in this case it 
could even be compile-time constant.


I don't like this idea of folding the output with XOR, especially down 
to 80 or 64 bits. (Actually, if you look at the details of MD5/SHA-(1,2) 
it already does some similar 'folding' using addition-mod-32 from twice 
the output size as the last step before output.)
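
(For concreteness, by "folding" I mean something like this sketch; the 
kernel's exact code may differ.)

static byte[] fold(byte[] sha1Out) {  // sha1Out.length == 20
    byte[] folded = new byte[10];
    for (int i = 0; i < 10; i++) {
        folded[i] = (byte) (sha1Out[i] ^ sha1Out[i + 10]); // XOR halves: 160 -> 80 bits
    }
    return folded;
}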


The source code I saw (Linux kernel maybe?) had a comment indicating 
they were folding the output out of fear that the statistical properties 
of plain MD5 might be biased. Although this may have once been an open 
question, I don't think it's a valid concern any more. Rather, if you 
believe the output of your one-way compression function might be 
observably biased, then you ought to be using something else!


IMHO, tweaked SHA-2-256 (or SHA-2-512/256, whichever is faster) should 
work fine here.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] folded SHA1 vs HMAC for entropy extraction

2012-01-05 Thread Marsh Ray

On 01/05/2012 05:59 PM, Thor Lancelot Simon wrote:


FWIW, using HMAC like this is the extract step of the two-step
extract-expand HMAC-based construction that is HKDF



From http://tools.ietf.org/html/draft-krawczyk-hkdf-01

2.2.  Step 1: Extract

   PRK = HKDF-Extract(salt, IKM)

   Options:
      Hash     a hash function; HashLen denotes the length of the
               hash function output in octets
   Inputs:
      salt     optional salt value (a non-secret random value);
               if not provided, it is set to a string of HashLen zeros.
                                              ^^^^^^^^^^^^^^^^^^^^^^^^
So this is your fixed-value 'tweak'.

      IKM      input keying material
   Output:
      PRK      a pseudo-random key (of HashLen octets)

   The output PRK is calculated as follows:

   PRK = HMAC-Hash(salt, IKM)
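
In code, the extract step with the default salt is just this (a sketch, 
using SHA-256):

    import hashlib, hmac

    def hkdf_extract(ikm: bytes, salt: bytes = b"") -> bytes:
        """HKDF-Extract: PRK = HMAC-Hash(salt, IKM)."""
        if not salt:
            salt = b"\x00" * hashlib.sha256().digest_size  # HashLen zeros
        return hmac.new(salt, ikm, hashlib.sha256).digest()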


Now the definition of HMAC from http://tools.ietf.org/html/rfc2104
where 'K' is the key/tweak input:


 ipad = the byte 0x36 repeated B times
 opad = the byte 0x5C repeated B times.

H(K XOR opad, H(K XOR ipad, text))


So HMAC with a fixed key input is exactly equivalent to evaluating the 
underlying hash function twice with two different fixed prefixes.
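
A quick check of that equivalence in Python (note that HMAC first 
zero-pads a short key out to the block size B):

    import hashlib, hmac

    B = 64                                 # SHA-1 input block size in bytes
    key = b"well-known fixed key"          # the public 'tweak'
    k = key.ljust(B, b"\x00")              # HMAC zero-pads short keys
    prefix_a = bytes(b ^ 0x36 for b in k)  # K XOR ipad
    prefix_b = bytes(b ^ 0x5c for b in k)  # K XOR opad

    text = b"input keying material"
    inner = hashlib.sha1(prefix_a + text).digest()
    assert hashlib.sha1(prefix_b + inner).digest() == \
           hmac.new(key, text, hashlib.sha1).digest()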



HMAC does have some other desirable properties that the raw
hash functions do not, no?


I'm not seeing any for the simple extraction scenario, other than the 
doubling-up of the hash function.



I thought HMAC met the strict avalanche
criterion, while SHA1 does not,


Well perhaps, but what I'm saying is that if you believe that that has 
implications for your RNG extractor, then you also believe that SHA-1 is 
not an ideal hash function, i.e., it is broken (and I think most folks 
would agree with you on that).


But the hash function
SHA-1(fixed_prefix_b || SHA-1(fixed_prefix_a || text))
AKA
HMAC_SHA-1(fixed_key, text)

MAY be usefully less broken. However, the result will have slightly less 
than the theoretical 160 bits of entropy (maybe 159.2) due to the way 
SHA-1 is iterated.



and that this was one of the reasons
why truncation of HMAC results was considered safer than truncation
of raw hash results.


I think that has more to do with the presumption that the key is 
secret. If the attacker doesn't know the key, he can't compute the 
function, so he can't conduct offline attacks against the truncated result.



 In this application, the result will often be
truncated when it is used, which is another reason why I -- naive
crypto-plumber though I am -- thought HMAC might be a better choice.


IMHO the bigger danger in RNGs is complexity and overengineering.

But others have well-reasoned opinions that differ from mine. :-)

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Fwd: [TLS] Fwd: New Non-WG Mailing List: therightkey

2012-01-14 Thread Marsh Ray



 Original Message 
Subject: [TLS] Fwd: New Non-WG Mailing List: therightkey
Date: Fri, 13 Jan 2012 18:26:18 +
From: Stephen Farrell stephen.farr...@cs.tcd.ie
To: s...@ietf.org s...@ietf.org, pkix p...@ietf.org, t...@ietf.org 
t...@ietf.org, dane d...@ietf.org



FYI please sign up if interested but wait a few days
to give folks a chance to sign up before starting in
on discussion.

Stephen & Sean.

 Original Message 
Subject: New Non-WG Mailing List: therightkey
Date: Fri, 13 Jan 2012 10:23:58 -0800 (PST)
From: IETF Secretariat ietf-secretar...@ietf.org
To: IETF Announcement list ietf-annou...@ietf.org
CC: theright...@ietf.org, turn...@ieca.com, stephen.farr...@cs.tcd.ie



A new IETF non-working group email list has been created.

List address: theright...@ietf.org
Archive: http://www.ietf.org/mail-archive/web/therightkey/
To subscribe: https://www.ietf.org/mailman/listinfo/therightkey

Purpose: A number of people are interested in discussing proposals
that have been developed in response to recent attacks on
the Internet security infrastructure, in particular those
that affected sites using TLS and other protocols relying
on PKI. This list is intended for discussion of those proposals
and how they might result in potential work items for the IETF.
One short-term outcome may be the holding of a non-wg-forming
BoF at IETF-83.

For additional information, please contact the list administrators.

___
TLS mailing list
t...@ietf.org
https://www.ietf.org/mailman/listinfo/tls
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Chrome to drop CRL checking

2012-02-06 Thread Marsh Ray

On 02/06/2012 09:00 PM, Jonathan Katz wrote:


One question, though. Langley writes: "If the attacker is close to
the server then online revocation checks can be effective, but an
attacker close to the server can get certificates issued from many
CAs and deploy different certificates as needed." Anyone follow this
line of reasoning?


Think of a small-to-medium business whose secure website has servers at
only a single datacenter. If you were their ISP at that datacenter, you
could MitM all their traffic.

If you can pwn their email, you can go to any number of CAs and buy a
DV (domain-validated) cert for their domain name.

The rules established by the CA/Browser Forum
http://www.cabforum.org/Baseline_Requirements_V1.pdf
say of the subjectAltName field:

The CA MUST confirm that the Applicant controls the Fully-Qualified
Domain Name or IP address or has been granted the right to use it by
the Domain Name Registrant or IP address assignee, as appropriate.


So in theory a CA could issue a cert to some party on the basis that
they can change some DNS entries or web pages (as seen by the CA at the
time of registration) in the target domain.

I always kinda thought an attacker with that sort of network capability
was exactly the kind of thing SSL was supposed to protect against.

- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Chrome to drop CRL checking

2012-02-07 Thread Marsh Ray

On 02/07/2012 05:41 PM, Andy Steingruebl wrote:


I don't remember Adam saying in his blog post or in any other posts,
etc.  that this is the only change they will make to Chrome.


Surely.


At the
same time I think they did get fairly tired of hard-coding a CRL list
into the Chrome binary itself for the CA breaches...


That was certainly my initial reading-between-the-lines.

Shipping emergency patches to revoke certs became such a regular thing 
over the summer that this scheme likely grew out of the simple need for 
a systematic, streamlined release process for them.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] trustwave admits issuing corporate mitm certs

2012-02-12 Thread Marsh Ray

On 02/12/2012 10:24 AM, John Levine wrote:

They also claim in their defense that other CAs are doing this.

Evading computer security systems and tampering with communications is
a violation of federal law in the US.


As the article made quite clear, this particular cert was used to
monitor traffic on the customer's own network, which is 100% legal
absent some contractual agreement with the customers not to do that.


IANAL by any stretch, but it seems to me that to say something
is 100% legal is usually a bit of an overstatement.

For example, I knew someone who audited network monitoring equipment for 
a retail chain that (as many chains do) issues credit cards. They were able to 
monitor all kinds of traffic in and out of their network, *except* when 
an employee went to check the balance on their own cards. One could 
imagine all kinds of other protected communication that might happen in 
an employment scenario.


What happens if the interception device gets hacked? Even if the keys 
remain in some HSM, the attacker could compromise any machine on the 
inside and route traffic through it. By observing the log messages (as 
Telecomix did on Syria's BlueCoats) he may successfully decrypt some or 
all of the traffic.


So even if we assume they are intended to be used for good, the 
existence of these MitM certs diminishes the effective security of 
SSL/TLS for everyone.


As I see it, this could turn into an epic legal meltdown if, say, the 
widows of disappeared Libyan/Syrian/Iranian dissidents were to file suit 
against the companies making interception equipment (or even browser 
vendors like Mozilla). These vendors and CAs could be in a bad spot if 
they made public statements that turned out to contradict their actual 
practice.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] how many MITM-enabling sub-roots chain up to public-facing CAs ?

2012-02-14 Thread Marsh Ray

On 02/14/2012 02:56 PM, Ralph Holz wrote:


BTW, what we do not address is an attacker sending us many forged chains
and/or traces. We don't want clients have to register with our server
and obtain an identity. That's a sore point.


Aren't the certs of interest those that chain to a well-known root?

So submissions could be validated, and those that don't chain to a 
well-known root could be efficiently discarded. At that point, the 
attacker is reduced to effectively mounting an SSL DoS against you, 
which is likely to grow old quickly.
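
Something along these lines would do the filtering. (A rough sketch 
using the modern pyca/cryptography verification API; the PEM inputs and 
hostname are placeholders.)

    from cryptography import x509
    from cryptography.x509.verification import (PolicyBuilder, Store,
                                                VerificationError)

    def chains_to_known_root(leaf_pem: bytes, intermediates_pem: bytes,
                             roots_pem: bytes, hostname: str) -> bool:
        """Keep a submitted cert only if it validates to a well-known root."""
        roots = x509.load_pem_x509_certificates(roots_pem)
        leaf = x509.load_pem_x509_certificate(leaf_pem)
        intermediates = x509.load_pem_x509_certificates(intermediates_pem)
        verifier = (PolicyBuilder().store(Store(roots))
                    .build_server_verifier(x509.DNSName(hostname)))
        try:
            verifier.verify(leaf, intermediates)
            return True
        except VerificationError:
            return False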


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


  1   2   >