Re: [cryptography] fonts and viruses

2015-12-15 Thread Marcus Brinkmann

I'd start here:

http://www.cvedetails.com/vulnerability-list/vendor_id-9705/product_id-17354/opec-1/Pango-Pango.html

But if you are looking for specific examples, I don't know any.

What you are looking for is bugs in the font rendering libraries, which 
are system dependent.


On 12/15/2015 01:23 PM, Givon Zirkind wrote:
> i've been researching this subject with few results.  is it possible
> to somehow include a virus in a font?  otf or ttf?

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Snowden Paywalled

2014-12-31 Thread Marcus Brinkmann
Thanks to whoever is keeping track of this.  It's important work, and
the latest Spiegel release is direct evidence of the fact that a lot of
information of extreme importance is hidden from the public.

On 31.12.2014 14:01, John Young wrote:
 Free: 577,131 documents (millions of pages) informing public debate:
 https://www.documentcloud.org/public/search/
 
 Paywalled: Snowden: 3,361 pp of 58K-1.7M files:
 http://cryptome.org/2013/11/snowden-tally.htm
 
 



Re: [cryptography] DES history

2014-05-07 Thread Marcus Brinkmann

On 05/07/2014 08:31 AM, Joshua Hill wrote:

On Mon, May 05, 2014 at 10:37:48PM +0200, Marcus Brinkmann wrote:

It is well known that the DES S-Boxes were specifically designed (by the
NSA, no less, back in the good ol' days) to protect against that attack.


This was the lore for years after the introduction of DES (and as you
mentioned, Schneier repeated this lore in his books), but this was
denied by Don Coppersmith (one of the cryptographers involved with the
DES S-box design) 20 years ago. Coppersmith states that cryptographers
within IBM independently knew of differential cryptanalysis as early as
1974, and that IBM did not publish a rationale for the selection of the
DES S-boxes because the NSA voiced concern over the publication of this
cryptanalytic technique.


Thanks for the link.  But let's be very careful here and not replace one
rumor with another: Coppersmith gives no attribution, and no description
of any process that led to the discovery of differential attacks.  He
also does not deny anything, and he does not claim "independent
knowledge" in the 1994 paper.  Maybe he did that elsewhere?  According to
Wikipedia, Steven Levy claims that IBM had independent knowledge, but I
don't know his evidence, and I don't have a copy of the book around.






Re: [cryptography] Best practices for paranoid secret buffers

2014-05-07 Thread Marcus Brinkmann

On 05/07/2014 05:56 AM, Tony Arcieri wrote:

- malloc/free + separate process for crypto
- malloc/free + mlock/munlock + secure zeroing
- mmap/munmap (+ mlock/munlock)


A separate process protects against a different threat than mlock/munlock
(the latter prevents the pages from being swapped out to the swap device).


Depending on your paranoia level, maybe scramble the buffer if it is 
held unused for a long time.  The scrambling secret should be short 
enough not to stick out like a sore thumb in a memory dump.  Although 
that probably won't help much (it works better if the secret key and the 
scrambling key are in different processes).
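The thread is prose-only; here is a toy sketch of the scramble-and-zero idea in Python, with invented key material.  A real implementation would live in C with mlock'd buffers, since Python gives no guarantees against stray copies or swapping.

```python
# Toy sketch (illustrative only): scramble a secret held in a mutable
# bytearray with a short repeating XOR key, and zero it when done.

def xor_in_place(buf: bytearray, key: bytes) -> None:
    """XOR-scramble (or unscramble) buf in place with a short key."""
    for i in range(len(buf)):
        buf[i] ^= key[i % len(key)]

def zero(buf: bytearray) -> None:
    """Overwrite the buffer in place before releasing it."""
    for i in range(len(buf)):
        buf[i] = 0

secret = bytearray(b"super secret key material")
key = b"\x5a\xc3"           # short, so it doesn't stand out in a memory dump

xor_in_place(secret, key)   # scramble while the buffer sits unused
xor_in_place(secret, key)   # XOR is its own inverse: unscramble
assert secret == bytearray(b"super secret key material")

zero(secret)                # wipe before the memory is reused
assert all(b == 0 for b in secret)
```

As noted above, keeping the scrambling key in a separate process would strengthen this considerably; in a single process, both keys sit in the same dump.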







Re: [cryptography] DES history

2014-05-05 Thread Marcus Brinkmann

On 05/05/2014 09:08 PM, Givon Zirkind wrote:

A question about DES.  Did anyone ever try to map or graph the routes
through the S-boxes?  I mean pictorially.  Do the routes produce some
kind of wave or path, that have (or have not) relationships with the
other routes?


This is a vague question, but here is a somewhat specific answer that 
may entirely miss what you are asking.


Any wave or path in the output would suggest (to me anyway) that linear 
pieces of the input are mapped to more or less linear pieces in the 
output (because any curve is just a number of connected straight 
lines, as far as these concepts transfer to discrete spaces).


Such dependencies in the S-Box would suggest a high degree of linearity,
which makes the cipher vulnerable to differential cryptanalysis [1].  This
is highly undesirable, and must be avoided.


It is well known that the DES S-Boxes were specifically designed (by the 
NSA, no less, back in the good ol' days) to protect against that attack.


This means that any trivial plotting of the DES S-Boxes should show a
highly non-linear dependence of the output on the input.
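To make "non-linear" concrete, here is a small sketch that measures the nonlinearity of a 4-bit S-box via its Walsh spectrum.  The example permutation below is arbitrary, not a DES S-box (the DES S-boxes map 6 bits to 4 and would need a slightly generalized routine).

```python
# Sketch: nonlinearity of an n-bit S-box via the Walsh transform.
# A high value means no linear function of the input bits approximates
# any linear function of the output bits well, so a naive plot of the
# S-box shows no straight-line structure.

def parity(x: int) -> int:
    """Parity of the bits of x (GF(2) dot product helper)."""
    return bin(x).count("1") & 1

def nonlinearity(sbox: list, n: int) -> int:
    """Nonlinearity = 2^(n-1) - max |Walsh coefficient| / 2."""
    best = 0
    for a in range(2 ** n):            # input mask
        for b in range(1, 2 ** n):     # nonzero output mask
            corr = sum(1 if parity((a & x) ^ (b & sbox[x])) == 0 else -1
                       for x in range(2 ** n))
            best = max(best, abs(corr))
    return 2 ** (n - 1) - best // 2

# A linear S-box (the identity) has nonlinearity 0 ...
assert nonlinearity(list(range(16)), 4) == 0

# ... while an arbitrary example 4-bit permutation scores higher.
example = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
           0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
print(nonlinearity(example, 4))
```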


As for the inter-dependencies between the different routes: S-Boxes (in
general) can be pairwise equivalent modulo some trivial transformation
(linear, affine, CCZ), and such equivalences could be plotted, showing an
interesting (in this case even linear, affine or CCZ) relationship.


You can find an analysis of the interdependency of the S-Boxes used in
various ciphers in [2], "A Toolbox for Cryptanalysis: Linear and Affine
Equivalence Algorithms" (Biryukov et al.), Section 5.  Specifically, for
DES (Sections 4.2, 5.3): "The algorithm showed that no affine
equivalences exist between any pair of S-boxes, with the single
exception of S4 with itself," which was apparently already derived by
looking at patterns in the lookup table in a 1976 paper by Hellman et
al., "Results of an initial attempt to cryptanalyze the NBS Data
Encryption Standard."
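The linear/affine equivalence algorithm of [2] is involved; as a far weaker but easily checked special case, one can brute-force whether two small S-boxes differ only by XOR constants, S2(x) = S1(x ^ a) ^ b.  The toy permutations below are invented for illustration.

```python
# Brute-force XOR-translation equivalence between two n-entry S-boxes:
# find constants (a, b) with S2(x) = S1(x ^ a) ^ b for all x, if any.
# This is a special case of the affine equivalence studied in the
# Biryukov et al. toolbox paper.

def xor_equivalent(s1, s2):
    n = len(s1)
    for a in range(n):
        for b in range(n):
            if all(s2[x] == s1[x ^ a] ^ b for x in range(n)):
                return (a, b)
    return None

# Arbitrary example 4-bit permutation.
s1 = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
      0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]
# A toy partner derived from s1 by shifting with a=3 and masking with b=9.
s2 = [s1[x ^ 3] ^ 9 for x in range(16)]

assert xor_equivalent(s1, s2) == (3, 9)
assert xor_equivalent(s1, list(range(16))) is None  # not related to identity
```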


I don't know of (but also haven't checked for) any low-degree (quadratic,
cubic) versions of the linear (or affine) analysis mentioned.


[1] http://en.wikipedia.org/wiki/Differential_cryptanalysis
[2] http://www.cosic.esat.kuleuven.be/publications/article-16.pdf

Does this answer your question?

Thanks,
Marcus




Re: [cryptography] DES history

2014-05-05 Thread Marcus Brinkmann

On 05/05/2014 10:37 PM, Marcus Brinkmann wrote:

On 05/05/2014 09:08 PM, Givon Zirkind wrote:

A question about DES.  Did anyone ever try to map or graph the routes
through the S-boxes?  I mean pictorially.  Do the routes produce some
kind of wave or path, that have (or have not) relationships with the
other routes?


[...]


I don't know of (but also haven't checked for) any low-degree (quadratic,
cubic) versions of the linear (or affine) analysis mentioned.


Replying to myself, a quick google search turns up a quadratic analysis:

Natalia N. Tokareva, "k-Bent functions and quadratic cryptanalysis of
block ciphers":


http://mc3.i3s.unice.fr/seminaires/seminaires_mc3/2007_2008/08-08-08_tokareva.pdf

Page 42: "We test permutations with the most high nonlinearity
NL = 4 recommended for using in S-boxes of GOST 28147-89, DES,
s3DES and found that for all of them (excepting one) our cryptanalysis
gives quadratic relations with probability 7/8 whereas any linear
equality has probability not more then 3/4."


Well, in any case, glancing over that paper may give you an idea of what
is involved in such analyses today: they are definitely not done visually,
but involve a lot of higher algebra.  It's not 1976 anymore :)


Thanks,
Marcus



Re: [cryptography] DES history

2014-05-05 Thread Marcus Brinkmann

On 05/06/2014 01:20 AM, Bernie Cosell wrote:

On 6 May 2014 at 8:35, Dave Horsfall wrote:


On Mon, 5 May 2014, Marcus Brinkmann wrote:


It is well known that the DES S-Boxes were specifically designed (by the
NSA, no less, back in the good ol' days) to protect against that attack.

If I recall Schneier, the S-boxes were *modified* by the NSA, not
designed.


More than that, the modifications *improved* the S-boxes --- they made
DES resistant to differential attacks that [AFAIK] weren't yet known in
the civilian community.  I think it was only after a few years that the
impact of their changes was understood [and that it was a good thing].


On rereading the Wikipedia article on DES history, the whole story seems 
to be considerably muddier than I recalled at first.


On the one hand, the article cites Schneier, Applied Cryptography (2nd
ed.), p. 280, quoting Alan Konheim (one of the designers of DES) with:
"We sent the S-boxes off to Washington. They came back and were all
different."


On the other hand, the article says that Steven Levy ("Crypto") claims
that IBM Watson researchers discovered differential cryptanalytic
attacks in 1974 and were asked by the NSA to keep the technique secret.


Yet again, the United States Senate Select Committee on Intelligence is
cited with: "In the development of DES, NSA [...] indirectly assisted in
the development of the S-box structures."


Also, the article cites a declassified NSA book on cryptologic history
with: "NSA worked closely with IBM to strengthen the algorithm against
all except brute force attacks and to strengthen substitution tables,
called S-boxes."


I guess a more careful review of the evidence is required to make heads
or tails of it.


Thanks,
Marcus



Re: [cryptography] Request - PKI/CA History Lesson

2014-05-02 Thread Marcus Brinkmann

On 05/01/2014 10:25 AM, Ben Laurie wrote:

On 1 May 2014 08:19, James A. Donald jam...@echeque.com wrote:

On 2014-04-30 02:14, Jeffrey Goldberg wrote:


On 2014-04-28, at 5:00 PM, James A. Donald jam...@echeque.com wrote:


"Cannot outsource trust": Ann usually knows more about Bob than a distant
authority does.



So should Ann verify the fingerprints of Amazon, and Paypal herself?



Ann should be logging on by zero knowledge password protocol, so that the
entity that she logs on to proves it already knows the hash of her password.


EXACTLY!!!


ZKPP has to be in the browser chrome, not on the browser web page.


This seems obvious, but experiments show users do not understand it.
We have yet to find a satisfactory answer to a trusted path for
ordinary users.


So where it really mattered we got two-factor authentication (by mobile
phone) instead.  I like the trade-off.  Using another untrusted path on
a different network and machine for a probabilistic guarantee seems more
reasonable to me than trying to build a trusted path on a single
machine, which was ambitious at the best of times, even before we knew for
a fact that we cannot trust a single embedded integrated circuit in any
device in the world.  And that is not even considering the usability and
accessibility issues of all the fancy trusted-path solutions that I've seen.


Security researchers cannot even guarantee that the status light of the
camera is on when it is recording images.






Re: [cryptography] Request - PKI/CA History Lesson

2014-05-02 Thread Marcus Brinkmann

On 05/02/2014 01:33 PM, ianG wrote:

For me the sentence, “I had little choice but to trust X” is perfectly
coherent.



Yes, that still works.  It is when it goes to "no choice" that it fails.
For example, I have no choice but to use my browser for online banking.
I'm too far from a branch, and their phone service is mostly about
telling me how to use the browser.


We must live in very different parts of the world, though.  In Germany,
if I am doing online banking, I have to follow the rules set by the
bank.  The bank requires me not to pass the PIN to anybody, to check the
browser status bar, to protect my TAN list, etc.  All that good stuff.


But I don't have to trust it.  When I follow the rules, and my money is
stolen, the bank has to pay for it.  I am in the clear (minus the
paperwork).


So, I don't have to trust it, I just have to use it as it is provided to 
me.  Moral dilemma avoided.


For the bank, the story is a different one altogether.  They don't care
about IT security, or security research, or PKI, or CAs, or browsers, or
the users, or the meaning of the word "trust".  They care about profit
margins and fraud quotas, and if the fraud gets to be too much they ask a
simple question: "What can we do that costs us as little as possible to
get the fraud quota down to the X percent that we allow?"  And if that
means bumping the key size from 1024 to 1025 bits, then we get 1025 bits
until the next bump.


So, frankly, what's the big deal?  We have credible end-to-end security
stories if your life depends on it (ask Snowden).  For everything
else, we have a bunch of patchwork, insurance, and adjustable
tolerances to protect against fraud.  Not absolutely, but enough to keep
the machine running.  From a manager's perspective, all is fine and dandy,
never mind the pain endured by the workers in the engine room.


As long as you live in a country that makes the people responsible for 
the system pay for any damages, it's just not that big a deal, unless 
you are passionate about IT security, or are suffering from some other 
illness to similar effect :).




Re: [cryptography] Request - PKI/CA History Lesson

2014-04-30 Thread Marcus Brinkmann

On 04/30/2014 02:59 PM, d...@geer.org wrote:


As is so often found, there are multiple nuanced definitions of a
word, trust being the word in the current case.

Simply as a personal definition, trust is that state wherein I accept
assertions at face value and do so because I have effective recourse
should having let my guard down later prove to have been unwise.

Restated as logic,

If I can trust, then I have effective recourse.

and in contrapositive

If I have no effective recourse, then I cannot trust.


That's funny, because by far the most prevalent definition of "trusted
systems" is that they are the systems whose failure can break your
security policy.  They must be trusted, because they are the last line
of defense.


If you have effective recourse, then by that definition trust is not 
required.


Think about the trust fall game that is played with children.  It 
wouldn't be the same with a mattress.


So, trust is something that you end up stuck with once you remove
everything you don't have to trust.  Trustworthiness, on the other hand,
is something that can be established, for example by introduction
(usually appealing to a higher authority), formal verification (which
requires transparency), or experience (at best probabilistic guarantees).





Re: [cryptography] Is it time for a revolution to replace TLS?

2014-04-25 Thread Marcus Brinkmann

On 04/25/2014 06:28 PM, Tony Arcieri wrote:

On Fri, Apr 25, 2014 at 1:42 AM, Peter Gutmann
pgut...@cs.auckland.ac.nz mailto:pgut...@cs.auckland.ac.nz wrote:

As with let's replace C with My Pet Programming Language, you can
write crap in any language you want.  The problem isn't the language


There's an entire class of memory safety bugs which are possible in C
but not possible in Rust. These also happen to be the class of bugs that
lead to Heartbleed-like secret leakage or remote code execution
vulnerabilities.


But that's just cherry-picking, and not a complete argument.  Clearly,
there are many other important factors to consider (good luck finding a
competent Rust developer).


There are also whole classes of bugs in memory-safe languages that can't 
occur in C, for example anything related to garbage collection.  That's 
not a complete argument either, but it shows how unconvincing arguments 
based on individual features must be.


The real tragedy is that we still don't know how to develop good 
software in any scientifically meaningful sense.  We have some 
experimental data, and a lot of folklore, but that's about it.



Heartbleed has also done a great job of illustrating that all the
band-aids they try to put on these sharp edges are also flawed.


Actually, we don't even know what direct damage the vulnerability in
Heartbleed caused, if any at all.  From an economic point of view,
Heartbleed probably was much less harmful than many other software
engineering failures, including those that were made purposefully with
good intentions, and/or in safe languages.





Re: [cryptography] DNSNMC replaces Certificate Authorities with Namecoin and fixes HTTPS security

2013-12-22 Thread Marcus Brinkmann

On 12/21/2013 10:04 PM, Eduardo Robles Elvira wrote:

The obvious problem with this is that namecoin doesn't have all the
domain names already registered assigned to the current owners, and
there's no arbitration authority that can prevent domain cybersquatting.


This is not a weakness of namecoin, but a weakness of human-readable names.

Why does coke.ch lead to the website of the Coca-Cola Company, and not
an informational website on heroin addiction?  Because someone at that
company decided to cybersquat this domain.



So I can register all the important domains: microsoft, ebay, google,
nsa, whitehouse,


They are only important if you value e-commerce, advertising and the US 
institutions more than the alternatives that could exist.


The solution to this is that names should not be claimed, they should be
given by the community that values the association.  Neither DNS nor
namecoin allows for that, so both are inadequate.  As an example,
consider how Wikipedia pages are named: http://en.wikipedia.org/wiki/Coke


This is painfully obvious, and yet we are mentally stuck in an 
authoritative model of naming.  If the use of words (in spoken language) 
were assigned like this, we would hate it.


Thanks,
Marcus




Re: [cryptography] DNSNMC replaces Certificate Authorities with Namecoin and fixes HTTPS security

2013-12-22 Thread Marcus Brinkmann

On 12/22/2013 12:58 PM, James A. Donald wrote:

On 2013-12-22 19:44, Marcus Brinkmann wrote:

The solution to this is that names should not be claimed, they should be
given by the community that values the association.  Neither DNS nor
namecoin allows for that, so both are inadequate.  As an example,
consider how Wikipedia pages are named: http://en.wikipedia.org/wiki/Coke


Wikipedia does a pretty good job on naming.  The names of Wikipedia
articles are not politicized, but its articles are severely politicized,
because they rely on Academia and the New York Times as final authority,
and Academia and the New York Times are politicized.


I agree, but who said there can only be one directory for names?  If
social groups disagree, they should each manage their own directory.


With the right tools, we could stack directories.  Most people will
prefer the mainstream bourgeois naming directory, while many might
choose to layer smaller special-interest directories on top of that.
Extremists will maintain their own exclusive directories, untainted by
mainstream naming.


And while you are at it, you can throw adblock in the mix, because 
manipulating DNS names (to point to /dev/null) is one of its tasks.
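As a sketch of such stacking (all directory contents below are invented; the block layer mirrors the adblock idea of pointing names at /dev/null):

```python
# Stacked name directories: a name is resolved against an ordered stack
# of community directories, the most specific layer winning.
from collections import ChainMap

mainstream = {"coke": "cocacola.example", "jaguar": "jaguarcars.example"}
special_interest = {"coke": "harm-reduction.example"}  # overrides mainstream
blocklist = {"tracker": None}                          # adblock-style layer

# ChainMap searches the layers left to right.
resolver = ChainMap(blocklist, special_interest, mainstream)

assert resolver["coke"] == "harm-reduction.example"  # special interest wins
assert resolver["jaguar"] == "jaguarcars.example"    # falls through
assert resolver["tracker"] is None                   # blocked (/dev/null)
```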



If it were naming keys, so that various entities each wanted their
own key given a certain popular name, naming keys would also be
politicized.

Yes, we should have some social procedure for naming names, so that the
major influence is what other people call the key, rather than what
the owner of the key wants the key to be called, but any such procedure
will come under attack.




Re: [cryptography] evidence for threat modelling -- street-sold hardware has been compromised

2013-07-30 Thread Marcus Brinkmann
On 07/30/2013 01:07 PM, ianG wrote:
 It might be important to get this into the record for threat modelling.
  The suggestion that normally-purchased hardware has been compromised by
 the bogeyman is often poo-pooed, and paying attention to this is often
 thought to be too black-helicopterish to be serious.  E.g., recent
 discussions on the possibility of perversion of on-chip RNGs.
 
 This doesn't tell us how big the threat is, but it does raise it to the
 level of 'evidenced'.

Not much evidence in the article.  This is the relevant part:

Members of the British and ­Australian defence and intelligence
communities say that malicious modifications to ­Lenovo’s circuitry –
beyond more typical vulnerabilities or “zero-days” in its software –
were discovered that could allow people to remotely access devices
without the users’ knowledge. The alleged presence of these hardware
“back doors” remains highly classified.

If you trust anonymous leaks to the Financial Review by members of your
favourite spying agency network, then I guess it's evidence.

Reading the actual classified reports would be more useful.

Thanks,
Marcus



Re: [cryptography] can the German government read PGP and ssh traffic?

2012-05-26 Thread Marcus Brinkmann
On 05/26/2012 08:01 AM, Peter Gutmann wrote:
 Marsh Ray ma...@extendedsubset.com writes:
 
 Perhaps someone who knows German can better interpret it.
 
 The government was asked "are encrypted communications creating any
 difficulties for law enforcement in terms of pursuing criminals and
 terrorists?".  The government replied "no, not really", so there's no need
 to restrict the use of crypto by the public.

Sorry, that's not quite it.  The Kleine Anfrage (small inquiry) is a
tool for the opposition to put pressure on the government, which has to
explain itself in response to specific questions on a certain topic.

In this inquiry, the left party asked questions about telecommunication
surveillance by the BND, (encrypted and/or unencrypted), which was
reported about in the German press.  In particular, they asked questions
about the keywords used for filtering, the technical methods for
filtering, the amount of filtering that has been done, if any successes
were documented, how politicians/attorneys/journalists and other
protected groups are protected, etc.  The goal is to either find
material that is embarrassing or at least to remind the government and
secret services of their boundaries.

In the end, most of the answers were secret and, as far as they were
given at all, are only available to members of the German parliament.

There is very little specific information that is not secret, for
example that 90% of the emails are spam, that 16400 keywords were used etc.

As far as decryption capabilities go, the text is very clear: the
software used to analyse the communication stream can, in principle,
decrypt and/or analyze at least some of it.  Note the qualifiers: "in
principle", "decrypt and/or analyze", "depending on type and quality of
encryption".

This really can mean anything and everything.  I would assume that it
just means that the software and services they are buying have
implemented some automatic decryption based on state-of-the-art
attacks on weak cryptography, or at the very least the ability to detect
encrypted streams and at least analyse the flow.

There is nothing in the answer that suggests that the BND has any
abilities beyond what is commercially available as the state of the art.

Also, there is nothing in the answer that suggests that there is no need
to restrict the use of crypto by the public.  That is just not within
the scope of the inquiry or the response.  The topic was surveillance of
telecommunication by the BND in general.

Encryption was only the topic of one question, which was, compared to
the others, not very detailed and not very carefully phrased.  Ideally,
they should have split the question in decryption and metadata analysis
and asked which methods are available in particular.  They dropped the
ball on this one, and thus the government and secret service could weasel
out with a typical non-answer: the smallest possible answer that is
truthful and covers the scope of the question.  The only other
possible answer would have been: "no, we cannot analyse or decrypt
encrypted communication streams at all", in which case the answer would
have been secret because it would reveal operational details that allow
suspects to evade the measure.

Thanks,
Marcus


Re: [cryptography] Chrome to drop CRL checking

2012-02-07 Thread Marcus Brinkmann

Hi,

On 02/07/2012 03:52 AM, Steven Bellovin wrote:

http://arstechnica.com/business/guides/2012/02/google-strips-chrome-of-ssl-revocation-checking.ars


While I am no fan of CRLs, I think it's worth mentioning that Google's 
primary objective here does not at all seem to be the security of 
anything except their position in the race for the fastest browser:


"online revocation checks are slow and compromise privacy. The median
time for a successful OCSP check is ~300ms and the mean is nearly a
second. This delays page loading and discourages sites from using HTTPS."


This is a very backward way to say that a 300ms faster response time 
encourages people to use Chrome over competing browsers.


The security argument itself seems very weak.  There is no evidence yet 
that the alternative strategy that Google proposes, namely letting them 
control the CRL list (and thus another part of the internet 
infrastructure), is any safer for the user in the long run.


Certainly the privacy concern that Google expresses (the CA learns
the IP address of users and which sites they're visiting) does
not extend to Google itself, which already has much more detailed
information about its users.


With a dubious motive and no clear advantage over the existing 
infrastructure, I'm underwhelmed.


Thanks,
Marcus


Re: [cryptography] Chrome to drop CRL checking

2012-02-07 Thread Marcus Brinkmann

On 02/07/2012 11:51 AM, Ben Laurie wrote:

The security argument itself seems very weak.  There is no evidence yet that
the alternative strategy that Google proposes, namely letting them control
the CRL list (and thus another part of the internet infrastructure), is any
safer for the user in the long run.


The point is that using this mechanism means Chrome always has an
up-to-date revocation list - as it is now, revocation checking can be
blocked and Chrome will allow revoked certs as a result.


I understood that, but that's just a story, not evidence.  A meaningful
analysis will not focus on a single story (Schneier's "Hollywood
plots"), but look at the issue from all angles and include some real data.



Certainly the privacy concern that Google expresses because the CA learns
the IP address of users and which sites they're visiting does not extend to
Google itself, which already has much more detailed information about its
users.


Since it is a push mechanism, Google does not get which sites the user
is visiting.


As written, that is a very misleading statement.  It's true that they 
don't get that data through the CRL mechanism.  But they still know 
which sites the user is visiting from several other mechanisms.  Google 
Chrome sends every letter typed into the URL or search box to Google 
Search, and Google Analytics keeps track in the background when you are 
not typing but navigating.  And that's just scratching the surface of 
the tracking and aggregation they are already doing.  On top of that, 
they can always turn the data mining screw if they need to.


That's not surprising of course (once you consider security economics), 
as a browser with strong privacy measures would undermine Google's 
business model and thus be a negative value proposition.  In contrast, 
for a CA it's the smarter business move to protect the privacy of the 
data collected.  The incentives are clear here and not in Google's 
favor.  The privacy argument is a red herring, and Google raising it is 
hypocritical.


Thanks,
Marcus


Re: [cryptography] Chrome to drop CRL checking

2012-02-07 Thread Marcus Brinkmann

On 02/07/2012 01:36 PM, ianG wrote:

On 7/02/12 20:56 PM, Marcus Brinkmann wrote:

Hi,

On 02/07/2012 03:52 AM, Steven Bellovin wrote:

http://arstechnica.com/business/guides/2012/02/google-strips-chrome-of-ssl-revocation-checking.ars




While I am no fan of CRLs, I think it's worth mentioning that Google's
primary objective here does not at all seem to be the security of
anything except their position in the race for the fastest browser:



The first thing to ask is whether CRLs/OCSP benefit security *at
all*.

Google's suggestion is no.  I would agree.  Theory predicts that the
combined weight of problems, well researched and experimentally measured
by now, will lead to revocation being more or less ineffective.


That seems to be their argument only when it comes to maintenance
revocations.  The argument rather seems to be that if you are
disconnected from the rest of the internet, you can only rely on two
sources of information: the ones given by the server you want to connect
to (the captive portal) and the data you are carrying around on your
computer.  So Langley wants to move the CRL to the local storage of the
computer, delivered with automatic updates.


Implicitly, Langley seems to suggest that the only useful data you are
allowed (or able) to carry around on your computer comes from Google
with automatic updates.


That's a false dilemma.  You could also extract trust from your cache,
i.e. your past experience with the same server (the SSH model), and/or
from your past connections with the internet (a CRL, or monitoring
servers differently from the Google Chrome autoupdater).
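The "SSH model" cache can be sketched as trust-on-first-use pinning (hostnames and keys below are invented for illustration):

```python
# Trust-on-first-use: on first contact the server's key fingerprint is
# cached; later connections are checked against the cache instead of
# asking any third party.
import hashlib

known_hosts = {}   # host -> pinned fingerprint

def fingerprint(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()

def check_host(host: str, pubkey: bytes) -> str:
    fp = fingerprint(pubkey)
    if host not in known_hosts:
        known_hosts[host] = fp      # first use: pin it
        return "pinned"
    return "ok" if known_hosts[host] == fp else "MISMATCH"

assert check_host("bank.example", b"key-A") == "pinned"
assert check_host("bank.example", b"key-A") == "ok"        # cache hit
assert check_host("bank.example", b"key-B") == "MISMATCH"  # key changed
```

The trade-off, as in SSH, is that the first connection is taken on faith, and a legitimate key rollover looks the same as an attack.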


Langley doesn't state why he is limiting the options in this way.  It is 
probably a mix of cultural bias and technical reasons (performance, etc).


In any case, the proposal still keeps an old-fashioned CRL around to check.

Later on, Langley seems to want to replace the CRL with a positive proof 
of freshness:


http://www.imperialviolet.org/2011/11/29/certtransparency.html

This is a better approach, but it remains to be seen how he intends to
organize the whole thing.  From the description it already seems
strictly weaker than the decentralised EFF Sovereign Keys approach.


Many things coming from Google make quite a lot of sense if one
jumps off the cliff and considers everything that Google knows about
oneself as "private" and everything that Google knows about everyone
else as "the majority view" (in the sense of a P2P network).  Like any
clever lie, this is partially true.  Google is good at protecting the
data it is collecting, and it has a wide network of servers that is hard
to attack by enclosing.  But the risk of abuse grows with the value of
the data and the number of people who can access it, and this should
give pause for thought.



(We've known this prediction since forever, 1998 is when I first heard it.)

We now have a few solid data points where all vendors decided not to
rely on CA revocation and instead issued new software.  So all vendors
agree.

So, if this is the case - revocation delivers no benefit - then rip the
bloody stuff out and make the browser faster and more reliable:


Thanks,
Marcus
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] CAPTCHA as a Security System?

2012-01-03 Thread Marcus Brinkmann
On 01/03/2012 04:08 AM, John Levine wrote:
> unusual, so if I were a scalper, I'd have a network of web proxies,
> to make it hard to tell that they're all me, a farm of human CAPTCHA
> breakers in Asia who cost maybe 5c per CAPTCHA, a large set of
> employees, friends, and relatives who will let me use their names and
> credit cards (for a small commission) and scripts that blast through
> Ticketmaster's web pages as fast as they can, so they can buy the
> tickets the moment they go on sale, before real humans can.

That's overstating the cost of captcha solving by more than an order of magnitude:

$1.39 per 1000 captchas from http://www.deathbycaptcha.com/
$0.7-$1 per 1000 captchas from http://antigate.com/
$2 (in 2009) for 1000 captchas from http://www.decaptcher.com/ (defunct)
$7 for 1000 captchas from http://www.decaptcher2.com/

This is just from their websites and forums, so I cannot vouch for the
quality of their service.
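For comparison, the advertised bulk rates above work out to a small fraction of the quoted 5c figure (the antigate entry below takes the midpoint of its advertised $0.7-$1 range):

```python
# Per-captcha cost of the advertised bulk rates versus the quoted
# "maybe 5c per CAPTCHA" figure.
quoted = 0.05                          # 5 cents each
bulk = {                               # dollars per 1000 captchas
    "deathbycaptcha.com": 1.39,
    "antigate.com": 0.85,              # midpoint of $0.7-$1
    "decaptcher.com (2009)": 2.00,
    "decaptcher2.com": 7.00,
}
for name, per_1000 in bulk.items():
    each = per_1000 / 1000
    print(f"{name}: ${each:.4f} each, {quoted / each:.0f}x below 5c")
```

Even the most expensive listed service comes in around 7x below 5c per captcha, and the cheapest around 60x below.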

Thanks,
Marcus


Re: [cryptography] CAPTCHA as a Security System?

2012-01-02 Thread Marcus Brinkmann
On 01/02/2012 06:58 PM, Jeffrey Walton wrote:
> I was reading "CAPTCHA: Using Hard AI Problems For Security" by Ahn,
> Blum, Hopper, and Langford (www.captcha.net/captcha_crypt.pdf).
>
> I understand how recognition is easy for humans and hard for computer
> programs.

But is that really true?

My personal experience with CAPTCHAs is that they are increasingly hard
for humans to decipher.  Have the scales already tipped in favor of
computer programs?

Computer programs today are limited by the attention of experts
(programmers, researchers).  What does "hard for computer programs"
actually mean, then?  Is there a theoretical boundary that limits the
ability of computer programs to recognize captchas, or is Ahn just
exploiting a temporary lack of economic incentive to realize the full
capabilities of computer systems for this kind of problem?

IMO, the problems that computers are really (as opposed to currently) bad at
often turn out to be the problems that defy objective solutions.  Many
reCAPTCHA (OCR) problems are ambiguous.  If there is no objective solution to
a problem, how can performance be evaluated?
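One workaround, sketched below with entirely made-up data, is to measure agreement between independent human transcriptions rather than accuracy against a nonexistent ground truth:

```python
# Toy illustration: with no objective solution, a proxy for how
# "solvable" a captcha is can be the agreement rate among independent
# human transcriptions (all data here is invented for the example).
transcriptions = {
    "img1": ["morning", "morning", "moming"],
    "img2": ["overlooks", "overl00ks", "overlooks"],
    "img3": ["rn8x", "m8x", "rn8x"],
}

def agreement(answers):
    """Fraction of answers matching the most common answer."""
    top = max(set(answers), key=answers.count)
    return answers.count(top) / len(answers)

for img, ans in transcriptions.items():
    print(img, round(agreement(ans), 2))   # 0.67 for each example above
```

Low agreement among humans suggests the problem is ambiguous rather than merely hard, which is exactly the case evaluation by accuracy cannot handle.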

> Where is the leap made that CAPTCHA is a [sufficient?]
> security device to protect things like web accounts, email accounts,
> and blog comments? It seems to me that a threat model in which bots
> (ie, programs) are the only adversary is flawed.

Luis von Ahn's favorite subject is human computation.  A separation
between (the capabilities of) humans and computers is axiomatic to his
research; otherwise his whole subject would evaporate.

There are two fundamental assumptions made: First, there are problems that are
hard for computers to solve but easy for computers to generate.  Second, the
bad guys can muster huge computational resources but few human resources.

The first assumption is, at least for the time being, a rejection of the
Church-Turing conjecture.

The second assumption is an extrapolation of past experience into the
future, and as such is very optimistic, if not naive.

I don't know of any justification offered for either dogma.  Ahn's PhD
thesis[1] is surprisingly devoid of a theoretical underpinning of his
work; in fact, it does not even contain the phrase "Church-Turing".  It
is also completely devoid of any security analysis.

You'd think that a PhD thesis about human computation applied to
security problems would at least contain something on either, but if it
does, I can't find it.

[1] http://www.scribd.com/doc/2533967/Human-Computation-PhD-Thesis-Luis-von-Ahn

Thanks,
Marcus