Re: [cryptography] Secure universal message addressing

2016-04-04 Thread ianG

On 4/04/2016 15:55 pm, Natanael wrote:


After spending way too much time thinking about how to design a secure
universal message passing platform that would work for IM, email,
push messages and much more, I just ended up with a more complex version
of XMPP that won't really ever have lower latency, be scalable or be
simpler to operate or even be secure at all. So I dropped that idea.

Then I ended up thinking about addressing instead. If building one
single universal communication protocol is too hard, why couldn't it
still be simple to have one single universal protocol for identifying
recipients / users? It would allow each user to have one single unique
global identifier which can be used to find out which communication
protocols each other user supports and how to connect to them.



You're trying to build a tool.  Then, when that becomes hard, you're 
switching to another tool that is more or less as hard.


Instead, how about setting up a set of requirements which are driven by 
users?  Although sometimes a boring process, it can drive a real design 
much more cleanly because there is a reason for every choice - a reason 
that relates up to what the user needs.



E.g.:


We need secure push messaging, IM, mail and much more,


Like that - except much more and more written down!


... If connecting secure
protocols to your account is easy and transparent for everybody
involved, there would be much less resistance towards changing clients.


"Can use multiple secure protocols as underlying transport?"


...Opening the
contact details for a person would simply show you which protocols you
both already support, and which additional ones they support that you don't.


"Has contact management for each person that does...XXX"
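To make that contact view concrete: it boils down to a set comparison over capability records. A toy Python sketch, with all names and protocol labels made up for illustration:

```python
# Hypothetical capability records: one global identifier maps to the
# set of messaging protocols that identity advertises.
alice = {"xmpp", "email", "signal"}
bob = {"xmpp", "email", "matrix"}

shared = alice & bob    # protocols both already support, usable now
extra = bob - alice     # ones Bob supports that Alice could add

print(sorted(shared))
print(sorted(extra))
```

The hard part, of course, is not the set arithmetic but publishing and authenticating the records themselves.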


The key idea here is that you get to have *one* identifier for yourself
under your control, that you can use everywhere, securely. Knowing that
people have your real address should provide a strong guarantee that
messages from them to you will go only to you. And you shouldn't need to
change address because you changed messaging services.


"A person has one identifier in another person's client?"


How would you guys go about designing a system like what I describe?



Like that above - requirements driven by business/people behaviour.


iang
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] a new blockchain POW proposal

2016-01-23 Thread ianG

On 17/01/2016 10:13 am, travis+ml-rbcryptogra...@subspacefield.org wrote:

I'm embarrassed by the long, rambling post. It was notes to myself,
which I then circulated to my friends and forwarded without editing.
I should summarize.

0) Bitcoin is amazing technology.  Truly neat.  Many related ideas,
must have taken a long time to develop.  Impressive.  Caught
me way off guard back when it was posted here.
1) Can we use SAT (or another NPC problem) as a POW?
If I'm not mistaken, a hash preimage attack can be phrased as a SAT problem.
2) Can we efficiently enumerate the aforementioned NPC problem space
and map to and from ordinals?
3) Would there be any problems in allowing people to solve a problem
defined in advance, rather than having it vary based on the current
block?


Not in the current design because each block refers by hash to the 
previous.  Also, the design of the lottery is based on surprise to try 
and get everyone starting at the same position.
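That back-reference is the whole trick; a toy Python sketch of blocks committing to their predecessor by hash (no PoW, just the linkage):

```python
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    # A block commits to its predecessor by hashing over its hash.
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

genesis = block_hash("0" * 64, "genesis")
b1 = block_hash(genesis, "tx-batch-1")
b2 = block_hash(b1, "tx-batch-2")

# Any change upstream changes every later hash, which is why work on a
# problem defined in advance can't be grafted onto the live chain.
forged = block_hash(block_hash("0" * 64, "genesis-altered"), "tx-batch-1")
assert forged != b1
```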



4) Would it be useful to decouple any of the aspects of the block chain
from each other?  Could one decouple the financial impacts from the
cryptographic operations from the persistent, distributed storage?



It turns out that Bitcoin is incredibly well balanced in its 
interlocking assumptions.  Although it looks like a grabbag of tricks, 
it is actually carefully interconnected.


The key assumption is that all participants are equivalently anonymous. 
Therefore anyone can pretend to be as many people as they like.  Hence 
the vote on control has to rest on some unforgeable differentiating 
thing, which ends up being energy (PoW) in Bitcoin's case (proof of 
stake is also popular).


Energy costs money so it has to be paid for somehow, so we need the 
money creation to empower the mining, and we need to provide a payment 
system so as to encourage people to demand the money to incentivise the 
miners to produce otherwise worthless leading-zero hash numbers.


If you drop the "equivalently anonymous" assumption then every other 
aspect collapses.  Hence the anti-school of "private or permissioned 
blockchains," oxymoron.




5) Would it be useful to create hash lattices rather than a single
chain for some purposes?  What other structures might be useful?


So back off a bit and ask what you are trying to achieve?  Tinkering at 
the edges is fun, but pointless.


There's some thinking about sharding the blockchain because that's the 
only way to go massively scaled to say IoT levels.  Also a lot of 
thinking as to what happens when you relax the anonymity condition.




6) Could we create markets around the various services required to
implement the block chain in a way that creates incentives that
align with the overall goals? In other words, can the design
be a game-creating-game which serves a higher goal.  The
work product of mining can be polished and resold in jewelry,
perhaps in other markets.  This could pay for running the chain
storage.



One of the problems in markets is that it is terrifically hard to get 
specialisations up and going by planning, because you need to coordinate 
multiple groups at the same time.  In this sense, bitcoin started out as 
"everyone was a node" and then it bifurcated to miners and payments 
nodes and then again to full nodes and SPV nodes.  Evolution worked, but 
if you planned it to bootstrap like that you'd likely fail because of 
chicken & egg mechanics.




7) Can that goal include more efficient software and hardware?
Mine for great good.


The doctrinal argument is that if there is another purpose to the 
mining, then the security is weakened because it comes for less money. 
This goes back to Gresham's observation that money with multiple 
purposes has strange artifacts.  Popularly, "bad money drives out the 
good" - although that is only the popular saying; the analysis is 
subtler.  So in the bitcoin world of today there are multiple issues 
going on with the money source - i.e. the power costs vary, which causes 
those artifacts to kick in and feed back into the ecosystem.


So ideally we would look for a more perfect distribution of the lottery, 
which would hopefully replace the PoW.  E.g., instead of using PoW to 
designate the winner, use the hash of the last block to appoint the 
decider of the next block.  If you can get the hash to be truly 
unpredictable (e.g., I can't frontrun myself by pre-predicting myself as 
the winner) then a more perfectly distributed lottery would remove the 
need for energy burning at all.
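The shape of that lottery is simple to sketch; a toy Python illustration (a real design would need a bias-resistant beacon, since whoever produces the block can grind the hash to frontrun the selection, exactly the caveat above):

```python
import hashlib

participants = ["alice", "bob", "carol", "dave"]  # hypothetical identities

def next_decider(last_block_hash: bytes, members: list) -> str:
    # Use the previous block's hash as a (weak) random beacon:
    # reduce it modulo the member count to appoint the next producer.
    n = int.from_bytes(hashlib.sha256(last_block_hash).digest(), "big")
    return members[n % len(members)]

winner = next_decider(b"\x01" * 32, participants)
assert winner in participants
```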




8) Other than this list, where else might I find influential
people who know more than I about this stuff, to pick their
brain?  I am in SF/BA, IRL, if that matters.


There are meetups in that area.


9) I'm sure there are problems with this idea.  If you would kindly
correct my inadequate understanding I would much appreciate.


[cryptography] GCHQ puzzler for xmas

2015-12-14 Thread ianG

http://www.bbc.co.uk/news/uk-35058761

Britain's most secretive organisation - GCHQ - has added a cryptic twist 
to Christmas card season by including a baffling brainteaser.
This year spy agency director Robert Hannigan is sending out a complex 
grid-shading puzzle inside his traditional Christmas cards of the 
nativity scene.
Successful codebreakers will uncover an image in the grid that leads to 
a series of tougher challenges.

Those not on the card list can have a go here or on the GCHQ website.
Mr Hannigan is asking players who complete all the stages to submit 
their answer to GCHQ by the end of January.
Those who enjoyed the challenge are asked to make a donation to the 
National Society for the Prevention of Cruelty to Children.



[cryptography] attacks on packet length may be surprisingly good: Hookt on fon-iks

2015-10-24 Thread ianG

Phonotactic Reconstruction of Encrypted VoIP Conversations:
Hookt on fon-iks

Abstract—
In this work, we unveil new privacy threats against Voice-over-IP (VoIP) 
communications. Although prior work has shown that the interaction of 
variable bit-rate codecs and length-preserving stream ciphers leaks 
information, we show that the threat is more serious than previously 
thought. In particular, we derive *approximate transcripts* of encrypted 
VoIP conversations by segmenting an observed packet stream into 
subsequences representing individual phonemes and classifying those 
subsequences by the phonemes they encode. Drawing on insights from the 
computational linguistics and speech recognition communities, we apply 
novel techniques for unmasking parts of the conversation. We believe our 
ability to do so underscores the importance of designing secure (yet 
efficient) ways to protect the confidentiality of VoIP conversations.


http://wwwx.cs.unc.edu/~kzsnow/uploads/8/8/6/2/8862319/foniks-oak11.pdf



My emphasis - I'd love to see some examples... iang


Re: [cryptography] [Cryptography] WikiLeaks Hosts Cryptome with Search

2015-10-19 Thread ianG

On 19/10/2015 18:42 pm, John Young wrote:

WikiLeaks Hosts Cryptome with Search

https://cryptome.wikileaks.org



Congrats!  Nice mix.

iang



Re: [cryptography] Should Sha-1 be phased out?

2015-10-17 Thread ianG
... think about using SHA4.




iang


Re: [cryptography] no, don't advertise that you support SSLv2!

2015-08-04 Thread ianG

On 4/08/2015 05:29 am, Patrick Pelletier wrote:

I was on an e-commerce site today, and was horrified when I saw the
following badge:

https://lib.store.yahoo.net/lib/yhst-11870311283124/secure.gif

Did they still have SSLv2 enabled?  I checked, and luckily they don't:

https://www.ssllabs.com/ssltest/analyze.html?d=us-dc2-order.store.yahoo.net

So, it's not as bad as their badge claims, but still, they only get a
C.  (They support only one version: TLS 1.0.)  I would've thought a big
Web property like Yahoo could do better.  :(



Why is this any different from a web browser showing users a padlock 
that tells them they're secure?




iang



Re: [cryptography] Timeline graphic of hacking attacks

2015-05-26 Thread ianG

On 26/05/2015 22:28 pm, Michael Nelson wrote:

http://recenthacks.com/

This new site has a timeline of hacking attacks (Target, Sony, Tesla,
etc.).  You can click on an attack and see a summary.  It starts early
2013.  Though it's a new site, I find it surprisingly useful -- both to
recall what an attack was, and to get a feel for the range of attacks
out there.  Built by security jock Paul Chen.



That's a keeper, definitely gets a link on my CA history of threats:

https://wiki.cacert.org/Risk/History

Which lacks any sexy graphics.


iang



Re: [cryptography] NIST Workshop on Elliptic Curve Cryptography Standards

2015-05-11 Thread ianG

On 11/05/2015 17:56 pm, Thierry Moreau wrote:

On 05/09/15 11:18, ianG wrote:

Workshop on Elliptic Curve Cryptography Standards
June 11-12, 2015

Agenda now available!

The National Institute of Standards and Technology (NIST) will host a
Workshop on Elliptic Curve Cryptography Standards at NIST headquarters
in Gaithersburg, MD on June 11-12, 2015.  The workshop will provide a
venue to engage the cryptographic community, including academia,
industry, and government users to discuss possible approaches to promote
the adoption of secure, interoperable and efficient elliptic curve
mechanisms.


I doubt the foremost questions will be addressed:

To which extent NSA influence motivates NIST in advancing the ECC
standards?



John Kelsey, chief of something or other at NIST, gave a pretty 
comprehensive talk on the NSA issue for NIST at Real World Crypto in 
January [0].  My take-away is that they are taking it seriously.


From memory, there wasn't anything directly spotted for the ECC stuff, 
but there has been this rising tide of demand for new curves ... so 
maybe now is the time.




Can independent academia members present hypothetical mathematical
advances (even breakthroughs) that NSA could have made, or could
speculatively expect to make, in order for the NSA to provide the US a
cryptanalysis advance over the rest of the world (central to NSA mission).



If you're saying, can the academics stumble across something that the 
NSA had beforehand, well, of course.  But I'm not sure that's what you mean.



To which extent the table of key size equivalences (between
factoring-based cryptosystems and ECC schemes) is biased for a faster
adoption of ECC (e.g. it makes sense to move to ECC because the
equivalent RSA key sizes are inconvenient)?

NIST has been unquestionably useful for the cryptographic community with
the AES and SHA competitions. The outcome of the former is a widely
deployed improvement over prior symmetric encryption algorithms. The
outcome of the latter appears less attractive for adoption decisions,
but the very challenge of an efficient secure hash algorithm seems to
be the root cause, and not the NIST competition process.

With ECC, I have less confidence in NIST ability to leverage the
cryptographic community contributions.


Yeah, curves look much harder than hashes and ciphers.  But is there a 
better option?



iang

[0] 
http://www.realworldcrypto.com/rwc2015/program-2/RWC-2015-Kelsey-final.pdf?attredirects=0




[cryptography] NIST Workshop on Elliptic Curve Cryptography Standards

2015-05-09 Thread ianG

Workshop on Elliptic Curve Cryptography Standards
June 11-12, 2015

Agenda now available!

The National Institute of Standards and Technology (NIST) will host a 
Workshop on Elliptic Curve Cryptography Standards at NIST headquarters 
in Gaithersburg, MD on June 11-12, 2015.  The workshop will provide a 
venue to engage the cryptographic community, including academia, 
industry, and government users to discuss possible approaches to promote 
the adoption of secure, interoperable and efficient elliptic curve 
mechanisms.


Register by June 4, 2015.  There is no on-site registration for meetings 
held at NIST.


Agenda, registration and workshop details are available at the workshop 
website:  http://www.nist.gov/itl/csd/ct/ecc-workshop.cfm





iang (as forwarded by Russ to [saag])


Re: [cryptography] NSA Apple DPA Cryptanalysis

2015-03-11 Thread ianG

On 11/03/2015 05:25 am, Peter Gutmann wrote:

ianG i...@iang.org writes:


We will also describe and present results for an entirely new unpublished
attack against a Chinese Remainder Theorem (CRT) implementation of RSA that
will yield private key information in a single trace.

An actual cryptography breach!  Outstanding if true...


No, just a DPA attack, you've only quoted the last part of the full paragraph,
which is about DPA attacks.

(Before I read the full report my reaction was they specifically mentioned
RSA CRT, it's either a fault attack or DPA, because if the attack description
includes RSA CRT then it's a sure sign that it'll be one of those two).



Oh I see.  Right, that makes sense - they say "implementation," so there 
is something fishy about the code.


OK, something to put on the list of things to do the constant time 
makeover on, or at least the don't leak bits pass over.
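For context on why RSA-CRT is the classic target here: the private operation does two half-size exponentiations, one per prime, so a trace that leaks either exponent exposes a factor of the modulus. A toy Python sketch with textbook-tiny numbers (no padding, not constant-time - exactly the kind of code that needs the makeover):

```python
# Toy RSA-CRT with tiny primes; real code must be constant-time.
p, q, e = 61, 53, 17
n = p * q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent (Python 3.8+ modular inverse)

# CRT precomputation: the per-prime exponents are what a DPA trace targets.
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

def crt_decrypt(c: int) -> int:
    m_p = pow(c, dp, p)      # exponentiation mod p - leaks bits of dp
    m_q = pow(c, dq, q)      # exponentiation mod q - leaks bits of dq
    h = (q_inv * (m_p - m_q)) % p   # Garner recombination
    return m_q + h * q

msg = 42
c = pow(msg, e, n)
assert crt_decrypt(c) == msg
```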


Maybe a summer internship for a student?

/me musing on likely context of attacking the CRT ... suggests they have 
already breached the inner perimeter to do measurements, and know when 
the key is being made, and can run their evil listener.




iang



ps: Note their pride in expressing the "entirely new unpublished attack" 
... for those who are questioning where the NSA is wrt the open source 
world, such snippets tell us we're not that far away.



Re: [cryptography] NSA Apple DPA Cryptanalysis

2015-03-10 Thread ianG

On 10/03/2015 11:38 am, John Young wrote:

The Intercept has released files on Apple, DPA and other
cryptanalysis:

http://cryptome.org/2015/03/nsa-apple-dpa-intercept-15-0309.zip (12pp,
1.9MB)


tpm-vulnerabilities... 16th March 2012?

We will also describe and present results for an entirely new 
unpublished attack against a Chinese Remainder Theorem (CRT) 
implementation of RSA that will yield private key information in a 
single trace.




An actual cryptography breach!  Outstanding if true...


Re: [cryptography] Crypto Vulns

2015-03-10 Thread ianG

On 7/03/2015 15:23 pm, John Young wrote:

No 1 vulnerability of crypto is the user
2nd passphrases
3rd overconfidence
4th trust in the producer
5th believing backdoors are No. 1



I would have said that the #0 vulnerability is failing to deliver 
anything that the user sees.  Because of over-engineering, 
over-committeeing or over-consulting (h/t to PHB's rework process).


And the #1 vulnerability is delivering something to the user that she 
walks away from.  OK, that aligns somewhat with your No 1 above...


Also known as K6.



iang



[cryptography] PGP word list

2015-02-18 Thread ianG

On 18/02/2015 10:32 am, Ryan Carboni wrote:

Can't trust anything, except the mail.

Only solution: personally encrypt messages by hand, using computers and
GPG only for transmitting master keys if the keys cannot be delivered in
person.

https://en.wikipedia.org/wiki/PGP_word_list



Wow.  I never even knew that existed!

Is there any experience of the word list in use?  Any research?

On the face of it, it would make things a lot easier for ordinary people 
to share that hex stuff when doing known key exchanges.
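For anyone who hasn't seen it: the list maps each byte to a word, with two 256-word lists alternated by byte position so a dropped or doubled word desynchronises and is detectable. A sketch of the mechanism with made-up placeholder words (the real 256-entry lists are in the Wikipedia article):

```python
# Toy 4-word lists standing in for the real 256-word even/odd lists.
EVEN = ["alpha", "bravo", "delta", "echo"]               # placeholder two-syllable list
ODD = ["adventure", "bicycle", "dominion", "elephant"]   # placeholder three-syllable list

def to_words(data: bytes) -> list:
    # The byte value indexes the list; the byte position (even/odd)
    # picks which list, so transposition errors show up immediately.
    return [(EVEN if i % 2 == 0 else ODD)[b] for i, b in enumerate(data)]

assert to_words(bytes([0, 1, 2])) == ["alpha", "bicycle", "delta"]
```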


An observation:  calling it the PGP word list is just boring.  It needs 
an exciting name that gets people looking it up out of curiosity.




For my own experiences:  with mobile we went through various 
incarnations to transfer small keys (not share known keys) and finally 
settled on a 4 * 26 character alphabet code.  The reason for this was 
that most small phones make you toggle between numbers and letters, 
which is really clunky.  And the letter alphabet is bigger than the 
digits, so stick with letters.
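A base-26 letters-only encoding along those lines is easy to sketch; this is my guess at the shape of the scheme, since the original details aren't given here:

```python
import string

ALPHABET = string.ascii_uppercase  # 26 letters, no keypad mode-switching

def encode26(data: bytes) -> str:
    # Treat the bytes as one big integer, emit base-26 letters,
    # and group them in fours for reading aloud.
    n = int.from_bytes(data, "big")
    letters = ""
    while n:
        n, r = divmod(n, 26)
        letters = ALPHABET[r] + letters
    letters = letters or "A"
    return " ".join(letters[i:i + 4] for i in range(0, len(letters), 4))

code = encode26(b"\x12\x34\x56\x78")
```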


On the initiating phone it prints the code in huge letters and 
underneath the phonetics in smaller type.




iang


Re: [cryptography] [Cryptography] Equation Group Multiple Malware Program, NSA Implicated

2015-02-17 Thread ianG

On 17/02/2015 15:56 pm, Jerry Leichter wrote:

On Feb 17, 2015, at 6:35 AM, ianG i...@iang.org wrote:

Here's an interesting comparison.  Most academic cryptographers believe
that the NSA has lost its lead:  While for years they were the only ones
doing cryptography, and were decades ahead of anyone on the outside, but
now we have so many good people on the outside that we've caught up to,
and perhaps even surpassed, the NSA.  I've always found this reasoning a
bit too pat.  But getting actual evidence has been impossible.


I'd rather say it this way:  we have circumstantial evidence that we are at 
about the same level for all practical purposes and intents.  As far as we are 
concerned.

What evidence is there for this?


Snowden saying encryption works.  EquationGroup use of RC4-6, AES, 
SHAs.  FBI complaining about going dark, we need backdoors - they only 
ever complain at that level as proxy for NSA, and same complaint is 
repeated in rapid succession in UK, DE.  Practically all the exploits so 
far disclosed are about hacking the software, hardware, nothing we've 
seen comes even close to hacking the ciphers.  Some of the interventions 
are about hacking the RNGs - which typically take the cryptanalysis to 
places where we can hack it.  Off-the-record comments I've heard. 
Analysis of released systems such as Skipjack.


It's all circumstantial.



There's a bit of a difference.  I'd say they are still way ahead in 
cryptanalysis, but not in ways that seriously damage AES, KECCAK, etc.

Again, do you have any evidence?


There is the story about differential cryptanalysis - they released the 
first 4 volumes, but still haven't mentioned the other 4 ;-)



It's not that I have evidence the other way.  We just don't know.



At one level, this all comes down to your model of science.  Typically 
we in the science world like to know stuff based on evidence from 
experiments, or similar facts that have been built up over time.  We are 
very careful to not let our imagination run away with us.


But this doesn't work with the spy business.  They will never let us run 
the experiment, they will not let us read the literature, and if we ever 
find enough to put 2+2 together, they'll run a deception campaign to 
break that logic.  Or lie.  Or they will remind us that you don't know 
or all of the above.


So we have to develop a better approach.  We can probably benefit from 
thinking of the question as a murder investigation - clues, hypotheses, 
correlations, etc.  We can't take it to a court of law -- they deny us 
that as well -- but we can form a view as to whodunnit.


Many won't accept that view, of course.  To them I say, you're dancing 
to their tune.



What concerns me is that most of the arguments are faith-based - the kind of arguments 
that support "open always wins":  No matter how big/smart you are, there are more smart 
people who *don't* work for you than who *do*, and in the long run the larger number of people, 
openly communicating and sharing, will win.  And yet Apple sold more phones in the US last quarter 
than all Android makers combined - the first time they've been in the lead.  It's not even clear 
how to compare the number of smart cryptographers inside and outside of NSA - and NSA has more 
funding and years of experience they keep to themselves.  This is exactly how organizations win 
over smart individuals:  They build a database of expertise over many years, and they are patient 
and can keep at it indefinitely.


Right.  I'm surprised Android sells any phones in USA market.  Although 
I understand that it is the only way to compete with Apple, it is also 
the weaker position.  Which comes out in a price insensitive market. 
OTOH, I'm surprised to see an iPhone in Africa ;)




In contrast, I'd say we are somewhat ahead in protocol work.  That is, the push 
for eg CAESAR, QUIC, sponge construction, is coming from open community not 
from them.

Why would they push for new stuff out in the open world?


Maintenance of protocols is really hard, really expensive.  I know, I 
manage a 100kloc code base with several hard crypto protocols in it, and 
I'm drowning, perpetually.  Whatever we can do to get that into the open 
source world, the better.




They *should* be pushing for it, because they *should* be putting more emphasis 
on defense of non-NSA systems.


Yes.  That is the huge mystery.  It's pretty clear the NSA is doing the 
non-NSA mission huge damage.  Yet no movement on the priorities, just 
blather about 'sharing' from Obama.  That's a mystery.




But what we've seen confirmed repeatedly over the last couple of years is that 
they have concentrated on offense - and against everything that *isn't* an NSA 
system.


Right.  I think that we know, even though they won't release much 
evidence of it ;)



(To the point where they've apparently even neglected defense of their own 
internal systems:  What Snowden did was certainly something they *thought* they 
had

Re: [cryptography] [Cryptography] Equation Group Multiple Malware Program, NSA Implicated

2015-02-17 Thread ianG

On 17/02/2015 00:58 am, Jerry Leichter wrote:

On Feb 16, 2015, at 3:39 PM, John Young j...@pipeline.com wrote:
Kaspersky Q and A for Equation Group multiple malware program, in use early

as 1996. NSA implicated.

https://securelist.com/files/2015/02/Equation_group_questions_and_answers.pdf

Dan Goodin: How “omnipotent” hackers tied to NSA hid for 14 years - and
were found at last

http://ars.to/1EdOXWo

Two articles that are well worth reading.

Back in the 1980's, I knew a bunch of the security guys at DEC.  While
this was a much less threatening time, even the DEC internal network of
that period saw attacks here and there.  What the security guys said was
that they had all kinds of attacks that they would find, analyze, and
lock out. But there was this residual collection of ghosts:  They'd
see hints that some kind of attack had taken place, but they
couldn't find any detailed trace of how, where, or by whom.  The guys
doing it could get in and out and at most leave a bit of an odd,
unexplainable event behind.  They assumed it was government attackers,
but could never prove anything.

It should be no surprise that this kind of thing has been going on for
years.  The first papers on attacks on and defenses of computer systems
from a military point of view go back to the 1970's.  (The Air Force
took the early lead - or perhaps they just let more out.)  For a while,
some of this work was in the open; the famous Rainbow Series of reports
was one result.  But then it all went dark - a fact that's now obvious
in retrospect, though I don't recall anyone commenting on it at the
time.  (One wonders if this was the result of the NSA taking over fully.)

With unlimited funding and years of practice, these guys are way ahead
of the rest of us.



Back in late 2000s, there was a surge in interest in APTs and the 
industrial-military contractors went on a shopping spree looking for 
cyber-warriors.  At the time I discounted it as yet another hype thing, 
but it seems that it happened, and we're now in a cyber-arms race.




Here's an interesting comparison.  Most academic cryptographers believe
that the NSA has lost its lead:  While for years they were the only ones
doing cryptography, and were decades ahead of anyone on the outside, but
now we have so many good people on the outside that we've caught up to,
and perhaps even surpassed, the NSA.  I've always found this reasoning a
bit too pat.  But getting actual evidence has been impossible.



I'd rather say it this way:  we have circumstantial evidence that we are 
at about the same level for all practical purposes and intents.  As far 
as we are concerned.


There's a bit of a difference.  I'd say they are still way ahead in 
cryptanalysis, but not in ways that seriously damage AES, KECCAK, etc.


In contrast, I'd say we are somewhat ahead in protocol work.  That is, 
the push for eg CAESAR, QUIC, sponge construction, is coming from open 
community not from them.  In the 1990s we infamously blundered by 
copying their threat model;  now no longer, we have enough of our own 
knowledge and deep institutional experience to be able to say that's 
garbage, our customers are different.  And our needs are pushing the 
envelope out in ways they can't possibly keep up with.


Although, I could be wrong here - Equation team reports from Kaspersky 
didn't say much about the protocols they were using to exfiltrate, just 
that they had a fetish for Ron's ciphers.




So now we have some evidence from a closely related domain.  It's not as
if the world isn't full of people attacking software and hardware, for
academic fame, for money, just for the hell of it.  And yet here we have
evidence that the secret community is *way* out ahead.  Sure, there are
papers speculating about how to take over disk drive firmware.  But
these guys *actually do it*, at scale.

Should we be so confident that our claims about cryptography are on any
firmer ground?



In sum, I'd say they are ahead in the pure math, but you'd be hard 
pressed to find an area where it mattered.


E.g., as Peter, Adi and I are infamously on record for saying [0], the 
crypto isn't what is being attacked here.  It's the software engineering 
and the crappy security systems.



iang


[0] http://financialcryptography.com/mt/archives/001460.html


Re: [cryptography] Equation Group Multiple Malware Program, NSA Implicated

2015-02-16 Thread ianG

On 16/02/2015 20:39 pm, John Young wrote:

Kaspersky Q and A for Equation Group multiple malware program, in use early
as 1996. NSA implicated.

https://securelist.com/files/2015/02/Equation_group_questions_and_answers.pdf


Once we take the brave step of downloading the pdf, it adds yet another 
indication [0] that the NSA is engaged in undeclared war against all and 
any cryptographic suppliers:




page 21
Victims generally fall into the following categories:
 * (usual industrial suspects...)
 * Companies developing cryptographic technologies.


page 27
16. What kind of encryption algorithms are used by the EQUATION group?

The Equation group uses the RC5 and RC6 encryption algorithms quite 
extensively throughout their creations. They also use simple XOR, 
substitution tables, RC4 and AES.


RC5 and RC6 are two encryption algorithms designed by Ronald Rivest in 
1994 and 1998. They are very similar to each other, with RC6 introducing 
an additional multiplication in the cypher to make it more resistant. 
Both cyphers use the same key setup mechanism and the same magical 
constants named P and Q.
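The "magical constants" are well known: for 32-bit words, P = 0xB7E15163 (derived from e) and Q = 0x9E3779B9 (from the golden ratio). The S-array initialisation they fingerprinted looks like this - a straightforward sketch of the published RC5 key-schedule setup, not the Equation group's optimised variant:

```python
P32, Q32 = 0xB7E15163, 0x9E3779B9  # RC5's magic constants for 32-bit words
MASK = 0xFFFFFFFF

def rc5_init_s(rounds: int = 12) -> list:
    # Fill the round-subkey array S with the P/Q arithmetic progression;
    # a real key schedule then mixes the secret key into S.
    t = 2 * (rounds + 1)
    s = [P32]
    for _ in range(t - 1):
        s.append((s[-1] + Q32) & MASK)
    return s

S = rc5_init_s()
assert S[0] == 0xB7E15163
```

It was a tweak to exactly this arithmetic that let Kaspersky link samples across malware families.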


The RC5/6 implementation from Equation group’s malware is particularly 
interesting and deserves special attention because of its specifics.


(followed by discussion of an optimisation found that also allowed some 
degree of tracking to other APT groups.)







iang

[0] http://financialcryptography.com/mt/archives/001455.html


Re: [cryptography] [Cryptography] How the CIA Made Google

2015-02-02 Thread ianG

On 31/01/2015 16:14 pm, John Young wrote:

An early program of Highlands Group was perception management by
which public opinion would be shaped by disparagement of opposition
to ubiquitous gov-com spying with gambits like tin-foil hat, conspiracy
theory, and other forms of reputation attacks.



Sadly, these are really good tactics.  They're almost costless, they 
really hit hard against the auditing public, and they're almost blameless.


I'd love to see evidence of the program, and I don't doubt it exists, 
it's just too good to pass up on.  We know for example that the USAF was 
running around stoking up the UFO people so as to give cover for the 
experimental plane flights.  There is no doubt that the spies are 
running around stoking up the cypherpunkian elements to provide cover 
for what they are really doing.  Read a hundred classic spy stories, etc.


Even if we see the evidence, the masses still won't believe it.  But, 
speaking for myself, knowing that there was compelling verified evidence 
of actual skulduggery was something that kept me sane.




iang


Re: [cryptography] [Cryptography] OneRNG kickstarter project looking for donations

2014-12-21 Thread ianG
And, boom.  OneRNG just blasted through its $10k ask.  This project 
races ahead.  I'd like to think that the depth of support indicates we 
really do have a need for vibrant cheap open RNGs.  The more the merrier.


https://www.kickstarter.com/projects/moonbaseotago/onerng-an-open-source-entropy-generator

Paul tells me over-funding will be used to do a bigger run.  So we can 
pretty reliably predict that these things will happen sometime after Jan 
when it closes.


Probably still a good idea to support the project because you get sent a 
unit anyway, and more funds will almost certainly lead to other benefits.


iang

On 16/12/2014 16:39 pm, ianG wrote:

Surprisingly, the OneRNG project is already half way to the goal of $10k
NZD after only a week.

https://www.kickstarter.com/projects/moonbaseotago/onerng-an-open-source-entropy-generator


One reason I really like this project is that it is hopefully totally
open.  If we can seed the world with open hardware designs, we can have
a chance of leaking this project into all sorts of other things like
home routers, IoT things, Bitcoin hardware wallets etc.

iang


On 15/12/2014 19:18 pm, ianG wrote:

After Edward Snowden's recent revelations about how compromised our
internet security has become some people have worried about whether the
hardware we're using is compromised - is it? We honestly don't know, but
like a lot of people we're worried about our privacy and security.

What we do know is that the NSA has corrupted some of the random number
generators in the OpenSSL software we all use to access the internet,
and has paid some large crypto vendors millions of dollars to make their
software less secure. Some people say that they also intercept hardware
during shipping to install spyware.

We believe it's time we took back ownership of the hardware we use day
to day. This project is one small attempt to do that - OneRNG is an
entropy generator, it makes long strings of random bits from two
independent noise sources that can be used to seed your operating
system's random number generator. This information is then used to
create the secret keys you use when you access web sites, or use
cryptography systems like SSH and PGP.

Openness is important, we're open sourcing our hardware design and our
firmware, our board is even designed with a removable RF noise shield (a
'tin foil hat') so that you can check to make sure that the circuits
that are inside are exactly the same as the circuits we build and sell.
In order to make sure that our boards cannot be compromised during
shipping we make sure that the internal firmware load is signed and
cannot be spoofed.








Re: [cryptography] [Cryptography] OneRNG kickstarter project looking for donations

2014-12-16 Thread ianG
Surprisingly, the OneRNG project is already half way to the goal of $10k 
NZD after only a week.


https://www.kickstarter.com/projects/moonbaseotago/onerng-an-open-source-entropy-generator

One reason I really like this project is that it is hopefully totally 
open.  If we can seed the world with open hardware designs, we can have 
a chance of leaking this project into all sorts of other things like 
home routers, IoT things, Bitcoin hardware wallets etc.


iang






[cryptography] OneRNG kickstarter project looking for donations

2014-12-15 Thread ianG

https://www.kickstarter.com/projects/moonbaseotago/onerng-an-open-source-entropy-generator

About this project

After Edward Snowden's recent revelations about how compromised our 
internet security has become some people have worried about whether the 
hardware we're using is compromised - is it? We honestly don't know, but 
like a lot of people we're worried about our privacy and security.


What we do know is that the NSA has corrupted some of the random number 
generators in the OpenSSL software we all use to access the internet, 
and has paid some large crypto vendors millions of dollars to make their 
software less secure. Some people say that they also intercept hardware 
during shipping to install spyware.


We believe it's time we took back ownership of the hardware we use day 
to day. This project is one small attempt to do that - OneRNG is an 
entropy generator, it makes long strings of random bits from two 
independent noise sources that can be used to seed your operating 
system's random number generator. This information is then used to 
create the secret keys you use when you access web sites, or use 
cryptography systems like SSH and PGP.


Openness is important, we're open sourcing our hardware design and our 
firmware, our board is even designed with a removable RF noise shield (a 
'tin foil hat') so that you can check to make sure that the circuits 
that are inside are exactly the same as the circuits we build and sell. 
In order to make sure that our boards cannot be compromised during 
shipping we make sure that the internal firmware load is signed and 
cannot be spoofed.
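Raw noise-source bytes are normally conditioned (e.g. hashed) before they seed the OS pool; a sketch of just that conditioning step, with the device path hypothetical and no entropy crediting (real drivers do more):

```python
import hashlib

def condition(raw: bytes) -> bytes:
    # Compress raw noise-source output into a 256-bit seed.  A real
    # driver would also estimate entropy and credit the kernel pool.
    return hashlib.sha256(raw).digest()

# In use you would read from the device, e.g. (path is hypothetical):
#   raw = open("/dev/ttyACM0", "rb").read(4096)
raw = bytes(range(256)) * 16          # stand-in bytes for illustration
seed = condition(raw)
print(len(seed))  # 32
```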



[cryptography] cost-watch - the cost of the Target breach

2014-12-05 Thread ianG
I often point out that our security model thinking is typically informed 
by stopping all breaches rather than doing less damage.  Here's some 
indication of damage.


http://bits.blogs.nytimes.com/2014/12/04/banks-lawsuits-against-target-for-losses-related-to-hacking-can-continue/?smid=tw-nytimestechseid=auto_r=0

...
The ruling is one of the first court decisions to clarify the legal 
confusion between retailers and banks in data breaches. In the past, 
banks were often left with the financial burden of a hacking and were 
responsible for replacing stolen cards. The cost of replacing stolen 
cards from Target’s breach alone is roughly $400 million — and the 
Secret Service has estimated that some 1,000 American merchants may have 
suffered from similar attacks.


The Target ruling makes clear that banks have a right to go after 
merchants if they can provide evidence that the merchant may have been 
negligent in securing its systems.

...

At the time of its breach last year, Target had installed a $1.6 million 
advanced breach detection technology from the company FireEye.


But according to several people briefed on its internal investigation 
who spoke on the condition of anonymity, the technology sounded alarms 
that Target did not heed until hackers had already made off with credit 
and debit card information for 40 million customers and personal 
information for 110 million customers.



Re: [cryptography] Underhanded Crypto

2014-11-28 Thread ianG

On 27/11/2014 03:04 am, Ilya Levin wrote:

On Thu, Nov 27, 2014 at 1:04 AM, ianG i...@iang.org
mailto:i...@iang.org wrote:

http://underhandedcrypto.com/__rules/
http://underhandedcrypto.com/rules/

The Underhanded Crypto contest ...
And the main prize for a winner would be nearly ruined reputation
because nobody would trust his or her design and code ever again. Giving
a client solid proof and confirmation of their huge concern about your
ability to put some fishy stuff into their system - what else would be
more assuring, right? :)



Given that it is signalled in advance, and given that for the most part 
our job is to stop these things and thinking about how to do it is the 
flip side of the same coin, I suspect reputation isn't an issue.



Seems like we'll find out tho, as Peter and Bear are willing to give it 
a shot.


iang


[cryptography] Underhanded Crypto

2014-11-26 Thread ianG

http://underhandedcrypto.com/rules/

The Underhanded Crypto contest was inspired by the famous Underhanded C 
Contest, which is a contest for producing C programs that look correct, 
yet are flawed in some subtle way that makes them behave 
inappropriately. This is a great model for demonstrating how hard code 
review is, and how easy it is to slip in a backdoor even when smart 
people are paying attention.


We’d like to do the same for cryptography. We want to see if you can 
design a cryptosystem that looks secure to experts, yet is backdoored or 
vulnerable in a subtle barely-noticeable way. Can you design an encrypted 
chat protocol that looks secure to everyone who reviews it, but in 
reality lets anyone who knows some fixed key decrypt the messages?


We’re also interested in clever ways to weaken existing crypto programs. 
Can you make a change to the OpenSSL library that looks like you’re 
improving the random number generator, but actually breaks it and makes 
it produce predictable output?


If either of those things sound interesting, then this is the contest 
for you.
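A toy illustration of the second kind of entry (mine, not from the contest): a change that looks like defence-in-depth around seeding but quietly collapses the state to 16 bits, so anyone can enumerate it:

```python
import hashlib
import os

def improved_seed() -> bytes:
    # Looks like extra hardening: hash the OS seed before use.  The bug:
    # the slice keeps only 2 bytes, leaving just 65536 possible states.
    raw = os.urandom(32)
    return hashlib.sha256(raw[:2]).digest()

def keystream(seed: bytes, n: int) -> bytes:
    # Simple hash-counter generator; fine if the seed were really random.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

# An attacker recovers the stream by trying all 2**16 seeds:
stream = keystream(improved_seed(), 16)
found = any(
    keystream(hashlib.sha256(bytes([a, b])).digest(), 16) == stream
    for a in range(256) for b in range(256)
)
print(found)  # True: the "improved" RNG is fully predictable
```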



Re: [cryptography] Define Privacy

2014-10-26 Thread ianG
On 22/10/2014 03:22 am, Jason Iannone wrote:
 On a fundamental level I wonder why privacy is important and why we
 should care about it.


Financial privacy is all about theft.  If someone knows where the money
is, it can be stolen.  It works statistically, in that the set of
attackers is typically not well known, so people tend to habitualise
financial privacy.

There would be some who would say this isn't required today, but this is
just sophistry.  The wealth-stealing attack is as pervasive today as it
was thousands of years ago.  One insider complaint about, for example, AML
is that it is a setup for theft, and there are plenty of cases which
bear that out.  I.e., now that wealth can be measured via pervasive
financial monitoring and now that the principle of consolidated revenue
has been breached, the police are incentivised to become the attacker,
because they get to share in the proceeds.  C.f., recent reports that
foreigners are being warned not to carry cash in USA because police
steal it.

Financial privacy isn't universal.  In my work in Kenya I discovered
that it is somewhat reversed, groups come together and share their
financial information as a defence against other attackers.  I speculate
that this may be helped by the fact that most of their wealth is
observable at a close distance by their close community.

One can get into trouble mixing financial privacy with other forms of
privacy.  The conversation gets tortured.  A system to protect money
might provide for split keys, which results in less 'privacy' but more
security.  As security of money is the number 1 goal of any money
system, other forms of privacy might be compromisable, it isn't an absolute.

This philosophical flaw might be levelled at Digicash which placed the
blinding formula on a pedestal, and we can note the irony of financial
privacy with Bitcoin.



iang


[cryptography] CFP by 24 Nov - Usable Security - San Diego 8th Feb

2014-10-22 Thread ianG
The Workshop on Usable Security (USEC) will be held in conjunction with
NDSS on February 8, 2015. The deadline for USEC Workshop submissions is
November 24, 2014. – In previous years, USEC has also been collocated
with FC; for example in Okinawa, Bonaire, and Trinidad and Tobago.

Additional information and paper submission instructions:

http://www.internetsociety.org/events/ndss-symposium-2015/usec-workshop-call-papers

**

The Workshop on Usable Security invites submissions on all aspects of
human factors and usability in the context of security and privacy. USEC
2015 aims to bring together researchers already engaged in this
interdisciplinary effort with other computer science researchers in
areas such as visualization, artificial intelligence and theoretical
computer science as well as researchers from other domains such as
economics or psychology. We particularly encourage collaborative
research from authors in multiple fields.

Topics include, but are not limited to:

* Evaluation of usability issues of existing security and privacy models
or technology

* Design and evaluation of new security and privacy models or technology

* Impact of organizational policy or procurement decisions

* Lessons learned from designing, deploying, managing or evaluating
security and privacy technologies

* Foundations of usable security and privacy

* Methodology for usable security and privacy research

* Ethical, psychological, sociological and economic aspects of security
and privacy technologies

USEC solicits short and full research papers.

*

Program Committee

Jens Grossklags (The Pennsylvania State University) - Chair
Rebecca Balebako (Carnegie Mellon University)
Zinaida Benenson (University of Erlangen-Nuremberg)
Sonia Chiasson (Carleton University)
Emiliano DeCristofaro (University College London)
Tamara Denning (University of Utah)
Alain Forget (Carnegie Mellon University)
Julien Freudiger (PARC)
Vaibhav Garg (VISA)
Cormac Herley (Microsoft Research)
Mike Just (Glasgow Caledonian University)
Bart Knijnenburg (University of California, Irvine)
Janne Lindqvist (Rutgers University)
Heather Lipford (University of North Carolina at Charlotte)
Debin Liu (Paypal)
Xinru Page (University of California, Irvine)
Adrienne Porter Felt (Google)
Franziska Roesner (University of Washington)
Pamela Wisniewski (The Pennsylvania State University)
Kami Vaniea (Indiana University)





With best regards,

Jens Grossklags

Chair – USEC 2015


Re: [cryptography] caring harder requires solving once for the most demanding threat model, to the benefit of all lesser models

2014-10-15 Thread ianG
On 13/10/2014 16:45 pm, coderman wrote:
 On 10/13/14, ianG i...@iang.org wrote:
 ...
 you're welcome ;-)
 
 a considered and insightful response to my saber rattling diatribe.
 
 i owe you a beer, sir!

I'm honoured!


 Ah well, there is another rule we should always remember:

  Do not use known-crap crypto.

 Dual_EC_DRBG is an example of a crap RNG.  For which we have data going
 back to 2006 showing it is a bad design.
 
 let's try another example: Intel RDRAND or RDSEED.  depend on it as
 the sole source of entropy?

According to what I consider good secure practices [0] relying on one
(platform) source for random numbers is probably best unless you really
truly have a good reason not to.

We have no data that suggests the design is bad.  Actually all the data
suggests the design is good!  What we have is an unfortunate learning
exercise in being too good:  whitening in hardware also hides backdoors.
 Is that good enough a reason?  Not my call, at the moment.  But when
data turns up, such as the attacks on Dual_EC_DRBG, we'll have something
to chew on.


 in theory, the only attacks that would allow to manipulate the output
 are outside scope. (e.g. the data shows them as nation state level
 hypothetical)
 
 is depending on a single entropy source the known-crap part?

I say no.


 or is
 it the un-verifiable output of this specific source that is
 known-crap?


Ah, yes, this is a question.  I'd say it isn't known-crap again
because there is substantial pressure on the platform provider to not
ever get caught providing known-crap.

Which makes it a very high value target ;) so there are limits to
assumptions here.  One wonders if they will fix that in future releases...


 (or am i overreaching, and you advocate direct and sole use of RDRAND
 everywhere? :)


:) em, close, I advocate direct and sole use of your platform's RNG.
Rule #1:

http://iang.org/ssl/hard_truths_hard_random_numbers.html

1. Use what your platform provides. Random numbers are hard, which is
the first thing you have to remember, and always come back to. Random
numbers are so hard, that you have to care a lot before you get
involved. A hell of a lot. Which leads us to the following rules of
thumb for RNG production.

a. Use what your platform provides.
b. Unless you really really care a lot, in which case, you have to
write your own RNG.
c. There isn't a lot of middle ground.
d. So much so that for almost all purposes, and almost all users,
Rule #1 is this: Use what your platform provides. E.g., for *nix, use
urandom [Ptacek].
e. When deciding to breach Rule #1, you need a compelling argument
that your RNG delivers better results than the platform's [Gutmann1].
Without that compelling argument, your results are likely to be more
random than the platform's system in every sense except the quality of
the numbers.
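Rule #1 is a one-liner in practice; a minimal sketch (Python shown for illustration) of using the platform CSPRNG directly rather than a hand-rolled generator:

```python
import secrets

# Ask the platform CSPRNG (getrandom()/urandom under the hood) directly.
key = secrets.token_bytes(32)      # a 256-bit symmetric key
nonce = secrets.token_hex(12)      # hex-encoded 96-bit nonce

print(len(key), len(nonce))  # 32 24
```

Everything between this and writing your own RNG is the "middle ground" the rules warn against.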




 Others in this category include:  RC4, DES, MD5, various wifi junk
 protocols, etc.
 
 if RC4 is known-crap, then how is a downgrade to known-crap not a problem?


It is.  For my money, a downgrade is always known-crap.  A downgrade to
known-crap is beyond embarrassing, it's humiliating.  It is a mark of
architectural failure, it means that you knew there was known-crap yet
you did nothing.

Oh, and today's news.  SSL should have been deprecated ages ago.  Why
wasn't it?  Just embarrassing;  those who wax lyrical about the need 
to support 350 algorithm suites and versions back to the beginning of
time have no, zip, nada thought let alone solution on deprecation.


 Q: 'Should I switch away from 1024 bit strength RSA keys?'

 I agree with that, and I'm on record for it in the print media.  I am
 not part of the NIST lemmings craze.

 So, assuming you think I'm crazy, let's postulate that the NSA has a box
 that can crunch a 1024 key in a day.  What's the risk?
 ...
 WYTM?  The world that is concerned about the NSA is terrified of open
 surveillance.  RSA1024 kills open surveillance dead.
 
 consider a service provider that i use, like Google, with a
 hypothetical 1024 bit RSA key to secure TLS. they don't use forward
 secrecy, so recovery of their private key can recover content.


If google were to ask me 'Is 1024 bit broken' I would say no.  'Should I
switch away from 1024 bit strength RSA keys?' then sure, do that, in
time, but don't be overly panicked about it.

(This is a trick question of course, you've shifted the goalposts.  So I
shifted the strike... google is reputed to have security people and
never ever asks anyone else what to do.  On the other hand, your average
online bank is something that practices 'best practices' and needs to be
told what to do.)


 what is the risk that a Google-like provider key could be attacked? i
 have no idea.  but certainly more than my risk as a single individual.

Indeed.  In fact, we know they were already attacked, and breached.  If
they'd asked me on that one I'd have said, yeah, probably best to have
1024 bit RSA rather than nothing

[cryptography] SSL bug: This POODLE Bites: Exploiting The SSL 3.0 Fallback

2014-10-14 Thread ianG
https://www.openssl.org/~bodo/ssl-poodle.pdf

SSL 3.0 [RFC6101] is an obsolete and insecure protocol. While for most
practical purposes it has been replaced by its successors TLS 1.0
[RFC2246], TLS 1.1 [RFC4346], and TLS 1.2 [RFC5246], many TLS
implementations remain backwards-compatible with SSL 3.0 to interoperate
with legacy systems in the interest of a smooth user experience. The
protocol handshake provides for authenticated version negotiation, so
normally the latest protocol version common to the client and the server
will be used.

However, even if a client and server both support a version of TLS, the
security level offered by SSL 3.0 is still relevant since many clients
implement a protocol downgrade dance to work around server-side
interoperability bugs. In this Security Advisory, we discuss how
attackers can exploit the downgrade dance and break the cryptographic
security of SSL 3.0. Our POODLE attack (Padding Oracle On Downgraded
Legacy Encryption) will allow them, for example, to steal secure HTTP
cookies (or other bearer tokens such as HTTP Authorization header
contents).

We then give recommendations for both clients and servers on how to
counter the attack: if disabling SSL 3.0 entirely is not acceptable out
of interoperability concerns, TLS implementations should make use of
TLS_FALLBACK_SCSV.
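The SCSV mechanism can be sketched in a few lines of logic (an illustration of the idea only, not the wire format; the names are mine):

```python
# Protocol versions, ordered oldest to newest.
VERSIONS = ["SSLv3", "TLS1.0", "TLS1.1", "TLS1.2"]

def server_hello(client_version: str, scsv_sent: bool, server_max: str):
    # A server that understands the SCSV refuses a fallback handshake
    # whose offered version is below the best version it supports.
    if scsv_sent and VERSIONS.index(client_version) < VERSIONS.index(server_max):
        return "inappropriate_fallback"   # alert: abort the connection
    return min(client_version, server_max, key=VERSIONS.index)

# An attacker drops handshakes to push the client's retry loop down to
# SSLv3; with the SCSV attached, the downgrade is detected and rejected.
assert server_hello("SSLv3", scsv_sent=True, server_max="TLS1.2") == "inappropriate_fallback"
# A genuinely old server (max SSLv3) still interoperates:
assert server_hello("SSLv3", scsv_sent=True, server_max="SSLv3") == "SSLv3"
```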

CVE-2014-3566 has been allocated for this protocol vulnerability.


http://googleonlinesecurity.blogspot.co.uk/2014/10/this-poodle-bites-exploiting-ssl-30.html




[cryptography] caring requires data

2014-10-13 Thread ianG
On 13/10/2014 01:03 am, coderman wrote:
 On 9/22/14, coderman coder...@gmail.com wrote:
 ...
 Please elaborate.  TKIP has not been identified as a ‘active attack’
 vector.
 
 hi nymble,
 
 it appears no one cares about downgrade attacks, like no one cares
 about MitM (see mobile apps and software update mechanisms). [0]


No, and I argue that nobody should care about MITM nor downgrade attacks
nor any other theoretical laboratory thing.  I also argue that people
shouldn't worry about shark attacks, lightning or wearing body armour
when shopping.

What distinguishes what we should care about and what we shouldn't is
data.  And analysis of that data.  In absence of data, you're in FUD
land.  Just another religion, or another lightning rod salesman [1].


 0. no one cares - this is not strictly true; people care a bit more
 if you have done significant and detailed analysis of the sort that
 eats lives by the quarter-year. i have long since quit giving freebies
 freely, and instead pick my disclosures carefully with significant
 limitations.

Well, if that translated to data of actual attacks, hacks, losses, then
I'd have more sympathy.

Otherwise, it's all sales in the market for silver bullets.  Or
indistinguishable from, the harder you want people to care, the more a
salesman copies your technique ...


 perhaps i should re-state: no one working in the public interest
 cares. there is a roaring business for silence and proprietary
 development, and these people care quite a bit.


Yeah, ain't that the truth.  Meanwhile, data...

iang


[1] a lightning rod salesman is an expression in earlier American
times which refers to someone selling something you don't really need.
I think, perhaps others could explain it better...


Re: [cryptography] caring requires data

2014-10-13 Thread ianG
On 13/10/2014 14:32 pm, coderman wrote:
 On 10/13/14, ianG i...@iang.org wrote:
 ...
 No, and I argue that nobody should care about MITM nor downgrade attacks
 nor any other theoretical laboratory thing.  I also argue that people
 shouldn't worry about shark attacks, lightning or wearing body armour
 when shopping.
 ...
 What distinguishes what we should care about and what we shouldn't is
 data.  And analysis of that data.
 
 
 indeed. thanks for showing me the light, ian!


you're welcome ;-)


 Q: 'Should I disable Dual_EC_DRBG?'
 A: The data shows zero risk of an attacker compromising the known
 vulnerability of a specially seed random number generator. Do not
 change; keep using Dual_EC_DRBG!

Ah well, there is another rule we should always remember:

 Do not use known-crap crypto.

Dual_EC_DRBG is an example of a crap RNG.  For which we have data going
back to 2006 showing it is a bad design.

Others in this category include:  RC4, DES, MD5, various wifi junk
protocols, etc.


 Q: 'Should I switch away from 1024 bit strength RSA keys?'
 A: The data shows zero risk of an attacker compromising the known
 vulnerability of a insufficiently large RSA key as the cost is
 prohibitive and no publicly demonstrated device exists. Do not change
 to larger keys; keep using 1024 bit RSA!


I agree with that, and I'm on record for it in the print media.  I am
not part of the NIST lemmings craze.

So, assuming you think I'm crazy, let's postulate that the NSA has a box
that can crunch a 1024 key in a day.  What's the risk?

Over a year, the risk to *you* is that one of your keys is in the top
365 keys targeted to attack, over this coming year.

Is that likely?  If it is ... well, my advice is not for you, you're
another sort of person altogether ;-)

WYTM?  The world that is concerned about the NSA is terrified of open
surveillance.  RSA1024 kills open surveillance dead.
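Back-of-envelope on that postulated box, with the population of 1024-bit keys as a purely illustrative assumption:

```python
# Postulated NSA box: one 1024-bit key crunched per day.
keys_broken_per_year = 365
keys_in_use = 2_000_000    # assumed population of 1024-bit keys (illustrative)

p_yours_is_targeted = keys_broken_per_year / keys_in_use
print(f"{p_yours_is_targeted:.5%}")   # 0.01825% per year, if targets were random
```

Of course targeting isn't random, which is exactly the point: the question is whether you are in the top 365, not whether the average key is at risk.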


 Q: 'Should I worry about the auto-update behavior of my devices or computers?'
 A: The data shows minimal risk of an attacker compromising your
 systems via this method. Don't bother changing your vulnerable auto
 update any where any time any how; you're probably safe!


Actually, I thought there was data on this which shows that auto-update
keeps devices more secure, with fewer problems.  I think Microsoft have
published on this; anyone care to comment?



 it's all so easy now... :)


:) iang




Re: [cryptography] [OT] any updates on shellshock?

2014-10-07 Thread ianG
On 4/10/2014 17:57 pm, Jeremy Stanley wrote:
 On 2014-10-05 10:38:38 +1000 (+1000), James A. Donald wrote:
 On 2014-10-05 10:34, James A. Donald wrote:
 On 2014-10-05 07:49, Jeremy Stanley wrote:
 This is pretty off-topic as it has nothing whatsoever to do with
 cryptography.

 It has everything to do with cryptography.

 The greatest failing of cryptographers has always been to produce a
 fortress with a mighty impenetrable door in two foot paling fence.

 And anyone who draws attention to the fact that the fence is only
 two feet tall is told that the fence is out of scope.
 
 And if random security vulnerabilities are on-topic for discussion
 here, we might as well just be reading bugtraq/fulldisc/.../4chan
 instead.


Although I don't particularly like it, I have to agree with Donald.

The value of cryptography is limited by the applicability of its benefit
to the real world.  We can probably agree that there is a valid science
in theoretical cryptography for elegance's sake and pedagogical purposes.
 But almost all traffic on this list is in the domain of the practical,
the useful.

Digression.  The 1024 in 20m attack (other thread) reminds me of an
attack on a money system circa 2000, told to me by Dani Nagy.  The
attacker announced that he had found a breach in the money system, in
which he could double his money.  He offered that anyone could send him
X, he would send back double X to prove his breach.

Which he did.  For quite some time and several events.  The company
investigated, and said it could find no bug.  Eventually, it was agreed
that there was no breach; the attacker was simply paying out the double
claim from his own pocket.

The attack was not on the system, but on the reputation of the system.
It did tremendous damage, as many people decided to mistrust the system,
and growth was stalled for a while.



Balance is a perfectly important property of a system.  There really is
little point in building a safe door into a paling fence, yet
cryptographers and security people typically fall to the 'out of scope'
bug far more often than we'd like, thus rendering their system as out of
balance as the fortress with the paling fence.

Understanding the weakness of the core and average platforms has always
been in scope for deciding balance.



iang



Re: [cryptography] Question About Best Practices for Personal File Encryption

2014-08-17 Thread ianG
On 17/08/2014 05:09 am, Jeffrey Goldberg wrote:
 On 2014-08-16, at 4:51 PM, David I. Emery d...@dieconsulting.com wrote:

 I do think, however, that if there are such backdoors, it would have
 to be known to only a very small number of people. Too many of the people
 who work on Apple security would blow the whistle. So it would have to
 be introduced in such a way that most of the people who actually develop
 these tools are unaware of the backdoors. It’s certainly possible, but
 it does shift balance of plausibility.

Right.  As I understand it, the standard way that this is done is to
create a special features group in another closely-allied country.  That
group secures permission from HQ to do some rework for their special
national needs.

That group then inserts in the backdoor, then ships the entire patch off
to HQ.  Unless the center is reviewing for obfuscated tricks from a
trusted partner, the backdoor slides in, and nobody knows it is there.



iang

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Question About Best Practices for Personal File Encryption

2014-08-17 Thread ianG
On 17/08/2014 19:39 pm, Ryan Carboni wrote:
 Or in the case of OpenSSL, no one notices the backdoor as it is
 indistinguishable from an obscure programming error.


The difference between a corporate backdoor and an open source backdoor
is likely that when it is finally discovered, the corporate
embarrassment is still easy enough to suppress:  NDAs are a weapon.

Sunlight is your friend.  The many eyeballs thing doesn't really find
any more bugs, it seems, but it certainly guarantees a scandal.  The
agencies don't go where the sunlight is brightest.




Re: [cryptography] [Cryptography] Browser JS (client side) crypto FUD

2014-07-26 Thread ianG
On 26/07/2014 16:03 pm, Lodewijk andré de la porte wrote:
 http://matasano.com/articles/javascript-cryptography/
...
 Somebody, please, give me something to say against people that claim JS
 client side crypto can just never work!


It's like opportunistic security;  it's the best you get in a crappy
world for free.  The author acknowledges that SSL/TLS is expensive and
messy; the alternative, for most purposes, is no security at all.  JS
client-side crypto gives something in the middle.

It specifically defeats mass surveillance, that which used to be known
as passive eavesdropping in the trade before the lingo reset of recent
times.  This is a valuable thing.

iang


[cryptography] who cares about advanced persistent tracking?

2014-07-20 Thread ianG
From the strange bedfellows department, who cares about us all being
tracked everywhere?  The Chinese, that's who ;)


http://www.securityweek.com/apple-iphone-threat-national-security-chinese-media

BEIJING  - Chinese state broadcaster CCTV has accused US technology
giant Apple of threatening national security through its iPhone's
ability to track and time-stamp a user's location.

The frequent locations function, which can be switched on or off by
users, could be used to gather extremely sensitive data, and even
state secrets, said Ma Ding, director of the Institute for Security of
the Internet at People's Public Security University in Beijing.

The tool gathers information about the areas a user visits most often,
partly to improve travel advice. In an interview broadcast Friday, Ma
gave the example of a journalist being tracked by the software as a
demonstration of her fears over privacy.

One can deduce places he visited, the sites where he conducted
interviews, and you can even see the topics which he is working on:
political and economic, she said.


...


Re: [cryptography] Silent Circle Takes on Phones, Skype, Telecoms

2014-07-11 Thread ianG
On 11/07/2014 11:27 am, James A. Donald wrote:
 On 2014-07-11 07:45, Kevin wrote:
 On 7/10/2014 4:39 PM, John Young wrote:
 https://blog.silentcircle.com/why-are-we-competing-with-phone-makers-skype-and-telecom-carriers-all-in-the-same-week/

 
 With silent circle, when Ann talks to Bob, does Ann get Bob's public key
 from silent circle, and Bob get Ann's public key from silent circle.
 
 If they do it that way, silent circle is a single point of failure which
 can, and probably will, be co-opted by governments.
 
 If they don't do it that way, how do they do it.
 
 Obviously we need a hash chain that guarantees that Ann sees the same
 public key for Ann as Bob sees for Ann.
 
 Does silent circle do that?
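
The hash-chain guarantee described in the quote above can be sketched as
a toy append-only log (an illustration of the general idea only, not
Silent Circle's actual design):

```python
# Toy sketch of the hash-chain idea above: an append-only log of
# (name, public key) entries, each chained to its predecessor by a hash.
# Illustration only -- not Silent Circle's actual mechanism.
import hashlib
import json

def entry_hash(name, key, prev):
    # Canonical serialisation so both parties hash identical bytes.
    body = json.dumps({"name": name, "key": key, "prev": prev}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def append(log, name, key):
    prev = log[-1]["head"] if log else "0" * 64
    log.append({"name": name, "key": key, "prev": prev,
                "head": entry_hash(name, key, prev)})

def verify(log):
    # Walk the chain from genesis; any altered entry breaks every later head.
    prev = "0" * 64
    for e in log:
        if e["prev"] != prev or e["head"] != entry_hash(e["name"], e["key"], prev):
            return False
        prev = e["head"]
    return True

log = []
append(log, "ann", "ann-key-v1")
append(log, "bob", "bob-key-v1")
assert verify(log)

# A substituted key anywhere in history breaks the chain.
tampered = [dict(e) for e in log]
tampered[0]["key"] = "evil-key"
assert not verify(tampered)
```

The point is that Ann and Bob need only compare the single head hash:
if their heads match, they share the identical key history, so a server
cannot show them different keys for Ann without the split view being
detectable.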


While I'm interested in how they're doing that, I'm far more interested
in how Ann convinces Bob that she is Ann, and Bob convinces Ann that he
is Bob.  We left the OpenPGP/cert building a long time ago, we need more
than just 1980s PKI ideas with elegant proofs.

If they haven't got an answer to that question, then I'd wonder if the
product is a throwaway for real security purposes.  (By throwaway, I
mean the drug dealer's trick of using each phone/sim for one call, then
dropping it in the river.)

iang



ps; John's point is well taken.  We don't have a way to escape success
being targetted.  We don't have a way to pay for many small enclaves
with their own tech.  We're stuck in a rocky business.


[cryptography] seL4 going open source

2014-06-24 Thread ianG
http://sel4.systems/

 General Dynamics C4 Systems and NICTA are pleased to announce the open
sourcing of seL4, the world's first operating-system kernel with an
end-to-end proof of implementation correctness and security enforcement.
It is still the world's most highly-assured OS.
What's being released?

It will include all of the kernel's source code, all the proofs, plus
other code and proofs useful for building highly trustworthy systems.
All will be under standard open-source licensing terms. More details
will be posted here closer to the release date.
When is it happening?

The release will happen at noon of Tuesday, 29 July 2014 AEST (UTC+10),
in celebration of International Proof Day (the fifth anniversary of the
completion of seL4's functional correctness proof).

...
http://sel4.systems/About/

 What's special about seL4?

Completely unique about seL4 is its unprecedented degree of assurance,
achieved through formal verification. Specifically, the ARM version of
seL4 is the first (and still only) general-purpose OS kernel with a full
functional correctness proof, meaning a mathematical proof that the
implementation (written in C) adheres to its specification. In short,
the implementation is proved to be bug-free. This implies a number of
other properties, such as freedom from buffer overflows, null pointer
exceptions, use-after-free, etc.

There is a further proof that the binary code that executes on the
hardware is a correct translation of the C code. This means that the
compiler does not have to be trusted, and extends the functional
correctness property to the binary.

Furthermore, there are proofs that seL4's specification, if used
properly, will enforce integrity and confidentiality, core security
properties. Combined with the proofs mentioned above, these properties
are guaranteed to be enforced not only by a model of the kernel (the
spec) but the actual binary. Therefore, seL4 is the world's first (and
still only) OS that is proved secure in a very strong sense.

Finally, seL4 is the first (and still only) protected-mode OS kernel
with a sound and complete timeliness analysis. Among others this means
that it has provable upper bounds on interrupt latencies (as well as
latencies of any other kernel operations). It is therefore the only
kernel with memory protection that can give you hard real-time guarantees.

...



Re: [cryptography] [Cryptography] Dual EC backdoor was patented by Certicom?

2014-06-16 Thread ianG
On 16/06/2014 04:27 am, Thierry Moreau wrote:
 On 2014-06-15 19:24, Tanja Lange wrote:
 On Sun, Jun 15, 2014 at 02:13:04PM +0100, ianG wrote:

 Or is this impossible to reconcile?  If Certicom is patenting backdoors,
 the only plausible way I can think of this is that it intends to wield
 backdoors.  Which means spying and hacking.  Certicom is now engaged in
 the business of spying on ... customers?  Foreign governments?

 Note that the majority of the claims (and the entirety of the granted
 claims in the US and JP so far; they got all parts granted in Europe)
 is on escrow avoidance; i.e. on using the procedure for alternative
 points from the SP800-90 appendix. I.e. if a vendor gets sufficiently
 worried about the potential backdoor but doesn't want to do a completely
 new implementation he will opt for other points --- royalties.

 
 I looked at the primary documents in the USPTO databases. The part that
 is missing from the US patent 8,369,213 (i.e. missing from the original
 filing and the European patent I suppose) is now in the pending patent
 application US-2013-0170642-a1.
 
 Are these inventors claiming to have *invented* the backdoor in this
 PRNG method? At least an USPTO examiner hints at this: [claims now in
 US-2013-0170642-A1] are drawn to establish escrow key with elliptical
 curve random number generator. The inventors *describe* the escrow
 technique but need not *claim* it.
 
 Note also that the earliest (USA) filing date is 2005/01/21 as a
 provisional US patent application number 60/644982.
 
 In contrast, I would have said that Certicom's responsibility as a
 participant in Internet security is to declare and damn an exploit, not
 bury it in a submarine patent.

 
 Technically, this is not a submarine patent. The publication date is
 2007/08/16 (soon after the international-treaty-based 18 months delay
 after the filing date applicable to the non-USA patent jurisdictions)
 and anyone could have access to this information by then.
 
 Sometimes I think a little more patent literacy might help. E.g. a
 self-defense behavior for some system designer relying on the ECC
 techniques would include a periodic look at patent applications freshly
 published in this area and/or by the known players.


I guess this would be true if one is in the EC world choosing curves.
Patently, a view expressed in practice by DJB and Tanja.

But this is about international standards and an approved way of doing
RNGs.  A rather different kettle of fish.  We in the user community were
supposed to be able to implement a standard like DUAL_EC, perhaps get it
approved, and be done with such crapola.  Or buy an approved product,
and ditto.

One would have thought that NIST, ISO, etc had long since got tired of
the notion of all that good work being done for the public benefit, only
to be snaffled by greedy patent trolls for the price of a filing.

Although it is now historical as the DUAL_EC RNG is withdrawn as a
standard, I think it would be very interesting to hear NIST's views.  It
may not be submarine in some technical lingo, but it rather seems to be
asymmetrical to the standards horizon.

I wonder if NIST knew about the patent?


 Fascinating case study anyway!


Indeed.  I'm fascinated to understand Certicom's business thinking.
What is the business model behind patenting backdoors?



iang


[cryptography] Dual EC backdoor was patented by Certicom?

2014-06-15 Thread ianG
In what is now a long running saga, we have more news on the DUAL_EC
backdoor injected into the standards processes.  In a rather unusual
twist, it appears that Certicom's Dan Brown and Scott Vanstone attempted
to patent the backdoor in Dual EC in or around January of 2005.  From
Tanja Lange  DJB:



https://projectbullrun.org/dual-ec/patent.html
   ... It has therefore been identified by the applicant that this
method potentially possesses a trapdoor, whereby standardizers or
implementers of the algorithm may possess a piece of information with
which they can use a single output and an instantiation of the RNG to
determine all future states and output of the RNG, thereby completely
compromising its security.

The provisional patent application also describes ideas of how to make
random numbers available to trusted law enforcement agents or other
escrow administrators.
=
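
The trapdoor described in that quote can be illustrated with a toy
analogue, swapping elliptic-curve point multiplication for modular
exponentiation (and ignoring the output truncation which, in the real
Dual EC, merely adds a small brute-force step for the attacker).  All
parameters below are made up for illustration:

```python
# Toy analogue of the Dual_EC_DRBG trapdoor.  Exponentiation mod a prime
# stands in for EC point multiplication; parameters are illustrative only.
p = 2_147_483_647          # prime modulus (2^31 - 1)
Q = 7                      # public "point"
d = 123_456                # escrow secret known only to the attacker
P = pow(Q, d, p)           # second public "point": P = Q^d is the backdoor

def step(state):
    """One DRBG step: next state derived from P, output derived from Q."""
    return pow(P, state, p), pow(Q, state, p)   # (next_state, output)

# Honest use: generate two consecutive outputs.
s0 = 424_242
s1, out1 = step(s0)
s2, out2 = step(s1)

# An attacker who knows d recovers the *next* internal state from a
# single output:  out1^d = Q^(s0*d) = P^s0 = s1.
recovered_s1 = pow(out1, d, p)
assert recovered_s1 == s1

# From the recovered state, all future output is predictable.
_, predicted_out2 = step(recovered_s1)
assert predicted_out2 == out2
```

This is the "single output ... determine all future states and output"
property the applicants describe: whoever chose the relation between P
and Q holds a skeleton key to every instantiation of the generator.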



This appears to be before ANSI/NIST finished standardising DUAL_EC as a
RNG, that is, during the process.  What is also curious is that Dan
Brown is highly active in the IETF working groups for crypto, adding
weight to the claim that the IETF security area is corrupted.

Obviously one question arises -- is this a conspiracy between Certicom,
NSA and NIST to push out a backdoor?  Or is this just the normal
incompetent-in-hindsight operations of the military-industrial-standards
complex?

It's an important if conspiratorial question because we want to document
the modus operandi of a spook intervention into a standards process.
We'll have to wait for more facts;  the participants will simply deny.
One curious fact, the NSA recommended *against* a secrecy order for the
patent.



What I'm more curious about today is Certicom's actions.  What is the
benefit to society and their customers in patenting a backdoor?  How can
they benefit in a way that aligns the interests of the Internet with the
interests of their customers?

Or is this impossible to reconcile?  If Certicom is patenting backdoors,
the only plausible way I can think of this is that it intends to wield
backdoors.  Which means spying and hacking.  Certicom is now engaged in
the business of spying on ... customers?  Foreign governments?

In contrast, I would have said that Certicom's responsibility as a
participant in Internet security is to declare and damn an exploit, not
bury it in a submarine patent.

If so, what idiot in Certicom's board put it on the path of becoming the
Crypto AG of the 21st century?

If so, Certicom is now on the international blacklist of shame.  Until
questions are answered, do no business with them.  Certicom have
breached the sacred trust of trade -- to operate in the interests of
their customers.



iang


Re: [cryptography] [Cryptography] basing conclusions on facts

2014-06-15 Thread ianG
On 15/06/2014 14:37 pm, Stephen Farrell wrote:
 
 I've no public opinion on Certicom's patent practices. And the
 behaviour of the signals intelligence agencies has been IMO
 deplorable. So I sympathise with some of what you are saying.
 However, building your case on bogus claims that are not facts
 as you are clearly doing is a really bad idea. In particular...
 
 On 15/06/14 14:13, ianG wrote:
 What is also curious is that Dan
 Brown is highly active in the IETF working groups for crypto, 
 
 That is not correct as far as I can see. In my local archives,
 I see one email from him to the TLS list in 2011 and none in
 2012. For the security area list (saag), I see a smattering
 of mails in 2011 and 2012 and none in 2013. For the IRTF's
 CFRG, I see a few in 2010, none in 2011 and some in 2012 and
 2013. I do see increased participation over the last year on
 the DUAL-EC topic.
 
 None of the above is anywhere near highly active which is
 therefore simply false.
 
 And I don't believe you yourself are sufficiently active to
 judge whether or not someone else is highly active in the
 IETF to be honest. Nor do you seem to have gone through the
 mail list archives to check.


For my part, I had seen his name only with respect to IETF WGs.  However
I admit that I do not follow IETF security WGs closely, so am not
qualified to assert highly active.  You are right, I am wrong.


 You are both of course welcome to become highly active if you
 do want to participate, same as anyone else.
 
 adding
 weight to the claim that the IETF security area is corrupted.
 
 And that supposed conclusion, based only on an incorrect claim,
 is utter nonsense. I would have expected better logic and closer
 adherence to the facts.
 
 Yes, the IETF security area needs to do better, and quite a few
 folks are working on that. Yes, it's almost certain that someone
 was paid by BULLRUN to muck up IETF work. Nonetheless, unfounded
 misstatements such as the above don't help and are wrong. And
 the correct reaction is to do better work and not to fall for
 the same guilt-by-association fallacy that leads the spooks
 to think that pervasive monitoring is a good plan.


I had a long post addressing this issue, but as it takes us further from
the subject at hand, I'll pull my head from out of the rabbit hole.



iang


Re: [cryptography] [Cryptography] USG asks for time served (7 months) as Sabu's sentence

2014-05-25 Thread ianG
On 24/05/2014 15:12 pm, John Young wrote:
 USG asks for time served (7 months) as Sabu's sentence for
 extraordinary cooperation with FBI to rat, admit, aid.
 
 http://cryptome.org/2014/05/monsegur-029-030.pdf

Interesting!  Aside from the human interest aspects of the story, the
FBI calculates some damages (blue page 8, edited to drop non-damages
estimates):



=
In the PSR, Probation correctly calculates that the defendant’s base
offense level is 7 pursuant to U.S.S.G. §2B1.1(a)(1) and correctly
applies a 22-level enhancement in light of a loss amount between $20
million and $50 million [4]; a 6-level enhancement given that the
offense involved more than 250 victims;

_
4 This loss figure includes damages caused not only by hacks in which
Monsegur personally and directly participated, but also damages from
hacks perpetrated by Monsegur’s co- conspirators in which he did not
directly participate. Monsegur’s actions personally and directly caused
between $1,000,000 and $2,500,000 in damages.




That last number range of $1m to 2.5m is interesting, and can be
contrasted to his 10 direct victims (listed on blue pages 5-6) exploited
over a 1 year period.

One could surmise that this isn't an optimal solution.  E.g.,
hypothetically, if the 10 victims were to pay each a tenth of their
losses, they'd raise a salary of 100-250k and put the perp to productive
work, and we'd all be in net profit [0].

Obviously society didn't solve this efficiently, due to information
problems.  LulzEconSec, anyone?



iang


[0] additional comments on the 'profit' side:
blue page 13:  Although difficult to quantify, it is likely that
Monsegur’s actions prevented at least millions of dollars in loss to
these victims.
blue page 16: Through Monsegur’s cooperation, the FBI was able to
thwart or mitigate at least 300 separate hacks. The amount of loss
prevented by Monsegur’s actions is difficult to fully quantify, but even
a conservative estimate would yield a loss prevention figure in the
millions of dollars.


Re: [cryptography] Request - PKI/CA History Lesson

2014-05-02 Thread ianG
On 2/05/2014 06:41 am, Jeffrey Goldberg wrote:
 
 On 2014-05-01, at 8:49 PM, ianG i...@iang.org wrote:
 
 On 1/05/2014 02:54 am, Jeffrey Goldberg wrote:
 On 2014-04-30, at 6:36 AM, ianG i...@iang.org wrote:
 
OK. So let me back-pedal on “Ann trusts her browser to maintain a list of
 trustworthy CAs” and replace that with “Ann trusts her browser to do
 the right thing”.

 Right, with that caveat about choice.
 
 I think that we are in fierce agreement. At first
 I didn’t understand the significance of your insistence
 on *choice*, but I see it now. More below.


I think the point of choice or competition comes down to feedback loops
for improvement.  There's no way to improve the situation, without a
feedback loop.  If we had used some system of continuous improvement
since 1994 then the model might have been ready for the shift into
phishing in 2003 and the threat ramp-up in 2011.  We didn't, and we weren't.

Dan also points at recourse which can be seen as a feedback loop.  We
need a way to punish those doing a bad job.  Now, this was impossible
with the CAs because the only punishment allowed was to drop the CA from
the root list, and this was too big to work effectively.  This was all
known in advance, we discussed it in Mozo forum, and we actually did get
some better ideas in place such as rules for dropping the CA, but still
not enough to make the feedback loop work (for which we can thank
CABForum, who isolated and destroyed the opportunities for feedback).


 In this context, we would claim that users b-trust because they know
 they can switch.  With browsers they cannot switch.

 Their choice is to transmit private information using their browsers.
 Their choice is to not participate in e-commerce.
 
 Right, there is always in economics some form of substitute.  But
 actually we've probably moved beyond that as a society.
 
 I would say that e-commerce is utility grade now, so it isn't a
 choice you can really call a choice in competition terms.
 
 I agree that the behavior in b-trust must be about “choice behavior”
 in that Ann behaves one way instead of another.
 
 But I don’t think that we should have some minimal threshold of choice
 before can call the behavior b-trust. As long as there is some
 non-zero amount of choice the behavior (in these cases) will exhibit
 a non-zero amount of trust.
 
 For me the sentence, “I had little choice but to trust X” is perfectly
 coherent.


Yes, that still works.  It is when it goes to no choice that it fails.
 For example, I have no choice but to use my browser for online banking.
 I'm too far from a branch, and their phone service is mostly about
telling me how to use the browser.


 Is it possible that you are letting your righteous anger at what
 browser vendors have done interfere with how you are defining “trust”?


Indeed, this is always possible.  If you ask anyone at the vendors, I'm
sure they'll dismiss it all as righteous anger, and why doesn't he just
write patches instead?

There is a curious parallel with web-PKI in the Wall Street / financial
crisis.  You have there a dominating cartel of huge players that
successfully changed the rules to suit themselves (dropping of
Glass-Steagall) purchasing of the regulators (revolving doors) and
riding the wave of an innovation (securitization) all the way to doom.

Now if you look at it in a structural sense, the debt overhang has
broken the strength of the banking system.  It's in deadly embrace;
banks won't let the regulators or the prosecutors or the public do
anything to clear out the debris, so here we sit, in the middle of a
Japan-style lost decade.

It's uncanny.  Practically every structural element is the same between
web-PKI and wall street.  And, lots of righteous anger too...

http://www.nytimes.com/2014/05/04/magazine/only-one-top-banker-jail-financial-crisis.html


 All I’m asking is that we consider the people we are asking to
 “b-trust” the system. Can we build a system that is b-trustworthy
 for the mass of individuals who are not going to make c-trust
 judgements.


 Right, this is the question, how do we do that?

 That is what Certificate Transparency and Perspectives seek to do, as
 well as other thoughts.  First they make the c-trust available by
 setting up alternate groups and paths. Then the c-trusters develop their
 followings of b-trusters.
 
 I agree with that last bit. In a sense, if people see that experts trust
 the system they will too. But how will this play out with Certificate
 Transparency for most users? What do they actually need to know and do
 to follow some c-trusters?


Most users will follow the c-trust shipped with their browsers.


 There likely needs to be a group of c-trusters in the middle
 that mediate the trust of the b-trusters.
 
 And how will that work without putting unrealistic expectations on
 the vast major of users. How do they pick which c-trusters to trust?


If the system is put in place to allow a variation to be set up, then I
suspect

Re: [cryptography] Request - PKI/CA History Lesson

2014-05-02 Thread ianG
On 2/05/2014 13:06 pm, Marcus Brinkmann wrote:
 On 05/01/2014 10:25 AM, Ben Laurie wrote:
 On 1 May 2014 08:19, James A. Donald jam...@echeque.com wrote:
 On 2014-04-30 02:14, Jeffrey Goldberg wrote:

 On 2014-04-28, at 5:00 PM, James A. Donald jam...@echeque.com wrote:

 Cannot outsource trust  Ann usually knows more about Bob than a
 distant
 authority does.


 So should Ann verify the fingerprints of Amazon, and Paypal herself?


 Ann should be logging on by zero knowledge password protocol, so that
 the
 entity that she logs on to proves it already knows the hash of her
 password.

 EXACTLY!!!

 ZKPP has to be in the browser chrome, not on the browser web page.

 This seems obvious, but experiments show users do not understand it.
 We have yet to find a satisfactory answer to a trusted path for
 ordinary users.
 
 So where it really mattered we got two-factor authentication (by mobile
 phone) instead.


Right, the European solution, send the Tx to the phone by SMS and get
the user to type in a code that authenticates that Tx only.


Pop quiz:  does the SMS/Tx system still work if we strip HTTPS off the
browser?
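
The zero-knowledge password idea quoted above -- the server proving it
already knows (a hash of) Ann's password before she enters anything --
can be caricatured as a challenge-response.  This toy is for
illustration only; a real ZKPP/PAKE such as SRP or OPAQUE also resists
replay, offline guessing, and man-in-the-middle:

```python
# Toy sketch: the server demonstrates knowledge of the stored password
# hash in response to a fresh challenge from Ann's browser.  Not a real
# ZKPP/PAKE -- illustrative only.
import hashlib
import hmac
import os

password = "correct horse battery staple"
stored = hashlib.sha256(password.encode()).digest()   # what the server holds

# Ann's browser sends a fresh random challenge.
challenge = os.urandom(16)

# The server proves it knows the stored hash, keyed on the challenge.
server_proof = hmac.new(stored, challenge, hashlib.sha256).digest()

# Ann recomputes the same value from her password and compares in
# constant time; a phishing site that never saw her hash cannot answer.
expected = hmac.new(hashlib.sha256(password.encode()).digest(),
                    challenge, hashlib.sha256).digest()
assert hmac.compare_digest(server_proof, expected)
```

The practical objection in the quote stands regardless of the protocol
chosen: the exchange must live in the browser chrome, not in a web page,
or the page itself can fake the whole ceremony.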


 I like the trade-off.  Using another untrusted path on
 a different network and machine for a probabilistic guarantee seems more
 reasonable to me than trying to build a trusted path on a single
 machine, which was ambitious at the best of times, before we knew for a
 fact that we can not trust a single embedded integrated circuit in any
 device in the world.  And that is not even considering the usability and
 accessibility issues of all the fancy trusted path solutions that I've
 seen.
 
 Security researchers can not even guarantee that the status light of the
 camera is on when it is recording images.



iang


Re: [cryptography] Request - PKI/CA History Lesson

2014-05-02 Thread ianG
On 2/05/2014 13:42 pm, Marcus Brinkmann wrote:
 On 05/02/2014 01:33 PM, ianG wrote:
 For me the sentence, “I had little choice but to trust X” is perfectly
 coherent.


 Yes, that still works.  It is when it goes to no choice that it fails.
   For example, I have no choice but to use my browser for online banking.
   I'm too far from a branch, and their phone service is mostly about
 telling me how to use the browser.
 
 We must live in very different parts of the world, though.


We do.  But to some extent it is a constructed example.  Point being
that choice is not always there, and it's not always easy to isolate
quite whether choice is sufficient or not.

Which means it is easy to manipulate.

Which means that if you are in Germany, it probably makes little sense.
 Whereas if you are in US of A, it probably is a done deal that the bank
is trying to manipulate you to be stuck in an unfair deal.


 In Germany,
 if I am doing online-banking, I have to follow the rules set by the
 bank.  The bank requires me not to pass the PIN to anybody, to check the
 browser status bar, to protect my TAN list, etc.  All that good stuff.
 
 But I don't have to trust it.  When I follow the rules, and my money is
 stolen, the bank has to put up for it.  I am in the clear (minus the
 paperwork).
 
 So, I don't have to trust it, I just have to use it as it is provided to
 me.  Moral dilemma avoided.


You have recourse, right?

In UK, there is a case where the bank checked a transaction, and
discovered that the person trying to make a transaction (buying a rolex
in a jeweler's shop) provided unsure answers to the questions.  E.g., in
answer to how long have you had the account? he answered all my
life.  The correct answer was 4 years.

The bank let the transaction happen, it was fraud.  The judge and the
appeal court both ruled the bank had done the right thing.

http://financialcryptography.com/mt/archives/001478.html

So yeah, people live in different worlds.


 For the bank, the story is a different one altogether.  They don't care
 about IT security, or security research, or PKI, or CA, or browsers, or
 the users, or the meaning of the word trust.  They care about profit
 margins and fraud quota, and if the fraud gets too much they ask a
 simple question: What can we do that costs us as little as possible to
 get the fraud quote down to the X percent that we allow?  And if that
 means bumping the key size from 1024 to 1025 bits, then we get 1025 bits
 until the next bump.
 
 So, frankly, what's the big deal?


I was there when the MITB thing swept through the European banking
scene.  There was outright fear in the banks.  They were terrified.  But
in the end, they knuckled down and pushed out the two-factor thing that
you mentioned earlier.

http://financialcryptography.com/mt/archives/000758.html

The point is:  *the European banks responded*.  They have a feedback
loop.  They took responsibility.

E.g. (2), there is no phishing in Europe, more or less.  Why is that?

Over in USA, no such.  That's the big deal.  Where is web-PKI done?  In
the USA, according to USA rules, USA thinking, and USA-style liability
dumping.


 We have credible end-to-end security
 story lines if your life depends on it (ask Snowden).  For everything
 else, we have a bunch of patchworks, and insurances, and adjustable
 tolerances to protect against fraud.  Not absolutely, but enough to keep
 the machine running.  From a manager perspective, all is good and dandy,
 and nevermind the pain that is endured by the workers in the engine room.
 
 As long as you live in a country that makes the people responsible for
 the system pay for any damages, it's just not that big a deal,


That point, right there.


 unless
 you are passionate about IT security, or are suffering from some other
 illness to similar effect :).


:)

iang



ps; by Europe, I mean the geographically connected part, not the
fogginess over the channel.


Re: [cryptography] Request - PKI/CA History Lesson

2014-04-30 Thread ianG
On 30/04/2014 02:57 am, Jeffrey Goldberg wrote:
 Hi Ian,
 
 I will just respond to one of the many excellent points you’ve made.


Super, thanks!

 On 2014-04-29, at 12:12 PM, ianG i...@iang.org wrote:
 
 On 29/04/2014 17:14 pm, Jeffrey Goldberg wrote:
 People do trust their browsers and OSes to maintain a list of trustworthy 
 CAs.

 No they don't.  Again, you are taking the words from the sold-model.
 
 I will explain my words below.
 
 People don't have a clue what a trustworthy CA is, in general.
 
 I emphatically agree with you. I hadn’t meant to imply otherwise.
 
 I have been using “trust” in a sort of behavioral way. For the sake of the
 next few sentences, I’m going to introduce some terrible terminology. 
 “b-trust” is my “behavioral trust” which will defined in terms of “c-trust” 
 (“cognitive”).
 
 So let’s say that A c-trusts B wrt to X when A is confident that B will act 
 in way X. (Cut me some slack on “act”). A “b-trusts” B wrt to X when she 
 behaves as if she c-trusts B wrt to X.
 
 So when I say that users trust their browsers to maintain a list of 
 trustworthy CAs, I am speaking of “b-trust”.  They may have no conscious idea 
 or understanding what they are actually trusting or why it is (or isn’t) 
 worthy of their trust. But they *behave* this way.


Right, but this is very dangerous.  You have migrated the meaning of X
in the conversation.

Users trust their browsers to do the right thing by security.

Browsers trust their CAs to do the right thing by their ID verification.

This does not mean that users trust their browsers to maintain a list of
trustworthy CAs.

Trusting the browsers to do the right thing also includes the
possibility that the browsers throw the lot out and start again.  Or
drop some CAs from the list, which they only do with small weak players
that won't sue them.



Also, one has to again refer to the nature of trust.  It's a
choice-based decision.  Trust is always founded on an ultimate choice.

In this context, we would claim that users b-trust because they know
they can switch.  With browsers they cannot switch.  There isn't a
browser that will offer a different model (they cartelised in 1994,
basically).  And there isn't a browser vendor that will take user input
on this question.

So there is no choice for the user.  Therefore, we need a new form;
m-trust perhaps?  Mandated-trust?  I don't know how far we want to go
into the doublespeak to interpret this, point being that m-trust
excludes {b,c}-trust by its nature.

Also, if you asked users whether they trust the browsers to secure their
connections to the online banks, then I'd reckon you'd have a bit more
of an uphill battle.  It isn't done and users know it isn't done, thanks
to phishing.  Users now use more than one browser, not because one does
a better job but because they are diversifying their risks:  online
banking on one browser, the rest on another.

Which is where it gets more dangerous:  we can frame the question to
gain the answer we want; but who are we framing the result for ?


 A vampire bat may b-trust that its roost mates will give it a warm meal if 
 necessary. Life is filled with such trust relations even where there is no 
 c-trust. 

Yes, and a vampire bat may choose which mate to get the meal from;  it
has choice.

 (c.f., the *real meaning of trust* being a human decision to take a risk
 on available information.)
 
 Which is what I am talking about. And I’m talking about it because it is what 
 matters for
 human behavior. And I want a system that works for humans.
 
 I see that you’ve written on financial cryptography. Well, think about how 
 conventional currency works. For all its problems currency works, and it is a 
 system that requires “trust”. But only a negligible fraction of the people 
 who successfully use the system do so through c-trust.

Right.  Now add in hyperinflation to the mix;  how many people really
trust their governments to not hyperinflate?  Only ones with no
collective history of it.


 It may well be that all of the problems with TLS are because the system is 
 trying to work for agents who don’t understand how the system works. But, as 
 I said at the beginning, that is the world we are living in.


Right, we're certainly in the world we are in.  However, the problem
with this particular world is that it uses a language that is
'constructed' to appear to require this particular solution.  In order
to find better solutions we have to unconstruct the constructions in the
language, so as to see what else is possible.



iang

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Request - PKI/CA History Lesson

2014-04-29 Thread ianG
On 29/04/2014 07:41 am, Ryan Carboni wrote:
 the only logical way to protect against man in the middle attacks would
 be perspectives (is that project abandoned?) or some sort of distributed
 certificate cache checking.
 
 because that's the only use of certificates right?


Well.  Certificates define their MITM as being the sort they can protect
against, sure.

 to protect against man in the middle?

Certs don't defend against *the MITM*, they only defend against _their
MITM_.  Subtly different:  the MITM known as phishing is more or less
unprotected.

What to make of this?  Security economics:  there is zero point in
investing anything against a form of MITM that is known to be so rare as
to be statistically unmeasurable, even in unprotected environments, when
there is another form of MITM that has clocked up billions in measurable
losses.

But jobs depend on that not being true, so it isn't.

iang



Re: [cryptography] Request - PKI/CA History Lesson

2014-04-29 Thread ianG
Hi Jeffrey,


On 29/04/2014 17:14 pm, Jeffrey Goldberg wrote:
 On 2014-04-28, at 5:00 PM, James A. Donald jam...@echeque.com wrote:
 
 Cannot outsource trust.  Ann usually knows more about Bob than a distant 
 authority does.
 
 So should Ann verify the fingerprints of Amazon, and Paypal herself? How do 
 you see that working assuming that Ann is an “ordinary user”?


First, do a proper security analysis;  don't accept some marketing dross
from the sellers of stuff.

If you look at the history of web commerce, there is nothing there that
supports the notion that the in-protocol MITM is a risk to be mitigated.

Even if you look at close analogues, the support is not there.  And, if
you look at the rest of the equation -- humans, banks, stores, remember
them? -- you find they don't care either.  That's because they're all
ready for chargebacks, and always have been so Alice has no problem,
ever.  She does not *ever* need to worry about fingerprints.

Then, what are they worried about?  Mass raids of databases, that's
what.  By far the #1.  The next issue, way behind, is phishing, the
other MITM.  (Which again they do little about.)

It turns out -- and early simple analysis suggested -- that an
in-protocol MITM is the worst possible attack, it's daft to an
extraordinary level, and only security experts ever worry about it.

Conclusion?  Strawman.  A real security analysis reveals all this.

Question then, is where did the notion come from that you HAVE to defend
yourself from the evil in-protocol MITM?  Why are we all terrified?


 This is exactly the kind of thing I was complaining about in my earlier 
 comment. There are burdens that we cannot push onto the user.
 
 People do trust their browsers and OSes to maintain a list of trustworthy CAs.


No they don't.  Again, you are taking the words from the sold-model.
People don't have a clue what a trustworthy CA is, in general.  That's
because the same model hid it, and is still hiding it.  Have a look at
amazon today -- look Ma, no CA in sight.

The day the CA is in sight, the users might care.  Until then
they don't know, so they cannot possibly trust.

(c.f., the *real meaning of trust* being a human decision to take a risk
on available information.)


 Sure, we might have the occasional case where some people manually remove or 
 add a CA. But for the most part, we’ve outsourced trust to the browser 
 vendors, who have outsourced trust to various CAs, etc.


We the users have done nothing of the kind.

Browsers have done what they've done, and you could claim that the
browsers trust the CAs.  Maybe.  More so these days coz they actually do
something about it, in CABForum, less so before then, before Mozo policy.

 I am not saying that the system isn’t fraught with serious problems. I’m 
 saying that at least it tries
 to work for ordinary users.


Well.  It tries to not interfere with ordinary users.  In terms of
working, one would need to establish the tangible benefit...

  A certificate authority does not certify that Bob is trustworthy, but that 
 his name is Bob.
 
 Yes, of course. Back in the before time (1990s), I had feared that this was 
 going to be a big problem. That people would take the take “trust the 
 authenticity” of a message to be “trust the veracity” of the message. But as 
 it turns out, we haven’t seen a substantially higher proportion of fraud of 
 this nature than in meatspace. I think it is because reputations are now so 
 fragile.


That last comment.  Yes, either the system worked, or the system never
worked, and wasn't needed.

http://financialcryptography.com/mt/archives/001255.html

Show which?  The more things you do to it while discovering that nothing
changes, the more evidence for the latter.

iang


Re: [cryptography] Request - PKI/CA History Lesson

2014-04-29 Thread ianG
On 29/04/2014 19:02 pm, Greg wrote:

 I'm looking for a date that I could point to and call the birth of
 modern HTTPS/PKI.
 
 There is the Loren M Kohnfelder thesis from May of 1978, but that's not
 quite it because it wasn't actually available to anyone at the time.
 
 Perhaps an event along the lines of first modern HTTPS implementation
 in a public web browser was released, or something like that.
 
 Any leads? Maybe something from Netscape's history?


Yes, 1994, when Netscape invented SSL v1.  Which had no MITM support,
which was then considered to be a life and death issue by RSADSI ...
which just happened to have invested big in a thing called X.509.  And
the rest is history.

Some commentary here, which is opinion not evidence.

http://financialcryptography.com/mt/archives/000609.html

iang


Re: [cryptography] Request - PKI/CA History Lesson

2014-04-28 Thread ianG
On 28/04/2014 20:58 pm, Ryan Carboni wrote:
 We happen to live on a planet where most users are ordinary users.
 
 
 given the extent of phishing, it's probably best we outsource trust to
 centralized authorities.


*cough*  it's them that have shown themselves totally incapable of doing
anything about it.  Indeed, it's them that stopped others doing anything
about it.


 Although it should be easier establishing your own certificate authority.


Oh, they fixed that too :)



iang


Re: [cryptography] Request - PKI/CA History Lesson

2014-04-28 Thread ianG
On 29/04/2014 00:12 am, Ryan Carboni wrote:
 trust is outsourced all the time in the non-cryptographic world

trust is built up all the time, risks are taken all the time, choice is
taken all the time.

 unless you do not have a bank account

That's not outsourced, that's direct, person to bank, the person has a
choice, chooses to place her trust in that bank.  Also, it is limited to
defined things that are required, can't be done by the person, and
bolstered by real backing such as FDIC.

When you suggest it's probably best we trust authorities that is
CA-playbook crapola meaning you must trust the authorities that have
been picked for you.  The vector has been reversed, people are told
what has to happen, so there is no trust.

Trust derives from choice.  Where is the choice?

iang



 On Mon, Apr 28, 2014 at 3:00 PM, James A. Donald jam...@echeque.com
 mailto:jam...@echeque.com wrote:
 
 On 2014-04-29 05:58, Ryan Carboni wrote:
 
 We happen to live on a planet where most users are ordinary
 users.
 
 
 given the extent of phishing, it's probably best we outsource
 trust to
 centralized authorities.
 Although it should be easier establishing your own certificate
 authority.
 
 
 Cannot outsource trust  Ann usually knows more about Bob than a
 distant authority does.  A certificate authority does not certify
 that Bob is trustworthy, but that his name is Bob.
 
 In practice, however we find that diverse entities have very similar
 names, and a single entity may have many names.
 
 
 



Re: [cryptography] Request - PKI/CA History Lesson

2014-04-28 Thread ianG
On 29/04/2014 01:20 am, Ryan Carboni wrote:
 One can always start with the difficult first step of uninstalling
 certificate authorities you do not trust.

Yup.  And if you don't like your country, you can hand in your passport
on the way out.

Marketing lies aside, it is clear that the ordinary user has no choice.

iang

 On Mon, Apr 28, 2014 at 4:42 PM, ianG i...@iang.org
 mailto:i...@iang.org wrote:
 
 On 29/04/2014 00:12 am, Ryan Carboni wrote:
  trust is outsourced all the time in the non-cryptographic world
 
 trust is built up all the time, risks are taken all the time, choice is
 taken all the time.
 
  unless you do not have a bank account
 
 That's not outsourced, that's direct, person to bank, the person has a
 choice, chooses to place her trust in that bank.  Also, it is limited to
 defined things that are required, can't be done by the person, and
 bolstered by real backing such as FDIC.
 
 When you suggest it's probably best we trust authorities that is
 CA-playbook crapola meaning you must trust the authorities that have
 been picked for you.  The vector has been reversed, people are told
 what has to happen, so there is no trust.
 
 Trust derives from choice.  Where is the choice?
 
 iang
 
 
 
  On Mon, Apr 28, 2014 at 3:00 PM, James A. Donald
 jam...@echeque.com mailto:jam...@echeque.com
  mailto:jam...@echeque.com mailto:jam...@echeque.com wrote:
 
  On 2014-04-29 05:58, Ryan Carboni wrote:
 
  We happen to live on a planet where most users are
 ordinary
  users.
 
 
  given the extent of phishing, it's probably best we outsource
  trust to
  centralized authorities.
  Although it should be easier establishing your own certificate
  authority.
 
 
  Cannot outsource trust  Ann usually knows more about Bob than a
  distant authority does.  A certificate authority does not certify
  that Bob is trustworthy, but that his name is Bob.
 
  In practice, however we find that diverse entities have very
 similar
  names, and a single entity may have many names.
 
 
 



Re: [cryptography] Request - PKI/CA History Lesson

2014-04-27 Thread ianG
On 25/04/2014 16:36 pm, Jeffrey Goldberg wrote:
 On 2014-04-25, at 4:09 AM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
 
 http://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf
 
 In which Peter says:
...
 I hated X.509 when it was first being introduced, and much preferred PGP’s 
 “Web of Trust”. I still hate X.509 for all of the usual reasons, but I now 
 have much more sympathy for the design choices. It fails at its goal of not 
 demanding unrealistic things from ordinary users, but at least it attempts 
 to do so.


There is a slight problem with goals here.  PKI was never designed for
ordinary users.  If you read the original documentation of how PKI was
organised before the web-PKI was invented, it talks about how each
relying party has to enter into a contract and verify that the CPS
provides the answer they are looking for.

In this context, it was reasonable to talk about the relying party
trusting the results, because they had actually gone through the process
of developing that trust.  According to the theory.

When they did the web-PKI however they threw away all of the reliance
contract requirements, or buried them, but kept the language of trust.
As you point out, they had to do this because ordinary users won't go
through the process of CPS and contract review.

So the result was trust-but-no-trust.  We are not using PKI as it was
designed and theorised.  We're using some form of facade that originally
had no proper contractual basis.  The contracts are being sorted out
now, over the last 5 years or so, in secret, but the joke of course is
that we still all believe that we're using trust and PKI and so forth
when none of that really applies.

iang



Re: [cryptography] Request - PKI/CA History Lesson

2014-04-27 Thread ianG
On 25/04/2014 18:40 pm, Tony Arcieri wrote:
 On Fri, Apr 25, 2014 at 3:10 AM, ianG i...@iang.org
 mailto:i...@iang.org wrote:
 
 Worse, consider Firefox's behaviour:  it considers a certificate-secured
 site, such as a self-cert'd site, to be dangerous, but it does not
 consider an HTTP site to be dangerous.  So it tells the user HTTP is
 safe, whereas an attempt to secure means that the user is being robbed!
 
 
 I actually brought this up with one of Chrome UX engineers, specifically
 how to Joe User the address bar makes it appear that plaintext HTTP is
 more secure than HTTPS with an untrusted cert. While one is MitM-able by
 an active attacker, the other is most certainly being passively MitMed
 by someone! :O
 
 The response was that users have an expectation of security when using
 HTTPS that they don't with HTTP, but I wonder, how many people just
 think they're safe because of the absence of scary warning signs and
 have no idea what HTTP vs HTTPS actually means?


Right, that is their logic, and as usual it depends on their rather
unique and personal assumptions which they are incapable of discussing.

We know from phishing and from research that people do not have a
reliable knowledge of whether they are in HTTP or HTTPS in the first place.

We also know that the prevalence of scary warnings that are false
positives is O(100) times that of true positives, and from statistics,
this means that users are trained to click-thru scary warnings, and will
miss any true positive.  Hence click-thru syndrome.

We also know, per Kerckhoffs' sixth principle, that if the system is
complicated, users will choose to turn it off and go naked.

So the 'expectation' which the developers imagine they are trying to meet
is rather hopeful at best, cognitive dissonance in the middle,
and negligence at the sharp end.  Yes, us lot here know about it.  Yes,
developers know about it.

But the users?  Not a lot of hope there, not enough to build a PKI
promise on.


 I think plaintext HTTP should show an lock with a big no sign over it
 or something to highlight to users that the connection is insecure.

I think colours are fine.  White for HTTP.  Light Blue for CA-HTTPS,
Green for EV, and Light Pink for non-CA-HTTPS.

But the point of the above mis-expectations is that it is aligned with
CA notions of selling more certs.  A self-signed cert is to them a lost
CA-cert sale, so must be attacked.  The fact that most CAs haven't the
first clue about marketing (a rising tide lifts all boats) is a rabbit
hole we'll refrain from today.



iang


Re: [cryptography] [Cryptography] Improving the state of end-to-end crypto

2014-04-27 Thread ianG
On 27/04/2014 18:33 pm, Ben Laurie wrote:
 We are hiring to improve the state of end-to-end crypto:
 
 http://www.links.org/files/SimplySecureProgramDirectorJobPosting.pdf
 http://www.links.org/files/SimplySecure.pdf

To paraphrase, work with ... Advisory Board, developer communities,
academics, funders, civil society, private partners, existing contacts
-­­ yours and others’ -­­ developers, designers, academics,
complementary efforts, security experts, academics, and partners,
auditors, conferences, venues,...



Everyone *but the users* !!  Shake it up, Ben.  You can't improve the
lot of the users unless you actually meet some of them.



iang


Re: [cryptography] OT: Speeding up and strengthening HTTPS connections for Chrome on Android

2014-04-26 Thread ianG
On 26/04/2014 02:15 am, grarpamp wrote:
 On Fri, Apr 25, 2014 at 5:36 PM, ianG i...@iang.org wrote:
 On 25/04/2014 22:14 pm, Jeffrey Walton wrote:
 Somewhat off-topic, but Google took ChaCha20/Poly1305 live.
 http://googleonlinesecurity.blogspot.com/2014/04/speeding-up-and-strengthening-https.html
 
 ... It also *does not support any cipher suite negotiation*,
 instead it always uses a fixed suite (the current
 implementation[2] uses ECDHE-Curve25519-Chacha-Poly1305).
 
 Where is this last bit quoted from? The full suite as (pictured) in
 the blog is: ecdhe_rsa_chacha20_poly1305.


Full post was this one, apologies for the segue into an entirely different
venture:

http://www.metzdowd.com/pipermail/cryptography/2014-April/021131.html

From Guus.

iang



Re: [cryptography] Request - PKI/CA History Lesson

2014-04-25 Thread ianG
On 16/04/2014 16:30 pm, Jason Iannone wrote:
 The more I read, the more bewildered I am by the state of the PKI.

No, not nearly enough:

http://iang.org/ssl/pki_considered_harmful.html
http://iang.org/ssl/


 The trust model's unwieldy system[1] of protocols, dependencies, and
 outright assumptions begs to be exploited.  Add to that the browser
 behavior for a self-signed certificate (RED ALERT! THE SKY IS
 FALLING!) compared to a trusted site and we're in bizarro world.


Worse, consider Firefox's behaviour:  it considers a certificate-secured
site, such as a self-cert'd site, to be dangerous, but it does not
consider an HTTP site to be dangerous.  So it tells the user HTTP is
safe, whereas an attempt to secure means that the user is being robbed!

Go figure...

Worse still, Firefox actually deceives and lies about the status of good
certificates.  If there is an ordinary SSL site, it shows it as white,
same as HTTP.  Icons and indicators are downplayed, lost in the noise.

Worse again:  If you click on the icon to ask, it says you are
connected to www.example.com which is run by ( *UNKNOWN* ) even though
the browser has a certificate that states clearly who runs the site.
Try this site which is run by Google, as it says in the cert:

https://developer.android.com/

Looking deeper it states:

   Owner:  This website does not supply ownership information.

One can only assume Firefox is upselling you to green certs, but lying
and deceiving in the process.  Chrome says something different, which I
don't understand, but it doesn't seem to be quite so blatant.

Is there any wonder nobody trusts any of it?


 I'd rather we close the gap and appreciate a secure transaction with
 an unauthenticated party than proclaim all is lost when a self-signed
 key is presented.  I see no reason to trust VeriSign or Comodo any
 more than Reddit.  Assuming trust in a top heavy system of Certificate
 Authorities, Subordinate Certificate Authorities[2], Registration
 Authorities, and Validation Authorities[3] in a post bulk data
 collection partnership world is a non-starter.  The keys are
 compromised.
 
 With that, I ask for a history lesson to more fully understand the
 PKI's genesis and how we got here.  Maybe a tottering complex
 recursive hierarchical system of trust is a really great idea and I
 just need to be led to the light.


Sigh.  You're thinking of it as a hierarchy of trust.  That isn't what
it is.  There's no trust anywhere in the system, even the word 'trust'
as used means a mandated obligatory acceptance, not trust as humans know it.


 [1]http://csrc.nist.gov/publications/nistpubs/800-15/SP800-15.PDF,
 http://csrc.nist.gov/publications/nistpubs/800-32/sp800-32.pdf
 [2]https://www.eff.org/files/DefconSSLiverse.pdf,
 https://www.eff.org/files/ccc2010.pdf
 [3]http://en.wikipedia.org/wiki/Public-key_infrastructure


I just ate breakfast, no thanks :(



iang



Re: [cryptography] [Cryptography] Is it time for a revolution to replace TLS?

2014-04-25 Thread ianG
On 15/04/2014 21:07 pm, d...@deadhat.com wrote:
 http://clearcryptocode.org/tls/

 Probably not going to happen, but it's nice to dream...

 
 It is one of my long term, implausible goals to replace TLS with a
 collection of independent app to app function-targeted security protocols
 that are individually simple enough to understand and implement cleanly. I
 will certainly fail.


It's certainly possible.  It's more or less what I do.  Adoption and
generating the commercial feedback cycle to finance the programmers is
the problem, not the technology.


 E.G.
 For paying with a credit card.. A secure credit card payment protocol
 
 For authenticating a web site and producing keys to bind .. A web page
 authentication protocol.
 
 For remotely logging into a shell producing keys to bind .. A secure shell
 login protocol.
 
 There are many more possibilities.
 
 Today, SSL and TLS with all that entails (ASN.1, X.509, PKCS, TCP/IP etc.)
 is the hammer and any securable thing is the nail. But it's really a
 client-server session privacy and integrity protocol with issues. It isn't
 designed to protect my banking transactions, just the traffic over which I
 communicate my transaction intent. If I had a secure bank transaction
 protocol independent of TLS, heartbleed wouldn't matter.
 
 A classic mismatch between TLS and its primary use securing web traffic is
 the failure of a virtual server to be able to produce the right cert for
 the right virtual web site. The cert is really identifying the TLS
 termination point which may be a web server, rather than a web site, of
 which the server may be serving many. That's one reason why a web-site
 security protocol would be more effective than TLS plumbed under HTTP.
 
 TLS does need nuking so we can replace it with simpler things. The
 sentiment isn't wrong, it's just hard to pull off.


For amusement, someone pointed me at the tcpcrypt group on the IETF
sites.  So I spend some days reading and got dragged into conversation.

It's weird, I don't think I could design a more flawed process if I
tried.  But the good thing is that while the IETF working groups are
focussed on breaking TCP, others are working to replace it.

The question is, how to best replace it?  Recent discussions indicate
that there are many ways to do this, and the space defies easy
cataloging.  Which means that the formal, committee, standards,
consensus group approaches won't work.

How then to replace?  And indeed is it a replacement or a bypass, an
evolution or a revolution?


Re: [cryptography] OT: Speeding up and strengthening HTTPS connections for Chrome on Android

2014-04-25 Thread ianG
On 25/04/2014 22:14 pm, Jeffrey Walton wrote:
 Somewhat off-topic, but Google took ChaCha20/Poly1305 live.
 
 http://googleonlinesecurity.blogspot.com/2014/04/speeding-up-and-strengthening-https.html
 
 Earlier this year, we deployed a new TLS cipher suite in Chrome that
 operates three times faster than AES-GCM on devices that don’t have
 AES hardware acceleration, including most Android phones, wearable
 devices such as Google Glass and older computers. This improves user
 experience, reducing latency and saving battery life by cutting down
 the amount of time spent encrypting and decrypting data.
 
 To make this happen, Adam Langley, Wan-Teh Chang, Ben Laurie and I
 began implementing new algorithms -- ChaCha 20 for symmetric
 encryption and Poly1305 for authentication -- in OpenSSL and NSS in
 March 2013. It was a complex effort that required implementing a new
 abstraction layer in OpenSSL in order to support the Authenticated
 Encryption with Associated Data (AEAD) encryption mode properly. AEAD
 enables encryption and authentication to happen concurrently, making
 it easier to use and optimize than older, commonly-used modes such as
 CBC. Moreover, recent attacks against RC4 and CBC also prompted us to
 make this change.
 
 ...
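The AEAD property described in the quoted post -- one tag authenticating the associated data and the ciphertext together, so encryption and authentication can't drift apart -- can be sketched with a toy construction.  This is emphatically *not* ChaCha20-Poly1305:  the keystream and MAC below are stdlib stand-ins, and all names are invented for illustration.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode (a stand-in, NOT ChaCha20).
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, nonce: bytes, plaintext: bytes, aad: bytes) -> bytes:
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(key, nonce, len(plaintext))))
    # One tag covers the AAD, the nonce and the ciphertext together:
    # that binding is the property AEAD modes guarantee.
    tag = hmac.new(key, aad + nonce + ct, hashlib.sha256).digest()
    return ct + tag

def open_box(key: bytes, nonce: bytes, sealed: bytes, aad: bytes) -> bytes:
    ct, tag = sealed[:-32], sealed[-32:]
    want = hmac.new(key, aad + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, want):
        raise ValueError("authentication failed")  # reject before decrypting
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

key, nonce = os.urandom(32), os.urandom(12)
boxed = seal(key, nonce, b"attack at dawn", b"record-header")
assert open_box(key, nonce, boxed, b"record-header") == b"attack at dawn"
```

Tampering with either the ciphertext or the AAD makes `open_box` refuse to decrypt, which is what removes the CBC-style padding-oracle and MAC-ordering pitfalls mentioned above.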


Progress for OpenSSL!  Here's hoping they also see the light and drop
every other ciphersuite as fast as they can.

 We hope there will be even greater adoption of this
 cipher suite, and look forward to seeing other websites
 deprecate AES-SHA1 and RC4-SHA1 in favor of AES-GCM and
 ChaCha20-Poly1305 since they offer safer and faster
 alternatives.


Close!  2 is so much closer to 1, it's even O(1).

iang

ps;  obligatory toot:
http://iang.org/ssl/h1_the_one_true_cipher_suite.html

pps;  Google, take your lead from Guus:

 ... It also *does not support any cipher suite negotiation*,
 instead it always uses a fixed suite (the current
 implementation[2] uses ECDHE-Curve25519-Chacha-Poly1305).

The man!


[cryptography] xkcd on Heartbleed

2014-04-24 Thread ianG
XKCD strikes again:

https://xkcd.com/1354


Re: [cryptography] NSA Said to Exploit Heartbleed Bug for Intelligence for Years

2014-04-12 Thread ianG
On 11/04/2014 19:36 pm, Arshad Noor wrote:
 On 04/11/2014 03:51 PM, ianG wrote:
 On 11/04/2014 17:50 pm, Jeffrey Walton wrote:
 http://www.bloomberg.com/news/2014-04-11/nsa-said-to-have-used-heartbleed-bug-exposing-consumers.html


 The U.S. National Security Agency knew for at least two years about a
 flaw in the way that many websites send sensitive information, now
 dubbed the Heartbleed bug, and regularly used it to gather critical
 intelligence, two people familiar with the matter said.

 1.  score 1 up for closed source.  Although this bug would as equally
 exist in closed source, the likelihood of discovery, publication and
 exploitation is much lower.
 
 Isn't that a naive assumption?  Every US-based company that has anything
 to do with crypto has to send in their source-code to a special address
 before you can be granted a License Exception (US BIS rules) to export
 to foreign customers.  (The only exception is open-source - whose
 creators must still notify a special e-mail address about the new FOSS).
 In either case, NSA knows about it.


Well, 1. the whole world isn't the USA.  2. we have to differentiate
between NSA-as-existential-threat and the other one which is
hackers-as-people-who-steal-money.

 Is it any less worse that only the NSA might have exploited unknown
 loopholes than random attackers after your money?  They're undermining
 trust in the internet - which is now a multi-billion - perhaps even a
 trillion - dollar industry involving millions of jobs.  Given that the
 US is probably the largest creator of technology products, the end
 result is likely to be a boon for technology companies around the world
 as US jobs are lost due to lost exports.


Right.  Can you put a number on that?  And can you put a number on the
things that the other crooks do?  The latter is certainly true, there is
a big body of evidence that shows that money is being raided from the
Internet in a big way.  Nobody's ever put a number of any credibility on
the NSA damage.

Heartbleed is a big issue because it opens the door for massive robbery,
not because it gives the NSA 1 more trick to add to their other 100.  If
it was *just the NSA* then I'd recommend not re-rolling keys, because
only a tiny proportion of the public are targets, and they should know
who they are.

Open source makes this an *everyone at risk* problem.

 As I see it, only open-source software has a chance to be trusted since
 users can see what they're deploying; of course, it has to be verified,
 but that was always true.


That's why I said score 1 and not this is the end of the debate.
It's complicated, there are many factors involved.



iang


Re: [cryptography] NSA Said to Exploit Heartbleed Bug for Intelligence for Years

2014-04-11 Thread ianG
On 11/04/2014 17:50 pm, Jeffrey Walton wrote:
 http://www.bloomberg.com/news/2014-04-11/nsa-said-to-have-used-heartbleed-bug-exposing-consumers.html
 
 The U.S. National Security Agency knew for at least two years about a
 flaw in the way that many websites send sensitive information, now
 dubbed the Heartbleed bug, and regularly used it to gather critical
 intelligence, two people familiar with the matter said.


Bingo!  What lessons are we picking up from this?  Here's what I'm
feeling so far, flame away:

1.  Score 1 up for closed source.  Although this bug would just as
easily exist in closed source, the likelihood of discovery, publication
and exploitation is much lower there.

2.  Score another 1 up for interpreted languages that handle array
allocation cleanly.  This is more or less a buffer overflow, in a wider
sense.
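The over-read pattern is easy to sketch. Here is a toy Python model of it (not OpenSSL's actual code; the buffer contents, lengths and function names are made up for illustration): a handler that trusts the length field in the request leaks whatever sits next to the payload in memory, while a bounds check refuses.

```python
# Toy model of the Heartbleed pattern.  The vulnerable handler trusts
# the length claimed in the request instead of the actual payload size.
SERVER_MEMORY = bytearray(b"PAYLOADHERE" + b"secret-key-material-0123")

def heartbeat_vulnerable(claimed_len: int) -> bytes:
    # No check that claimed_len <= actual payload length: over-read.
    return bytes(SERVER_MEMORY[:claimed_len])

def heartbeat_fixed(claimed_len: int, actual_len: int) -> bytes:
    # A bounds-checked language (or the fixed handler) refuses instead.
    if claimed_len > actual_len:
        raise ValueError("heartbeat length exceeds payload")
    return bytes(SERVER_MEMORY[:claimed_len])

leak = heartbeat_vulnerable(35)   # asks for more than the 11-byte payload
print(b"secret" in leak)          # adjacent "memory" leaks out: True
```

In C the over-read walks off the end of a heap buffer; an interpreted language with checked array access would raise an exception at the same point, which is the substance of point 2.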

3.  We have evidence of NSA exploitation in the above, and there was
another prior indication that was suggested to be agency work.

https://www.eff.org/deeplinks/2014/04/wild-heart-were-intelligence-agencies-using-heartbleed-november-2013

4.  This should put to rest any silly claims that the NSA put the bug
into play themselves.  The programmer and the reviewer missed it.

5.  I've seen no evidence yet of attacker-inflicted damages, nor of new
exploits, but it's only been a week.


 The NSA’s decision to keep the bug secret in pursuit of national
 security interests threatens to renew the rancorous debate over the
 role of the government’s top computer experts.

6.  It is becoming clearer that the NSA's mission is offensive first,
defensive second, if ever.  They aren't our friends; they might be our
enemy.  This has impact on all sorts of cooperation questions (NIST, IETF).

 Heartbleed appears to be one of the biggest glitches in the Internet’s
 history, a flaw in the basic security of as many as two-thirds of the
 world’s websites.

7.  In contrast to damages, the rework bill is immense: all those sites
multiplied by the average refit cost.
http://mashable.com/2014/04/09/heartbleed-bug-websites-affected/
http://happyplace.someecards.com/30541/the-heartbleed-bug-which-sites-you-should-change-your-passwords-for-and-how-to-panic

Does anyone have a view as to the average cost to refit?




iang



Re: [cryptography] OTR and XMPP

2014-04-08 Thread ianG
On 8/04/2014 03:13 am, Pranesh Prakash wrote:
 Dear all,
 In the March IETF 89 meeting in London, there were renewed discussions
 around end-to-end encryption in XMPP.
 
 Here is the recording of the session:
 
 http://recordings.conf.meetecho.com/Recordings/watch.jsp?recording=IETF89_XMPPchapter=part_5
 
 
 There was basic agreement that OTR is a horrible fit for XMPP since it
 doesn't provide full stanza encryption.  The very reasons for the
 benefits of OTR (its ability to be protocol-agnostic) are the reasons
 for its shortfalls too.
 
 However, there is no clear alternative.  The closest is
 draft-miller-xmpp-e2e.  The one clear verdict was that more contributors
 are required.


Has anyone got a text-based summary of what this is about?   I'm happy
to read, but I find listening to recordings doesn't really work.


 The discussions are happening at:
 
 https://www.ietf.org/mailman/listinfo/xmpp
 http://mail.jabber.org/mailman/listinfo/standards
 
 If anyone has the time to make contributions, please do jump in (and
 spread the word).



iang


Re: [cryptography] [Cryptography] The Heartbleed Bug is a serious vulnerability in OpenSSL

2014-04-08 Thread ianG
On 7/04/2014 22:53 pm, Edwin Chu wrote:
 Hi
 
 A latest story for OpenSSL
 
 http://heartbleed.com/
 
 The Heartbleed Bug is a serious vulnerability in the popular OpenSSL
 cryptographic software library. This weakness allows stealing the
 information protected, under normal conditions, by the SSL/TLS
 encryption used to secure the Internet. SSL/TLS provides
 communication security and privacy over the Internet for
 applications such as web, email, instant messaging (IM) and some
 virtual private networks (VPNs).
 
 The Heartbleed bug allows anyone on the Internet to read the memory
 of the systems protected by the vulnerable versions of the OpenSSL
 software. This compromises the secret keys used to identify the
 service providers and to encrypt the traffic, the names and
 passwords of the users and the actual content. This allows attackers
 to eavesdrop communications, steal data directly from the services
 and users and to impersonate services and users.


We have here a rare case of a broad break in a security protocol leading
to compromise of keys.

While everyone's madly rushing around to fix their bits'n'bobs, I'd
encourage you all to be alert to any evidence of *damages*, either
anecdotal or more firm.  By damages, I mean (a) rework needed to
secure, and (b) actual breach into sites and theft of secrets, etc.,
leading to (c) theft of property/money/value, etc.

In risk analysis, we lean very heavily on firm indications of actual,
tangible damages, because risk analysis is an uncertain tool and the
security industry is a FUD-driven sector.  Where we have actual
experiences of lost money, time, destruction of property or whatever,
this puts us in a much better position to predict what is worth spending
money to protect.

E.g., if we cannot show any damages from this breach, it isn't worth
spending a penny on it to fix!  Yes, that's outrageous and will be
widely ignored ... but it is economically and scientifically sound, at
some level.

I maintain a risk history for the CA field here:
http://wiki.cacert.org/Risk/History.  If anyone can find any real
damages affecting the CA world, let me know!



iang



Re: [cryptography] [Cryptography] The Heartbleed Bug is a serious vulnerability in OpenSSL

2014-04-08 Thread ianG
On 8/04/2014 20:33 pm, Nico Williams wrote:
 On Tue, Apr 08, 2014 at 01:12:25PM -0400, Jonathan Thornburg wrote:
 On Tue, Apr 08, 2014 at 11:46:49AM +0100, ianG wrote:
 While everyone's madly rushing around to fix their bitsbobs, I'd
 encouraged you all to be alert to any evidence of *damages* either
 anecdotally or more firm.  By damages, I mean (a) rework needed to
 secure, and (b) actual breach into sites and theft of secrets, etc,
 leading to (c) theft of property/money/value etc.

 [[...]]

 E.g., if we cannot show any damages from this breach, it isn't worth
 spending a penny on it to fix!

 This analysis appears to say that it's not worth spending money to
 fix a hole (bug) unless either money has already been spent or damages
 have *already* occured.  This ignores possible or probable (or even
 certain!) *future* damages if no rework has yet happened.
 
 The first part (gather data) is OK.  The second I thought was said
 facetiously.  It is flawed, indeed, but it's also true that people have
 a hard time weighing intangibles.


Right, exactly.  Thought experiment.


 I don't know how we can measure anything here.  How do you know if your
 private keys were stolen via this bug?  It should be possible to
 establish whether key theft was feasible, but establishing whether they
 were stolen might require evidence of use of stolen keys, and that might
 be very difficult to come by.


Precisely, that is the question.  What happens if we wait a year and
nothing ... happens?

What happened with the Debian random plonk?  Nothing, that I ever saw,
in terms of measurable damages.  The BEAST thing?  Twitter, was it?

What happened with PKI?  We (I) watched and watched and watched ... and
it wasn't until about 2011 that something finally popped up as a
measurable incident of damages: 512-bit RSA keys being crunched, from memory.

That's 16 years!  Does that mean (a) PKI was so good that it clobbered
all attacks, or (b) PKI was so unnecessary because there was nobody
interested in attacks?

Dan Geer once said on this list [0]:

The design goal for any security system is that the number of
failures is small but non-zero, i.e., N > 0. If the number of failures is
zero, there is no way to disambiguate good luck from spending too much.
Calibration requires differing outcomes.

We now have what amounts to a *fantastic* opportunity (ghoulish laugh)
to clarify delta.  We've got a system-wide breach, huge statistics, and
it's identifiable in terms of which servers are vulnerable.

Hypothesize: let the number of attacked servers be 1% of the population
of vulnerable servers.  Let our detection rate be 1%.  Multiply: that is
1 detected attack per 10,000 vulnerable servers.  Let's say we have 1m
vulnerable servers.

We should detect 100 attacks over the next period.

We should detect something!
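Those assumed rates multiply out directly (the 1% figures and the one-million population are the post's hypothetical inputs, not measurements):

```python
# Back-of-envelope from the post: fraction attacked times detection
# rate, scaled by an assumed vulnerable population of 1m servers.
vulnerable = 1_000_000
attacked_rate = 0.01    # assume 1% of vulnerable servers were attacked
detection_rate = 0.01   # assume we detect 1% of attacks

attacked = vulnerable * attacked_rate   # 10,000 attacked servers
detected = attacked * detection_rate    # 100 detected attacks
print(int(attacked), int(detected))     # 10000 100
```

The product of the two rates is 1 in 10,000, so the prediction scales linearly with whatever population figure you prefer.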


 We shouldn't wait for evidence of use of
 stolen keys!


(Well, right.  I doubt we can actually tell anyone to wait.)

 Nico




iang



[0] http://financialcryptography.com/mt/archives/001255.html


Re: [cryptography] [Cryptography] The Heartbleed Bug is a serious vulnerability in OpenSSL

2014-04-08 Thread ianG
On 8/04/2014 21:02 pm, tpb-cry...@laposte.net wrote:

 You said you control a quite famous bug list.


Not me, you might be thinking of the other iang?

 I should not ask this here, but considering the situation we found ourselves 
 regarding encryption infrastructure abuse from the part of US government ... 
 I'm just curious and can't resist it.

the shoe turns, the knife fits...

 How much are you being paid to give such dangerous and preposterous advice? 
 Or, who are your handlers?


Nothing, nix.  I wish.  Please!?

At this stage it is customary to post a bitcoin address, but I don't
even have one of those.



iang



[cryptography] Announcing Mozilla::PKIX, a New Certificate Verification Library

2014-04-07 Thread ianG

 Original Message 
Subject: Announcing Mozilla::PKIX, a New Certificate Verification Library
Date: Mon, 07 Apr 2014 15:33:50 -0700
From: Kathleen Wilson kwil...@mozilla.com
Reply-To: mozilla's crypto code discussion list
dev-tech-cry...@lists.mozilla.org
To: mozilla-dev-tech-cry...@lists.mozilla.org

All,

We have been working on a new certificate verification library for
Gecko, and would greatly appreciate it if you will test this new library
and review the new code.

Background

NSS currently has two code paths for doing certificate verification.
Classic verification has been used for verification of non-EV
certificates, and libPKIX has been used for verification of EV
certificates.

As many of you are aware, the NSS team has wanted to replace the
classic verification with libPKIX for a long time. However, the
current libPKIX code was auto-translated from Java to C, and has proven
to be very difficult to maintain and use. Therefore, Mozilla has created
a new certificate verification library called mozilla::pkix.

Request for Testing

Replacing the certificate verification library can only be done after
gaining sufficient confidence in the new code by having as many people
and organizations test it as possible.

We ask that all of you help us test this new library as described here:
https://wiki.mozilla.org/SecurityEngineering/mozpkix-testing#Request_for_Testing

Testing Window: The mozilla::pkix certificate verification library is
available for testing now in Nightly Firefox builds. We ask that you
test as soon as possible, and that you complete your testing before
Firefox 31 exits the Aurora branch in June.
(See https://wiki.mozilla.org/RapidRelease/Calendar)

Request for Code Review

The more people who code review the new code, the better. So we ask all
of you C++ programmers out there to review the code and let us know if
you see any potential issues.
https://wiki.mozilla.org/SecurityEngineering/mozpkix-testing#Request_for_Code_Review


We look forward to your help in testing and reviewing this new
certificate verification library.

Mozilla Security Engineering Team


___
dev-security mailing list
dev-secur...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security




Re: [cryptography] Extended Random is extended to whom, exactly?

2014-04-06 Thread ianG
On 6/04/2014 05:46 am, coderman wrote:
 On Mon, Mar 31, 2014 at 3:33 PM, ianG i...@iang.org wrote:
 ...
 In some ways, this reminds me of the audit reports for compromised CAs.
  Once you know the compromise, you can often see the weakness in the
 report.
 
 are these public reports?  such a collection of compromise reports
 would be informative. (if you've got a list :)


They are published, typically.  Audits are made available to the vendor
community, and some vendors have taken the hint and insisted that they
be posted and available for public scrutiny.

However, they are buried.  Firstly, they are not collected in any one
particular place.  The best source is probably Mozilla's list of audit
reviews, in which you can follow the links of each post-for-review (and
you get to comment on the post while it is in play), but certainly until
recently this list was not complete; many roots were grandfathered in.

No other vendor reports on its ueber-CA activities that I know of, but
sometimes the auditors' associations publish the reports (WebTrust had a
very gappy list at one stage).

Secondly, they use the internal language of audit, and one could be
forgiven for assuming they are written to speak only to other auditors.
Thirdly, they are full of audit semantics.  Together, these are
unfortunately hard to distinguish from industrial-grade CYA.

Fourthly, they are commissioned by the CA, for the CA, of the CA, not
for you, nor written with you in mind.  There is a false expectation
that the public can rely on auditor's reports, but this only applies to
formal audit reports in a financial reporting context.  Beyond that,
it's ... open to question.  So typically, you are not entitled to rely
on an auditor's report, and while they'll accept that you have that
fallacious impression, you can be sure they'll fight it in court and win.

Oh, and fifthly, they are drier than a Mars rainfall survey.



iang

http://financialcryptography.com/mt/archives/001126.html Audit burial
customs in 7 parts.


[cryptography] Tails

2014-04-04 Thread ianG
Has anyone looked at Tails?

http://www.salon.com/2014/04/02/crucial_encryption_tool_enabled_nsa_reporting_on_shoestring_budget/


 Crucial encryption tool enabled NSA reporting on shoestring budget

Big players in Snowden revelations publicly praise Tails, in hope of
gaining much-needed funding for the tool



While followers of the NSA leaks stories and everyday privacy
enthusiasts may be well acquainted with encryption tools like PGP, the
best-practice privacy tool — the operating system enabling much of the
Snowden leaks reporting — is known to few but experts.

On Wednesday, however, three key NSA revelation journalists (Laura
Poitras, Glenn Greenwald and Bart Gellman) spoke publicly about the
importance of Tails — a tool that forces privacy best-practices by
default. As Trevor Timm reported, it’s essentially the sine qua non of
reporting on sensitive stories. However, as Timm notes, the “vital” tool
“is incredibly underfunded. Tails’ 2013 expense report shows that they
only had an operating budget of around 42,000 euros, which is less than
$60,000.” In an effort to garner donations, the journalists spoke out
for the first time at the Freedom of the Press Foundation site about
Tails’ importance:

Laura Poitras: “I’ve been reluctant to go into details about the
different steps I took to communicate securely with Snowden to avoid
those methods being targeted. Now that Tails gives a green light, I can
say it has been an essential tool for reporting the NSA story. It is an
all-in-one secure digital communication system (GPG email, OTR chat, Tor
web browser, encrypted storage) that is small enough to swallow. I’m
very thankful to the Tails developers for building this tool.”

Glenn Greenwald: “Tails have been vital to my ability to work
securely on the NSA story. The more I’ve come to learn about
communications security, the more central Tails has become to my approach.”

Barton Gellman: “Privacy and encryption work, but it’s too easy to
make a mistake that exposes you. Tails puts the essential tools in one
place, with a design that makes it hard to screw them up. I could not
have talked to Edward Snowden without this kind of protection. I wish
I’d had it years ago.”

Timm ran down the key aspects of how Tails renders best-practice privacy
communications a default for its users:

- It forces all of your web traffic through the Tor anonymity network,
  so you don’t have to configure any of the settings on any program.
- It allows you to use GPG encryption when you are emailing and/or OTR
  encryption while instant messaging, right out of the box.
- It allows journalists to work on sensitive documents, edit audio and
  video, and store all their files encrypted.
- Critically, Tails never actually touches your hard drive and
  securely wipes everything you’ve done every time you shut it down
  (unless you specifically save it on an encrypted drive). This serves two
  important purposes: first, it helps journalists who are operating in
  environments or on networks that may already be compromised by
  governments or criminals.

Natasha Lennard

Natasha Lennard is an assistant news editor at Salon, covering
non-electoral politics, general news and rabble-rousing. Follow her on
Twitter @natashalennard, email nlenn...@salon.com.


Re: [cryptography] Geoff Stone, Obama's Review Group

2014-04-04 Thread ianG
On 3/04/2014 11:42 am, John Young wrote:
 Stone's is a good statement which correctly places responsibility
 on three-branch policy and oversight of NSA, a military unit obliged
 to obey command of civilians however bizarre and politically self-serving.
 
 ODNI and NSA have been inviting a series of critics and journalists
 to discussions. Most have resulted in statements similar to
 Stone's. No such discussions were held after 9/11.


Yes, this is similar to embedding.  In exchange for access, they get to
promise no reporting of actual .. news.  Just opinion.  They are now
propaganda agents.  Or?


 Incorrect to compare NSA to rogue, dirty work, civilian-led CIA
 which will attack the three branches if riled. That is the blackmail
 looming since 1947.
 
 Greater public oversight of the three-branches is needed, for they
 are the rogue, dirty work, civilian-led three LS, protected by highest
 secrecy.
 
 If this can be helped by these invited discussions and statements,
 that would be a true advance beyond mere futile debate so far
 generated by shallow journalistic reporting and polemics.


Well, maybe.  The problems I see are not addressed below.  Firstly, there
is no sense that the person concerned looked at the lies told by the
agency to its regulators, the Senate committee, which functions as a
court, making those lies perjury.  Nor at any other deceptions, broadly,
including any deceptions to the public.

Secondly, I don't see any investigation here of whether the NSA has
breached commercial crypto or standards crypto.  There is a wider debate
here than whether they had some legal pretext.  There is an economy to
deal with, and as the NSA weakens the commercial infrastructure, crooks
move in. The question about interference in standards bodies is not a
persnickety one, it goes to the heart of why there was no real defence
against phishing from vendors, why the crap product we call security was
ineffective against breaching, and why mass surveillance was a doddle.

Thirdly, there is no mention of the issue of sharing data with civilian
agencies.  This is going on with around 20 agencies, yet it crosses a
line that should never be crossed.  That prohibition is so strong that
any exception has to be very clearly a matter of national security,
which rules out drugs, money laundering, and indeed most domestic
terrorist attacks.  One might argue that 9/11 was an outlier, but
nothing else was.

It's all FBI business.  Yes, noted, this is dirty politics by the CIA,
and the FBI has problems of its own, but the principle of separation of
these powers exists for a reason.  Fourthly: there is no mention of that
separation.


 Release of far more of Snowden's documents will be needed
 for this to happen, hopefully the whole wad by a means that will
 put the technology in the hands of those who can understand
 it. So far, the journalists have released only the most useful
 to arouse indignation and refuse to release what could make
 a lasting difference. Not that journalists should be expected
 to make a lasting difference.


Well.  They're on an adrenaline rush.  They probably have to out-do
every prior release in order to get the attention of their increasingly
jaded public.  What they could use is a media manager to run it like a
Hollywood film or a political campaign.  Which will further annoy us,
as we're after hard facts, not more trips.


 At 10:56 PM 4/2/2014, you wrote:
 
 [ disclaimer, Geoff Stone is a friend of mine ]


 www.huffingtonpost.com/geoffrey-r-stone/what-i-told-the-nsa_b_5065447.html?utm_hp_ref=technology&ir=Technology


 What I Told the NSA

Because of my service on the President's Review Group last fall,
which made recommendations to the president about NSA surveillance
and related issues, the NSA invited me to speak today to the NSA
staff at the NSA headquarters in Fort Meade, Maryland, about my
work on the Review Group and my perceptions of the NSA. Here,
in brief, is what I told them:

  From the outset, I approached my responsibilities as a member
  of the Review Group with great skepticism about the NSA. I am
  a long-time civil libertarian, a member of the National Advisory
  Council of the ACLU, and a former Chair of the Board of the
  American Constitution Society. To say I was skeptical about
  the NSA is, in truth, an understatement.

  I came away from my work on the Review Group with a view of
  the NSA that I found quite surprising. Not only did I find
  that the NSA had helped to thwart numerous terrorist plots
  against the United States and its allies in the years since
  9/11, but I also found that it is an organization that operates
  with a high degree of integrity and a deep commitment to the
  rule of law.

  Like any organization dealing with extremely complex issues,
  the NSA on occasion made mistakes in the implementation of its
  authorities, but it invariably reported those mistakes upon
  discovering them and worked 

[cryptography] Extended Random is extended to whom, exactly?

2014-03-31 Thread ianG
http://www.reuters.com/article/2014/03/31/us-usa-security-nsa-rsa-idUSBREA2U0TY20140331


(Reuters) - Security industry pioneer RSA adopted not just one but two
encryption tools developed by the U.S. National Security Agency, greatly
increasing the spy agency's ability to eavesdrop on some Internet
communications, according to a team of academic researchers.

...
A group of professors from Johns Hopkins, the University of Wisconsin,
the University of Illinois and elsewhere now say they have discovered
that a second NSA tool exacerbated the RSA software's vulnerability.

The professors found that the tool, known as the Extended Random
extension for secure websites, could help crack a version of RSA's Dual
Elliptic Curve software tens of thousands of times faster, according to
an advance copy of their research shared with Reuters.

...
In a Pentagon-funded paper in 2008, the Extended Random protocol was
touted as a way to boost the randomness of the numbers generated by the
Dual Elliptic Curve.

...
But members of the academic team said they saw little improvement, while
the extra data transmitted by Extended Random before a secure connection
begins made predicting the following secure numbers dramatically easier.

"Adding it doesn't seem to provide any security benefits that we can
figure out," said one of the authors of the study, Thomas Ristenpart of
the University of Wisconsin.

Johns Hopkins Professor Matthew Green said it was hard to take the
official explanation for Extended Random at face value, especially since
it appeared soon after Dual Elliptic Curve's acceptance as a U.S. standard.

"If using Dual Elliptic Curve is like playing with matches, then adding
Extended Random is like dousing yourself with gasoline," Green said.

The NSA played a significant role in the origins of Extended Random. The
authors of the 2008 paper on the protocol were Margaret Salter,
technical director of the NSA's defensive Information Assurance
Directorate, and an outside expert named Eric Rescorla.
...




END of snippets, mostly to try and figure out what this protocol is
before casting judgement.  Anyone got an idea?



iang


Re: [cryptography] Extended Random is extended to whom, exactly?

2014-03-31 Thread ianG
On 31/03/2014 18:49 pm, Michael Rogers wrote:
 On 31/03/14 18:36, ianG wrote:
 END of snippets, mostly to try and figure out what this protocol
 is before casting judgement.  Anyone got an idea?
 
 http://tools.ietf.org/html/draft-rescorla-tls-extended-random-02
 
 The United States Department of Defense has requested a TLS mode
 which allows the use of longer public randomness values for use with
 high security level cipher suites like those specified in Suite B
 [I-D.rescorla-tls-suiteb].  The rationale for this as stated by DoD
 is that the public randomness for each side should be at least twice
 as long as the security level for cryptographic parity, which makes
 the 224 bits of randomness provided by the current TLS random values
 insufficient.
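For reference, the 224-bit figure falls out of the TLS Random structure in TLS 1.0 through 1.2, where 4 of the 32 bytes are a timestamp. A quick check of the arithmetic (the 192-bit security level below is just an illustrative Suite B value, not taken from the draft):

```python
# TLS 1.0-1.2:  struct { uint32 gmt_unix_time;
#                        opaque random_bytes[28]; } Random;
random_field_bytes = 32
timestamp_bytes = 4
random_bits = (random_field_bytes - timestamp_bytes) * 8
print(random_bits)  # 224

# The DoD rationale: public randomness should be at least twice the
# security level, so a 192-bit suite would want at least 384 bits.
security_level = 192
print(2 * security_level > random_bits)  # True: 224 bits falls short
```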



4.1.  Threats to TLS

   When this extension is in use it increases the amount of data that an
   attacker can inject into the PRF.  This potentially would allow an
   attacker who had partially compromised the PRF greater scope for
   influencing the output.  Hash-based PRFs like the one in TLS are
   designed to be fairly indifferent to the input size (the input is
   already greater than the block size of most hash functions), however
   there is currently no proof that a larger input space would not make
   attacks easier.

   Another concern is that bad implementations might generate low
   entropy extended random values.  TLS is designed to function
   correctly even when fed low-entropy random values because they are
   primarily used to generate distinct keying material for each
   connection.



In some ways, this reminds me of the audit reports for compromised CAs.
 Once you know the compromise, you can often see the weakness in the
report.  In some cases the auditor has pointed it out in black and
white, but it's a trapdoor function;  you have to know the language, and
have some independent confirmation of the weakness, to know that the
auditor covered himself.



iang


[cryptography] Michael Haydon on the NSA spying -- blackberries

2014-03-26 Thread ianG
http://www.spiegel.de/international/world/spiegel-interview-with-former-nsa-director-michael-hayden-a-960389-druck.html


In 2008, when President Obama was elected, he had a BlackBerry. We
thought, oh God, get rid of it. He said, No, I am going to keep it. So
we did some stuff to it to make it a little more secure. We're telling
the guy who was going to soon be the most powerful man in the most
powerful country on Earth that if in his national capital he uses his
cell phone, his BlackBerry, countless number of foreign intelligence
services are going to listen to his phone calls and read his e-mails.
It's just the way it is.


[cryptography] NIST asks for comment on its crypto standards processes

2014-02-24 Thread ianG
http://www.fierceitsecurity.com/press-releases/nist-requests-comments-its-cryptographic-standards-process

As part of a review of its cryptographic standards development process,
the National Institute of Standards and Technology (NIST) is requesting
public comment on a new draft document that describes how the agency
develops those standards. NIST Cryptographic Standards and Guidelines
Development Process (NIST IR 7977) outlines the principles, processes
and procedures of NIST's cryptographic standards efforts.

http://csrc.nist.gov/publications/drafts/nistir-7977/nistir_7977_draft.pdf

NIST is responsible for developing standards, guidelines, tools and
metrics to protect non-national security federal information systems. To
ensure it provides high-quality, cost-effective security mechanisms,
NIST works closely with a broad stakeholder community to select, define
and promulgate its standards and guidelines.

In November 2013, NIST announced it would review its cryptographic
standards development process after concerns were raised about the
security of a cryptographic algorithm in NIST Special Publication
800-90, which was originally published in 2006 (an updated version,
800-90A, was published in 2007). Based on those concerns, that
publication was re-issued in September 2013 for a new period of public
review and is being revised to address comments received.

With the draft NIST IR 7977, NIST is seeking feedback on how it develops
its documents; engages experts in industry, academia and government; and
communicates with stakeholders. Public comments will be posted on the
NIST website and used to create a revised document. NIST will then
review its existing standards and guidelines to ensure they adhere to
the principles laid out in NIST IR 7977. "If any issues are found," said
NIST's Donna Dodson, who oversees the process, "they will be addressed
as quickly as possible."

The draft version of NIST IR 7977 and questions for reviewers can be
found in the Computer Security Resource Center at http://csrc.nist.gov/.
Comments may be submitted to crypto-rev...@nist.gov by April 18, 2014.


Re: [cryptography] Snowden Drop to Poitras and Greenwald Described

2014-02-08 Thread ianG
On 9/02/14 09:11 AM, Jeffrey Walton wrote:
 On Sat, Feb 8, 2014 at 6:28 PM, John Young j...@pipeline.com wrote:
 http://cryptome.org/2014/02/snowden-drop.pdf (7.6MB)

 That should be titled, How Greenwald nearly missed the scoop of the
 millennium. It appears the man did nearly everything in his power to
 undermine the contacts and the meetings.


One of the things I read that really helped understand this process was
an old novel (not sure title/author) about the IRA bombers.  In that
novel (and do note that the spy novelists typically craft their plots
with as much reality as they can steal) the theme was about chasing the
chief IRA bomber and beating him.

But, the bomber also learnt, and adapted his tradecraft.  So what
British Intelligence did was to switch gears and harass his operations
to make them as difficult as possible.  Instead of trying to necessarily
stop the bombs, they pushed gear across that made bomb making risky, and
aggressively clamped down on 'safe' gear where they could.  In effect,
making unstable explosives and detonators available, and controlling the
market for the quality stuff.

So the bomb maker was forced into employing ever more risky techniques ...

This tactic of harassing the enemy to make mistakes more likely is
rather well known.  In war as in business.  And it can and is applied to
the media.

Since Iraq-I the technique of embedding has allowed the media to be
corralled and gelded, a technique the Americans got from the Brits who
developed it in the Falklands.  What is left is a clear trail of those
very few who decline to be embedded.

So, all of the legal, political, business and intel machines can be used
to harass and challenge the targets.  In that story, Poitras was
detained 40 times at airports.  This is deliberate harassment not to
punish her, but to try to slow her down, and to force her to make
mistakes.  Recall PRZ?  They tried to break him. Recall the IETF and its
difficulty in getting good crypto deployed?  Good stuff was harassed,
crap was rewarded.

Given this level of harassment, it really is entirely logical that
Greenwald -- he's a journo fakrisake -- took every excuse to avoid
getting in deep.  And both he and Poitras felt entrapment was a likely
direction, another sign that harassment was a real tactic.

They were acting entirely to the enemy's game plan, in the role cast for
them.  Be suspicious, be nervous, make like a rabbit.  Snowden's
challenge was to beat the plan, although reading the story, I'm
suspecting that he didn't recognise that game plan per se, and got
through with persistence, luck and desperation.

(And the next guy that tries that process is going to be caught, but
that's part of their story, not this one.)



iang



Re: [cryptography] ChaCha/Salsa blockcounter endianness

2014-01-27 Thread ianG
On 27/01/14 06:33 AM, Billy Brumley wrote:
 I think the fact that, in the reference code, input[12] and input[13]
 are contiguous is throwing you off. The spec really just talks about
 bytes:
 
 http://cr.yp.to/snuffle/spec.pdf
 
 - Sec 10 Here i is the unique 8-byte sequence ...
 - Then see how that looks like in Sec 9 (e.g. Example 2)
 - Then Sec 8 finally Sec 7 how they get mapped to 32-bit ints


OK, right.  So that clears up one thing:  the words are laid out in
clear little-endian fashion (it's just not signalled so clearly).

 So my read is how you want to implement that 64-bit counter is up to
 you--as long as you respect the interface and feed the bytes in the
 order it expects.

Right.  Last night I was trying to impose longs over it.  Looking back,
that's a mistake, indeed there aren't even ints imposed or u32s, they
are just used internally.  The byte representation is what is imposed.

So as long as the interface specifies a byte layout, it is pretty much
up to a wider layer to extract the secret of the long conversion, if one
is in the unfortunate position of having to do addition, etc.

OK, much commentary added, and some conversion routines as well.  Thanks!

iang



[cryptography] ChaCha/Salsa blockcounter endianness

2014-01-26 Thread ianG
Has anyone implemented Salsa/ChaCha hereabouts?

I'm looking at the blockcounter and I have a doubt... It is an 8-byte
block, and as the reference code works in u32s, it converts the block as
two 4-byte quantities into two u32s in a platform-independent fashion
(controlling each for endianness).

As it is working in little-endian mode, it then does the increment of
the two numbers manually with the first u32 [12] being the low-order.
Unfortunately, this means they are hard-coded in little endian mode:

x->input[12] = PLUSONE(x->input[12]);
if (!x->input[12]) {
 x->input[13] = PLUSONE(x->input[13]);
 /* stopping at 2^70 bytes per nonce is user's responsibility */
}

This is maybe sorta correct if that is how it is defined;  the problem
is that it punts the question of what the actual ordering should be if
we wanted to use longs.  As the reference code sets the blockcounter to
zero, and doesn't offer the choice of restarting down the stream at some
long value, it doesn't matter what the user thinks because there is no
setting of it.

I'm doing Java/network order/bigendian and I'm restarting at random
places determined in longs ... :( so I can't punt it.  If I take a long,
and convert it to byte[8], will I be compatible with anyone else?

To make matters worse, none of the test vectors will pick this issue up
because they use the raw byte[8] of all zeros as the blockcounter, so
they will happily increment internally in little-endian order and
compare nicely.

(DJB's cunning test vector starts at long value -1 ... but again that is
symmetrical like zero (0xFFFFFFFFFFFFFFFF), and +1 for the next block
gives zero.  Doh!)

There appear to be two options:

1.  fix the ordering so that conversions to u64s are like the u32s, and
defined in a platform compatible fashion.
2.  stick with the two u32s laid out in little-endian format,
regardless, if that's what everyone has already sort of done.

Any comments?

iang


Re: [cryptography] To Protect and Infect Slides

2014-01-09 Thread ianG

On 9/01/14 00:38 AM, d...@geer.org wrote:


Keying off of one phrase alone,

   This combat is about far more than crypto...

I suggest you immediately familiarize yourself with last month's
changes to the Wassenaar Agreement, perhaps starting here:

http://oti.newamerica.net/blogposts/2013/international_agreement_reached_controlling_export_of_mass_and_intrusive_surveillance

Precis: Two new classes of export prohibited software:

Intrusion software

 Software specially designed or modified to avoid detection
 by 'monitoring tools', or to defeat 'protective countermeasures',
 of a computer or network capable device, and performing any of
 the following:

 a. The extraction of data or information, from a computer or
 network capable device, or the modification of system or user
 data; or

 b. The modification of the standard execution path of a program
 or process in order to allow the execution of externally provided
 instructions.

IP network surveillance systems

 5. A. 1. j. IP network communications surveillance systems or
 equipment, and specially designed components therefor, having
 all of the following:

 1. Performing all of the following on a carrier class IP network
 (e.g., national grade IP backbone):

 a. Analysis at the application layer (e.g., Layer 7 of Open
 Systems Interconnection (OSI) model (ISO/IEC 7498-1));

 b. Extraction of selected metadata and application content
 (e.g., voice, video, messages, attachments); and

 c. Indexing of extracted data; and

 2. Being specially designed to carry out all of the following:

 a. Execution of searches on the basis of 'hard selectors'; and

 b. Mapping of the relational network of an individual or of a
 group of people.



Irony, delicious irony.  Google, Facebook and Yahoo are banned from 
crossing the border by Wassenaar.


And that's just the commercial players.  Wait until those other 
bureaucracies wake up, like the FATF, which explicitly requires the 
implementation of ... all of 5. above.




All the same arguments that applied to the export bans on crypto
software apply here, especially that of pointlessness.



Cold war warriors never die, they just add more clauses to Wassenaar.



iang


Re: [cryptography] Techniques for protecting CA Root certificate Secret Key

2014-01-09 Thread ianG

On 9/01/14 18:05 PM, Peter Bowen wrote:

On Wed, Jan 8, 2014 at 11:54 PM, ianG i...@iang.org wrote:

On 9/01/14 02:49 AM, Paul F Fraser wrote:


Software and physical safe keeping of Root CA secret key are central to
security of a large set of issued certificates.
Are there any safe techniques for handling this problem taking into
account the need to not have the control in the hands of one person?
Any links or suggestions of how to handle this problem?


The easiest place to understand the formal approach would be to look at
Baseline Requirements, which Joe pointed to.  It's the latest in a series of
documents that has emphasised a certain direction.

(fwiw, the techniques described in BR are not safe, IMHO.  But they are
industry 'best practice' so you might have to choose between loving
acceptance and safety.)


Is there a better reference for safe


I'm not aware of one.  You probably have to invent your own process.  You 
could do worse than to look at what Dan pointed at:


Steve Bellovin: Nuclear Weapons, Permissive Action Links, and the
History of Public Key Cryptography, USENIX, 2006.

http://www.usenix.org/events/usenix06/tech/mp3/bellovin.mp3
http://www.usenix.org/events/usenix06/tech/slides/bellovin_2006.pdf
http://64.233.169.104/search?q=cache:_gevj9vbdqsJ:www.usenix.org/events/usenix06/tech/slides/bellovin_2006.pdf



or a place that has commentary on
the 'best practice' weaknesses?



Pointing out weaknesses in best practices is not best practices.  You're 
either in or you're out.




iang


Re: [cryptography] Techniques for protecting CA Root certificate Secret Key

2014-01-08 Thread ianG

On 9/01/14 02:49 AM, Paul F Fraser wrote:

Software and physical safe keeping of Root CA secret key are central to
security of a large set of issued certificates.
Are there any safe techniques for handling this problem taking into
account the need to not have the control in the hands of one person?
Any links or suggestions of how to handle this problem?



The easiest place to understand the formal approach would be to look at 
Baseline Requirements, which Joe pointed to.  It's the latest in a 
series of documents that has emphasised a certain direction.


However, it is not the only answer.  The best way to describe it is that 
it is 'best practices' for the CA industry, and once you achieve that 
way, you're on the path to being inculcated.  If that's your goal, the 
BR is your answer.


As you don't say much about what your problem space is, it's difficult to 
answer your real question:  what are safe techniques for handling root 
CA keys?


(fwiw, the techniques described in BR are not safe, IMHO.  But they are 
industry 'best practice' so you might have to choose between loving 
acceptance and safety.)
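For the specific requirement of not having control in the hands of one
person, the classic building block is threshold secret sharing (Shamir's
scheme).  A toy sketch over a prime field -- an illustration of the idea,
not the BR procedure and not production code:

```python
import secrets

P = 2**127 - 1  # a Mersenne prime, big enough for a demo secret

def split(secret, n, k):
    """Split secret into n shares; any k of them recover it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = split(123456789, n=5, k=3)
assert combine(shares[:3]) == 123456789  # any 3 custodians suffice
```

Fewer than k shares reveal nothing about the key, so no single custodian
(or small colluding group below the threshold) can act alone.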




iang


Re: [cryptography] [Ach] Better Crypto

2014-01-07 Thread ianG

On 7/01/14 04:34 AM, Peter Gutmann wrote:

 give users a choice: a
generic safe config (disable null, export ciphers, short keys, known-weak,
etc), a maximum-interoperability config (3DES and others), and a super-
paranoid config (AES-GCM-256, Curve25519, etc), with warnings that that's
going to break lots of things.



That's a good idea.  I wonder if it could be done efficiently?  Hmmm...



iang


Re: [cryptography] NSA co-chair claimed sabotage on CFRG list/group (was Re: ECC patent FUD revisited

2014-01-07 Thread ianG
I think, like James, I see the sacrificial lamb approach.  There is 
benefit in watching what they are up to.  If a measurable push comes out 
of the IAB's CFRG, then this is a clear signal to avoid that like the 
plague.


Pushing ECC patents.  Pushing NIST curves.  Clear signals!

Without those signals, where would we get our information? I've always 
thought that IPSec, DNSSec, and similar were highly suspect because the 
IETF was there at the start, precisely.  Unlike say SSH which was cut 
from whole cloth, in original form, or Skype which had to be sold to the 
borg, before it could be assimilated.




In the wartime OSS Simple Sabotage Field Manual, it suggests things like:

 (4) Bring up irrelevant issues as frequently as possible.

 (6) Refer back to matters decided upon at the last meeting and attempt 
to reopen the question of the advisability of that decision.

...
 (2) Misunderstand orders. Ask endless questions or engage in long 
correspondence about such orders. Quibble over them when you can.


 (7) Insist on perfect work in relatively unimportant products; send 
back for refinishing those which have the least flaw. Approve other 
defective parts whose flaws are not visible to the naked eye.


 (10) To lower morale and with it, production, be pleasant to 
inefficient workers; give them undeserved promotions. Discriminate 
against efficient workers; complain unjustly about their work.




Written from those times.  It would be fascinating to read a current 
version, one that had been written with the IETF and national standards 
orgs in mind.  Maybe someone could reverse-engineer these emails to 
figure it out?


iang


Re: [cryptography] [Ach] Better Crypto

2014-01-07 Thread ianG

On 7/01/14 13:18 PM, L. Aaron Kaplan wrote:


None of this is perfect yet, of course.  One of the very productive feedback 
results was that we should make an HTML version.


A wiki...  I would say.


   1. We will have three config options: cipher String A,B,C ( generic safe 
config, maximum interoperability (== this also makes the mozilla people happy 
then) and finally a super-hardened setting (with reduced compatibility)).
Admins will get a choice and explanations on when to use which option.



You could call them:

Suite A:  maximum security, super hard
Suite B:  general safe
Suite C:  maximum compatibility

;)  or if you're worried about being sued for trademark violation, how 
abouts:


Sweet A,
Bravo B,
Crazy C!

It would be nice if, typographically, we could see them on the page in 
some easy fashion.  Like, A at left, B in middle, C at right, in 
consistent columns.  Or in colours.


That way, a sysadm could implement things in C easily, then move from 
right to left and try things out.


Of course, this is only icing on the cake.  If it can do B above, 
general safe, then that is really a step forward for the world.
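To illustrate the generator idea, it could be as simple as mapping the
three choices to cipher strings and emitting a server snippet.  The
strings and the Apache directives below are placeholders of my own
choosing, not the guide's actual recommendations:

```python
# Hypothetical config generator: one cipher string per suite choice.
SUITES = {
    "A": "ECDHE-ECDSA-AES256-GCM-SHA384",                  # super-hardened
    "B": "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA",  # generic safe
    "C": "HIGH:!aNULL:!eNULL:!EXPORT",                     # max compatibility
}

def apache_snippet(suite):
    """Emit a cut-and-paste Apache mod_ssl fragment for the given suite."""
    return ("SSLProtocol all -SSLv2 -SSLv3\n"
            f"SSLCipherSuite {SUITES[suite]}")

print(apache_snippet("B"))
```

A sysadmin starts with C, confirms everything works, then regenerates
with B and then A, moving right to left as above.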




   2. (time-wise) first we focus on some of the weak spots in the guide like 
the ssh config (client config is missing...), the theory section etc.
   3. we give people a config generator tool on the webpage which gives them 
snippets which they can include into their webservers, mailservers etc. The 
tool also shows admins (color codes?) which settings are compatible, unsafe etc.
   4. In addition to having the config generator on the web page, the config 
snippets are moved to the appendix (as you suggested). The theory section moves 
up.



I think the config cut-and-paste sections are what is important, as Peter 
mentioned.  I'd flip that around:


Config sections are the bulk.  References to theory go in the 
Appendix, with frequent tips that you'll enjoy some theory too.


It's an advice guide, not a schoolbook.



Would that be more in your line of thinking?


Anyway, we will have a authors' meeting today at  ~ 19:00 CET and can discuss 
this.
Anyone who wants to join via teleconference: please get in contact with me. We 
will arrange for remote participation.


good luck.  I'm missing out on all the fun.  Again!


iang


[cryptography] Better Crypto

2014-01-05 Thread ianG
Not sure if it has been mentioned here.  The Better Crypto group at 
bettercrypto.org have written a (draft) paper covering many of the likely 
configurations for net tools.  The PDF is here:


https://bettercrypto.org/static/applied-crypto-hardening.pdf

If you're a busy sysadm with dozens of tools to fix, this might be the 
guide for you.


iang


Re: [cryptography] pie in sky suites - long lived public key pairs for persistent identity

2014-01-04 Thread ianG

On 3/01/14 22:42 PM, coderman wrote:

use case is long term (decade+) identity rather than privacy or
session authorization.



Long term identity is not a concept in a vacuum.  Identity in the software 
business always relates to other people: your identity is like the sum 
of the thoughts that *others have about you*, unlike in psychology, where 
identity is a concept of how you think about yourself.


So you have to consider (a) everyone else and (b) how everyone else 
interacts with you.


Which in today's world is pointing to the phone.   If we're talking the 
identity on the phone, we're now talking about 2 or more things, 
horizontally:  an app by itself, or an app that integrates vertically 
with the telco (SIM card).  We can also bifurcate vertically with Apple 
v. Android, and also-rans.


The point being that identity of the future is constrained to the 
platform that everyone lives on.  The western/past was your laptop and 
PGP and online banking.  The all-world/future is the phone and mPesa 
style mobile money and apps like SnapChat.


The phone is a really sucky platform for security purposes.  The end 
conclusion of this argument is that ... it doesn't matter much what 
strength/type key you're using because you have to deal with the platform.


As an example.  In my business, we have money on phones, including 
shared accounts and treasurer management and other stuff.  This gets 
hairy if say /the treasurer loses her phone/.  This is a real, live, 
every day issue.


What to do?

(skipping long analysis)... we have to be able to recover the entire app 
onto a new phone and carry on as if nothing had happened.  To do this, 
we have to migrate the swamp of server escrow (cryptocloud! got it sorta 
working last night) and encryption against servers and 4 digit pins and 
finger swipes and cooperative arrangements with people who today are 
your most trusted friends and tomorrow have stolen all your money.


See where this is going?  The conclusion is, it doesn't matter a damn 
what strength key you use.  In practice, we use what tickles us as 
geeks, and what works well with our software design.




eternity key signs working keys tuned for speed with limited secret
life span (month+).  working keys are used for secret exchange and any
other temporal purpose.

you may use any algorithms desired; what do you pick?


Curve3617+NTRU eternity key
Curve25519 working keys
ChaCha20+Poly1305-AES for sym./mac
?



I'm using RSA1024/AES128/SHA1-HMAC at the moment, I could use RSA512 and 
I'd be within my analysed threat model and designed security model.


I'll switch over at some stage to CurveLatest/ChaCha20+Poly1305 ... but 
it won't make a jot of difference to my identity.  I'll do it coz it's cool.




this assumes key agility by signing working keys with all eternity
keys, and promoting un-broken suites to working suites as needed.  you
cannot retro-actively add new suites to eternity keys; these must be
selected and generated extremely conservatively.

other questions:
- would you include another public key crypto system with the above?
(if so, why?)



No.  RSA then EC is what I'll do.  I don't know about NTRU, but I'm the 
guy who always says less is best.




- does GGH signature scheme avoid patent mine fields? (like NTRU patents)
- is it true that NSA does not use any public key scheme, nor AES, for
long term secrets?



Now that is a question!  Yes, let's hear more about their use of Public 
Key [0].  This is now a validated issue, because Suite B and EC is under 
a cloud by the NSA's actions.  Anyone?




- are you relieved NSA has only a modest effort aimed at keeping an
eye on quantum cryptanalysis efforts in academia and other nations?



lol... What was also funny was that paper had a lot of TS over it.  Nice 
to know that these guys are carefully covering up the bleeding obvious. 
 Maybe that's why the newspaper released it over New Year's Day, for 
humour.




iang



[0] http://financialcryptography.com/mt/archives/001451.html


[cryptography] beginner crypto

2013-12-28 Thread ianG

On 29/12/13 02:35 AM, RossMcFarlane wrote:

Hi everyone, I don't want to waste your time but I'd love to learn some more 
about cryptography. I was recommended this mailing list but it's aimed well 
above my standard.



Yes, this is about crotchety old war armchair cryptographers fighting 
decades-old battles.  So etiquette helps, but which one is a secret.




I'm based in the UK, 17 years old, and to be pointed in the direction of some 
good resources would be great. I've watched a lot of the YouTube stuff but 
would like a step up from there.



Question 1; are you interested in maths or in programming?  Your 
survival probability increases if it is only one.


If in programming, what language?  What you would probably find easiest 
is to read the wikipedia pages on block ciphers.  Then search for an 
algorithm and try to get it going.


There was once an algorithm called Tiny which was quite nice.
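(The algorithm I mean is presumably TEA, the Tiny Encryption Algorithm of
Wheeler and Needham -- small enough to type in from memory and a classic
first exercise.  A sketch, assuming the standard 32-round variant:)

```python
MASK = 0xFFFFFFFF  # keep everything to 32-bit words, as in the C original

def tea_encrypt(v, key, rounds=32):
    """Encrypt a 64-bit block (two u32s) under a 128-bit key (four u32s)."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    delta, s = 0x9E3779B9, 0
    for _ in range(rounds):
        s = (s + delta) & MASK
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & MASK
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & MASK
    return v0, v1

def tea_decrypt(v, key, rounds=32):
    """Run the rounds in reverse to invert tea_encrypt."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    delta = 0x9E3779B9
    s = (delta * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & MASK
        v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & MASK
        s = (s - delta) & MASK
    return v0, v1

ct = tea_encrypt((1, 2), (3, 4, 5, 6))
assert tea_decrypt(ct, (3, 4, 5, 6)) == (1, 2)  # round-trips
```

Getting the round-trip to pass, then reading why TEA's key schedule is
weak, is a decent week's introduction on its own.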

If in maths, others can comment.

iang



Hopefully I'll join you again one day ;)
Thanks in advance.
Ross




[cryptography] Prerendering as a security idea (was: RSA is dead.)

2013-12-25 Thread ianG

On 25/12/13 02:38 AM, Bill Frantz wrote:

On 12/25/13 at 2:05 PM, i...@iang.org (ianG) wrote:


So, assuming I sober up by the morn, and SO doesn't notice, where's
Ping's code?


See http://zesty.ca/pubs/yee-phd.pdf p217ff



Thanks!  I had a quick look, it's in Python, I'm squeezed out.  Also, 
there is only a description of the bugs in the thesis, which is no fun.


In order to justify YAPing, here is a snippet from the thesis, which I 
saw as the big idea in Ka Ping's thesis:




What is prerendering?

In a typical voting computer, much of the software code is responsible 
for generating the user interface for the voter. This includes the code 
for arranging the layout of elements on the screen, drawing text in a 
variety of typefaces and languages, drawing buttons, boxes, icons, and 
so on. In a voting computer with audio features, this also includes code 
for manipulating or synthesizing sound. (Some voting computers, such as 
the Avante Vote-Trakker [11], contain speech synthesis software.) The 
user interface is generated in real time—the visual display and audio 
are produced (“rendered”) as the voter interacts with the machine.


Prerendering the ballot.  The software in the voting computer could be 
considerably simplified by moving all this rendering work into the 
preparation stage— /prerendering/ the interface before election day.  1 
Both Ptouch and Pvote realize this idea.


Today’s DRE machines use a ballot definition that contains only 
essential data about the ballot: the names of the offices, the names of 
the candidates running for each office, and so on.  But the ballot 
definition could be expanded to describe the user interface as well. For 
a visual interface, this would include images of the screen with the 
layout already performed, buttons already placed, and text already 
drawn. For an audio interface, this would include prerecorded sound 
clips. Everything presented to the user would be prepared ahead of time, 
so that all the software complexity associated with rendering can be 
taken out of the voting computer.


The ballot definition could specify not just appearance but also 
behaviour—the locations where images will appear, the transitions from 
screen to screen, the user actions that will trigger these transitions, 
and so on. This is exactly the case for both Ptouch and Pvote: the 
ballot definition is a high-level description of the entire user 
interface for voting.


___
1 It was Steve Bellovin who prompted my line of research by suggesting 
prerendering for voting machines.






I'm enjoying my son's gin and tonics. He makes the best ones in the world.

Merry Christmas and Happy New Year!



And to all!



[cryptography] controlling trust with money

2013-12-25 Thread ianG

On 25/12/13 07:33 AM, Peter Todd wrote:

On Tue, Dec 24, 2013 at 11:03:31PM -0500, Benjamin Kreuter wrote:

...

Moderation and spam control - both involve trusting centralized humans.

...

Equally we have very
seductive solutions to such distasteful brushes with humanity in the form
of throwing proof-of-work, or better yet transferrable proof-of-work(1),
at the problem. Previously known as hashcash of course, but much more
usable this time around because there's actually a market for the stuff
in the form of Bitcoins so attackers don't have an advantage. Of course,
such pure solutions have real world drawbacks - like rich wankers
flooding your forums with junk because they can afford to - but they've
also never been tried in real-life so there's a lot of interest in doing
just that. Who knows if it'll actually work in practice, but all the
more reason to try.



Controlling groups of people and manipulating the trust and other 
factors by charging money is a time-honoured marketing technique.  It 
may be that Bitcoin community hasn't tried it, but the marketing types 
know all about it.  In the art it is called 'price discrimination' which 
I'm sure google knows all about.  It works.  You can even predict 
with some precision how it is going to work.




1) https://en.bitcoin.it/wiki/Fidelity_bonds - Disclaimer: I invented
them. Also Just use fidelity bonds! is a standard joke in the
Bitcoin developer community, and for good reason.



There have even been studies done on how effective it is.  The one I 
recall is selling two t-shirts, one red and one green, with one at twice 
the price...


Of course, this still leaves the question of how to control trust 
without money.  Another day...


iang


Re: [cryptography] [Cryptography] RSA is dead.

2013-12-23 Thread ianG

On 23/12/13 21:43 PM, Kevin wrote:

On 12/23/2013 1:04 PM, Greg wrote:

On Dec 23, 2013, at 11:13 AM, D. J. Bernsteind...@cr.yp.to  wrote:


Peter Gutmann writes (on the moderatedcryptogra...@metzdowd.com  list):

Any sufficiently capable developer of crypto software should be
competent enought to backdoor their own source code in such a way that
it can't be detected by an audit.

Some of us have been working on an auditable crypto library:

   https://twitter.com/TweetNaCl

The original, nicely indented, version is 809 lines, 16621 bytes.

... what is the point of tweeting lines of source code? It's completely 
unreadable (to me, at least).



It's cool.  It's a demonstration of how small a complete library can be. 
 It's a challenge to OpenSSL: you are the Library of Alexandria, hack 
and burn.  It's fun to do over Xmas when promises not to work on code, 
made to the SO, are thick and intent.



Why doesn't that twitter account link to the original, nicely indented 
version?



If you can't find it, we don't want you to  ;-)


Does the original have comments? If not, why not?



Ah.  This debate has yet to start.  Wait till you see OpenSSL or 
BouncyCastle code... :P




Please do not email me anything that you are not comfortable also sharing with 
the NSA.


Oh, that too.

iang



Re: [cryptography] Security Discussion: Password Based Key Derivation for Elliptic curve Diffie–Hellman key agreement

2013-12-17 Thread ianG

On 17/12/13 21:38 PM, Joseph Birr-Pixton wrote:

In very general terms, you cannot hope to achieve confidentiality
without authenticity.



Actually, you can achieve confidentiality, you just can't prove it in 
cryptographic terms.


The original poster should not be dissuaded by claims that no MITM 
solution makes it worthless.  The same trick was done to SSL and look at 
where that got us:  mass surveillance because it is too hard to deploy 
in 100% of circumstances.


Also, look at Greg Rose's post.  The bar is very very low because anyone 
who wants to MITM a facebook user can also slip in many other approaches.


Doing just enough to force the attacker to go active -- by *any means* 
-- is a really good tool.


In the alternate, add some MITM protection as a second generation. 
There are some easy, sorta maybe methods like sharing the number over 
another channel (phone, SMS, skype).  You can much better appreciate 
what works for your design once it is up and running, and once your 
users start telling you what they can do.  This you cannot achieve at 
all if you design in some cold-war PKI design from the get-go.
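As a sketch of that second-generation idea: both sides derive a short
check number from the agreed session key and compare it over the other
channel.  SHA-256 and the 6-digit fold here are my illustrative choices,
not a prescription:

```python
import hashlib

def short_auth_string(session_key):
    """Fold the agreed key into six digits both parties read aloud.
    An active MITM holds a different key on each leg, so the digits
    won't match (except with ~1-in-a-million luck per attempt)."""
    digest = hashlib.sha256(b"sas-v1" + session_key).digest()
    num = int.from_bytes(digest[:4], "big") % 1_000_000
    return f"{num:06d}"

print(short_auth_string(b"example shared secret"))
```

The point is that this check can be layered on later, once the basic
encrypted channel is running and users tell you what channels they have.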




iang


[cryptography] does the mixer pull or do the collectors push?

2013-11-28 Thread ianG

On 28/11/13 12:12 PM, Joachim Strömbergson wrote:


One issue I'm thinking of is if you have more than one source, but one
of them dwarfs the other sources in capacity. Say having a microphone
providing whitish noise at kbps rate and then having RdRand from your
Haswell CPU generating data at Gbps speed, will the microphone entropy
matter?




I'm thinking about the same issues, we're designing a classical RNG 
along the lines of  three elements:


   collector \
  \
   collector - mixer --- expansion function/CSPRNG
  /
   collector /

Here is my list of assumptions:

/*
 * Assumption A:some of our collectors are borked.
 *A.2:  we don't know which collectors are borked.
 *A.3:  We do not rely on measurements of entropy
 *  because a borked collector will deliver
 *  false estimates.
 * Assumption B:Some of our collectors have very high
 *  throughput, others very low.
 * Assumption B.2:  Some are high quality, others are low quality.
 * Assumption C:At least one collector is good.
 * Assumption D:Core java, plus/minus Android.
 *
 *
 * Goal 1.  Each collector should contribute to any request for a seed.
 * Goal 2.  No blocking.  Or minimal blocking...
 */


Our current thoughts are along the question of how the collector and 
mixer interface.  Do the collectors push to the mixer or does the mixer 
pull from the collectors?


Right now we're looking at a hybrid design of both:  Collectors collect 
and save, and push into a mixer pool on their own when full.  When the 
EF/CSPRNG pulls a seed from the mixer, it pulls from collectors, pulls 
from the pool, and mixes all that for the seed.
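Something like this toy sketch, perhaps (SHA-256 standing in for the
mixer, collectors modelled as plain callables; the class and names are
invented for illustration, not our actual design):

```python
import hashlib

class Mixer:
    """Hybrid push/pull: collectors push samples into a pool when their
    buffers fill; a seed request additionally pulls a fresh sample from
    every collector (Goal 1) and hashes everything together, so one good
    collector (Assumption C) is enough even if others are borked."""

    def __init__(self, collectors):
        self.collectors = collectors
        self.pool = hashlib.sha256()

    def push(self, sample):
        # called asynchronously by a collector when its buffer is full
        self.pool.update(sample)

    def seed(self):
        # fold in the accumulated pool ...
        h = hashlib.sha256(self.pool.digest())
        for pull in self.collectors:
            # ... plus a direct pull from each collector; pulls are
            # assumed non-blocking (Goal 2)
            h.update(pull())
        return h.digest()

m = Mixer([lambda: b"rdrand-sample", lambda: b"mic-sample"])
m.push(b"stored-sample")
print(m.seed().hex())
```

Because the mixer hashes rather than measures, a borked high-throughput
collector cannot displace the low-rate microphone's contribution; it can
only fail to add anything.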


Thoughts?

iang


Re: [cryptography] [Cryptography] Email is unsecurable

2013-11-26 Thread ianG

On 26/11/13 03:03 AM, coderman wrote:

On Mon, Nov 25, 2013 at 1:51 PM, Stephen Farrell
stephen.farr...@cs.tcd.ie wrote:

...
Personally, I'm not at all confident that we can do something
that provides end-to-end security, can be deployed at full
Internet scale and is compatible with today's email protocols.
But if others are more optimistic then I'm all for 'em trying
to figure it out and would be delighted to be proven wrong.



this would make an interesting bet!  i too believe this to be
impossible given the constraints.

a more suspicious individual might even consider these efforts to be a
ruse by intelligence agencies to further the use of insecure (email)
systems with fig leaf protections added on top while metadata and
usability failures continue unabated...



IMHO the TLAs bet big on pushing the CA/PKI solution in the 1990s.  I've 
not seen any hard evidence of it, but there is enough anecdotal evidence 
to conclude it.  Some backed it for different reasons; for example, the 
DoD was very keen on COTS, which we can see as benign enough in and of 
itself.


In terms of mass surveillance and espionage, the PKI is a slam dunk: 
CVPs (centralised vulnerability partners), many of whom are national 
champions or nationally regulated; browsers hiding the CAs; lock-in via 
clients; open sharing of certificates.  This is an open internet 
solution that only an attacker could truly love.


That's not to say there is no value in it for us.  Just that we'll end 
up with strange bedfellows, and we may not be happy with who the real 
winners are.  E.g., supporting HTTPS everywhere carries big risks if it 
is forced through without opportunistic encryption or other escape 
valves for society.


So I'd suggest caution to both sides of this debate.  And careful 
cost-benefit analysis and careful risk analysis.  History has not been 
kind to open internet crypto projects.


iang



Re: [cryptography] Design Strategies for Defending against Backdoors

2013-11-18 Thread ianG

On 18/11/13 10:27 AM, ianG wrote:

In the cryptogram sent over the weekend, Bruce Schneier talks about how
to design protocols to stop backdoors.  Comments?



To respond...


https://www.schneier.com/blog/archives/2013/10/defending_again_1.html

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of
them is foolproof, but they are all useful. I'm sure there's more; this
list isn't meant to be exhaustive, nor the final word on the topic. It's
simply a starting place for discussion. But it won't work unless
customers start demanding software with this sort of transparency.

 Vendors should make their encryption code public, including the
protocol specifications. This will allow others to examine the code for
vulnerabilities. It's true we won't know for sure if the code we're
seeing is the code that's actually used in the application, but
surreptitious substitution is hard to do, forces the company to outright
lie, and increases the number of people required for the conspiracy to
work.



I think this is unlikely.  The reasons for proprietary code are many and 
varied; it isn't just one factor.  Also, efforts by companies to deliver 
open reference source and pre-built binaries have not resulted in 
clear proof of no manipulation; it's often too difficult to reproduce a 
build process.


One of the slides indicated how many Google protocols the NSA had 
built engines for; a big operation does have a lot of internal 
protocols.  There are reasons for this, including security reasons.




 The community should create independent compatible versions of
encryption systems, to verify they are operating properly. I envision
companies paying for these independent versions, and universities
accepting this sort of work as good practice for their students. And
yes, I know this can be very hard in practice.



This is the model that the IETF follows:  they require two independent 
implementations.  Yet with that requirement comes a committee and a long 
debate about minor changes, which many have criticised as doing more 
harm than good.


This argument was used to wrest control of SSL away from Netscape.  Did 
the result make us any safer?  Not really.  Even though there were bugs 
in the internal SSL v1, it rested heavily on opportunistic encryption, 
which would have given us a much bigger defence against the NSA's mass 
surveillance programme.


The jury is still out on the question of whether the CA/PKI thing is a 
benefit or a loss.  What would happen if the next set of Snowden 
revelations were to show evidence that the NSA promoted the PKI as a 
vulnerability?


What would happen if we just handed the change management of SSL across 
to Google (as a hypothetical pick)?  Would they do a worse job or a 
better job than PKIX?




 There should be no master secrets. These are just too vulnerable.


OK.


 All random number generators should conform to published and
accepted standards. Breaking the random number generator is the easiest
difficult-to-detect method of subverting an encryption system. A
corollary: we need better published and accepted RNG standards.



But:  the RNG is typically supplied by the OS, etc.

What seems more germane would be to use both the OS's random numbers 
and also to augment them in case of flaws.  Local randomness exists, 
and it is typically available at the application level in ways it is not 
available at the OS level.


If we had a simple mixer and whitener design, that did not disturb the 
quality of the OS nor the local source, surely this would be far better 
than what we got from NIST, et al?
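As a sketch of that augmentation (hypothetical names, SHA-256 assumed as 
the whitener, and a stand-in local noise source): hashing both inputs 
together cannot make the result worse than the stronger of the two 
sources, so neither the OS pool nor the local source is disturbed.

```python
import hashlib
import os
import time

def local_noise(n=32):
    # stand-in for an application-level source: event timings, UI jitter,
    # network arrival times, etc.  Low quality is fine; it only augments.
    return bytes(time.perf_counter_ns() & 0xff for _ in range(n))

def mixed_seed(length=32):
    h = hashlib.sha256()
    h.update(os.urandom(32))   # the OS's random numbers
    h.update(local_noise())    # local randomness, in case of OS flaws
    return h.digest()[:length]
```

If either input is unpredictable to the attacker, the output is too; a 
flaw in one source is masked rather than amplified.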




 Encryption protocols should be designed so as not to leak any
random information. Nonces should be considered part of the key or
public predictable counters if possible. Again, the goal is to make it
harder to subtly leak key bits in this information.



Right, that I agree with.  Packets should be deterministically created 
by the sender, and they should be verifiable by the recipient.
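A hedged sketch of one way to do that: derive the nonce from the key and 
a message counter with HMAC, so the sender emits no free random bits on 
the wire and the recipient can recompute and check the value.  The names 
and the 12-byte length here are illustrative, not from any particular 
protocol.

```python
import hashlib
import hmac
import struct

def derive_nonce(key, counter, length=12):
    # nonce = PRF(key, counter): deterministic for the sender,
    # verifiable by the recipient, no covert channel for key bits
    msg = b'nonce' + struct.pack('>Q', counter)
    return hmac.new(key, msg, hashlib.sha256).digest()[:length]
```

Both sides track the counter, so a packet whose nonce doesn't match the 
derived value is immediately rejected.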




But, overall, when it comes down to it, I think the defence against 
backdoors is not really going to be technical.  I think it is more 
likely to be attitude.


The way I see things, the chances of a backdoor in say Silent Circle are 
way down, whereas the chances of a backdoor in Cisco are way up.  Cisco 
could do all the things above, and more, and would still not increase my 
faith.  SC could do none of the things above, and I'd still have faith.


I think we are still waiting to see which companies in the USA are 
actually going to stand up and fight.  Some signs have been seen, but in 
the aggregate we're still at the first stage of grief -- denial.  In the 
aggregate, it seems that the Internet has just slipped back to the old 
international telco days -- every operator is a national champion, and 
is in bed with their national government.


Pure tech or design can't change that.  Only people can change

[cryptography] Design Strategies for Defending against Backdoors

2013-11-17 Thread ianG
In the cryptogram sent over the weekend, Bruce Schneier talks about how 
to design protocols to stop backdoors.  Comments?


https://www.schneier.com/blog/archives/2013/10/defending_again_1.html

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of 
them is foolproof, but they are all useful. I'm sure there's more; this 
list isn't meant to be exhaustive, nor the final word on the topic. It's 
simply a starting place for discussion. But it won't work unless 
customers start demanding software with this sort of transparency.


Vendors should make their encryption code public, including the 
protocol specifications. This will allow others to examine the code for 
vulnerabilities. It's true we won't know for sure if the code we're 
seeing is the code that's actually used in the application, but 
surreptitious substitution is hard to do, forces the company to outright 
lie, and increases the number of people required for the conspiracy to work.


The community should create independent compatible versions of 
encryption systems, to verify they are operating properly. I envision 
companies paying for these independent versions, and universities 
accepting this sort of work as good practice for their students. And 
yes, I know this can be very hard in practice.


There should be no master secrets. These are just too vulnerable.

All random number generators should conform to published and 
accepted standards. Breaking the random number generator is the easiest 
difficult-to-detect method of subverting an encryption system. A 
corollary: we need better published and accepted RNG standards.


Encryption protocols should be designed so as not to leak any 
random information. Nonces should be considered part of the key or 
public predictable counters if possible. Again, the goal is to make it 
harder to subtly leak key bits in this information.



Re: [cryptography] Password Blacklist that includes Adobe's Motherload?

2013-11-14 Thread ianG

On 15/11/13 06:35 AM, Kevin W. Wall wrote:


Besides that, (unfortunately) it's a lot easier to change 'snoopy1' to 'snoopy2'
then to 'snoopy3', etc. when your password inevitably changes. Plus, it makes
a lot easier to remember than to start out with 'sn00py' and then go
to 'sn11py',
'sn22py', etc. :-)


When I last worked in a formally controlled and certified security 
office, the password to the system was indeed securityN, where N 
incremented every month when the system kicked back and insisted on a 
password change.


(oops, that's probably a security leak...)

It reminds me of the story about the British health system that spent 
untold millions putting in individual smart token control systems, so as 
to control access to security-critical resources.


Every place discovered the same "correct" way to drive the system: 
access was sorted by seniority of staff, and every morning the 
designated senior person would plug their token into a given device, 
then walk away and get back to work.




iang


Re: [cryptography] Which encryption chips are compromised?

2013-11-10 Thread ianG

On 10/11/13 16:31 PM, John Young wrote:

The Guardian version (greater redaction):

http://s3.documentcloud.org/documents/784159/sigintenabling-clean-1.pdf

NYTimes-ProPublica version (lesser redaction):

http://s3.documentcloud.org/documents/784280/sigint-enabling-project.pdf

[0] A related question is where were these slides posted on the Guardian
and NYT sites?  Which did which redaction?


[1]
https://twitter.com/ashk4n/status/37575818993312/photo/1
http://financialcryptography.com/mt/archives/001455.html



Nice!  Lots more information, and evidence.  Blog post updated...

This appears to be the NYT commentary:

http://www.nytimes.com/interactive/2013/09/05/us/documents-reveal-nsa-campaign-against-encryption.html?_r=0#briefing

What I was surprised about with these detailed revelations was that 
there was almost no fuss.  This stuff is the smoking gun for our 
industry.  I must have been totally asleep to miss them...



iang


[cryptography] redaction differences btw Guardian and NYT NSA docs re: 'middle east anonymous service' and VPN crypto chips

2013-11-05 Thread ianG

https://twitter.com/ashk4n/status/37575818993312/photo/1

ashkan soltani ‏@ashk4n

FYI: redaction differences btw Guardian and NYT NSA docs re: 'middle 
east anonymous service' and VPN crypto chips

pic.twitter.com/sU475mQMkM

(no time now, but that should be typed in and posted directly.  I am 
assuming that it is as read, haven't checked sources... iang)



[cryptography] chacha test vectors

2013-10-31 Thread ianG

Has anyone got/found test vectors for ChaCha?

iang


Re: [cryptography] chacha test vectors

2013-10-31 Thread ianG

On 31/10/13 14:31 PM, Sébastien Martini wrote:

Hi,

On Thu, Oct 31, 2013 at 12:14 PM, ianG i...@iang.org
mailto:i...@iang.org wrote:

Has anyone got/found test vectors for ChaCha?


For ChaCha20 it seems there are these tests
https://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-02#section-7


Thanks, excellent!  Go Adam.  Oct 2013 ... lucky we didn't want them 
last month ;)
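For anyone wanting to check an implementation against those vectors, 
here is a minimal pure-Python ChaCha20 block function, verified against 
the well-known all-zero key/nonce vector (20 rounds, the original 64-bit 
counter / 64-bit nonce layout, which for this all-zero case matches the 
draft's first keystream block):

```python
import struct

def rotl32(v, n):
    # 32-bit left rotation
    return ((v << n) & 0xffffffff) | (v >> (32 - n))

def quarter_round(s, a, b, c, d):
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = rotl32(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = rotl32(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = rotl32(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = rotl32(s[b] ^ s[c], 7)

def chacha20_block(key, counter, nonce):
    """One 64-byte keystream block (DJB variant: 64-bit counter, 64-bit nonce)."""
    state = list(struct.unpack('<4I', b'expand 32-byte k'))   # constants
    state += list(struct.unpack('<8I', key))                  # 256-bit key
    state += list(struct.unpack('<2I', struct.pack('<Q', counter)))
    state += list(struct.unpack('<2I', nonce))
    w = state[:]
    for _ in range(10):                    # 20 rounds = 10 double rounds
        quarter_round(w, 0, 4, 8, 12)      # column rounds
        quarter_round(w, 1, 5, 9, 13)
        quarter_round(w, 2, 6, 10, 14)
        quarter_round(w, 3, 7, 11, 15)
        quarter_round(w, 0, 5, 10, 15)     # diagonal rounds
        quarter_round(w, 1, 6, 11, 12)
        quarter_round(w, 2, 7, 8, 13)
        quarter_round(w, 3, 4, 9, 14)
    out = [(w[i] + state[i]) & 0xffffffff for i in range(16)]
    return struct.pack('<16I', *out)

# standard all-zero test vector: first keystream block
expected = ("76b8e0ada0f13d90405d6ae55386bd28"
            "bdd219b8a08ded1aa836efcc8b770dc7"
            "da41597c5157488d7724e03fb8d84a37"
            "6a43b8f41518a11cc387b669b2ee6586")
assert chacha20_block(b'\x00' * 32, 0, b'\x00' * 8).hex() == expected
```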




iang



Re: [cryptography] Cryptographer Adi Shamir Prevented from Attending NSA History Conference

2013-10-17 Thread ianG
I doubt this has anything to do with cryptography.  I suspect it has 
more to do with the jobs-versus-science debate that most countries have 
every time the jobs market gets tough or the science numbers start to 
look like those of a banana republic.


The general knee-jerk response to MSM angst by the bureaucrats 
(politicians instructing the State Dept, but common around the world) is 
to slow down visas to the educated.


The less amusing thing is that slowing down visas for high-end talent is 
precisely the wrong thing to do in economic terms, because these people 
bring in opportunities and enlarge the pie; they don't take from a 
shrinking pie.  But there it is!  There is now even a separate branch 
of economics dealing with why lessons such as Ricardo's concepts of free 
trade remain unlearnt after hundreds of years.




iang


On 17/10/13 11:29 AM, Eugen Leitl wrote:


http://blogs.fas.org/secrecy/2013/10/shamir/

Cryptographer Adi Shamir Prevented from Attending NSA History Conference

Categories: Science, Secrecy

In this email message to colleagues, Israeli cryptographer Adi Shamir
recounts the difficulties he faced in getting a visa to attend the 2013
Cryptologic History Symposium sponsored by the National Security Agency. Adi
Shamir is the “S” in the RSA public-key algorithm and is “one of the finest
cryptologists in the world today,” according to historian David Kahn. The NSA
Symposium begins tomorrow. For the reasons described below, Dr. Shamir will
not be there.

From: Adi Shamir
Date: October 15, 2013 12:16:28 AM EDT
To:
Subject: A personal apology

The purpose of this email is to explain why I will not be able to attend the
forthcoming meeting of the History of Cryptology conference, even though I
submitted a paper which was formally accepted. As an active participant in
the exciting developments in academic cryptography in the last 35 years, I
thought that it would be a wonderful opportunity to meet all of you, but
unfortunately the US bureaucracy has made this impossible.

The story is too long to describe in detail, so I will only provide its main
highlights here. I planned to visit the US for several months, in order to
attend the Crypto 2013 conference, the History of Cryptology conference, and
to visit several universities and research institutes in between in order to
meet colleagues and give scientific lectures. To do all of these, I needed a
new J1 visa, and I filed the visa application at the beginning of June, two
and a half months before my planned departure to the Crypto conference in mid
August. I applied so early since it was really important for me to attend the
Crypto conference – I was one of the founders of this flagship annual
academic event (I actually gave the opening talk in the first session of the
first meeting of this conference in 1981) and I did my best to attend all its
meetings in the last 32 years.

To make a long story short, after applying some pressure and pulling a lot of
strings, I finally got the visa stamped in my passport on September 30-th,
exactly four months after filing my application, and way beyond the requested
start date of my visit. I was lucky in some sense, since on the next day the
US government went into shutdown, and I have no idea how this could have
affected my case. Needless to say, the long uncertainty had put all my travel
plans (flights, accomodations, lecture commitments, etc) into total disarray.

It turns out that I am not alone, and many foreign scientists are now facing
the same situation. Here is what the president of the Weizmann Institute of
Science (where I work in Israel) wrote in July 2013 to the US Ambassador in
Israel:

“I’m allowing myself to write you again, on the same topic, and related to
the major difficulties the scientists of the Weizmann Institute of Science
are experiencing in order to get Visa to the US. In my humble opinion, we are
heading toward a disaster, and I have heard many people, among them our top
scientists, saying that they are not willing anymore to visit the US, and
collaborate with American scientists, because of the difficulties. It is
clear that scientists have been singled out, since I hear that other ‘simple
citizen’, do get their visa in a short time.”

Even the president of the US National Academy of Science (of which I am a
member) tried to intervene, without results. He was very sympathetic, writing
to me at some stage:

“Dear Professor Shamir

I have been hoping, day by day, that your visa had come through. It is very
disappointing to receive your latest report. We continue to try by seeking
extra attention from the U. S. Department of State, which has the sole
authority in these matters. As you know, the officers of the Department of
State in embassies around the world also have much authority. I am personally
very sympathetic and hopeful that your efforts and patience will still yield
results but also realize that this episode has been very trying. We hope to
hear of a last-minute success

Re: [cryptography] Allergy for client certificates

2013-10-10 Thread ianG

On 9/10/13 01:41 AM, Tony Arcieri wrote:


We use client certs extensively for S2S authentication where I work
(Square).

As for web browsers, client certs have a ton of problems:



I have successfully used them in a PHP website of my own design.  I just 
plugged away until they worked.  I grant they have a ton of problems, 
but it may be a case of half-empty or half-full.  Here are my 
point-by-point experiences, partly because the whole exercise for me was 
in order to find out...




1) UX is *TERRIBLE*. Even if you tell your browser to use a client
cert for a given service, and you go back to that service again,
browsers often don't remember and prompt you EVERY TIME to pick which
cert to use from a giant list. If you have already authenticated against
a service with a given client cert, and that service's public key hasn't
changed, there's absolutely no reason to prompt the user every single
time to pick the cert from all of the client certs they have installed.



Yes, that part doesn't work.  So what my site did was take every cert 
provided and hook it up to the account in question.  This was a major 
headache to code up, because I had to interpolate the contents of the 
certs and do things like match email addresses.


It has a number of interesting edge cases such as correct name but 
different email address.  Also, the name isn't unique, or is it?


I solved these edge cases by leaning on CAcert's systems of governance, 
and simply asking the user:  "is this you?"  If they lie, I can fall 
back on Arb to solve everything.  (OK, I cheated a bit there to get it 
to work; there are other solutions and other possibilities, but I wanted 
a seamless, no-support solution.)
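As an illustration of that matching step (hypothetical helper names, 
using the parsed-certificate dict shape that Python's ssl module returns 
from getpeercert(); the real site was PHP, but the logic is the same):

```python
def emails_from_cert(cert):
    """Collect candidate email addresses from a parsed client cert.
    `cert` has the dict shape returned by ssl.SSLSocket.getpeercert()."""
    emails = set()
    for rdn in cert.get('subject', ()):
        for key, value in rdn:
            if key == 'emailAddress':
                emails.add(value.lower())
    for kind, value in cert.get('subjectAltName', ()):
        if kind == 'email':
            emails.add(value.lower())
    return emails

def match_account(cert, accounts):
    """Return the unique account whose registered email appears in the
    cert, or None -- the ambiguous cases fall back to asking the user."""
    cert_emails = emails_from_cert(cert)
    hits = [a for a in accounts if a['email'].lower() in cert_emails]
    return hits[0] if len(hits) == 1 else None
```

The None path is where the "is this you?" question comes in: names 
aren't unique and emails drift, so automatic matching has to give up 
cleanly rather than guess.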




2) HTML keygen tag workflow is crap and confusing. It involves
instructing users to install the generated cert in their browser, which
is weird and unfamiliar to begin with. Then what? There's no way to
automatically direct users elsewhere, you have to leave a big list of
instructions saying Please install the cert, then after the cert is
installed (how will the user know?) click this link to continue



This is a problem that is outsourced from the website/user to the 
CA/user interface.  It can be done.  I don't know how the coding is 
done, but CAs do handle this well enough.  I think again it is just a 
matter of plugging away until you get the code going.


(What is not easy is using the certs for email.  That's a fail, unless 
you are using some form of automatic certs distribution.)



3) Key management UX is crap: where are my keys? That varies from
browser to browser. Some implement their own certificate stores. Others
use the system certificate store. How do I get to my keys? For client
certs to replace passwords, browsers need common UI elements that make
managing, exporting, and importing keys an easy process.



It is true that key management is crap, but how much do you care?  As 
long as the keys work, everything is cool ... for *users*.  They never 
need to look at keys.  Only developers need that ... ;-)


(Yes, this is skipping the whole privacy question, but having worked 
through it, you aren't going to do any worse with certs.)




Passwords may be terrible, but they're familiar and people can actually
use them to successfully log in. This is not the case for client certs.
They're presently way too confusing for the average user to understand.



I think, if you can crack the get-a-cert-into-the-browser problem, that 
whole equation flips around.  In my experience, client certs worked far 
more easily than passwords.  They just worked.


The big benefit we had in our community was that our target audience 
already had to have put their cert into their browser, it was part of 
the Assurer test.


What was not easy is the websites.  Taking a random site X, like a wiki, 
and engaging it for immediate auth with the cert is hard, mostly because 
these systems out there have never really considered certs, and often 
enough they haven't even considered SSL.





iang



ps;   More here:
http://wiki.cacert.org/Technology/KnowledgeBase/ClientCerts/theOldNewThing

