Re: [cryptography] Gogo inflight Internet uses fake SSL certs to MITM their users

2015-01-06 Thread Peter Maxwell
On 6 January 2015 at 15:40, Jeffrey Altman jalt...@secure-endpoints.com
wrote:

 On 1/5/2015 8:47 PM, John Levine wrote:
 
 
 http://venturebeat.com/2015/01/05/gogo-in-flight-internet-says-it-issues-fake-ssl-certificates-to-throttle-video-streaming/
 
  They claim they're doing it to throttle video streaming, not to be evil.
 
  Am I missing something, or is this stupid?  If they want to throttle
  user bandwidth (not unreasonable on a plane), they can just do it.
  The longer a connection is open, the less bandwidth it gets.

 I suspect that throttling user bandwidth is not the goal.  Instead they
 are attempting to strip out embedded video from within http streams.
 Since the video stream might be sent over the same tcp connection as
 non-video content they can improve the user's experience by delivering
 all but the video.


So why do they not take a more traditional approach of:

i. blocking obvious video services (YouTube, etc.) wholesale; and,

ii. limiting sustained bandwidth per user at a level that would frustrate
viewing video anyway.
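For what it's worth, option ii is usually implemented with something like a per-user token bucket; a minimal sketch (the rate and burst numbers are illustrative only):

```python
import time

class TokenBucket:
    """Per-user sustained-bandwidth limiter: 'rate' bytes/second with a
    burst allowance of 'capacity' bytes (numbers are illustrative)."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True  # forward the burst
        return False     # drop or queue: sustained rate exceeded

bucket = TokenBucket(rate=64_000, capacity=128_000)  # ~512 kbit/s sustained
print(bucket.allow(100_000))  # True: within the burst allowance
print(bucket.allow(100_000))  # False: sustained rate exceeded
```

A limiter like this frustrates video playback without ever touching, let alone intercepting, the TLS layer.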


It's somewhat easier to do than intercepting SSL/TLS connections.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Announcing ClearCrypt: a new transport encryption library

2014-05-04 Thread Peter Maxwell
On 4 May 2014 23:54, Tony Arcieri basc...@gmail.com wrote:



 The project is presently complete vaporware, but the goal is to produce a
 Rust implementation of a next generation transport encryption library. The
 protocol itself is still up for debate, but will likely be based off
 CurveCP or Noise.



Would be interested in this, even if just as the crazy bearded person in
the corner shouting abuse mixed with random suggestions.


Re: [cryptography] New Hand Cipher - The Drunken Bishop

2013-12-26 Thread Peter Maxwell
On 26 December 2013 19:56, Aaron Toponce aaron.topo...@gmail.com wrote:

 On Thu, Dec 26, 2013 at 02:53:06PM -0500, Jeffrey Walton wrote:
  On Thu, Dec 26, 2013 at 2:44 PM, Aaron Toponce aaron.topo...@gmail.com
 wrote:
  BBS is not practical in practice due to the size of the moduli
  required. You could probably go outside, take an atmospheric reading,
  and then run it through sha1 quicker. See, for example,
 
 http://crypto.stackexchange.com/questions/3454/blum-blum-shub-vs-aes-ctr-or-other-csprngs
 .

 Understood. BBS was only an example of some way to modify the algorithm to
 introduce non-linearity into the system. I thought I had it, but it's
 apparent I don't. I'm just grateful I'm not getting shamed and flamed by
 cryptographers on this list much stronger in the field than I. :)


OK, I've only skim-read the blog page that describes the algorithm, but on
a cursory reading it seems trivially weak/breakable.

If you view moving the bishop as an s-box lookup, and apply it to
itself three times (composition), you end up with another s-box of the same
size; let's call it S.  Given that S doesn't change, things should be rather
easy indeed.  If your cipher is then roughly akin to C[n] = P[n] + S[ C[n-1] ],
with all operations taken modulo 2^6, the problem should now be a little
more obvious.
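To make that concrete, a toy sketch in Python: the fixed random permutation below is a hypothetical stand-in for the composed bishop-move s-box, and the recurrence is the one above.  A known-plaintext attacker recovers table entries directly, since S[C[n-1]] = (C[n] - P[n]) mod 2^6:

```python
import random

M = 64  # 2^6, the alphabet size

def encrypt(plaintext, S, c0=0):
    # The recurrence from the post: C[n] = (P[n] + S[C[n-1]]) mod 2^6
    out, prev = [], c0
    for p in plaintext:
        c = (p + S[prev]) % M
        out.append(c)
        prev = c
    return out

# A fixed random permutation stands in for the composed bishop-move s-box.
rng = random.Random(1)
S = list(range(M))
rng.shuffle(S)

plaintext = [rng.randrange(M) for _ in range(2000)]
ciphertext = encrypt(plaintext, S)

# Known-plaintext attack: each pair (C[n-1], C[n]) leaks one table entry,
# since S[C[n-1]] = (C[n] - P[n]) mod 2^6.
recovered, prev = {}, 0
for p, c in zip(plaintext, ciphertext):
    recovered[prev] = (c - p) % M
    prev = c

# Every recovered entry matches S exactly; a couple of thousand symbols
# cover essentially the whole table.
print(len(recovered), all(recovered[k] == S[k] for k in recovered))
```

Once S is known, decryption is immediate: P[n] = (C[n] - S[C[n-1]]) mod 2^6.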

While I very much like the idea of using a standard chessboard to run a
cipher - it's innocuous and the key could be hidden almost in plain sight
- the actual cipher isn't much use, at least not if I've got the gist of
it.  If I've misunderstood the description, please correct me (preferably
with a more terse description).

Can I suggest doing some preliminary reading on group theory and
finite-field maths, and also paying more attention to how existing strong
stream ciphers are constructed.  One of the reasons Solitaire is useful is
that you can mathematically prove certain properties about the cipher
operation; you'll also note the entire internal state of Solitaire changes,
while your design stays static.


[cryptography] Fwd: Which programs need good random values when a system first boots?

2013-10-20 Thread Peter Maxwell
(Sorry, I'll try sending to the list this time... Gmail seems to default
to replying to the individual.)



On 20 October 2013 16:25, Paul Hoffman paul.hoff...@vpnc.org wrote:

 Greetings again. The recent discussion seems to have veered towards having
 enough good random bits to create long-lived keys the first time that a
 system boots up. Which programs need this? sshd is at the top of the list;
 are there others?


Filesystem encryption, e.g. GELI on FreeBSD, is what immediately comes to
mind: you normally set that up right when you've just installed a fresh
system, it needs fairly reasonable key lengths, and you'd expect to be
using those keys for quite a long time.



Re: [cryptography] Preventing Time Correlation Attacks on Leaks: Help! :-)

2013-08-20 Thread Peter Maxwell
Hi Fabio,

While I don't mean to be dismissive, I suspect your threat model is flawed
for the following reasons:

i. Most mid to large companies would not permit the use of Tor within their
infrastructure and even if the hypothetical company did, it doesn't take a
whole lot of effort to track down the handful of users within a company
using Tor/stunnel/ssh/VPN.  For that matter, I understand some companies
even install private CA certificates into the browsers on company computers
and decrypt outgoing SSL/TLS traffic at their web-proxy/firewall... in that
situation, your WB is going to stand out like a sore thumb as theirs will be
the only TLS connection that isn't being decrypted (because it's Tor).  So
unless you want your whistle-blowers to literally advertise their presence
as worthy of attention, they aren't going to do the leak from a company
system directly.

ii. So, presuming i. is valid - and I suspect anyone who has worked within
a competent security operations team will tell you the same - then you must
assume the whistle-blower will do the leak from either their personal
systems, a burn computer or a public system.  If we make the assumption
that the WB has taken the data out of the company/organisation on removable
media or otherwise made it available to themselves outside the company
infrastructure in a secure manner (while sometimes difficult, that is still
far easier than i.) then your attacker can only see the WB's traffic if
they are actively monitoring the WB's use of computers outside the company,
in which case said WB has far bigger problems to worry about.  If the
attacker cannot monitor the timing of the leak, your problem is not framed
in the manner you've presented.

iii. Even if your model was realistic, you cannot adequately defend against
traffic analysis for such a low-traffic network: you need other traffic to
hide in, lots of it, from other users within the same company - it's not
realistic for this type of service.

iv. There are more subtle problems you are going to come across, not least
of which are issues such as document tagging/water-marking/document
versioning and the ability for the attacker - your hypothetical manager -
to correlate leaked documents against the access rights and access patterns
of potential whistle-blowers.  For that matter, simple forensic analysis of
staff computers is usually more than sufficient (and yes, organisations do
this).


It's also the Isle of Man that people like hiding their ill-gotten gains in,
not Island of Mann ;-)  Interestingly, I think anyone who has used Isle
of Man accounts for tax avoidance is scuppered, as HMRC has signed an
agreement with the authorities there for automatic disclosure.


Anyway, as far as I can see it, you have two different scenarios to
consider with one being significantly more difficult to solve than the
other:


A. The scenario where the whistle-blower is able to take the data out of the
company on removable media or paper copy.  This is the easy one to solve.
Personally I would favour a combination of asymmetric encryption with
single-use keypairs and USB sticks in the post, but I'm old-fashioned that
way.

B. The scenario where the whistle-blower has to leak from the
company/organisation's network.  This is extremely difficult indeed.  If I
were approaching this problem myself, my first considerations would be: how
to make the traffic look like normal web-traffic; how to ensure no forensic
traces are left; and how to do that without installation of third-party
software as that is a dead give-away.  If the quantity of data is larger
than a few hundred Mb, the problem is probably not solvable.


That's my tuppence-worth, hope that helps,

Peter


Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread Peter Maxwell
On 17 August 2013 19:23, Jon Callas j...@callas.org wrote:


 On Aug 17, 2013, at 10:41 AM, ianG i...@iang.org wrote:

  Apologies, ack -- I noticed that in your post.
 
  (And I think for crypto/security products, the BSD-licence variant is
 more important for getting it out there than any OSI grumbles.)

 Thanks. I agree with your comments in other parts of those notes that I
 removed about issues with open versus closed source. I often wish I didn't
 believe in open source, because the people doing closed source get much
 less flak than we do.


I'm not sure that's true (that closed-source gets less flak).  From the
user's point of view if security issues arise in a closed-source product
then there are two possible explanations: either the vendor made a mistake
or they did it deliberately; with no way to distinguish, it can be much
more damaging to a company's reputation.  This can be demonstrated by
example: can we have a show of hands for anyone who would trust Skype to
handle anything important/sensitive?

An open-source product on the other hand - in theory at least - is more
amenable to people determining whether a problem was a mistake or
deliberate... or at least the user can make an informed choice based on the
evidence.  From a personal point of view, I don't tend to run software I
cannot look at the source for; granted that is in part due to being able to
fix problems more easily, but there have been instances where I've chosen
not to use software because I've seen the state of the source and thought
"nae danger am I running that on an internet-facing interface".

So, long-story-short, I think your choice was the preferable one and any
flak you might be getting is more likely to work in your favour in the long
term, as long as you keep doing as you have done by continuing to address
those concerns.  There are complicating factors with software like
SilentCircle as I don't trust the underlying OS or firmware of any
currently available mobile device - and I trust even less any potential
recipient's device - but that's a whole other discussion, and a far more
difficult problem.




  Ah ok.  Will they be writing an audit report?  Something that will give
 us trust that more people are sticking their name to it?

 I get regular audit reports, and have since last fall. :-)

 I haven't been putting them out because it felt like argument from
 authority. Hey, don't audit this yourself, trust these guys!

 Moreover, those reports are guidance we have from an independent party on
 what to do next. I want those to be raw and unvarnished. If they're going
 to get varnished, I lose guidance and I also lose speed. A report that's
 made for the public is definitionally sanitized. I don't want to encourage
 sanitizing.

 It's a hard problem. I understand what you want, but my goal is to provide
 a good service, not a good report.


I personally wouldn't expect publication of internal audits.  What might
assuage people's concerns though is being able to verify the package they
are running has definitely been compiled from the source code that is
publicly available: people have checked the source for SilentCircle's
products - and from what I can tell, independently - so if we assume we
trust the source there needs to be a chain of trust to ensure the binary
that's being executed has not been altered (I don't expect you ever would
but it's a nice feature to be able to prove it).

The corollary to this is that, for the ultra-paranoid, the provision of a
hash/signature would probably be better done by a third party, i.e. if
Zooko is intimating that in the current model SilentCircle could distribute
a back-doored package, then there is no improvement unless the trust is
shared with an independent third party... preferably someone not subject to
US jurisdiction.


Re: [cryptography] [ramble] [tldr] Layered security where encryption is used?

2013-07-21 Thread Peter Maxwell
On 21 July 2013 22:40, Ben Lincoln f70c9...@beneaththewaves.net wrote:

 Maybe I am misunderstanding (and I apologize if so), but I don't think
 authenticated encryption will address the main problem I'm trying to solve.
 Preventing tampering is important (and I think some of what I suggested has
 the effect of making it at least a little harder to tamper with the data),
 but it's by far the secondary concern.


Unless your software is horribly broken to begin with - and it arguably is,
given what you're attempting to do - a MAC or authenticated encryption is
sufficient to solve *most* of your problem.




 The main problem is trying to reduce the likelihood that the system will
 decrypt data of a different type than it expects, and then display that
 decrypted data to the user (IE allowing them to decrypt arbitrary data
 without themselves knowing the encryption key). Unless I'm missing
 something (and that is certainly possible), then the data will always pass
 an authentication check, because it's always generated by the system in
 question - it just isn't intended to be decrypted and displayed by that
 specific part of the application.


The MAC covers the entire data blob, so assuming each attribute you wish to
store/retrieve has context information associated with it, the attacker
cannot forge a valid blob.

Say, you have a data block which contains,

someSensitiveVariable=topsecretinfo

then trying to send that to a display function that expects

someVariableToBeDisplayed=info

won't work because your code checks the variable name (a trite example but
you get the idea).
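A sketch of that check (the key and field names are made up for illustration): because the field name is covered by the MAC, renaming a field invalidates the tag.

```python
import hashlib
import hmac

MAC_KEY = b'example-mac-key'  # hypothetical shared secret, for illustration

def tag(blob: bytes) -> bytes:
    # MAC over the whole blob, field name included
    return hmac.new(MAC_KEY, blob, hashlib.sha256).digest()

blob = b'someSensitiveVariable=topsecretinfo'
t = tag(blob)

# Renaming the field to sneak it past the display check breaks the MAC:
forged = b'someVariableToBeDisplayed=topsecretinfo'
print(hmac.compare_digest(tag(blob), t))    # True: genuine blob verifies
print(hmac.compare_digest(tag(forged), t))  # False: forgery is rejected
```

The server simply refuses to process any blob whose tag doesn't verify, before looking at the contents at all.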


If the attacker determines field boundaries within the data blob, they
still cannot forge a sensitive attribute into a non-sensitive one, because
they cannot calculate the required MAC without the secret key.

If it were I implementing a scheme such as this, I'd do the MAC after the
encryption, include a nonce value in the blob as well as an expiry
timestamp and an identifier for the user; I'd also use a different secret
key for the MAC than for the encryption.

Be very careful with your padding and field delineation. And as
CodesInChaos said above, use a different cryptographically random IV for
encrypting each different data blob and for the MAC.
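A rough sketch of that layout follows.  Everything here is illustrative: the key material is hypothetical, and SHA-256 in counter mode is a stand-in stream cipher; a real implementation would use AES-CTR or similar, with HMAC for the tag.

```python
import hashlib
import hmac
import os
import struct
import time

# Separate illustrative keys for encryption and authentication.
ENC_KEY = b'enc-key-for-illustration-only'
MAC_KEY = b'mac-key-for-illustration-only'

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a stand-in stream cipher.
    out, ctr = b'', 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + struct.pack('>Q', ctr)).digest()
        ctr += 1
    return out[:length]

def seal(user_id: bytes, payload: bytes, ttl: int = 300) -> bytes:
    # user_id must not contain the b'|' separator
    nonce = os.urandom(16)
    expiry = struct.pack('>Q', int(time.time()) + ttl)
    ct = bytes(a ^ b for a, b in
               zip(payload, _keystream(ENC_KEY, nonce, len(payload))))
    body = nonce + expiry + user_id + b'|' + ct
    # Encrypt-then-MAC: the tag covers nonce, expiry, user id and ciphertext.
    return body + hmac.new(MAC_KEY, body, hashlib.sha256).digest()

def open_blob(blob: bytes, user_id: bytes) -> bytes:
    body, mac = blob[:-32], blob[-32:]
    # Verify the MAC before parsing or decrypting anything.
    if not hmac.compare_digest(
            hmac.new(MAC_KEY, body, hashlib.sha256).digest(), mac):
        raise ValueError('bad MAC')
    nonce = body[:16]
    expiry = struct.unpack('>Q', body[16:24])[0]
    uid, _, ct = body[24:].partition(b'|')
    if uid != user_id or time.time() > expiry:
        raise ValueError('wrong user or expired')
    return bytes(a ^ b for a, b in
                 zip(ct, _keystream(ENC_KEY, nonce, len(ct))))

print(open_blob(seal(b'alice', b'topsecretinfo'), b'alice'))  # b'topsecretinfo'
```

Note the order of operations on receipt: the MAC is checked over the raw encrypted body first, and only then is anything parsed or decrypted.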


The first reason I said "most of your problem" above is that an attacker
can resubmit a valid blob and have it processed identically, viz. a replay
attack.  For example,

someSensitiveTransactionAmountToTransfer=1

could be resubmitted, and your software cannot tell whether it's a duplicate
or not.  The only way past that, as far as I can see, is to keep track of
the nonce values, in which case you're back to square one: you need a
central transaction store and may as well just put all your sensitive info
in there anyway, which I strongly suggest you should be doing in the first
place.
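That nonce tracking is only a couple of lines, but it is exactly the central server-side state just mentioned:

```python
# The server remembers every nonce it has accepted; seeing a nonce twice
# means a replayed blob. (In practice this set would live in the central
# transaction store, with expired nonces pruned.)
seen_nonces = set()

def accept(nonce: bytes) -> bool:
    if nonce in seen_nonces:
        return False  # replay: reject
    seen_nonces.add(nonce)
    return True

print(accept(b'nonce-1'))  # True: first submission
print(accept(b'nonce-1'))  # False: replay detected
```

The expiry timestamp in the blob bounds how long each nonce must be remembered.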


The second reason I said "most of your problem" is slightly more subtle.
If an attacker can create encrypted fields from *plaintext* data, say they
enter their information in a form which is then encrypted and stored in the
browser, then the attacker can use the application as an oracle.  If we
furthermore consider the sensitive and non-sensitive data in the vein of
error-correcting codes, spanning a space much smaller than the possible
space over F_2^n, and we also assume the block size of the cipher is small
enough (64 bits, say), it may be possible for the attacker to iterate over
that search space and generate the corresponding ciphertext blocks, storing
them in a table.  By comparing the ciphertext of sensitive data blobs to
the table, the attacker can then identify the corresponding plaintext,
essentially decrypting the sensitive data without requiring the key.  There
are optimisations to this attack which I haven't described here.

This last problem can be solved by using temporal or derived secret keys,
i.e. you have the master secret key and for each data block you combine
that with some cryptographically random data to derive a one-time secret
key.  However, as you'll notice, things are already getting quite messy
and the scope for making a complete hash of things (pun intended) has
dramatically increased.
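A minimal sketch of such key derivation, HKDF-style using only the standard library (the master key and info string are illustrative):

```python
import hashlib
import hmac
import os

MASTER_KEY = b'master-secret-for-illustration'  # hypothetical master key

def derive_blob_key(master: bytes, salt: bytes) -> bytes:
    # One HKDF-style extract-then-expand step: extract a pseudorandom key
    # with the salt, then expand it with a context label.
    prk = hmac.new(salt, master, hashlib.sha256).digest()
    return hmac.new(prk, b'blob-encryption\x01', hashlib.sha256).digest()

# Fresh random data per blob means a distinct one-time key per blob,
# so identical plaintexts no longer produce identical ciphertexts.
salt1, salt2 = os.urandom(16), os.urandom(16)
k1 = derive_blob_key(MASTER_KEY, salt1)
k2 = derive_blob_key(MASTER_KEY, salt2)
print(k1 != k2)  # True: per-blob keys differ
```

The salt travels in the clear alongside the blob; it defeats the table attack because the attacker's precomputed table is only valid for one salt.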




  Using separate keys for separate types of data will go a long way there,
 but I am trying to come up with a completely separate mechanism that
 operates using a different method, but which will also help prevent the
 unwanted outcome even if someone makes a mistake and uses the same key for
 sensitive and non-sensitive data. In other words, the mechanism I'm trying
 to come up with can't involve using different keys, because I already have
 a mechanism based on that principle.


 For comparison, think of physical safety features in industrial equipment.
 The B Reactor at Hanford had a gravity-based system that would
 automatically insert the control rods in the event of a power failure, but
 there was also a system 

Re: [cryptography] 100 Gbps line rate encryption

2013-07-17 Thread Peter Maxwell
On 17 July 2013 08:50, William Allen Simpson 
william.allen.simp...@gmail.com wrote:



  In summary, don't use RC4. Don't use it carelessly with IVs. And don't
 use RC4.

  RC4 is available in many libraries and platforms.  For the
 immediate future, it is most easily and likely implemented.

 We need something yesterday, not next year.


So is Salsa20; for that matter, you have optimised versions available in
NaCl, etc.




 So, that's one of the options being explored.  All I'm
 trying to cover is doing it as securely as possible.


Then RC4 is not the way to go, especially when you're starting off with
anything standardisation-shaped.





 (As I've some experience with this, you can rest assured
 that I've a fair understanding of IVs and other mechanics.)



  Consider using Salsa20 instead.

  It would be helpful for folks to read the entire thread
 before making off the wall comments.

 Yes, folks have mentioned Salsa20.  It doesn't seem as
 amenable to PPP packets as I would like.  But as I was
 looking at it, it seemed he'd moved on to ChaCha.  I'm
 behind the times on this


You're rekeying RC4 every packet and having to construct a do-it-yourself
IV scheme; that doesn't seem particularly amenable to begin with.




 So, let's talk about what to choose for something fast and
 modern to implement in the next decade  We cannot
 recommend a dozen EU possibilities.  We need something
 that's already had some significant analysis.  Salsa20 or
 ChaCha?  Discuss.


Salsa20, you can choose one of the faster variants.

If you're not wanting encryption just for appearances' sake - and your
phrase "securely as possible" above indicates that - you may also want to
consider a MAC... again, these days you have easy(ish) options.


Re: [cryptography] Potential funding for crypto-related projects

2013-06-30 Thread Peter Maxwell
On 1 July 2013 01:55, Jacob Appelbaum ja...@appelbaum.net wrote:


  I would like to see a tor configuration flag that sacrifices speed for
  anonymity.

 You're the first person, perhaps ever, to make that feature request
 without it being in a mocking tone. At least, I think you're not mocking!
 :)



I would second that, it would be a desirable feature.

As it happens, I have been pondering this very problem for a while now,
even before information came to light about GCHQ's pervasive tapping of
fibre cables.  While I doubt any government agency is at the moment running
any decent traffic analysis on the Tor network - as was alluded to in
previous posts, it's hardly worth their while at the moment - conceptually
it wouldn't take a massive leap to do so.  If you have visibility of a
large proportion of the internet with very accurate time stamps, it will
almost certainly be possible to break the anonymity protection that Tor
currently provides.

There are some naive models that can combat that type of traffic analysis,
but they all introduce new problems as well.  For example, if one creates a
new mode of operation in which nodes forward entire messages instead of
packets, and those messages carry lower- and upper-bound delay fields, it
would seem on the face of it that one could thwart traffic analysis because
the data forwarding times are almost completely disassociated from the
sender.  However, because a larger message is forwarded instead of packets,
a new statistical bias is introduced in terms of message size and reduced
frequency of forwarding events.  So this naive model may actually make the
situation worse.

So, yes, being able to sacrifice speed for improved anonymity is a
desirable feature but I doubt it's going to be particularly easy to design
or implement.  There's also the problem of having applications that can
utilise a mode of operation that has potentially much higher latency.


Re: [cryptography] 100 Gbps line rate encryption

2013-06-22 Thread Peter Maxwell
I think Bernstein's Salsa20 is faster and significantly more secure than
RC4; whether you'll be able to design hardware to run at line-speed is
somewhat more questionable, though (would be interested to know if it's
possible, right enough).



On 22 June 2013 18:35, William Allen Simpson 
william.allen.simp...@gmail.com wrote:

 A quick question: what are our current options for 100 Gbps
 line rate encryption?

 Are we still using variants of ARC4?



Re: [cryptography] 100 Gbps line rate encryption

2013-06-22 Thread Peter Maxwell
On 22 June 2013 23:31, James A. Donald jam...@echeque.com wrote:

  On 2013-06-23 6:47 AM, Peter Maxwell wrote:



  I think Bernstein's Salsa20 is faster and significantly more secure than
 RC4, whether you'll be able to design hardware to run at line-speed is
 somewhat more questionable though (would be interested to know if it's
 possible right enough).


 I would be surprised if it is faster.




Given the 100 Gbps spec, I can only presume it's hardware that's being
talked about, which is well outwith my knowledge.  We also don't know
whether only a single keystream is to be allowed.

However, just to give an idea of performance: from a cursory search on
Google, one can seemingly find Salsa20/12 being implemented recently on
GPU with performance around 43 Gbps without memory transfer (2.7 Gbps
with) - http://link.springer.com/chapter/10.1007%2F978-3-642-38553-7_11 -
unfortunately I don't have access to the paper.

On a decent 64-bit processor, the full Salsa20/20 comes in at around
3-4 cpb - http://bench.cr.yp.to/results-stream.html - and while cpb isn't a
great measurement, it at least gives a feel for things.


Going on a very naive approach, I would imagine the standard RC4 will
suffer due to being byte-orientated and not particularly open to
parallelism.  Salsa20 operates on 32-bit words and from a cursory
inspection of the spec seems to offer at least some options to do
operations in parallel.

If I were putting money on it, I suspect one could optimise at least
Salsa20/12 to be faster than RC4 on modern platforms; whether this has been
done is another story.  Fairly sure Salsa20/8 was faster than RC4
out-of-the-box.

As with anything though, I stand to be corrected.


Re: [cryptography] can the German government read PGP and ssh traffic?

2012-05-29 Thread Peter Maxwell
On 30 May 2012 05:01, ianG i...@iang.org wrote:

 On 29/05/12 11:03 AM, Peter Maxwell wrote:



 On 29 May 2012 01:35, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

Peter Maxwell pe...@allicient.co.uk writes:

 Why on earth would you need to spread your private-key across any
number of
 less secure machines?

The technical details are long and tedious (a pile of machines that
need to
talk via SSH because telnet and FTP were turned off/firewalled years
ago, I
won't bore you with the details).  The important point isn't the
technical
details but the magical thinking, a private key sprayed all over
the place in
plaintext is more secure than a line-noise password because everyone
knows
passwords are insecure and PKCs are secure (and, as I've said, this
isn't an
isolated case).



 To make an analogy: people still manage to kill themselves in cars
 fitted with seat-belts and airbags.  That does not imply those measures
 are not an improvement but rather that the improvement is a statistical
 one.



 Right!  And that has to be measured rather than speculated about.



Fair enough.  Out of interest: how many systems have you known to be
compromised via ssh public key auth?  Now, how many systems have you seen
compromised through bad password handling?






  Similarly, just because some numpty stores private keys in plaintext
 does not imply that public key auth is not in general an improvement
 over password auth.  Yes, it is not magical but if the users of such
 systems cannot handle private keys with at least minimal care, there are
 bigger problems afoot.



 The false presumption in this argument is that users can handle anything
 with some assumed level of minimal care.

 The goal is to find something that works best with the users' limited
 attention and knowledge.  Passwords 'work' because at least the users know
 them, sort of.  Although PK is a theoretical improvement over passwords in
 all technical senses, that is only a theoretical analysis and does not
 necessarily translate to user context.  Especially, if the imposition of
 PKs requires the user to 'protect' the private key, all the technical
 presumptions drift rather rapidly from reality.  The particular result of
 this is that SSH's limitations forces a lot of unencrypted private keys, as
 does SSL/HTTPS for server keys.


No, I deliberately did not make such an assumption.  My argument was one of
like-for-like comparison: if users cannot handle private keys with minimal
care then they also cannot handle passwords with minimal care and neither
method wins, although arguably public key auth still has advantages.

That users know passwords and that passwords "work" is a large part of the
problem: the same low-entropy security token is used for multiple systems
with varying levels of sensitivity.  When using passwords, both the user
and the end systems must, in general, be trusted with the security token;
so if a user uses the same password on 20 services then *all* of those
services must be secure *and* the user must keep the password secure.  For
public key auth, only the user must keep the private key secure; the other
systems do not require trust.




  If multiple users need to use SSH on multiple hosts, they should store
 the private key on removable media and use it from a limited number of
 hosts; to hop from one host to another, create a port-forward on the
 first ssh session from which the second ssh session can connect through
 to the destination host, hence obviating the requirement for copying
 private keys and ensuring the intermediate hosts cannot decrypt any
 traffic.



 Would be nice.  When it's up and going, installed, on common platforms,
 and available easily for users, that will be useful.


It is not difficult: sshd does port-forwarding in the default
configuration.  All that is required is the user knowing to port-forward
through the intermediate host instead of copying their private key to it,
which is a basic management/user-education problem.

And before anyone suggests this is impossible or difficult: I have seen a
similar system with several thousand users and hundreds of hosts, without
people leaving their private keys all over the shop.  The policy is easy
enough to enforce: if a private key is found where it shouldn't be, the
users' public keys are removed from all hosts and they are required to
create new keys and start again.  This stuff isn't magic, it's basic
security management.





  I have yet to encounter a problem in real life that requires private ssh
 keys to be copied all over the shop



 I've seen it.  Last week:  E.g., repositories run by difficult sysadmins
 that don't respond.  They add one ssh key to the repository.  To get
 access, the next programmer is tempted to borrow the ssh key of someone
 else.

 Call these people bad, if you like.  I

Re: [cryptography] can the German government read PGP and ssh traffic?

2012-05-28 Thread Peter Maxwell
On 29 May 2012 01:35, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 Peter Maxwell pe...@allicient.co.uk writes:

 Why on earth would you need to spread your private-key across any number
 of
 less secure machines?

 The technical details are long and tedious (a pile of machines that need to
 talk via SSH because telnet and FTP were turned off/firewalled years ago, I
 won't bore you with the details).  The important point isn't the technical
 details but the magical thinking, a private key sprayed all over the
 place in
 plaintext is more secure than a line-noise password because everyone knows
 passwords are insecure and PKCs are secure (and, as I've said, this isn't
 an
 isolated case).



To make an analogy: people still manage to kill themselves in cars fitted
with seat-belts and airbags.  That does not imply those measures are not an
improvement but rather that the improvement is a statistical one.

Similarly, just because some numpty stores private keys in plaintext does
not imply that public key auth is not in general an improvement over
password auth.  Yes, it is not magical but if the users of such systems
cannot handle private keys with at least minimal care, there are bigger
problems afoot.

If multiple users need to use SSH on multiple hosts, they should store the
private key on removable media and use it from a limited number of hosts;
to hop from one host to another, create a port-forward on the first ssh
session from which the second ssh session can connect through to the
destination host, hence obviating the requirement for copying private keys
and ensuring the intermediate hosts cannot decrypt any traffic.

I have yet to encounter a problem in real life that requires private ssh
keys to be copied all over the shop and when it happens, it's bad
management, which no technical measure is going to sort.
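To make the hop concrete, here is a sketch of that setup with hypothetical
hostnames (hostA, hostB) and a hypothetical key path on removable media;
the ProxyJump shorthand assumes OpenSSH 7.3 or later:

```shell
# The private key lives only on the workstation's usb stick; hostA
# never sees it.  Forward a local port through hostA to hostB's sshd:
ssh -i /media/usbkey/id_rsa -L 2222:hostB.example.com:22 user@hostA.example.com

# In a second terminal, connect through the tunnel.  This session is
# encrypted end-to-end to hostB, so hostA cannot read the traffic:
ssh -i /media/usbkey/id_rsa -p 2222 user@localhost

# Newer OpenSSH collapses the two steps into one with ProxyJump:
ssh -i /media/usbkey/id_rsa -J user@hostA.example.com user@hostB.example.com
```

In both forms the intermediate host only relays ciphertext, which is the
property the paragraph above relies on.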
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] can the German government read PGP and ssh traffic?

2012-05-26 Thread Peter Maxwell
On 26 May 2012 06:57, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 Werner Koch w...@gnupg.org writes:

 Which is not a surprise given that many SSH users believe that ssh
 automagically makes their root account safe, and continue to use their
 lame passwords instead of using PK-based authentication.

 That has its own problems with magical thinking: provided you use PK auth,
 you're magically secure, even if the private key is stored in plaintext on
 ten different Internet-connected multiuser machines.  I don't know how
 many times I've been asked to change my line-noise password for PK auth,
 told the person requesting the change that this would make them less
 secure because I need to spread my private key across any number of
 not-very-secure machines, and they've said that's OK because as long as it
 uses PKCs it's magically secure.


Why on earth would you need to spread your private key across any number of
less secure machines?  A £10 usb stick and judicious port-forwarding reduce
this problem, in the worst case, to security equivalent to a password, and
normally quite a bit better.


Re: [cryptography] Symantec/Verisign DV certs issued with excessive validity period of 6 years

2012-04-23 Thread Peter Maxwell
On 23 April 2012 22:41, Marsh Ray ma...@extendedsubset.com wrote:


 Thought the list might be interested in this little development in the PKI
 saga.

 Do you all agree with my assertion that No one with a clue about PKI
 security would believe that a revoked cert provides equivalent security
 from misuse as a naturally-expired cert. ?

  - Marsh


With current client implementations, I agree your statement does hold.  I
do however disagree with your general premise that six years is too long.


Say two companies, A and B, have adopted similar security practices, and
assume the probability of a cert compromise in a given year for each of
them is roughly one in ten thousand, p_yr = 0.0001.

Company A has their cert issued with a two-year validity, so their
probability of compromise is p_2yr = 1 - (1 - p_yr)^2 = 1 - (0.9999)^2 ~=
2 x 10^-4.

Company B has their cert issued with a six-year validity, so their
probability of compromise is p_6yr = 1 - (0.9999)^6 ~= 6 x 10^-4.


In other words, it makes not a jot of difference to company B: they may be
roughly three times more at risk than company A but that's still only a six
in ten thousand chance, say.  Even if the original p_yr were higher, say
one in a thousand, it still doesn't merit any concern for the individual
company.
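As a sanity check on the arithmetic, a short Python sketch (using the same
illustrative one-in-ten-thousand figure) reproduces both probabilities:

```python
def compromise_prob(p_yr, years):
    """P(at least one compromise over `years` years), assuming
    independent years and a constant per-year probability."""
    return 1 - (1 - p_yr) ** years

p_yr = 0.0001                       # assumed one-in-ten-thousand per year
p_2yr = compromise_prob(p_yr, 2)    # company A, two-year validity
p_6yr = compromise_prob(p_yr, 6)    # company B, six-year validity

print(f"two-year cert: {p_2yr:.6f}")   # roughly 2 x 10^-4
print(f"six-year cert: {p_6yr:.6f}")   # roughly 6 x 10^-4
```

Company B's risk is almost exactly three times company A's, but both
remain negligible in absolute terms, which is the point of the argument.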


If we then turn our attention to the system as a whole: assuming we look at
all certificates, does extending the expiry period help reduce the impact
on users?  Well, actually, counter-intuitively it does not.

The rationale is as follows...

i. assume at any given time, some rate of compromise of all available
issued certs, r;

ii. assume also that most certificate forgery is used to steal money or
commit fraud, so there is an incentive to act quickly and the outlier cases
of long-term attacks can be discounted for now;

iii. further assume that the maximum time of utility for the attackers to
use the cert in ii. is approximately a day.

It immediately follows that for a whole-system expiry period of two years
the attacker is impeded and r is reduced by roughly 1 / (2 x 365) ~= 0.14%.
For expiry times of six years that changes to a reduction of 1 / (6 x 365)
~= 0.05%.

That's a fairly marginal result and if we assume the attacker(s) know(s)
this, they will simply avoid tackling certs near expiry, rendering the
difference null.
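That back-of-the-envelope reduction can be computed directly; the one-day
window of utility is the assumption from point iii above:

```python
def expiry_reduction(validity_years, useful_days=1):
    """Fraction of compromises impeded by expiry, assuming an attacker
    only needs the cert for `useful_days` before it loses its value
    (assumption iii above): only compromises occurring within that
    window before expiry are stopped by the expiry date."""
    return useful_days / (validity_years * 365)

r_2yr = expiry_reduction(2)   # two-year certs, roughly 0.14%
r_6yr = expiry_reduction(6)   # six-year certs, roughly 0.05%

print(f"two-year: {r_2yr:.4%}, six-year: {r_6yr:.4%}")
```

Either way the reduction is a fraction of a percent, which is why the
difference between the two validity periods comes out as marginal.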


So does the expiry period actually matter that much?  Intuitively, yes;
rationally, no.


Regards,

Peter


Re: [cryptography] NIST and other organisations that set up standards in information security cryptography.

2012-04-22 Thread Peter Maxwell
On Sun, Apr 22, 2012 at 4:54 AM, Marsh Ray ma...@extendedsubset.com wrote:

 On 04/22/2012 02:55 PM, Jeffrey Walton wrote:

  This might sound crazy, but I would rather have a NIST approved hash
  that runs orders of magnitude slower to resist offline, brute forcing
  attacks.

 Well, that's what we have KDFs with a tunable work factor like PBKDF2 for.



Exactly, hash functions aren't designed to be KDFs - they've merely been
appropriated within the design of some KDFs.  A specific hash function, to
meet the general requirements of a hash function, must be fast.  You can
take a fast hash function and design a slow KDF from it but not the
converse.
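As a concrete illustration of building a slow KDF from a fast hash,
Python's standard library exposes PBKDF2 with a tunable iteration count
(the password and iteration counts here are arbitrary examples):

```python
import hashlib
import os

# PBKDF2 iterates HMAC-SHA-256 a configurable number of times.  The
# iteration count is the work factor: raising it makes each brute-force
# guess proportionally more expensive, while the underlying hash stays
# fast for every other application.
salt = os.urandom(16)

fast = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 1_000)
slow = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)

# Same inputs and iteration count are deterministic, as a KDF must be:
again = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 1_000)
```

Note the converse really is impossible: there is no knob on SHA-256
itself to make it slower, which is exactly why the work factor lives in
the KDF construction rather than in the hash.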

It would be rather silly, in my opinion, for NIST to mandate a slow hash
function, as it would only be useful for this particular scenario.  For
almost every other application - and there are many, many more uses for a
hash function - it would be rendered useless, and nobody would implement it
other than for, guess what, use as part of a KDF.  So why not just mandate
an/another KDF standard?

The moral of the story is to use the correct tool for the job: an artist's
paint brush is excellent for painting pictures, but it will take you a
while to decorate your house with it.


Re: [cryptography] MS PPTP MPPE only as secure as *single* DES

2012-04-05 Thread Peter Maxwell
On 5 April 2012 18:06, Marsh Ray ma...@extendedsubset.com wrote:

 On 04/05/2012 04:12 AM, Ralf-Philipp Weinmann wrote:


 Do you have statistics on that? I remember newer Microsoft and Apple
 operating systems supporting L2Sec quite well. And then there are the
 Cisco abominanations of IPSec that are quite common. But maybe not as
 common as SSL VPNs. And let's not forget OpenVPN for the geek
 faction. Where did you get the data that PPTP still is one of the
 most commonly-used VPN protocols.


 Honestly, it's been years since I messed with VPNs and I have not done
 methodical research. I suspect VPN industry studies are likely to to be
 skewed by selection bias (IT departments who are likely to spend spend
 money on a real VPN).


There are two reasons I haven't commented on this (despite it being good
work):

i. I'm not familiar enough with PPTP, and always avoided it like the plague
anyway (and that was 10 years ago).  Does dial-up not still generally use
MS-CHAPv2?

ii. There's only been one occasion where I've seen a company use PPTP for a
VPN, and I responded as any self-respecting sys-admin would... I laughed, took the
piss a bit, then fixed it.  Anything else I've seen has been Cisco (IPSec
or SSL afaik), Checkpoint (IPSec?), more bog-standard IPSec setups and
OpenVPN.  For that matter, I've seen companies use the sshd socks proxy as
a VPN.


Re: [cryptography] [info] The NSA Is Building the Country's Biggest Spy Center (Watch What You Say)

2012-03-22 Thread Peter Maxwell
On 22 March 2012 14:15, Dean, James jd...@lsuhsc.edu wrote:

 From
 http://blogs.computerworld.com/19917/shocker_nsa_chief_denies_total_info
 rmation_awareness_spying_on_americans?source=CTWNLE_nlt_security_2012-03
 -22:

 Despite the fact that domestic spying on Americans is already an
 e-hoarding epidemic, the massive new NSA storage facility in Utah will
 solve the problem of how to manage 20 terabytes a minute of intercepted
 communications.

 Even if the intercepted communication is AES encrypted and unbroken
 today, all that stored data will be cracked some day. Then it too can be
 data-mined. The super secret spook agency is full of code breakers.
 Remember, former intelligence official Binney stated, a lot of
 foreign government stuff we've never been able to break is 128 or less.
 Break all that and you'll find out a lot more of what you didn't
 know-stuff we've already stored-so there's an enormous amount of
 information still in there. Binney added the NSA is on the verge of
 breaking a key encryption algorithm.


That sounds far more plausible than the previous explanations.  I'd also
suspect the key encryption algorithm may be RC4 and not AES at the moment.