Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-02 Thread Jerry Leichter
On Oct 1, 2013, at 12:27 PM, Dirk-Willem van Gulik wrote:
 It's clear what 10x stronger than needed means for a support beam:  We're 
 pretty good at modeling the forces on a beam and we know how strong beams of 
 given sizes are.  
 Actually - do we ? I picked this example as it is one of those where this 'we 
 know' falls apart on closer examination. Wood varies a lot; and our ratings 
 are very rough. We drill holes through it; use hugely varying ways to 
 glue/weld/etc. And we liberally apply safety factors everywhere; and a lot of 
 'otherwise it does not feel right' throughout. And in all fairness - while 
 you can get a bunch of engineers to agree that 'it is strong enough' - they'd 
 argue endlessly and have 'it depends' sort of answers when you ask them how 
 strong is it 'really' ?
[Getting away from crypto, but ... ]  Having recently had significant work done 
on my house, I've seen this kind of thing close up.

There are three levels of construction.  If you're putting together a small 
garden shed, "it looks right" is generally enough - at least if it's someone 
with sufficient experience.  If you're talking non-load-bearing walls, or even 
some that bear fairly small loads, you follow standards - use 2x4's, space them 
16 inches apart, use doubled 2x4's over openings like windows and doors, don't cut 
holes larger than some limit - and you'll be fine (based on what I saw, you 
could cut a hole large enough for a water supply, but not for a water drain 
pipe).  Methods of attachment are also specified.  These standards - enforced 
by building codes - are deliberately chosen with large safety margins so that 
you don't need to do any detailed calculations.  They are inherently safe over 
some broad range of sizes of a constructed object.

Beyond that, you get into the realm of computation.  I needed a long open span, 
which was accomplished with an LVL beam (engineered wood - LVL is Laminated 
Veneer Lumber).  The beam was supporting a good piece of the house's roof, so the 
actual forces needed to be calculated.  LVL beams come in multiple sizes, and 
the strengths are well characterized.  In this case, we would not have wanted 
the architect/structural engineer to just build in a larger margin of safety:  
There was limited space in the attic to get this into place, and if we chose 
too large an LVL beam just for good measure, it wouldn't fit.  Alternatively, 
we could have added a vertical support beam just to be sure - but it would 
have disrupted the kitchen.  (A larger LVL beam would also have cost more money, 
though with only one beam, the percentage it would have added to the total cost 
would have been small.  On a larger project - or, if we'd had to go with a 
steel beam because no LVL beam of appropriate size and strength existed - the 
cost increase could have been significant.)

The larger the construction project, the tighter the limits on this stuff.  I 
used to work with a former structural engineer, and he repeated some of the 
bad example stories they are taught.  A famous case a number of years back 
involved a hotel in, I believe, Kansas City.  The hotel had a large, open 
atrium, with two levels of concrete skyways for walking above.  The skyways 
were hung from the roof.  As the structural engineer specified their 
attachment, a long threaded steel rod ran from the roof, through one skyway - 
with the skyway held on by a nut - and then down to the second skyway, also 
held on by a nut.  The builder, realizing that he would have to thread the nut 
for the upper skyway up many feet of rod, made a minor change:  He instead 
used two threaded rods, one from roof to upper skyway, one from upper skyway to 
lower skyway.  It's all the same, right?  Well, no:  In the original design, 
the upper nut holds the weight of just the upper skyway.  In the modified 
version, it holds the weight of *both* skyways.  The upper fastening 
failed, the structure collapsed, and as I recall several people on the skyways 
at the time were killed.  So ... not even a factor of two safety margin there.  
(The take-away from the story as delivered to future structural engineers was 
*not* that there wasn't a large enough safety margin - the calculations were 
accurate and well within the margins used in building such structures.  The 
issue was that no one checked that the structure was actually built as 
designed.)

I'll leave it to others to decide whether, and how, these lessons apply to 
crypto design.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Passwords

2013-10-02 Thread Jerry Leichter
On Oct 1, 2013, at 5:10 PM, Jeffrey Schiller wrote:
 A friend of mine who used to build submarines once told me that the first 
 time the sub is submerged, the folks who built it are on board. :-)
Indeed.  A friend served on nuclear subs; I heard about that practice from him. 
 (The same practice is followed after any significant refit.)  It inspired my 
suggestion.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-02 Thread Jerry Leichter
On Oct 1, 2013, at 5:58 PM, Peter Fairbrother wrote:
 [and why doesn't AES-256 have 256-bit blocks???]
Because there's no security advantage, but a practical disadvantage.

When blocks are small enough, the birthday paradox may imply repeated blocks 
after too short a time to be comfortable.  Whether this matters to you actually 
depends on how you use the cipher.  If you're using CBC, for example, you don't 
want to ever see a repeated block used with a single key.  With 64-bit blocks 
(as in DES), you expect to see a repetition after about 2^32 blocks, or 2^35 
bytes, which in a modern network is something that might actually come up.

A 128-bit block won't see a collision until around 2^64 blocks, or 2^68 bytes, 
which is unlikely to be an issue any time in the foreseeable future.

Note that many other modes are immune to this particular issue.  For example, 
CTR mode with a 64-bit block won't repeat until you've used it for 2^64 blocks 
(though you would probably want to rekey earlier just to be safe).
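A quick sanity check of those collision figures (a Python sketch; it
approximates the 50% birthday threshold as 2^(n/2) blocks and ignores
constant factors):

```python
from math import log2

def birthday_threshold_blocks(block_bits: int) -> float:
    """Roughly how many random blocks until a repeated ciphertext block
    becomes likely (~50%): the birthday bound, about sqrt(2^n) = 2^(n/2)."""
    return 2.0 ** (block_bits / 2)

for block_bits in (64, 128):
    blocks = birthday_threshold_blocks(block_bits)
    total_bytes = blocks * (block_bits // 8)   # each block is n/8 bytes
    print(f"{block_bits}-bit blocks: ~2^{block_bits // 2} blocks, "
          f"~2^{int(log2(total_bytes))} bytes")
```

For DES-sized 64-bit blocks this gives about 2^32 blocks (2^35 bytes);
for 128-bit blocks, about 2^64 blocks (2^68 bytes).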

I know of no other vulnerabilities related to the block size, though 
they may be out there; I'd love to learn about them.

On the other hand, using different block sizes keeps you from easily 
substituting one cipher for another.  Interchanging AES-128 and AES-256 - or 
substituting in some entirely different cipher with the same block size - is 
straightforward.  (The changed key length can be painful, but since keys are 
fairly small anyway you can just reserve key space large enough for any cipher 
you might be interested in.)  Changing the block size affects much more code 
and may require changes to the protocol (e.g., you might need to reserve more 
bits to represent the length of a short final block).

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] TLS2

2013-10-02 Thread ianG

On 2/10/13 00:43 AM, James A. Donald wrote:

On 2013-10-01 14:36, Bill Stewart wrote:

It's the data representations that map them into binary strings that
are a
wretched hive of scum and villainy, particularly because you can't
depend on a
bit string being able to map back into any well-defined ASN.1 object
or even any limited size of ASN.1 object that won't smash your stack
or heap.
The industry's been bitten before by a widely available open source
library
that turned out to be vulnerable to maliciously crafted binary strings
that could be passed around as SNMP traps or other ASN.1-using messages.

Similarly, PGP's most serious security bugs were related to
variable-length binary representations that were trying to steal bits
to maximize data compression at the risk of ambiguity.
Scrounging a few bits here and there just isn't worth it.



This is an inherent problem, not with ASN.1, but with any data
representation that can represent arbitrary data.



Right.  I see the encoding choice as both integral to any proposal, and 
a very strong design decision.


I would fail any proposal that used some form of external library like 
ASN.1, XML, JSON, YAML, pb, Thrift, etc., that is clearly not suited for the 
purposes of security.  I would give a thumbs-up to any proposal that 
created its own tight custom definition.




The decoder should only be able to decode the data structure it expects,
that its caller knows how to interpret, and intends to interpret.
Anything else should fail immediately.  Thus our decoder should have
been compiled from a data description, rather than being a general
purpose decoder.



This is why I like not using a decoder.  My requirement is that I read 
exactly what I expect, check it for both syntax & semantics, and move 
on.  There should be no intervening lazy compilation steps to stop the 
coder seeing the entire picture.


Another problem with decoders is that you need a language.  So that 
makes two languages - the primary one and the layout.  Oops.  Have you 
noticed how these languages start off simple and get more and more 
complicated, as they try and do what the primary could already do?


The end result is no savings in coding, split sanity & semantics 
checking, added complexity, and less security.  For every element you 
need to read, you need a line of code either way you do it, so it may as 
well be in the primary language, and then you get the security and the 
full checking capability for free.




Thus sender and receiver should have to agree on the data structure for
any communication to take place, which almost automatically gives us a
highly compressed format.

Conversely, any highly compressed format will tend to require and assume
a known data structure.

The problem is that we do not want, and should not have, the capacity to
send a program an arbitrary data structure, for no one can write a
program that can respond appropriately to an arbitrary data structure.



Right.  To solve this, we would generally know what is to come, and we 
would signal that the exact expected thing is coming.


Following-data-identification is the one problem I've not seen an 
elegant solution to.  Tagging is something that lends itself to some 
form of hierarchical or centralised solution.  I use a centralised file 
with numbers and classes, but there are many possibilities.


If I was to do it, for TLS2, I'd have a single table containing the 
mapping of all things.  It would be like (off the top of my head):



1  compactInt
2  byteArray
3  bigInt
4  booleans

20 secret key packet
21 hash
22 private key packet
23 public key packet
24 hmac

40 Hello
41 Hiya
42 Go4It
43 DataPacket
44 Teardown


I don't like it, but I've never come across a better solution.
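As an illustration only (the one-byte tag plus 16-bit length wire layout
here is a hypothetical sketch, as are all the names; it is not any actual
TLS2 proposal), a reader in this style accepts exactly the tag its caller
expects and fails immediately on anything else:

```python
import struct

# Illustrative tag numbers from the table above.
TAG_COMPACT_INT = 1
TAG_BYTE_ARRAY = 2
TAG_HELLO = 40

class WireError(Exception):
    pass

def read_expected(buf: memoryview, offset: int, expected_tag: int):
    """Read one element: tag byte, 2-byte big-endian length, payload.
    Fail immediately unless it carries the tag the caller expects --
    there is deliberately no general-purpose decode path."""
    if offset + 3 > len(buf):
        raise WireError("truncated header")
    tag = buf[offset]
    (length,) = struct.unpack_from(">H", buf, offset + 1)
    if tag != expected_tag:
        raise WireError(f"expected tag {expected_tag}, got {tag}")
    end = offset + 3 + length
    if end > len(buf):
        raise WireError("truncated payload")
    return bytes(buf[offset + 3:end]), end

msg = bytes([TAG_BYTE_ARRAY]) + struct.pack(">H", 3) + b"abc"
payload, _ = read_expected(memoryview(msg), 0, TAG_BYTE_ARRAY)
print(payload)  # b'abc'
```

The point is that the caller states what it intends to interpret, and
anything else dies on the first byte.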




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why is emailing me my password?

2013-10-02 Thread Russ Nelson
Greg writes:
  This falls somewhere in the land of beyond-the-absurd.
  So, my password, iPoopInYourHat, is being sent to me in the clear by your 
  servers.

Repeat after me: crypto without a threat model is like cookies without
milk.

If you are proposing that something needs stronger encryption than
ROT-26, please explain the threat model that justifies your choice of
encryption and key distribution algorithms.

-- 
--my blog is at http://blog.russnelson.com
Crynwr supports open source software
521 Pleasant Valley Rd. | +1 315-600-8815
Potsdam, NY 13676-3213  | Sheepdog   
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why is emailing me my password?

2013-10-02 Thread Markus Wanner
On 10/01/2013 11:36 PM, R. Hirschfeld wrote:
 Your objections are understandable but aren't really an issue with
 mailman because if you don't enter a password then mailman will choose
 one for you (which I always let it do) and there's no need to remember
 it because if you ever need it (a rare occasion!) and don't happen to
 have a monthly password reminder to hand, clicking the link at the
 bottom of each list message will take you to a page where you can have
 it mailed to you.

Mailman choosing a random password for you is certainly better, yes. And
closer to the email based OTP solution. It's still a permanent password,
though. By definition, a single interception suffices for an attacker to
be able to (ab)use it until you modify it. As opposed to the mail based
OTP scheme. And the monthly reminder essentially makes an interception
even more likely.

Granted, the worst an attacker can do with an intercepted password
(permanent or OTP) is just a tad annoying - given it's not used elsewhere.

 The real danger is that those who don't read the instructions might
 enter a password that they use elsewhere and want to keep secure.

Agreed. It runs counter to good practice and common sense in password handling.

Regards

Markus Wanner
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why is emailing me my password?

2013-10-02 Thread Markus Wanner
On 10/02/2013 12:11 AM, Joshua Marpet wrote:
 Low security environment, minimal ability to inflict damage, clear
 instructions from the beginning. 

Agreed.

There certainly are bigger problems on earth. And I really don't mind if
you move on and take care of any of those, first. :-)

 If the system and processes are not to your liking, that's
 understandable.  Everyone is different.

Please read my arguments: I'm not opposed to it based on personal
preference. Quite the opposite, I actually like web front-ends better
than email commands. But in this case, I think a mail-based OTP solution
is better from a security perspective.

 There are other choices.  If you'd like to investigate them, determine
 an appropriate one, and advocate a move to it, that would be welcomed, I
 presume?

I did investigate. And I'm currently using smartlist. Whether or not you
or anybody else moves is entirely up to you or them.

If you use mailman, though, your users had better be aware that it
doesn't follow best practice regarding password handling.

And yes, smartlist certainly has its issues as well. If you know of any,
please let me know as well.

 No offense meant, in any way.  Please forgive me if offense is given.

No offense taken. And if it were, you're hereby forgiven. ;-)

Regards

Markus Wanner
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] TLS2

2013-10-02 Thread James A. Donald

On 2013-10-02 13:18, Tony Arcieri wrote:

LANGSEC calls this: full recognition before processing

http://www.cs.dartmouth.edu/~sergey/langsec/occupy/ 
http://www.cs.dartmouth.edu/%7Esergey/langsec/occupy/


I disagree slightly with langsec.

At compile time you want an extremely powerful language for describing 
data, that can describe any possible data structure.


At run time, you want the least possible power, such that your 
recognizer can only recognize the specified and expected data structure.


Thus BER and DER are bad for the reasons given by Langsec, indeed they 
illustrate the evils that langsec condemns, but these criticisms do not 
normally apply to PER, since for PER, the dangerously great power exists 
only at compile time, and you would have to work pretty hard to retain 
any substantial part of that dangerously great power at run time.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Why is emailing me my password?

2013-10-02 Thread Markus Wanner
On 10/02/2013 12:03 AM, Greg wrote:
 Running a mailing list is not hard work. There are only so many things
 one can fuck up. This is probably one of the biggest mistakes that can
 be made in running a mailing list, and on a list that's about software
 security. It's just ridiculous.

While I agree in principle, I don't quite like the tone here. I did
like your password, though. ;-)

And no: there certainly are bigger mistakes the admin of a mailing list
can make. Think: exposing the members list, spam, etc.

 A mailing list shouldn't have any passwords to begin with. There is no
 need for passwords, and it shouldn't be possible for anyone to
 unsubscribe anyone else.
 
 User: Unsubscribe [EMAIL] - Server
 Server: Are you sure? - [EMAIL]
 User@[EMAIL]: YES! - Server.
 
 No passwords, and no fake unsubscribes.

For that to be as secure as you make it sound, you still need a password
or token. Hopefully a one-time, randomly generated one, but it's still a
password. And it still crosses the wires unencrypted and can thus be
intercepted by a MITM.

The gain of that approach really is that there's no danger of a user
inadvertently revealing a valuable password.

The limited lifetime of the OTP may also make it a tad harder for an
attacker, but given the (absence of) value for an attacker, that's close
to irrelevant.
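A minimal sketch of such a confirm-by-mail flow (everything here - the
names, the in-memory storage, the one-hour expiry - is an illustrative
assumption, not how any particular list manager implements it):

```python
import hmac
import secrets
import time

# email -> (token, issued_at); a real list manager would persist this.
PENDING = {}
TTL_SECONDS = 3600  # assumed expiry window

def request_unsubscribe(email: str) -> str:
    """Generate a random one-time token; in practice it would be mailed
    to the subscriber's address and never shown anywhere else."""
    token = secrets.token_urlsafe(16)
    PENDING[email] = (token, time.time())
    return token

def confirm_unsubscribe(email: str, token: str) -> bool:
    """Honour the unsubscribe only if the mailed token comes back,
    unexpired; pop() makes it single-use."""
    entry = PENDING.pop(email, None)
    if entry is None:
        return False
    expected, issued = entry
    if time.time() - issued > TTL_SECONDS:
        return False
    return hmac.compare_digest(expected, token)

t = request_unsubscribe("user@example.com")
print(confirm_unsubscribe("user@example.com", t))  # True
print(confirm_unsubscribe("user@example.com", t))  # False (single use)
```

Note the constant-time comparison: even a throwaway token is cheap to
compare without a timing side channel.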

Regards

Markus Wanner
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA recommends against use of its own products.

2013-10-02 Thread John Lowry
BBN has created three ASN.1 code generators over time and even released a 
couple (ASN.1 to C, C++, and Java). I believe that the DER subset needed to 
support typical X.509 management is the easiest.  I can check on the status for release to 
open source if there is interest. It has been available as part of Certificate 
Management systems we've released to open source but obviously this is a very 
small COI indeed.

I can read hex dumps of ASN.1 and choose not to develop similar skills for XML 
and other types.   I'm getting too old for that kind of skill acquisition to be 
fun. But to forward-reference in this chain (with apologies), I too would 
prefer a standard that has Postel's principles as a touchstone. 

John Lowry





Sent from my iPhone

On Sep 30, 2013, at 0:28, James A. Donald jam...@echeque.com wrote:

 On 2013-09-29 23:13, Jerry Leichter wrote:
 BTW, the *idea* behind DER isn't inherently bad - but the way it ended up is 
 another story.  For a comparison, look at the encodings Knuth came up with 
 in the TeX world.  Both dvi and pk files are extremely compact binary 
 representations - but correct encoders and decoders for them are plentiful.
 
 
 DER is unintelligible and incomprehensible.  There is, however, an open source 
 compiler for ASN.1
 
 Does it not produce correct encoders and decoders for DER?  (I have never 
 used it)
 ___
 The cryptography mailing list
 cryptography@metzdowd.com
 http://www.metzdowd.com/mailman/listinfo/cryptography


smime.p7s
Description: S/MIME cryptographic signature
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] TLS2

2013-10-02 Thread ianG

On 1/10/13 23:13 PM, Peter Fairbrother wrote:
...

Sounds like you want CurveCP?

http://curvecp.org/




Yes, EXACTLY that.  Proposals like CurveCP.



I have said this first part before:

Dan Boneh was talking at this year's RSA cryptographers track about
putting some sort of quantum-computer-resistant PK into browsers - maybe
something like that should go into TLS2 as well?


I would see that as optional.  If a designer thinks it can be done, go 
for it.  Let's see what the marketplace decides.




We need to get the browser makers - Apple, Google, Microsoft, Mozilla -
and the webservers - Apache, Microsoft, nginx - together and get them to
agree we must all implement this before writing the RFC.



Believe me, that way is a disaster.

The first thing that happens is someone says, let's get together and 
we'll fix this.  Guys, we can do this!


The second thing that happens is they form a committee.  Then the 
companies insist that only their agenda be respected.


End of (good) story, start of rort.



Also, the banks and the CA's should have an input. But not a say.



I'm sorry, this is totally embarrassed by history.  The CAs have *all* 
the say, the vendors are told what to say by the CAs.  The banks have 
*none* of the say.  We can see this from the history of CABForum, which 
started out as I suggested above.


(The users were totally excluded from CABForum.  Then about 2 years 
back, after they had laid out the foundation and screwed the users 
totally, they invented some sort of faux figurehead user representation. 
 I never followed it after they announced their intent to do a facade.)




More rules:

IP-free, open source code,



Patent free or free licences provided, yes.


no libraries (*all* functions internal to each suite)


Fewest dependencies.


a compiler which gives repeatable binary hashes so you can verify binary
against source.


Note to Microsoft - open source does not always mean free. But in this
case it must be free.



Maximum of four crypto suites.



3 too many!


Each suite has fixed algorithms, protocols, key and group sizes etc.



I agree with that.  You'll find a lot of people don't agree with the key 
size being fixed, and people like NIST love yanking the chain by 
insisting on upping the numbers to some schedule.


But that resistance is somewhat of an RSA hangover; if the one 
cryptosuite is based on EC then there is more chance of it being fixed 
to one size.




Give them girls' names, not silly and incomplete crypto names - This
connection is protected by Alice.



:)


Ability to add new suites as secure browser upgrade from browser
supplier. ?New suites must be signed by working group?. Signed new
suites must then be available immediately on all platforms, both browser
and webserver.



And that opens Pandora's box.  It requires a WG.  I have a vanity need. 
Trouble begins...




Separate authentication and sessionkeysetup keys mandatory.



I like it, but DJB raises a good point:  if EC is fast enough, there may 
be scope to eliminate some of the phases.



Maybe use existing X.509? but always for authentication only, never
sessionkeysetup.



I see this as difficult.  A lot of the problems in the last lot happened 
because the institutions imposed x.509 over everything.  I see the same 
problem with the anti-solution which is passwords.


How the past is rectified and future auth needs are handled will be part 
of what makes a winning solution the winner.




No client authentication. None. Zero.



That won't get very far.  We need client auth for just about everything.

The business about privacy is totally dead;  sophisticated websites are 
slopping up the id info regardless of the auth.  Privacy isn't a good 
reason to drop client-side auth.


(Which isn't to say privacy isn't a requirement.)



That's too hard for an individual to manage - remembering passwords or
whatever, yes, global authentication, no. That does not belong in TLS.

I specifically include this because the banks want it, now, in order to
shift liability to their customers.


Well, they want a complete solution.  Not the crapola they have to deal 
with now, where they have to figure out where CIA stops and where their 
problems start.



And as to passwords being near end-of-life? Rubbish. Keep the password
database secure, give the user a username and only three password
attempts, and all your GPUs and ASIC farms are worth nothing.



So, it seems that there is no consensus on the nature of client auth. 
Therefore I'd suggest we throw the whole question open:  How much auth 
and which auth will be a key telling point.




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-10-02 Thread ianG

Hi Peter,

On 30/09/13 23:31 PM, Peter Fairbrother wrote:

On 26/09/13 07:52, ianG wrote:

On 26/09/13 02:24 AM, Peter Fairbrother wrote:

On 25/09/13 17:17, ianG wrote:

On 24/09/13 19:23 PM, Kelly John Rose wrote:


I have always approached that no encryption is better than bad
encryption, otherwise the end user will feel more secure than they
should and is more likely to share information or data they should not
be on that line.



The trap of a false sense of security is far outweighed by the benefit
of a good enough security delivered to more people.


Given that mostly security works (or it should), what's really important
is where that security fails - and good enough security can drive out
excellent security.



Indeed it can.  So how do we differentiate?  Here are two oft-forgotten 
problems.


Firstly, when systems fail, typically it is the system around the crypto 
that fails, not the crypto itself.  This tells us that (a) the job of 
the crypto is to help the rest of the system to not fail, and (b) near 
enough is often good enough, because the metric of importance is to push 
all likely attacks elsewhere (into the rest of the system).


An alternative treatment is Adi Shamir's 3 laws of security:

http://financialcryptography.com/mt/archives/000147.html

Secondly, when talking about security options, we have to show where the 
security fails.  With history, with evidence -- so we can inform our 
speculations with facts.  If we don't do that, then our speculations 
become received wisdom, and we end up fielding systems that not only are 
making things worse, but are also blocking superior systems from emerging.




We can easily have excellent security in TLS (mk 2?) - the crypto part
of TLS can be unbreakable, code to follow (hah!) - but 1024-bit DHE
isn't, say, unbreakable for 10 years, far less for a lifetime.



OK, so TLS.  Let's see the failures in TLS?  SSL was running export 
grade for lots and lots of years, and those numbers were chosen to be 
crackable.  Let's see a list of damages, breaches, losses?


Guess what?  Practically none!  There is no recorded history of breaches 
in TLS crypto (and I've been asking for a decade, others longer).


So, either there are NO FAILURES from export grade or other weaker 
systems, *or* everyone is covering them up.  Because of some logic (like 
how much traffic and use), I'm going to plump for NO FAILURES as a 
reasonable best guess, and hope that someone can prove me wrong.


Therefore, I conclude that perfect security is a crock, and there is plenty 
of slack to open up and ease up.  If we can find a valid reason in the 
whole system (beyond TLS) to open up or ease up, then we should do it.




We are only talking about security against an NSA-level opponent here.
Is that significant?



It is a significant question.  Who are we protecting?  If we are talking 
about online banking, and credit cards, and the like, we are *not* 
protecting against the NSA.


(Coz they already breached all the banks, ages ago, and they get it all 
in real time.)


On the other hand, if we are talking about CAs or privacy system 
operators or jihadist websites, then we are concerned about NSA-level 
opponents.


Either way, we need to make a decision.  Otherwise all the other 
pronouncements are futile.




Eg, Tor isn't robust against NSA-level opponents. Is OTR?



All good questions.  What you have to do is decide your threat model, 
and protect against that.  And not flip across to some hypothetical 
received wisdom like MITM is the devil without a clear knowledge about 
why you care about that particular devil.




We're talking multiple orders of magnitude here.  The math that counts
is:

Security = Users * Protection.


No. No. No. Please, no? No. Nonononononono.

It's the sum over i of P_i * I_i, where P_i is the protection provided to
information i, and I_i is the importance of keeping information i
protected.



I'm sorry, I don't deal in omniscience.  Typically we as suppliers of
some security product have only the faintest idea what our users are up
to.  (Some consider this a good thing; it's a privacy quirk.)



No, and you don't know how important your opponent thinks the
information is either, and therefore what resources he might be willing
or able to spend to get access to it.



Indeed, so many unknowables.  Which is why a risk management approach is 
to decide what you are protecting against and more importantly what you 
are not protecting against.


That results in sharing the responsibility with another layer, another 
person.  E.g., if you're not in the sharing business, you're not in the 
security business.




- but we can make some crypto which
(we think) is unbreakable.



In that lies the trap.  Because we can make a block cipher that is 
unbreakable, we *think* we can make a system that is unbreakable.  No 
such thing applies.  Because we think we can make a system that is 
unbreakable, we talk like we can protect the user unbreakably.  A joke.

Re: [Cryptography] encoding formats should not be committee'ized

2013-10-02 Thread Anne Lynn Wheeler

On 09/30/13 04:41, ianG wrote:

Experience suggests that asking a standards committee to do the encoding format 
is a disaster.

I just looked at my code, which does something we call Wire, and it's 700 loc.  
Testing code is about a kloc I suppose.  Writing reference implementations is a 
piece of cake.

Why can't we just designate some big player to do it, and follow suit? Why 
argue in committee?



early 90s annual ACM SIGMODS (DBMS) conference in San Jose ... general meeting 
in (full) ballroom ... somebody in the audience asks panel on the stage what is 
all this x.5xx stuff about ... and one of the panelists replies that it is a 
bunch of networking engineers trying to re-invent 1960s DBMS technology.

CA industry is pitching $20B/annum business case on wallstreet ... where the 
financial industry pays CAs $100/annum for every account for a 
relying-party-only digital certificate ... where the financial industry 
providing all the information that goes into the certificate (CA industry just 
reformats all the information and digitally signs it). In one case of 
institution with 14M accounts, the board asks what is this $1.4B/annum thing 
about?

I repeatedly point out that it is redundant and superfluous since the 
institution already has all the information. Purpose of the certificate is to 
append to every financial transaction. I also point out that the digital 
certificate payload is enormous bloat, 100 times larger than the transaction 
size it's attached to (besides being redundant and superfluous).

CA industry then sponsors x9.63 work in X9 financial standards industry for 
compressed certificate format ... possibly getting the payload bloat down to 
10 times (instead of hundred times). Part of the compressed certificate work was to 
eliminate fields that the relying party already had. Since I had already shown that the 
relying party (institution) already had all fields, it was possible to compress every 
certificate to zero bytes ... so rather than doing digitally signed transactions w/o 
certificates ... it was possible to do digitally signed transactions with mandated 
appended zero-byte certificates.

Trivia: last few years before he passed, Postel would let me do part of STD1. 
There was a joke that while IETF required at least two interoperable 
implementations before standards progression, ISO didn't even require that a 
standard be implementable.

--
virtualization experience starting Jan1968, online at home since Mar1970
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-02 Thread John Kelsey
On Oct 1, 2013, at 5:58 PM, Peter Fairbrother zenadsl6...@zen.co.uk wrote:

 AES, the latest-and-greatest block cipher, comes in two main forms - AES-128 
 and AES-256.
 
 AES-256 is supposed to have a brute force work factor of 2^256  - but we find 
 that in fact it actually has a very similar work factor to that of AES-128, 
 due to bad subkey scheduling.
 
 Thing is, that bad subkey scheduling was introduced by NIST ... after 
 Rijndael, which won the open block cipher competition with what seems to be 
 all-the-way good scheduling, was transformed into AES by NIST.

What on Earth are you talking about?  AES' key schedule wasn't designed by 
NIST.  The only change NIST made to Rijndael was not including some of the 
alternative block sizes.  You can go look up the old Rijndael specs online if 
you want to verify this.

 -- Peter Fairbrother

--John

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] encoding formats should not be committee'ized

2013-10-02 Thread Phillip Hallam-Baker
Replying to James and John.

Yes, the early ARPANET protocols are much better than many that are in
binary formats. But the point where data encoding becomes an issue is where
you have nested structures. SMTP does not have nested structures or need
them. A lot of application protocols do.

I have seen a lot of alternatives to X.509 that don't use ASN.1 and are
better for it. But they all use nesting. And to get back on topic, the main
motive for adding binary to JSON is to support signed blobs and encrypted
blobs. Text encodings are easy to read but very difficult to specify
boundaries in without ambiguity.
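A quick toy illustration of that boundary/ambiguity problem (hypothetical values, not from any particular protocol): two byte strings that parse to the identical JSON value nonetheless hash, and therefore would sign, differently. This is exactly why signing text encodings needs either canonicalization or an unambiguous binary wrapping for the signed blob.

```python
import json, hashlib

a = b'{"amount": 100, "to": "alice"}'
b = b'{"to":"alice","amount":100}'

# Same parsed value...
assert json.loads(a) == json.loads(b)
# ...but different bytes, so a signature over the raw text differs.
assert hashlib.sha256(a).digest() != hashlib.sha256(b).digest()
```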


Responding to James,

No, the reason for barring multiple inheritance is not that it is too
clever; it is that studies have shown that code using multiple inheritance
is much harder for other people to understand than code using single
inheritance.

The original reason multiple inheritance was added to C++ was to support
collections. So if you had a class A and a subclass B and wanted to have a
list of B, then the way you would do it in the early versions of C++ was to
inherit from the 'list' class.

I think that approach is completely stupid, broken and wrong. It should be
possible for people to make lists or sets or bags of any class without the
author of the class providing support. Which is why C# has functional
types, List<T>.

Not incidentally, C also has functional types (or at least the ability to
implement same easily). Which is why as a post doc, having studied program
language design (Tony Hoare was my college tutor), having written a thesis
on program language design, I came to the conclusion that C was a better
language base than C++ back in the early 1990s.

I can read C++ but it takes me far longer to work out how to do something
in C++ than to actually do it in C. So I can't see where C++ is helping. It
is reducing, not improving my productivity. I know that some features of
the language have been extended/fixed since but it is far too late.

At this point it is clear that C++ is a dead end and the future of
programming languages will be based on Java, C# (and to a lesser extent
Objective C) approaches. Direct multiple inheritance will go and be
replaced by interfaces. Though with functional types, use of interfaces is
very rarely necessary.


So no, I don't equate prohibiting multiple direct inheritance with 'too
clever code'. There are good reasons to avoid multiple inheritance, both
for code maintenance and to enable the code base to be ported to more
modern languages in the future.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] RSA equivalent key length/strength

2013-10-02 Thread Paul Crowley
On 30 September 2013 23:35, John Kelsey crypto@gmail.com wrote:

 If there is a weak curve class of greater than about 2^{80} that NSA knew
 about 15 years ago and were sure nobody were ever going to find that weak
 curve class and exploit it to break classified communications protected by
 it, then they could have generated 2^{80} or so seeds to hit that weak
 curve class.


If the NSA's attack involves generating some sort of collision between a
curve and something else over a 160-bit space, they wouldn't have to be
worried that someone else would find and attack that weak curve class
with less than 2^160 work.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] are ECDSA curves provably not cooked? (Re: RSA equivalent key length/strength)

2013-10-02 Thread John Kelsey

On Oct 1, 2013, at 12:51 PM, Adam Back a...@cypherspace.org wrote:

[Discussing how NSA might have generated weak curves via trying many choices 
till they hit a weak-curve class that only they knew how to solve.]
...
 But the more interesting question I was referring to is a trapdoor weakness
 with a weak proof of fairness (ie a fairness that looks like the one in FIPS
 186-3/ECDSA where we don't know how much grinding, if any, went into the magic
 seed values).  For illustration, though not applicable to ECDSA and probably
 outright defective: eg can they start with some large number of candidate G
 values where G=xH (ie knowing the EC discrete log of some value H they pass
 off as a random fairly chosen point) and then do a birthday collision
 between the selection of G values and different seed values to a PRNG to find
 a G value that they have both a discrete log of wrt H and a PRNG seed. 
 Bearing in mind they may be willing to throw custom ASIC or FPGA
 supercomputer hardware and a $1bil budget at the problem as a one-off cost.

This general idea is a nice one.  It's basically a way of using Merkle's 
puzzles to build a private key into a cryptosystem.  But I think in general, 
you are going to have to do work equal to the security level of the thing 
you're trying to backdoor.  You have to break it once at its full security 
level, and then you get to amortize that break forever.  (Isn't there something 
like this you can do for discrete logs in general, though?)  

Consider Dual EC DRBG.  You need a P, Q such that you know x that solves xP = 
Q, over (say) P-224.  So, you arbitrarily choose G = a generator for the group, 
and a scalar z, and then compute

 for j = 1 to 2^{112}:
 T[j] = (jz) G

Now, you have 2^{112} values in a group of 2^{224} values, right?  So with 
about another 2^{113} work, you can hit one of those with two arbitrary seeds, 
and you'll know the relationship between them.  
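To make the shape of that search concrete, here is a toy sketch in Python. A small multiplicative group mod a prime stands in for P-224, and the sizes are scaled down from 2^{112} to a few hundred; every parameter here is illustrative, not from any standard.

```python
import hashlib

p = 10007                 # small prime; the subgroup of G stands in for P-224
G = 5
z = 1234                  # trapdoor scalar known only to the designer

# Step 1: tabulate T[j] = G^(j*z), the scaled-down "2^{112} values"
N = 200
table = {pow(G, j * z, p): j for j in range(1, N + 1)}

# Step 2: grind PRNG seeds until a seed-derived point lands in the table
def point_from_seed(seed: int) -> int:
    h = hashlib.sha256(seed.to_bytes(8, "big")).digest()
    return pow(G, int.from_bytes(h, "big") % (p - 1), p)

seed = 0
Q = point_from_seed(seed)
while Q not in table:
    seed += 1
    Q = point_from_seed(seed)

j = table[Q]
# The published point Q looks innocently seed-derived, yet the designer
# knows Q = G^(j*z), i.e. its discrete log with respect to G.
assert pow(G, j * z, p) == Q
```

The table costs N group operations and each of the two "sides" of the birthday search costs work proportional to its size, which is the 2^{113}-total-work observation above.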

But this takes a total of about 2^{113} work, so it's above the claimed security 
level of P-224.  I suspect this would be more useful for something at the 80 
bit security level--a really resourceful attacker could probably do a 2^{80} 
search.  

 Adam

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why is emailing me my password?

2013-10-02 Thread Greg
 I'm interested in cases where Mailman passwords have been abused.

Show me one instance where a nuclear reactor was brought down by an 
earthquake! Just one! Then I'll consider spending the $$ on it!

--
Please do not email me anything that you are not comfortable also sharing with 
the NSA.

On Oct 1, 2013, at 6:38 PM, Bill Frantz fra...@pwpconsult.com wrote:

 On 10/1/13 at 1:43 PM, mar...@bluegap.ch (Markus Wanner) wrote:
 
 Let's compare apples to apples: even if you manage to actually read the
 instructions, you actually have to do so, have to come up with a
 throw-away-password, and remember it. For no additional safety compared
 to one-time tokens.
 
 Let Mailman assign you a password. Then you don't have to worry about someone 
 collecting all your mailing list passwords and reverse engineering your 
 password generation algorithm. You'll find out what the password is in a 
 month. Save that email so you can make changes. Get on with life.
 
 Let's not increase the level of user work in cases where there isn't, in fact, 
 a security problem.
 
 I'm interested in cases where Mailman passwords have been abused.
 
 Cheers - Bill
 
 ---
 Bill Frantz| If the site is supported by  | Periwinkle
 (408)356-8506  | ads, you are the product.| 16345 Englewood Ave
 www.pwpconsult.com |  | Los Gatos, CA 95032
 
 ___
 The cryptography mailing list
 cryptography@metzdowd.com
 http://www.metzdowd.com/mailman/listinfo/cryptography



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Why is emailing me my password?

2013-10-02 Thread Greg
 While I agree in principle, I don't quite like the tone here.

I agree, I apologize for the excessively negative tone. I think RL (and 
unrelated) agitation affected my writing and word choice. I've taken steps to 
prevent that from happening again (via magic of self-censoring software).

 But I liked your password, though. ;-)

Thanks! ^_^

 For that to be as secure as you make it sound, you still need a password
 or token. Hopefully a one-time, randomly generated one, but it's still a
 password. And it still crosses the wires unencrypted and can thus be
 intercepted by a MITM.
 
 The gain of that approach really is that there's no danger of a user
 inadvertently revealing a valuable password.
 
 The limited life time of the OTP may also make it a tad harder for an
 attacker, but given the (absence of) value for an attacker, that's close
 to irrelevant.


I don't see why a one-time-password is necessary. Just check the headers to 
verify that the send-path was the same as it was on the original request.

Somebody used the phrase "repeat after me" previously. I'll give it a shot too:

Repeat after me: Sending *any* user password (no matter how unimportant /you/ 
think it is) in the clear is extremely poor practice and should never be done.

And, if a password is completely unnecessary, it should not be used.

On a side-note (Re: Russ's email and others), I can't believe people are 
talking about encryption and key distribution algorithms in reference to this 
topic.

- Greg

--
Please do not email me anything that you are not comfortable also sharing with 
the NSA.

On Oct 2, 2013, at 3:58 AM, Markus Wanner mar...@bluegap.ch wrote:

 On 10/02/2013 12:03 AM, Greg wrote:
 Running a mailing list is not hard work. There are only so many things
 one can fuck up. This is probably one of the biggest mistakes that can
 be made in running a mailing list, and on a list that's about software
 security. It's just ridiculous.
 
 While I agree in principle, I don't quite like the tone here. But I
 liked your password, though. ;-)
 
 And no: there certainly are bigger mistakes an admin of a mailing list
 can do. Think: members list, spam, etc..
 
 A mailing list shouldn't have any passwords to begin with. There is no
 need for passwords, and it shouldn't be possible for anyone to
 unsubscribe anyone else.
 
 User: Unsubscribe [EMAIL] -> Server
 Server: Are you sure? -> [EMAIL]
 User@[EMAIL]: YES! -> Server.
 
 No passwords, and no fake unsubscribes.
 
 For that to be as secure as you make it sound, you still need a password
 or token. Hopefully a one-time, randomly generated one, but it's still a
 password. And it still crosses the wires unencrypted and can thus be
 intercepted by a MITM.
 
 The gain of that approach really is that there's no danger of a user
 inadvertently revealing a valuable password.
 
 The limited life time of the OTP may also make it a tad harder for an
 attacker, but given the (absence of) value for an attacker, that's close
 to irrelevant.
 
 Regards
 
 Markus Wanner



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Why is emailing me my password?

2013-10-02 Thread Markus Wanner
On 10/02/2013 04:32 PM, Greg wrote:
 I agree, I apologize for the excessively negative tone. I think RL (and
 unrelated) agitation affected my writing and word choice. I've taken
 steps to prevent that from happening again (via magic of self-censoring
 software).

Cool. :-)

 I don't see why a one-time-password is necessary. Just check the headers
 to verify that the send-path was the same as it was on the original request.

Hm.. that's a nice idea, but I don't think it can work reliably. What if
the send path changes in between? AFAIK there are legitimate reasons for
that, like load balancers or weird greylisting setups.

Plus: why should that part of the header be more trustworthy than any
other part? Granted, at least the last IP is added by a trusted server.
But doesn't that boil down to IP-based authentication?

I'm not saying it's impossible, I just don't think it's as good as a
one-time token. Do you know of a mailing list software implementing such
a thing?

Regards

Markus Wanner



signature.asc
Description: OpenPGP digital signature
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-02 Thread John Kelsey
Has anyone tried to systematically look at what has led to previous crypto 
failures?  That would inform us about where we need to be adding armor plate.  
My impression (this may be the availability heuristic at work) is that:

a.  Most attacks come from protocol or mode failures, not so much crypto 
primitive failures.  That is, there's a reaction attack on the way CBC 
encryption and message padding play with your application, and it doesn't 
matter whether you're using AES or FEAL-8 for your block cipher.  

b.  Overemphasis on performance (because it's measurable and security usually 
isn't) plays really badly with having stuff be impossible to get out of the 
field when it's in use.  Think of RC4 and DES and MD5 as examples.  

c.  The ways I can see to avoid problems with crypto primitives are:

(1)  Overdesign against cryptanalysis (have lots of rounds)

(2)  Overdesign in security parameters (support only high security levels, use 
bigger than required RSA keys, etc.) 

(3)  Don't accept anything without a proof reducing the security of the whole 
thing down to something overdesigned in the sense of (1) or (2).

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why is emailing me my password?

2013-10-02 Thread Greg
 Hm.. that's a nice idea, but I don't think it can work reliably. What if
 the send path changes in between? AFAIK there are legitimate reasons for
 that, like load balancers or weird greylisting setups.

You're right, I think I misunderstood you when you talked about a one time 
password. I thought you were referring to something users would have to come 
up with.

If by one time password you mean a server-generated token, then yes, that 
would be far better.

That's standard practice for most mailing lists. The token is usually a unique 
challenge link sent back to the user, and they can either click on it or reply 
to the message while quoting the link in the body. Sometimes it's also a unique 
number in the subject line.
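For the record, such a server-generated challenge token is easy to construct statelessly; a minimal sketch (function and parameter names are hypothetical, not Mailman's actual implementation) using an HMAC over the address and a random nonce:

```python
import hmac, hashlib, secrets

SERVER_KEY = secrets.token_bytes(32)   # long-lived server secret

def issue_token(email: str, action: str = "unsubscribe") -> str:
    # Token the server mails out as a unique challenge link / subject tag.
    nonce = secrets.token_hex(8)
    mac = hmac.new(SERVER_KEY, f"{action}:{email}:{nonce}".encode(),
                   hashlib.sha256).hexdigest()[:16]
    return f"{nonce}.{mac}"

def verify_token(email: str, token: str, action: str = "unsubscribe") -> bool:
    try:
        nonce, mac = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SERVER_KEY, f"{action}:{email}:{nonce}".encode(),
                        hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(mac, expected)

t = issue_token("user@example.com")
assert verify_token("user@example.com", t)
assert not verify_token("other@example.com", t)
```

A real deployment would additionally record or expire issued nonces server-side, so a token captured in transit can't be replayed indefinitely.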

- Greg

--
Please do not email me anything that you are not comfortable also sharing with 
the NSA.

On Oct 2, 2013, at 10:40 AM, Markus Wanner mar...@bluegap.ch wrote:

 On 10/02/2013 04:32 PM, Greg wrote:
 I agree, I apologize for the excessively negative tone. I think RL (and
 unrelated) agitation affected my writing and word choice. I've taken
 steps to prevent that from happening again (via magic of self-censoring
 software).
 
 Cool. :-)
 
 I don't see why a one-time-password is necessary. Just check the headers
 to verify that the send-path was the same as it was on the original request.
 
 Hm.. that's a nice idea, but I don't think it can work reliably. What if
 the send path changes in between? AFAIK there are legitimate reasons for
 that, like load balancers or weird greylisting setups.
 
 Plus: why should that part of the header be more trustworthy than any
 other part? Granted, at least the last IP is added by a trusted server.
 But doesn't that boil down to IP-based authentication?
 
 I'm not saying it's impossible, I just don't think it's as good as a
 one-time token. Do you know of a mailing list software implementing such
 a thing?
 
 Regards
 
 Markus Wanner
 



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] RSA equivalent key length/strength

2013-10-02 Thread Manuel Pégourié-Gonnard
Hi,

On 01/10/2013 19:39, Peter Fairbrother wrote:
 Also, the method by which the generators (and thus the actual groups in 
 use, not the curves) were chosen is unclear.
 
If we're talking about the NIST curves over prime fields, they all have cofactor
1, so the actual group used is E(F_p), the (cyclic) group of all rational points
over F_p: there is no choice to be made here. Now, for the curves over binary
fields, the cofactor is 2 or 4, which again means the curve only has one
subgroup of large prime order. No room for choice either.

On another front, the choice of the generator in a particular group is of no
importance to the security of the discrete log problem. For example, assume you
know how to efficiently compute discrete logs with respect to some generator
G_1, and let me explain how you can use that to efficiently compute discrete
logs with respect to another base G_2.

First, you compute n_21 such that G_2 = n_21 G_1, that is, the discrete log
of G_2 in base G_1. Then you compute n_12, the modular inverse of n_21 modulo r,
the order of the group (which is known), so that G_1 = n_12 G_2. Now given a
random point P of which you want the log with base G_2, you first compute l_1,
its log in base G_1, that is P = l_1 G_1 = l_1 n_12 G_2, and tadam, l_1 n_12
(modulo r if you want) is the desired log in base G_2.

(The last two paragraphs actually hold for any cyclic group, though I wrote them
additively with elliptic curves in mind.)

So, really the only relevant unexplained parameters are the seeds to the
pseudo-random algorithm.

Manuel.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-02 Thread Jonathan Thornburg
<maybe offtopic>
On Tue, 1 Oct 2013, someone who (if I've unwrapped the nested quoting
correctly) might have been Jerry Leichter wrote:
 There are three levels of construction.  If you're putting together
 a small garden shed, it looks right is generally enough - at least
 if it's someone with sufficient experience.  If you're talking
 non-load-bearing walls, or even some that bear fairly small loads,
 you follow standards - use 2x4's, space them 36" apart, [[...]]

Standard construction in US & Canada uses 2x4's on 16" (repeat: 16")
centers.  Perhaps there's a lesson here:  leave carpentry to people
who are experts at carpentry.
</maybe offtopic>
And leave crypto to people who are experts at crypto.

-- 
-- Jonathan Thornburg [remove -animal to reply] 
jth...@astro.indiana-zebra.edu
   Dept of Astronomy  IUCSS, Indiana University, Bloomington, Indiana, USA
   There was of course no way of knowing whether you were being watched
at any given moment.  How often, or on what system, the Thought Police
plugged in on any individual wire was guesswork.  It was even conceivable
that they watched everybody all the time.  -- George Orwell, 1984
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-10-02 Thread John Kelsey
On Oct 2, 2013, at 9:54 AM, Paul Crowley p...@ciphergoth.org wrote:

 On 30 September 2013 23:35, John Kelsey crypto@gmail.com wrote:
 If there is a weak curve class of greater than about 2^{80} that NSA knew 
 about 15 years ago and were sure nobody were ever going to find that weak 
 curve class and exploit it to break classified communications protected by 
 it, then they could have generated 2^{80} or so seeds to hit that weak curve 
 class.
 
 If the NSA's attack involves generating some sort of collision between a 
 curve and something else over a 160-bit space, they wouldn't have to be 
 worried that someone else would find and attack that weak curve class with 
 less than 2^160 work.

I don't know enough about elliptic curves to have an intelligent opinion on 
whether this is possible.  Has anyone worked out a way to do this?  

The big question is how much work would have had to be done.  If you're talking 
about a birthday collision on the curve parameters, is that a collision on a 
160 bit value, or on a 224 or 256 or 384 or 512 bit value?  I can believe NSA 
doing a 2^{80} search 15 years ago, but I think it would have had to be a top 
priority.  There is no way they were doing 2^{112} searches 15 years ago, as 
far as I can see.

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] RSA equivalent key length/strength

2013-10-02 Thread Kristian Gjøsteen
On 2 Oct 2013, at 16:59, John Kelsey crypto@gmail.com wrote:

 On Oct 2, 2013, at 9:54 AM, Paul Crowley p...@ciphergoth.org wrote:
 
 On 30 September 2013 23:35, John Kelsey crypto@gmail.com wrote:
 If there is a weak curve class of greater than about 2^{80} that NSA knew 
 about 15 years ago and were sure nobody were ever going to find that weak 
 curve class and exploit it to break classified communications protected by 
 it, then they could have generated 2^{80} or so seeds to hit that weak curve 
 class.
 
 If the NSA's attack involves generating some sort of collision between a 
 curve and something else over a 160-bit space, they wouldn't have to be 
 worried that someone else would find and attack that weak curve class with 
 less than 2^160 work.
 
 I don't know enough about elliptic curves to have an intelligent opinion on 
 whether this is possible.  Has anyone worked out a way to do this?  

Edlyn Teske [1] describes a way in which you select one curve and then find a 
second curve together with an isogeny (essentially a group homomorphism) to the 
first curve. The first curve is susceptible to Weil descent attacks, making it 
feasible to compute d.log.s on the curve. The other curve is not susceptible to 
Weil descent attacks.

You publish the latter curve, and keep the first curve and a description of the 
isogeny suitable for computation to yourself. When you want to compute a d.log. 
on the public curve, you use the isogeny to move it to your secret curve and 
then use Weil descent to find the d.log.

I suppose you could generate lots of such pairs of curves, and at the same time 
generate lots of curves from seeds. After a large number of generations, you 
find a collision. You now have your trapdoor curve. However, the amount of work 
should be about the square root of the field size.

Do we have something here?

(a) Weil descent (mostly) works over curves over composite-degree extension 
fields.

(b) Cryptographers worried about curves over (composite-degree) extension 
fields long before Weil descent attacks were discovered. (Some people like them 
because they speed things up slightly.)

(c) NIST's extension fields all have prime degree, which isn't optimal for Weil 
descent.

(d) NIST's fields are all too big, if we assume that NSA couldn't do 2^112 
computations in 1999.

(e) This doesn't work for prime fields.

It seems that if there is a trapdoor built into NIST's (extension field) 
curves, NSA in 1999 was way ahead of where the open community is today in 
theory, and had computing power that we generally don't think they have today.

We have evidence of NSA doing bad things. This seems unlikely to be it.

[1] Edlyn Teske: An Elliptic Curve Trapdoor System. J. Cryptology 19(1): 
115-133 (2006)

-- 
Kristian Gjøsteen



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] [nicol...@cmu.edu: [fc-announce] Financial Cryptography 2014 Call for Papers]

2013-10-02 Thread R. Hirschfeld
--- Start of forwarded message ---
Date: Wed, 2 Oct 2013 10:55:03 -0400
From: Nicolas Christin nicol...@cmu.edu
Subject: [fc-announce] Financial Cryptography 2014 Call for Papers

Call for Papers
FC 2014 March 3-7, 2014
Accra Beach Hotel & Spa, Barbados

Financial Cryptography and Data Security is a major international
forum for research, advanced development, education, exploration,
and debate regarding information assurance, with a specific focus on
financial, economic and commercial transaction security. Original works
focusing on securing commercial transactions and systems are solicited;
fundamental as well as applied real-world deployments on all aspects
surrounding commerce security are of interest. Submissions need not be
exclusively concerned with cryptography. Systems security, economic or
financial modeling, and, more generally, inter-disciplinary efforts are
particularly encouraged.

Topics of interests include, but are not limited to:

Anonymity and Privacy
Applications of Game Theory to Security
Auctions and Audits
Authentication and Identification
Behavioral Aspects of Security and Privacy
Biometrics
Certification and Authorization
Cloud Computing Security
Commercial Cryptographic Applications
Contactless Payment and Ticketing Systems
Data Outsourcing Security
Digital Rights Management
Digital Cash and Payment Systems
Economics of Security and Privacy
Electronic Crime and Underground-Market Economics
Electronic Commerce Security
Fraud Detection
Identity Theft
Legal and Regulatory Issues
Microfinance and Micropayments  
Mobile Devices and Applications Security and Privacy 
Phishing and Social Engineering
Reputation Systems
Risk Assessment and Management
Secure Banking and Financial Web Services
Smartcards, Secure Tokens and Secure Hardware
Smart Grid Security and Privacy
Social Networks Security and Privacy
Trust Management
Usability and Security
Virtual Goods and Virtual Economies
Voting Systems
Web Security

Important Dates

Workshop Proposal SubmissionJuly 31, 2013
Workshop Proposal Notification  August 20, 2013
Mandatory Abstract Submission   October 25, 2013, 23:59 UTC (firm)
Paper SubmissionNovember 2, 2013, 23:59 UTC (firm)
Paper Notification  December 22, 2013
Final PapersJanuary 31, 2014
Poster and Panel Submission January 8, 2014
Poster and Panel Notification   January 15, 2014

Conference  March 3-7, 2014

Submission

Submissions are sought in the following categories:
(i) regular papers (15 pg LNCS format excluding references and
appendices and maximum of 18 pg, i.e., 3 pg of references/appendices),
(ii) short papers (8 pg LNCS format in total),
(iii) panels and workshop proposals (2pg), and
(iv) posters (1 pg).

Committee members are not required to read the appendices, so the
full papers should be intelligible without them. The regular and
short paper submissions must be anonymous, with no author names,
affiliations, acknowledgements, or obvious references. In contrast,
panel, workshop proposal, and poster submissions must include author
names and affiliations.

Papers must be formatted in standard LNCS format and submitted as PDF
files. Submissions in other formats will be rejected. All papers must be
submitted electronically according to the instructions and forms found
here and at the submission site. For each accepted paper the conference
requires at least one registration at the general or academic rate.

Authors may only submit work that does not substantially overlap with
work that is currently submitted or has been accepted for publication
to a conference/workshop with proceedings or a journal. We consider
double submission serious research fraud and will treat it as such.
In case of doubt contact the program chairs for any clarifications at
fc14ch...@ifca.ai.

IMPORTANT THIS YEAR: Abstracts must be registered by October 25 for both
short and regular research papers. Papers whose abstract has not been
submitted in time will not be considered. Registering abstracts that are
currently under review at other venues is allowed, provided that the
paper is either no longer under review at another venue or withdrawn
from consideration before the submission deadline (November 2).

Regular Research Papers

Research papers should describe novel, previously unpublished scientific
contributions to the field, and they will be subject to rigorous
peer review. Accepted submissions will be included in the conference
proceedings to be published in the Springer-Verlag Lecture Notes
in Computer Science (LNCS) series. Submissions are limited to 15
pages excluding references and maximum of 18 pages (i.e., 3 pages of
references and appendices). Committee members are not required to read
the appendices, so the full papers should be intelligible without them.
Regular papers must be anonymous with no author names, affiliations,
acknowledgements, or obvious references.

Short Papers

Short papers are also subject 

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-02 Thread Arnold Reinhold
On 1 Oct 2013 23:48 Jerry Leichter wrote:

 The larger the construction project, the tighter the limits on this stuff.  I 
 used to work with a former structural engineer, and he repeated some of the 
 bad example stories they are taught.  A famous case a number of years back 
 involved a hotel in, I believe, Kansas City.  The hotel had a large, open 
 atrium, with two levels of concrete skyways for walking above.  The 
 skyways were hung from the roof.  As the structural engineer specified 
 their attachment, a long threaded steel rod ran from the roof, through one 
 skyway - with the skyway held on by a nut - and then down to the second 
 skyway, also held on by a nut.  The builder, realizing that he would have to 
 thread the nut for the upper skyway up many feet of rod, made a minor 
 change:  He instead used two threaded rods, one from roof to upper skyway, 
 one from upper skyway to lower skyway.  It's all the same, right?  Well, no:  
 In the original design, the upper nut holds the weight of just the upper 
 skyway.  In the modified version, it holds the weight of *both* skyways.  The upper fastening 
 failed, the structure collapsed, and as I recall several people on the 
 skyways at the time were killed.  So ... not even a factor of two safety 
 margin there.  (The take-away from the story as delivered to future 
 structural engineers was *not* that there wasn't a large enough safety margin 
 - the calculations were accurate and well within the margins used in building 
 such structures.  The issue was that no one checked that the structure was 
 actually built as designed.)
 
 I'll leave it to others to decide whether, and how, these lessons apply to 
 crypto design.

This would be the 1981 Kansas City Hyatt Regency walkway collapse 
(http://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse), where 114 people 
died, a bit more than "several." And the take-away included the fact that 
there were no architectural codes covering that particular structural design. I 
believe they now exist and include a significant safety margin.  The Wikipedia 
article includes a link to a NIST technical report on the disaster, but NIST 
and its web site are now closed due to the government shutdown. 

The concept of safety margin is a meta-design principle that is basic to 
engineering.  It's really the only way to answer the questions, vital in 
retrospect, we don't yet know to ask.  

That nist.gov is down also keeps me from reading the slide sets there on the 
proposal to change SHA-3 from the design that won the competition.  I'll 
reserve judgment on the technical arguments until I can see them, but there is 
a separate question of how much time the cryptographic community should be 
given to analyze a major change like that (think years). I would also note that 
the opinions of the designers of Keccak, while valuable, should not be 
considered dispositive any more than they were in the original competition.  


Arnold Reinhold
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why is emailing me my password?

2013-10-02 Thread Lodewijk andré de la porte
2013/10/2 Russ Nelson nel...@crynwr.com

 If you are proposing that something needs stronger encryption than
 ROT-26, please explain the threat model that justifies your choice of
 encryption and key distribution algorithms.


ROT-26 is fantastic for certain purposes, like encrypting for kids who have
just learned how to read. For anything other than no encryption at all, you
should have a good understanding of why you're employing the cryptography,
and why in this particular way.
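The joke, of course, is that ROT-26 shifts every letter by a full alphabet, i.e. not at all. A minimal Caesar-rotation sketch makes the point:

```python
def rot(text: str, n: int) -> str:
    """Rotate alphabetic characters by n positions (Caesar shift)."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr(base + (ord(ch) - base + n) % 26))
        else:
            out.append(ch)
    return ''.join(out)

# ROT-26 is the identity transform -- "encryption" that changes nothing:
assert rot("attack at dawn", 26) == "attack at dawn"
# ROT-13, its better-known sibling, is at least its own inverse:
assert rot(rot("attack at dawn", 13), 13) == "attack at dawn"
```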

Re: [Cryptography] encoding formats should not be committee'ized

2013-10-02 Thread Jerry Leichter
On Oct 2, 2013, at 10:46 AM, Viktor Dukhovni cryptogra...@dukhovni.org wrote:
 Text encodings are easy to read but very difficult to specify
 boundaries in without ambiguity.
 
 Yes, and not just boundaries.
Always keep in mind - when you argue for easy readability - that one of 
COBOL's design goals was for programs to be readable and understandable by 
non-programmers.  (There's an *immense* amount of history and sociology and 
assumptions about how businesses should be managed hidden under that goal.  One 
could write a large article, and probably a book, starting from there.)

My favorite more recent example of the pitfalls is TL1, a language and protocol 
used to manage high-end telecom equipment.  TL1 has a completely rigorous 
syntax definition, but is supposed to be readable.  This leads to such 
wonderful features as SPACE being syntactically significant, with SPACE SPACE 
sometimes meaning something different from a single SPACE.  I have no idea 
whether TL1 messages have a well-defined canonical form.  I doubt it.
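A generic illustration of why space-sensitive text formats are treacherous (this is deliberately *not* actual TL1 syntax, just a hypothetical record where a doubled space marks an empty field):

```python
line = "FIELD1 FIELD2  FIELD4"  # hypothetical record; "  " marks an empty field

# A naive whitespace split silently collapses the empty field:
print(line.split())     # ['FIELD1', 'FIELD2', 'FIELD4']

# Splitting on a single space preserves it -- but now every stray extra
# space changes the parse, so SPACE and SPACE SPACE really are different:
print(line.split(' '))  # ['FIELD1', 'FIELD2', '', 'FIELD4']
```

Two parsers that disagree only on whitespace handling will disagree on the message's meaning, which is exactly how such formats end up without a usable canonical form.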

Correct TL1 parsers are complicated; if you need one, it's generally best to 
bite the bullet and buy one from an established vendor.   Alternatively, 
you can go to 
http://telecom-info.telcordia.com/site-cgi/ido/docs.cgi?ID=SEARCHDOCUMENT=GR-831;
 and pay $728 for a document that appears to be less than 50 pages long.  Oh, 
and you may wish to refer to 6 other documents available at similarly 
reasonable prices.
-- Jerry



Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-02 Thread Brian Gladman
On 02/10/2013 13:58, John Kelsey wrote:
 On Oct 1, 2013, at 5:58 PM, Peter Fairbrother zenadsl6...@zen.co.uk wrote:
 
 AES, the latest-and-greatest block cipher, comes in two main forms - AES-128 
 and AES-256.

 AES-256 is supposed to have a brute force work factor of 2^256  - but we 
 find that in fact it actually has a very similar work factor to that of 
 AES-128, due to bad subkey scheduling.

 Thing is, that bad subkey scheduling was introduced by NIST ... after 
 Rijndael, which won the open block cipher competition with what seems to be 
 all-the-way good scheduling, was transformed into AES by NIST.
 
 What on Earth are you talking about?  AES' key schedule wasn't designed by 
 NIST.  The only change NIST made to Rijndael was not including some of the 
 alternative block sizes.  You can go look up the old Rijndael specs online if 
 you want to verify this.

As someone who was heavily involved in writing the AES specification as
eventually used by NIST, I can confirm what John is saying.

The NIST specification only eliminated Rijndael options - none of the
Rijndael options included in AES were changed in any way by NIST.

   Brian Gladman



Re: [Cryptography] encoding formats should not be committee'ised

2013-10-02 Thread Dave Horsfall
On Wed, 2 Oct 2013, Jerry Leichter wrote:

 Always keep in mind - when you argue for easy readability - that one 
 of COBOL's design goals was for programs to be readable and 
 understandable by non-programmers.

Managers, in particular.

-- Dave