Re: Solving password problems one at a time, Re: The password-reset paradox

2009-02-24 Thread Ed Gerck

silky wrote:

On Sun, Feb 22, 2009 at 6:33 AM, Ed Gerck edge...@nma.com wrote:
  

(UI in use since 2000, for web access control and authorization) After you
enter a usercode in the first screen, you are presented with a second screen
to enter your password. The usercode is a mnemonic 6-character code such as
HB75RC (randomly generated, which you receive from the server upon registration).
Your password is freely chosen by you upon registration. That second screen
also has something that you and the correct server know but that you did not
disclose in the first screen -- we can use a simple three-letter combination
ABC, for example. You use this to visually authenticate the server above the
SSL layer. A rogue server would not know this combination, which allays
spoofing considerations -- if you do not see the correct three-letter
combination, do not enter your password.



Well, this is an old plan and useless, because any rogue server can
just submit the 'usercode' to the real server and get the three
letters. Common implementations of this use pictures (cats, dogs,
family, user-uploaded, whatever).
  


Thanks for the comment. The BofA SiteKey attack you mention does not 
work for the web access scheme I mentioned because the usercode is 
private and random with a very large search space, and is always sent 
after SSL starts (hence, remains private). The attacker has a 
/negligible/ probability of success in our case, contrary to a case 
where the user sends the email address to get the three letters -- which 
is trivial to bypass.

http://nma.com/papers/zsentryid-web.pdf
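
The two-screen flow described above can be sketched as follows. This is only an illustrative sketch: the storage layout, alphabets, and function names are assumptions, not the actual implementation behind the paper.

```python
import hmac
import secrets

# Sketch of the two-screen scheme: the server stores, per private
# usercode, the anti-spoofing letters and the user's password.

def new_account(db, password):
    """Register: the server generates a random 6-character usercode."""
    alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"  # mnemonic; no 0/O or 1/I
    usercode = "".join(secrets.choice(alphabet) for _ in range(6))
    letters = "".join(secrets.choice("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
                      for _ in range(3))
    db[usercode] = {"letters": letters, "pw": password}
    return usercode, letters

def first_screen(db, usercode):
    """After the usercode arrives over SSL, reply with the shared letters.
    A rogue server, not knowing the private usercode, cannot answer."""
    entry = db.get(usercode)
    return entry["letters"] if entry else None

def second_screen(db, usercode, password):
    """Only after seeing the correct letters does the user send the password."""
    entry = db.get(usercode)
    return entry is not None and hmac.compare_digest(entry["pw"], password)
```

The point of the large random usercode space is visible in `first_screen`: without the usercode, an attacker cannot obtain the letters to show.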

(UI in use since 2008, TLS SMTP, aka SMTPS, authentication). The SMTP
Username is your email address, while the SMTP Password is obtained by the
user writing in sequence the usercode and the password. With TLS SMTP,
encryption is on from the start (implicit SSL), so that neither the Username
nor the Password is ever sent in the clear.



I have no idea what you're referring to here. It doesn't seem to make
sense in the context of the rest of your email. Are you saying your
system is useless given SSL? (Aside from the fact that it's useless
anyway ...)
  


I'm referring to SMTP authentication with implicit SSL. The same 
usercode|password combination is used here as well, but the usercode is 
prepended to the password while the username is the email address. In 
this case, there is no anti-phishing needed.
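
A minimal sketch of this login, following the credential rule in the text (email address as username, usercode prepended to the password). The host name is a placeholder assumption; port 465 is the conventional implicit-SSL SMTP port.

```python
import smtplib

def smtp_credentials(email, usercode, password):
    # Username is the email address; the SMTP password is the
    # usercode prepended to the freely chosen password.
    return email, usercode + password

def login_smtps(host, email, usercode, password):
    user, smtp_pass = smtp_credentials(email, usercode, password)
    # SMTP_SSL encrypts from the first byte (implicit SSL), so
    # neither credential is ever sent in the clear.
    with smtplib.SMTP_SSL(host, 465) as server:
        server.login(user, smtp_pass)
```

Usage would be `login_smtps("mail.example.com", "user@example.com", "HB75RC", "mypassword")`, with the host name assumed.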



(UI 2008 version, web access control) Same as the TLS SMTP case, where a
three-letter combination is provided for user anti-spoofing verification
after the username (email address) is entered. In trust terms, the user does
not trust the server with anything but the email address (which is public
information) until the server has shown that it can be trusted (to that
extent) by replying with the expected three-letter combination.



Wrong again, see above.
  


This case has the  same BofA SiteKey vulnerability. However, if that is 
bothersome, the scheme can also send a timed nonce to a cell phone, 
which is unknown to the attacker. This is explained elsewhere in 
http://nma.com/papers/zsentryid-web.pdf


(there are different solutions for different threat models)


In all cases, because the usercode is not controlled by the user and is
random, it adds a known and independently generated amount of entropy to the
Password.



Disregarding all of the above, consider that it may not be random, and
given that you can generate them on signup there is the potential to
know or learn the RNG a given site is using.
  


If the threat model is that you can learn or know the RNG a given site 
is using then the answer is to use a hardware RNG.



With a six-character (to be within the mnemonic range) usercode, usability
considerations (no letter case, no symbols, overload "0" with "O" and "1" with
"I", for example) will reduce the entropy that can be added to (say) 35
bits. Considering that the average poor, short password chosen by users has
between 20 and 40 bits of entropy, the end result is expected to have from
55 to 75 bits of entropy, which is quite strong.
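
As a rough arithmetic check (the exact symbol counts are assumptions): a 6-character code over the full case-sensitive alphanumeric alphabet carries about 35.7 bits, the usability reductions above (roughly 34 remaining symbols) bring a 6-character code to about 30.5 bits, and a 7th character restores the ~35-bit level.

```python
import math

def code_entropy_bits(alphabet_size, length):
    # Entropy of a uniformly random code: length * log2(alphabet size).
    return length * math.log2(alphabet_size)

full = code_entropy_bits(62, 6)      # case-sensitive alphanumeric: ~35.7 bits
reduced = code_entropy_bits(34, 6)   # no case, 0/O and 1/I merged: ~30.5 bits
seven = code_entropy_bits(34, 7)     # a 7th character: ~35.6 bits
```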



Doesn't really matter given it prevents nothing. Sites may as well
just ask for two passwords.
  
The point is that two passwords would still not have an entropy value 
that you can trust, as it all would depend on user input.

This can be made larger by,
for example, refusing to accept passwords that are shorter than 8 characters,
and by adding more characters to the usercode alphabet and/or to the usercode
itself (a 7-character code can still be mnemonic and human friendly).

The fourth problem, and the last important password problem that would still
remain, is the vulnerability of the password lists themselves, which could be
downloaded and cracked given enough time, outside the access protections of
online login (three strikes and you're out). This is also solved in our
scheme by using implicit passwords from a digital certificate calculation.
There are no username and password lists to be attacked in the first place.
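
The post does not spell out the "digital certificate calculation"; the following is only a generic sketch of the "nothing stored" idea using a keyed derivation, with all names and the derivation itself as illustrative assumptions.

```python
import hashlib
import hmac

# Generic sketch only: the server derives each user's credential on
# demand from a master key, so there is no username/password list on
# disk to download and crack. This is NOT the scheme's actual
# certificate calculation, just an illustration of the property.

SERVER_KEY = b"master key kept outside the user database"

def implicit_password(username):
    # Recomputed at every login; never stored per user.
    return hmac.new(SERVER_KEY, username.encode(),
                    hashlib.sha256).hexdigest()[:12]

def check_login(username, presented):
    return hmac.compare_digest(implicit_password(username), presented)
```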

Re: Solving password problems one at a time, Re: The password-reset paradox

2009-02-24 Thread Ed Gerck

James A. Donald wrote:

No one is going to check for the correct three letter
combination, because it is not part of the work flow, so
they will always forget to do it.


Humans tend to notice patterns. We easily notice misspellings. Your 
experience may be different, but we found out in testing that the 
three letters can be made large enough to become a visually noticeable 
pattern.


Reversing the point, the fact that a user can ignore the three-letters 
is useful if the user forgets them. The last thing users want is one 
more hassle. The idea is to give users a way to allay spoofing concerns, 
if they so want and are motivated to, or learn to be motivated. Mark 
Twain's cat was afraid of the cold stove.


Cheers,
Ed Gerck

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Solving password problems one at a time, Re: The password-reset paradox

2009-02-24 Thread Ed Gerck

silky wrote:

On Tue, Feb 24, 2009 at 8:30 AM, Ed Gerck edge...@nma.com wrote:
[snip]
  

Thanks for the comment. The BofA SiteKey attack you mention does not work
for the web access scheme I mentioned because the usercode is private and
random with a very large search space, and is always sent after SSL starts
(hence, remains private).



This is meaningless. What attack is the 'usercode' trying to prevent?
You said it's trying to authorise the site to the user. It doesn't do
this, because a 3rd party site can take the usercode and send it to
the 'real' site.
  


What usercode? The point you are missing is that there are 2^35 private 
usercodes and you have no idea which one matches the email address that 
you want to send your phishing email to.


The other points, including the  TLS SMTP login I mentioned, might be 
clearer with an example. I'll be happy to provide you with a test account.


Cheers,
Ed Gerck



Solving password problems one at a time, Re: The password-reset paradox

2009-02-23 Thread Ed Gerck
-strikes and you're out). This is also 
solved in our scheme by using implicit passwords from a digital 
certificate calculation. There are no username and password lists to be 
attacked in the first place. No target, hence no threat.


In other words, to solve the fourth password problem we shift the 
information security solution space: from the yet-unsolved security 
problem of protecting servers and clients against penetration attacks to 
a connection reliability problem that is easily solved today.


This approach of solving password problems one at a time shows that the 
big problem of passwords is now reduced to rather trivial data 
management functions -- no longer usability or data security functions.


Usability considerations still must be applied, of course, but not to 
solve the security problem. I submit that trying to solve the security 
problem while facing usability restrictions is what has prevented 
success so far.


Comments are welcome. More at www.Saas-ST.com

Best regards,
Ed Gerck
e...@gerck.com



Re: The wisdom of the ill informed

2008-07-01 Thread Ed Gerck

Perry,

You may well think that "You're completely wrong here," as you wrote. 
However, a first piece of evidence that I'm correct is that the online 
banking system has /not/ collapsed under this attack (Dan's point) in 
many years... even though bad guys do have access to large blocks of 
different IP numbers, etc.



In any case, there are a large number of reasons US banks don't
(generally) require or even allow anyone to enter PINs for
authentication over the internet. 


Wells Fargo allows PINs for user authentication. Passwords are 
optional and PINs are used for password setting. This is just to name 
one key US bank.


Further, when you wrote:

 I suspect that currently invalid accounts are probably even cheaper
 than valid ones

we all know that invalid accounts are of no use in an attack, so this 
issue is not relevant here.


But let me address your other points.

 I'm sure you will now go on about some other way to evade Dan's
 crucial point, but it should be obvious to almost anyone that you're
 not thinking like the bad guys. If you really want to go on about
 this, though, I'll let you have as much rope as you like, though
 only for a post or two as I don't want to bore people.

(don't worry, you never bore people)

Dan's question has to do with how to protect online access from 
multiple tries on the account number for a given PIN. Of course, the 
reverse (repeated use of the same account for different wrong PINs) 
can easily trigger a block.


As I replied to Dan, a counter-measure is for the server to 
selectively block IP numbers for the /same/ browser and /same/ PIN 
after 3 or 4 wrong attempts.
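
The selective block can be sketched as a counter keyed on the (IP, browser, PIN) triple, so other legitimate users behind the same NAT address are not locked out. The threshold and key choice here are assumptions.

```python
from collections import defaultdict

MAX_FAILURES = 3
failures = defaultdict(int)  # (ip, browser_id, pin) -> wrong attempts

def record_failure(ip, browser_id, pin):
    """Count a wrong attempt against this specific triple only."""
    failures[(ip, browser_id, pin)] += 1

def is_blocked(ip, browser_id, pin):
    """Block only the matching triple; other users of the same IP
    (different browser or different PIN) remain unaffected."""
    return failures[(ip, browser_id, pin)] >= MAX_FAILURES
```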


You present a valid objection in that there are people hijacking huge 
IP blocks for brief periods for spamming. People also hijack vast 
numbers of zombie machines. "Either technology is easily used to 
prevent block-by-IP from doing squat for you," you wrote.


Not so fast. Block-by-IP is not that useless. Many anti-spam 
blacklists use block-by-IP and it works. Further, if the PIN is held 
constant (eg, a common, easily guessed PIN) and the IP as well as the 
browser identification are changed while different account numbers are 
targeted, this pattern can trigger a block by that PIN that repeatedly 
(3 or more times) causes an access error, for any IP number and 
browser. Excessive errors/minute can also trigger inspection and blocks.


You can find many other ways to try to trick the system. For example, 
you can space out the attacks and rotate the trivial PINs to reduce 
suspicion -- but you will also reduce the number of tries per hour 
that you can perform for each account.


What makes a good difference in preventing an attack as mentioned by 
Dan is to /not/ allow weak passwords in the first place! But, because 
this is not really possible with PIN systems (even with 6 digits), the 
security designer can detect attack patterns and use them to trigger a 
block even for an a priori unknown IP.


Cheers,
Ed Gerck



Re: The wisdom of the ill informed

2008-07-01 Thread Ed Gerck

[Moderator's note: I'll let Ed have the last word. I'm sure everyone
knows what I'd say anyway. --Perry]

Perry E. Metzger wrote:

Ed Gerck [EMAIL PROTECTED] writes:

In any case, there are a large number of reasons US banks don't
(generally) require or even allow anyone to enter PINs for
authentication over the internet. 

Wells Fargo allows PINs for user authentication.


No they don't. 


Since you are not fully aware how Wells Fargo operates, let me 
clarify. What you say below is true for users entering the system /today/:



The new users of their online system get a temporary
password by phone or in the mail, and Wells Fargo requires that they
change it on first log in. The temporaries expire after 30 days,
too. They don't use their bank account numbers as account names,
either.

Where did you get the idea that they'd use 4-digit PINS from? It is
totally false.


No. Any Wells Fargo user today that has an /older/ account (eg, opened 
in 2001), can login with their numeric PINs if that is how their 
online access was done then and they did not change it.


So, even though WF /today/ does not accept /new/ users to use only 
numbers for their password, WF is happy to continue to accept /older/ 
rules, including accepting the PIN for online account login.



(Anyone who doesn't believe me can just go through their web site --
it explains all of this to their customers.)


Their website today is what they use today. Older account users that 
have not changed their login can still use their PINs for login. I 
know one company that, way back when, used their numeric PIN for login 
because that's what WF told them to do, and that just very recently 
changed to a safer password.


While it is good that WF has improved its rules, it would be better if 
they had made it compulsory for all users (not just newer ones) to renew 
their passwords when the rules started prohibiting using only numbers 
and /not/ requiring the PIN for first login.


I imagine that there are lots of sites out there that have likewise 
improved their front-end password acceptance rules but have not 
bothered to ask all their users to renew their passwords, and thus 
force compliance with newer, safer rules.



The system you propose as safe isn't used by anyone that I'm aware
of, and for good reason, too -- people who've done things like that
have been successfully attacked.

BTW, if anyone was this foolish, the fun you could have would be
amazing. You could rent a botnet for a few bucks and lock out half the
customer accounts on the site in a matter of hours. You could ruin
banks at will. It would be great fun -- only it isn't possible. No one
is stupid enough to set themselves up for that.


WF does that, still today, for their most valued customers -- their 
older customers. May our words be a good warning for them!



I suspect that currently invalid accounts are probably even cheaper
than valid ones

we all know that invalid accounts are of no use to attack, so this
issue is not relevant here.


You would use the invalid accounts to reverse engineer the account
number format so you don't have to do exhaustive search. Any
practitioner in this field can tell you how useful intelligence like
that would be. I suggest you consult one.


When you do the math, you will see that knowing a few hundred invalid 
accounts will not considerably reduce your search space for the 
comparison we are talking about. Remember, we are talking about 
4-digit PINs that have a search space of 9,000 choices (before you 
complain about the count, note that all 0xxx combinations are usually 
not accepted as a valid PIN for registration) versus an account number 
that is a sparse space with 12 digits and that (by the sheer number of 
valid users) must have at least /millions/ of valid accounts.
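
A back-of-the-envelope version of this comparison, with assumed round numbers:

```python
# Rough check of the comparison above; the exact counts are assumptions.
account_space = 10**12        # 12-digit account numbers
valid_accounts = 10**7        # "millions" of valid accounts, assumed
known_invalid = 300           # a few hundred known-invalid accounts

# Knowing a few hundred invalid accounts barely shrinks the space:
p_hit = valid_accounts / (account_space - known_invalid)
expected_tries = 1 / p_hit    # ~100,000 guesses to find one valid account

# A fixed PIN must also match that account's PIN (space of ~9,000):
p_success = p_hit * (1 / 9_000)   # roughly one success per ~900 million tries
```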



It is easy enough to blacklist all of the cable modems in the world
for SMTP service. ISPs voluntarily list their cable modem and DSL
blocks. It is a lot harder to explain to people that they can't do
their at-home banking from home, though. With half the windows boxes
in the world as part of botnets, and with dynamic address assignment,
it is hard to know whose computer *wouldn't* be on the blacklists
anyway...


Please check with actual banks. Bank users logging in from a static IP 
account are treated differently by the servers than users from a 
dynamic IP account. As they should.


The dialogue disconnect here is classical in cryptography, as we all 
have probably seen in practice. In the extreme, but not too uncommon 
position, a crypto guy cries for a better solution (which, more 
often than not, is either not usable or too expensive) while 
dismissing a number of perfectly valid but incomplete solutions that, 
when used together, could mount a good-enough (and affordable) 
defense. Many people have frequently made this point here, including 
yourself with EV certs.


Yes, blocking by IP is not a panacea, and may fail to block, but when 
it works it is mostly correct

Re: The wisdom of the ill informed

2008-06-30 Thread Ed Gerck

Allen wrote:
Very. The (I hate to use this term for something so pathetic) password 
for the file is 6 (yes, six) numeric characters!


My 6-year-old K6-II can crack this in less than one minute, as there are 
only 1.11*10^6 possibilities.


Not so fast. Bank PINs are usually just 4 numeric characters long and 
yet they are considered /safe/ even for web access to the account 
(where a physical card is not required).


Why? Because after 4 tries the access is blocked for your IP number 
(in some cases after 3 tries).


The question is not only how many combinations you have but also how 
much time you need to try enough combinations so that you can succeed.
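
A quick illustration of that time/tries trade-off, with assumed numbers:

```python
# What matters is guesses per unit time, not just the size of the space.
pin_space = 9_000           # 4-digit PINs, 0xxx excluded (per the text)
tries_before_block = 3      # wrong attempts allowed per IP, assumed

# From a single IP the attacker covers a tiny fraction of the space:
p_per_ip = tries_before_block / pin_space      # 1 in 3,000 per blocked IP

# Sweeping the whole space needs thousands of distinct IP addresses:
ips_needed = pin_space // tries_before_block   # 3,000 addresses
```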


I'm not defending the designers of that email system, as I do not know 
any specifics -- I'm just pointing out that what you mention is not 
necessarily a problem and may be even safer than secure online banking 
today.


Cheers,
Ed Gerck



Re: The wisdom of the ill informed

2008-06-30 Thread Ed Gerck

[EMAIL PROTECTED] wrote:

Ed Gerck writes:
-+--
 | ...
 | Not so fast. Bank PINs are usually just 4 numeric characters long and 
 | yet they are considered /safe/ even for web access to the account 
 | (where a physical card is not required).
 | 
 | Why? Because after 4 tries the access is blocked for your IP number 
 | (in some cases after 3 tries).

 | ...


So I hold the PIN constant and vary the bank account number.


Dan,

This is, indeed, a possible attack considering that the same IP may be 
legitimately used by different users behind NAT firewalls and/or with 
dynamic IPs. However, there are a number of reasons, and evidence, why 
this attack can be (and has been) prevented even for a short PIN:


1. there is a much higher number of combinations in a 12-digit account 
number;


2. banks are able to selectively block IP numbers for the /same/ 
browser and /same/ PIN after 3 or 4 wrong attempts, with a small false 
detection probability for other users of the same IP (who are not 
blocked). I know one online system that has been using such a method for 
protecting webmail accounts, with several attacks logged but no 
compromise and no false detection complaints in 4 years.


3. some banks reported that, in order to satisfy FFIEC requirements for 
two-factor authentication but without requiring the customer to use 
anything else (eg, a dongle or a "battleship map"), they were 
detecting the IP, browser information and use patterns as part of the 
authentication procedure. This directly enables #2 above.


I also note that the security problem with short PINs is not much 
different than that with passwords, as users notoriously choose 
passwords that are easy to guess. However, an online system that is 
not controlled by the attacker is able to likewise prevent multiple 
password tries, or multiple account tries for the same password.


Cheers,
Ed Gerck



Re: The wisdom of the ill informed

2008-06-30 Thread Ed Gerck

Allen wrote:
During the transmission from an ATM machine 4 numeric characters are 
probably safe because the machines use dedicated dry pair phone lines 
for the most part, as I understand the system. This, combined with 
triple DES, makes it very difficult to compromise or do a MIM attack 
because one can not just tap into the lines remotely. 


We are in agreement. Even short PINs could be safe in a bank-side 
authenticated (no MITM) SSL connection with 128-bit encryption. 
What's also needed is to block multiple attempts after 3 or 4 tries, 
in both the ATM and the SSL online scenarios.


Cheers,
Ed Gerck



Re: Can we copy trust?

2008-06-03 Thread Ed Gerck

Bill Soley wrote:
I am thinking that trust is a relationship.  A trusts B.  So if you 
start with A trusts B and you do some operation that results in C 
trusts B then you have not copied anything because A trusts B is not 
equal to C trusts B.  You can't call that operation a copy. 


Trust is indeed expressed by relationships. And those relationships 
can be transmitted with proper consideration -- just not in your 
example. In the case of SSL certs, a simple file copy is enough.


Cheers,
Ed Gerck

Addendum:

Did you have a chance yet to read Kelly's paper? In that paper, he is 
looking for stuff that can't be copied -- because he hopes that such 
stuff is scarce and valuable. "When copies are free, you need to sell 
things which can not be copied."


Kelly says that we can't copy trust. So, if I have 100 servers for the 
domain example.com does this mean that I have to buy 100 trusted SSL 
certs from the CA? Or, is any copy of the SSL cert as trustworthy as 
the original?




Re: Can we copy trust?

2008-06-03 Thread Ed Gerck

Ben Laurie wrote:

Obviously. Clearly I am talking about a server in a different domain.


And we (Kelly and I) were talking about copying trust, where a copy is 
(as usual) a reproduction, a replication of an original. If you are 
copying trust from a domain, as represented by a SSL cert signed by a 
trusted CA, it should be a reproduction of /that/ trust  -- not trust 
on a different domain.


If you want to copy trust to a different domain, then we need to 
transfer the trust. This is also /possible/, as you know, as long as 
the issuing CA has set the CA bit in the SSL certificate. Object 
Signing CA certs must have the Object Signing CA bit set.


In summary, in SSL you can both copy and transfer trust. Without 
further evidence, which can be provided in private if desired by anyone, 
(1) SSL is not the only such example on the Internet; and (2) we can 
likewise copy and transfer trust in our social interactions, not just 
in our digital interactions.


Cheers,
Ed Gerck



Re: Can we copy trust?

2008-06-03 Thread Ed Gerck

IanG wrote:

Ed Gerck wrote:
When you look at trust in various contexts, you will still find the 
need to receive information from sources OTHER than the source you 
want to trust. You may use these channels under different names, such 
as memory which is a special type of output that serves as input at a 
later point in time.



It is useful and efficient to get trust from third parties, but not 
essential, imho.  If you find yourself meeting someone for the first 
time in random circumstances, you can get to know them over time, and 
trust them, fully 2nd party-wise.


Yes, and the OTHER channels needed for trust are exactly those 
time-defined channels that you set up as you get to know them over 
time. Each interaction, each phrase, each email exchanged is another 
channel.


Still, you can be talking to Doris in a p2p interaction over months 
and never realize it's actually Boris. This can happen in personal 
meetings as well, not just online.


The point being that (1) you need those other channels and can 
recognize them even if you are just in a p2p interaction; and (2) be 
careful because whatever channels you have, they will only span a 
certain, limited extent in the interaction that you want to trust, so 
your reliance space must be contained within that extent.


Attempting to cast trust as an aspect of channels is a technological 
approach, and will lead one astray, just as PKI did;  trust is built on 
acts, of humans, and involves parties and events, risks and rewards.  
The channels are incidental.


Shannon's information theory is a general approach that, even though 
it has limitations as any other model does, has allowed researchers to 
deal with both social and technical aspects of trust.


The important point, contrary to what PKI did, is to base the 
technical definition of trust on the social mediation of trust that we 
have learned over thousands of years.


Thus, when we look at linguistics and other areas where we find 
expressions of social experience and communication in a culture, we 
see that the unique, defining aspect of trust is that trust on 
something or someone needs /OTHER/ channels of information (where 
memory is also a channel) than the information channel we want to trust.


This social-linguistic observation transfers directly to the 
definition we can use with information theory for the technical aspect 
of trust, allowing the /same/ model of trust to be used in both 
worlds, as:


trust is that which is essential to a communication channel but 
cannot be transferred from a source to a destination using that channel.


From this abstract definition, you can instantiate a definition that 
applies to any desired context that you want -- social and/or 
technical -- while assuring that they all have the same model of 
trust. Examples are provided at the top of 
http://mcwg.org/mcg-mirror/trustdef.htm


As usual, information is defined as: information is that which is 
transferred from a source to a destination. If the same information 
is already present at the destination, there is no transfer. That's 
why information is surprise; there's no surprise if the information 
already exists at the destination.
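
The "information is surprise" remark can be made precise with Shannon's self-information:

```python
import math

# Shannon's self-information: an event with probability p carries
# -log2(p) bits. A certain event (p = 1) carries zero bits -- no
# surprise, hence no information transferred to the destination.

def surprisal_bits(p):
    return -math.log2(p)

certain = surprisal_bits(1.0)    # 0.0 bits: already known, no surprise
coin = surprisal_bits(0.5)       # 1.0 bit: a fair coin flip
rare = surprisal_bits(1 / 1024)  # 10.0 bits: a rare event is very informative
```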


You can see this better in the study of negotiation.  It is possible 
using this theory/practice to build trust, or to prove that no trust can 
be achieved.  Negotiation is primarily a paradigm of two parties.


You can use different models. I believe that trust is a more 
fundamental model than negotiation, as we can have trust without 
negotiation.


Cheers,
Ed Gerck



Re: Can we copy trust?

2008-06-03 Thread Ed Gerck

[EMAIL PROTECTED] wrote:

You don't have to trust the target site's self assertions about
its own identity because you trust the root to only validate for sites
that are what they claim to be.


From the viewpoint of the user (which is the viewpoint used by 
Kelly), we see that trust can be copied when different users, 
accessing different servers for the same domain, do not know that they 
are using different copies of the /same/ SSL cert. In fact, no copy is 
less of an original than the original itself!


We see that the trust relationship represented by that SSL cert can be 
copied without any loss, as many times as you wish (for the possible 
dismay of the CA). If the CA bit is set, trust can even be transferred 
to multiple domains, and the trust represented by each such SSL cert 
in each domain can be copied without limit as well.


As to another point of your comment, the problem most people have with 
PKI is not that SSL does not work. SSL does not even need PKI.


The problem can be explained in terms of extent of trust. If you don't 
define your extent of trust in a CA, for example in your acceptance 
policy of records signed by certs from a CA, you may run into 
difficulties. The difficulties are /solved/ (within your risk model) 
when you correctly define the extent of trust -- rather than just 
taking a "trust in all matters" attitude.


For example, even though I do not trust a CA's CRLs, I may trust that 
CA to prevent rogue use of its private key for signing end-user certs. 
This trust, limited by this extent, can be used in automating use of 
certs from that CA -- for example, only accept signatures from 
end-user certs of that CA if the cert is less than 31 days old (or, 15 
days -- whatever your risk model says).
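
A sketch of such an age-based acceptance rule; the function name and date handling are assumptions, while the 31-day cutoff follows the text:

```python
from datetime import datetime, timedelta, timezone

# Limited-extent trust: instead of relying on the CA's CRLs, accept a
# signature only if the signing certificate is younger than a cutoff
# chosen by your risk model.
MAX_CERT_AGE = timedelta(days=31)

def cert_fresh_enough(not_before, now=None):
    """Accept signatures only from certs younger than MAX_CERT_AGE."""
    now = now or datetime.now(timezone.utc)
    return now - not_before <= MAX_CERT_AGE
```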


Cheers,
Ed Gerck



Can we copy trust?

2008-06-02 Thread Ed Gerck
In the essay Better Than Free, Kevin Kelly debates which concepts hold 
value online, and how to monetize those values. See 
www.kk.org/thetechnium/archives/2008/01/better_than_fre.php


Kelly's point can be very useful: *When copies are free, you need to 
sell things which can not be copied.*


The problem that I see and present to this list is that, when he discusses 
qualities that can't be copied, he considers trust as something that 
cannot be copied.


Well, in the digital economy we had to learn how to copy trust and we 
did. For example, SSL would not work if trust could not be copied.


How do we copy trust? By recognizing that because trust cannot be 
communicated by self-assertions (*), trust cannot be copied by 
self-assertions either.


To trust something, you need to receive information from sources OTHER 
than the source you want to trust, and from as many other sources as 
necessary according to the extent of the trust you want. With more trust 
extent, you are more likely to need more independent sources of 
verification.


To copy trust, all you do is copy the information from those channels in 
a verifiable way and add that to the original channel information. We do 
this all the time in scientific work: we provide our findings, we 
provide the way to reproduce the findings, and we provide the published 
references that anyone can verify.


To copy trust in the digital economy, we provide  digital signatures 
from one or more third-parties that most people will trust.


This is how SSL works. The site provides a digital certificate signed by 
a CA that most browsers trust, providing an independent channel to 
verify that the web address is correct -- in addition to what the 
browser's location line says.


Cheers,
Ed Gerck

(*) Trust as qualified reliance on information in 
http://nma.com/papers/it-trust-part1.pdf and
Digital Certificates: Applied Internet Security by J. Feghhi, J. Feghhi 
and P. Williams, Addison-Wesley, ISBN 0-201-30980-7, 1998.




Re: Can we copy trust?

2008-06-02 Thread Ed Gerck

Ben Laurie wrote:
But doesn't that prove the point? The trust that you consequently place 
in the web server because of the certificate _cannot_ be copied to 
another webserver. That other webserver has to go out and buy its own 
copy, with its own domain name in it.


A copy is something identical. So, in fact you can copy that server 
cert to another server that has the same domain (load balancing), and 
it will work. Web admins do it all the time. The user will not notice 
any difference in how the SSL will work.


Another point: When we talk about a copy, we're technically talking 
about a transmission. To copy a web page to your hard disk is to 
transmit bits from the web server to your disk. To say that we cannot 
copy trust would, thus, be the same as to say that we cannot transmit 
trust. But we can and do transmit trust -- we just have to do it right 
(see refs in previous post). Similarly, we have to do it right when we 
transmit data (for example, if we don't have enough bandwidth or if 
there is too much noise, the data will not be 100% transferred).


Cheers,
Ed Gerck



Re: Can we copy trust?

2008-06-02 Thread Ed Gerck

Bill Frantz wrote:

[EMAIL PROTECTED] (Ed Gerck) on Monday, June 2, 2008 wrote:

To trust something, you need to receive information from sources OTHER 
than the source you want to trust, and from as many other sources as 
necessary according to the extent of the trust you want. With more trust 
extent, you are more likely to need more independent sources of 
verification.


In my real-world experience, this way of gaining trust is only
really used for strangers. For people we know, recognition and
memory are more compelling ways of trusting.


Recognition = a channel of information
memory = a channel of information

When you look at trust in various contexts, you will still find the 
need to receive information from sources OTHER than the source you 
want to trust. You may use these channels under different names, such 
as memory, which is a special type of output that serves as input at a 
later point in time.


The distinguishing aspect between information and trust is this: 
trust is that which is essential to a communication channel but 
cannot be transferred from a source to a destination using that 
channel. In other words, self-assertions cannot transfer trust. 
"Trust me" is, actually, a good indication not to trust.



We can use this recognition and memory in the online world as well.
SSH automatically recognizes previously used hosts. Programs such
as the Pet Names Tool http://www.waterken.com/user/PetnameTool/
recognize public keys used by web sites, and provide us with a
human-recognizable name so we can remember our previous
interactions with that web site. Once we can securely recognize a
site, we can form our own trust decisions, without the necessity of
involving third parties.


Yes, where recognition is the OTHER channel that tells you that the 
value (given in the original channel) is correct. Just the value by 
itself is not useful for communicating trust -- you also need 
something else (eg, a digital sig) to provide the OTHER channel of 
information.


Cheers,
Ed Gerck



Re: User interface, security, and simplicity

2008-05-05 Thread Ed Gerck

Ian G wrote: (on Kerckhoffs's rules)

=
6. Finally, it is necessary, given the circumstances that command its 
application, that the system be easy to use, requiring neither mental 
strain nor the knowledge of a long series of rules to observe.

=
...
PS:  Although his 6th is arguably the most important


Yes. Usability should be the #1 property of a secure system.

Conventional security thinking says that usability and security are 
like a seesaw; if usability goes up, security must go down, and 
vice-versa. This apparent antinomy actually works as a synergy: with 
more usability in a secure system, security increases. With less 
usability in a secure system, security decreases. A secure system that 
is not usable will be left aside by users.


Cheers,
Ed Gerck



Re: Designing and implementing malicious hardware

2008-04-28 Thread Ed Gerck

Leichter, Jerry wrote:

I suspect the only heavy-weight defense is the same one we use against
the Trusting Trust hook-in-the-compiler attack:  Cross-compile on
as many compilers from as many sources as you can, on the assumption
that not all compilers contain the same hook. 
...

Of course, you'd end up with a machine no faster than your slowest
chip, and you'd have to worry about the correctness of the glue
circuitry that compares the results. 


Each chip does not have to be 100% independent, and does not have to 
be used 100% of the time.


Assuming a random selection of both outputs and chips for testing, and 
a finite set of possible outputs, it is possible to calculate what 
sampling ratio would provide an adequate confidence level -- a good 
guess is 5% sampling.


This should not create a significant impact on average speed, as 95% 
of the time the untested samples would not have to wait for 
verification (from the slower chips). One could also trust-certify 
each chip based on its positive, long term performance -- which could 
allow that chip to run with much less sampling, or none at all.
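A back-of-envelope sketch of the sampling argument, assuming samples are drawn independently at random (the function name is illustrative):

```python
def detection_probability(sampling_ratio, bad_outputs):
    """Chance that at least one of `bad_outputs` malicious results
    falls into the randomly sampled, cross-checked fraction."""
    return 1.0 - (1.0 - sampling_ratio) ** bad_outputs

# With 5% sampling, a chip that misbehaves 100 times is almost
# surely caught at least once:
p = detection_probability(0.05, 100)   # about 0.994
```

A chip that misbehaves only once, however, escapes with probability 0.95 -- which is the core of the objection raised in the follow-up messages.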


In general, this approach is based on the properties of trust when 
viewed in terms of Shannon's IT method, as explained in [*]. Trust is 
seen not as a subjective property, but as something that can be 
communicated and measured. One of the resulting rules is that trust 
cannot be communicated by self-assertions (ie, asking the same chip) 
[**]. Trust can be positive (what we call trust), negative (distrust), 
and zero (atrust -- there is no trust value associated with the 
information, neither trust nor distrust). More in [*].


Cheers,
Ed Gerck

 References:
[*] www.nma.com/papers/it-trust-part1.pdf
www.mcwg.org/mcg-mirror/trustdef.htm

[**] Ken's paper title (op. cit.) is, thus, identified to be part of 
the very con game described in the paper.




Re: Designing and implementing malicious hardware

2008-04-28 Thread Ed Gerck

Perry E. Metzger wrote:

Ed Gerck [EMAIL PROTECTED] writes:

Each chip does not have to be 100% independent, and does not have to
be used 100% of the time.

Assuming a random selection of both outputs and chips for testing, and
a finite set of possible outputs, it is possible to calculate what
sampling ratio would provide an adequate confidence level -- a good
guess is 5% sampling.


Not likely.

Sampling will not work. Sampling theory assumes statistical
independence and that the events that you're looking for are randomly
distributed. 


Provided you have access to enough chip diversity so as to build a 
correction channel with sufficient capacity, Shannon's Tenth Theorem 
assures you that it is possible to reduce the effect of bad chips on 
the output to an error rate /as close to zero/ as you desire. There is 
no lower limiting value other than zero.


Statistical independence is not required to be 100%. Events are not 
required to be uniformly distributed either. Sampling is required to be 
independent, but likewise not 100% so.
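One way to picture the correction channel is the simplest redundancy scheme, a majority vote across chips. A sketch under the (strong) assumptions of independent failures and an honest majority -- the function name and the 10% figure are illustrative, not from the original post:

```python
from math import comb

def majority_error(n_chips, chip_error):
    """Probability that a majority of n_chips (odd) independent
    chips give the wrong answer at the same time."""
    k = n_chips // 2 + 1
    return sum(comb(n_chips, i) * chip_error**i * (1 - chip_error)**(n_chips - i)
               for i in range(k, n_chips + 1))

# With 10% bad chips, the voted error rate drops fast with diversity:
rates = [majority_error(n, 0.10) for n in (1, 3, 5, 11)]
```

The error rate falls geometrically with the number of independent chips; there is no floor other than zero, which is the Shannon-style point being made.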



We're dealing with a situation in which the opponent is
doing things that are very much in violation of those assumptions.


The counter-point is that the existence of a violation can be tested 
within a desired confidence level, which confidence level is dynamic.



The opponent is, on very very rare occasions, going to send you a
malicious payload that will do something bad. Almost all the time
they're going to do nothing at all. You need to be watching 100% of
the time if you're going to catch him with reasonable confidence, but
of course, I doubt even that will work given a halfway smart attacker.


The more comparison channels you have, and the more independent they 
are, the harder it is to compromise them /at the same time/.


In regard to time, one strategy is indeed to watch 100% of the time 
but for random windows of certain lengths and intervals. The duty 
ratio for a certain desired detection threshold depends on the 
correction channel total capacity, the signal dynamics, and some other 
variables. Different implementations will allow for different duty 
ratios for the same error detection capability.



The paper itself describes reasonable ways to prevent detection on the
basis of most other obvious methods -- power utilization, timing
issues, etc, can all be patched over well enough to render the
malhardware invisible to ordinary methods of analysis.


Except as above; using a correction channel with enough capacity the 
problem can /always/ be solved (ie, with an error rate as close to 
zero as desired).



Truth be told, I think there is no defense against malicious hardware
that I've heard of that will work reliably, and indeed I'm not sure
that one can be devised.


As above, the problem is solvable (existence proof provided by 
Shannon's Tenth Theorem).  It is not a matter of whether it works -- 
the solution exists; it's a matter of implementation.


Cheers,
Ed Gerck



Re: Designing and implementing malicious hardware

2008-04-28 Thread Ed Gerck

Perry E. Metzger wrote:

No. It really does not. Shannon's tenth theorem is about correcting
lossy channels with statistically random noise. This is about making
sure something bad doesn't happen to your computer like having someone
transmit blocks of your hard drive out on the network. I assure you
that Shannon's theorem doesn't speak about that possibility. 


Yet, Shannon's tenth theorem can be proven without a hypothesis that 
noise is random, or that the signal is anything in particular.


Using intuition, because no formality is really needed, just consider 
that the noise is a well-defined sine function. The error-correcting 
channel provides the same sine function in counter phase. You will 
see that the less random the noise is, the easier the correction gets. 
Not the other way around.
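The intuition can be checked numerically in a few lines (a sketch using only the standard library; amplitudes and frequencies are illustrative):

```python
import math

N = 1000
t = [2 * math.pi * i / N for i in range(N)]
signal = [math.cos(x) for x in t]
noise = [0.5 * math.sin(3 * x) for x in t]   # a well-defined, non-random "noise"
counter = [-v for v in noise]                # same function, counter phase

received = [s + n + c for s, n, c in zip(signal, noise, counter)]
residual = max(abs(r - s) for r, s in zip(received, signal))
# residual is at floating-point level: the deterministic noise cancels exactly
```

The fully deterministic "noise" cancels to floating-point precision; randomness is what makes correction harder, not easier.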


How about an active adversary? You just need to consider the 
adversary's reaction time and make sure that the error-correcting 
channel has enough capacity to counter-react within that reaction 
time. For chip fabrication, this may be quite long.


Cheers,
Ed Gerck



Re: 2factor

2008-04-18 Thread Ed Gerck

Leichter, Jerry wrote:

No real technical data I can find on the site, and I've never seen
a site with so little information about who's involved.  (Typically,
you at least get a list of the top execs.)  Some ex-spooks?  Pure
snake oil?  Somewhere in between?


He's likely called Paul McGough, of Washington, DC, and ignores that 
SSL prevents MITM. It gets worse after this.


http://www.linkedin.com/pub/0/6ab/50b
http://2factor.com/pdf/technology_brief.pdf
http://www.freshpatents.com/Method-and-system-for-performing-perfectly-secure-key-exchange-and-authenticated-messaging-dt20060216ptan20060034456.php



Still locked up Shannon crypto work?

2008-04-16 Thread Ed Gerck

Consider Shannon. He didn’t do just information theory. Several
years before, he did some other good things and some which are still
locked up in the security of cryptography.

Shannon's crypto work that is still [1986] locked up? This was
said (*) by Richard W. Hamming on March 7, 1986. Hamming,
who died when he was almost 83 years old in 1998, was then a
Professor at the Naval Postgraduate School in Monterey, California.
He was also a retired Bell Labs scientist.

Does anyone know about this or what it could be? Or whether Hamming
was incorrect?

(*) http://magic.aladdin.cs.cmu.edu/wp-uploads/hamming.pdf

(BTW, this was a great talk!)

Cheers,
Ed Gerck



Re: SSL/TLS and port 587

2008-01-23 Thread Ed Gerck

Paul Hoffman wrote:

At 10:38 AM -0800 1/22/08, Ed Gerck wrote:
The often expressed idea that SSL/TLS and port 587 are somehow able to 
prevent warrantless wiretapping and so on, or protect any private 
communications, is IMO simply not supported by facts.


Can you point to some sources of this often expressed idea? It seems 
like a pretty flimsy straw man.


It is common with those who think that the threat model is
traversing the public Internet. As I commented in the
second paragraph, an attack at the ISP (where SSL/TLS is
of no help) has been the dominant threat -- and that is
why one of the main problems is called warrantless
wiretapping. Further, because US law does /not/ protect
data at rest, anyone claiming authorized process (which
the ISP itself may) can eavesdrop without any required
formality.

For examples on claiming that SSL/TLS can protect email
privacy, see the commercial email security product by
www.postini.com (now with google):

"Postini’s Encryption Manager Policy-Enforced TLS has successfully
met SEI’s email security needs, protecting communications where they
are most vulnerable — traversing the public Internet." [sic]
in http://www.postini.com/customers/SEI_0929.pdf

In another page at postini.com, we can read: "With TLS,
we will be able to securely send and receive confidential
documents with our clients who support TLS." While this
part is 100% correct, it is not relevant for the security
of those documents, as they sit in plaintext at the ISPs.

Also, in the current thread on Comcast blocking port 25 at Farber's
IP list, and in previous threads here, using TLS/SSL has been promoted
to help cease to become low hanging fruit for reading or public
dissemination, and to prevent a private contractor's (ISP) misuse or
loss/exposure of your data. However, having a port 587 TLS connection
to my ISP (eg, gmail) is not going to make my email more or less
protected at that ISP, and is not going to prevent wiretapping.

Of course, SSL/TLS is very successful in e-commerce. But SSL/TLS is
not an email authentication and encryption solution, and fails for
email where the risk is higher.

Cheers,
Ed Gerck



Re: SSL/TLS and port 587

2008-01-23 Thread Ed Gerck

Bodo Moeller wrote:

You don't take into account the many users these days who use wireless
Internet access from their laptop computers, typically essentially
broadcasting all network data to whoever is sufficiently close and
sufficiently nosy. 


Yes. Caveats apply but SSL/TLS is useful and simple for this purpose.


Of course using SSL/TLS for e-mail security does
not *solve* the problem of e-mail eavesdropping (unless special care
is taken within a closed group of users), but it certainly plays an
important role in countering eavesdropping in some relevant scenarios.


The problem is when it is generalized from the particular case where
it helps (above) to general use, and as a solution to prevent wireless
wiretapping. For example, as in this comment from a data center/network
provider:

-
Now, personally, with all the publicly available info regarding
warrantless wiretapping and so on, why any private communications should
be in the clear I just don't know. Even my MTA offers up SSL or TLS to
other MTA's when advertising its capabilities. The RFC is there, use it
as they say.
-

Cheers,
Ed Gerck



Re: SSL/TLS and port 587

2008-01-23 Thread Ed Gerck

Steven M. Bellovin wrote:

On Tue, 22 Jan 2008 21:49:32 -0800
Ed Gerck [EMAIL PROTECTED] wrote:


As I commented in the
second paragraph, an attack at the ISP (where SSL/TLS is
of no help) has been the dominant threat -- and that is
why one of the main problems is called warrantless
wiretapping. Further, because US law does /not/ protect
data at rest, anyone claiming authorized process (which
the ISP itself may) can eavesdrop without any required
formality.


Please justify this.  Email stored at the ISP is protected in the U.S.
by the Stored Communications Act, 18 USC 2701
(http://www4.law.cornell.edu/uscode/18/2701.html).  While it's not a
well-drafted piece of legislation and has been the subject of much
litigation, from the Steve Jackson Games case
(http://w2.eff.org/legal/cases/SJG/) to Warshak v. United States
(http://www.cs.columbia.edu/~smb/blog/2007-06/2007-06-19.html), I don't
see how you can say stored email isn't protected at all.


As you wrote in your blog, users really need to read those boring
[ISP] licenses carefully.

ISP service terms grant the disclosure right on the basis of
something broadly called "valid legal process" or any such
term as defined /by the ISP/. Management access to the account
(including email data) is a valid legal process (authorized by the
service terms as a private contract) that can be used without
any required formality, for example to verify compliance to the
service terms or something else [1].

Frequently, common sense and standard use are used to
justify such access but, technically, no justification is
actually needed.

Further, when an ISP such as Google says "Google does not share
or reveal email content or personal information with third
parties," one usually forgets that (1) third parties may actually
mean everyone on the planet but you; (2) third parties also
have third parties; and (3) #2 is recursive.

Mr. Councilman's case and his lawyer's declaration that Congress
recognized that any time you store communication, there is an
inherent loss of privacy was not in your blog, though. Did I
miss something?

Cheers,
Ed Gerck

[1] in http://mail.google.com/mail/help/about_privacy.html :
Of course, the law and common sense dictate some exceptions. These exceptions include 
requests by users that Google's support staff access their email messages in order to 
diagnose problems; when Google is required by law to do so; and when we are compelled to 
disclose personal information because we reasonably believe it's necessary in order to 
protect the rights, property or safety of Google, its users and the public. For full 
details, please refer to the When we may disclose your personal information 
section of our privacy policy. These exceptions are standard across the industry and are 
necessary for email providers to assist their users and to meet legal requirements.



Re: SSL/TLS and port 587

2008-01-23 Thread Ed Gerck

Steven M. Bellovin wrote:

You're confusing two concepts.  Warrants apply to government
behavior; terming something a wireless wiretap carries the clear
implication of government action.  Private action may or may not
violate the wiretap act or the Stored Communications Act, but it has
nothing to do with warrants.


First, there is no confusion here; I was simply addressing both
issues as in my original question to the list:

  The often expressed idea that SSL/TLS and port 587 are
  somehow able to prevent warrantless wiretapping and so on, or
  protect any private communications, is IMO simply not
  supported by facts.

Second, those two issues are not as orthogonal as one might
think. After all, an ISP is already collaborating in the
case of a warrantless wiretap. So, where would the tap
take place:

1. where the email is encrypted, or
2. where the email is not encrypted.

Considering the objective of the tap, and the expenses incurred
to do it, it seems quite improbable to choose #1.

Thanks for Mr. Councilman's case update. I mentioned it only
because it shows what does happen and the economic motivations
for it, none of which could have been prevented by SSL/TLS
protecting email submission.

Cheers,
Ed Gerck



SSL/TLS and port 587

2008-01-22 Thread Ed Gerck

List,

I would like to address and request comments on the use of SSL/TLS and port 587 
for email security.

The often expressed idea that SSL/TLS and port 587 are somehow able to prevent 
warrantless wiretapping and so on, or protect any private communications, is 
IMO simply not supported by facts.

Warrantless wiretapping and so on, and private communications eavesdropping are done more 
efficiently and covertly directly at the ISPs (hence the name warrantless 
wiretapping), where SSL/TLS protection does NOT apply. There is a security gap at 
every negotiated SSL/TLS session.

It is misleading to claim that port 587 solves the security problem of email 
eavesdropping, and gives people a false sense of security. It is worse than 
using a 56-bit DES key -- the email is in plaintext where it is most vulnerable.

Cheers,
Ed Gerck



Re: 2008: The year of hack the vote?

2007-12-26 Thread Ed Gerck

[EMAIL PROTECTED] wrote:

May I point out that if voting systems have a level
of flaw that says only an idiot would use them, then
how can you explain electronic commerce, FaceBook,
or gambling sites?  More people use just those three
than will *ever* vote.


The answer is NO, and that is so because it's different.

In elections, you must have a Chinese wall between the voter and the ballot. 
If I get the vote I don't know who the voter is, if I get the voter I don't know what the 
vote is. And that doesn't happen in e-commerce. In e-commerce I have a traceable credit 
card. I have a traceable name, I have an address for delivery. Anything that's bought 
must be delivered. I have a pattern of buying, if you go to Amazon.com, they will suggest 
the next book to you if you want, based on what you bought. They may know a lot more 
about you than you think they know.

And so there is a basic difference between e-commerce and Internet voting, 
which must not be ignored, otherwise ignorance is bliss, we don't see it.

In e-commerce there must be no privacy, the merchant must know who I am, my 
credit card must be valid. There are laws against [fraud in] this. So there is 
a basic divide here, which you need to take into account. There is a paradigm 
shift, there is a very strong technological point which those on the political 
side don't see, because that's natural. And there is a very strong political 
side that us, on the technological side don't see. For us, yes, voter 
participation is very good, or don't we all care if voter participation may 
decrease?

So the point that I wanted to make is that it [Internet voting] is not as easy 
[as in e-commerce], because it's a fundamentally different problem. The 
solution is not the same, what we have today [for e-commerce] does not 
transpose, and the solution, the final comment, the solution that we have today 
for e-commerce is not cryptography, is insurance, for 20 percent of fraud that 
is the Internet fraud in credit cards. And how is that paid? By us, 
cardholders, we socialize the cost. Imagine telling, yes, you were elected 
president, but you know, there was a fraud, here is our insurance policy. You 
collect your million dollars, next time play again. You know, we cannot 
socialize fraud in elections. We cannot accept 20 percent of fraud paid for by 
insurance, which is what happens today. We did solve the e-commerce security 
problem, by putting in insurance. We can not solve it that way [for elections].

(from my Brookings Symposium comment, Washington, DC, January 2000).

Cheers,
Ed Gerck



Re: PlayStation 3 predicts next US president

2007-12-13 Thread Ed Gerck

Allen wrote:

William Allen Simpson wrote:
[snip]


The whole point of a notary is to bind a document to a person.  That the
person submitted two or more different documents at different times is
readily observable.  After all, the notary has the document(s)!


No, the notary does not have the documents *after* they are notarized, 
nor do they keep copies. Having been a notary I know this personally. 


Thanks, Allen. Interestingly, digital signatures do provide what
notaries can't provide in this case. Even though a digital signature
binds a document to a key, there are known legal frameworks that can
be used to bind the key to a person.

Cheers,
Ed Gerck



Re: Flaws in OpenSSL FIPS Object Module

2007-12-11 Thread Ed Gerck

Vin McLellan wrote:


What does it say about the integrity of the FIPS program, and its CMTL 
evaluation process, when it is left to competitors to point out 
non-compliance of evaluated products -- proprietary or open source -- to 
basic architectural requirements of the standard?


Enter Reality 2.0. Yesterday, security was based on authority --
on some particular agency or expert. Today, security is /also/ based
on anyone else that can point out non-compliance, and solutions.

The integrity of the FIPS program, and any other evaluation process,
can only increase when [x] are also able (entirely on their own and
not by a mandate) to point out non-compliance of evaluated products
-- proprietary or open source -- to basic architectural requirements
of the standard. Here [x] = competitors, attackers, outside experts,
anyone in general.

Cheers,
Ed Gerck



Re: a new way to build quantum computers?

2007-08-19 Thread Ed Gerck

Steven M. Bellovin wrote:

http://www.tgdaily.com/content/view/33425/118/

Ann Arbor (MI) - University of Michigan scientists have discovered a
breakthrough way to utilize light in cryptography. The new technique
can crack even complex codes in a matter of seconds. Scientists believe
this technique offers much advancement over current solutions and could
serve to foil national and personal security threats if employed


It's a matter of (lack of) journalistic English. The first paragraph's phrase:
   The new technique can crack even complex codes in a matter
   of seconds.
should have been written as:
   The new technique may crack even complex codes in a matter
   of seconds.
The scientific authors, I believe, were more careful. Their technique
still has all the basic problems of QC built in.



Re: unintended consequences?

2007-08-08 Thread Ed Gerck
Steven M. Bellovin wrote:
 Does that mean that the new fiber is less tappable?

No change, notwithstanding anecdotal references on fiber bending
as used for tapping.

Tapping a fiber can be done without much notice by matching the
index of refraction outside the outer fiber layer, after abrasion
and etching to reach that layer. There is no need for bending,
which might not be physically possible (eg, in a thick cable bundle),
would increase propagation losses beyond that caused by the tapped
signal power itself, and might create detectable backward
propagating waves (BPWs are monitored to detect fiber breach).

Low-loss taps are essential. A tap must extract a portion of
the through-signal. This, however, should not have the effect of
significantly reducing the level of the remaining signal. For
example, if one-quarter of the incident signal is extracted, then
there is a 1.25 dB loss in the remaining through-signal, which
can easily be detected.
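The 1.25 dB figure follows directly from the definition of the decibel; a quick check (the function name is illustrative):

```python
from math import log10

def tap_loss_db(extracted_fraction):
    """Loss in the through-signal when a tap extracts the given
    fraction of the incident power."""
    return -10 * log10(1 - extracted_fraction)

loss = tap_loss_db(0.25)   # one-quarter extracted -> about 1.25 dB
```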

Cheers,
Ed Gerck



Re: improving ssh

2007-07-19 Thread Ed Gerck
Ivan Krstić wrote:
 On Jul 14, 2007, at 2:43 PM, Ed Gerck wrote:
 1. firewall port-knocking to block scanning and attacks
 2. firewall logging and IP disabling for repeated attacks (prevent DoS,
 block dictionary attacks)
 3. pre- and post-filtering to prevent SSH from advertising itself and
 server OS
 4. block empty authentication requests
 5. block sending host key fingerprint for invalid or no username
 6. drop SSH reply (send no response) for invalid or no username
 
 None of these are crypto issues. 

Perhaps not the way they are solved today (see above), and that IS
the problem. For example, the lack of good crypto solutions to protocol
bootstrap contributes significantly to security issues 1-6.

Cheers,
Ed Gerck



summary, Re: improving ssh

2007-07-19 Thread Ed Gerck
List,

Thanks everyone for the feedback. There are now some
ideas how things could be improved using crypto. I 
prepared a summary of the public and private responses, 
and clarifications, at:

http://email-security.blogspot.com/2007_07_01_archive.html

Comments are welcome here (if crypto-related) and in the blog in
general.

Cheers,
Ed Gerck



improving ssh

2007-07-16 Thread Ed Gerck
List,

SSH (OpenSSH) is routinely used in secure access for remote server
maintenance. However, as I see it, SSH has a number of security issues
that have not been addressed (as far as I know), which create unnecessary
vulnerabilities.

Some issues could be minimized by turning off password authentication,
which is not practical in many cases. Other issues can be addressed by
additional means, for example:

1. firewall port-knocking to block scanning and attacks
2. firewall logging and IP disabling for repeated attacks (prevent DoS,
block dictionary attacks)
3. pre- and post-filtering to prevent SSH from advertising itself and
server OS
4. block empty authentication requests
5. block sending host key fingerprint for invalid or no username
6. drop SSH reply (send no response) for invalid or no username

I believe it would be better to solve them in SSH itself, as one would
not have to change the environment in order to further secure SSH.
Changing firewall rules, for example, is not always portable and may
have unintended consequences.
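For illustration, item 2 above (logging and disabling IPs after repeated failed attempts) might be sketched as follows. The threshold and time window are hypothetical parameters, not part of the original proposal:

```python
import time
from collections import defaultdict

MAX_FAILURES = 5   # hypothetical threshold
WINDOW = 600       # hypothetical sliding window, in seconds

failures = defaultdict(list)

def register_failure(ip, now=None):
    """Record a failed attempt from `ip`; return True when the IP
    has exceeded MAX_FAILURES within WINDOW and should be disabled."""
    now = time.time() if now is None else now
    recent = [t for t in failures[ip] if now - t < WINDOW]
    recent.append(now)
    failures[ip] = recent
    return len(recent) > MAX_FAILURES
```

A real deployment would persist the state and feed the decision into the firewall; the point here is only the sliding-window logic.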

So, I'd like to get list input (also by personal email if you think your
comment might be out of scope here) on issues #1-6 above, and to hear of
other SSH security issues that you would like to see solved /in SSH/.

Cheers,
Ed Gerck



a fraud is a sale, Re: The bank fraud blame game

2007-07-03 Thread Ed Gerck
Nicholas Bohm wrote:
 That is why efforts by banks to shift the risk to the customer are
 pernicious - they distort the incentive the bank ought to have to get
 the security right.


Yes. Today, under current practice, there's actually a strong
incentive to keep existing fraud levels than to try to scrub
it out -- fraud has become a sale:

   in 2001, the last year enough HARD data was available,
   their revenue stream from fraud was USD $550 Million.
   That all came from chargeback fees against the merchants.
   And since it was fraud, the merchants lost the product
   and the income from the product along with the shipping
   costs and the chargeback fees. Merchants, of course, have
   no choice but to pass those losses on to the honest customers.

in http://woip.blogspot.com/2007/03/fraud-is-sale.html
See also https://financialcryptography.com/mt/archives/000520.html

Cheers,
Ed Gerck



Re: question re practical use of secret sharing

2007-06-22 Thread Ed Gerck
Alexander Klimov wrote:
 So if one xors a Linux iso image and some movie, it is quite hard to
 claim that the result is copyright-protected.

Why? A copyright-protected work is still copyright-protected,
encrypted or not.

It is just as with any reversible encoding of a copyright-
protected work, such as magnetic domain encoding when storing it
in a hard disk.

Now, if you pass a copyright-protected work through an irreversible
hash function, it would be hard to claim the result to be
copyright-protected.
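The reversibility argument can be made concrete in a few lines (a sketch; the byte strings are placeholders for the works discussed):

```python
import hashlib

work = b"copyright-protected bytes"
pad = b"some other file's bytes.."   # e.g. bytes of another file, same length

# XOR is a reversible encoding: the work is fully recoverable
mixed = bytes(a ^ b for a, b in zip(work, pad))
recovered = bytes(a ^ b for a, b in zip(mixed, pad))

# An irreversible hash is different: the work cannot be reconstructed
digest = hashlib.sha256(work).hexdigest()
```

Given the pad, the work is recovered bit-for-bit, exactly like any other reversible encoding; nothing recovers the work from the hash digest.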

Cheers,
Ed Gerck



Re: BETA solution, Re: Failure of PKI in messaging

2007-02-16 Thread Ed Gerck
Guus Sliepen wrote:
 On Thu, Feb 15, 2007 at 02:47:05PM -0800, Ed Gerck wrote:
 
 Zmail actually reduces the amount of trust by not storing your usercode,
 password, or keys anywhere. This makes sense for zmail, and is an incentive
 to actually do it, to reduce risk -- anyone breaking into any zmail server,
 even physically, will not find any key or credential material for any user
 and, hence, cannot decrypt any user area (the user area keeps the address
 book and contact keys, all encrypted using the user keys that are not
 there), or user messages collected from ISPs.
 
 Where are the usercode, password and keys stored then?

N O W H E R E, as it says above.

 [...]
 This will actually be available in v3.x, with an option for client-based
 super-encryption. If you are concerned about zmail peeking into the raw
 message, which zmail does not do, you can simply agree with your message
 partner on an out-of-band passphrase and use it in your client (without
 zmail access) to encrypt. Your recipient can do the same to decrypt. What
 you get from zmail is the secure routing and distribution -- for example,
 you can require the recipient to login, allow the recipient to prevent
 phishing, and expire the message in 7 days. You can also request a return
 receipt telling you when, where, how, and by whom the message was decrypted.
 
 /If/ I trust ZMail (the people behind it and the X.509 stuff that
 secures the website) then yes, this is functionality not offered by SMTP
 and PGP or S/MIME. But I don't see this replacing PGP or S/MIME. 

There's no need to replace PGP or S/MIME. After all, less than 5% of all email
is encrypted using them. What's needed is to offer an option for the other
95% that could be encrypted and authenticated.

I also
 still don't see how this improves the trust model.

Because you have to trust zmail less (the two quotes above), and also because
you have to trust the recipient less (the return receipt, for example). In
addition, you have to trust your platform less (no private-key that is stored
in your computer; ZSentryID can be used to render key-logging ineffective).
In short, the less you have to trust everyone (including your own computer),
the better the trust model is -- what you trust is what can break your
security, when it fails.

Best,
Ed Gerck



Re: Failure of PKI in messaging

2007-02-15 Thread Ed Gerck
John Levine wrote:
  The great thing about Internet e-mail is that
 vast numbers of different mail systems that do not know or trust each
 other can communicate without prearrangement.  

That's not banking. Banks and their clients already have a trusted
relationship. The bank's webmail interface leverages this to provide
a trust reference that the user can easily verify (yes, this is my
name and balance). That's why it works, and that's what is missing
in the bank PKI email model -- what is that relationship buying you?

Email for banks should thus leverage the relationship, rather than
present an ab initio communication.

 It's hard to see any
 successful e-mail system in the future, secure or otherwise, that
 doesn't do that, since Internet mail killed all of the closed systems
 that preceded it.

It is not true that you can't secure first communications. It is just
harder and _not_ necessary for banks (because the client already knows
the bank and vice versa).

Best,
Ed Gerck



BETA solution, Re: Failure of PKI in messaging

2007-02-15 Thread Ed Gerck
James A. Donald wrote:
 Ed Gerck wrote:
 I am using this insight in a secure email solution that provides
 just that -- a reference point that the user trusts, both sending
 and receiving email. Without such reference point, the user can
 easily fall prey to con games. Trust begins as self-trust. Anyone
 interested in trying it out, please send me a personal email with
 application info.
 
 Want to try it out.  Not clear what you mean by application info.

The application info is just so I can verify your requirements.
The solution is in BETA and does not use Java, Flash, stored cookies,
or ActiveX. Works in Linux, Mac, and Win. There's also a javascript-
free version (earlier BETA).

The solution is available free (for personal use)
at https://zsentry.com/zmail/emailsecurity.html
Summary is available at http://zsentry.com and how it works at
https://zsentry.com/privacy_security_compliance_zmail.htm

The question is: Why should I trust it?

Zmail actually reduces the amount of trust by not storing your usercode,
password, or keys anywhere. This makes sense for zmail, and is an incentive
to actually do it, to reduce risk -- anyone breaking into any zmail server,
even physically, will not find any key or credential material for any user
and, hence, cannot decrypt any user area (the user area keeps the address book
and contact keys, all encrypted using the user keys that are not there), or
user messages collected from ISPs.

This is more than X.509 or PGP can do, as the private-key must be exposed
somewhere.

Next, let's see what zmail does. It creates a point-to-point encrypted
channel, with authentication, delivery and control mechanisms that you define.
It's a secure routing/delivery system, working as an add-on interface (so it
does not change how you use email).

The message itself could be encrypted by you and just delivered by zmail
-- so that you have the secure routing/delivery from zmail but do not have
to trust zmail with your plaintext.

This will actually be available in v3.x, with an option for client-based
super-encryption. If you are concerned about zmail peeking into the raw
message, which zmail does not do, you can simply agree with your message
partner on an out-of-band passphrase and use it in your client (without
zmail access) to encrypt. Your recipient can do the same to decrypt. What
you get from zmail is the secure routing and distribution -- for example,
you can require the recipient to login, allow the recipient to prevent
phishing, and expire the message in 7 days. You can also request a return
receipt telling you when, where, how, and by whom the message was decrypted.

While version 3.x is not there, or even afterwards, you can do the same with
any publicly available file-encryption tool and just attach the encrypted file
or paste its ASCII output into the message panel. You don't have to worry about
user registration, anti-phishing, authentication, delivery control or use,
as all this (and more) is handled by zmail.

Thank you for your interest and I look forward to your feedback.

Best,
Ed Gerck



Re: Failure of PKI in messaging

2007-02-13 Thread Ed Gerck

The solution is simpler than it seems.

Let's first look at one scenario that is already working
and use it as an example to show how the email scenario
may work.

Banks are already, and securely, sending and receiving
online messages to/from their clients. This is done by
a web interface, after the user logs in to their account.

Web user login can be based on a number of two-factor and
mutual authentication solutions, some of them quite ineffective
to prevent phishing but, nonetheless, better than what the
email PKI model provides.

What's missing with the email PKI model?

While the bank is asking to be authenticated by the user, it
does so by asking the user to rely on a number of third-party
references that are actually unreliable (ie, by being without
recourse, warrantless, unverifiable, and chosen by the purported
sender in what may be a con game). The bank would never allow
the user to be authenticated under the same assumptions!

So, what's missing in the email PKI model is two-sidedness.
Fairness.

It is essential to have a reference point that the user trusts.
In the web messaging example already used by banks, this is
provided by the user login -- the user trusts that that is their
account -- their name is correct, their balance and transactions
are correct.

I am using this insight in a secure email solution that provides
just that -- a reference point that the user trusts, both sending
and receiving email. Without such reference point, the user can
easily fall prey to con games. Trust begins as self-trust. Anyone
interested in trying it out, please send me a personal email with
application info.

Best,
Ed Gerck



Re: convenience vs risk -- US public elections by email and beyond

2007-02-07 Thread Ed Gerck
Thanks for all the comments in and off list. A revised write-up is
available at http://www.gather.com/viewArticle.jsp?articleId=281474976901451
More examples where convenience trumps ease-of-use, and risk, will be added
from time to time. Please check back. Comments and suggestions are welcome.

Best,
Ed Gerck



Re: Intuitive cryptography that's also practical and secure.

2007-02-05 Thread Ed Gerck
Andrea Pasquinucci wrote:
or to sit next to a 
 coercer with a gun watching her voting. 
 
 The fact that the voter is remote and outside a controlled location 
 makes it impossible to guarantee incoercibility and no-vote-selling. 
 This is not a crypto or IT problem. I do not think (correct me if I am 
 wrong) that it is possible to design a web-voting system where you can 
 vote from any PC in the world which guarantees against this.

It is possible and has been done by Safevote, the first time in 2001.
The solution also prevents vote selling. The solution was verified and
approved by the Swedish Ministry of Justice.

This is how it works. Voters are allowed to cast as many ballots as
desired but only the last ballot is counted (this is called the CL product
option). If anyone forces or rewards the voter for voting in a certain way,
and even watches the voter vote, the voter may always vote again afterwards
and effectively erase the former vote when in privacy. The coercer would have
to follow the voter 24/7 to prevent this.
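The "last ballot counted" rule can be sketched as follows (a minimal toy tally for illustration, not Safevote's actual implementation; voter IDs and choices are made up):

```python
# Minimal sketch of the "last ballot counted" rule described above
# (hypothetical data structures, not Safevote's implementation).
ballots = []  # (voter_id, choice) in the order cast

def tally(ballots):
    last = {}                      # voter_id -> last choice cast
    for voter_id, choice in ballots:
        last[voter_id] = choice    # a later ballot overwrites an earlier one
    counts = {}
    for choice in last.values():
        counts[choice] = counts.get(choice, 0) + 1
    return counts

# A coerced voter votes "A" under duress, then re-votes "B" in privacy:
ballots.append(("voter-1", "A"))
ballots.append(("voter-1", "B"))
print(tally(ballots))  # {'B': 1}
```

Only the coerced "A" ballot is erased; any honest single ballot is counted as usual.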

There is a second method, also used by Safevote in 2001 and positively
evaluated by the Swedish Ministry of Justice. Voters can use the
Internet to vote but also in a supervised environment, a precinct, where the
voter is alone to vote. The vote cast at the precinct trumps the vote
cast elsewhere, which allows the voter an easy recourse in case of
difficulty (spouse, etc.).

Opponents of online voting often ignore that online voting does not
eliminate precinct voting; it simply allows the precinct vote to be
sent online as well, from a controlled environment. This also means
that no one needs to buy a computer or have an Internet connection
in order to vote -- there is no digital divide. People can continue
to use the precinct and vote as usual.

About the screen picture issue, Safevote allows voters to print all
pages of the ballot, and all ballot choices made by the voter. However,
the server provides the ballot pages in such a way that the voter cannot
prove (except to himself when voting) how the voter actually voted. This
procedure also helps prevent vote selling and coercion. The voter cannot
produce a non-repudiable proof of how the voter voted.

Best,
Ed Gerck



convenience vs risk -- US public elections by email and beyond

2007-02-03 Thread Ed Gerck
The social aspects of ease-of-use versus security are well-known.
People would rather use something that works than something that
is secure but hard to use. Ease-of-use trumps risks.

What is less recognized, even though it seems intuitive, is that
convenience (even though costlier and harder to use) can also make
people ignore risks. Convenience trumps ease-of-use, which trumps
risks.

For example, people will often send a cell phone text message
that requires dozens of button-clicks, costs money and is less
secure (US Rep. Mark Foley case)... than do a one click, free
phone call. We all use regular email even though it is totally
insecure -- because it's convenient.

Convenience has a lot to do with personal comfort. It is often
more comfortable to send a text message or email than call and
actually speak with the person.

That you can do it on your own time, or save time, is a very
important component for personal comfort. A convenience store,
for example, sells items that saves the consumer a stop or
separate trip to the grocery store.

What happens when convenience is ignored? If convenient ways are
not available?

Let me note that opposition to any type of e-voting has led to
public elections in the US being carried out via regular email
in 2006.

It may be hard to imagine why opposition to e-voting would in any
way make adoption of email voting more likely.

It happens because voting is useful and voters want to vote.
Therefore, voters will find ways that are not safe but convenient
and available ...if more convenient and safe ways are blocked.

We already discovered that for the system to be usable is more
important than any security promises that might be made. Security
innovation has often improved usability -- for example, even though
public-key cryptography is hard to use by end-users, it represented
a major usability improvement for IT administrators. Usable
security is a major area of innovation today.

We are discovering that convenience is an even stronger force to
bring about innovation.

How about paper voting? It does not prevent large-scale fraud, which
has accompanied paper elections for over a century, and it is not
convenient: it lacks personal comfort and personal use of time. Lack
of convenience (not lack of security) will, eventually, kill paper
voting.

Regarding voting, our future is pretty obvious. Online voting
will be mainstream, and is already here in the public and private
sectors. But, to be secure, it should not happen with regular
email, e-commerce web sites, or current trust me e-voting machines
(DRE).

The socially responsible thing to do regarding voting is, thus, to
develop online voting so that it is secure _and_ easy to use. It
already has the top quality that paper voting and e-voting machines
(DRE) cannot have: convenience.

But the real-world voting security problem is very hard. Voting is an
open-loop process with an intrinsic vote gap, such that no one may
know for sure what the vote cast actually was -- unless one is willing
to  sacrifice the privacy of the vote.

A solution [1], however, exists, where one can fully preserve privacy
and security, if a small (as small as you need) margin of error is
accepted. Because the margin of error can be made as small as
one needs and is willing to pay, it is not really relevant. Even when
all operational procedures and flaws including fraud and bugs are
taken into account.

The solution is technologically neutral but has more chances for
success, and less cost, with online voting. Which just adds to the
winning hand for online voting, led by convenience.

I would like to invite your comments on this, to help build the trust
and integrity that our election system needs -- together with the
convenience that voters want. Personal replies are welcome. I am
thinking of opening a blog for such dialogue. Moderators are welcome
too.

Best,
Ed Gerck

[1] Based on a general, information-theory model of voting that applies
to any technology, first presented in 2001. See
http://safevote.com/doc/VotingSystems_FromArtToScience.pdf
Provides any desired number of independent records, which are readily
available to be reviewed by observers, without ever linking voters to
ballots.



Re: Intuitive cryptography that's also practical and secure.

2007-01-30 Thread Ed Gerck
Matt Blaze wrote:
 an even more important problem
 than psychic debunking, namely electronic voting. I think intuitive
 cryptography is a very important open problem for our field.

The first problem of voting is that neither side (paper vote vs e-vote)
accepts that voting is hard to do right -- and that we have not done
it yet. Paper is not the gold standard of voting.

The real-world voting problem is actually much harder than people think.
Voting is an open-loop process with an intrinsic vote gap, such that
no one may know for sure what the vote cast actually was -- unless one
is willing to sacrifice the privacy of the vote. This problem is
technology-agnostic.

A solution [1], however, exists, where one can fully preserve privacy
and security, if a small (as small as you need) margin of error is
accepted. Because the margin of error can be made as small as
one needs and is willing to pay, it is not really relevant. Even when
all operational procedures and flaws including fraud and bugs are
taken into account.

The solution seems fairly intuitive. In fact, it was used about 500
years ago by the Mughal rulers in India to prevent fraud.

The solution is also technologically neutral, but has more chances for
success, and less cost, with e-voting.

Best,
Ed Gerck

[1] In Shannon's cryptography terms, the solution reduces the probability
of existence of a covert channel to a value as close to zero as we want.
This is done by adding different channels of information, as intentional
redundancy. See http://www.vote.caltech.edu/wote01/pdfs/gerck-witness.pdf
I can provide more details on the fraud model, in case of interest.



Re: Intuitive cryptography that's also practical and secure.

2007-01-30 Thread Ed Gerck
[Perry, please use this one if possible]

Matt Blaze wrote:
 an even more important problem
 than psychic debunking, namely electronic voting. I think intuitive
 cryptography is a very important open problem for our field.

Matt,

You mentioned in your blog about the crypto solutions for voting and
that they have been largely ignored. The reason is that they are either
solutions to artificially contrived situations that would be impractical
in real life, or postulate conditions such as threshold trust to protect
voter privacy that would not work in real life. Technology-oriented
colleagues are not even aware why threshold trust would not work in
elections.

Thus, the first problem of voting is that neither side (paper vote vs
e-vote) accepts that voting is hard to do right -- and that we have not
done it yet.

The real-world voting problem is actually much harder than people think.

Voting is an open-loop process with an intrinsic vote gap, such that
no one may know for sure what the vote cast actually was -- unless one
is willing to sacrifice the privacy of the vote. This problem is
technology-agnostic.

A solution [1], however, exists, where one can fully preserve privacy
and security, if a small (as small as you need) margin of error is
accepted. Because the margin of error can be made as small as
one needs and is willing to pay, it is not really relevant. Even when
all operational procedures and flaws including fraud and bugs are
taken into account.

The solution seems fairly intuitive. In fact, it was used about 500
years ago by the Mughal rulers in India to prevent fraud.

The solution is also technologically neutral, but has more chances for
success, and less cost, with e-voting.

Best,
Ed Gerck

[1] In Shannon's cryptography terms, the solution reduces the probability
of existence of a covert channel to a value as close to zero as we want.
The covert channel is composed of several MITM channels between the voter
registration, the voter, the ballot box, and the tally accumulator. This
is done by adding different channels of information, as intentional
redundancy. See http://www.vote.caltech.edu/wote01/pdfs/gerck-witness.pdf
I can provide more details on the fraud model, for those who are
interested.



Re: Circle Bank plays with two-factor authentication

2006-09-29 Thread Ed Gerck

Steven M. Bellovin wrote:

I'd like to hear why you think the scheme isn't that usable.  I disagree
with you about its security.


The first condition for security is usability. I consider this to be
self-evident.

Users have difficulty already with something as simple as username/pwd.
Here, the user is additionally requested to find three numbers that match
(for example) G5:H1:D3, out of 40 matrix positions in 8 columns and 5 rows.
Anyone who has played battleship knows that matrix searching takes time and
mistakes happen.

The screen is likely to time out while the user is looking for the 3 numbers,
so that the user has to start again, possibly only to time out once more. The
user may also make a parallax mistake and read off a wrong number. After the
user logs in, the session times out after a while, requiring the same procedure anew.

Users will have a hard time using this. But I don't think there is so
much of a need to advocate for the users here -- they will just go back
to phone service (which costs much more for the bank). Eventually, because
of cost, something with higher usability will have to be used.

The introduction of a USB interface for SecurID was caused by user rejection
of a much simpler procedure -- the user just had to read the two-factor code
off a display.

The question is what the threat model is. 


We agree they should not have included the sign-on ID. It is not such a
quick fix, however, to delete it from the message, because different
accounts may share the same email address and the user would not know
which matrix to use for which account. But such a simple, clear mistake is
actually a harbinger -- there are other clear mistakes in the scheme,
and those cannot be fixed at all.

For example, the scheme (contrary to SecurID) has no protection against
an insider threat (the highest risk). The matrix combinations are fully
known in advance from the bank side (and there are only 999 of them [*]).

Further, it does not allow the usual bank security policy of separating
development (inside knowledge) from operations (the bank's servers).
Watching a couple authentication events for a user should be enough to
find which matrix the user was assigned to, allowing the next authentication
event to be fully predictable without any cooperation from or attack on the
user.
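A rough sketch of why watching a couple of authentication events suffices (the matrices here are hypothetical, randomly generated stand-ins, since the bank's actual generation scheme is not public): each observed coordinate/digit pair eliminates roughly 90% of the candidates, so at most 999 candidate matrices vanish quickly.

```python
import random

# Sketch of the insider attack described above: with at most 999
# known matrices, observing a few (coordinate, digit) pairs from
# authentication events narrows down which matrix the user holds.
# (Hypothetical matrices for illustration only.)
random.seed(1)
COLS, ROWS = "ABCDEFGH", range(1, 6)
matrices = [{(c, r): random.randint(0, 9) for c in COLS for r in ROWS}
            for _ in range(999)]

user_matrix = matrices[123]          # the matrix this user was mailed

def observe(n):
    """Insider watches n challenge/response pairs for this user."""
    cells = random.sample(list(user_matrix), n)
    return [(cell, user_matrix[cell]) for cell in cells]

candidates = matrices
for cell, digit in observe(6):       # roughly two login events
    candidates = [m for m in candidates if m[cell] == digit]

print(len(candidates))  # typically just one or a handful remain
```

After that, the insider can predict the user's next authentication response outright.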

After the severe usability burden of this scheme, one would think that
the threat model would be more robust -- to pay for your troubles.

There are, of course, also the outside threats. Contrary to what
people think, it's very common and very easy to intercept email.
ISPs can do it without trace. Companies do it all the time for
their employees. Of course, ISPs and employers already show trusted
functionality to the user but the use of insecure email here
multiplies the inside threat opportunity against the user.

There's also the question of plausible deniability. If the user's
username/pwd is compromised today, it's easy to argue it was not
safe to begin with. With this scheme, people (and the user) might
think the user is more protected -- when the user may actually
be more exposed.

Shifting the burden to the user is tempting. But, unlike shifted risk,
a shifted usability burden is something users will not tolerate. As
technologists we cannot just do the math and say -- it works! This
was the same mistake made with email encryption. That the system can
actually be used turns out to be more important than any security promise.

Cheers,
Ed Gerck

(*) Apparently, at most. Their 3-digit matrix counter, also included
in the message (!), can index at most 999 pages.



Circle Bank plays with two-factor authentication

2006-09-28 Thread Ed Gerck

Circle Bank is using a coordinate matrix to let
users pick three letters according to a grid, to be
entered together with their username and password.

The matrix is sent by email, with the user's account
sign on ID in plaintext.

Worse, the matrix is pretty useless for the majority of users,
with less usability than anything else I have seen in a long time.
This is what the email says:

  The following is your Two Factor code for Online Banking for
  username (sign on ID changed here for privacy reasons).  You will be
  required to enter the grid values associated with the three
  Two Factor boxes presented with each sign-on to Online Banking.
  Please save and store this Matrix in a safe yet accessible place.
  The required entries will be different each time you sign-on.


Two Factor Matrix

      A  B  C  D  E  F  G  H
   1  0  8  4  2  1  1  7  5
   2  7  4  9  9  2  4  2  0
   3  3  6  0  6  9  9  0  6
   4  6  4  5  1  4  6  8  4
   5  1  7  6  8  6  5  9  2


These are the additional instructions in the site:

  Check your e-mail for receipt of the Two Factor Matrix which should
  be delivered within 2-3 minutes of activation. You can save the
  e-mail to your desktop for easy access or print the matrix.
  However, do not write your sign on ID and password on this matrix –
  treat it securely as you do with a Debit or ATM card.

  Go back to the online banking sign on page and type in your sign
  on ID, password, and the three coordinates from your Two Factor
  Matrix. These three coordinates are randomly selected each time
  you sign on, so remember to keep your matrix secure and easily
  accessible.

Well, the bank itself already compromised both the sign on ID
and the matrix by sending them in an email. All that's left
now is a password, which a nice phishing email giving the
correct sign on ID might easily get.

When questioned about this, the bank's response is that this
scheme was designed by the people that design their web site
and had passed their auditing.

Of course, a compromise now would be entirely the user's fault
-- another example of shifting the burden to the user while
reducing the user's capacity to prevent a compromise.

This illustrates that playing with two-factor authentication can
make the system less secure than just username/password, while
considerably reducing usability. A lose-lose for users.

Cheers,
Ed Gerck



Re: [IP] more on Can you be compelled to give a password?

2006-08-08 Thread Ed Gerck

Ariel Waissbein wrote:


Please notice that a second distress password becomes useless if the
would-be user of this password has access to the binaries (that is, the
encrypted data), e.g., because he will copy them before inserting the
password and might even try to reverse-engineer the decryption software
before typing anything. So I'm not sure what is the setting here.


The worst-case setting for the user is likely to be when the coercer can
do all that you said and has the time/resources to do them. However, if
the distress password is strong (ie, not breakable within the time/resources
available to the coercer), the distress password can be used (for example)
to create a key that decrypts a part of the code in the binary data that
says the distress password expired at an earlier date -- whereas the access
password would create a key that decrypts another part of the code.
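One way to sketch that "two payloads, two keys" idea (a toy XOR keystream built from SHA-256 for illustration only -- a real system would use an authenticated cipher and a proper KDF; all names and strings here are hypothetical):

```python
import hashlib

# Sketch: each password derives a key that decrypts a different part
# of the stored blob. A coercer who is given the distress password
# sees only an "expired" notice. (Toy cipher -- not for real use.)

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(password: str, plaintext: bytes) -> bytes:
    key = hashlib.sha256(password.encode()).digest()
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream is its own inverse

blob = {
    "part1": encrypt("real-password", b"the actual secret data"),
    "part2": encrypt("distress-password", b"this data expired on 2006-01-01"),
}

# Under coercion, the distress password yields only the expiry notice:
print(decrypt("distress-password", blob["part2"]))  # b'this data expired on 2006-01-01'
```

If the distress password is as strong as the access password, nothing in the blob itself reveals which part is the "real" one.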

There are other possibilities as well. For example, if the binary data
contains code that requires connection to a server (for example, to supply
the calculation of some function), that server can prevent any further
access, even if the access password is entered, after the distress password
is given. The data becomes inaccessible even if the coercer has the binary data.

Another possibility is to combine the above with threshold cryptography.

Cheers,
Ed Gerck



Re: [IP] more on Can you be compelled to give a password?

2006-07-29 Thread Ed Gerck

List,

the Subject says it all. This might be of interest
here, for comments.


The answer is definitely NO, even for the naive user --
being tech-savvy is required only for the setup. Several
examples are possible.

John Smith can set two passwords, one for normal use
and the other when in distress. The distress password
may simply announce that the data is expired or, more
creatively, also make the data unreadable.

John Smith can also set two passwords, one of them
unknown to him but known to a third party (whom
John S does not have to trust) that is subject to
a different jurisdiction or different rules, or is
in another place. John Smith may comply with any
demand to disclose his password, but such a demand
may not be effective against the third party.

John Smith can have the data, encrypted with a key
controlled by his password, sitting on some Internet
server somewhere. John S never carries the data
and anyone finding the data does not know to whom it
belongs.

John Smith can also use keys with short expiration
dates, so that delay tactics against any demand to
reveal his password can run out the clock -- by the
time he must comply, the key has expired.

Of course, this is not really a safe haven for
criminals because criminal activity is often detected
and evidenced by its outside effects, including
tracing.

Cheers,
Ed Gerck



Re: Interesting bit of a quote

2006-07-13 Thread Ed Gerck

[EMAIL PROTECTED] wrote:

* That which was not recorded did not happen.
* That which is not documented does not exist.
* That which has not been audited is vulnerable.

and he did not mean this in the paths to invisibility
sense but rather that you have liability unless you can
prove that you don't.


Thanks for the quote. But "That which was not recorded did
not happen" and the other two points can, and IMO should, also
be taken in the positive sense: you need recorded, credible,
audited evidence in order to support business when arguments
(as they do) arise. Trust depends on parallel channels;
trust so grounded actually reduces liability.

The knife cuts the other way too, and that's why irrevocably
expiring documents that can be so treated (legally and
business-wise) is also necessary to reduce liability.

Cheers,
Ed Gerck



Call for Papers for the 4th VirtualGoods Workshop in Leeds

2006-07-11 Thread Ed Gerck

 C A L L   F O R   P A P E R S

  The 4th International Workshop for
   Technology, Economy and Legal Aspects of
Virtual Goods

 Organized by the GI Working Group ECOM
   and in parallel with
 IFIP Working Group 6.11
   Communication Systems in Electronic Commerce

  December 13 -15, 2006 on AXMEDIS 2006 in Leeds, England

   http://VirtualGoods.tu-ilmenau.de
   -

Full version:  http://virtualgoods.tu-ilmenau.de/2006/cfp.html

Topics of interest include, but are not restricted to, the following aspects:
-

* business models for virtual goods
* incentive and community management for virtual goods
* economic and legal aspects of virtual goods
* infrastructure services for virtual goods businesses

Important Dates:


 July 27, 2006 Full papers submitted
 August 25, 2006   Notification of acceptance
 September 2, 2006 Camera-ready papers due

Technical Committee:

Juergen Nuetzel: mailto:[EMAIL PROTECTED]
Ruediger Grimm:  mailto:[EMAIL PROTECTED]

Please freely distribute this call for papers.




Re: Is AES better than RC4

2006-05-25 Thread Ed Gerck

JA,

Please note that my email was way different in scope. My opening
sentence, where I basically said that it does not make much sense
to compare RC4 with AES, was cut in your quote -- but here it is:

AES has more uses and use modes than RC4, in addition to the fact that
it encrypts more than one byte at once. Having said that, it is curious
to note the following misconceptions:

BTW, discarding the first few hundred bytes of the RC4 keystream is easy,
fast, and has nothing to do with lack of key agility. And, if you do it,
you don't even have to hash the key (ie, you must EITHER hash the key OR
discard the first bytes).
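The discard step (often called "RC4-drop") really is that simple; a minimal sketch follows (the drop count of 768 bytes is just one commonly suggested value, not a requirement):

```python
# Sketch of "RC4-drop": discard the first bytes of the RC4 keystream,
# whose statistical biases are well known, before using it to encrypt.

def rc4_keystream(key: bytes, n: int, drop: int = 768) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation (PRGA), discarding the first `drop` bytes
    out = []
    i = j = 0
    for count in range(drop + n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        if count >= drop:
            out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

msg = b"attack at dawn"
ks = rc4_keystream(b"secret key", len(msg))
ct = bytes(m ^ k for m, k in zip(msg, ks))
pt = bytes(c ^ k for c, k in zip(ct, ks))
print(pt)  # b'attack at dawn'
```

Note the drop costs only a few hundred extra PRGA steps per key setup, once, which is why it has nothing to do with key agility.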

Cheers, Ed Gerck

Joseph Ashwood wrote:

- Original Message - From: Ed Gerck [EMAIL PROTECTED]
Subject: [!! SPAM] Re: Is AES better than RC4
...




Re: History and definition of the term 'principal'?

2006-04-27 Thread Ed Gerck

tmcghan quoted:
SDSI's active agents (principals) are keys: specifically, the private keys 
that sign statements. We identify a principal with the 
corresponding verification (public) key...


Calling a key a principal (and saying that a key speaks) is just
a poetic language used in SDSI/SPKI. The goal was to eliminate liability
by using keys as syntactic elements - a digital signature reduced to
mathematics. This did not, however, turn out to be a real-world model
because someone must have allowed the software to use that key or, at least,
turned the computer on (even if by a cron job).

Usually (but not always consistently) cryptography's use of principal is
not what the dictionary says.

Here, principal conveys the idea of owning or operating.

In this sense, SDSI is somewhat right -- the private key seems to
operate the signature -- but fails to recognize that, ultimately, the key
by itself cannot operate (or own) anything.

Being responsible for an account, or creating keys or passwords, is within
the idea of owning or operating.

Cheers,
Ed Gerck



Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-24 Thread Ed Gerck

Someone mentioned Physics in this discussion, and this
was for me a motivation to point out something that
has been overlooked by Shannon, Kolmogorov, and Chaitin,
and also in this thread.

Even though Shannon's data entropy formula looks like an
absolute measure (there is no reference included), the often
confusing fact is that it does depend on a reference. The
reference is the probability model that you assume to fit
the data ensemble. You can have the same data ensemble and
many different (infinite) probability models that fit that
data ensemble, each one giving you a valid but different
entropy value. For example, if a source sends the number 1
1,000 times in a row, what would be the source's entropy?

Aram's assertion that the sequence of bytes from 1-256 has
maximum entropy would be right if that sequence came as one of
the possible outcomes of a neutron counter with a 256-byte
register. Someone's assertion that any data has entropy X
can be countered by finding a different probability model that
also fits the data, even if the entropy is higher (!). In short,
a data entropy value involves an arbitrary constant.
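The model-dependence argued above can be made concrete with a small sketch (Python; the two probability models chosen here are illustrative, not from the original post):

```python
import math

def shannon_entropy(probs):
    # H = -sum(p * log2 p) over outcomes with nonzero probability, in bits
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The same data ensemble -- say, the byte sequence 0, 1, ..., 255 -- under
# two different assumed probability models:
h_uniform = shannon_entropy([1 / 256] * 256)  # (a) i.i.d. uniform bytes: 8 bits/symbol
h_counter = shannon_entropy([1.0])            # (b) a deterministic counter: 0 bits

# Same data, two valid but different entropy values -- the "arbitrary
# constant" is hidden in the choice of model.
```

The choice of model is exactly the "reference" in question: entropy is a property of the assumed source, not of the observed bytes alone.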

The situation, which seems confusing, improves when we realize
that only differences in data entropy can be actually measured,
when the arbitrary constant can be canceled -- if we are careful.

In practice, because data security studies usually (and often
wrongly!) suppose a closed system, then, so to say automatically,
only different states of a single system are ever considered.
Under such circumstances, the probability model is well-defined
and the arbitrary constant *always* cancels. However, data systems
are not really closed, and probability models are not always ergodic
or even accurate. Therefore, due care must be exercised when
using data entropy.

I don't want to go into too much detail here (the results
will be available elsewhere), but it is useful to take a brief
look into Physics.

In Physics, Thermodynamics, entropy is a potential [1].
As is usual for a potential, only *differences* in entropy
between different states can be measured. Since the entropy
is a potential, it is associated with a *state*, not with
a process. That is, it is possible to determine the entropy
difference regardless of the actual process which the system
may have performed, even whether the process was reversible or
not.
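The statement that only entropy differences are measurable can be written compactly; for a reversible path between states A and B this is the standard textbook form (as in Kestin [1]):

```latex
\Delta S = S_B - S_A = \int_A^B \frac{\delta Q_{\mathrm{rev}}}{T}
```

Only the difference Delta S is fixed by measurement; S itself is defined only up to an additive constant -- the same situation argued here for data entropy.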

These are quite general properties. What I'm suggesting is
that the idea that entropy depends on a reference also applies
to data entropy, not just the entropy of a fluid, and it solves
the apparent contradictions (often somewhat acid) found in data
entropy discussions. It also explains why data entropy seems
confusing and contradictory to use. It may actually be a much
more powerful tool for data security than currently used.

Cheers,
Ed Gerck

[1] For example, J. Kestin, A Course in Thermodynamics, Blaisdell,
1966.





Re: Zfone and ZRTP :: encryption for voip protocols

2006-03-16 Thread Ed Gerck

cybergio wrote:


Zfone :: http://www.philzimmermann.com/EN/zfone/index.html


...it achieves security without reliance on a PKI, key certification,
trust models, certificate authorities, or key management...

Good. But, of course, there's a trust model and you need to rely on it.

...allows the detection of man-in-the-middle (MiTM) attacks by
displaying a short authentication string for the users to read and
compare over the phone.

Depends on the trust model. May not work.

Cheers,
Ed Gerck



Re: NPR : E-Mail Encryption Rare in Everyday Use

2006-03-01 Thread Ed Gerck

John W Noerenberg II wrote:

At 5:58 PM -0800 2/24/06, Ed Gerck wrote:
A phone number is not an envelope -- it's routing information, just like
an email address. Publishing the email address is not in question and
there are alternative ways to find it out, such as search engines.


Oh really?  Then you should be able to send a note to my gmail address.


I did not quite get the irony/humor. All I'm saying about an email
address is that (1) it does not work as an envelope (hiding contents); and
(2) there's no big problem in using it. You publish your email address
every time you send an email from it, which may also make it searchable.


At 1:11 PM -0800 2/25/06, Ed Gerck wrote:
Arguments that people give each other their cell phone numbers, for example,
and even though there isn't a cell phone directory people use cell phones
well, also forget the user's point of view when comparing a phone number with
a public-key.


And that distinction is?

To me a cell-phone number is a string of characters, and a public-key is 
- a string of characters.


The distinction should be obvious if you try to tell someone your public-key
over the phone, byte by byte for 1024 bits, versus telling her your
8-digit cell phone number.

Cheers,
Ed Gerck



Re: NPR : E-Mail Encryption Rare in Everyday Use

2006-02-26 Thread Ed Gerck

Ben Laurie wrote:

I totally don't buy this distinction - in order to write to you with
postal mail, I first have to ask you for your address.


We all agree that having to use name and address are NOT the problem,
for email or postal mail. Both can also deliver a letter just with
the address (CURRENT RESIDENT junk mail, for example).

The problem is that pesky public-key. A public-key such as

[2. application/pgp-keys]...


is N O T user-friendly.

Arguments that people give each other their cell phone numbers, for example,
and even though there isn't a cell phone directory people use cell phones
well, also forget the user's point of view when comparing a phone number with
a public-key.

Finally, the properties of MY public-key will directly affect the confidentiality
properties of YOUR envelope. For example, if (on purpose or by force) my public-key
enables a covert channel (eg, weak key, key escrow, shared private key), YOUR
envelope is compromised from the start and you have no way of knowing it. This is
quite different from an address, whose single purpose is to route the communication.

That's why I said the postal analogue of the public-key is the envelope.


Ed Gerck wrote:

My $0.02: If we want to make email encryption viable (ie, user-level
viable)
then we should make sure that people who want to read a secure
communication
should NOT have to do anything before receiving it. Having to publish my
key
creates sender's hassle too ...to find the key.


So you think people can use the post to write to you without you
publishing your address?


I get junk mail all the time at two different postal addresses, without ever
having published either of them. Again, addresses and names are user friendly
(for better or for worse) while public-keys are not -- in addition to their
different security roles (see above).


Ed Gerck wrote:

BTW, users should NOT be trusted to handle keys, much less to handle them
properly. This is what the users themselves are saying and exemplifying in
15 years of experiments.


I think users are perfectly capable of handling keys. The problem they
have is in choosing operating systems that are equal to the task.


That's another notorious area where users can't be trusted -- and that's why
companies lock down their OSes. Or should a company really allow each user
to choose their desired OS? Compatibility issues likewise do not allow users
to freely choose even the OS in their homes (the Junior wants to play his
games too scenario).

Cheers,
Ed Gerck



Re: NPR : E-Mail Encryption Rare in Everyday Use

2006-02-24 Thread Ed Gerck

Paul,

Usability should by now be recognized as the key issue for security -
namely, if users can't use it, it doesn't actually work.

And what I heard in the story is that even savvy users such as Phil Z
(who'd have no problem with key management) don't use it often.

BTW, just to show that usability is king, could you please send me an
encrypted email -- I even let you choose any secure method that you want.

Cheers,
Ed Gerck

Paul Hoffman wrote:

At 1:56 PM -0800 2/23/06, Ed Gerck wrote:
This story (in addition to the daily headlines) seems to make the case that
the available techniques for secure email (hushmail, outlook/pki and pgp) do
NOT actually work.


That's an incorrect assessment of the short piece. The story says that 
it does actually work but no one uses it. They briefly say why: key 
management. Not being easy enough to use is quite different than NOT 
actually working.




Re: NPR : E-Mail Encryption Rare in Everyday Use

2006-02-24 Thread Ed Gerck

Ben Laurie wrote:

Ed Gerck wrote:

Paul,

Usability should by now be recognized as the key issue for security -
namely, if users can't use it, it doesn't actually work.

And what I heard in the story is that even savvy users such as Phil Z
(who'd have no problem with key management) don't use it often.

BTW, just to show that usability is king, could you please send me an
encrypted email -- I even let you choose any secure method that you want.


Sure I can, but if you want it to be encrypted to you, then you need to
publish a key.


This IS one of the sticky points ;-) If postal mail worked this way,
you'd have to ask me to send you an envelope before you could send me mail.
This is counter-intuitive to users.

Your next questions could well be how do you know my key is really mine...
how do you know it was not revoked ...all of which are additional sticky points.
In the postal mail world, how'd you know the envelope is really from me or
that it is secure?

Cheers,
Ed Gerck



NPR : E-Mail Encryption Rare in Everyday Use

2006-02-23 Thread Ed Gerck

This story (in addition to the daily headlines) seems to make the case that
the available techniques for secure email (hushmail, outlook/pki and pgp) do
NOT actually work.

http://www.npr.org/templates/story/story.php?storyId=5227744

Cheers,
Ed Gerck




surveillance, Re: long-term GPG signing key

2006-01-20 Thread Ed Gerck

Ben Laurie wrote:

Perhaps this is time to remind people of Security Against Compelled
Disclosure: http://www.apache-ssl.org/disclosure.pdf.



Thanks. Surveillance technology is now almost 6 years ahead of April 1999,
when the cited Report to the Director General for Research of the European
Parliament was issued.

Today, surveillance is not just a political problem or a concern for
someone involved in illegal activities, or just about breaking my own
privacy. Surveillance has become a ubiquitous threat to the right to
privacy and the duty of confidence to others whom I have a legal or moral
obligation to protect, dramatically increasing the probability of
disclosure by eliminating the need to know barrier usually applied to
reduce disclosure risk. Untrustworthy individuals exist and are hard to
detect in any organization, including federal and law enforcement agencies
and at any government level. The need to know policy, which would be
the #1 barrier preventing more individuals from being exposed to the critical
information, directly reducing the probability of disclosure, is silently
destroyed by surveillance.

Thinking about IT security needs in the XXI century, the solution of using
encryption and document control to prevent surveillance and secret-disclosure
would seem to impose itself.

Despite the apparent simplicity and widespread availability of public-key
cryptography, PGP and X.509 S/MIME, less than 5% of all email is encrypted.
Banks won't even consider using encryption for sending out monthly statements
and notices. It's not just the mounting problem with email fraud schemes such
as spoofing and phishing. Banks discovered that not even their own employees
were willing to use encryption.

The real security question of the XXI century is ease of use -- that the
security solution will actually be used takes precedence over any potential
benefits. In this context, the subject of email security is being discussed at
http://email-security.net/ -- please take a look at the Blog and Papers 
sections.
Contributions are welcome. A comparison of current email technologies is
presented at http://email-security.net/papers/pki-pgp-ibe.htm

Cheers,
Ed Gerck



Comparison of secure email technologies

2005-12-22 Thread Ed Gerck


Thanks for the comments. A new version of the work paper
Comparison Of Secure Email Technologies X.509 / PKI, PGP, and IBE
is available at http://email-security.net/papers/pki-pgp-ibe.htm
The Blog (link in the paper page) contains the most relevant
public input; private input is also appreciated.

Comments are welcome.

Cheers,
Ed Gerck



Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-16 Thread Ed Gerck

James A. Donald wrote:

--
From:   Werner Koch [EMAIL PROTECTED]

You need to clarify the trust model.  The OpenPGP
standard does not define any trust model at all.  The
standard merely defines features useful to implement a
trust model.


Clarifying the trust model sounds suspiciously like
designers telling customers to conform to designer
procedures.  This has not had much success in the past.

People using PGP in practice verify keys out of band,
not through web of trust.


James,

Yes. Your observation on out-of-band PGP key verification
is very important and actually exemplifies what Werner
wrote. Exactly because there's no trust model defined
a priori, users can choose the model they want, including
one-on-one trust.

This is important because it eliminates the need for a
common root of trust -- with a significant usability
improvement.

If the web of trust is used, the sender and recipient must
a priori trust each other's key signers, requiring a
common root of trust -- that may not even exist to begin
with.

So, instead of worrying about what trust model PGP uses,
the answer is that you can use any trust model you want --
including a hierarchical trust model as used with X.509.

Jon Callas and I had several conversations on trust in
May '97, when Jon visited me for two weeks while I was
in Brazil at the time, I think before the OpenPGP WG was
even working on these issues. This is one of the comments
Jon wrote in a listserv then, with a great insight that
might be useful today:

  As I understand it, then, I've been thinking about some
  of the wrong issues. For example, I have been wondering
  about how exactly the trust model works, and what trust
  model can possibly do all the things Dr Gerck is claiming.
  I think my confusion comes from my asking the wrong
  question. The real answer seems to be, 'what trust model
  would you like?' There is a built in notion (the
  'archetypical model' in the abstract class) of the meta-
  rules that a trust model has to follow, but I might buy a
  trust model from someone and add that, design my own, or
  even augment one I bought. Thus, I can ask for a
  fingerprint and check it against the FBI, Scotland Yard,
  and Sûreté databases, check their PGP key to make sure
  that it was signed by Mother Theresa, ask for a letter of
  recommendation from either the Pope or the Dalai Lama
  (except during Ramadan, when only approval by the Taliban
  will do), and then reject them out of hand if I haven't had
  my second cup of coffee.

Cheers,
Ed Gerck





Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-12 Thread Ed Gerck

Anne  Lynn Wheeler wrote:

OCSP provides for an online
transaction which asks whether the stale, static information is still
usable, attempting to preserve the facade that digital certificates
serve some useful purpose when there is online, direct access
capability. The alternative is to eliminate the digital certificates all
together and rather than doing an OCSP transaction, do a direct, online
transaction.


The benefits of not always requiring direct online transactions have been
pointed out before in this thread, in terms of anonymity, availability and
reliability. What happens when you get a message and the direct, online
connection isn't there? You can't decrypt it even though you need to.

Digital certs (X.509 and PGP) are useful when the key owner is not online.
There is a world where this not only happens but is also useful. BTW, this
is recognized in IBE as well.

A couple additional comments:

 the baseline analysis, threat/vulnerability models, etc ... start with
 the simplest and then build the incremental pieces  frequently
 looking at justification for the additional complexity.

 when doing the original design and architecture you frequently start
 with the overall objective and do a comprehensive design (to try and
 avoid having things fall thru the cracks).

Agreed, and that's where a baseline analysis really fails to reveal a
design's pros and cons -- because it follows a different path. Seems
logical but denies the design's own logic (which did NOT use a baseline
approach to begin with, on purpose).

Therefore, when I look into X.509 / PKI issues, or secure email issues,
a baseline analysis is not so very useful.

 the trusted third party certification authority is selling digital
 certificates to key owners for the benefit of relying parties.

The RPs are not part of the contract. Without CAs, there's no key
owner in PKI. It's for the benefit (and reduction of liability)
of the key owners.

Cheers,
Ed Gerck



Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-10 Thread Ed Gerck

Anne  Lynn Wheeler wrote:

usually when you are doing baseline ... you start with the simplest,
evaluate that and then incrementally add complexity. 


I think that's where PKI got it wrong in several parts and not
just the CPS. It started with the simplest (because it was meant to
work for a global RA -- remember X.500?) and then complexity was
added. Today, in the most recent PKIX dialogues, even RFC authors
often disagree on what is meant in the RFCs. Not to mention the
readers.

As another example, at least one IBE offer does not talk about
key lifetime at all -- in fact, the documentation online talks
about using the same key for _all_ future communications. When this,
of course, fails and key expiration is introduced, it will be
over an existing baseline... a patch. Key revocation will be
even harder to introduce in IBE.

As new capabilities conflict with the old, the end result of this
approach seems to be a lot of patched-in complexity and vulnerabilities.

It seems better to start with a performance specification for the full
system. The code can follow the specs as closely as possible for
each version, and the specs can change too, but at least the grand
picture should exist beforehand. This is what this thread's subject
paper is about: the grand picture for secure email and why we aren't
there yet (Phil's PGP is almost 15 years old) -- what's missing.

BTW, there's a new version out for the X.509 / PKI, PGP, and IBE
Secure Email Technologies paper and Blog comments in the site as well,
at http://email-security.net

Cheers,
Ed Gerck



Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-09 Thread Ed Gerck

Anne  Lynn Wheeler wrote:

Ed Gerck wrote:
Regarding PKI, the X.509 idea is not just to automate the process of 
reliance but to do so without introducing vulnerabilities in the 
threat model considered in the CPS.


but that is one of the points of the article that as you automate more 
things you have to be extra careful about introducing new 
vulnerabilities 


I believe that's what I wrote above. This rather old point (known to the X.509
authors, as one can read in their documents) is why X.509 simplifies what it
provides to the least possible _to_automate_ and puts all the local and human-
based security decisions in the CPS.

(The fact that the CPS is declared to be out of scope of X.509 is both a
solution and a BIG problem as I mentioned previously.)

the issue of public key email w/o PKI ... is you have all the identical, 
same basic components that PKI also needs.


PGP is public-key email without PKI. So is IBE. And yet neither of them has
all the identical, same basic components that PKI also needs. Now, when you
look at the paper on email security at
http://email-security.net/papers/pki-pgp-ibe.htm
you see that the issue of what components PKI needs (or not) is not
relevant to the analysis.

 ... as in my oft repeated description of a crook attacking the
authoritative agency that a certification authority uses for the basis 
of its certification, and then getting a perfectly valid certificate.


What you say is not really about X.509 or PKI, it's about the CPS. If the CPS
says it restricts the cert to the assertion that the email address was timely
responsive to a random challenge when the cert was issued, then relying
on anything else (e.g., that the email address is owned or operated by an
honest person or by a person who bears a name similar to that mailbox's 
username)
is unwarranted.

Cheers,
Ed Gerck



Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-08 Thread Ed Gerck

Anne  Lynn Wheeler wrote:

i've periodically written on security proportional to risk ... small sample
http://www.garlic.com/~lynn/2001h.html#61

...
introductioin of PKI and certificates in such an environment may
actually create greater vulnerabilities ... since it may convince the
recipient to trust the PKI operation more than they trust their own,
direct knowledge ... and the PKI operation opens up more avenues of
compromise for the attackers.


Regarding PKI, the X.509 idea is not just to automate the process of reliance
but to do so without introducing vulnerabilities in the threat model considered
in the CPS.

What's a bit of a struggle, still, is that many people do not fully realize
that the CPS is outside the scope of PKI. This is both a solution (makes the
X.509 effort independent of local needs) and a big problem, as CAs (writers
of the CPS) have the power to write almost anything they want, including
their notorious DISCLAIMER (where _nearly_ everything of value to the subscriber
is disclaimed, while _everything_ of value to the user is disclaimed).

That's why it's useful to compare the X.509 / PKI, PGP, and IBE technologies
for secure email, to know what the trade-offs are.

By comparing the capabilities and faults of the secure email products
per technology used, these and other problems come up in the score card.

Cheers,
Ed Gerck



X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-07 Thread Ed Gerck

http://email-security.net/papers/pki-pgp-ibe.htm

X.509 / PKI (Public-Key Infrastructure), PGP (Pretty Good Privacy)
and IBE (Identity-Based Encryption) promise privacy and security
for email. But comparing these systems has been like comparing apples
with speedboats and wingbats. A speedboat is a bad apple, and so on.

To help develop a common yardstick, I would like feedback (also by
private email) on a list of desirable secure email features as well
as a list of attacks or problems, with a corresponding score card for
the secure email technologies X.509 / PKI, PGP and IBE. The paper
is at http://email-security.net/papers/pki-pgp-ibe.htm

Cheers,
Ed Gerck



Call for papers -- IS-TSPQ 2006

2005-11-30 Thread Ed Gerck

==

  CALL FOR PAPERS


 First International Workshop on
  Interoperability Solutions to Trust, Security, Policies and QoS
 for Enhanced Enterprise Systems
  (IS-TSPQ 2006)

  In the frame of

Second International Conference on
 Interoperability for Enterprise Software and Applications
 (I-ESA)

 Bordeaux, France
 March 21st, 2006

 http://istspq2006.cs.helsinki.fi/

==

  SCOPE:

With the increasing demands from the networked economy and government,
interoperability has become a strategic factor for enterprise software
and applications. In the context of collaboration between enterprises
and their business services, several interoperability issues stem from
non-functional aspects (NFA). The non-functional aspects are introduced
to provide separation of concerns between the main functions of enterprise
software and the supporting themes that cause modification of the main
functional behaviour. Traditionally, this is applied to supporting
technology that addresses, for example, quality of service, security and
dependability, but it may also involve business value, business policies,
and trust.

The IS-TSPQ 2006 workshop objective is to explore architectures, models,
systems, and utilization for non-functional aspects, especially addressing
the new requirements on interoperability. Space is given for building
understanding of the non-functional aspects themselves and improve the
shared understanding of future solutions for non-functional aspect
interoperability.

The IS-TSPQ 2006 workshop is hosted by the Second International Conference
on Interoperability of Enterprise Software and Applications (I-ESA)
organized by the INTEROP NoE. The workshop aims to bring together
researchers and practitioners.


  Topics:

In keeping with the focus on interoperability and non-functional aspects,
the IS-TSPQ 2006 workshop especially encourages original unpublished papers
addressing the following areas:

- modelling of enterprises and their collaboration;
- interoperability architectures and models;
- negotiation mechanisms and representations of agreements that support
  interoperability;
- challenges from the strategic business needs;
- alignment of business needs and computing support; and
- linking the above to trusted, dependable infrastructure solutions.

General papers on these topics will be welcome, but it would be particularly
valuable for papers to relate to the target domains of:

- Trust and Trust Models, Reputation, and Privacy on data integration
  and inter-enterprise computing;
- eContracting, contract knowledge management, business commitment
  monitoring and fulfilment, and the ontologies of contracts;
- Non-Functional Aspects, Quality of Service (QoS), Quality Attributes;
- Information Security, Performance, Reliability and Availability;
- Digital Rights and Policy Management, Compliance, regulatory
  environments, corporate governance, and Policy Frameworks; and
- Business Value, Business processes, Risk Management and Asset
  Management.


  SUBMISSION GUIDELINES:

Submissions must be no longer than 12 pages and should follow
the guidelines given at
http://www.hermes-science.com/word/eng-guidelines.doc.
Authors are requested to submit their manuscripts electronically in PDF
format using the paper submission tool available at the workshop web page.

The workshop proceedings will be published after the conference (and will be
sent by post to the registered participants). Papers will be included in the
proceedings only if they are presented by one of the authors at the workshop.
The final, camera-ready papers are accepted by the publisher as Word files only.


  GENERAL INFORMATION:

For more information please visit the web site at:

 http://istspq2006.cs.helsinki.fi/

 http://www.i-esa.org/


  IMPORTANT DATES:

 Papers due: January 5, 2006
 Acceptance: February 1, 2006
 Papers for participant proceedings: February 23, 2006
 Workshop : March 21, 2006
 Final papers due: April 10, 2006




announcing email-security.net

2005-09-30 Thread Ed Gerck


I'd like to get list feedback on the opening discussion paper
at email-security.net, which is a technical development forum
dedicated to a fresh exploration of the Internet email security
issues of today. Comments and paper contributions on the theme
of email security are welcome. Papers will be peer-reviewed
before publication. Product and service listings are also
welcome, search-engine style (short pitch + link).

Regards,
Ed Gerck



Re: Another entry in the internet security hall of shame....

2005-09-13 Thread Ed Gerck

Read in an email from a website:

You'll need to send us your CC information via regular email or fax.  I
would suggest splitting up your CC info if you send it to us via email in
two separate emails for security.



instant lottery cards too, Re: reading PINs in secure mailers without opening them

2005-08-27 Thread Ed Gerck

Years ago, I could read instant win lottery cards and still leave them
as new by using the laser photoacoustic effect. A low-power chopped laser
beam is focused and line-scans the target while a microphone picks up
the acoustic waves caused by differential absorption of the laser light
as it sweeps the line. By phase-shifting the received acoustic signal
versus the chopped light signal (they have the same frequency), you
can read at different depths of the target. Adjusting to hearing at
the depth of the paper substrate, below the covering ink, all markings
could be read as if the covering ink did not exist, line by line.

The apparatus could be built today for something like $500, I believe,
using readily available parts. Distributors of the instant lottery
cards could, without detection, separate out the winning cards.

Unlike with ATM cards, no card must be stolen at the same
time for the attack to be successful.

Cheers,
Ed Gerck

Perry E. Metzger wrote:

Often, banks send people PINs for their accounts by printing them on
tamper secure mailers. Some folks at Cambridge have discovered that
it is easy to read the PINs without opening the seals...

http://news.bbc.co.uk/1/hi/technology/4183330.stm





Re: EMV and Re: mother's maiden names...

2005-07-16 Thread Ed Gerck


Thanks for some private comments. What I posted is a short
summary of a number of arguments. It's not an absolute position,
or an exposé of the credit card industry. Rather, it's a wake-up
call: the time has come to face the issues of information security
seriously, without isolating them with insurance at the cost of
consumers. Why? Because the insurance model will not scale as the
Internet and ecommerce do.

In other words, take CardSystems Exposes 40 Million Identities
as a harbinger. Now that we know more about the facts in this
recent case, expect more to come unless we begin to improve
our security paradigm.

Yes, public opinion and credit card companies can and will
force companies that process credit card data to increase
their security. However, as my comments show, what about the
acceptable risk concept that turns fraud into sales?
Do as I say, not as I do?

By weakly fighting fraud, aren't we allowing fraud systems
to become stronger and stronger, just like any biological
threat? The parasites are also fighting for survival. We're
allowing even email to be so degraded that fax and snail
mail are now becoming attractive again.

Cheers,
Ed Gerck



Re: EMV and Re: mother's maiden names...

2005-07-15 Thread Ed Gerck

Well, the acceptable risk concept that appears in these two
threads has long been a euphemism for a business model that
shifts the burden of fraud to the customer.

The dirty little secret of the credit card industry is that they
are very happy with 10% credit card fraud, over the Internet or not.

In fact, if they would reduce fraud to _zero_ today, their revenue
would decrease as well as their profits. So, there is really no
incentive to reduce fraud. On the contrary, keeping the status
quo is just fine.

This is so because of insurance: up to a certain level, which is
well within the operational boundaries of course, a fraudulent
transaction does not go unpaid through VISA, American Express or
Mastercard servers. The transaction is fully paid, with its
insurance cost borne by the merchant and, ultimately, by the
customer.

Thus, the credit card industry has successfully turned fraud into
a sale.  This is the same attitude reported to me by a car manufacturer
representative when I was talking to him about simple techniques to
reduce car theft -- to which he said: A car stolen is a car sold.
In fact, a stolen car will need a replacement, provided by insurance
or by the customer working again to buy another car, while the stolen
car continues to generate revenue for the manufacturer in service
and parts.

Whenever we see continued fraud, we should be certain: the defrauded
party is profiting from it, because no company will accept a continued
loss without doing anything to reduce it. Arguments such as we don't
want to reduce the fraud level because it would cost more to reduce the
fraud than the fraud costs are just a marketing way of saying that
a fraud has become a sale.

Fraud is a hemorrhage that adds up, while efforts to fix it --
if done correctly -- are mostly an up-front cost incurred only once.
So, to accept fraud debits is to accept that there is also a credit
that continuously compensates the debit, a credit that ultimately
flows from the customer -- just as in car theft.

What is to blame? Not only the twisted ethics behind this attitude but
also the traditional security school of thought that focuses on risk,
surveillance and insurance as the solution to security problems.

There is no consideration of what trust would really mean in terms of
bits and machines[*], no consideration that the insurance model of
security cannot scale to Internet volumes and cannot even be ethically
justified.

A fraud is a sale is the only outcome possible from such a security
school of thought. It is also sometimes referred to as acceptable
risk -- acceptable indeed, because it is paid for.

Cheers,

Ed Gerck

[*] Unless the concept of trust in communication systems is defined in
terms of bits and machines, while also making sense for humans, it really
cannot be applied to e-commerce. And there are some who use trust as a
synonym for authorization. This may work in a network, where a trusted
user is a user authorized by management to use some resources. But it
does not work across trust boundaries, or on the Internet, where no
common reporting point is possible.



[Fwd: VirtualGoods Workshop in Florence: Deadline for Submission, July 20th]

2005-07-07 Thread Ed Gerck


 Original Message 
Subject: 	VirtualGoods Workshop in Florence: Deadline for Submission, July 20th

Date:   Wed, 6 Jul 2005 15:55:37 +0200
From:   Juergen Nuetzel [EMAIL PROTECTED]
Reply-To:   Juergen Nuetzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]



Dear Members of the VirtualGoods mailing list,

this e-mail is a kind reminder of the deadline (July 20th) for the
3rd VirtualGoods workshop in Florence, Italy.

This year the workshop is part of the Axmedis conference (30 Nov - 2 Dec
2005) www.axmedis.org/axmedis2005/

See the VirtualGoods CFP for details and guidelines:
http://virtualgoods.tu-ilmenau.de/2005/cfp.html

best
Juergen Nuetzel




Re: expanding a password into many keys

2005-06-13 Thread Ed Gerck

Ian,

You need to go beyond the scope of simple-minded PKCS recommendations
to calculate keys from passwords. If you want to improve security,
just adding padding and salt is not enough.

Yes, of course, your code should add padding, so that the sha1 argument
always has the same, fixed, length for any password and key name.

Further, as you know, passwords (especially if chosen by a user)
have low entropy... let's say 10 ~ 40 bits. Key names (constrained
by natural language) should also have low entropy per character.
The end result is that a dictionary attack could be quite easy to do,
if you are not careful on several fronts.  You need to:

- define your threat model;
- warn users about bad passwords (not all bad pwds can be detected!);
- prevent really bad passwords from being used (ditto);
- prevent easy key names (ditto);
- estimate minimum lengths for passwords AND key names as a function
  of all the above -- including the threat model;
- provide for key management, with revocation, expiration and roll-over,
  before you face these needs without planning.
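The recommendations above point toward a salted, iterated derivation rather than a bare hash of password and key name. A minimal sketch in Python, assuming PBKDF2 as the stretching function (the function and parameter names here are illustrative, not from the original post):

```python
import hashlib
import hmac

def derive_key(password: bytes, key_name: bytes, salt: bytes,
               iterations: int = 100_000, length: int = 32) -> bytes:
    # Bind the key name into the salt so each name yields an
    # independent key, even for the same password.
    per_key_salt = hmac.new(salt, key_name, hashlib.sha256).digest()
    # PBKDF2's iteration count slows dictionary attacks on
    # low-entropy passwords; the salt defeats precomputed tables.
    return hashlib.pbkdf2_hmac("sha256", password, per_key_salt,
                               iterations, dklen=length)

# One key per name, all from a single password.
keys = {name: derive_key(b"correct horse", name, b"per-user-salt")
        for name in (b"signing", b"encryption")}
```

The iteration count and salt address only the dictionary-attack front; the other points in the list (threat model, key management, revocation) still need their own treatment.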

Cheers,
Ed Gerck

Ian G wrote:

I'd like to take a password and expand it into
several keys.  It seems like a fairly simple operation
of hashing the concatenation of the password
with each key name in turn to get each key.

Are there any 'gotchas' with that?

iang

PS: some pseudo code if the above is not clear.

for k in {set of keys needed}
do
key[k] = sha1( pass | k );
done






Re: Citibank discloses private information to improve security

2005-05-30 Thread Ed Gerck

Suppose you choose A4RT as your codeword. The codeword has no privacy concern
(it does not identify you) and is dynamic -- you can change it at will, if you
suspect someone else got it.

Compare with the other two identifiers that Citibank is using. Your full name
is private and static. The ATM's last-four is private and static too (unless
you want the burden to change your card often).

Lance James wrote:
But from your point, the codeword would be in the clear as well. 
Respectively speaking, I don't see how either solution would solve this.



Ed Gerck wrote:


List,

In an effort to stop phishing emails, Citibank is including in a 
plaintext

email the full name of the account holder and the last four digits of the
ATM card.





Re: Citibank discloses private information to improve security

2005-05-30 Thread Ed Gerck

Wells Fargo reported to me some time ago that they tried using digitally
signed S/MIME email messages and it did not work even for their _own employees_.

Also, in an effort to make their certs more valuable, CAs have made
digitally signed messages imply too much -- much more than they warrant
or can even represent. There are now all sorts of legal implications
tied to PKI signatures, in my opinion largely exaggerated and casuistic.

If someone forges a digitally signed Citibank message, or convincingly
spoofs it, the liability might be too large to even think about.

Using a non-signed codeword that the user has defined beforehand allows the
user to have a first proof that the message is legitimate. Since the user
chooses it, there is no privacy concern or liability for the bank. Of course,
here trust decreases with time -- a fresh codeword is more valuable. But if
the user can refresh it at will, each user will have the security that he wants.


Matt Crawford wrote:

On May 26, 2005, at 13:24, Ed Gerck wrote:


A better solution, along the same lines, would have been for Citibank to
ask from their account holders when they login for Internet banking,
whether they would like to set up a three- or four-character combination
to be used in all emails from the bank to the account holder.



Why couldn't they just use digitally signed S/MIME email?  I'm sure that 
works just as well as signed SSL handshakes.



Oh.  Answered my own question, didn't I?




Citibank discloses private information to improve security

2005-05-26 Thread Ed Gerck

List,

In an effort to stop phishing emails, Citibank is including in a plaintext
email the full name of the account holder and the last four digits of the
ATM card.

Not only are these personal identifiers sent over an insecure channel,
but such use is also not authorized by the person they identify.
Therefore, I believe some points need to be made regarding the right
to privacy and security expectations.

It's the usual tactic of pushing the liability to the user. The account
holder gets the full liability for the security procedure used by
the bank.

A better solution, along the same lines, would have been for Citibank to
ask from their account holders when they login for Internet banking,
whether they would like to set up a three- or four-character combination
to be used in all emails from the bank to the account holder. This
combination would not be static, because it could be changed by the user
at will, and would not identify the user in any other way.

Private, identifying customer information has been used before
by banks for customer login. The account holder's name, the ATM card
number, the account number, and the SSN have all been used, and abandoned,
for Internet banking login. Why? Because the increased exposure
created additional risks.

Now, by unilaterally disclosing the account holder's name as used in
the account and the last four digits of the ATM number, Citibank is
backtracking its own advances in user login (when it abandoned those
identifiers).

Of course, banks consider the ATM card their property, as well as the
number it contains. However, the ATM card number is a unique personal
identifier and should not be disclosed in a plaintext email without
authorization.

A much better solution (see above) exists, even using plaintext email --
use a codeword that is agreed beforehand with the user. This would be
a win-win solution, with no additional privacy and security risk.

Or is email becoming even more insecure, with our private information
being more and more disclosed by those who should actually guard it,
in the name of security?

Cheers,
Ed Gerck


--

I use ZSentry Mail Secure Email
https://zsentry.com/R/index.html/[EMAIL PROTECTED]



Re: two-factor authentication problems

2005-03-13 Thread Ed Gerck

Matt Crawford wrote:
On Mar 5, 2005, at 11:32, Ed Gerck wrote:
The worst part, however, is that the server side can always fake your
authentication using a third-party because the server side can
always calculate ahead and generate your next number for that
third-party to enter -- the same number that you would get from your
token. So, if someone breaks into your file using your number --
who is responsible? The server side can always deny foul play.

Huh?  The server can always say 'the response was good' when it wasn't
good.  Unless someone reclaims the server from the corrupt operator and
analyzes it, the results are the same.
This is a different attack. With an outside auditor, they will
notice what you said but not what I said: a simple log verification will
show the response was NOT good in your case. What I said passes all
auditing 100% -- and the operator does not have to be corrupt.


two-factor authentication problems

2005-03-06 Thread Ed Gerck
Current solutions for two-factor authentication may be weaker than they
seem. Let me present two cases, including SecurID, for comments.
1. First case, without a clock, take a look at:
 http://www.ietf.org/internet-drafts/draft-mraihi-oath-hmac-otp-02.txt
Because the algorithm MUST be sequence or counter-based, poor
transmission can cause repeated failures to resynch. Also, someone
could get your token and quickly generate dozens of numbers without
you knowing it -- when you use the token later on, your new number is
not accepted and could fall outside the resynch window (even for two
numbers in sequence).
The worst part, however, is that the server side can always fake your
authentication using a third-party because the server side can
always calculate ahead and generate your next number for that
third-party to enter -- the same number that you would get from your
token. So, if someone breaks into your file using your number --
who is responsible? The server side can always deny foul play.
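For reference, the counter-based scheme in the cited draft (later published as RFC 4226, HOTP) and its look-ahead resynch window can be sketched as follows; the window size and the `server_accepts` helper are illustrative:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226 dynamic truncation)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def server_accepts(secret: bytes, server_counter: int, otp: str,
                   window: int = 5):
    """Try the next `window` counters; return the new counter on success,
    or None if the token has drifted past the resynch window -- the
    failure mode described above."""
    for c in range(server_counter, server_counter + window):
        if hotp(secret, c) == otp:
            return c + 1
    return None
```

Note that the server holds the same secret and can compute any future code itself, which is exactly the repudiation problem raised in the paragraph above.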
2. SecurID:
The last comment above applies. The server side always knows what
numbers your token should generate now and for days into the
future (clock drift included) -- that's how they are recognized.
So, again, if someone breaks into your file using your number --
who is responsible?
Cheers,
Ed Gerck


Re: Can you help develop crypto anti-spoofing/phishing tool ?

2005-02-09 Thread Ed Gerck

Jerrold Leichter wrote:
N-version programming - which is what you are proposing here - can increase
your level of trust against random errors[2], but its of no use at all against
a deliberate attack. 
I heartily disagree. If the N outputs are continuously verified for
coherence, any difference readily stands out. The number N and the cost
of always using those N outputs should, of course, be weighed against
the cost of failing to detect an attack. Theoretically, however, there
is always a finite number N that can make the probability of such an
attack _as small as you please_. The mathematical basis for this result
was proven by Shannon more than 50 years ago; the practical intuition
was demonstrated by the Mughal rulers of India (more than 500 years
ago), who are known to have used at least three parallel reporting
channels to survey their provinces with some degree of reliability,
notwithstanding the additional effort to do so.
(Recall the conversation here a couple of months ago
about how difficult - to the point of impossibility - it would be to use
external testing to determine if a crypto-chip had been spiked.)
Aren't we talking about different things? A covert channel, looking at
the crypto-chip by itself, is demonstrably impossible to detect with
certainty. However, what I was talking about is NOT this situation.
You are looking at *one* crypto-chip, a single source of information, a single
trusted source, when you have no correction channel available.  I am
looking at N outputs, N sources of information (each one as independent as
possible but not necessarily 100% independent). You have no reference for
detecting a spike, I have N-1.
Cheers,
Ed Gerck


Re: Can you help develop crypto anti-spoofing/phishing tool ?

2005-02-08 Thread Ed Gerck

Amir Herzberg wrote:
Ed Gerck responded to me:
Can you trust what TrustBar shows you?
This trust translates to:
-- Trusting the TrustBar code (which is open source, so it can be
validated by tech-savvy users / sys-admins)
-- Trusting that this code was not modified (same as for any other
aspect of your machine)
-- Trusting the CA -- well, not exactly; TrustBar allows users to
specify for each CA whether the user is willing to display logos/names
from this CA automatically, or wants to be asked for each new site.
Only if the user selects `display logo/name automatically` does he
really trust the CA in this regard, and still the brand (logo) of the
CA appears (for accountability). I'll admit, though, that currently
VeriSign is `trusted` in this respect by default (of course the user
can change this easily).
In other words, if TrustBar can be verified, it can be trusted.
Redundancy is useful to qualify trust in information. Trusting the
TrustBar code might be hard to qualify by itself (i.e., source-code
verification) but redundancy helps here [1]. Trust increases if the two
channels, TrustBar and the browser CA status [2], agree with each other.
TrustBar can become a trusted verifier after positively checking against
the browser CA status. This would also help prevent one-sided attacks on
TrustBar, as one would need to attack both TrustBar and the browser CA
status.
Cheers,
Ed Gerck
[1] This is also my solution to the famous trust paradox posed by Ken
Thompson in his Reflections on Trusting Trust. Trust is earned, not
given. To trust Ken's code, I would first ask two or more programmers
(whom I choose) to code the same function and submit their code to tests.
If they provide the same answers for a series of inputs, including random
inputs, I would have a qualification for trusting (or not) Ken's code.
This works even without source code. Trust is not in the thing; it's in
how the thing works.
[2] Mozilla already shows the signing CA name when the mouse is over the lock
symbol in SSL. This is more readily visible than clicking with the right-button
and reading the cert.


[Fwd: Call for Papers: Virtual Goods 2005]

2005-01-26 Thread Ed Gerck

Dear Virtual Goods Community,
here is the link to the cfp:
http://virtualgoods.tu-ilmenau.de/2005/cfp_short.txt
Please feel free to distribute it.
Best regards
Juergen
Here is the text:
 C A L L   F O R   P A P E R S

 The 3rd International Workshop for
 Technology, Economy, Social and Legal Aspects of Virtual Goods
 including the new Virtual Goods Tutorial

 Organized by the GI Working Group ECOM
 and in parallel with IFIP Working Group 6.11
 Communication Systems in Electronic Commerce

 June 2 - 4, 2005, Ilmenau, Germany
 http://VirtualGoods.tu-ilmenau.de
 -
Full version:  http://virtualgoods.tu-ilmenau.de/2005/cfp.html
Topics of interest include, but are not restricted to, the following
aspects:
-
* Digital rights management
* Peer-to-Peer systems
* Payment systems
* New business models
* Solution architectures
* Legal aspects
* Inter-cultural aspects
* Security and privacy
* Content protection
* Watermarking
* Cryptographic mechanisms
Important Dates:

 March 1, 2005  Full papers submitted
 April 5, 2005  Notification of acceptance
 May 1, 2005    Web-ready papers due
Technical Committee:

General chair: Ruediger Grimm:  mailto:[EMAIL PROTECTED]
Program Chair: Juergen Nuetzel: mailto:[EMAIL PROTECTED]
Local chair:   Thomas Boehme: mailto:[EMAIL PROTECTED]
Please freely distribute this call for papers.


Re: Entropy and PRNGs

2005-01-11 Thread Ed Gerck
John Denker wrote:
 For the sources of entropy that I consider
real entropy, such as thermal noise, for a modest payoff I'd
be willing to bet my life -- and also the lives of millions
of innocent people -- on the proposition that no adversary,
no matter how far in the future and no matter how resourceful,
will ever find in my data less entropy than I say there is.
Let me comment, John, that thermal noise is not random and is
not 'real entropy' (btw, is there 'fake entropy' in your
view?).
There are several quantities that can be estimated in thermal
noise, reducing its entropy according to what you seem to expect
today. See photon bunching, as an example that is usually ignored.
Another, if trivial, example is the observation that thermal noise
is not white noise. Yet another is that no noise is really white,
because of causality (in other words, its duration must be finite).
The noise due to photon fluctuations in thermal background radiation,
for another example, also depends on the number of detectors used to
measure it, as well as on single- or multiple-mode illumination, and
on both internal and external noise sources.
Yes, it's entirely possible that someone in the future will know
more about your entropy source than you do today! Even thermal
noise.
OTOH, why are nuclear decay processes considered safe as a source
of entropy? Because the range of energies precludes knowing or
tampering with the internal state. These processes are, however,
not free from correlations either.
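The practical upshot is that entropy claims should be measured conservatively. A crude sketch of the most-common-value min-entropy estimate (one estimator among several; the function name and sample data are illustrative):

```python
import math
from collections import Counter

def min_entropy_per_sample(samples) -> float:
    """Min-entropy of the i.i.d. model, from the most likely symbol.
    Correlations between samples (e.g. non-white noise, photon
    bunching) are NOT captured here, so a correlated source can
    deliver even less than this estimate -- the caution above."""
    p_max = Counter(samples).most_common(1)[0][1] / len(samples)
    return -math.log2(p_max)

# A biased bit source delivers far less than 1 bit per sample:
biased = [0] * 90 + [1] * 10
```

Even a perfect estimate of today's distribution says nothing about what an adversary may later learn about the source's internal state, which is the point being argued here.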
Cheers,
Ed Gerck



Re: When A Pencil And Paper Makes Sense

2004-11-18 Thread Ed Gerck
Here are some things that can --and do-- go wrong with the scanned ballots:
- blank votes (where the voter could have made a mark but did not) can
be voted at will after the ballot is cast by the voter, and no one can
detect the fraud.
- by looking at the vote pattern, a voter contract to vote for a certain
candidate can be verified by a third-person (not necessarily a poll official,
could be a party observer) and the voter can be rewarded or punished (if the
pattern does not show up).
- in a two-candidate race, voters circle a candidate and write 'not this one'.
Should it not count, even though voter intent is clear?
- voters pause the pencil on an option and decide not to mark it; nonetheless,
the optical reader reads it as a vote. Was the voter's intent respected?
- the cost of just storing these paper ballots, even after the election is
over, runs to several million dollars for San Francisco, for example.
- the cost of printing, and of the special paper, gives this system a high
recurring cost, election after election, in addition to the mounting storage
cost for past elections.
The solution to secure voting is not the current generation of 'trust me'
electronic voting machines either, with or without an added paper ballot that
the voter can verify. The solution begins, as I see it, with recognizing the
hard information-theory problem behind what seems to be a simple process. This
analysis, and a solution, is outlined in
http://www.vote.caltech.edu/wote01/pdfs/gerck-witness.pdf
Cheers,
Ed Gerck
R.A. Hettinga wrote:
http://www.forbes.com/2004/11/05/cx_ah_1105tentech_print.html
Forbes

Ten O'Clock Tech
When A Pencil And Paper Makes Sense
Arik Hesseldahl,   11.05.04, 10:00 AM ET
Thank goodness, it's over. Sometime around 4:30 A.M. Wednesday I went to
bed, not the least bit uncertain that George W. Bush had been re-elected.
 But the one thing during this election cycle about which I have been
uncertain is electronic voting. Florida in 2000 was a mess, and in
reaction, some states and counties have turned to newfangled electronic
voting machines, thinking that computer technology is the answer to a
voting system that has started to creak under pressure.
 It seems that despite much worry about a repeat of Florida in other
states, voting has gone pretty smoothly. Electronic voting methods are
getting high marks. Of the 27,500 voting problems reported to the Verified
Voting Project, a San Francisco-based group that monitored the election for
voting problems, less than 6% of the issues reported stemmed from
electronic voting machines.
 Election officials in states like Nevada, Georgia and Hawaii gave
electronic voting systems a try. There were some problems: a memory card on
an electronic voting machine in Florida failed; five machines in Reno,
Nev., malfunctioned, causing lines to back up.
 Overall voter turnout was high. The Committee for the Study of the
American Electorate, a nonprofit, nonpartisan outfit based in Washington,
D.C., estimated that 120.2 million people, or 59.6% of those eligible to
vote, cast ballots in this election, which would be an improvement of 5%
and 15 million people, compared with the 2000 elections, and would make
2004's turnout the highest since 1968.
 Still, that's not as high as voter participation in my home state of
Oregon, where 1.7 million people, or nearly 82% of those eligible, voted.
 In Oregon, voters cast their votes from home rather than going to a
polling place. They submit their ballots by mail. The state abolished
polling places in 1998 and has been voting entirely by mail ever since.
 Voters get their ballots roughly two weeks before election day. This year
some were delayed because of an unexpectedly high number of voter
registrations. Ballots must be received by county elections offices by 8
P.M. on the day of the election. Drop boxes are located throughout the
state, as well.
 Voting should indeed take time and effort. It's undoubtedly important. But
I like Oregon's common-sense approach. Voting from the comfort of your own
home eliminates the inherent disincentive that comes from having to stand
on a long line, for example.
 It's pretty simple. Oregon voters fill out their ballots using a pencil,
just like those standardized tests everyone took in high school. If they
want to write in a candidate, the ballot allows for that, too.
 I thought of this as I stood for about 45 minutes in a long, cold line at
6:30 A.M. to vote in my neighborhood in New York's Upper East Side.
Throughout the day I heard reports from around the country of people who
had to stand in line for as long as eight hours so they could vote, and I
wondered how many others just threw up their hands in frustration because
they had someplace else to be.
 The mail-in ballot also gives the voter a little time to consider his or
her choice. Too often, voters will enter a voting booth knowing a few of
the people they intend to vote for, but read about some ballot initiative
or amendment for the first time. Rather than

Re: public-key: the wrong model for email?

2004-09-18 Thread Ed Gerck
Ben Laurie wrote:
Ed Gerck wrote:
If the recipient cannot in good faith detect a key-access ware, or a
GAK-ware, or a Trojan, or a bug, why would a complete background
check of the recipient help?
Let's assume for a moment that a solution exists that satisfies your 
requirements. Since the recipient _must_ be able to read the document in 
the end, and is assumed to be incapable of securing their software, then 
the document is still available to third parties without the consent of 
the sender, is it not?
The recipient was not assumed to be entirely incapable of securing his
software. The recipient can be trusted to do a number of things that are
basic, for example, to operate an email software. In fact, we can even
assume a pretty sophisticated recipient, trained to use all the security
email software systems commercially available. Still, the recipient will
be incapable of verifying whether his RSA private key is weak or not. Thus,
even entirely unwittingly and with best efforts, the recipient can put the
sender's message security at risk when using that recipient's RSA public
key.
It seems to me that fixing the PK problem would in no way improve the 
senders situation given that threat model.
The sender's situation can be improved mostly when the sender trusts the
least possible (ideally, nothing) regarding message security. In other
words, the sender would like to avoid anything that is not under control or
cannot be directly verified by the sender. Doing a background check of the
recipient is orthogonal to the PK problem. It does not help nor solve it.
Cheers,
Ed Gerck


Re: public-key: the wrong model for email?

2004-09-18 Thread Ed Gerck
Anne & Lynn Wheeler wrote:
At 12:53 PM 9/16/2004, Ed Gerck wrote:
If the recipient cannot in good faith detect a key-access ware, or a
GAK-ware, or a Trojan, or a bug, why would a complete background
check of the recipient help?

a complete audit and background check ... would include an audit of 
the recipient ... not just the recipient person  but the recipient 
... as in the recipient operation.
I agree with you that more checks are usually better. But if you are talking
about someone else verifying the recipient's machine, we're talking about
what seems to me to be a much worse security risk. Who exactly would you
trust to verify your machine and potentially read your decrypted email and
other documents? A neutral third-party? Just allowing a third-party to
have access to my machine would go against a number of NDAs and security
policies that I routinely sign. Further, in terms of internal personnel doing
it, we know that 70% of the attacks are internal. The solution to my email
security problem should not be installing a back-door in your machine.
(snip) the 
leakage of a classified document wouldn't solely be restricted to 
technical subversion.
The leakage of a classified document has a number of aspects to consider
in order to prevent it, as we all know. From the sender's viewpoint, however,
what strategy should have the most impact in reducing leakage of a classified
document? It seems clear to me that it is in avoiding anything that is not
under control or cannot be directly verified by the sender. In other words,
it should be more effective to eliminate the sender's reliance on the
recipient's public-key (the sender cannot control or verify whether the key
is weak or not) than do yet another background check of the recipient operation.
Even if the recipient passes today, it may be vulnerable tomorrow. The
sender can't control it.
Cheers--/Ed Gerck


Re: public-key: the wrong model for email?

2004-09-17 Thread Ed Gerck
Adam Shostack wrote:
On Thu, Sep 16, 2004 at 12:05:57PM -0700, Ed Gerck wrote:
| Adam Shostack wrote:
| 
| I think the consensus from debate back last year on
| this group when Voltage first surfaced was that it
| didn't do anything that couldn't be done with PGP,
| and added more risks to boot.
| 
| Voltage actually does. It allows secure communication
| without pre-registering the recipient.

Generate a key for [EMAIL PROTECTED] encrypt mail to
Bob to that key.  When Bob shows up, decrypt and send over ssl.
How do you know when the right Bob shows up? And...why encrypt? The
email never left your station. Your method is equivalent to: send
anything to Bob at [EMAIL PROTECTED]. When Bob shows
up pray he is the right one and send email over ssl. You also have to
run an ssl server (or trust Bob's server key).
With Voltage, you encrypt the email to [EMAIL PROTECTED]
and send it. The sender's work is done[*]. Yes, the other problems still
exist with Voltage.
Cheers,
Ed Gerck
[*] The recipient can decrypt the Voltage email only IF both the sender
and recipient can refer to the same key generation parameters for the
recipient. This is a problem that I have not seen Voltage discuss. Users
in different or competing administration boundaries will not be able
to communicate with each other in general.


Re: public-key: the wrong model for email?

2004-09-17 Thread Ed Gerck
Bill Stewart wrote:
At 10:19 PM 9/15/2004, Ed Gerck wrote:
Yes, PKC provides a workable solution for key distribution... when you
look at servers. For email, the PKC solution is not workable (hasn't 
been)
and gives a false impression of security. For example, the sender has no
way of knowing if the recipient's key is weak (in spite of its length)
or has some key-access feature. Nonetheless, the sender has to use 
that key.

I don't understand the threat model here.  The usual models are
 ...
Good list, even though missing some points that are important here,
mentioned below.
The disclosure threat is that the message may be disclosed AFTER it is
decrypted (this may happen even without the recipient being at fault).
I am NOT talking about the disclosure threat. Except for the disclosure
threat, the threat model includes anything that is not under control
or cannot be directly verified by the sender.
The obvious strategy for the sender is to trust the least possible
(ideally, nothing) regarding message security.
Public-key encryption for email makes this difficult from the start.
With all the public-key fancy footwork (eg, CA certs, CRL, OCSP, etc.),
the sender still has to trust the public-key generated by the recipient
regarding its basic property to encrypt messages that can only be
decrypted by the recipient when the message arrives.
Yes, the sender can do a challenge-response using that key and confirm
that the recipient has the private-key. But what the sender does not
have, and cannot have, is any assurance that his messages are only
decryptable by the recipient. The sender has no way of knowing if the
recipient's public-key is weak (in spite of its great length), or has
some key-access feature, or bug, or has been revoked in the mean
time [1]. Trusting the recipient helps but the recipient may not even
know it (in spite of the recipient's best efforts).
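One concrete way a key can be weak "in spite of its great length" is an RSA modulus that shares a prime with someone else's key because of a faulty RNG. Neither party can tell from their own key alone, which is exactly the kind of flaw the sender cannot control or directly verify. A toy sketch with tiny primes (purely illustrative, not from the thread):

```python
from math import gcd

# Toy illustration (tiny primes; real keys use primes of ~1024 bits):
# a faulty RNG makes two independently generated moduli share a prime.
p, q, r = 101, 103, 107
n1 = p * q   # recipient A's public modulus
n2 = p * r   # recipient B's public modulus, silently sharing p

# Each modulus passes any check the sender can run on it in isolation.
# Only a cross-key comparison reveals the weakness:
shared = gcd(n1, n2)
assert shared == p   # both private keys are now recoverable by factoring
print(f"n1 = {shared} * {n1 // shared}; n2 = {shared} * {n2 // shared}")
```

Raising the key length does nothing against this failure mode; only comparing keys across users exposes it, and that comparison is not something an individual sender can perform.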
This problem also affects SSH and anything that uses public-key crypto,
including smart-card generated keys. For email, however, it can break
message security in spite of the sender's and recipient's best efforts.
Since the sender is the party who bears most, if not all, the risk, my
question was whether email security could benefit by using a different
model. Public-key crypto could still be used, but possibly not as it
is today. Again, the problem is both technical and business-wise.
If you _still_ want more control, set up a web server,
and instead of sending your actual secret message, send
Encrypt ( Key=Alice, Message=
- BEGIN PGP SIGNED MESSAGE
Alice - I've sent you an encrypted message at
https://bob.example.net/cookie123456.PGP
This URL will self-destruct in 5 business days.
- Bob
- END PGP SIGNED MESSAGE
)
The attacker could read the first message and download the second message.
This could make the attack detectable, though (but not necessarily traceable).
Cheers,
Ed Gerck
[1] The security fault happens when you (in spite of all your best efforts) send
an email message using a public-key that is revoked (eg, because the private-key was
compromised) at the time the message is received, due to various delays such as in
message transport, certificate revocation, and compromise discovery. You simply have
no way of knowing, even if the key was not listed in a CRL at the very second you
sent the message, whether it will have been compromised by the time the message
is received. It gets worse. If the private-key is compromised any time after the
message is sent, the message can be decrypted from a cache copy somewhere -- even
if the recipient keeps the decrypted copy safe.


Re: public-key: the wrong model for email?

2004-09-16 Thread Ed Gerck
Anne & Lynn Wheeler wrote:
PGP allows that a relying party vet a public key with the key owner 
and/or vet the key with one or more others (web-of-trust)

note that while public key alleviates the requirement that a key be 
distributed with secrecy ... it doesn't eliminate the requirement that 
the public key have some trust characteristic associated (i.e. secrecy 
will tend to include some trust, but elimination of secrecy doesn't 
eliminate the requirement for trust).
Lynn,
My question on this is not about trust, even though I usually have many
questions on trust ;-)
Yes, PKC provides a workable solution for key distribution... when you
look at servers. For email, the PKC solution is not workable (hasn't been)
and gives a false impression of security. For example, the sender has no
way of knowing if the recipient's key is weak (in spite of its length)
or has some key-access feature. Nonetheless, the sender has to use that
key.
The analogy here is with you sending a confidential document using a courier
you don't know and cannot verify. Would you?
Further, it is generally in the recipient's interest that the decision to
send document X using channel Y should be under the sender's control. Any
limitation or directive imposed by the recipient on the sender (such as:
use my public-key) can shift the burden of risk to the recipient (your key
was weak, hence I had a loss). Liability follows power. The current use of
PKC in email is neither good to the sender nor to the recipient.
To further clarify, my comment is not that PKC is not useful for email. I
believe it is, but not directly used as it is today. The PKC key distribution
solution is backwards for email.
Cheers,
Ed Gerck


Re: public-key: the wrong model for email?

2004-09-16 Thread Ed Gerck
Benne,
With Voltage, all communications corresponding to the same public key can be
decrypted using the same private key, even if the user is offline. To me, this
sounds worse than the PKC problem of trusting the recipient's key. Voltage
also corresponds to mandatory key escrow, as you noted, with all its drawbacks.
Cheers,
Ed Gerck
Weger, B.M.M. de wrote:
Hi Ed,
What about ID-based crypto: the public key can be any string, such as
your e-mail address. So the sender can encrypt even before the
recipient has a key pair. The private key is derived from the ...


Re: public-key: the wrong model for email?

2004-09-16 Thread Ed Gerck
Anne & Lynn Wheeler wrote:
  the issue then is what level do you trust the recipient, what is the
threat model, and what are the countermeasures.
if there is a general trust issue with the recipient (not just their key 
generating capability) ... then a classified document compromise could 
happen after it has been transmitted. you may have to do a complete 
audit  background check of the recipient before any distribution of 
classified document.
If the recipient cannot in good faith detect a key-access ware, or a
GAK-ware, or a Trojan, or a bug, why would a complete background
check of the recipient help?
Talking about trust, it is important to note that when the email is sent
the recipient is already trusted not to disclose. But even though the
recipient is trustworthy his environment may not be. It is not a matter of
personal trust or complete background checks. This may all be fine
and, unknown to the recipient, the key might be weak, on purpose or by
some key-access feature included in the software (unknown to the user).
Or, the PKC software may have a bug (as PGP recently disclosed).
Loss from disclosure is also something that is much more important for
the sender. If the recipient's public-key fails to be effective in
protecting the sender, the sender's information is compromised. That's
why I make the point that PKC for email has it backwards: the sender
should not be at the recipient's mercy.
PKC for email also reverses the usual business model, because the
recipient is not so interested in protecting the sender or paying
for the sender's security. The sender would.
Regarding the use of PKC to sign emails, I see no problems using
PKC. The sender has the private-key, has the incentive to keep it
secure, and uses it to sign when he so desires. The sender does not
need to rely on the recipient, or receive anything from the recipient,
in order to sign an email. The problem with PKC email signature is
PKI. However, email signature can also be done without PKI, by PGP.
Cheers,
Ed Gerck


Re: public-key: the wrong model for email?

2004-09-16 Thread Ed Gerck
Adam Shostack wrote:
I think the consensus from debate back last year on
this group when Voltage first surfaced was that it
didn't do anything that couldn't be done with PGP,
and added more risks to boot.
Voltage actually does. It allows secure communication
without pre-registering the recipient.
Cheers,
Ed Gerck


Re: system reliability -- Re: titles

2004-08-31 Thread Ed Gerck
David Honig wrote:
At 12:12 AM 8/27/04 -0700, Ed Gerck wrote:
David Honig wrote:
Applications can't be any more secure than their
operating system. -Bram Cohen
That sounds cute but I believe it is incorrect. Example: error-
correcting codes. The theory of error-correcting codes allows
information to be coded so that it can be recovered even after
significant corruption. 

Yes.  But what makes you think the implementation you are
using is not subverted? 
If I have N independent platforms, the probability is smaller.
What makes you trust your md5 (or whatever) calculator,
which is how/why you trust your downloaded code? 
Ah, the word trust. What makes you trust something cannot be
that something by itself. It needs to be provided in multiple,
independently as possible, channels. What may make me trust a
MD5 fingerprint is the fact that the code works according to
some test vectors I define.
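The test-vector check is easy to make concrete: RFC 1321, which defines MD5, publishes known digests, and a downloaded implementation can be checked against them (plus vectors of your own, obtained over an independent channel). A minimal sketch:

```python
import hashlib

# Check an MD5 implementation against published RFC 1321 test vectors.
# Vectors you define yourself should travel over an independent channel.
vectors = {
    b"": "d41d8cd98f00b204e9800998ecf8427e",
    b"abc": "900150983cd24fb0d6963f7d28e17f72",
}
for msg, expected in vectors.items():
    digest = hashlib.md5(msg).hexdigest()
    assert digest == expected, f"mismatch for {msg!r}"
print("implementation agrees with RFC 1321 vectors")
```

Passing the vectors does not prove the code is unsubverted, of course; it is one more independent channel, which is the point being made above.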
And, summarizing a Turing award lecture, what makes you
trust your compiler, much less ps or other OS monitors? 
That lecture needs to be understood after the word trust is
defined -- which, btw, the lecture never did.
 What this means is that the search for the perfect operating
system as the solution to security is backwards.

What it means is that the weakest link will break first.
This is true but only if the weakest link is isolated. If you have
a strand with three threads, the weakest thread will break first but
the other two threads will still hold. Increase the number of threads
to N > 1 and the weakest thread is not really relevant any more. Of
course, the system will still fail under an excess stress, but not
because one thread (read, OS) failed.
Humans, generally.  
Yes, humans AND data are the weakest links.
Also the infrastructure under your
tools, ie OS.  And the tools used to build your tools, 
ie compilers or interpreters.
But, according to the theory of error-correcting codes, the influence
of the errors you mention can be reduced to a value as close to ZERO as
you desire.
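The threads analogy maps directly onto the simplest error-correcting code, the repetition code: send each bit over n independent "threads" and take a majority vote, so that any single corrupted copy is outvoted. A short sketch:

```python
from collections import Counter

def encode(bits, n=3):
    """Repetition code: emit each bit n times (n independent 'threads')."""
    return [b for bit in bits for b in [bit] * n]

def decode(received, n=3):
    """Majority vote per group of n: survives fewer than n/2 flipped copies."""
    return [Counter(received[i:i + n]).most_common(1)[0][0]
            for i in range(0, len(received), n)]

msg = [1, 0, 1, 1]
sent = encode(msg)
sent[0] ^= 1        # one copy of the first bit is corrupted
sent[7] ^= 1        # one copy of the third bit is corrupted
assert decode(sent) == msg   # the message is recovered intact
```

Increasing n drives the failure probability toward zero for a fixed per-copy error rate, at a cost in redundancy; practical codes (Hamming, Reed-Solomon) achieve the same effect far more efficiently.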
Its not a search for a perfect anything; its a recognition
that trust in a system relies on trusting a great number of things; 
if any one is toast, the system is toast.  
Not if designed well. A good security system is not like a balloon
that pops with one shot.
Ask Nicodemo Scarfo... used great crypto, but a $10 keylogger
got him.  He might have run the most secure MULTICs around,
but the weakest link was his keyboard, and a black-bag job.
When the heart confutes the mind, that man's hand confutes itself.
Cheers,
Ed Gerck


Re: Microsoft .NET PRNG (fwd)

2004-08-10 Thread Ed Gerck
The PRNG should be the least concern when using MSFT's cryptographic
provider. The MSFT report 140sp238.pdf says:
RSAENH stores keys in the file system, but relies upon Microsoft
Windows XP for the encryption of the keys prior to storage.
Not only does RSAENH write keys to a lower-security file system... it also does
not provide the encryption security to protect those keys. Because RSAENH
trusts Windows XP to provide that critical link in the security, RSAENH cannot
be trusted to provide the security. In addition, there is a third problem in
securing the keys, namely the security gap between RSAENH and Windows XP.
The most troubling aspect, however, is that RSAENH makes it easy to provide
a covert channel for key access. And yet it is FIPS 140-1 Level 1 compliant.
Cheers,
Ed Gerck
Anton Stiglic wrote:
There is some detail in the FIPS 140 security policy of Microsoft's
cryptographic provider, for Windows XP and Windows 2000.  See for example
http://csrc.nist.gov/cryptval/140-1/140sp/140sp238.pdf
where they say the RNG is based on FIPS 186 RNG using SHS.  The seed is
based on the collection of a lot of data, enumerated in the security policy.
I would guess that what is written is true, lest NIST would look very bad if
someone reverse engineered the code and showed that what they certified was
wrong.
So based on that it would seem that the PRNG in recent Microsoft
cryptographic providers is o.k.
--Anton


Re: The future of security

2004-07-30 Thread Ed Gerck
Email end-to-end: PGP, PGP/MIME, S/MIME. Not tunnel SSL or SSL
at the end points.
Lars Eilebrecht wrote:
According to Ed Gerck:

But encryption and authentication are a hassle today, with less
than 2% of all email encrypted (sorry, can't cite the source I know).

Are these 2% 'only' S/MIME and PGP-encrypted email messages or
is SSL-encrypted email communication included?
ciao...


Re: identification + Re: authentication and authorization

2004-07-09 Thread Ed Gerck

Aram Perez wrote:
Hi Ed and others,
Like usual, you present some very interesting ideas and thoughts. The
problem is that while we techies can discuss the identity theft definition
until we are blue in the face, the general public doesn't understand all the
fine subtleties. Witness the (quite amusing) TV ads by CitiBank.
Thanks. That's why my suggestion is that techies should solve the real
problem (authentication theft) that is allowing identity theft to create
damage to the general public. What's the use of stolen identity data if
that data cannot be used to impersonate the victim? At most, it would be
a breach of privacy... but not a breach of access and data protected by
the access. Furthermore, if identity data were not used as authenticators,
they would not be so available (and valuable!) to be stolen in the
first place.
BTW, the confusion between identification and authentication begins in
our circle. Just check, for example, the Handbook of Applied Cryptography by
Menezes et al.:
10.2 Remark (identification terminology) The terms identification
and entity authentication are used synonymously throughout this book.
Cheers,
Ed Gerck


identification + Re: authentication and authorization

2004-07-08 Thread Ed Gerck
I believe that a significant part of the problems discussed here is that
the three concepts named in the subject line are not well-defined. This
is not a question of semantics, it's a question of logical conditions
that are at present overlapping and inconsistent.
For example, much of what is called identity theft is actually
authentication theft -- the stolen credentials (SSN, driver's
license number, address, etc) are used to falsely *authenticate* a
fraudster (much like a stolen password), not to identify. Once we
understand this, a solution to what is called identity theft
is to improve the *authentication mechanisms*, for example by using
two-factor authentication. Which has nothing to do with identification,
impersonation, or even the security of identification data.
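The two-factor mechanisms alluded to are typified by one-time codes, where a stolen code, unlike a stolen SSN, is useless for the next login. A minimal sketch of the HOTP construction later standardized as RFC 4226 (standard library only):

```python
import hashlib, hmac, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code from a shared secret and a moving counter (RFC 4226).
    HMAC-SHA1 the counter, dynamically truncate, reduce to `digits` digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Each counter value yields a fresh code from the same shared secret:
print(hotp(b"12345678901234567890", 0))
print(hotp(b"12345678901234567890", 1))
```

The authenticator here proves possession of a secret and a synchronized state; no identity data is exposed or reusable, which is the separation of authentication from identification argued for above.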
In further clarifying the issue, it seems that what we need first is
a non-circular definition for identity. And, of course, we need a
definition that can be applied on the Internet.  Another important
goal is to permit a safe automatic processing of identification,
authentication and authorization [1].
Let me share with you my conclusion on this, in revisiting the
concept of identification some time ago. I found it useful to ask
the meta question -- what is identification, that we can identify it?
In short, a useful definition of identification should also work
reflexively and self-consistently [2].
In this context, what is to identify? I think that to identify
is to look for connections. Thus, in identification we should look
for logical and/or natural connections. For example:
- between a fingerprint and the person that has it,
- between a name and the person that answers by that name,
- between an Internet host and a URL that connects to it,
- between an idea and the way we can represent it in words,
- conversely, between words and the ideas they represent,
- etc.
Do you, the reader, agree?
If you agree you have just identified. If you do not agree, likewise
you have identified! The essence of identification is thus to find
connections -- where absence of connections also counts.
Identification can thus be understood not only in the sense of an
identity connection, but in the wider sense of any connection.
Which one to use is just a matter of protocol expression, need, cost
and (very importantly) privacy concerns.
The word coherence is useful here, meaning any natural or logical
connection. To identify is to look for coherence. Coherence with and
between a photo, a SSN, an email address, a public-key and other
attributes: *Identification is a measure of coherence*.
The same ideas can be applied to define authentication and
authorization in a self-consistent way, without overlapping with
each other.
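The "identification is a measure of coherence" idea can be sketched as a score over attribute connections; all names and attributes below are hypothetical, chosen only to illustrate the measure:

```python
def coherence(claimed: dict, observed: dict) -> float:
    """Fraction of claimed attributes that cohere with observation.
    1.0 = full coherence; 0.0 = no connection found (absence also counts)."""
    if not claimed:
        return 0.0
    matches = sum(1 for k, v in claimed.items() if observed.get(k) == v)
    return matches / len(claimed)

claimed  = {"name": "Alice", "email": "alice@example.com", "key": "AB12"}
observed = {"name": "Alice", "email": "alice@example.com", "key": "ZZ99"}
print(coherence(claimed, observed))   # 2 of 3 claimed attributes cohere
```

A real protocol would weight attributes by how hard they are to forge and how independently they were observed; the point of the sketch is only that identification becomes a measurable quantity rather than a yes/no identity match.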
Comments?
Cheers,
Ed Gerck
[1] The effort should also aim to safely automate the process of reliance
by a relying party. This requires path processing and algorithms that
eliminate policy violations (i.e., vulnerabilities) that might be hard
to recognize or difficult to foresee and that would interfere with the
goal of a wholly automated process for handling identification,
authentication and authorization.
[2] This answer should be useful to the engineering development of all
Internet protocols, to all human communication modes, to all
information transfer models and anywhere one needs to reach beyond
one's own point in space and time.

