Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread Bill Frantz

On 10/11/13 at 10:32 AM, zoo...@gmail.com (Zooko O'Whielacronx) wrote:


Don't try to study
foolscap, even though it is a very interesting practical approach,
because there doesn't exist documentation of the protocol at the right
level for you to learn from.


Look at the E language sturdy refs, which are a lot like the 
Foolscap references. They are documented at www.erights.org.


Cheers - Bill

---
Bill Frantz| Truth and love must prevail  | Periwinkle
(408)356-8506  | over lies and hate.  | 16345 
Englewood Ave
www.pwpconsult.com |   - Vaclav Havel | Los Gatos, 
CA 95032


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Bill Frantz

On 10/9/13 at 7:18 PM, crypto@gmail.com (John Kelsey) wrote:

We know how to address one part of this problem--choose only 
algorithms whose design strength is large enough that there's 
not some relatively close by time when the algorithms will need 
to be swapped out.  That's not all that big a problem now--if 
you use, say, AES256 and SHA512 and ECC over P521, then even in 
the far future, your users need only fear cryptanalysis, not 
Moore's Law.  Really, even with 128-bit security level 
primitives, it will be a very long time until the brute-force 
attacks are a concern.


We should try to characterize what a very long time is in 
years. :-)
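As a back-of-the-envelope sketch (the guess rate here is a made-up assumption about an implausibly fast adversary, not a measured capability), the arithmetic looks like this:

```python
# Rough estimate of brute-force search time for various key sizes.
# GUESSES_PER_SECOND is a hypothetical, aggressively optimistic rate.
GUESSES_PER_SECOND = 10**18
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_search(bits, rate=GUESSES_PER_SECOND):
    """Expected years to search half of a 2**bits key space."""
    return (2 ** (bits - 1)) / rate / SECONDS_PER_YEAR

for bits in (56, 80, 128, 256):
    print(f"{bits:3d}-bit key: ~{years_to_search(bits):.3e} years")
```

Even at that fanciful rate, the expected 128-bit search runs to roughly 5 x 10^12 years, which puts "a very long time" comfortably beyond Moore's-Law worries.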



This is actually one thing we're kind-of on the road to doing 
right in standards now--we're moving away from 
barely-strong-enough crypto and toward crypto that's going to 
be strong for a long time to come.


We had barely-strong-enough crypto because we couldn't afford 
the computation time for longer key sizes. I hope things are 
better now, although there may still be a problem for certain 
devices. Let's hope they are only needed in low security/low 
value applications.



Protocol attacks are harder, because while we can choose a key 
length, modulus size, or sponge capacity to support a known 
security level, it's not so easy to make sure that a protocol 
doesn't have some kind of attack in it.
I think we've learned a lot about what can go wrong with 
protocols, and we can design them to be more ironclad than in 
the past, but we still can't guarantee we won't need to 
upgrade.  But I think this is an area that would be interesting 
to explore--what would need to happen in order to get more 
ironclad protocols?  A couple random thoughts:


I fully agree that this is a valuable area to research.



a.  Layering secure protocols on top of one another might 
provide some redundancy, so that a flaw in one didn't undermine 
the security of the whole system.


Defense in depth has been useful from longer ago than the 
Trojans and Greeks.



b.  There are some principles we can apply that will make 
protocols harder to attack, like encrypt-then-MAC (to eliminate 
reaction attacks), nothing is allowed to change its 
execution path or timing based on the key or plaintext, every 
message includes a sequence number and the hash of the previous 
message, etc.  This won't eliminate protocol attacks, but will 
make them less common.


I think that the attacks on MAC-then-encrypt and timing attacks 
were first described within the last 15 years. I think it is 
only normal paranoia to think there may be some more equally 
interesting discoveries in the future.
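The framing discipline described above -- encrypt-then-MAC, a sequence number, and the hash of the previous message -- can be sketched concretely. This is a toy illustration only: the hash-counter keystream is a stand-in for a real cipher, and the frame layout and names are invented, not taken from any standard.

```python
import hashlib
import hmac
import struct

def _keystream(key, nonce, length):
    # Toy SHA-256 counter-mode keystream -- illustration only, not a
    # vetted cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + struct.pack(">Q", counter)).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, seq, prev_hash, plaintext):
    """Frame = seq (8) || prev_hash (32) || ciphertext || tag (32)."""
    nonce = struct.pack(">Q", seq)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    header = nonce + prev_hash
    # Encrypt-then-MAC: the tag covers the ciphertext, not the plaintext.
    tag = hmac.new(mac_key, header + ct, hashlib.sha256).digest()
    return header + ct + tag

def open_(enc_key, mac_key, expected_seq, expected_prev_hash, frame):
    header, ct, tag = frame[:40], frame[40:-32], frame[-32:]
    # Verify the MAC before touching the ciphertext, to block
    # reaction attacks.
    if not hmac.compare_digest(tag, hmac.new(mac_key, header + ct, hashlib.sha256).digest()):
        raise ValueError("bad MAC")
    seq = struct.unpack(">Q", header[:8])[0]
    if seq != expected_seq or header[8:] != expected_prev_hash:
        raise ValueError("replayed or reordered message")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, header[:8], len(ct))))
```

The sequence number and previous-message hash tie each frame to its position in the conversation, so replay or reordering fails the same check as tampering.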



c.  We could try to treat at least some kinds of protocols more 
like crypto algorithms, and expect to have them widely vetted 
before use.


Most definitely! Lots of eyes. Formal proofs, because they are a 
completely different way of looking at things. Simplicity. All 
will help.




What else?
...
Perhaps the shortest limit on the lifetime of an embedded 
system is the security protocol, and not the hardware. If so, 
how do we as a society deal with this limit?


What we really need is some way to enforce protocol upgrades 
over time.  Ideally, there would be some notion that if you 
support version X of the protocol, this meant that you would 
not support any version lower than, say, X-2.  But I'm not sure 
how practical that is.


This is the direction I'm pushing today. If you look at auto 
racing you will notice that the safety equipment commonly used 
before WW2 is no longer permitted. It is patently unsafe. We 
need to make the same judgements in high security/high risk applications.
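Kelsey's "support X implies refusing anything below X-2" idea can be sketched as a version-negotiation rule. The window size and function names here are hypothetical, for illustration only:

```python
WINDOW = 2  # supporting version X implies refusing anything below X - WINDOW

def negotiate(our_max, peer_versions, window=WINDOW):
    """Pick the highest mutually supported version, enforcing the floor.

    Refusing to speak anything older than our_max - window is what
    forces obsolete protocol versions out of service over time.
    """
    floor = our_max - window
    acceptable = [v for v in peer_versions if floor <= v <= our_max]
    if not acceptable:
        raise ConnectionError(f"peer offers nothing in [{floor}, {our_max}]")
    return max(acceptable)
```

A peer stuck on a version below the floor simply cannot connect, which is the protocol analogue of banning pre-war safety equipment from the track.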


Cheers - Bill

---
Bill Frantz        | The nice thing about standards | Periwinkle
(408)356-8506      | is there are so many to choose | 16345 Englewood Ave
www.pwpconsult.com | from.   - Andrew Tanenbaum     | Los Gatos, CA 95032




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Bill Frantz

On 10/9/13 at 7:12 PM, watsonbl...@gmail.com (Watson Ladd) wrote:


On Tue, Oct 8, 2013 at 1:46 PM, Bill Frantz fra...@pwpconsult.com wrote:
... As professionals, we have an obligation to share our 
knowledge of the limits of our technology with the people who 
are depending on it. We know that all crypto standards which 
are 15 years old or older are obsolete, not recommended for 
current use, or outright dangerous. We don't know of any way 
to avoid this problem in the future.


15 years ago was 1998. Diffie-Hellman is much, much older and still
works. Kerberos is of similar vintage. Feige-Fiat-Shamir is from 1988,
Schnorr signatures from 1989.


When I developed the VatTP crypto protocol for the E language 
(www.erights.org) about 15 years ago, key sizes of 1024 bits 
were high security. Now they are seriously questioned. 3DES was 
state of the art. No widely distributed protocols used 
Feige-Fiat-Shamir or Schnorr signatures. Do any now? I stand by 
my statement.




I think the burden of proof is on the people who suggest that 
we only have to do it right the next time and things will be 
perfect. These proofs should address:


New applications of old attacks.
The fact that new attacks continue to be discovered.
The existence of powerful actors subverting standards.
The lack of a "did it right" example to point to.


[... long post of problems with TLS, most of which are valid 
criticisms, deleted as not addressing the above questions.]



Protocols involving crypto need to be so damn simple that if it
connects correctly, the chance of a bug is vanishingly small. If we
make a simple protocol, with automated analysis of its security, the
only danger is a primitive failing, in which case we are in trouble
anyway.


I agree with this general direction, but I still don't have the 
warm fuzzies that good answers to the above questions might 
give. I have seen too many projects that set out to do it right 
and didn't pull it off.


See also my response to John Kelsey.

Cheers - Bill

---
Bill Frantz        | Privacy is dead, get over | Periwinkle
(408)356-8506      | it.                       | 16345 Englewood Ave
www.pwpconsult.com |   - Scott McNealy         | Los Gatos, CA 95032




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread Bill Frantz

On 10/8/13 at 7:38 AM, leich...@lrw.com (Jerry Leichter) wrote:


On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:


We seriously need to consider what the design lifespan of our 
crypto suites is in real life. That data should be 
communicated to hardware and software designers so they know 
what kind of update schedule needs to be supported. Users of 
the resulting systems need to know that the crypto standards 
have a limited life so they can include update in their 
installation planning.



This would make a great April Fool's RFC, to go along with the classic evil 
bit.  :-(


I think the situation is much more serious than this comment 
makes it appear. As professionals, we have an obligation to 
share our knowledge of the limits of our technology with the 
people who are depending on it. We know that all crypto 
standards which are 15 years old or older are obsolete, not 
recommended for current use, or outright dangerous. We don't 
know of any way to avoid this problem in the future.


I think the burden of proof is on the people who suggest that we 
only have to do it right the next time and things will be 
perfect. These proofs should address:


New applications of old attacks.
The fact that new attacks continue to be discovered.
The existence of powerful actors subverting standards.
The lack of a "did it right" example to point to.


There are embedded systems that are impractical to update and 
have expected lifetimes measured in decades...
Many perfectly good PCs will stay on XP forever because, even 
if there were the will and staff to upgrade, recent versions of 
Windows won't run on their hardware.

...
I'm afraid the reality is that we have to design for a world in 
which some devices will be running very old versions of code, 
speaking only very old versions of protocols, pretty much 
forever.  In such a world, newer devices either need to shield 
their older brethren from the sad realities or relegate them to 
low-risk activities by refusing to engage in high-risk 
transactions with them.  It's by no means clear how one would 
do this, but there really aren't any other realistic alternatives.


Users of this old equipment will need to make a security/cost 
tradeoff based on their requirements. The ham radio operator who 
is still running Windows 98 doesn't really concern me. (While 
his internet-connected system might be a bot, the bot 
controllers will protect his computer from others, so his radio 
logs and radio firmware update files are probably safe.) I've 
already commented on the risks of sending Mailman passwords in 
the clear. Low value/low risk targets don't need titanium security.


The power plant which can be destroyed by a cyber attack, cf. 
Stuxnet, does concern me. Gas distribution systems do concern 
me. Banking transactions do concern me, particularly business 
accounts. (The recommendations for online business accounts 
include using a dedicated computer -- good advice.)


Perhaps the shortest limit on the lifetime of an embedded system 
is the security protocol, and not the hardware. If so, how do we 
as a society deal with this limit?


Cheers -- Bill

---
Bill Frantz        | gets() remains as a monument | Periwinkle
(408)356-8506      | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns.             | Los Gatos, CA 95032




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-08 Thread Bill Frantz

On 10/6/13 at 8:26 AM, crypto@gmail.com (John Kelsey) wrote:

If we can't select ciphersuites that we are sure we will always 
be comfortable with (for at least some foreseeable lifetime) 
then we urgently need the ability to *stop* using them at some 
point.  The examples of MD5 and RC4 make that pretty clear.
Ceasing to use one particular encryption algorithm in something 
like SSL/TLS should be the easiest case--we don't have to worry 
about old signatures/certificates using the outdated algorithm 
or anything.  And yet we can't reliably do even that.


We seriously need to consider what the design lifespan of our 
crypto suites is in real life. That data should be communicated 
to hardware and software designers so they know what kind of 
update schedule needs to be supported. Users of the resulting 
systems need to know that the crypto standards have a limited 
life so they can include update in their installation planning.


Cheers - Bill

---
Bill Frantz        | If the site is supported by | Periwinkle
(408)356-8506      | ads, you are the product.   | 16345 Englewood Ave
www.pwpconsult.com |                             | Los Gatos, CA 95032




Re: [Cryptography] Why is emailing me my password?

2013-10-03 Thread Bill Frantz

On 10/2/13 at 7:16 AM, g...@kinostudios.com (Greg) wrote:


I'm interested in cases where Mailman passwords have been abused.


Show me one instance where a nuclear reactor was brought down 
by an earthquake! Just one! Then I'll consider spending the $$ 
on it!


And while you're at it, show me the cost of the abuse.

Cheers - Bill

-
Bill Frantz        | When it comes to the world     | Periwinkle
(408)356-8506      | around us, is there any choice | 16345 Englewood Ave
www.pwpconsult.com | but to explore? - Lisa Randall | Los Gatos, CA 95032




Re: [Cryptography] NIST about to weaken SHA3?

2013-10-01 Thread Bill Frantz
On 9/30/13 at 4:09 PM, cryptogra...@dukhovni.org (Viktor Dukhovni) wrote:

 Just because they're after you, doesn't mean they're controlling
 your brain with radio waves.  Don't let FUD cloud your judgement.

ROTFLOL!

---
Bill Frantz| Since the IBM Selectric, keyboards have gotten
408-356-8506   | steadily worse. Now we have touchscreen keyboards.
www.pwpconsult.com | Can we make something even worse?



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-01 Thread Bill Frantz
On 10/1/13 at 12:29 AM, di...@webweaving.org (Dirk-Willem van Gulik) wrote:


In a lot of other fields, meanwhile, it is very common for 'run 
of the mill' constructions - such as calculating a floor, a 
wooden support beam, or a joist - to take the various standards 
and liberally apply safety factors. A factor of 10 or 20x too 
strong is quite common, *especially* in 'consumer' constructions.


In cave rescue, the National Cave Rescue Commission (a training 
organization) uses a 7:1 system safety ratio in its training. 
This is for building systems where people could be seriously 
hurt or killed if the system fails.


Cheers - Bill, NCRC instructor

---
Bill Frantz        | If the site is supported by | Periwinkle
(408)356-8506      | ads, you are the product.   | 16345 Englewood Ave
www.pwpconsult.com |                             | Los Gatos, CA 95032




Re: [Cryptography] are ECDSA curves provably not cooked? (Re: RSA equivalent key length/strength)

2013-10-01 Thread Bill Frantz

On 10/1/13 at 8:47 AM, basc...@gmail.com (Tony Arcieri) wrote:


If e.g. the NSA knew of an entire class of weak curves, they could perform
a brute force search with random looking seeds, continuing until the curve
parameters, after the seed is run through SHA1, fall into the class that's
known to be weak to them.


Or NSA could have done what it did with DES and chosen a 
construct that didn't have that weakness. We just don't know.


Cheers - Bill

---
Bill Frantz        | I don't have high-speed | Periwinkle
(408)356-8506      | internet. I have DSL.   | 16345 Englewood Ave
www.pwpconsult.com |                         | Los Gatos, CA 95032




Re: [Cryptography] Why is emailing me my password?

2013-10-01 Thread Bill Frantz

On 10/1/13 at 1:43 PM, mar...@bluegap.ch (Markus Wanner) wrote:


Let's compare apples to apples: even if you manage to actually read the
instructions, you actually have to do so, have to come up with a
throw-away-password, and remember it. For no additional safety compared
to one-time tokens.


Let Mailman assign you a password. Then you don't have to worry 
about someone collecting all your mailing list passwords and 
reverse engineering your password generation algorithm. You'll 
find out what the password is in a month. Save that email so you 
can make changes. Get on with life.


Let's not increase the level of user work in cases where there 
isn't, in fact, a security problem.


I'm interested in cases where Mailman passwords have been abused.

Cheers - Bill

---
Bill Frantz        | If the site is supported by | Periwinkle
(408)356-8506      | ads, you are the product.   | 16345 Englewood Ave
www.pwpconsult.com |                             | Los Gatos, CA 95032




Re: [Cryptography] check-summed keys in secret ciphers?

2013-09-30 Thread Bill Frantz

On 9/30/13 at 1:16 AM, i...@iang.org (ianG) wrote:


Any comments from the wider audience?


I talked with a park ranger who had used a high-precision GPS 
system which decoded the selective availability encrypted 
signal. Access to the device was very tightly controlled and it 
had a control-meta-shift-whoopie which erased the key should the 
device be in danger of being captured. And this was a relatively 
low security device.


Cheers - Bill

---
Bill Frantz        | After all, if the conventional wisdom was working,
408-356-8506       | the rate of systems being compromised would be
www.pwpconsult.com | going down, wouldn't it? -- Marcus Ranum



Re: [Cryptography] check-summed keys in secret ciphers?

2013-09-30 Thread Bill Frantz

On 9/30/13 at 2:07 PM, leich...@lrw.com (Jerry Leichter) wrote:

People used to wonder why NSA asked that DES keys be 
checksummed - the original IBM Lucifer algorithm used a full 
64-bit key, while DES required parity bits on each byte.  On 
the one hand, this decreased the key size from 64 to 56 bits; 
on the other, it turns out that under differential crypto 
attack, DES only provides about 56 bits of security anyway.  
NSA, based on what we saw in the Clipper chip, seems to like 
running crypto algorithms tight:  Just as much effective 
security as the key size implies, exactly enough rounds to 
attain it, etc.  So *maybe* that was why they asked for 56-bit 
keys.  Or maybe they wanted to make brute force attacks easier 
for themselves.


The effect of NSA's work with Lucifer to produce DES was:

  DES was protected against differential cryptanalysis, without 
making this attack public.

  The key was shortened from 64 bits to 56 bits by adding parity bits.

I think the security side of NSA won here. It is relatively easy 
to judge how much work a brute force attack will take. It is 
harder to analyze the effect of an unknown attack mode. DES 
users could make an informed judgment based on $$$, Moore's law, 
and the speed of DES.
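The parity convention is easy to sketch: each byte of the 8-byte key keeps its 7 high key bits and sets its low bit so the byte has odd parity, leaving 56 effective key bits. A minimal illustration (function name is mine):

```python
def set_odd_parity(key8):
    """Force each byte of an 8-byte DES key to odd parity.

    The low bit of every byte becomes a parity bit, so only
    8 * 7 = 56 bits of the 64 actually carry key material.
    """
    out = bytearray()
    for b in key8:
        ones = bin(b >> 1).count("1")          # count the 7 key bits
        parity_bit = int(ones % 2 == 0)        # make the byte total odd
        out.append((b & 0xFE) | parity_bit)
    return bytes(out)
```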


Cheers - Bill

---
Bill Frantz        | Privacy is dead, get over | Periwinkle
(408)356-8506      | it.                       | 16345 Englewood Ave
www.pwpconsult.com |   - Scott McNealy         | Los Gatos, CA 95032




[Cryptography] Hardware Trojan Protection

2013-09-25 Thread Bill Frantz
On 9/22/13 at 6:07 PM, leich...@lrw.com (Jerry Leichter) wrote in another thread:


Still, it raises the question:  If you can't trust your 
microprocessor chips, what do you do?  One possible answer:  
Build yourself a processor out of MSI chips.  We used to do 
that, not so long ago, and got respectable performance (if not, 
perhaps, on anything like today's scale).  An MSI chip doesn't 
have enough intrinsic computation to provide much of a hook for 
an attack.  Oh, sure, the hardware could be spiked - but to do 
*what*?  Any given type of MSI chip could go into many 
different points of many different circuit topologies, and 
won't see enough of the data to do much anyway.  There may be 
some interface issues:  This stuff might not be fast enough to 
deal with modern memory chips.  (How would you attack a memory 
chip?  Certainly possible if you're making a targeted attack - 
you can slip a small processor into the design to do all kinds 
of nasty things.  But commercial off-the-shelf memory chips are 
built right up to the edge of what we can make, so you can't 
change all that much.)

Some stuff is probably just impossible with this level of 
technology.  I doubt you can build a Gig-E Ethernet interface 
without large-scale integration.  You can certainly do the 
original 10 Mb/sec - after all, people did!  I have no idea if 
you could get to 100 Mb/sec.


Do people still make bit-slice chips?  Are they at a low-enough 
level to not be a plausible attack vector?


You could certainly build a respectable mail server this way - 
though it's probably not doing 2048-bit RSA at a usable speed.


We've been talking about crypto (math) and coding (software).  
Frankly, I, personally, have no need to worry about someone 
attacking my hardware, and that's probably true of most 
people.  But it's *not* true of everyone.  So thinking about 
how to build harder to attack hardware is probably worth the effort.


You might get a reasonable level of protection implementing the 
core of the crypto operations in a hardware security module 
(HSM) using Field Programmable Gate Arrays (FPGAs) or Complex 
Programmable Logic Devices (CPLDs). There is an open source set 
of tools for programming these beasts based on Python called 
MyHDL (www.myhdl.org). The EFF DES cracker may have some useful 
ideas too.


The largest of these devices are also pressing the current chip 
limits. There isn't a lot of extra space for Trojans. In 
addition, knowing what to look at is somewhat difficult if pin 
assignments etc are changed from chip to chip at random.


As with any system, there are tool chain issues. Open source 
helps, but there is always the Ken Thompson attack. The best 
solution I can think of is to audit the output. Look very 
carefully at the output of the tool chain, and at the final 
piece that loads the configuration data into the device.


Cheers - Bill

---
Bill Frantz        | Web security is like medicine - trying to do good
408-356-8506       | for an evolved body of kludges - Mark Miller
www.pwpconsult.com |



Re: [Cryptography] RSA equivalent key length/strength

2013-09-24 Thread Bill Frantz
On 9/21/13 at 5:07 PM, c...@funwithsoftware.org (Patrick Pelletier) wrote:


I'm inclined to agree with you, but you might be 
interested/horrified in the "1024 bits is enough for anyone" 
debate currently unfolding on the TLS list:


http://www.ietf.org/mail-archive/web/tls/current/msg10009.html


I think that this comment is a serious misinterpretation of the 
discussion on the TLS list.


The RFC under discussion is a Best Current Practices (BCP) RFC. 
Some people, including me, think that changes to the protocol or 
current implementations of the protocol are out of scope for a 
BCP document.


There are several implementations of TLS which will only do 1024 
bit Diffie-Hellman ephemeral (DHE)[1]. The question as I see it 
is: Are we better off recommending forward security with 1024 
bit DHE, with the possibility that large organizations can brute 
force it; or using the technique of having the client encrypt 
the keying material with the server's RSA key, with the 
probability that the same large organizations have acquired the 
server's secret key?


Now there are good arguments on both sides.

The nearly complete database of who talks to whom allows 
"interesting" communications [2] to be singled out for attacks 
on the 1024 bit DHE. Cracking all the DHE exchanges is probably 
more work than these large organizations can do with current 
technology. However, it is almost certain that these sessions 
will be readable in the not too distant future.


It is widely believed that most large sites have had their RSA 
secret keys compromised, which makes all these sessions 
trivially readable.


I think that the vast majority of TLS list commenters want to 
have TLS 1.3 include fixes for the problems that have been 
identified. However, getting TLS 1.3 approved is at least a 
year, and getting it through the FIPS process will add at least 
another year. We already know that these large organizations 
work to delay better crypto, sometimes using the argument that 
we should wait for the perfect solution rather than 
incrementally adopt better solutions in the mean time.


Cheers - Bill

[1] Implementations which will only do 1024 bit DHE are said to 
include: Apache with OpenSSL, Java, and Windows crypt libraries 
(used by Internet Explorer). If longer keys are used by the 
other side, they abort the connection attempt.


[2] I actually believe NSA when they say they aren't interested 
in grandma's cookie recipe. I am, but I like good cookies. :0)


---
Bill Frantz        | Privacy is dead, get over | Periwinkle
(408)356-8506      | it.                       | 16345 Englewood Ave
www.pwpconsult.com |   - Scott McNealy         | Los Gatos, CA 95032




Re: [Cryptography] Specification: Prism Proof Email

2013-09-22 Thread Bill Frantz

On 9/20/13 at 11:59 AM, hal...@gmail.com (Phillip Hallam-Baker) wrote:


As someone who has seen the documents said to me this week, given a choice
between A and B, the NSA does both. We have to do the same. Rather than
have a pointless argument about whether Web 'o Trust or PKIX is the way to
go, let everyone do both. Let people get a certificate from a CA and then
get it endorsed by their peers: belt and braces.


This approach certainly meets my requirements. As a UI 
designer/user I want it to JFW (Just ... Work) invisibly under 
the covers. As a border-line paranoid, I want an indicator of 
which methods passed. :-)


Let's add to the list of methods the SSH method of "the same 
key used the last time."


I assume users of the CA method would register with the CA in 
some manner which would probably cost money. (How the CA 
separates me from Bill Frantz, the professional photographer in 
Illinois, is not going to be cheap.) I understand there is still 
a trademark dispute between the US beer Budweiser and the German 
beer of the same name.


In the WoT case, having your key fingerprint written on a QR 
code is a neat hack. Put it on the back of your business card[1].


I think CAs will be most useful for businesses while WoT will be 
most useful for individuals. Everyone will be more comfortable 
when the SSH test passes.


Cheers - Bill

[1] Back in days of yore, I needed to send some company private 
data to my home computer. I didn't have the fingerprint of my 
key at work, but I did have Carl Ellison's business card with 
the fingerprint of his key. He had signed my key which was 
available on a key server, so I had good enough reason to trust 
that the key was actually mine.


---
Bill Frantz| Since the IBM Selectric, keyboards have gotten
408-356-8506   | steadily worse. Now we have touchscreen keyboards.
www.pwpconsult.com | Can we make something even worse?



Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

2013-09-19 Thread Bill Frantz

On 9/19/13 at 5:26 AM, rs...@akamai.com (Salz, Rich) wrote:


I know I would be a lot more comfortable with a way to check the 
mail against a piece of paper I received directly from my bank.

I would say this puts you in the sub 1% of the populace.  Most 
people want to do things online because it is much easier and 
gets rid of paper.  Those are the systems we need to secure.  
Perhaps another way to look at it:  how can we make out-of-band 
verification simpler?


Do you have any evidence to support this contention? Remember 
we're talking about money, not just social networks.


I can support mine. ;-)

If organizations like Consumers Union say that you should take 
that number from the bank paperwork you got when you signed up 
for an account, or signed up for online banking, or got with 
your monthly statement, or got as a special security mailing, 
and enter it into your email client, I suspect a reasonable 
percentage of people would do it. It is, after all, a one-time 
operation.


Cheers - Bill

---
Bill Frantz        | If the site is supported by | Periwinkle
(408)356-8506      | ads, you are the product.   | 16345 Englewood Ave
www.pwpconsult.com |                             | Los Gatos, CA 95032




Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

2013-09-18 Thread Bill Frantz

On 9/18/13 at 6:08 AM, hal...@gmail.com (Phillip Hallam-Baker) wrote:


If I am trying to work out if an email was really sent by my bank then I
want a CA type security model because less than 0.1% of customers are ever
going to understand a PGP type web of trust for that particular purpose.
But its the bank sending the mail, not an individual at the bank.


I know I would be a lot more comfortable with a way to check the 
mail against a piece of paper I received directly from my bank 
(the PGP model). I would have no problem in entering a magic 
authentication string (the key fingerprint) into my mail agent 
to authenticate my bank. The security of my money is of more 
than trivial importance.


Second would be having my mail agent tell me that the mail came 
from the same place as the previous piece of email I received 
(the SSH model). This model would work for most of my friends 
where MitM is unlikely. In the cases where MitM worries became 
important, I could then check fingerprints.
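The SSH model described above is essentially trust-on-first-use: remember the fingerprint seen on first contact and flag any later change. A minimal sketch (class and method names are mine, for illustration):

```python
import hashlib

class TofuStore:
    """Trust-on-first-use (the SSH model): remember the first key
    fingerprint seen for each sender and flag any later change."""

    def __init__(self):
        self._known = {}   # sender -> hex fingerprint

    def check(self, sender, public_key_bytes):
        fp = hashlib.sha256(public_key_bytes).hexdigest()
        seen = self._known.get(sender)
        if seen is None:
            self._known[sender] = fp   # first contact: trust and record
            return "first-use"
        # A changed fingerprint is exactly when to go check out of band.
        return "match" if seen == fp else "MISMATCH"
```

A "MISMATCH" result is the cue to fall back on the out-of-band check (the piece of paper from the bank).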


The CA model lets a powerful attacker subvert the CA at any time 
ignoring both out of band and same-as-the-last-time 
authentications. I'm OK with CAs for credit card transactions. 
There's a $50 limit on my risk from fraud.


Cheers - Bill

---
Bill Frantz        | Truth and love must prevail | Periwinkle
(408)356-8506      | over lies and hate.         | 16345 Englewood Ave
www.pwpconsult.com |   - Vaclav Havel            | Los Gatos, CA 95032




Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-17 Thread Bill Frantz

On 9/17/13 at 2:48 AM, i...@iang.org (ianG) wrote:


The problem with adding multiple algorithms is that you are also adding 
complexity. ...


Both Perry and Ian point out:

And, as we know, the algorithms rarely fail. [but systems do] ...


Absolutely! The techniques I suggested used the simplest 
combining function I could think of: XOR. But complexity is the 
mortal enemy of reliability and security.
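That XOR combiner can be sketched as follows. The two hash-based keystreams are toy stand-ins for two independent primitives (everything here is invented for illustration); the point is that confidentiality survives as long as either underlying primitive holds up:

```python
import hashlib
import struct

def _stream(hashfn, key, length):
    # Toy counter-mode keystream built from a hash -- illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashfn(key + struct.pack(">Q", counter)).digest()
        counter += 1
    return out[:length]

def paranoid_encrypt(key_a, key_b, plaintext):
    """XOR-combine two independent keystreams with the plaintext.

    An attacker must break both primitives to recover the message,
    so the combiner is no weaker than the stronger of the two.
    """
    ka = _stream(hashlib.sha256, key_a, len(plaintext))
    kb = _stream(lambda d: hashlib.blake2b(d, digest_size=32), key_b, len(plaintext))
    return bytes(p ^ a ^ b for p, a, b in zip(plaintext, ka, kb))

paranoid_decrypt = paranoid_encrypt   # XOR is its own inverse
```

Note this only addresses confidentiality; authentication would need its own (possibly also redundant) layer.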




Do you really have the evidence that such extra effort is required?


I don't have any evidence, which is why I included "Paranoid" 
in the message subject. I do know that NSA is well served when 
people believe things about cryptography which aren't true. If 
you believe TLS is broken[1] then you might use something much 
weaker in its place. If you believe AES/RSA/ECDSA etc. are 
strong when they aren't, you will continue to rely on them.



Remember, while you're building this extra capability, 
customers aren't being protected at all, and are less likely to 
be so in the future.


I see this as the crux of our problem as responsible crypto 
people. The systems we thought were working are broken. For both 
professional and political reasons we need to fix them quickly.


My morning paper includes the comic Non Sequitur. Today's 
strip has one of the regular characters being visited by two NSA 
agents. This story is front and center in the public's 
attention. There is no better time to press for whatever 
disruptive changes may be needed.


What we need is working code we can get adopted. It can be 
prototype code with more complete versions to come later. But 
our best chance of adoption is now.



On 9/17/13 at 8:54 AM, pe...@piermont.com (Perry E. Metzger) wrote:


(Of course, if the endpoints are trusted hardware running a formally
verified capability operating system and you still have time on your
hands, hey, why not? Of course, when I posted a long message about
modern formal verification techniques and how they're now practical,
no one bit on the hook.)


And I happen to have one in my back pocket. :-)

Yes, CapROS[2] isn't proven, but it is mature enough to build 
Perry's household encryption box. (There are ports to both X86 
and ARM. Device drivers are outside the kernel. The IP stack 
works. You probably don't need much more.) Any code that works 
in CapROS will probably port easily to a proven capability OS.


[And yes Perry, I was very impressed by your arguments for 
program proving technology. It is a bit out of my area of 
expertise. But I have always thought that different ways of 
looking at programs can only help them be more reliable, and 
proving is a different way.]




All that said, even I feel the temptation for low performance
applications to do something like Bill Frantz suggests. It is in the
nature of people in our community to like playing with such things.
Just don't take them *too* seriously please.


Hey, I like playing in the crypto sandbox, and redundancy is a 
classic technique. I have seen questions about DH -- factoring 
and key sizes, and EC -- cooked curves. If you worry about these 
issues, and don't have a third alternative, combining them seems 
like a good idea.



[1] And TLS is big enough to share with the internet the 
characteristic that it can be two things. The internet is always 
up somewhere. Some parts of TLS are secure for certain uses. The 
internet is never all up. Some parts of TLS are seriously broken.


[2] http://www.capros.org/

---
Bill Frantz| Concurrency is hard. 12 out | Periwinkle
(408)356-8506  | of 10 programmers get it wrong. | 16345 Englewood Ave
www.pwpconsult.com |- Jeff Frantz | Los Gatos, CA 95032




Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-17 Thread Bill Frantz

On 9/17/13 at 4:18 PM, leich...@lrw.com (Jerry Leichter) wrote:

MAC'ing the actual data always seemed more logical to me, but 
once you look at the actual situation, it no longer seems like 
the right thing to do.


When I chose MAC then encrypt I was using the MAC to check the 
crypto code. CRC would have worked too, but the MAC was free. (I 
really don't trust my own code very much.)


Cheers - Bill

-
Bill Frantz| The first thing you need when  | Periwinkle
(408)356-8506  | using a perimeter defense is a | 16345 Englewood Ave
www.pwpconsult.com | perimeter. | Los Gatos, CA 95032




[Cryptography] The paranoid approach to crypto-plumbing

2013-09-16 Thread Bill Frantz
After Rijndael was selected as AES, someone suggested the really 
paranoid should super encrypt with all 5 finalists in the 
competition. Five-level super encryption is probably overkill, 
but two or three levels can offer some real advantages. So 
consider simple combinations of techniques which are at least as 
secure as the better of them.


Unguessable (aka random) numbers:

  Several generators, each reseeded on its own schedule, combined
  with XOR will be as good as the best of them.


Symmetric encryption:

  Two algorithms give security equal to the best of them. Three
  protect against meet-in-the-middle attacks. Performing the
  multiple encryption at the block level allows block cyphers to
  be combined with stream cyphers. RC4 may have problems, but
  adding it to the mix isn't very expensive.


Key agreement:

  For forward security, using both discrete log and elliptic
  curve Diffie-Hellman modes combined with XOR to calculate
  keying material is as good as the better of them. Encrypting a
  session key with one public key algorithm and then encrypting
  the result with another algorithm has the same advantage for
  the normal mode of TLS key agreement if you don't want
  forward security (which I very much want).


MACs:

  Two MACs are better than one. :-)

All this has costs, some of them significant, but those costs 
should be weighted against the security risks. Introducing a new 
algorithm with interesting theoretical security properties is a 
lot safer if the data is also protected with a well-examined 
algorithm which does not have those properties.
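
As a rough sketch of the block-level combining idea (illustrative Python only; SHA-256 in counter mode stands in for real stream ciphers, and the key values are placeholders):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode as a stand-in PRF for a real stream cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def double_encrypt(k1: bytes, k2: bytes, plaintext: bytes) -> bytes:
    # XOR-combine two independently keyed keystreams; stripping the
    # combined layer is at least as hard as breaking the stronger one.
    ks = xor_bytes(keystream(k1, len(plaintext)),
                   keystream(k2, len(plaintext)))
    return xor_bytes(plaintext, ks)

msg = b"attack at dawn"
c = double_encrypt(b"key-one", b"key-two", msg)
assert double_encrypt(b"key-one", b"key-two", c) == msg  # symmetric
```

The same XOR-combining step applies to the random-number and key-agreement cases above: mix the independent outputs, and the result is as unguessable as the best of them.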


Cheers - Bill (who has finally caught up with the list)

---
Bill Frantz| Re: Computer reliability, performance, and security:
408-356-8506   | The guy who *is* wearing a parachute is *not* the
www.pwpconsult.com | first to reach the ground.  - Terence Kelly



Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-16 Thread Bill Frantz

On 9/16/13 at 12:36 PM, leich...@lrw.com (Jerry Leichter) wrote:


On Sep 16, 2013, at 12:44 PM, Bill Frantz fra...@pwpconsult.com wrote:

After Rijndael was selected as AES, someone suggested the really paranoid 
should super encrypt with
all 5 finalists in the competition. Five-level super encryption 
is probably overkill, but two or three levels can offer some 
real advantages. So consider simple combinations of techniques 
which are at least as secure as the better of them

This is trickier than it looks.

Joux's paper Multicollisions in iterated hash functions 
http://www.iacr.org/archive/crypto2004/31520306/multicollisions.ps
shows that finding ... r-tuples of messages that all hash to 
the same value is not much harder than finding ... pairs of 
messages.  This has some surprising implications.  In 
particular, Joux uses it to show that, if F(X) and G(X) are 
cryptographic hash functions, then H(X) = F(X) || G(X) (|| is 
concatenation) is about as hard as the harder of F and G - but 
no harder.


That's not to say that it's not possible to combine multiple 
instances of cryptographic primitives in a way that 
significantly increases security.  But, as many people found 
when they tried to find a way to use DES as a primitive to 
construct an encryption function with a wider key or with a 
bigger block size, it's not easy - and certainly not if you 
want to get reasonable performance.


This kind of result is why us crypto plumbers should always 
consult real cryptographers. :-)
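
The concatenation construction Joux analyzes can be written in two lines (SHA-256 and SHA3-256 stand in for F and G here; per the result quoted above, the combined hash is only about as collision-resistant as the stronger component, not their sum):

```python
import hashlib

def concat_hash(x: bytes) -> bytes:
    # H(X) = F(X) || G(X): concatenate two independent digests.
    # Joux's multicollision attack shows this is roughly as hard to
    # collide as the harder of F and G alone - but no harder.
    return hashlib.sha256(x).digest() + hashlib.sha3_256(x).digest()

digest = concat_hash(b"example")
assert len(digest) == 64  # 32 bytes from each hash
```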


I am not so much trying to make the construction better than the 
algorithms being used, the way 3DES is much more secure than single 
DES (and significantly extended the useful life of DES); but to make 
a construction that is at least as good as the best algorithm 
being used.


The idea is that when serious problems are discovered with one 
algorithm, you don't have to scramble to replace the entire 
crypto suite. The other algorithm will cover your tail while you 
make an orderly upgrade to your system.


Obviously you want to chose algorithms which are likely to have 
different failure modes -- which is why I suggest that RC4 (or an 
extension thereof) might still be useful. The added safety also 
allows you to experiment with less examined algorithms.


Cheers - Bill

---
Bill Frantz|The nice thing about standards| Periwinkle
(408)356-8506  |is there are so many to choose| 16345 Englewood Ave
www.pwpconsult.com |from.   - Andrew Tanenbaum| Los Gatos, CA 95032




Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-16 Thread Bill Frantz

On 9/16/13 at 4:02 PM, leich...@lrw.com (Jerry Leichter) wrote:

The feeling these days among those who do such work is that 
unless you're going to use a specialized combined encryption 
and authentication mode, you might as well use counter mode 
(with, of course, required authentication).  For the encryption 
part, counter mode with multiple ciphers and independent keys 
has the nice property that it's trivially as strong as the 
strongest of the constituents.  (Proof:  If all the ciphers 
except one are cracked, the attacker is left with a 
known-plaintext attack against the remaining one.


Let me apply the ideas to the E communication protocol 
http://www.erights.org/elib/distrib/vattp/index.html. The code 
is available on the ERights site http://www.erights.org/.


Cutting out the details about how IP addresses are resolved, the 
initiator sends a series of messages negotiating the details of 
the connection and uses Diffie-Hellman for session key 
agreement.  --  Change the protocol to use both discrete log and 
elliptic curve versions of Diffie-Hellman, and use the results 
of both of them to generate the session key. I would love to 
have a key agreement algorithm other than Diffie-Hellman to use 
for one of the two algorithms to get a further separation of 
failure modes.
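
A sketch of the combined key agreement (toy finite-field Diffie-Hellman parameters stand in for both the discrete-log and elliptic-curve exchanges, since this is illustration only; a real system would use standard groups and ECDH for the second exchange):

```python
import hashlib

def dh_shared(p: int, g: int, priv_a: int, priv_b: int) -> int:
    # Classic finite-field Diffie-Hellman; both sides compute g^(ab) mod p.
    pub_a = pow(g, priv_a, p)
    pub_b = pow(g, priv_b, p)
    shared = pow(pub_b, priv_a, p)
    assert shared == pow(pub_a, priv_b, p)  # the two sides agree
    return shared

# Toy parameters and private exponents, for illustration only.
s1 = dh_shared(p=0xFFFFFFFB, g=5, priv_a=1234, priv_b=5678)
s2 = dh_shared(p=0xFFFFFFEF, g=7, priv_a=4321, priv_b=8765)

# XOR the hashes of the two shared secrets: an attacker must break
# both exchanges to recover the session key.
h1 = hashlib.sha256(s1.to_bytes(16, "big")).digest()
h2 = hashlib.sha256(s2.to_bytes(16, "big")).digest()
session_key = bytes(a ^ b for a, b in zip(h1, h2))
```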


Authentication is achieved by signing the entire exchange with 
DSA.  --  Change the protocol to sign the exchange with both RSA 
and DSA and send and check both signatures.


In all cases, use algorithm bit lengths acceptable by modern standards.

The current data exchange encryption uses SHA1 in HMAC mode and 
3DES in CBC mode with MAC then encrypt. The only saving grace is 
that the first block of each message is the HMAC, which will 
make the known-plaintext attacks on the protocol harder. -- I 
would replace this protocol with one that encrypts twice and 
MACs twice. Using one of the modes which encrypt and MAC in one 
operation as the inner layer is very tempting with a different 
cypher in counter mode and a HMAC as the outer layer.
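
The double-MAC part might look like this (a minimal sketch: HMAC over SHA-256 and SHA3-256 with independent, illustrative keys, so a forgery has to defeat both):

```python
import hashlib
import hmac

def double_mac(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    # Two MACs built on unrelated hash functions with independent keys.
    t1 = hmac.new(k1, msg, hashlib.sha256).digest()
    t2 = hmac.new(k2, msg, hashlib.sha3_256).digest()
    return t1 + t2

def verify(k1: bytes, k2: bytes, msg: bytes, tag: bytes) -> bool:
    # Constant-time comparison; both tags must check.
    return hmac.compare_digest(double_mac(k1, k2, msg), tag)

tag = double_mac(b"mac-key-1", b"mac-key-2", b"hello")
assert verify(b"mac-key-1", b"mac-key-2", b"hello", tag)
assert not verify(b"mac-key-1", b"mac-key-2", b"hellp", tag)
```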



The need for independent keys is clear since if I use two 
copies of the same cipher with the same key, I end up sending 
plaintext!  You'd need some strong independence statements 
about the ciphers in the set if you want to reuse keys.  
Deriving them from a common key with a one-way hash function is 
probably safe in practice, though you'd now need some strong 
statements about the hash function to get any theoretical 
result.  Why rely on such things when you don't need to?)


I'm not sure you can avoid that one-way hash function in 
practice. Either it will be distilling randomness in your RNG or 
it will be stretching the pre-master secret in your key/IV/etc 
generation. You could use several and XOR the results if you can 
prove that their outputs are always different.
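
Deriving the independent per-algorithm keys from one master secret could be sketched as HKDF-style expansion with distinct labels (the labels here are illustrative):

```python
import hashlib
import hmac

def subkey(master: bytes, label: bytes) -> bytes:
    # Derive a per-algorithm key from the master secret; distinct
    # labels yield independent-looking keys, avoiding the
    # two-ciphers-one-key trap described above.
    return hmac.new(master, label, hashlib.sha256).digest()

master = b"pre-master secret from key agreement"
k_inner = subkey(master, b"inner-cipher")
k_outer = subkey(master, b"outer-cipher")
k_mac = subkey(master, b"hmac")
assert len({k_inner, k_outer, k_mac}) == 3  # all distinct
```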




It's not immediately clear to me what the right procedure for multiple 
authentication is.


The above proposal uses two different digital signature 
algorithms, sends both, and checks both. I think it meets the 
"no worse than the best of the two" test.


Cheers - Bill

---
Bill Frantz|We used to quip that password is the most common
408-356-8506   | password. Now it's 'password1.' Who said users haven't
www.pwpconsult.com | learned anything about security? -- Bruce Schneier



Re: [Cryptography] Email and IM are ideal candidates for mix networks

2013-09-05 Thread Bill Frantz

On 8/25/13 at 8:32 PM, leich...@lrw.com (Jerry Leichter) wrote:

*The* biggest headache is HTTP support.  Even the simplest 
modern HTTP server is so complex you can never be reasonably 
sure it's secure (though, granted, it's simpler than a 
browser!)  You'd want to stay simple and primitive.


I'm currently over 250 messages behind, so please pardon me if 
this item has already been mentioned.


Back in 2009, Charlie Landau and I worked on a DARPA contract to 
demonstrate a secure web key server[1]. We used CAPROS[2] as the 
underlying operating system and built an HTTP interpreter to act 
as the server. The system is GPL and the source for the web key 
server is available on Sourceforge[3].


Charlie comments that the IDL files are quite useful, but there 
really isn't any documentation. Let me give a brief overview:


When a new TCP connection arrives, a new instance of the web key 
server is created. It can not communicate with any other 
instance of the web key server, and the only real authority it 
has, beyond sending and receiving on the TCP circuit, is to a 
name lookup system.


This name lookup system takes a string -- the secret part of the 
web key -- and returns a resource. The web key server then 
returns the contents of that resource to the requestor.


Since the name lookup system does not allow enumeration of its 
contents, even if an instance of the web key server is 
compromised, an attacker will still have to guess the secret 
part of the web key to retrieve authorities from the name lookup system.
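
The name-lookup service can be sketched as an unguessable-token table (illustrative Python, not the actual CapROS code): lookup is offered, enumeration is not, so a compromised server instance still has to guess tokens.

```python
import secrets

class WebKeyTable:
    """Maps unguessable secrets to resources. Offers lookup but no
    enumeration, mirroring the name-lookup service described above."""

    def __init__(self):
        self._table = {}

    def register(self, resource) -> str:
        # The secret part of the web key: 32 random URL-safe bytes.
        token = secrets.token_urlsafe(32)
        self._table[token] = resource
        return token

    def lookup(self, token: str):
        # The only access path; there is no method to list tokens.
        return self._table.get(token)

table = WebKeyTable()
key = table.register("contents of the protected resource")
assert table.lookup(key) == "contents of the protected resource"
assert table.lookup("guessed-token") is None
```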


Cheers - Bill

[1] Web key: http://waterken.sourceforge.net/web-key/

[2] http://www.capros.org/, http://capros.sourceforge.net/

[3] http://sourceforge.net/projects/capros/

---
Bill Frantz| Truth and love must prevail  | Periwinkle
(408)356-8506  | over lies and hate.  | 16345 Englewood Ave
www.pwpconsult.com |   - Vaclav Havel | Los Gatos, CA 95032




Re: Intel plans crypto-walled-garden for x86

2010-09-14 Thread Bill Frantz

On 9/13/10 at 8:58 PM, g...@toad.com (John Gilmore) wrote:


Intel's Paul
Otellini framed it as an effort to move the way the company approaches
security from a known-bad model to a known-good model.


Does that include monetary indemnity when the known-good turns 
out to be bad? I bet not.


If we could know good, security would be a lot easier, but 
nobody has a clue how to actually achieve that knowledge.



Let me guess -- to run anything but Windows, you'll soon have 
to jailbreak even laptops and desktop PC's?


I expect Steve Jobs will get them to approve MacOS too.

For the rest, there's always AMD.

Cheers - Bill

---
Bill Frantz| gets() remains as a monument | Periwinkle
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: [IP] Malware kills 154

2010-08-24 Thread Bill Frantz
This came in from SANS NewsBites Vol. 12 Num 67: Did a computer 
virus cause the 150 deaths in the Spanair crash?


 --Judge to Examine Evidence on Malware in Spanair Fatal Air 
Crash Case

(August 20 & 23, 2010)
A Spanish judge will investigate whether or not malware on a Spanair
computer system had anything to do with the system's failure to raise
alerts prior to a 2008 airplane crash that killed 154 of 172 people on
board.  The official cause of the crash was pilot error; the pilots were
found to have failed to extend the airplane's take-off flaps and slats.
However, the investigation also found that a warning system failed to
alert the pilots that the flaps and slats had not extended and had also
failed to do so on two previous occasions.  Each failure should have
been logged into Spanair's maintenance system, which was found to be
infected with malware.  Three failures would have triggered an alarm
that would have kept the airplane grounded until the problem was fixed.
The judge has called for Spanair to release computer logs for the days
before and after the crash.  The malware infection appears to have
spread through a flash drive.
Internet Storm Center: http://isc.sans.edu/diary.html?storyid=9433
http://www.securecomputing.net.au/News/229633,trojans-linked-to-spanish-air-crash.aspx
http://www.informationweek.com/news/security/management/showArticle.jhtml?articleID=226900089
http://content.usatoday.com/communities/technologylive/post/2010/08/infected-usb-thumb-drive-implicated-in-deadly-2008-spanair-jetliner-crash/1?loc=interstitialskip
http://www.theregister.co.uk/2010/08/20/spanair_malware/
http://www.msnbc.msn.com/id/38790670/ns/technology_and_science-security/
http://news.cnet.com/8301-1009_3-20014237-83.html?tag=mncol;title
[Editor's Note (Schultz): This is a potentially very significant turn
of events. If the loss of 172 lives can be traced to the presence of
malware, corporate executives and government officials are likely to
take security risk management much more seriously than they generally
now do.]

OBLegal: Please feel free to share this with interested parties via
email, but no posting is allowed on web sites. For a free subscription
(and for free posters) or to update a current subscription, visit
http://portal.sans.org/

Cheers - Bill

---
Bill Frantz| gets() remains as a monument | Periwinkle
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: A mighty fortress is our PKI, Part II

2010-08-02 Thread Bill Frantz

On 7/28/10 at 8:52 PM, pfarr...@pfarrell.com (Pat Farrell) wrote:


When was the last time you used a paper Yellow Pages?


Err, umm, this last week. I'm in a place where cell coverage 
(AT&T; Verizon has a better reputation) is spotty and internet 
is a dream due to a noisy land line. I needed to find a ceramic 
tile store. The paper yellow pages had survived being left in 
the driveway in the rain and I used it.


However, I agree that this is the 2% case for many parts of the world.

Cheers - Bill

---
Bill Frantz|Web security is like medicine - trying to do good for
408-356-8506   |an evolved body of kludges - Mark Miller
www.periwinkle.com |

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: A Fault Attack Construction Based On Rijmen's Chosen-Text Relations Attack

2010-07-25 Thread Bill Frantz

Alfonso De Gregorio wrote:
Last Thursday, Vincent Rijmen announced a new clever attack on
AES (and KASUMI) in a report posted to the Cryptology ePrint
Archive: Practical-Titled Attack on AES-128 Using Chosen-Text
Relations, http://eprint.iacr.org/2010/337


On 7/21/10 at 11:49 AM, d...@cs.berkeley.edu (David Wagner) 
wrote, with some drastic editing which I hope doesn't change 
David's meaning:



For what it's worth, I read Vincent Rijmen's paper ... as written with
tongue embedded firmly in cheek: I took it as
a serious argument, hidden behind some gentle humor.

...

Personally, I found it an effective communication style.  I thought the
point came across very clearly.  And, I have to admit I enjoyed seeing
someone having a spot of fun with what can otherwise be a somewhat dry
topic.  I thought it was brilliantly done.


My favorite paper in this style is one which has not (yet) been 
published. It turns out that at one time there were at least 
three Mark Millers active in computer science. One of them, cced 
above, wanted to publish a paper:


  Global Names Considered Harmful
  by Mark Miller, Mark Miller, and Mark Miller

And the paper really doesn't need to go any further than this.

Cheers - Bill

---
Bill Frantz| I like the farmers' market   | Periwinkle
(408)356-8506  | because I can get fruits and | 16345 Englewood Ave
www.pwpconsult.com | vegetables without stickers. | Los Gatos, CA 95032


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Against Rekeying

2010-03-23 Thread Bill Frantz
On 3/23/10 at 8:21 AM, pe...@piermont.com (Perry E. Metzger) wrote:

 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:
 
 http://www.educatedguesswork.org/2010/03/against_rekeying.html
 
 I'd be interested in hearing what people think on the topic. I'm a bit
 skeptical of his position, partially because I think we have too little
 experience with real world attacks on cryptographic protocols, but I'm
 fairly open-minded at this point.

Eric didn't mention it in his blog post, but he has been deeply involved
in cleaning up the mess left by a protocol error in SSLv3 and
subsequent TLS versions. This error was in the portion of the protocols
which supported rekeying and created a vulnerability that affected all
users of those protocols, whether they used the rekeying part or not.

The risks from additional protocol complexity must be balanced with the
benefits of including the additional facility. My own opinion is that in
this case, the benefits didn't justify the risk. The few applications
which desired rekeying could have been designed to build a completely
new TLS connection, avoiding the risk for everyone.

Cheers - Bill

---
Bill Frantz| I like the farmers' market   | Periwinkle
(408)356-8506  | because I can get fruits and | 16345 Englewood Ave
www.pwpconsult.com | vegetables without stickers. | Los Gatos, CA 95032

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Crypto dongles to secure online transactions

2009-11-25 Thread Bill Frantz
leich...@lrw.com (Jerry Leichter) on Saturday, November 21, 2009 wrote:

It's no big deal to read these cards,  
and from many times the inch or so that the standard readers require. 

So surely someone has built a portable reader for counterfeiting the cards
they read in restaurants near big target companies...

Cheers - Bill

---
Bill Frantz|After all, if the conventional wisdom was working, the
408-356-8506   | rate of systems being compromised would be going down,
www.periwinkle.com | wouldn't it? -- Marcus Ranum

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Crypto dongles to secure online transactions

2009-11-18 Thread Bill Frantz
jo...@iecc.com (John Levine) on Wednesday, November 18, 2009 wrote:

Such a device does however need to be able to support multiple mutually
distrusting verifiers, thus the destination public key is managed by
the untrusted PC + browser, only the device signing key is inside
the trust boundary. A user should be able to enroll the same device
with another bank, ...

If you really need the ability to do that, I'd think it would be
better to make an expandable version into which you could plug each
bank's chip+pin cards, not try to invent a super-protocol for
downloading a bank's preferred keys.

Perhaps I'm missing something, but my multiple banks will all accept my
signature when made with the same pen. Why would they not accept my
signature when made with the same, well protected, signing/user verifying
device. I might have to take it to the bank to give them its public key in
person, but that seems a minor inconvenience.

This kind of device sounds like a fine device for a banking industry
committee to specify.

Cheers - Bill

-
Bill Frantz| Airline peanut bag: Produced  | Periwinkle
(408)356-8506  | in a facility that processes   | 16345 Englewood Ave
www.pwpconsult.com | peanuts and other nuts. - Duh | Los Gatos, CA 95032

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: deterministic random numbers in crypto protocols -- Re: Possibly questionable security decisions in DNS root management

2009-11-02 Thread Bill Frantz
zo...@zooko.com (Zooko Wilcox-O'Hearn) on Thursday, October 29, 2009 wrote:

I'm beginning to think that *in general* when I see a random number  
required for a crypto protocol then I want to either  
deterministically generate it from other data which is already  
present or to have it explicitly provided by the higher-layer  
protocol.  In other words, I want to constrain the crypto protocol  
implementation by forbidding it to read the clock or to read from a  
globally-available RNG, thus making that layer deterministic.

One concern is that if the encryption key is deterministically generated
from the data, then the same plain text will generate the same cypher text,
and a listener will know that the same message has been sent. The same
observation applies to a DSA signature. If this leakage of information is
not a problem, e.g. the signature is encrypted along with the data using a
non-deterministic key, then there doesn't seem to be anything obvious wrong
with the approach. (But remember, I'm far from an expert.)
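
The leakage is easy to demonstrate with a toy deterministic scheme (illustrative only, for short messages; the key-derivation labels are placeholders):

```python
import hashlib

def det_encrypt(msg: bytes) -> bytes:
    # Key derived deterministically from the plaintext itself (as in
    # convergent encryption): identical messages always produce
    # identical ciphertexts, which a listener can observe.
    key = hashlib.sha256(b"key-derivation" + msg).digest()
    ks = hashlib.sha256(b"stream" + key).digest()
    return bytes(a ^ b for a, b in zip(msg, ks))

c1 = det_encrypt(b"same message")
c2 = det_encrypt(b"same message")
assert c1 == c2  # a listener learns the same message was sent twice
```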

Cheers - Bill

---
Bill Frantz|After all, if the conventional wisdom was working, the
408-356-8506   | rate of systems being compromised would be going down,
www.periwinkle.com | wouldn't it? -- Marcus Ranum

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Weakness in Social Security Numbers Is Found

2009-07-08 Thread Bill Frantz
docbook@gmail.com (Ali, Saqib) on Wednesday, July 8, 2009 wrote:

Read more:
http://www.nytimes.com/2009/07/07/us/07numbers.html?_r=2ref=instapundit


saqib
http://www.capital-punishment.us

[Moderator's note: this isn't really a weakness in SSNs, unless you're
stupid enough to use them as a password -- which we already knew was
bad. None the less, interesting work. --Perry]

How separate algorithms reduce security when used together:

The last 4 digits of the SSN are frequently used as an authenticator. These
may be the hardest digits to recover with the technique which, according to
the researchers (Alessandro Acquisti and Ralph Gross) at CMU, would not be
easy for cybercriminals to reconstruct but would be within the grasp of
sophisticated attackers.

My solution is to have the Social Security Administration announce that
they will publish names and SSNs for everyone in their database on a
certain date. Fat chance it will happen.

Cheers - Bill

---
Bill Frantz|Web security is like medicine - trying to do good for
408-356-8506   |an evolved body of kludges - Mark Miller
www.periwinkle.com |

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: password safes for mac

2009-06-28 Thread Bill Frantz
pe...@piermont.com (Perry E. Metzger) on Sunday, June 28, 2009 wrote:

It has problems. Among other things, it only mlocks your session key
itself into memory, leaving both the AES key schedule (oops!) and the
decrypted data (oops!) pageable into swap. (Why bother mlocking the text
of the key if you're not going to lock the key schedule?)

You should probably use the encrypted swap feature on the Mac.

System Preferences - Security - Use secure virtual memory.

Cheers - Bill

---
Bill Frantz| gets() remains as a monument | Periwinkle
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Has any public CA ever had their certificate revoked?

2009-05-07 Thread Bill Frantz
pgut...@cs.auckland.ac.nz (Peter Gutmann) on Thursday, May 7, 2009 wrote:

Paul Hoffman paul.hoff...@vpnc.org writes:

Peter, you really need more detents on the knob for your hyperbole setting.
nothing happened is flat-out wrong: the CA fixed the problem and researched
all related problems that it could find. Perhaps you meant the CA was not
punished: that would be correct in this case.

What I meant was that there were no repercussions due to the CA acting
negligently.  This is "nothing happened" as far as motivating CAs to exercise
diligence is concerned, you can be as negligent as you like but as long as you
look suitably embarassed afterwards there are no repercussions (that is,
there's no evidence that there was any exodus of customers from the CA, or any
other CA that's done similar things in the past).

...

If a CA in a trust anchor pile does something terribly wrong and there are no
repercussions, why would any CA care about doing things right?  All that does
is drive up costs.  The perverse incentive that this creates is for CAs to
ship as many certificates as possible while applying as little effort as
possible.  And thus we have the current state of commercial PKI.

It seems to me that there are a number of problems with the current CA
situation. Since no CAs have been identified by name (except Verisign for a
very old problem), it is hard for me to reduce the reputation of a specific
CA. Even if one was identified, it's not clear what I could do to move
business to more responsible CAs.  So my reaction is to say that it's all a
big stinking pile and try to develop systems and procedures that don't rely
on CAs. (e.g. curl with a copy of the server's self-signed certificate, the
Petname toolbar, etc.)

If SSL/TLS had as part of its handshake, a list of CAs that are acceptable
to the client, I could configure my browser with only high-reputation CAs.
This step would probably make it desirable for servers to get certificates
from more than one CA so they could return a certificate signed by an
acceptable CA. It would certainly allow for some market pressure on CAs,
and high reputation CA might be able to charge more for certificates.

(The last time I ran into a case where the server certificate was not
signed by a CA on my browser's default list, I used the 800 number instead.
That was for activating a credit card.)

In addition, I am worried that some countries cyber-warfare department has
a copy of some well-installed CA's signing key and can generate
certificates whenever it wants. When D-day comes, it will spoof DNS and use
the certificates to disrupt the economy of its target country. If we had a
2 level security system, with CAs for the first introduction, and something
more robust for subsequent sessions, these attack scenarios would be less
likely.

Cheers - Bill

---
Bill Frantz| gets() remains as a monument | Periwinkle
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Has any public CA ever had their certificate revoked?

2009-05-07 Thread Bill Frantz
pgut...@cs.auckland.ac.nz (Peter Gutmann) on Thursday, May 7, 2009 wrote:

If SSL/TLS had as part of its handshake, a list of CAs that are acceptable to
the client, I could configure my browser with only high-reputation CAs.

Uhh, how is that meant to work?

The client hello message would include the list of acceptable CAs. The
server could use that list to select an acceptable certificate to return to
the client. In the rare cases where there is a client certificate, the
server hello could include a similar list and the client could use it to
select an acceptable certificate. If the lists aren't included in the hello
messages, the behavior is the same as the current versions of SSL/TLS.
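
The server-side selection logic might look like this (all names here are hypothetical; a server holding certificates from several CAs returns one whose issuer the client listed, else falls back to current behavior):

```python
# Hypothetical: certificates the server holds, keyed by issuing CA.
server_certs = {
    "High-Reputation CA": "cert-signed-by-high-rep-ca",
    "Bargain CA": "cert-signed-by-bargain-ca",
}

def select_cert(client_acceptable_cas):
    # Return the first certificate whose issuer the client accepts.
    for ca in client_acceptable_cas:
        if ca in server_certs:
            return server_certs[ca]
    return None  # no match: behave as current SSL/TLS does

assert select_cert(["High-Reputation CA"]) == "cert-signed-by-high-rep-ca"
assert select_cert(["Unknown CA"]) is None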


In any case even if it did, every time you went to a site using a cert vending
machine not on your list the browser wouldn't let you connect (or at least not
without serious amounts of messing around, which means that eventually you'd
add it to your list just to get rid of the nuisance).

Yes, I know I'm way out in left field, but I just might not go to a web
site if I cared about security with my transaction and the site didn't use
a reasonable CA. There are many alternatives both with competitor
organizations, and competitive communication techniques. For example, if I
didn't like the CA my bank used, I could either change banks or do my
banking by phone or in person at a local branch.

I have avoided many sites that want user names and passwords, or want me to
turn on Javascript. The popularity of the noscript plugin for Firefox means
that perhaps I'm not the only one out in left field.

Cheers - Bill

---
Bill Frantz| gets() remains as a monument | Periwinkle
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032



Re: Proof of Work - atmospheric carbon

2009-01-29 Thread Bill Frantz
jo...@iecc.com (John Levine) on Wednesday, January 28, 2009 wrote:

You know those crackpot ideas that keep showing up in snake oil crypto?
Well, e-postage is snake oil antispam.

While I think this statement may be true for POW coinage, since to a
botnet computation grows on trees, it may not be completely true for
money that traces back to the international monetary exchange system.

Snail mail postage limits, but does not eliminate junk mail. I think,
without proof, that most people can live with the amount of junk mail they
receive. At least I don't hear a lot of conversations about the junk
mail problem.

Now it is certainly true that if machines have a small amount of money
stored within them for postage, someone who 0wns that machine could steal
some of that money. There is a limit to the amount that can be stolen based
on the person who pays for the machine noticing and being bothered. There
is probably safe profit in skimming small amounts from a large number
of machines, just as there was profit in skimming the round-off in
payroll calculations.

Cheers - Bill

-
Bill Frantz| The first thing you need when  | Periwinkle
(408)356-8506  | using a perimeter defense is a | 16345 Englewood Ave
www.pwpconsult.com | perimeter. | Los Gatos, CA 95032



Re: Bitcoin v0.1 released

2009-01-24 Thread Bill Frantz
h...@finney.org (Hal Finney) on Saturday, January 24, 2009 wrote:

Countermeasures by botnet operators would include moderating their take,
perhaps only stealing 10% of the productive capacity of invaded computers,
so that their owners would be unlikely to notice. This kind of thinking
quickly degenerates into unreliable speculation, but it points out the
difficulties of analyzing the full ramifications of a world where POW
tokens are valuable.

Some people tell me that the 0wned machines are among the most secure on
the network because botnet operators work hard to keep others from
compromising their machines. I could see the operators moving toward
being legitimate security firms, protecting computers against compromise in
exchange for some of the proof of work (POW) money.

Cheers - Bill

-
Bill Frantz| When it comes to the world | Periwinkle
(408)356-8506  | around us, is there any choice | 16345 Englewood Ave
www.pwpconsult.com | but to explore? - Lisa Randall | Los Gatos, CA 95032



Re: CPRNGs are still an issue.

2008-12-15 Thread Bill Frantz
d...@mindrot.org (Damien Miller) on Friday, December 12, 2008 wrote:

On Thu, 11 Dec 2008, James A. Donald wrote:

 If one uses a higher-resolution counter - sub-microsecond - and
 times multiple disk accesses, one gets true physical randomness,
 since disk access times are affected by turbulence, which is
 physically truly random.

Until someone runs your software on a SSD instead of a HDD. Oops.

I find myself in this situation with a design I'm working on. I
have an ARM chip, where each chip has two unique numbers burned
into the chip for a total of 160 bits. I don't think I can really
depend on these numbers being secret, since the chip designers
thought they would be useful for DRM. It certainly will do no harm
to hash them into the pool, and give them a zero entropy weight.

The system will be built with SSD instead of HDD, so Damien's
comment hits close to home. I hope to be able to use the timing of
external devices - the system communicates with a number of these -
along with a microsecond counter, to gather entropy from clock skew
between the internal clock and the clocks in those devices.

Unfortunately the system doesn't normally have a user, so UI
timings will be few and far between.

Short of building special random number generation hardware, does
anyone have any suggestions for additional sources?
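
The zero-entropy-weight mixing described above can be sketched as follows. This is an illustrative Python sketch, not production RNG code: hashing a known value into the pool can never hurt, and the credit counter records only sources believed unpredictable.

```python
import hashlib
import time

class EntropyPool:
    def __init__(self):
        self._state = b"\x00" * 32
        self.credited_bits = 0  # conservative estimate of real entropy

    def mix(self, data: bytes, credited_bits: int = 0) -> None:
        # Hashing input together with the previous state cannot reduce
        # the pool's entropy, so zero-credit sources (like the chip's
        # DRM IDs) do no harm even if an attacker knows them.
        self._state = hashlib.sha256(self._state + data).digest()
        self.credited_bits += credited_bits

pool = EntropyPool()
pool.mix(b"\x12" * 20, credited_bits=0)  # 160 bits of chip-unique ID: no credit
pool.mix(time.monotonic_ns().to_bytes(8, "big"), credited_bits=1)  # timing jitter
```

The same `mix` call serves both the chip IDs and the device-timing samples; only the credit differs.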

Cheers - Bill

---
Bill Frantz| Barack Hussein Obama, President of the United States.
408-356-8506   | Now we can return to being a partner with the rest of
www.periwinkle.com | the world.



Request for Input (RFI)--National Cyber Leap Year

2008-12-08 Thread Bill Frantz
From: http://edocket.access.gpo.gov/2008/E8-24257.htm

NATIONAL SCIENCE FOUNDATION

 
Request for Input (RFI)--National Cyber Leap Year

AGENCY: The National Coordination Office (NCO) for Networking 
Information Technology Research and Development (NITRD).

ACTION: Request for Input (RFI).

---

DATES: To be considered, submissions must be received by December 15, 
2008.

SUMMARY: This request is being issued to initiate the National Cyber 
Leap Year under the Comprehensive National Cybersecurity Initiative 
(CNCI). The goal of the National Cyber Leap Year is to identify the 
most promising game-changing ideas with the potential to reduce 
vulnerabilities to cyber exploitations by altering the cybersecurity 
landscape. This RFI is the first step in constructing a national 
research and development agenda in support of the CNCI. 
Multidisciplinary contributions from organizations with cybersecurity 
interests are especially encouraged.

Cheers - Bill

-
Bill Frantz| When it comes to the world | Periwinkle
(408)356-8506  | around us, is there any choice | 16345 Englewood Ave
www.pwpconsult.com | but to explore? - Lisa Randall | Los Gatos, CA 95032



Re: Fake popup study

2008-09-24 Thread Bill Frantz
[EMAIL PROTECTED] (Perry E. Metzger) on Wednesday, September 24, 2008 wrote:

I don't want to claim that there is no place for better human factors
work in security engineering. There clearly is. However, I will
repeat, that is not the only story here, and it is not unreasonable to
note that there are people who are clearly nearly impossible to
protect with almost any level of human factors engineering and
security technology.

I would suggest that, in the real world, most of the people who
are nearly impossible to protect don't have much money. Real-world
scams have been around for quite a while, and we teach about
them in school. However, they still work on some people, which is
why those people don't have much money.

Online scams are newer, and many of their victims left school long
before the scams became popular. I expect the online situation will
stabilize in much the same way the real-world one has.

Cheers - Bill

-
Bill Frantz| The first thing you need when  | Periwinkle
(408)356-8506  | using a perimeter defense is a | 16345 Englewood Ave
www.pwpconsult.com | perimeter. | Los Gatos, CA 95032



Re: Fake popup study

2008-09-24 Thread Bill Frantz
[EMAIL PROTECTED] (Perry E. Metzger) on Wednesday, September 24, 2008 wrote:

there are clearly people we do not allow to cross
the street on their own (young children, some mentally ill people,
etc), so there is perhaps a class of people who should not be allowed
to do unsupervised banking on the basis that they cannot be trusted to
protect themselves adequately.

My 96 year old mother does not have a check book or credit cards.
All her bills are paid through her lawyer's office. QED.

Cheers - Bill

---
Bill Frantz| gets() remains as a monument | Periwinkle
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032



Password Recovery Attack

2008-09-21 Thread Bill Frantz
One attack on services, which use personal questions as a backup
form of user verification, works well for high-profile users of
these systems. The attack is very simple. Go into the password
recovery page, and use Google to look up the answers to the
personal questions asked. There is enough Googleable data around
for high-profile people, and perhaps not so high profile people,
that the attack can be successful often enough to be useful. My
sources say Sarah Palin's email account was breached using this
attack.

Cheers - Bill

---
Bill Frantz|We used to quip that password is the most common
408-356-8506   | password. Now it's 'password1.' Who said users haven't
www.periwinkle.com | learned anything about security? -- Bruce Schneier



Re: road toll transponder hacked

2008-08-26 Thread Bill Frantz
[EMAIL PROTECTED] (Ken Buchanan) on Tuesday, August 26, 2008 wrote:

I think this is a bit different than what Michael Heyman said.  TxTag,
IIRC, was implemented by the same company (Raytheon) that implemented
the 407 ETR toll system in Toronto.  In the case of the 407, there is
no image recognition done if the car has a valid transponder.  Only in
the case of a missing or invalid transponder is the plate imagery
used.  Supposedly the OCR has a high enough error rate that there is
still manual verification of plates before sending a bill, and
accordingly a $3.60 additional charge is applied per trip.

If the images are used even when the vehicle has a valid transponder
-- as Michael Heyman suggests is happening with E-ZPass -- then it
might be feasible to have back end defenses against cloning, though
not without inconvenience to customers who borrow cars, buy new cars,
or rent cars while their own is getting serviced.  Also as Matt Blaze
pointed out this makes the transponder wholly redundant.

I could see how knowing, from the transponder code, what the license
plate should be could feed back into the OCR, generating a hit only
when the disagreement is obvious.
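
One way to realize that feedback loop, sketched here with hypothetical names and an arbitrary tolerance (nothing below reflects the actual TxTag or 407 ETR systems): treat small OCR disagreements as probable read errors, and flag only clear mismatches for manual review.

```python
def edit_distance(a: str, b: str) -> int:
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def obvious_mismatch(expected_plate: str, ocr_read: str, tolerance: int = 2) -> bool:
    # Flag only when the OCR result clearly disagrees with the plate
    # registered to the transponder ID; a character or two of
    # disagreement is treated as OCR error, not cloning.
    return edit_distance(expected_plate, ocr_read) > tolerance
```

A `tolerance` of 2 is a guess; a real system would tune it against its OCR error rate.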

In the San Francisco Bay Area, they are using the transponder codes
to measure how fast traffic is moving from place to place. They
post the times to various destinations on the electric signs when
there are no Amber alerts or other more important things to
display. It is quite convenient, and they promise they don't use it
to track people's trips.

If one were paranoid, one could put a different ID into the
transponder for each trip, and only put the one it was issued with
into it for toll crossings. :-)

Cheers - Bill

---
Bill Frantz|We used to quip that password is the most common
408-356-8506   | password. Now it's 'password1.' Who said users haven't
www.periwinkle.com | learned anything about security? -- Bruce Schneier



Re: The wisdom of the ill informed

2008-07-01 Thread Bill Frantz
[EMAIL PROTECTED] (James A. Donald) on Monday, June 30, 2008 wrote:

The only people who know who the real experts are, are the real 
experts.   If you knew who to hire, you could do it yourself, and 
probably should do it yourself.

I would say, even if you can do it yourself, hire another expert to
review your design.

When these systems are announced, we should get in the habit of
asking the people announcing them, "Which recognized crypto protocol
and algorithm experts have reviewed your design?"

Cheers - Bill

-
Bill Frantz| When it comes to the world | Periwinkle
(408)356-8506  | around us, is there any choice | 16345 Englewood Ave
www.pwpconsult.com | but to explore? - Lisa Randall | Los Gatos, CA 95032



Re: Can we copy trust?

2008-06-02 Thread Bill Frantz
[EMAIL PROTECTED] (Ed Gerck) on Monday, June 2, 2008 wrote:

To trust something, you need to receive information from sources OTHER 
than the source you want to trust, and from as many other sources as 
necessary according to the extent of the trust you want. With more trust 
extent, you are more likely to need more independent sources of 
verification.

In my real-world experience, this way of gaining trust is only
really used for strangers. For people we know, recognition and
memory are more compelling ways of trusting.

We can use this recognition and memory in the online world as well.
SSH automatically recognizes previously used hosts. Programs such
as the Pet Names Tool http://www.waterken.com/user/PetnameTool/
recognize public keys used by web sites, and provide us with a
human-recognizable name so we can remember our previous
interactions with that web site. Once we can securely recognize a
site, we can form our own trust decisions, without the necessity of
involving third parties.
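
A minimal sketch of such a recognition-and-memory store, in the trust-on-first-use style of SSH known_hosts. SHA-256 fingerprints and these method names are my assumptions, not necessarily what the Pet Names Tool actually does.

```python
import hashlib

class PetnameTable:
    """Maps a site's public-key fingerprint to a user-chosen,
    human-recognizable name, established once at introduction."""

    def __init__(self):
        self._names = {}

    @staticmethod
    def fingerprint(pubkey: bytes) -> str:
        return hashlib.sha256(pubkey).hexdigest()

    def assign(self, pubkey: bytes, petname: str) -> None:
        # Introduction: the user names the key the first time it is seen.
        self._names[self.fingerprint(pubkey)] = petname

    def lookup(self, pubkey: bytes):
        # Recognition: a known key shows the remembered name; an unknown
        # (possibly spoofed) key shows nothing familiar.
        return self._names.get(self.fingerprint(pubkey))
```

No third party is consulted after the introduction step; recognition rests entirely on the user's own stored mapping.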

Cheers - Bill

-
Bill Frantz| The first thing you need when  | Periwinkle
(408)356-8506  | using a perimeter defense is a | 16345 Englewood Ave
www.pwpconsult.com | perimeter. | Los Gatos, CA 95032



Re: delegating SSL certificates

2008-03-19 Thread Bill Frantz
[EMAIL PROTECTED] (Peter Gutmann) on Sunday, March 16, 2008 wrote:

[EMAIL PROTECTED] writes:

I would think this would be rather common, and I may have heard about certs
that had authority to sign other certs in some circumstances...

The desire to do it isn't uncommon, but it runs into problems with PKI
religious dogma that only a CA can ever issue a certificate.

Is that a religious dogma, or a business model masquerading as a
religious dogma?

Whichever it is, it is an impediment to improving protection
against phishing attacks. Consider a large organization like Amazon
and users using tools like the Petname toolbar
http://www.waterken.com/dev/YURL/Name/. Amazon has many servers,
each of which needs a TLS signing key. In a more ideal world,
Amazon would have a CA-signed public certification key, which it
would use to certify each server's TLS signing key. The Petname
toolbar would use Amazon's public certification key as the identity
matching the user's petname for Amazon. Once that petname has been
established, AKA the introduction problem, the identity would be
safe, regardless of what happens higher in the PKI hierarchy. The
higher levels of the PKI hierarchy would only be used during the
introductory contact.

Given the current situation, with the CAs having a monopoly on
issuing certificates, there are many different public keys
associated with Amazon. In addition, Amazon may choose to change the
CA it uses. To handle this situation, the Petname toolbar uses the DN
instead of a public key, which opens the Petname toolbar to
spoofing by any of the 100 or so CAs configured in the standard
browsers. Does anyone know what happened to Baltimore's signing key
when they went out of business?

Cheers - Bill

-
Bill Frantz| The first thing you need when  | Periwinkle
(408)356-8506  | using a perimeter defense is a | 16345 Englewood Ave
www.pwpconsult.com | perimeter. | Los Gatos, CA 95032



Re: cold boot attacks on disk encryption

2008-02-21 Thread Bill Frantz
[EMAIL PROTECTED] (Perry E. Metzger) on Thursday, February 21, 2008 wrote:


Ed Felten blogs on his latest research:

http://www.freedom-to-tinker.com/?p=1257

Excerpt:

Today eight colleagues and I are releasing a significant new
research result. We show that disk encryption, the standard
approach to protecting sensitive data on laptops, can be defeated
by relatively simple methods. We demonstrate our methods by using
them to defeat three popular disk encryption products: BitLocker,
which comes with Windows Vista; FileVault, which comes with MacOS
X; and dm-crypt, which is used with Linux.

More info: http://citp.princeton.edu/memory

Paper: http://citp.princeton.edu.nyud.net/pub/coldboot.pdf

Their key-recovery technique gets a lot of mileage from using the
computed key schedule for each round of AES or DES to provide
redundant copies of the bits of the key. If the computer cleared
the key-schedule storage, keeping only the key itself, when the
system enters sleep mode or when the screen-saver password mode
kicks in, this attack would be much harder.

If, in addition, the key was kept XORed with the secure hash of a
large block of random memory, as suggested in their countermeasures
section, their attacks would be considerably more difficult.

These seem to be simple, low overhead countermeasures that provide
value for machines like laptops in transit.
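
A minimal sketch of that XOR countermeasure, assuming SHA-256 as the hash and a 64 KB random block (the paper's parameters may differ). The point is that an attacker reading decaying RAM must now recover every bit of the block intact: errors anywhere in it change the hash and destroy the mask.

```python
import hashlib
import secrets

def mask_key(key: bytes):
    # Store the key only as (block, key XOR H(block)); works for keys up
    # to 32 bytes with SHA-256, which covers an AES-256 key.
    block = secrets.token_bytes(64 * 1024)
    mask = hashlib.sha256(block).digest()[: len(key)]
    return block, bytes(k ^ m for k, m in zip(key, mask))

def unmask_key(block: bytes, stored: bytes) -> bytes:
    # Recompute the mask from the block; any bit decay in the block
    # yields a completely different mask and thus garbage key bits.
    mask = hashlib.sha256(block).digest()[: len(stored)]
    return bytes(s ^ m for s, m in zip(stored, mask))
```

The cost per use is one hash over the block, which fits the "simple, low overhead" description above.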

Cheers - Bill

-
Bill Frantz| The first thing you need when  | Periwinkle
(408)356-8506  | using a perimeter defense is a | 16345 Englewood Ave
www.pwpconsult.com | perimeter. | Los Gatos, CA 95032



Re: Gutmann Soundwave Therapy

2008-02-06 Thread Bill Frantz
[EMAIL PROTECTED] (Peter Gutmann) on Monday, February 4, 2008 wrote:

Eric Rescorla [EMAIL PROTECTED] writes:

I don't propose to get into an extended debate about whether it is better to
use SRTP or to use generic DTLS. That debate has already happened in IETF and
SRTP is what the VoIP vendors are doing. However, the good news here is that
you can use DTLS to key SRTP (draft-ietf-avt-dtls-srtp), so there's no need
to invent a new key management scheme.

Hmm, given this X-to-key-Y pattern (your DTLS-for-SRTP example, as well as
OpenVPN using ESP with TLS keying), I wonder if it's worth unbundling the key
exchange from the transport?  At the moment there's (at least):

  TLS-keying --+-- TLS transport
   |
   +-- DTLS transport
   |
   +-- IPsec (ESP) transport
   |
   +-- SRTP transport
   |
   +-- Heck, SSH transport if you really want

Is the TLS handshake the universal impedance-matcher of secure-session
mechanisms?

If there had been a separation between the key exchange and
validation part of SSL (early TLS) and the transport part, the E
language protocol[1] almost certainly would have used the transport
part of the protocol.  The reasons at the time for not using SSL are
described in [2].  They are all associated with the connection and
cryptographic setup.

Simplified overview:

When an E program needs to contact a remote E program, it starts
with a hash of the other program's public key and a large random
number, the Swiss number.  It gets the IP and port of the remote
program from a well-known network service called the Process Location
Service.  It then contacts that IP and port, sends its public key,
receives the remote public key, performs a Diffie-Hellman exchange
for forward secrecy, checks the hash of the remote public key, and
sends a signature over the exchange.  It checks the remote program's
signature over the exchange and, if all the checks pass, sends the
encrypted Swiss number to identify the specific remote resource.
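
The fingerprint check at the heart of this self-authenticating exchange can be sketched as follows; SHA-256 stands in for whatever hash the E implementation actually uses, and the function name is mine.

```python
import hashlib
import hmac

def check_remote_key(expected_hash: bytes, remote_pubkey: bytes) -> bool:
    # The sturdy reference already carries a hash of the peer's public
    # key, so the key received over the wire authenticates itself: no
    # certificate chain or third-party CA is needed.
    actual = hashlib.sha256(remote_pubkey).digest()
    # Constant-time comparison avoids leaking match position via timing.
    return hmac.compare_digest(expected_hash, actual)
```

This is exactly the piece that has no natural home in an X.509 chain: the trust anchor is the reference itself, not a CA.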

I couldn't see any way to take this self-authenticating key exchange
and jam it into an X.509 structure.  Perhaps I wasn't inventive
enough, but I ended up rolling my own transport protocol, at certain
extra cost in development and testing, and a significant risk of
security errors.

Cheers - Bill

[1] http://www.erights.org/elib/distrib/vattp/index.html

[2] http://www.erights.org/elib/distrib/vattp/SSLvsDataComm.html

---
Bill Frantz| gets() remains as a monument | Periwinkle
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032



Re: Death of antivirus software imminent

2008-01-03 Thread Bill Frantz
[EMAIL PROTECTED] (Jason) on Wednesday, January 2, 2008 wrote:

On the other hand, writing an OS that doesn't get infected in the first place 
is a fundamentally winning battle: OSes are insecure because people make 
mistakes, not because they're fundamentally insecurable.

I fully agree that a better OS would go a long way toward helping in
the battle.  We even know some techniques for building a better OS. 
Consider plash http://sourceforge.net/projects/plash/, and Polaris
http://www.hpl.hp.com/techreports/2004/HPL-2004-221.html, both of
which run programs for a user with less than that user's privilege. 
This technique helps prevent viruses from infecting computers by
denying them write privileges to system files and most of the user's
files.

The model that any program a user runs can do anything that user is
permitted to do is fundamentally broken.  It is the model that all
current popular OSes support, so in that sense these OSes are
insecure.  The only mistake users make in many cases is running
software with bugs such as buffer overruns, where the virus then
uses the user's privileges to take over their system.  In these
cases, IMHO, blaming the user is inappropriate.  And in all cases,
OSes should give the user more support in making sound decisions. 
See for example: http://www.skyhunter.com/marcs/granmaRulesPola.html

Cheers - Bill

-
Bill Frantz| The first thing you need when  | Periwinkle
(408)356-8506  | using a perimeter defense is a | 16345 Englewood Ave
www.pwpconsult.com | perimeter. | Los Gatos, CA 95032



Re: Death of antivirus software imminent

2008-01-02 Thread Bill Frantz
On Dec 29, 2007, at 6:37 PM, Anne & Lynn Wheeler wrote:
 Virtualization still hot, death of antivirus software imminent

My favorite virtual machine use is for the virus to install itself
as a virtual machine monitor, and run the OS in a virtual machine.
This technique should be really good for hiding from virus scanners.

Cheers - Bill

---
Bill Frantz| I like the farmers' market   | Periwinkle
(408)356-8506  | because I can get fruits and | 16345 Englewood Ave
www.pwpconsult.com | vegetables without stickers. | Los Gatos, CA 95032



Re: no surprise - Sun fails to open source the crypto part of Java

2007-05-14 Thread Bill Frantz
[EMAIL PROTECTED] (Ian G) on Monday, May 14, 2007 wrote:

Third option:  the architecture of Sun's Java crypto 
framework is based on motives that should have been avoided, 
and have come back to bite (again).

I think it is likely that Sun architected the Java crypto framework to
be able to obey the letter of the export regulations then in effect. 
They made it so the actual crypto implementations could differ from
country to country, and from supplier to supplier, while trying (not
very successfully IMHO) to provide a consistent application interface.

Cheers - Bill

---
Bill Frantz| I like the farmers' market   | Periwinkle
(408)356-8506  | because I can get fruits and | 16345 Englewood Ave
www.pwpconsult.com | vegetables without stickers. | Los Gatos, CA 95032



Re: Exponent 3 damage spreads...

2006-09-15 Thread Bill Frantz
[EMAIL PROTECTED] (James A. Donald) on Thursday, September 14, 2006 wrote:

Obviously we do need a standard for describing structured data, and we 
need a standard that leads to that structured data being expressed 
concisely and compactly, but seems to me that ASN.1 is causing a lot of 
grief.

What is wrong with it, what alternatives are there to it, or how can it 
be fixed?

In SPKI we used S-Expressions.  They have the advantage of being simple,
perhaps even too simple.

In describing interfaces in the KeyKOS design document
http://www.cis.upenn.edu/~KeyKOS/agorics/KeyKos/Gnosis/keywelcome.html
we used a notation similar to S-Expressions which was:

(length, data)

These could be combined into a structure: e.g. (4, len), (len, data) for
data preceded by a four-byte length field.  If you standardize that the
data is always right-justified in a field of length len, and that
binary data is encoded with a standard encoding (hexadecimal,
6-bit/character, decimal, etc.), most of the problems I have seen
described in this thread should just go away.

Some might object that having a specific number of bits for the length
field limits future expansion of this approach.  Indeed, ASN.1 avoids
this issue by allowing the encoding of arbitrary-length integers, and
XML does the same.  The cost of that flexibility is much more difficult
encoding and decoding.  If a length-field length of 4 to 8 bytes (32 to
64 bits) is chosen, as a practical matter, any length of data that is
transmittable in an exchange can be represented.  (A terabit/second is
10**12 bits/second.  A 64-bit length can represent about 1.8*10**19
bits, roughly 200 days of data at that rate.)
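
A sketch of the (length, data) encoding with a fixed four-byte length field; the function names are mine, not from KeyKOS or SPKI. Note there is exactly one encoding for any given byte string, which avoids the malleable-encoding problems discussed in this thread.

```python
import struct

def encode_field(data: bytes) -> bytes:
    # (length, data): a fixed four-byte big-endian length, then the bytes.
    return struct.pack(">I", len(data)) + data

def decode_field(buf: bytes, offset: int = 0):
    # Returns the field's bytes and the offset of the next field, so a
    # structure decodes as a simple sequence of (length, data) pairs.
    (length,) = struct.unpack_from(">I", buf, offset)
    start = offset + 4
    return buf[start:start + length], start + length

# Structures compose by concatenation:
record = encode_field(b"alice") + encode_field(b"rsa-key-bytes")
```

Compare the handful of lines above with a full BER/DER codec: the simplicity is the whole argument.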

Cheers - Bill

---
Bill Frantz| gets() remains as a monument | Periwinkle 
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032



Re: encrypted file system issues (was Re: PGP master keys)

2006-05-02 Thread Bill Frantz
[A bit off topic but I thought I'd let it through anyway. Those
uninterested in OS design should skip the rest of this message. --Perry]

On 5/1/06, [EMAIL PROTECTED] (Perry E. Metzger) wrote:

Disk encryption systems like CGD work
on the block level, and do not propagate CBC operations across blocks,
so if the atomic disk block write assumption is correct (and almost
all modern file systems operate on that assumption), you have no more
real risk of corruption than you would in any other application.

I haven't seen the failure specs on modern disk systems, but the KeyKOS
developers ran into an interesting (and documented) failure mode on IBM
disks about 20 years ago.  Those IBM systems connected disks to a
controller which was connected to a channel which was a specialized
processor with DMA access to the main storage of the system.  Note that
these systems were designed in the days when memory was expensive, so
there was an absolute minimum of buffering in the channel, controller,
and disk.

There are many possible failure modes, including power failure on the
individual components, hardware failure/microprogram failure in the
components, etc.  The failure we experienced was a microcode hang in the
channel (probably caused by a transient hardware failure), which also
stopped the CPU.  The failure occurred while the controller and disk was
writing a block, and the channel ceased providing data.  The
specification for the controller was if the channel failed to provide
data, it filled the block with the last byte received from the channel. 
If the channel and CPU had been running, the overrun would have been
reported back to the OS with an interrupt.  As it was, all we had was a
partially klobbered disk block.

Since KeyKOS was supposed to be a high reliability OS, we needed to code
for this situation.  Because of the design of the disk I/O system, there
were only two disk blocks (copies of each other) where this kind of
failure could cause a problem.  We defined the format of these blocks so
the last two bytes were 0xFF00.  By checking for this pattern, we could
determine if the block has been partially klobbered.  We then had to
ensure that we checked for correct write on one of the blocks before
starting to write the other.
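
The sentinel check can be sketched as follows; the 512-byte block size and zero padding are illustrative assumptions, not the KeyKOS layout. Because the controller filled a short write with the last byte received, a torn block ends in two equal bytes, which can never match the 0xFF 0x00 pattern.

```python
SENTINEL = b"\xff\x00"
BLOCK_SIZE = 512

def format_block(payload: bytes) -> bytes:
    # Reserve the last two bytes of every critical block for the sentinel.
    assert len(payload) <= BLOCK_SIZE - 2
    return payload.ljust(BLOCK_SIZE - 2, b"\x00") + SENTINEL

def looks_torn(block: bytes) -> bool:
    # A fill of repeated bytes cannot end 0xFF 0x00, since the two
    # sentinel bytes differ; so a complete write is distinguishable
    # from a partial one.
    return block[-2:] != SENTINEL
```

Checking one copy for an intact sentinel before overwriting the other preserves the invariant that at least one good copy always exists.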

Does anyone have any idea how modern disks and computers handle similar
situations?

Cheers - Bill

---
Bill Frantz| gets() remains as a monument | Periwinkle 
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032



Re: Linux RNG paper

2006-03-22 Thread Bill Frantz
On 3/21/06, [EMAIL PROTECTED] (Heyman, Michael) wrote:

Gutterman, Pinkas, and Reinman have produced a nice as-built specification
and analysis of the Linux random number generator.

From http://eprint.iacr.org/2006/086.pdf:

...

"Since randomness is often consumed in a multi-user environment, it makes
sense to generalize the BH model to such environments. Ideally, each user
should have its own random-number generator, and these generators should be
refreshed with different data which is all derived from the entropy sources
available to the system (perhaps after going through an additional PRNG).
This architecture should prevent denial-of-service attacks, and prevent one
user from learning about the randomness used by other users."

One of my pet peeves: The idea that the user is the proper atom of
protection in an OS.

My threat model includes different programs run by one (human) user.  If
a Trojan, running as part of my userID, can learn something about the
random numbers harvested by my browser/gpg/ssh etc., then it can start
to attack the keys used by those applications, even if the OS does a
good job of keeping the memory spaces separate and protected.
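
The per-consumer generalization the authors suggest can be sketched as a simple derivation function; keying it by program identity rather than user ID addresses the objection above. This is an illustrative sketch, not the Linux design.

```python
import hashlib

def consumer_output(pool_state: bytes, consumer_id: str, counter: int) -> bytes:
    # Each consumer's stream is derived from the shared pool plus its own
    # identifier and a counter, so observing one consumer's outputs
    # reveals nothing about another's. Making consumer_id the program
    # (browser, gpg, ssh) rather than the user ID isolates applications
    # from a Trojan running under the same account.
    material = pool_state + consumer_id.encode() + counter.to_bytes(8, "big")
    return hashlib.sha256(material).digest()
```

A real design would also ratchet `pool_state` forward so a later compromise cannot reconstruct earlier outputs.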

Cheers - Bill

-
Bill Frantz| The first thing you need   | Periwinkle 
(408)356-8506  | when using a perimeter | 16345 Englewood Ave
www.pwpconsult.com | defense is a perimeter.| Los Gatos, CA 95032



Re: browser vendors and CAs agreeing on high-assurance certificates

2006-01-05 Thread Bill Frantz
On 12/24/05, [EMAIL PROTECTED] (Ben Laurie) wrote:

I don't see why not - the technical details actually matter. Since the
servers will all share a socket, on any normal architecture, they'll all
have access to everyone's private keys. So, what is gained by having
separate certs?

I responded in private email:

 With a POLA architecture, perhaps on a capability OS (dream, dream),
 they might not share access to the private keys.  However, given current
 software, I grant your point.

Ben responded that I should post my comments to the list.

There are two scenarios I see as being viable for separating the private
keys with a security barrier.  One is the single machine case alluded to
above.  Here the private keys would be in separate security domains, and
the common part of the web server, which listens on the socket, would
read the initial data on the TCP connection, select the correct security
domain, and pass the connection to that domain. While the common part
could continue to examine all the data, those data would be encrypted,
so the it would have the same access as any other untrusted node in the
path.

The other scenario involves a network switch which performs the function
of the common code of the web server.  It uses network address
translation to forward the connection's packets to the back-end computer
with the correct private key.  Here the keys are protected by being kept
on separate computers.

Both approaches allow a web hosting ISP to protect its customers from
each other.  This mutual protection is much the same requirement as
existed in the time-sharing systems KeyKOS was designed to support.

Cheers - Bill

---
Bill Frantz| gets() remains as a monument | Periwinkle 
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032



Re: PKI too confusing to prevent phishing, part 28

2005-09-27 Thread Bill Frantz
On 9/25/05, [EMAIL PROTECTED] (Paul Hoffman) wrote:

http://www.informationweek.com/story/showArticle.jhtml?articleID=171200010

Summary: some phishes are going to SSL-secured sites that offer up 
their own self-signed cert. Users see the warning and say "I've seen 
that dialog box before, no problem," and accept the cert. From that 
point on, the all-important lock is showing so they feel safe.

One important point is that the dialog box will appear the same even if
the self-signed cert is signed by a different key.  The browser has no
memory of previously accessed sites.  It takes something like the petname
or trustbar tools to provide the memory that makes self-signed certs work
like SSH keys.
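A minimal sketch of that missing memory, in the trust-on-first-use style of SSH known_hosts (CertMemory and the fingerprints are hypothetical, for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

// Toy petname-style memory for self-signed certs: remember the key
// fingerprint seen on first contact, and warn only when it changes --
// the way SSH's known_hosts file works.
public class CertMemory {
    private final Map<String, String> known = new HashMap<>();

    // Returns true if the fingerprint matches what we saw before
    // (or this is the first visit, which we record).
    boolean check(String site, String fingerprint) {
        String prev = known.putIfAbsent(site, fingerprint);
        return prev == null || prev.equals(fingerprint);
    }

    public static void main(String[] args) {
        CertMemory m = new CertMemory();
        System.out.println(m.check("bank.example", "aa:bb"));  // true: first visit
        System.out.println(m.check("bank.example", "aa:bb"));  // true: same key
        System.out.println(m.check("bank.example", "cc:dd"));  // false: key changed
    }
}
```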

Cheers - Bill

-
Bill Frantz| The first thing you need   | Periwinkle 
(408)356-8506  | when using a perimeter | 16345 Englewood Ave
www.pwpconsult.com | defense is a perimeter.| Los Gatos, CA 95032

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Java: Helping the world build bigger idiots

2005-09-23 Thread Bill Frantz
On 9/22/05, [EMAIL PROTECTED] (Olle Mulmo) wrote:

Peter's example is standard to the language. It's just not used much 
by those influenced by other idioms prior to learning Java.

I guess another way of saying this is: the people on this list are 
getting old. :-)

I guess insisting on correct error handling is just for old people.

Peter's example:

  try {
    int idx = 0;

    while (true) {
      displayProductInfo(prodnums[idx]);
      idx++;
    }
  }
  catch (IndexOutOfBoundsException ex) {
    // nil
  }

has a serious bug in error handling.  We do not know where the
IndexOutOfBoundsException was raised.  Was it raised in the while loop,
the expected case, or was it raised in the displayProductInfo method
due to some bug in that method?  (It could also be raised in some other
method called by displayProductInfo.)

In order for this code to be correct, we would have to prove that the
displayProductInfo method either could not raise this exception, or that
it caught and handled any IndexOutOfBoundsException raised in
it or in methods it calls.  In either case, we must examine the details of
displayProductInfo, and depend on our conclusions remaining correct
during maintenance.  This level of coupling between caller and callee is
too risky for reliable software.
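The coupling can be made concrete with a toy stand-in (both the class and the deliberate internal bug are hypothetical): the catch swallows the callee's own indexing bug, and the loop silently stops partway through the array:

```java
// The catch cannot tell whether the exception came from the loop's own
// indexing or from a bug inside the callee.  displayProductInfo here
// is a stand-in with a deliberate internal indexing bug.
public class MaskingDemo {
    static int calls = 0;

    static void displayProductInfo(int prodnum) {
        calls++;
        int[] table = new int[1];
        // Deliberate bug: any prodnum other than 0 indexes out of bounds.
        System.out.println(table[prodnum]);
    }

    public static void main(String[] args) {
        int[] prodnums = {0, 5, 0, 0};   // four products
        try {
            int idx = 0;
            while (true) {
                displayProductInfo(prodnums[idx]);
                idx++;
            }
        } catch (IndexOutOfBoundsException ex) {
            // Silently treated as "end of array" -- but the callee's
            // bug, not the loop index, stopped the loop.
        }
        System.out.println(calls); // 2: the loop died on the second product
    }
}
```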

Cheers - Bill

-
Bill Frantz| The first thing you need   | Periwinkle 
(408)356-8506  | when using a perimeter | 16345 Englewood Ave
www.pwpconsult.com | defense is a perimeter.| Los Gatos, CA 95032



Re: [Clips] Contactless payments and the security challenges

2005-09-21 Thread Bill Frantz
One issue I have not seen addressed in these contactless payment systems is 
the needs of people who carry multiple payment instruments.  A simple example 
is a personal and a corporate credit card.

Cheers - Bill

-
Bill Frantz| The first thing you need   | Periwinkle 
(408)356-8506  | when using a perimeter | 16345 Englewood Ave
www.pwpconsult.com | defense is a perimeter.| Los Gatos, CA 95032



Re: [Clips] Contactless payments and the security challenges

2005-09-21 Thread Bill Frantz
On 9/21/05, [EMAIL PROTECTED] (Nick Owen) wrote:

Interesting question.  I know that we can solve it on a
application-enabled cell phone with public keys - each service has only
swapped public keys so you can have any number.  Can such a thing be
done on an RFID card too?

Bill Frantz wrote:
 One issue I have not seen addressed in these contactless payment systems 
is the needs of people who carry multiple payment instruments.  A simple 
example is a personal and a corporate credit card.

It seems to me a use case is paying for a meal.  The cost may be
personal: I've taken my wife out to dinner and a show; or corporate: I'm
on a business trip.  I need to specify which payment instrument is to be
used, and not have it automatically sniffed out of my wallet or cell
phone.

If payment means putting the token next to the reader, i.e. a read
distance of only a few centimeters, then there should be no problem.  If
payment happens at RFID distances, then I'll need Faraday shields for the
tokens, eliminating most of the value of contactless payments.

Cheers - Bill

---
Bill Frantz| gets() remains as a monument | Periwinkle 
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032



Re: Java: Helping the world build bigger idiots

2005-09-20 Thread Bill Frantz
On 9/19/05, [EMAIL PROTECTED] (Peter Gutmann) wrote:

Found on the Daily WTF, http://www.thedailywtf.com/forums/43223/ShowPost.aspx:

  try {
    int idx = 0;

    while (true) {
      displayProductInfo(prodnums[idx]);
      idx++;
    }
  }
  catch (IndexOutOfBoundsException ex) {
    // nil
  }

This is obviously just an attempt to make Java array access more like Java file 
access.  :-)

Seriously, the real flaw in this approach, which I did not see mentioned in the 
comments on the web page Peter references above, is the masking of 
IndexOutOfBoundsExceptions that may be generated by displayProductInfo.  This 
code will treat such errors as end of array.  A more normal coding of the 
loop:

for (int idx = 0; idx < prodnums.length; idx++) { 
  displayProductInfo(prodnums[idx]);
} 

would let the exception pass up the call chain, and with good error handling, 
the problem would come to the attention of those responsible for fixing the 
program.

If ArrayIndexOutOfBoundsException were caught instead of 
IndexOutOfBoundsException, errors in string indexing would pass up the call 
chain, while array problems would still be caught.


Cheers - Bill

-
Bill Frantz| The first thing you need   | Periwinkle 
(408)356-8506  | when using a perimeter | 16345 Englewood Ave
www.pwpconsult.com | defense is a perimeter.| Los Gatos, CA 95032



Re: Clearing sensitive in-memory data in perl

2005-09-16 Thread Bill Frantz
On 9/13/05, [EMAIL PROTECTED] (Perry E. Metzger) wrote:


Generally speaking, I think software with a security impact should not
be written in C.

I agree.  I also note that Paul A. Karger and Roger R. Schell, in their
paper, "Thirty Years Later: Lessons from the Multics Security
Evaluation", state:

2.3.1 Programming in PL/I for Better Security

Multics was one of the first operating systems to be
implemented in a higher level language.(1) While the Multics
developers considered the use of several languages,
including BCPL (an ancestor of C) and AED (Algol Extended
for Design), they ultimately settled on PL/I [15].

Although PL/I had some influence on the development
of C, the differences in the handling of varying length
data structures between the two languages can be seen as
a major cause of buffer overflows. In C, the length of all
character strings is varying and can only be determined by
searching for a null byte. By contrast, PL/I character
strings may be either fixed length or varying length, but a
maximum length must always be specified, either at compile
time or in an argument descriptor or in another variable
using the REFER option. When PL/I strings are used
or copied, the maximum length specifications are honored
by the compiled code, resulting in automatic string truncation
or padding, even when full string length checking is
not enabled. The net result is that a PL/I programmer
would have to work very hard to program a buffer overflow
error, while a C programmer has to work very hard
to avoid programming a buffer overflow error.

Multics added one additional feature in its runtime
support that could detect mismatches between calling and
called argument descriptors of separately compiled programs
and raise an error.

PL/I also provides richer features for arrays and structures.
While these differences are not as immediately
visible as the character string differences, an algorithm
coded in PL/I will have less need for pointers and pointer
arithmetic than the same algorithm coded in C. Again,
the compiler will do automatic truncation or padding,
even when full array bounds checking is not enabled.

While neither PL/I nor C are strongly typed languages
and security errors are possible in both languages, PL/I
programs tend to suffer significantly fewer security problems
than the corresponding C programs.


(1) Burroughs’ use of ALGOL for the B5000 operating system was well
known to the original Multics designers.

15. Corbató, F.J., PL/I As a Tool for System Programming.
Datamation, May 1969. 15(5): p. 68-76. URL:
http://home.nycap.rr.com/pflass/plisprg.htm


Cheers - Bill

---
Bill Frantz| gets() remains as a monument | Periwinkle 
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032



Possible non-extension property for hash functions

2005-08-06 Thread Bill Frantz
In Steve Bellovin and Eric Rescorla's paper, Deploying a New Hash Algorithm*, 
the authors note the well known property of hash functions:

For two different strings x and y,

H(x) = H(y)  ==>  H(x||s) = H(y||s)


It seems to me that there might be a class of hash functions for which this 
property did not hold.  While hashes in this class might require random access 
to the entire input, they could prevent the message extension class of attack 
exploited by Lucks and Daum (see 
http://th.informatik.uni-mannheim.de/people/lucks/HashCollisions) when they 
generated two Postscript files with very different output, but the same MD5 
hash.

* A draft of Bellovin and Rescorla's paper is available at 
http://www.cs.columbia.edu/~smb/papers/new-hash.ps and 
http://www.cs.columbia.edu/~smb/papers/new-hash.pdf.
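A toy illustration (emphatically not a real hash) of why every iterated, Merkle-Damgard style construction has this property: once x and y drive the compression function to the same internal state, any common suffix s preserves the collision. The compression function below is deliberately weak so that a collision is easy to exhibit:

```java
// Toy iterated hash: state <- compress(state, byte), starting from a
// fixed IV.  The compression is order-insensitive, so "ab" and "ba"
// collide -- standing in for a real collision like the MD5 pair.
public class ToyHash {
    static int compress(int state, byte b) {
        return (state + (b & 0xff)) % 251;
    }

    static int hash(byte[] msg) {
        int state = 17;                 // fixed IV
        for (byte b : msg) state = compress(state, b);
        return state;
    }

    public static void main(String[] args) {
        byte[] x  = "ab".getBytes(),       y  = "ba".getBytes();
        byte[] xs = "abSUFFIX".getBytes(), ys = "baSUFFIX".getBytes();
        System.out.println(hash(x) == hash(y));    // true: a collision
        System.out.println(hash(xs) == hash(ys));  // true: it extends
    }
}
```

A hash in the hypothesized non-extension class would have to break this state-chaining structure, which is why random access to the entire input seems necessary.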

Cheers - Bill

-
Bill Frantz| The first thing you need   | Periwinkle 
(408)356-8506  | when using a perimeter | 16345 Englewood Ave
www.pwpconsult.com | defense is a perimeter.| Los Gatos, CA 95032



Re: [Clips] Escaping Password Purgatory

2005-08-04 Thread Bill Frantz
On 8/3/05, [EMAIL PROTECTED] (R.A. Hettinga) quoted:


 http://www.forbes.com/2005/08/03/usps-password-casestudy-cx_de_0803password_print.html

 Forbes


 Computer Hardware Software
 Escaping Password Purgatory
 David M. Ewalt,  08.03.05, 3:00 PM ET

 ... I think I have passwords for
 over 47 different applications both internal and external that I access,
 and I've acquired those IDs and passwords over several years, says Wayne
 Grimes, manager of customer care operations for the U.S. Postal Service.

Try Site Password, http://www.hpl.hp.com/personal/Alan_Karp/site_password/.  
It takes a good master password, and a site name, and hashes them together to 
produce a site-specific password.
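The general idea, as a hedged sketch (this is not Alan Karp's exact algorithm; the separator byte and the 12-character truncation are assumptions for illustration):

```java
import java.security.MessageDigest;
import java.util.Base64;

// Sketch of a site-password scheme: hash the master password together
// with the site name and encode a prefix of the digest as the
// per-site password.  Deterministic, so nothing need be stored.
public class SitePw {
    static String derive(String master, String site) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest((master + "\u0000" + site).getBytes("UTF-8"));
        // 12 base64 characters encode 72 bits of the digest.
        return Base64.getEncoder().encodeToString(digest).substring(0, 12);
    }

    public static void main(String[] args) throws Exception {
        // Same inputs give the same password; different sites differ.
        System.out.println(derive("correct horse", "example.com"));
        System.out.println(derive("correct horse", "bank.example"));
    }
}
```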

Cheers - Bill


-
Bill Frantz| The first thing you need   | Periwinkle 
(408)356-8506  | when using a perimeter | 16345 Englewood Ave
www.pwpconsult.com | defense is a perimeter.| Los Gatos, CA 95032



Re: encrypted tapes

2005-06-09 Thread Bill Frantz
On 6/8/05, [EMAIL PROTECTED] (Perry E. Metzger) wrote:

If you have no other choice, pick keys for the next five years,
changing every six months, print them on a piece of paper, and put it
in several safe deposit boxes. Hardcode the keys in the backup
scripts. When your building burns to the ground, you can get the tapes
back from Iron Mountain and the keys from the safe deposit box.

I think I would be tempted to keep a private key in those safe deposit boxes, 
and when writing the backup tape, pick a random (as best you can with the 
hardware and software available) session key, encrypt it using the public key, 
hard coded in the backup procedure, and write the encrypted result as the first 
part of the backup.  This procedure allows you to keep your secrets hidden 
away, at least until you need to use one of the tapes.
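The procedure sketched above, in outline (illustrative only: a real backup would go on to encrypt the tape body with the session key in an authenticated mode; here we show just the header round trip):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class TapeKey {
    static boolean roundTrip() throws Exception {
        // The key pair: the private half goes to the safe deposit
        // boxes, the public half is hard coded in the backup script.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair pair = kpg.generateKeyPair();

        // Fresh session key picked for this backup run.
        SecretKey session = KeyGenerator.getInstance("AES").generateKey();

        // Header written as the first part of the tape: the session
        // key wrapped under the public key.
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.WRAP_MODE, pair.getPublic());
        byte[] header = rsa.wrap(session);

        // Recovery: fetch the private key from the box, unwrap the header.
        rsa.init(Cipher.UNWRAP_MODE, pair.getPrivate());
        SecretKey recovered =
            (SecretKey) rsa.unwrap(header, "AES", Cipher.SECRET_KEY);
        return Arrays.equals(session.getEncoded(), recovered.getEncoded());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip()); // true
    }
}
```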

Cheers - Bill

IP note:  This technique is so obvious to any practitioner skilled in the art 
as to be non-patentable (except in the USA, where obviousness is no barrier).  
In any case I put it into the public domain.

---
Bill Frantz| gets() remains as a monument | Periwinkle 
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032



Petname Tool version 0.5

2005-03-25 Thread Bill Frantz
Tyler Close has written an anti-phishing tool for the Firefox browser called 
the Petname tool.  It works with SSL sites, including those with self-signed 
certificates, and is available at http://www.waterken.com/user/PetnameTool/.

Mark Stiegler has written an overview of petname systems, including a list of 
existing examples, available at: 
http://www.skyhunter.com/marcs/petnames/IntroPetNames.html.  Note that Amir 
Herzberg and Ahmad Gbara's Trustbar system is an example of a petname system.

From the Petname Tool web site: "Need help avoiding phishing and spoofing 
attacks? The petname tool can help you keep it all straight by clearly 
distinguishing your online relationships.

"Using the petname tool, you can save a reminder note about a relationship you 
have with a site. The petname tool will then automatically display this 
reminder note every time you visit the site. After following a hyperlink, you 
need only check that the expected reminder note is being displayed. If so, you 
can be sure you are using the same site you have in the past."

Cheers - Bill

-
Bill Frantz| The first thing you need   | Periwinkle 
(408)356-8506  | when using a perimeter | 16345 Englewood Ave
www.pwpconsult.com | defense is a perimeter.| Los Gatos, CA 95032



Re: Cryptography Expert Paul Kocher Warns: Future DVDs Prime Target for Piracy, Pay TV Foreshadows Challenges

2004-04-22 Thread Bill Frantz
At 10:40 AM -0700 4/20/04, R. A. Hettinga wrote:
While it's unfortunate that security on the current DVD format is broken
and can't be reprogrammed, HD is what really matters. Once studios release
high-definition content, there will be little or no distinction between
studio-quality and consumer-quality, said Kocher. This means that HD is
probably Hollywood's one and only chance to get security right.

According to Kocher, Hollywood is following a path common to other
industries facing similar problems. Typically, first-generation security
systems fail irrecoverably, but later generations are designed to recover
from failures, Kocher said. As an example, he cites K-band (big dish)
satellite TV systems, which suffered from devastating piracy because
security flaws could not be corrected. Having learned this lesson, modern
pay TV systems place critical security components in smart cards or
security modules that can be replaced. While this approach is not optimal
because hardware upgrades are expensive, it has enabled the industry to
keep piracy at survivable levels.

Continuously changing the protection on permanent storage media is a much
more difficult problem than changing broadcast protection.  With broadcast,
you give current subscribers the new smart card, change what's broadcast,
and away you go.  With permanent storage media, once the protection is
broken, the content is still available to pirate.  Only new releases can be
protected with new protection schemes.

These technical considerations would seem to lead to a marketing strategy
of short product cycles driven by big advertising campaigns, to reap as
much profit as possible while piracy is still difficult.  This approach is
not new to the movie industry.  In recent years, the number of theaters
opening a big movie release has increased greatly, and the time it runs in
theaters has become shorter.

It is ironic to compare the marketing strategy of reaping most of the
profit quickly, with the public policy stance that long copyright terms are
necessary to provide incentive for production.

Cheers - Bill


-
Bill Frantz| There's nothing so clear as a | Periwinkle
(408)356-8506  | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com | down yet. -- Dean Tribble | Los Gatos, CA 95032




RE: voting

2004-04-16 Thread Bill Frantz
One area we are not addressing in voting security is absentee ballots.  The
use of absentee ballots is rising in US elections, and is even being
advocated as a way for individuals to get a printed ballot in jurisdictions
which use electronic-only voting machines.  Political parties are
encouraging their supporters to vote absentee.  I believe that one election
in Oregon was recently held entirely with absentee ballots.

For classic polling place elections, one strength of an electronic system
which prints paper ballots is that there are two separate paths for the
counts.  The machine can keep its own totals and report them at the end of
the election.  These totals can then be compared with the totals generated
for that precinct by counting the paper ballots.  This redundancy seems to
me to provide higher security than either system alone.

Cheers - Bill


-
Bill Frantz| There's nothing so clear as a | Periwinkle
(408)356-8506  | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com | down yet. -- Dean Tribble | Los Gatos, CA 95032




Re: Difference between TCPA-Hardware and other forms of trust

2003-12-20 Thread Bill Frantz
At 7:30 AM -0800 12/17/03, Jerrold Leichter wrote:

...

If the system were really trusted, it could store things like your credit
balance:  A vendor would trust your system's word about the contents, because
even you would not be able to modify the value.  This is what smart cards
attempt to offer - and, again, it would be really nice if you didn't have to
have a whole bunch of them.  The bank records stored on your system could
be trusted:  By the bank, by you - and, perhaps quite useful to you, by a
court if you claimed that the bank's records had been altered.

One should note that TCPA is designed to store its data (encrypted) in the
standard file system, so standard backup and restore techniques can be
used.  However, being able to backup my bank balance, buy a bunch of neat
stuff, and then restore the previous balance is not really what a banking
application wants.  Smart cards address this situation by storing the data
on the card, which is designed to be difficult to duplicate.

[I always thought the biggest contribution from Mondex was the idea of
deposit-only purses, which might reduce the incentive to rob late-night
businesses.]

Cheers - Bill



-
Bill Frantz| There's nothing so clear as a | Periwinkle
(408)356-8506  | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com | down yet. -- Dean Tribble | Los Gatos, CA 95032




Re: Clipper for luggage

2003-11-15 Thread Bill Frantz
At 9:27 AM -0800 11/13/03, David Turner wrote:
On Tue, 2003-11-11 at 22:31, Tim Dierks wrote:
  From the New York Times. Any guesses on how long it'll take before your
 local hacker will have a key which will open any piece of your luggage?

Local hacker, hell:

 It will also mean more peace of mind for
 passengers worried about reports of increased pilferage from unlocked bags.

... so, TSA people are stealing from unlocked bags.  The solution:

 In other words, we can open it, but no one else can.

... allow only the TSA to get into bags.  Brilliant!

Actually, this does have some security benefit, in that now TSA can be
effectively held responsible for thefts.  Still, the subject is quite
accurate, except that it won't be mandatory as Clipper is.

I've never seen a luggage lock that provides anything like what I would
call security.  On the other hand, unlocked luggage does sometimes open in
transit.  (I saw a suitcase open when it was dropped while being loaded
onto an airplane.)

I usually travel with zipper closed duffel bags.  I fasten the zipper
closed with a screw link.  Anyone can unscrew the link and get into the
bag, but it does effectively keep the zipper closed in transit.  I suppose
it also provides some level of security because someone wanting to do a
quick grab from luggage will probably pick a less-secured piece.

Cheers - Bill


-
Bill Frantz| There's nothing so clear as a | Periwinkle
(408)356-8506  | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com | down yet. -- Dean Tribble | Los Gatos, CA 95032




Trusting the Tools - was Re: Open Source ...

2003-10-11 Thread Bill Frantz
At 8:18 AM -0700 10/7/03, Rich Salz wrote:
Are you validating the toolchain?  (See Ken Thompson's
Turing Award lecture on trusting trust).

With KeyKOS, we used the argument that since the assembler we were using
was written and distributed before we designed KeyKOS, it was not feasible
to include code to subvert KeyKOS.  How do people feel about this form of
argument?

Cheers - Bill


-
Bill Frantz| There's nothing so clear as a | Periwinkle
(408)356-8506  | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com | down yet. -- Dean Tribble | Los Gatos, CA 95032




Re: [e-lang] Re: Protocol implementation errors

2003-10-11 Thread Bill Frantz
At 5:36 PM -0700 10/5/03, Norman Hardy wrote:
I can't recall Keykos security problems stemming from hostile message
strings in a key invocation.
I don't know why. Perhaps we always expected hostile messages as a
cultural thing.

I think there were several additional reasons for this:

* Most of the messages passed were very simple, consisting of just one
string.  We didn't get much more complex than, for example, the record
collection (which used byte string names to look up data and capabilities).
Its string format was:

  * 1 byte - length of name
  * n bytes (0-255) - the name
  * the rest - data

* We did not try to handle infinite length strings.  In general, strings
were limited to 4096 bytes.

* We were programming in 370 assembler, which, IMHO, is better suited to
writing secure code than C.  For example, the string copy primitive in C is
strcpy, where the source string determines the length.  In 370 assembler,
the primitive is MVCL (move long) which takes 5 parameters, the source
address and length, the destination address and length, and the fill
character.  Having to specify those 5 parameters made the issues involved
in a string copy very obvious each time one was coded.

* We did not, as a general rule, have a stack.  As a result, it was less
likely that program control data (return addresses) and buffers would sit
next to each other, where a buffer overrun could deposit both hostile code
and the changes to the program control data needed to pass control to it.
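The record-collection format quoted above is simple enough to parse in a few lines; a hypothetical sketch (RecordMsg is an illustrative name, not KeyKOS code):

```java
import java.util.Arrays;

// Parser for the record-collection message format described above:
// 1 length byte, then the name, then the rest of the message is data.
public class RecordMsg {
    final String name;
    final byte[] data;

    RecordMsg(byte[] msg) {
        if (msg.length < 1) throw new IllegalArgumentException("empty message");
        int n = msg[0] & 0xff;            // 0-255 name length
        if (msg.length < 1 + n) throw new IllegalArgumentException("short message");
        name = new String(msg, 1, n);
        data = Arrays.copyOfRange(msg, 1 + n, msg.length);
    }

    public static void main(String[] args) {
        byte[] msg = {3, 'k', 'e', 'y', 'd', 'a', 't', 'a'};
        RecordMsg m = new RecordMsg(msg);
        System.out.println(m.name + " / " + new String(m.data)); // key / data
    }
}
```

Note that both length checks reject malformed input explicitly, rather than relying on an exception from a later array access.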


At 10:58 PM -0700 10/3/03, Peter Gutmann wrote:
I would say the exact opposite: ASN.1 data, because of its TLV encoding, is
self-describing (c.f. RPC with XDR), which means that it can be submitted to a
static checker that will guarantee that the ASN.1 is well-formed.  In other
words it's possible to employ a simple firewall for ASN.1 that isn't possible
for many other formats (PGP, SSL, ssh, etc etc).  This is exactly what
cryptlib does, I'd be extremely surprised if anything could get past that.
Conversely, of all the PDU-parsing code I've written, the stuff that I worry
about most is that which handles the ad-hoc (a byte here, a unit32 there, a
string there, ...) formats of PGP, SSH, and SSL.  We've already seen half the
SSH implementations in existence taken out by the SSH malformed-packet
vulnerabilities, I can trivially crash programs like pgpdump (my standard PGP
analysis tool) with malformed PGP packets (I've also crashed quite a number of
SSH clients with malformed packets while fiddling with my SSH server code),
and I'm just waiting for someone to do the same thing with SSL packets.  In
terms of safe PDU formats, ASN.1 is the best one to work with in terms of
spotting problems.

I think Peter has convinced me that ASN.1 (by which I mean DER
specifically, since that is the form that the run-time code parses) is
probably not a whole lot worse than the other formats (which have also
shown significant parsing bugs).  While some of the problems come from the
details of the format, most probably come from the complex data structures
that are part of the problem space.  Given that this is a correct
assessment, we need to think of ways to protect ourselves against the
programs that parse these data.

If we were coding in KeyKOS or EROS, I would put the parsing code in a
separate domain.  To protect against denial of service that domain would
have a limited space bank, a keeper, and a timeout to signal parse failure
to the caller.  To guard against attack, that domain would only have
capabilities to:

* The input string and/or the input byte stream
* No hole creators for the output objects.

Further protection can be built by limiting the privilege of the domains
that use these output objects.  (Note that the cost of a domain call is
very small compared with the cost of a public key operation.)

I don't really know how to apply this approach to a UNIX like system.  I
think for these systems, it might be best to write the parsing code in a
safe language such as Java, Smalltalk, E, Scheme etc.  That would result
in an additional layer of protection from the runtime.  If the cost of
calling into and return from the language is small enough, the extra cost
of the language should be tolerable.  (Having to fire up a Java Virtual
Machine each time could make public key operations look fast.)

Cheers - Bill



-
Bill Frantz| There's nothing so clear as a | Periwinkle
(408)356-8506  | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com | down yet. -- Dean Tribble | Los Gatos, CA 95032


___
e-lang mailing list
[EMAIL PROTECTED]
http://www.eros-os.org/mailman/listinfo/e-lang


Protocol implementation errors

2003-10-03 Thread Bill Frantz
From:

 -- Security Alert Consensus --
   Number 039 (03.39)
  Thursday, October 2, 2003
Network Computing and the SANS Institute
  Powered by Neohapsis

*** {03.39.004} Cross - OpenSSL ASN.1 parsing vulns

OpenSSL versions 0.9.6j and 0.9.7b (as well as prior) contain multiple
bugs in the parsing of ASN.1 data, leading to denials of services. The
execution of arbitrary code is not yet confirmed, but it has not been
ruled out.

This is the second significant problem I have seen in applications that use
ASN.1 data formats.  (The first was in a widely deployed implementation of
SNMP.)  Given that good, security-conscious programmers have difficulty
getting ASN.1 parsing right, we should favor protocols that use
easier-to-parse data formats.

I think this leaves us with SSH.  Are there others?

Cheers - Bill


-
Bill Frantz| There's nothing so clear as   | Periwinkle
(408)356-8506  | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com | down yet. -- Dean Tribble | Los Gatos, CA 95032




Re: Reliance on Microsoft called risk to U.S. security

2003-10-02 Thread Bill Frantz
Peter has raised a number of important points.  Let me start by saying that
I do not see a strong distinction between a file to be viewed and a
program.  Both are instructions to the computer to perform some actions.
While we might think the renderer showing us flat ASCII text is quite
trustworthy, our degree of trust in an HTML renderer should be less, and we
shouldn't trust a Word format renderer at all (thanks to Word macro
viruses).

At 9:21 PM -0700 9/30/03, Peter Gutmann wrote:
Bill Frantz [EMAIL PROTECTED] writes:

The real problem is that the viewer software, whether it is an editor, PDF
viewer, or a computer language interpreter, runs with ALL the user's
privileges.  If we ran these programs with a minimum of privilege, most of
the problems would just go away.

This doesn't really work.  Consider the simple case where you run Outlook with
'nobody' privs rather than the current user privs.  You need to be able to
send and receive mail, so a worm that mails itself to others won't be slowed
down much.

I do not envision running either programs or viewers under the privileges
of the mail agent.  Since I am not really a Unix person, let me take a stab
at a design and let the people who know what they are doing take stabs at
it.

What we need is an environment of very limited privilege to use to confine
our untrusted code.  Specifically:

* No ability to make connections to other services, either over the network
or locally.  (I think this item requires a kernel change.)

* Very limited access to the file system.  We might take the view that
since we control all the ways the confined process can send data out of the
system, it can have full read-only access to the file system without
risking anything important.  I am told we can build general limits of file
system access with chroot, but I am also told that processes can break out
of these limits.  This is an area where I would love to learn more.

* We can pass in the privileges we think the process should have via open
file handles.  These will probably include the ability to render on a
portion of the screen, and the ability to get mouse and keyboard focus.
(We need a way for trusted code to mediate these accesses so the user can
have a secure attention function.)

* Strict control of other communication paths I haven't thought of.  :-)


In addition everyone's sending you HTML-formatted mail, so you
need access to (in effect) MSIE via the various HTML controls.

An HTML renderer should be able to run in the above environment.


Further, you
need Word and Excel and Powerpoint for all the attachments that people send
you.

For viewing Word etc. documents, the applications should run in the above
environment.  The interesting case is where someone has sent you something
like a Word document and asked you to mark it up.  Everything should
proceed well in the above until it comes time to save a local copy or mail
the changed document back.

http://www.combex.com/papers/darpa-report/html/ describes the Powerbox
pattern for allowing the user to specify an output file for a confined
process such as we are discussing.  I would think the best way to return
such a file in Unix would be as an open file handle.  However I don't know
of a way for a program to call a service with greater privilege than it has
and accept a return value unless that service is part of the kernel.
Perhaps some Unix experts can come up with ideas.

As for mailing the document back, if the mail agent gave the confined
program read-write access to a copy of the file, the confined program could
write its output over the input and the mail agent could then make that
file available to the user when the confined program returns, and the user
could include it in the reply email.


They need access to various subsystems like ODBC and who knows what else
as an extension of the above.

Since most users do not have these facilities running on their machines, I
suspect that most Word/Excel/Powerpoint files would render quite well from
a confined process.  I would say that having random, perhaps hostile, files
able to update my local data bases is a violation of my security policy.


As you follow these dependencies further and
further out, you eventually end up running what's more or less an MLS system
where you do normal work at one privilege level, read mail at another, and
browse the web at a third.  This was tried in the 1970s and 1980s and it
didn't work very well even if you were prepared to accept a (sizeable) loss of
functionality in exchange for having an MLS OS, and would be totally
unacceptable for someone today who expects to be able to click on anything in
sight and have it automatically processed by whatever app is assigned to it.

UIs have changed considerably since the 1980s.  It turns out that modern
UIs make it much easier for users to run programs with only the privileges
they need than the UIs of the 80s.  (See the email thread at
http://www.eros-os.org/pipermail/e-lang/2001-March

Re: Monoculture

2003-10-02 Thread Bill Frantz
At 8:32 PM -0700 10/1/03, Matt Blaze wrote:
It might be debatable whether only licensed electricians should
design and install electrical systems.  But hardly anyone would argue
that electrical system designers and installers needn't be competent
at what they do.  (Perhaps most of those who would advance such arguments
were electrocuted or killed in fires before they had a chance to make
their case).

In most of the US, a homeowner can install electrical systems in their
house.  However, their installation must be up to code, and inspected by a
government inspector.  The analogy for crypto protocols seems obvious,
although the inspector part seems more ad hoc and community-based.
(But there's no building permit either.)

Cheers - Bill


-
Bill Frantz| There's nothing so clear as   | Periwinkle
(408)356-8506  | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com | down yet. -- Dean Tribble | Los Gatos, CA 95032


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Reliance on Microsoft called risk to U.S. security

2003-09-28 Thread Bill Frantz
At 8:12 AM -0700 9/27/03, [EMAIL PROTECTED] wrote:
On Fri, 26 Sep 2003, Bill Frantz wrote:

 The real problem is that the viewer software, whether it is an editor, PDF
 viewer, or a computer language interpreter, runs with ALL the user's
 privileges.  If we ran these programs with a minimum of privilege, most of
 the problems would just go away.


And what privileges should the Perl interpreter run with when I click on a
.pl file? How would the graphical shell know what privileges to assign
to each file?

Given a strange program that has just arrived on my machine, my current
policy is not to run it at all.

I might be willing to adopt a policy of giving it a small amount of memory,
CPU, and some space to render on the screen.  That would allow people to
exchange active amusements with a degree of safety.

If the program required more privilege (for example, a new version of a
utility from a co-worker), I would be happy to move it to an environment
where it had the resources it needs to run.


On the other hand a *trivial* privilege system: View (zero privs) vs.
Run (full privs) is viable, and is one of the pre-requisites for a more
secure UI, along with the previously discussed trusted path issues,
non-spoofing of the security interface, ...

Limiting the privilege of the View program would provide protection
against flaws in the viewer.  Given the number of flaws in very basic
software, such protection seems to have real value.

Cheers - Bill


-
Bill Frantz| There's nothing so clear as   | Periwinkle
(408)356-8506  | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com | down yet. -- Dean Tribble | Los Gatos, CA 95032




Re: Reliance on Microsoft called risk to U.S. security

2003-09-26 Thread Bill Frantz
At 6:47 AM -0700 9/26/03, [EMAIL PROTECTED] wrote:
While part of the security problems in Windows are Microsoft specific, in
my view a large part is inherited from earlier graphical desktop designs,
and is almost universal in this space. Specifically, when a user clicks
(or double-clicks) on an icon there is not a clear distinction between
Run and View. Instead we have the polymorphic Open.

If files always opened in a safe viewer, (e.g. clicking on a .pl file
fired up an editor, not the ActiveState Perl interpreter) a good part of
the security problem with Graphical desktops, Microsoft's, Apple's,
RedHat's, ... would be solved. The bizarre advice we give users to not
open message attachments would be largely unnecessary (one also needs to
close the macro invocation problem, but this is not insurmountable).

It is my contention that so long as activating an icon does not
distinguish between Run and View all Graphical Shells will be
insecure.

The real problem is that the viewer software, whether it is an editor, PDF
viewer, or a computer language interpreter, runs with ALL the user's
privileges.  If we ran these programs with a minimum of privilege, most of
the problems would just go away.

See:
http://www.combex.com/tech/edesk.html
http://www.combex.com/papers/darpa-review/index.html
http://www.combex.com/papers/darpa-report/index.html

Cheers - Bill


-
Bill Frantz| There's nothing so clear as   | Periwinkle
(408)356-8506  | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com | down yet. -- Dean Tribble | Los Gatos, CA 95032




Re: End of the line for Ireland's dotcom star

2003-09-23 Thread Bill Frantz
At 12:45 PM -0700 9/23/03, Anne & Lynn Wheeler wrote:
At 01:06 PM 9/23/2003 -0400, R. A. Hettinga wrote:
http://www.guardian.co.uk/print/0,3858,4759214-103676,00.html

so ignore for the moment the little indiscretion
http://www.garlic.com/~lynn/2003l.html#44 Proposal for a new PKI model (At
least I hope it's new)
http://www.garlic.com/~lynn/2003l.html#50 Proposal for a new PKI model (At
least I hope it's new)

and the part of turning a simple authentication problem into a
significantly harder and error prone (along with exploits and
vulnerabilities ... not to say expensive) problem:
http://www.garlic.com/~lynn/aadsm15.htm#4 Is cryptography where security
took the wrong branch?
http://www.garlic.com/~lynn/aadsm15.htm#7 Is cryptography where security
took the wrong branch?
http://www.garlic.com/~lynn/aadsm15.htm#11 Resolving an identifier into a
meaning


there have been some past discussions of what happens to long term CA
private key management over an extended period of time, possibly involving
several corporate identities. Checking latest release browsers ... I find
two CA certificates for GTE cybertrust ... one issued in 1996 and good for
10 years and another issued in 1998 and good for 20 years.

so let's say as part of some audit ... is it still possible to show that
there has been long term, continuous, non-stop, highest security custodial
care of the GTE cybertrust CA private keys. If there hasn't ... would
anybody even know? ... and is there any institutional memory as to who
might be responsible for issuing a revocation for the keys? or responsible
for notifying anybody that the certificates no longer need be included in
future browsers?
--
Anne & Lynn Wheeler  http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm

Note that proposals such as Tyler Close's YURL
http://www.waterken.com/dev/YURL/  avoid the issue of trust in the
TTP/CA.  As such, I find them attractive whenever they can be used.

Cheers - Bill


-
Bill Frantz| There's nothing so clear as   | Periwinkle
(408)356-8506  | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com | down yet. -- Dean Tribble | Los Gatos, CA 95032




RE: Keyservers and Spam

2003-06-13 Thread Bill Frantz
At 2:35 PM -0700 6/13/03, Pat Farrell wrote:
At 11:56 AM 6/13/2003 -0400, John Kelsey wrote:
At 10:27 AM 6/11/03 -0700, bear wrote:
That is the theory.  In practice, as long as the PGP web of trust

The thing that strikes me is that the PGP web of trust idea is appropriate
for very close-knit communities, where reputations matter and people
mostly know one another.  A key signed by Carl Ellison or Jon Callas
actually means something to me, because I know those people.  But
transitive trust is just always a slippery and unsatisfactory sort of thing--

I may have missed it, but I thought that the web-o-trust model of PGP has
generally been dismissed by the crypto community
precisely because trust is not transitive.

Similarly, the tree-structured, hierarchical trust model has failed;
we currently have a one-level, not very trusted model with Verisign
or Thawte or yourself at the top.

I know from discussions with some of the SPKI folks that encouraging
self defined trust trees was one of the goals.

Of course, if the size of the tree is small enough, you can just
use shared secrets.

The HighFire project at Cryptorights
http://www.cryptorights.org/research/highfire/ is planning on building a
web of trust rooted in the NGOs who will be using the system.  Each NGO
will have a signing key.  An NGO will sign the keys of the people working
for it.  In this manner, we have a way of saying, "The John Jones who works
for Amnesty International."  An NGO may decide to sign another NGO's signing
key.  Now we have a way to say to someone in Amnesty, "Send a message to
Steve Smith in Médecins Sans Frontières."  The plan is to show the trust
relationship in the UI as a path of keys.

I would appreciate your comments.

Cheers - Bill


-
Bill Frantz   | A Jobless Recovery is | Periwinkle -- Consulting
(408)356-8506 | like a Breadless Sand- | 16345 Englewood Ave.
[EMAIL PROTECTED] | wich. -- Steve Schear | Los Gatos, CA 95032, USA





RE: Keyservers and Spam

2003-06-11 Thread Bill Frantz
To try to reflect some of David's points with a real-world situation.  I
was at work, with a brand new installation of PGP.  I wanted to send some
confidential data home so I could work with it.  However, I didn't have my
home key at work, so I didn't have a secure way to send either the data or
the work key.  I didn't even have the fingerprint of the home key.

My solution was to pull Carl Ellison's business card out of my pocket.  It
had his key fingerprint on it, and I remember getting it directly from him,
so I could trust the fingerprint.  Now Carl had signed my key, so when I
downloaded it from the key server, I could verify that it was indeed mine
(to the extent I trusted Carl).  Carl's signature, and the key server
allowed me to bootstrap trust into my own key.

At 3:53 PM -0700 6/10/03, David Honig wrote:
At 04:54 PM 6/10/03 +0100, [EMAIL PROTECTED] wrote:
I don't know you.  Why should I trust your signing of someone else's key?

If I know a mutual acquaintance, no need for web of trust.
...
If we allow this, then the entire web-of-trust disintegrates.

There *is no web of trust* unless you know the signers.  In which
case you may as well have them forward keys manually.

But with a key server, I didn't have to bother Carl to send me my key.  Or
depend on him being online when I needed it.

Cheers - Bill


-
Bill Frantz   | Due process for all| Periwinkle -- Consulting
(408)356-8506 | used to be the | 16345 Englewood Ave.
[EMAIL PROTECTED] | American way.  | Los Gatos, CA 95032, USA





Re: An attack on paypal

2003-06-10 Thread Bill Frantz
At 5:12 PM -0700 6/8/03, Anne & Lynn Wheeler wrote:
somebody (else) commented (in the thread) that anybody that currently
(still) writes code resulting in buffer overflow exploit maybe should be
thrown in jail.

A nice essay, partially on the need to include technological protections
against human error, included the above paragraph.

IMHO, the problem is that the C language is just too error prone to be used
for most software.  In "Thirty Years Later: Lessons from the Multics
Security Evaluation" (www.acsac.org/2002/papers/classic-multics.pdf),
Paul A. Karger and Roger R. Schell credit the use of PL/I for
the lack of buffer overruns in Multics.  However, in the Unix/Linux/PC/Mac
world, a successor language has not yet appeared.

YMMV - Bill


-
Bill Frantz   | Due process for all| Periwinkle -- Consulting
(408)356-8506 | used to be the | 16345 Englewood Ave.
[EMAIL PROTECTED] | American way.  | Los Gatos, CA 95032, USA





Re: Maybe It's Snake Oil All the Way Down

2003-06-04 Thread Bill Frantz
At 7:42 AM -0700 6/3/03, John Kelsey wrote:
I keep wondering how hard it would be to build a cordless phone system on
top of 802.11b with some kind of decent encryption being used.  I'd really
like to be able to move from a digital spread spectrum cordless phone
(which probably has a 16-bit key for the spreading sequence or some such
depressing thing) to a phone that can't be eavesdropped on without tapping
the wire.

rant

I've spent some time recently looking at Voice over IP (VoIP)
implementations.  My immediate reaction to reading the standards is that
they are a complete answer to a telephone company executive's wet dreams.
Conferencing, automatic call forwarding, billing, etc.: they're all
covered.  The result is a protocol that is beyond baroque and well into
rococo.  I think the various standards bodies are still trying to deal with
issues in the protocols that weren't thought of from the start.

Of course, once you have your call set up, you have to encrypt it.  Most of
the VoIP implementations use the Real-time Transport Protocol (RTP, RFC
1889), which requires two UDP ports through your firewall.  Then you have
to encrypt the RTP traffic.  I have seen reference to an encryption
protocol specifically for RTP, but a quick scan of STD1 didn't turn it up,
so it is probably still a draft.  I don't know anything about its security.

The other choice is IPSec.  IPSec seems happiest securing traffic between
machines with permanent IP addresses.  It is a nightmare to use with
Network Address Translation.

What would be really nice would be a VoIP system that used TCP instead of
UDP.  (I know that if TCP goes into error recovery, there is going to be
major jitter in the voice.  I know it will be hard to support conferencing.
I know it will not gracefully bridge to the POTS network.  Etc. I'm willing
to put up with that to avoid the pain that comes with UDP.)  Then I can
just tunnel it through SSH, or hack it to use SSL/TLS.  Oh well.

/rant

Cheers - Bill


-
Bill Frantz   | Due process for all| Periwinkle -- Consulting
(408)356-8506 | used to be the | 16345 Englewood Ave.
[EMAIL PROTECTED] | American way.  | Los Gatos, CA 95032, USA


