Re: [Cryptography] check-summed keys in secret ciphers?

2013-10-03 Thread Philipp Gühring
Hi,

Am 2013-09-30 10:16, schrieb ianG:
 I'm not really understanding the need for checksums on keys.
Perhaps it is a DLP (Data Leakage Prevention) technology. At least the
same method works great for credit card numbers:
"Oh, there is a 14-digit number being sent on an unclassified network,
and all the checksums are correct? Someone is trying to leak ...
terminate the connection, forensically analyze the machine, ..."

Best regards,
Philipp

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] TLS2

2013-09-30 Thread Philipp Gühring
Hi,

What I personally think would be necessary for TLS2:

* At least one quantum-computing-resistant algorithm, usable either as a
replacement for DH+RSA+EC or, preferably, as additional strength
(double encryption) for the transition period.

* Zero-knowledge password authentication (something like TLS-SRP), but
automatically re-encrypted inside a normal server-authenticated TLS session
(so that it is still encrypted to the server even if you used a weak password).

* Client certificates transmitted inside the encrypted channel, not in
plaintext.

Best regards,
Philipp 



Re: [Cryptography] *** SPAM *** dead man switch [was: Re: Snowden fabricated digital keys to get access to NSA servers?]

2013-07-09 Thread Philipp Gühring
Hi,

I would suggest secret key splitting (e.g. Shamir's scheme) with an n-out-of-m 
scheme: add decryption instructions, give everyone you trust who is not 
easily discoverable a share of the key plus the complete encrypted backups, and 
tell them to follow the instructions when they believe you are dead or imprisoned. 
(The instructions could be as simple as "boot your PC from this DVD and keep it 
running for at least a week". Given enough secret shares, it should work and be 
interference-safe, and still only be decryptable if n of the m trusted parties 
collaborate.)
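A minimal sketch of the n-out-of-m splitting suggested above, using Shamir's polynomial scheme over a prime field (illustrative only, not production code; the parameters are my own choices):

```python
# Shamir secret sharing: any k of n shares recover the secret, fewer reveal nothing.
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def split(secret, n, k):
    """Split `secret` (an int < P) into n shares; any k recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):       # Horner evaluation of the polynomial
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, 5, 3)          # 3-of-5 scheme
assert recover(shares[:3]) == 123456789  # any 3 shares suffice
assert recover(shares[1:4]) == 123456789
```

In the dead-man-switch scenario the secret would be the decryption key for the backups, and each trusted party holds one `(x, y)` share.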

Best regards,
Philipp



StealthMonger stealthmon...@nym.mixmin.net wrote:

Richard Salz rich.s...@gmail.com writes:

 How could it be arranged that if anything happens at all to Edward
 Snowden, he told me he has arranged for them to get access to the full
 archives?

 A lawyer or other (paid) confidant was given instructions that would
 disclose the key.  Do this if something happens to me.

An adversary can verify an open source robot, but not such instructions.

NSA cannot verify a claim that such instructions have been given (unless
they know the lawyer's identity, but in that case they can interfere).
(On the other hand, NSA cannot afford to assume that such a claim is a
bluff, and that's the strength of this idea.)

The intended interpretation of the open source clause in the original
problem statement is that anyone could inspect the workings of the robot
and verify that it does indeed harbor a secret and that if the signed
messages stop coming it will indeed release that secret.

(For example, in one implementation -- NOT CRYPTOGRAPHICALLY STRONG -- a
secret file's access permissions can only be granted by the robot.)


-- 


 -- StealthMonger stealthmon...@nym.mixmin.net
Long, random latency is part of the price of Internet anonymity.

   anonget: Is this anonymous browsing, or what?
http://groups.google.ws/group/alt.privacy.anon-server/msg/073f34abb668df33?dmode=sourceoutput=gplain

   stealthmail: Hide whether you're doing email, or when, or with whom.
   mailto:stealthsu...@nym.mixmin.net?subject=send%20index.html


Key:
mailto:stealthsu...@nym.mixmin.net?subject=send%20stealthmonger-key







Re: Unattended reboots (was Re: The clouds are not random enough)

2009-08-03 Thread Philipp Gühring
Hi,

 If you (or anyone on this forum) know of technology that allows the
 application to gain access to the crypto-hardware after an unattended
 reboot - but can prevent an attacker from gaining access to those keys
 after compromising a legitimate ID on the machine - I'd welcome hearing
 about it.  TIA.

I (re?)invented a concept for that application, which can be applied in
certain situations.
I started from the assumption that we are talking about something
like an e-business system hosted in a normal commercial environment
with wired, routed networks.
The attack vector I wanted to secure against was theft of the machines:
an attacker breaks into the building, steals the server, gets out again,
and tries to get access to the data afterwards.
As long as the machine stays in place, it should be able to reboot
unattended; as soon as it is somewhere else, it should not be able to
reboot unattended anymore.

The concept is to have a secondary server (or several secondary servers)
somewhere else, which has the necessary key available. It should be
situated in a place where it is highly unlikely that it also gets stolen
when the primary server gets stolen, and it has to be connected through
a somewhat trusted routed network.

The secondary server is configured with the IP address of the primary
server and regularly tries to contact it, every minute or so. (Or it uses
some other method to detect when the primary server needs the key.)
These connection attempts go over the routed network, so the routers must
be somewhat secured, and the links in between also have to be trusted.
When the secondary server succeeds in connecting to the primary server,
it authenticates the connection (with a key that is stored in plaintext
on the primary server, or perhaps derived from the hardware
configuration). If the authentication succeeds, the secondary server
sends the private key to the primary server, and the primary server can
continue to boot normally.
If an attacker steals the server and connects it at a different place in
the network, or somewhere else entirely, the secondary server will not be
able to reach the configured IP address, due to the routing, and will not
provide the key. The attacker's only options are to break in again, or to
trace back where the key server is, where the connections come from, ...

Because the connection originates from the secondary server rather than
the primary server, an attacker must both have the server and be at the
right place in the network.
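The push-based key delivery described above can be sketched as follows (illustrative only: the pre-shared `AUTH_KEY` challenge-response stands in for the authentication step, and the loopback interface stands in for the routed network):

```python
# Secondary server dials the *configured* primary address and, after a
# successful challenge-response, releases the disk key. If the machine was
# stolen and moved, routing makes the connect fail and no key is ever sent.
import hashlib, hmac, os, socket, threading

AUTH_KEY = b"pre-shared-auth-key"   # stored in plaintext on the primary (per the post)
DISK_KEY = os.urandom(32)           # the secret held only by the secondary server

def primary(listener, out):
    conn, _ = listener.accept()
    challenge = conn.recv(32)                           # secondary's challenge
    conn.sendall(hmac.new(AUTH_KEY, challenge, hashlib.sha256).digest())
    out.append(conn.recv(32))                           # receive the disk key
    conn.close()

def secondary(addr):
    s = socket.create_connection(addr, timeout=5)
    challenge = os.urandom(32)
    s.sendall(challenge)
    mac = s.recv(32)
    if hmac.compare_digest(mac, hmac.new(AUTH_KEY, challenge, hashlib.sha256).digest()):
        s.sendall(DISK_KEY)                             # authenticated: release the key
    s.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
received = []
t = threading.Thread(target=primary, args=(listener, received))
t.start()
secondary(listener.getsockname())
t.join()
assert received[0] == DISK_KEY      # primary can now finish booting
```

A real deployment would of course run the two halves on different machines and encrypt the channel; the point here is only the direction of the connection and the authentication before key release.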

It's not perfect security, but I think it's a reasonable tradeoff given
these threats and the need for high availability in such situations.

Please let me know if you hear about any other interesting solutions too.

Best regards,
Philipp Gühring

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Decimal encryption

2008-08-27 Thread Philipp Gühring
Hi,

I am searching for symmetric encryption algorithms for decimal strings.

Let's say we have various 40-digit decimal numbers:
2349823966232362361233845734628834823823
3250920019325023523623692235235728239462
0198230198519248209721383748374928601923

As far as I calculated, a decimal digit carries about 3.3219 bits, so
with 40 digits we have about 132.877 bits.

Now I would like to encrypt those numbers in a way that the result is a
decimal number again (that's one of the basic rules of symmetric
encryption algorithms as far as I remember).

Since 132.877 bits is similar to 128-bit encryption (like e.g. AES),
I would like to use an algorithm of somewhat comparable strength to AES.
But the problem is that I have 132.877 bits, not 128 bits. And I can't
cut it off or pad it, since the result has to be a 40-digit decimal
number again.

Does anyone know an algorithm that has reasonable strength and is able
to operate on non-binary data? Preferably on any chosen number base?
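This problem is now usually called format-preserving encryption; standard approaches are cycle-walking over a block cipher, or a Feistel network over the digit string (the FF1/FF3 constructions later standardized by NIST work this way). A toy Feistel sketch over the two 20-digit halves, with HMAC-SHA256 as the round function (my own illustration, NOT a vetted cipher):

```python
# Feistel network over 40-digit decimal strings: ciphertext is again 40 digits.
import hashlib, hmac

HALF = 10**20   # each half of a 40-digit number is a 20-digit value
ROUNDS = 10

def round_f(key, r, x):
    data = r.to_bytes(1, "big") + x.to_bytes(16, "big")
    return int.from_bytes(hmac.new(key, data, hashlib.sha256).digest(), "big") % HALF

def encrypt(key, s):
    a, b = int(s[:20]), int(s[20:])
    for r in range(ROUNDS):
        a, b = b, (a + round_f(key, r, b)) % HALF
    return f"{a:020d}{b:020d}"

def decrypt(key, s):
    a, b = int(s[:20]), int(s[20:])
    for r in reversed(range(ROUNDS)):
        a, b = (b - round_f(key, r, a)) % HALF, a
    return f"{a:020d}{b:020d}"

key = b"demo key"
ct = encrypt(key, "2349823966232362361233845734628834823823")
assert len(ct) == 40 and ct.isdigit()
assert decrypt(key, ct) == "2349823966232362361233845734628834823823"
```

The modular-addition Feistel keeps every intermediate value inside the decimal domain, which is exactly the "result must be a 40-digit decimal number" constraint.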

Best regards,
Philipp Gühring



Re: On the randomness of DNS

2008-08-03 Thread Philipp Gühring

Hi Ben,


http://www.cacert.at/cgi-bin/rngresults


Are you seriously saying that the entropy of FreeBSD /dev/random is 0?


Thanks for the notice, that was a broken upload by a user.

Best regards,
Philipp Gühring



Re: On the randomness of DNS

2008-07-31 Thread Philipp Gühring

Hi,

I would suggest using http://www.cacert.at/random/ to test the 
randomness of the DNS source ports. Due to the large variety of 
random-number sources that have been tested there already, it is useful 
as a classification service for unknown random-looking numbers.
You just have to collect 12 MB of numbers from a DNS server and upload 
them there. (If you get 2 bytes per request, that's 6 million requests 
you have to make.)



I don't see the point of evaluating the quality of a random number
generator by statistical tests.


We successfully used statistical tests to detect broken random number 
generators, we informed the vendors and they fixed them.

http://www.cacert.at/cgi-bin/rngresults

Best regards,
Philipp Gühring



Re: The perils of security tools

2008-06-03 Thread Philipp Gühring
Hi,

 It is not an implementaion issue but a requirement of the C standard.
 To avoid buffering use

setvbuf (fp, NULL, _IONBF, 0);

 right after the fopen.

Ah! Thanks a lot!

OK, I think that should be documented in the man pages of /dev/random and 
fgetc/fread, and in other related howtos.
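The prefetch effect (and the unbuffered fix) is easy to demonstrate from Python as well; here a temporary file stands in for /dev/random so no entropy pool is drained (illustrative sketch):

```python
# Buffered reads pull a whole stdio buffer from the descriptor even when the
# application asks for 1 byte; unbuffered reads consume exactly what is asked.
import os, tempfile

fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(65536))
os.close(fd)

buffered = open(path, "rb")                   # like fopen/fgetc: buffering on
buffered.read(1)
consumed_buffered = os.lseek(buffered.fileno(), 0, os.SEEK_CUR)
assert consumed_buffered > 1                  # kernel saw a large read (prefetch)

unbuffered = open(path, "rb", buffering=0)    # like open/read, or setvbuf(_IONBF)
unbuffered.read(1)
consumed_unbuffered = os.lseek(unbuffered.fileno(), 0, os.SEEK_CUR)
assert consumed_unbuffered == 1               # exactly 1 byte consumed

buffered.close(); unbuffered.close(); os.remove(path)
```

Against a real /dev/random the first variant silently drains the pool by a full buffer per open; the second takes only what you use.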

Best regards,
Philipp Gühring



Re: The perils of security tools

2008-05-28 Thread Philipp Gühring
Hi,

 (it doesn't just slow down a lot). Since /dev/random use depletes
 the pool directly, it is imperative that wasteful reads of this
 pseudo-device be avoided at all costs. 

Yes. Still, some people are using fopen/fread to access /dev/random, which 
does prefetching in most implementations I have seen, so open/read is 
preferred for /dev/random.

Implementations can be rather easily checked with strace.

Best regards,
Philipp Gühring



Re: Levels of security according to the easiness to steel biometric data

2008-04-16 Thread Philipp Gühring
Hi,

 QUESTION: Does anybody knows about the existence of a
 security research in area of grading the easiness to
 steel biometric data.

There are several relevant threats:
* Accidental leaking of biometric data (colour photos for faces, fingerprints 
on glasses for fingers, public documents for handwritten signatures)
* Intentional stealing of biometric data (cellphone cameras, hidden 
cameras, ...)

 For example, I guess that stealing information of
 someone's face is easier than stealing information
 about someone's fingerprints,

It depends.
Stealing fingerprints is easy if you hand the target person a glass of water.
With faces you have to differentiate between the different kinds of captures:
taking colour photos of faces is easy; taking infrared photos of faces, or 
3D scans of faces, ... is much harder.

 but stealing information about someone's retina
 would be much harder.

Yes, stealing retina is harder. (It's even harder in the normal usage ...)

 Such a scale can be useful in the design of secure
 protocols and secured information systems.

Yes. Choosing the right biometrics for the right application, implementing it 
correctly, and educating/training the users properly can be challenging.

But in the end, you can steal any biometric data if you really want to.
(Take a look at the film Gattaca to see how this can be done in practice; 
I didn't notice any technically really unrealistic things in it.)

Another important question is whether you can apply faked/copied biometrics 
at a certain place. It could be difficult to mount an attack with a full face 
mask at a guarded entry point, but applying fake fingerprints is far less 
noticeable to guards.
(It might be easy to steal the face, but you can't apply it if all entries 
are guarded.)

Tamper evidence, Tamper protection, Tamper proof, Tamper resistance ...

As usual, it depends on your threat models, your environment, your 
resources, your enemies, ...

Best regards,
Philipp Gühring



Re: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-02-13 Thread Philipp Gühring
Hi,

 Microsoft broke this in IE7... It is no longer possible to generate and
 enroll a client cert from a CA not on the trusted root list. So private
 label CAs can no longer enroll client certs. We have requested a fix,
 so this may come in the future, but the damage is already done...

 Also the IE7 browser APIs for this are completely different and rather
 minimally documented. The interfaces are not portable between browsers,
 ... It's a mess.

I can fully confirm this.

Microsoft claimed that they had to rewrite the API to make it more secure, but 
I only found one small security-relevant weakness that they fixed; the others 
are still there. (And even that fix wouldn't have justified a rewrite of the 
API for websites. They could have kept the frontend API compatible, in my 
opinion.)

I had the feeling that Microsoft wants to abandon the use of client 
certificates completely and move people to CardSpace instead.
But how do you sign your emails with CardSpace? CardSpace only covers the 
realtime-authentication part of the market ...

If anyone needs more information on how to upgrade your web-based CA for IE7:
http://wiki.cacert.org/wiki/IE7VistaSource

Best regards,
Philipp Gühring



Re: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-01-31 Thread Philipp Gühring
that the 
water in the pot gets hotter and hotter, since it happens too slowly and not 
all at once ...)
Since all those people asked one after another on the list, they were all 
ignored, since everyone had just one single case and one single argument.
If they had come up at the same time and coordinated their arguments ...
(But I don't think that we can blame all those people for not coordinating 
their arguments.)

We have an issue here. And the issue isn't going to go away until we 
deprecate SSL/TLS, or it gets solved.

 If you have 
 an actual credible security argument you should post it to
 [EMAIL PROTECTED]

Do you think the security arguments I summed up above qualify for the tls 
list? Should I go into more detail? Present practical examples?
Or does it take a Slashdot article with some governmental CA's certificates 
that contain social security numbers, some SSL sniffing logfiles, ... for the 
responsible people to react? Or is it possible that we act proactively and fix 
this issue without giving SSL and TLS a bad name in the press?
I am not interested in reading "SSL leaks personal details" in the media.

Has anyone counted the number of people that have asked for this over the 
years on the TLS mailing list?


I see several possible options:

* We fix SSL.
Does anyone have a solution for SSL/TLS available that we could propose on the 
TLS list?
If not: can anyone with enough protocol design experience please develop one?

* We deprecate SSL for client certificate authentication.
We write in the RFC that people MUST NOT use SSL for client authentication.
(Perhaps we get away with pretending that client certificates accidentally 
slipped into the specification.)

* We switch from TLS to, hmmm ... perhaps SSH, which has fixed the problem 
already.
Hmm, there we would have to write all the glue RFCs like "HTTP over SSH" 
again ...

* We will all have to answer nasty questions about why we didn't do anything 
about SSL leaking personal certificates in plaintext ...

* We change the rules of the market and tell people that they MUST NOT 
ask for additional data in their certificates anymore.

* Does anyone have any better and perhaps more realistic options?


Come on guys, let's solve this issue together before it hurts.

OK, what can I do to get it fixed?

-
Different topic: Fixing TCP/SSL

  TCP could use some stronger integrity protection. 16 bits of checksum
  isn't enough in reality (1 out of 65536 broken packets gets injected into
  your TCP stream). Does IPv6 have a stronger TCP?

 Whether this is true or not depends critically on the base rate
 of errors in packets delivered to TCP by the IP layer, since
 the rate of errors delivered to SSL is 1/65536th of those delivered
 to the TCP layer. Since link layer checksums are very common,
 as a practical matter errored packets getting delivered to protocols
 above TCP is quite rare.

Try to send a DVD ISO image (4 GB) over an SSL- or SSH-encrypted link with a 
bit error every 1 bits, with a client like scp that cannot resume downloads. 
I gave up after 5 tries that all broke down, on average after 1 GB.
(In that case it was a hardware-initiated (bad cable) denial of service 
attack ;-)

The problem is that you can't work around this issue with standard software. 
You can't tell PuTTY or OpenSSH or any normal IP stack or any network card to 
add more protection there, to solve that problem. You could try to set up some 
tunneling to get more protection, but that's usually highly impractical for 
copying a single file from one computer to the next.

If the link layer lets 1/256 of the broken packets through, and the TCP 
checksum lets 1/65536 of those through, then 1/16777216 of the broken 
packets still reach the SSL layer, which tolerates none of them.

(And there is no guarantee that the link layer actually gives you the 1/256; 
it could also let every broken packet through.)
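The arithmetic behind those failed scp transfers can be sketched with assumed numbers (the 10% per-packet corruption rate for the bad cable is purely illustrative):

```python
# How many corrupted packets slip past the 16-bit TCP checksum in a 4 GB
# transfer over a badly errored link, under assumed rates.
PACKET_BYTES = 1460                 # typical TCP payload size (assumption)
TRANSFER = 4 * 10**9                # a 4 GB DVD image
corrupt_rate = 0.10                 # bad cable: 1 in 10 packets corrupted (assumption)
tcp_miss = 1 / 65536                # fraction the TCP checksum fails to catch

packets = TRANSFER // PACKET_BYTES
undetected = packets * corrupt_rate * tcp_miss
print(f"{packets} packets, ~{undetected:.1f} corrupted packets slip past TCP")
# Each such packet reaches the SSL/SSH record layer, which treats the MAC
# failure as an attack and aborts the whole transfer instead of retransmitting,
# which matches the experience of repeated mid-transfer aborts.
```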

Best regards,
Philipp Gühring



Fixing SSL (was Re: Dutch Transport Card Broken)

2008-01-30 Thread Philipp Gühring
Hi,

 SSL key distribution and management is horribly broken,
 with the result that everyone winds up using plaintext
 when they should not.

Yes, sending client certificates in plaintext while claiming that SSL/TLS is 
secure doesn't work in a world of phishing and identity theft anymore.

We have the paradoxical situation that I have to tell people that they should 
use HTTPS with server certificates and username+password inside the HTTPS 
session, because that's more secure than client certificates ...

Does anyone have an idea how we can fix this flaw within SSL/TLS in a 
reasonable timeframe, so that it can be implemented and shipped by the 
vendors in this century?

(I don't think that starting from scratch and replacing SSL makes much sense, 
since it's just this one huge flaw ...)

 SSL is layered on top of TCP, and then one layers one's
 actual protocol on top of SSL, with the result that a
 transaction involves a painfully large number of round
 trips.

SSL already looks quite round-trip optimized to me (at least the 
key-agreement part).

 We really do need to reinvent and replace SSL/TCP,
 though doing it right is a hard problem that takes more
 than morning coffee.

TCP could use some stronger integrity protection. 16 bits of checksum isn't 
enough in reality (1 out of 65536 broken packets gets injected into your TCP 
stream). Does IPv6 have a stronger TCP?

 As discussed earlier on this list, layering induces
 excessive round trips.

The SSL implementations I analyzed behaved quite nicely; I didn't notice any 
round-trip problems there. (But feel free to send me a traffic capture file 
that shows the problem.)

I once implemented SSL over a GSM data channel (without PPP and without TCP), 
and discovered that SSL needs better integrity protection than raw GSM 
delivers. (I am quite sure that's why people normally run PPP over GSM 
channels ...)
SSH has the same problem: it also assumes an active attack in case of 
integrity problems in the lower layer, and terminates the connection.

 Layering communications 
 protocols is analogous to having a high level
 interpreter written in a low level language. What we
 need instead of layering is a protocol compiler,
 analogous to the Microsoft IDL compiler.  The Microsoft
 IDL compiler automatically generates a C++ interface
 that correctly handles run time version negotiation,
 which hand generated interfaces always screw up, with
 the result that hand generated interfaces result in
 forward and backward incompatibility, resulting in the
 infamous Microsoft DLL hell.  Similarly we want a
 compiler that automatically generates secure message
 exchange and reliable transactions from unreliable
 packets. (And of course, run time version negotiation)

Sounds like an interesting idea to me.

Best regards,
Philipp Gühring



Re: two-person login?

2008-01-29 Thread Philipp Gühring
Hi,

 I have been asked to opine on a system that requires a
 two-person login.  Some AIX documents refer to this as
 a common method of increasing login security
   http://www.redbooks.ibm.com/redbooks/pdfs/sg245962.pdf

I would like to have a two-person remote login:
The server is in the datacenter, two sysadmins log in remotely (SSH or 
something similar), and the login only works if both are there. As soon as 
either one drops the connection, the other one is frozen too.
They should see what the other is doing (key-press logging of the other admin 
in the bottom line).
(In case they detect the other sysadmin doing something evil, they can simply 
disconnect, which also disconnects/freezes the other one.)
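The gating part of this idea can be sketched as a 2-of-2 secret split: the session key only exists while both admins' shares are present, and either side withdrawing its share locks the session (XOR splitting, illustrative only; the SSH/screen integration is the hard part and is not shown):

```python
# 2-of-2 XOR secret splitting: each admin holds one share; neither share
# alone reveals anything about the session key.
import os

def split_2of2(secret: bytes):
    share_a = os.urandom(len(secret))
    share_b = bytes(x ^ y for x, y in zip(secret, share_a))
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(share_a, share_b))

session_key = os.urandom(32)
a, b = split_2of2(session_key)
assert combine(a, b) == session_key               # both present: unlocked
assert combine(a, os.urandom(32)) != session_key  # one admin gone: locked
```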

I would be happy about such an implementation in an SSH server 
(combined with screen, perhaps ...).

Best regards,
Philipp Gühring



Re: The bank fraud blame game

2007-07-05 Thread Philipp Gühring
Hi,

  The second possibility has been realized by some European banks now, based
  on SMS and mobile phones, which sends the important transaction details
  together with a random authorisation code that is bound to the
  transaction in the bank's database. The user can then verify the
  transaction, and then has to enter the authorisation code on the
  web interface.

 How large is this code?

5 characters, including numbers and letters. I think you have something like 
4 tries to enter a code correctly.

(rough estimate: assuming a 36-character alphanumeric alphabet, that's 36^5 = 
60,466,176 possible codes, so with 4 tries an attacker has a chance of about 
1 in 15 million per transaction)
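The guessing odds can be checked directly, assuming a case-insensitive alphanumeric alphabet (26 letters + 10 digits = 36 symbols; an assumption, since the post only says "numbers and letters"):

```python
# Odds of guessing a 5-character code from a 36-symbol alphabet in 4 tries.
codes = 36 ** 5
tries = 4
print(codes, tries / codes)
assert codes == 60_466_176
assert tries / codes < 1e-7   # well under one in ten million per transaction
```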

 The security of this system would seem to rest on the security of mobile
 phones against cloning.  How were mobile phones protected against cloning?

Well, the security depends on an attacker not being able to infect a specific 
user's computer with a MitB (man-in-the-browser) and at the same time knowing 
and being able to clone this specific user's mobile phone.


Peter Gutmann wrote:
 The external device emulates a standard USB memory key, to send data to it
 you write a file, to get data back you read a file (think /dev).  There's
 no device driver to install, and no particularly tricky programming on the
 PC either.

Neat idea!
The only problem is that I already know several companies where you have to 
register your USB stick, and only registered USB sticks are allowed on the 
network ... but it's a neat workaround, yes.
I think SecurityLayer should be easily adaptable to that concept.
Do you already have a demo implementation of that external device, Peter?


Best regards,
Philipp Gühring



Re: The bank fraud blame game

2007-07-03 Thread Philipp Gühring
Hi,

The problem I found (during my research for 
http://www.cacert.at/svn/sourcerer/CAcert/SecureClient.pdf )
for Smartcards and other external devices for secure banking is the following:

About 50% of online-banking users do their personal online banking on company 
PCs while they are at work. Company PCs have a special property: they are 
secured against their users. A user can't attach any device to a company PC 
that would need a driver installed.
So any solution like smartcard readers or USB tokens that needs a special 
application or driver will not work for 50% of the online-banking customers.
(And the banks aren't that happy about losing 50% of their customers.)

So I would say there are 2 possibilities left:

* An external device where you have to enter the transaction details a second 
time to generate a security code.
(Can you show me the user that wants to enter each transaction twice?)

* An external device that lets the user verify the transaction independently 
of the PC.

The second possibility has been realized by some European banks now, based on 
SMS and mobile phones: the bank sends the important transaction details 
together with a random authorisation code that is bound to the transaction in 
the bank's database. The user can then verify the transaction, and then has to 
enter the authorisation code on the web interface.
(And the good thing is that they managed to get the usability so good that 
it's more convenient than the previous TAN solution, and the cost increase of 
SMS compared to paper TANs is irrelevant.)
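One way to bind a short authorisation code to the transaction details can be sketched as follows (my own illustration, not the banks' actual mechanism; the alphabet and transaction string are made up): the server stores the code alongside the transaction and accepts it only for that transaction.

```python
# Derive a single-use 5-character code bound to the transaction details.
import hashlib, hmac, secrets

ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"  # 32 unambiguous symbols (assumption)

def make_auth_code(server_key: bytes, transaction: str) -> str:
    nonce = secrets.token_bytes(8)             # makes each code single-use
    mac = hmac.new(server_key, nonce + transaction.encode(), hashlib.sha256).digest()
    # 32 divides 256, so byte % 32 is uniform over the alphabet
    return "".join(ALPHABET[b % len(ALPHABET)] for b in mac[:5])

code = make_auth_code(b"bank-hsm-key", "IBAN DE02...; amount EUR 150.00")
assert len(code) == 5 and all(c in ALPHABET for c in code)
```

The SMS would carry both the transaction details and `code`; an attacker who altered the transaction in the browser cannot produce a matching code for the altered details.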

So I personally would declare the online-banking problem solved (with SMS as 
a second channel), but I am still searching for solutions for everything 
else, especially non-transactional applications.

Best regards,
Philipp Gühring



RNG Summary

2006-11-29 Thread Philipp Gühring
Hi,

I would like to inform you about the current status of our RNG market survey. 
We have included most hardware and software RNG vendors now. (If we missed 
some, please tell me.)

The current results are available here:
http://sig.cacert.at/cgi-bin/rngresults

The general project page:
http://sig.cacert.at/random/

The service is fully automated online now, so you can easily test your own 
RNG and compare it to the rest of the market.

Best regards,
Philipp Gühring



Re: Exponent 3 damage spreads...

2006-09-25 Thread Philipp Gühring
Hi,

We have been researching which vendors were generating exponent-3 keys, and 
we have found the following so far:

* Cisco 3000 VPN Concentrator
* CSP11
* AN.ON / JAP (they told me they would change it the next day)
(perhaps more to come)

My current estimate is that 0.26% of the certificates in the wild have 
exponents <= 17.

Best regards,
Philipp Gühring




Re: NPR : E-Mail Encryption Rare in Everyday Use

2006-02-24 Thread Philipp Gühring
Hi,

 And what I heard in the story is that even savvy users such as Phil Z
 (who'd have no problem with key management) don't use it often.

 Phil *does* have a problem with key management. He knows how to do
 it, but his communications partners are not as good as he is.

Phil Z doesn't know how to do it himself, at least with PGP.
He told me that he doesn't sign the keys of people who ask for it, simply 
because it would pollute the keyring on his computer, and he couldn't work 
with a keyring with thousands of people on it anymore.
So PGP obviously has a usability and scalability problem.
That's why he only signs the keys of his friends.
I wonder why he hasn't tried to solve that usability/scalability problem 
himself yet, but gave up instead.

Best regards,
Philipp Gühring




Re: RNG quality verification

2006-01-03 Thread Philipp Gühring
Hi,

Ok, now I did the first test.
I took OpenSSL, generated 1 RSA keys, and took them apart.
First I analyzed the raw keys:

--
~~ ./ent RNGQA/openssl-keys-raw.random
Entropy = 7.992782 bits per byte.

Optimum compression would reduce the size
of this 258 byte file by 0 percent.

Chi square distribution for 258 samples is 38940.74, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 127.2214 (127.5 = random).
Monte Carlo value for Pi is 3.177609302 (error 1.15 percent).
Serial correlation coefficient is -0.016663 (totally uncorrelated = 0.0).
--

Then I stripped off the first 2 bytes and the last byte:

--
~~ ./ent RNGQA/openssl-keys-stripped.random
Entropy = 7.32 bits per byte.

Optimum compression would reduce the size
of this 252 byte file by 0 percent.

Chi square distribution for 252 samples is 236.33, and randomly
would exceed this value 75.00 percent of the times.

Arithmetic mean value of data bytes is 127.4632 (127.5 = random).
Monte Carlo value for Pi is 3.14527 (error 0.12 percent).
Serial correlation coefficient is 0.000327 (totally uncorrelated = 0.0).
--

It isn't perfect random quality, but I also don't see any big problems with
it.

You can get the program and the extracted data here:
http://www2.futureware.at/~philipp/RNGQA-light.tar.bz2
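For readers who want to reproduce this kind of check without the `ent` binary, here is a minimal Python sketch of three of the statistics it reports (entropy, chi-square against uniform, arithmetic mean); the function name and usage are my own, not part of the original toolchain.

```python
import math
import os
from collections import Counter

def ent_style_stats(data: bytes):
    """Entropy (bits/byte), chi-square vs. a uniform byte distribution,
    and arithmetic mean, mirroring three of the figures `ent` reports."""
    n = len(data)
    counts = Counter(data)
    # Shannon entropy over the observed byte frequencies
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    expected = n / 256  # uniform expectation per byte value
    chi2 = sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(256))
    mean = sum(data) / n
    return entropy, chi2, mean

# A large random sample should score close to 8 bits per byte.
e, chi, mean = ent_style_stats(os.urandom(65536))
```

A perfectly uniform 256-byte sample (one of each value) scores exactly 8 bits per byte with a chi-square of 0, which is a handy sanity check for the implementation itself.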


 It's really unsolvable, in several different ways.

Perhaps I should have stated the quality demands for possible solutions:
Since I am working on a practical solution, and not a theoretical solution, 
the following demands apply:
* A 99.999% solution is ok.


 First -- you just cannot tell if a single number is random.  At best,
 you can look at a large selection of numbers and see if they fit
 certain randomness tests.  Even that isn't easy, though there are
 several packages that will help.  The best-known one is DIEHARD;
 ask your favorite search engine for diehard random.

Sure.

 However -- and it's a big however -- numbers that are random enough
 for statistical purposes are not necessarily good enough for
 cryptographic purposes.  As several people have pointed out already,
 there are processes involving cryptographic algorithms that produce
 very random sequences, but are in fact deterministic to someone who
 knows a secret.  In other words, if you don't control the generator,
 it's not possible to distinguish these two cases.  

Has anyone tested yet how many samples are needed to detect those PRNGs?

 In fact, any cipher 
 or hash function whose output was easily distinguishable from a true-
 random source would be rejected by the cryptographic community.

Yes, sure.

 Furthermore, even if the generator is good, if the machine using the
 certificates has been compromised it doesn't matter, because the
 malware can steal the secret key.  What this boils down to is that you
 either trust the endpoint or you don't.

Sure. To secure against compromised machines, you need hardware tokens with a
qualified certificate request mechanism.
But in the scenario I am currently working on, the assumption is that we only
have a software engine, and that the user's machine is not compromised.
But the quality of the random number generator and the correct usage of
the random numbers for the certificate request are still not known.

 Finally, even if it were possible for you to verify that p and q were
 random, you *really* don't want to do that -- you *never* want to see
 users' secret keys, because that exposes the keys to danger and hence
 you to liability.

I will not ask the users to send in their private keys for testing!

As you write below, I would like to test the standard generation packages
(Firefox, IE, Opera, OpenSSL), and I also want to offer a guideline (or even
the testing software) to advanced users, so that they can test their own
generation package if they really want to.

 Let me make an alternative suggestion.  Pick two or three key
 generation packages -- as I recall, both Firefox and IE have such --
 generate a lot of keys, and run them through DIEHARD.  Then warn your
 users to use only approved mechanisms for generating their certificate
 requests -- you just can't do any better.

That's exactly what I wanted to do. (Sorry if I didn't write that clearly
enough yet.)

Best regards,
Philipp Gühring




Re: RNG quality verification

2005-12-23 Thread Philipp Gühring
Hi Peter,

 Easily solveable bureaucratic problems are much simpler than unsolveable
 mathematical ones.

Perhaps there is some misunderstanding, but I am getting worried that the
common conception seems to be that it is an unsolvable problem.

What is wrong with the following black-box test?

* Open browser
* Go to a dummy CA's website
* Let the browser generate a keypair through the keygen or cenroll.dll
* Import the generated certificate
* Backup the certificate together with the private key into a PKCS#12 
container
* Extract the private key from the backup
* Extract p and q from the private key
* Extract the random parts of p and q (strip off the first and the last bit)

* Automate the previous steps with some GUI-Automation system

* Concatenate all random bits from all the keypairs together
* Do the usual statistical tests with the random bits
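The last extraction steps above can be sketched as follows (a Python sketch under the assumption that the private keys have already been parsed, so that p and q are available as integers; the helper names are mine):

```python
def middle_bits(prime: int) -> str:
    """Return the 'random' middle of an RSA prime as a bit string:
    the top and bottom bits are forced to 1 by the generator,
    so only the bits in between carry entropy."""
    bits = bin(prime)[2:]  # binary representation, MSB first
    return bits[1:-1]      # strip the forced first and last bit

def pooled_random_bits(primes) -> str:
    """Concatenate the middle bits of many primes into one stream
    for the usual statistical tests."""
    return "".join(middle_bits(p) for p in primes)

# Tiny example: 13 = 0b1101 -> "10", 11 = 0b1011 -> "01"
```

Pooling over many keypairs is what makes the statistics meaningful; a single key contributes far too few bits on its own.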

Is this a valid solution, or is the question of the proper usage of random
numbers in certificate keying material really mathematically unsolvable?

(I am not an RSA specialist yet; I tried to stay away from the bit-wise
details and the mathematics, so I might be wrong)

But I would really worry if it were mathematically impossible to attest the
correct usage (to a certain extent; I know about the statistical limitations)
of random numbers in the software I am using to get certificates.

Best regards,
Philipp Gühring




RNG quality verification

2005-12-22 Thread Philipp Gühring
Hi,

I have been asked to verify the quality of the random numbers which are
used for the certificate requests being sent to us, to make sure that
they are good enough, and that we don't issue certificates for weak keys.

The client applications that generate the keys and issue the certificate
requests are the usual software landscape: OpenSSL, IE, Firefox,
SmartCards, ... and we would like to be able to accept all normally used
software.

We are being asked either to issue the keys for our users (I don't want to),
or alternatively to demand by contract that the users have good-quality
random numbers. Now it might be easy to demand that users have good random
numbers, but their first question will likely be how do I do that? or
which software/hardware does that?

So I guess I have to ask the vendors whether their random numbers are good
enough. But what if they just say yes or no?
I think the better way would be if I had a possibility to verify the quality
of the random numbers used in a certificate request myself, without
depending on the vendor.

From what I remember of the usual RSA key generation, the random numbers
gathered are put into a field of the expected key size. Then the first and
last bits are set to 1, to make sure that the key has the necessary size and
is odd (not divisible by 2). Then the number is tested for primality, and if
the check is ok, it is used.
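The bit-forcing step described here can be sketched like this (Python; `prime_candidate` is my own illustrative name, and a real generator would follow it with a primality test and retry loop):

```python
import secrets

def prime_candidate(nbits: int) -> int:
    """Draw nbits of raw randomness and force the top and bottom bits
    to 1, as described above: the top bit guarantees the candidate has
    the full size, the bottom bit makes it odd."""
    n = secrets.randbits(nbits)
    n |= 1 << (nbits - 1)  # set MSB: full bit length
    n |= 1                 # set LSB: odd
    return n

candidate = prime_candidate(512)
```

This is exactly why stripping the first and last bit before statistical testing makes sense: those two positions are deterministic by construction.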

So if I extract the key and remove the first and the last bit, I should have
the pure random numbers that were used. If I do that with lots of keys,
I should have a good amount of random material for the usual statistical
tests.

Am I right? Am I wrong?
Has anyone done that before?
Any other, better ideas?
Should I do it that way?

Best regards,
Philipp Gühring




Re: RNG quality verification

2005-12-22 Thread Philipp Gühring
Hi Travis,

 The only thing is, you cannot test in randomness, 

That's true, but I can test for non-randomness. And if I don't detect
non-randomness, I can assume randomness to a certain extent.

 and it is an abuse 
 of statistics to make predictions about individual events -- 

Wasn't that one of the reasons why statistics was invented?

 they 
 describe populations.  The best thing you could do is combine them
 with a truly random source that you control.  

I don't control the software everyone in the world is using.

 Of course then your 
 users may not trust you, so you have to do a cryptographically strong
 combination such that control of one of the inputs doesn't translate
 into control of the outputs.  For example, you cannot simply XOR them
 or you could force the key to be anything of the same length by
 choosing an appropriate stream.  Also, you could not do this with
 small input spaces or else exhaustive search is trivial (try every
 input until the output is what you want).

The problem is that I have to live with the COTS (common off-the-shelf)
software out there that is generating the certificate requests. The only
thing I can do is create a blacklist or a whitelist of known-bad or
known-good software, to tell the users: use this software, or don't use
that software.

 The best you could do is examine (reverse engineer) the RNGs in the
 products, and whatever seeds them, and then create tests for their
 nonrandom properties, and then see if the tests work.  This would,
 however, not tell you anything you didn't already know once you had
 examined the internals.  

Has anyone done this yet?

 You might be able to find structure in their 
 outputs through blind application of general-purpose statistics, but
 it will likely take a great deal more output, even with supposedly
 sensitive statistics like double-sided Kolmogorov-Smirnof.

Hmm, every key should deliver about 1000 bits of randomness, I guess. How
many bits do you think I should collect for the tests?

 As a pathological example, my RNG may output the text of the King
 James Bible, encrypted with AES-CBC using a counter as the key, and
 uniquified across instances by using a processor serial number or
 licence number as an IV.  Unless you knew this, you would be
 hard-pressed to tell they were not random and in fact totally
 predictable to anyone who knows the secret.  If a general statistic
 could distinguish this from a random stream, I think it would imply a
 weakness in AES-CBC.  The tests would likely fail until enough output
 was generated that it started to repeat itself.  On the other hand, I
 could decrypt it with a counter and see what pops out, and all I'd
 have to do is distringuish the KJV from a random stream.

I guess someone would have noticed already, if Microsoft, Mozilla or OpenSSL 
had done that.

Wait. How many LOC (lines of code) does the King James Bible have? Mozilla
had something like 13 million LOC as far as I remember ... perhaps they
really hid the KJ Bible in it! ;-)

 I'd look at seeding techniques first, as that's an easy win.
 Predictable seed - predictable output.  If that bootstrap is wrong,
 you can treat everything else as an oracle and still get a good
 distinguisher.

Contrary to the normal challenge of developing a new random number generator,
I don't have the possibility to change the existing software; I just have to
evaluate it and find out whether it's ok or not.
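The quoted point (predictable seed, predictable output) can be illustrated with a toy distinguisher; everything here, including the seed values, is made up for the sketch:

```python
import random

def keystream(seed: int, n: int) -> bytes:
    """A deterministic PRNG stream; if the seed is guessable
    (a timestamp, a PID), the whole stream is guessable."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

# The 'victim' seeds from a current time the attacker can bracket.
observed = keystream(1_100_000_000, 16)

# The attacker replays every plausible seed until the output matches.
recovered = None
for guess in range(1_099_999_990, 1_100_000_011):
    if keystream(guess, 16) == observed:
        recovered = guess
        break
```

Note that no statistical test on the output would flag this stream; only knowledge of the seeding procedure breaks it, which is essentially what happened with early Netscape SSL seeding.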

I first thought about a black-box test: simply tracing in the operating
system where the software gets its numbers from. An open("/dev/random")
system call at the time of key generation might be a good hint for good
random numbers. But as Netscape proved some years ago, a single buggy line
like ch=read(stream,ch,1) can read one perfectly random byte and then
overwrite it with the value 1 (the return value of read, which is not so
random anymore), invisibly to the operating system, thereby failing to use
the random numbers given.
So since the random numbers might be modified between gathering and their use
for the keypair, I think I need to evaluate the quality at the end of the
keypair generation.
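To see why OS-level tracing cannot catch this class of bug, here is the failure mode in miniature (a Python analogue of the C anecdote above; the function is purely illustrative):

```python
import os

def broken_key_byte() -> int:
    """Gathers a perfectly good random byte, then clobbers it with a
    constant; the same one-line failure mode as the ch=read(stream,ch,1)
    anecdote, and equally invisible to a trace of /dev/random access."""
    ch = os.urandom(1)[0]  # good entropy is actually read here
    ch = 1                 # ...and silently overwritten afterwards
    return ch
```

A syscall trace shows a legitimate read of random data; only the final key material reveals that none of it was used.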

Best regards,
Philipp Gühring




Qualified Certificate Request

2005-07-21 Thread Philipp Gühring
Hello,

Peter Saint-Andre invited me here to present my concept of Qualified 
Certificate Requests to you.

It is a long-term goal of CAcert to be able to provide qualified certificates.

Regarding the requirements for qualified certificates, the only obstacle we
still have is the problem that CAcert has to make sure that the private key
for the certificate is generated and stored securely in a SmartCard or
another hardware token.

Since the users should be able to issue the certificates at home, we need a
technical solution to make sure that the private key comes from within a
SmartCard when we receive a certificate request.

Therefore I designed Qualified Certificate Requests, in which the public key
in the CSR is cryptographically signed with a vendor key, to state that it
comes from a secure device.
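The trust flow can be sketched as follows. Note the big simplification: the real QCSR design uses an asymmetric vendor signature, while this dependency-free sketch substitutes an HMAC under a shared demo key just to show the attest-then-verify flow; all names are mine.

```python
import hashlib
import hmac

# Stand-in vendor secret: a real token would hold an asymmetric vendor
# key and produce a signature verifiable with the vendor's public key.
VENDOR_KEY = b"demo-vendor-secret"

def attest_public_key(pubkey_der: bytes) -> bytes:
    """Token side: bind a vendor attestation to the CSR's public key,
    asserting the key was generated inside the secure device."""
    return hmac.new(VENDOR_KEY, pubkey_der, hashlib.sha256).digest()

def ca_accepts(pubkey_der: bytes, attestation: bytes) -> bool:
    """CA side: issue a certificate only if the public key carries a
    valid vendor attestation."""
    expected = hmac.new(VENDOR_KEY, pubkey_der, hashlib.sha256).digest()
    return hmac.compare_digest(expected, attestation)
```

The point of the construction is that a software-only client cannot forge the attestation, so the CA can accept requests from home users while still knowing the key lives in hardware.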

Now I have created a software-based reference implementation, so that the
security of the system can be evaluated and the token vendors can see how to
do it and can do interop testing.

http://www2.futureware.at/svn/sourcerer/CAcert/QCSR/

And here is the documentation:

http://wiki.cacert.org/wiki/QualifiedCertificateRequest

Please test it, analyze it, try to break it.

Regards,
Philipp Gühring

