Re: [cryptography] urandom vs random

2013-08-21 Thread Dominik
You can use DieHarder, which is a collection of statistical tests to evaluate 
whether something looks random.



grarpamp grarp...@gmail.com wrote:
The subject thread is covering a lot about OS implementations
and RNG various sources. But what are the short list of open
source tools we should be using to actually test and evaluate
the resulting number streams?
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

-- 
Sent from my Android mobile phone with K-9 Mail.


Re: [cryptography] Preventing Time Correlation Attacks on Leaks: Help! :-)

2013-08-21 Thread Fabio Pietrosanti (naif)
Hey Peter,

thanks for your analysis!

I think we need to provide some additional input!

In the context of GlobaLeaks, starting from our Threat Model at
https://docs.google.com/document/d/1niYFyEar1FUmStC03OidYAIfVJf18ErUFwSWCmWBhcA/pub
, the Whistleblower can also be NON-anonymous and approach a submission
at the Confidential level (over HTTPS over the internet).

No anonymity, but a forced disclaimer (
https://github.com/globaleaks/GlobaLeaks/issues/260) and acceptance of
the risk.

So let's say the whistleblower is already in a bad position, but has
accepted this condition.

We are not considering adding actions/protection on the Whistleblower
side, only on the Receiver side: that is where the bad guy would be
able to read the notification sent and apply time correlation to the
whistleblower's action.

Today, if a whistleblower makes a submission, the system immediately
sends a notification to the Receiver.

That's bad, because it leaves a trace that allows time correlation.

Anyone who can read the Receiver's email and traffic can correlate it
with other data sources where the whistleblower may leave traces (a
proxy, but also internet traffic dumps, physical badge/access logs,
surveillance cameras, etc.).

What kind of logic/algorithm should be applied to the Receiver's
notification timing in order to prevent, or reduce the likelihood of, a
time correlation pattern?

A random delay between a lower boundary and an upper boundary seems like
the simplest and most effective approach to defeat this kind of correlation.

However, this does not work for a very low-traffic GlobaLeaks node.

What do you think?

-- 
Fabio Pietrosanti (naif)
HERMES - Center for Transparency and Digital Human Rights
http://logioshermes.org - http://globaleaks.org - http://tor2web.org



Il 8/21/13 4:17 AM, Peter Maxwell ha scritto:

 Hi Fabio,

 While I don't mean to be dismissive, I suspect your threat model is
 flawed for the following reasons:

 i. Most mid to large companies would not permit the use of Tor within
 their infrastructure and even if the hypothetical company did, it
 doesn't take a whole lot of effort to track down the handful of users
 within a company using Tor/stunnel/ssh/VPN.  For that matter, I
 understand some companies even install private CA certificates into
 the browsers on company computers and decrypt outgoing SSL/TLS traffic
 at their web-proxy/firewall... in that situation, your WB is going
 to stand out like a sore thumb as they'll be the only TLS connection
 that isn't being decrypted (because it's Tor).  So unless you want
 your whistle-blowers to literally advertise their presence as worthy
 of attention, they aren't going to do the leak from a company system
 directly.

 ii. So, presuming i. is valid - and I suspect anyone who has worked
 within a competent security operations team will tell you the same -
 then you must assume the whistle-blower will do the leak from either
 their personal systems, a burn computer or a public system.  If we
 make the assumption that the WB has taken the data out of the
 company/organisation on removable media or otherwise made it available
 to themselves outside the company infrastructure in a secure manner
 (while sometimes difficult, that is still far easier than i.) then
 your attacker can only see the WB's traffic if they are actively
 monitoring the WB's use of computers outside the company, in which
 case said WB has far bigger problems to worry about.  If the attacker
 cannot monitor the timing of the leak, your problem is not framed in
 the manner you've presented.

 iii. Even if your model was realistic, you cannot adequately defend
 against traffic analysis for such a low-traffic network: you need
 other traffic to hide in, lots of it, from other users within the same
 company - it's not realistic for this type of service.

 iv. There are more subtle problems you are going to come across, not
 least of which are issues such as document
 tagging/water-marking/document versioning and the ability for the
 attacker - your hypothetical manager - to correlate leaked documents
 against the access rights and access patterns of potential
 whistle-blowers.  For that matter, simple forensic analysis of staff
 computers is usually more than sufficient (and yes, organisations do
 this).


 It's also "Isle of Man" that people like hiding their ill-gotten-gains
 in, not "Island of Mann" ;-)  Interestingly, I think anyone who has
 used Isle of Man accounts for tax avoidance is scuppered, as HMRC
 has signed an agreement with the authorities there for automatic
 disclosure.


 Anyway, as far as I can see it, you have two different scenarios to
 consider with one being significantly more difficult to solve than the
 other:


 A. The scenario where the whistle-blower is able to take the data out
 the company on removable media or paper-copy.  This is the easy one to
 solve.  Personally I would favour a combination of asymmetric
 encryption with 

Re: [cryptography] Preventing Time Correlation Attacks on Leaks: Help! :-)

2013-08-21 Thread Sebastian Schinzel
Dear Fabio,

On 21. Aug 2013, at 09:35 AM, Fabio Pietrosanti (naif) 
li...@infosecurity.ch wrote:
 What kind of logic/algorithm should be applied to the Receiver's
 notification timing in order to prevent, or reduce the likelihood of, a
 time correlation pattern?
 
 A random delay between a lower boundary and an upper boundary seems like the
 simplest and most effective approach to defeat this kind of correlation.
 
 However, this does not work for a very low-traffic GlobaLeaks node.
 
 What do you think?

Random delays have a bad reputation in crypto because you can filter
them out by repeating measurements. That criticism, however, is not
relevant here, as the attacker (e.g. a rogue state) has only a single data
point and no way to repeat the measurement.

So yes, a random delay might help here. The difficulty is choosing
the distribution and the minimum and maximum delay within it.
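The random-delay idea can be sketched in a few lines; the bounds and names below are purely illustrative assumptions, not GlobaLeaks code:

```python
# Draw the notification delay uniformly between a lower and an upper
# boundary. LOWER/UPPER are assumed example values (1 hour to 24 hours).
import random

LOWER, UPPER = 3600, 86400  # seconds

def notification_delay():
    """Seconds to wait before dispatching a notification."""
    return random.uniform(LOWER, UPPER)

delay = notification_delay()
assert LOWER <= delay <= UPPER
```

The choice of distribution matters: a uniform draw still concentrates information near the boundaries, whereas an exponential delay is memoryless, so the dispatch time reveals less about when the triggering interval started.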

Another option would be to not send a notification at all, but to let the
submitter choose some token during submission. The submitter can then later
verify through another service whether the token was received. The service
is public and anyone can query it. This removes the strong correlation
between a submission and the notification.
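That token scheme can be sketched as follows (all names here are hypothetical; this is a design sketch, not GlobaLeaks code):

```python
# A public "was my token received?" service: the submitter picks a
# token at submission time and later polls the service from anywhere.
# No notification is pushed, so there is no dispatch event to correlate.
received_tokens = set()  # state of the public query service

def submit(token, payload):
    # ... store payload for the receivers ...
    received_tokens.add(token)

def was_received(token):
    # Public endpoint: anyone can query any token.
    return token in received_tokens

submit("pebble-canyon-71", b"leaked documents")
assert was_received("pebble-canyon-71")
assert not was_received("some-other-token")
```

One caveat: the submitter's later polling traffic is itself observable, so this shifts the correlation problem rather than removing it entirely.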

Regards,
Sebastian


Re: [cryptography] urandom vs random

2013-08-21 Thread Sebastian Schinzel
On 21. Aug 2013, at 09:32 AM, Dominik domi...@dominikschuermann.de wrote:

 You can use DieHarder, which is a collection of statistical tests to evaluate 
 if something looks random.

Problem is that you have to use the suite in a proper way. Checking
a single weak Debian SSL key pair probably would not have
raised the problem. You'd have had to generate many keys (> 2^16)
with that Debian SSL version to learn that they repeat.

So simply running DieHarder is not enough.
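The point generalizes: a PRNG seeded from a tiny space produces individually plausible outputs that only betray themselves in bulk. A toy model (Python's PRNG standing in for the broken OpenSSL one, and 2^16 "PIDs" as an assumed seed space):

```python
# Each "key" is fully determined by a 16-bit seed, mimicking the
# Debian OpenSSL bug where the PID was the only entropy. Any single
# key looks fine; collisions appear once you draw many more keys
# than there are seeds.
import random
from collections import Counter

def weak_key(pid):
    rng = random.Random(pid)        # seed space is only 2**16
    return rng.getrandbits(128)

keys = [weak_key(random.randrange(2**16)) for _ in range(100_000)]
dupes = sum(c for c in Counter(keys).values() if c > 1)
assert dupes > 0                    # pigeonhole: 100,000 draws, 65,536 seeds
assert len(set(keys)) <= 2**16
```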

Regards,
Sebastian


Re: [cryptography] urandom vs random

2013-08-21 Thread Rob Kendrick
On Mon, Aug 19, 2013 at 09:41:20AM -0400, Jeffrey Walton wrote:
 On Mon, Aug 19, 2013 at 9:20 AM, Aaron Toponce aaron.topo...@gmail.com 
 wrote:
  ...
 
  It's a shame http://entropykey.co.uk is no longer in business. I was able to
  procure 5 entropy keys just before they folded, and they're awesome.
 Yeah, I really liked EntropyKey. I tried to place an order last year
 (or early this year). It was never fulfilled and no one responded.
 
 I knew they were having some troubles, but I could not determine the
 cause. Why did they fold?

A combination of medical and family issues.  (I used to work there.)

B.


Re: [cryptography] urandom vs random

2013-08-21 Thread Rob Kendrick
On Mon, Aug 19, 2013 at 07:20:45AM -0600, Aaron Toponce wrote:
 On Sun, Aug 18, 2013 at 05:07:49PM -0700, coderman wrote:
  i am surprised this has not surfaced more often in this thread:
   if you need good entropy: use a hardware entropy generator!
 
 It's a shame http://entropykey.co.uk is no longer in business. I was able to
 procure 5 entropy keys just before they folded, and they're awesome. 

They should be available again by the end of the year, if all goes well.

B.


Re: [cryptography] Preventing Time Correlation Attacks on Leaks: Help! :-)

2013-08-21 Thread Ben Laurie
On 21 August 2013 03:35, Fabio Pietrosanti (naif) li...@infosecurity.ch wrote:

 What kind of logic/algorithm should be applied to the Receiver's
 notification timing in order to prevent, or reduce the likelihood of, a
 time correlation pattern?

 A random delay between a lower boundary and an upper boundary seems like
 the simplest and most effective approach to defeat this kind of correlation.

 However, this does not work for a very low-traffic GlobaLeaks node.

 What do you think?


I think that if you want to send messages that are hard to trace, there's
an existing technology: Mixmaster, with an existing server network.

Or, better yet, finish off Mixminion.

Even better: implement Minx (the fixed version).





Re: [cryptography] Jingle and Otr

2013-08-21 Thread stef
On Wed, Aug 21, 2013 at 01:47:33PM +1000, James A. Donald wrote:
 The Jitsi FAQ https://jitsi.org/Documentation/FAQ says that chat
 sessions are protected by OTR, which implies that nothing else is.

i think before considering using jitsi-s otr:
http://lists.jitsi.org/pipermail/users/2013-July/004370.html
http://lists.jitsi.org/pipermail/dev/2011-May/001484.html

someone needs to contribute a port to otr4j or evaluate their in-house
implementation.

-- 
pgp: https://www.ctrlc.hu/~stef/stef.gpg
pgp fp: FD52 DABD 5224 7F9C 63C6  3C12 FC97 D29F CA05 57EF
otr fp: https://www.ctrlc.hu/~stef/otr.txt


Re: [cryptography] urandom vs random

2013-08-21 Thread The Doctor

On 08/20/2013 05:33 PM, grarpamp wrote:
 The subject thread is covering a lot about OS implementations and
 RNG various sources. But what are the short list of open source
 tools we should be using to actually test and evaluate the
 resulting number streams?

I've done some non-scientific experiments (i.e., sated my curiosity)
with ENT (formerly at http://www.fourmilab.ch/random/, now maintained
at http://packages.debian.org/sid/ent).  I found the results
interesting and somewhat useful.
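For anyone who wants to script a similar quick check themselves, a first-order ent-style statistic is easy to compute; this sketch estimates Shannon entropy per byte, with os.urandom standing in for the stream under test:

```python
# Estimate bits of entropy per byte of a sample. A good source
# should come out very close to the maximum of 8 bits/byte;
# ent(1) reports this same statistic among others.
import math
import os
from collections import Counter

data = os.urandom(1 << 20)          # 1 MiB sample
n = len(data)
entropy = -sum((c / n) * math.log2(c / n)
               for c in Counter(data).values())
assert 7.9 < entropy <= 8.0
```

As the rest of the thread points out, passing such a test is necessary but nowhere near sufficient.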

-- 
The Doctor [412/724/301/703] [ZS]
Developer, Project Byzantium: http://project-byzantium.org/

PGP: 0x807B17C1 / 7960 1CDC 85C9 0B63 8D9F  DD89 3BD8 FF2B 807B 17C1
WWW: https://drwho.virtadpt.net/

Meeble!  Meeble meeble meeble!



Re: [cryptography] Preventing Time Correlation Attacks on Leaks: Help! :-)

2013-08-21 Thread Michael Rogers

Hi Fabio,

It seems to me that there are two fundamental problems to solve if you
want to disguise the correlation between a node's inputs (submissions,
comments and edits) and its outputs (notifications).

The first problem is disguising the correlation between a single input
and its outputs. To do that, every output must correspond to several
possible inputs. So if you plan to disguise the correlation by
randomly delaying the outputs, you need to delay them by several times
the maximum interval between inputs. At this point two practical
questions arise:

1. Is there any maximum interval between inputs, and is it possible to
know what it is?

2. Does the resulting delay make the notification system less
responsive than, say, logging into the node once a week to check for
changes?

The second problem is disguising the correlation between a series of
inputs and their outputs, where the adversary knows that the outputs
are related. This is much harder than the first problem.

For example, if the adversary knows that a series of outputs went to a
journalist who published a certain leak, the adversary may guess that
many of those outputs were caused by inputs made by the leaker. For
each output, the adversary finds the set of suspects who could have
made an input that caused the output. If we've done a good job of
solving the first problem then there are many possible inputs per
output, so the set of suspects for each output is large. But the
leaker probably appears in more of those sets than anyone else, so the
adversary counts how many times each suspect appears. The longer the series
of outputs, the more likely it is that the leaker will stand out.

In the anonymity literature, attacks like this are called intersection
attacks or disclosure attacks, and they're very effective. You're not
going to prevent them with a simple approach like random delays.
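The intersection attack is easy to demonstrate in miniature. This toy simulation uses assumed parameters (100 users, 20 related notifications, each narrowing the field to roughly 30 suspects that always include the real leaker):

```python
# Each output yields a large suspect set, so any single output is
# well disguised; but the real leaker is in every set, so a simple
# tally exposes them after a handful of related outputs.
import random
from collections import Counter

random.seed(1)                      # deterministic toy run
users = list(range(100))
leaker = 42
tally = Counter()

for _ in range(20):                 # 20 notifications tied to one leak
    suspects = set(random.sample(users, 30))
    suspects.add(leaker)            # the leaker could always have caused it
    tally.update(suspects)

top_suspect, hits = tally.most_common(1)[0]
assert top_suspect == leaker        # 20 hits for the leaker vs ~6 for others
```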

Cheers,
Michael

On 20/08/13 20:31, Fabio Pietrosanti (naif) wrote:
 Hi all,
 
 at GlobaLeaks we are going to implement a feature that wants to
 mitigate time correlation attacks between a Whistleblower
 submitting something and a Receiver receiving a notification that
 there's a new leak outstanding to be accessed.
 
 We already had an internal discussion and received some valuable
 suggestions and comments, available here:
 https://github.com/globaleaks/GlobaLeaks/issues/264 .
 
 However, since the argument is possibly tricky, we would like to
 submit the proposal for suggestions, criticism and review.
 
 That's a summary of the context:
 
 
 Overview
 
 When a whistleblower submits to a globaleaks node all receivers
 that they have selected as recipients for their submission will
 receive a notification informing them that a new submission has
 occurred. Other whistleblower interactions also trigger a
 notification (that should therefore be protected from timing
 attacks) and such interactions are:
 
 * A new comment is added to an existing submission by a WB
 
 * A new comment is added to an existing submission by a Receiver
 
 * A new file is uploaded to an existing submission by a WB
 
 
 Goals
 
 We are interested in mitigating correlation attacks based on the 
 dispatching of notifications for interactions performed by a 
 whistleblower. It should not be possible (or should at least be harder)
 for an attacker to determine which person is a whistleblower for a
 certain submission based on their capabilities (more on that below).
 
 
 Adversary model A
 
 Their goal is to find which user has performed a certain submission
 on a globaleaks node.
 
 This adversary has the following capabilities:
 
 * They can read the content of notification messages.
 
 * They can perform a new submission to a globaleaks node and
 therefore trigger notifications (i.e. they are capable of doing a
 /flooding/ or /blending/ attack).
 
 * A log of traffic from N users they suspect to have blown the
 whistle. This log includes the timestamp of when the request was
 made, when the response was received, and the size of the payload.
 
 * The log of the notification traffic. This includes the timestamp
 of when the notification was dispatched and its size. The
 content of the notification will be either encrypted (model A) or
 plaintext (model B).
 
 
 Adversary model B
 
 This adversary has all the capabilities of the above adversary, but
 they do not have the ability to read the content of the
 notification messages.
 
 
 Adversary model C
 
 All of the above except the receiver is not trusted: their goal is
 to de-anonymise the WB.
 
 Is this any different from Adversary A, that is an adversary that
 has the ability to read the notification emails because they are
 not encrypted?
 
 
 Example real world scenario
 
 The GL node is a GL node for a private company. The adversary is a
 Manager of The Company who wants to find out who blew the whistle
 on the fact that he is laundering money through a shell company in
 the island of mann. 

Re: [cryptography] urandom vs random

2013-08-21 Thread Aaron Toponce
On Tue, Aug 20, 2013 at 12:46:42PM +1200, Peter Gutmann wrote:
 I don't see what the point is though, given that there's more than enough
 noisy data available on a general-purpose PC.

True. I use http://www.issihosts.com/haveged/ on physical hardware, and the
entropy keys by Simtec for virtual machines and containers.

-- 
. o .   o . o   . . o   o . .   . o .
. . o   . o o   o . o   . o o   . . o
o o o   . o .   . o o   o o .   o o o




Re: [cryptography] enabling blind signatures in GPG

2013-08-21 Thread Jake

thank you Steve for the link to your work!

I really like the idea you had and i hope it catches on, people need 
something like that.  But I don't think they realize it yet, and the ones 
who do have other ways to achieve it.


My focus is very specific though.  I want to use openPGP to do the 
blinding and blind-signing and unblinding, so that the entire system I 
want to create can be based on a familiar and trusted suite of tools.


Does anyone have experience with the GPG source tree who might be able to 
help expose the blinding routines to the user?  I'm scared to start from 
scratch.


-jake

On Sun, 18 Aug 2013, Steve Weis wrote:


Hi Jake. This is not GPG-related, but I worked on an OpenID-based private
federated login system called PseudoID that used blind signatures.
Basically, an identity provider will check your real identity, then issue
you a blindly-signed token which you can then later use to log in
pseudo-anonymously to an OpenID consumer. The consumer and provider can't
later correlate your real identity with that login.

This was a summer project from an intern at the time and should be
considered a proof-of-concept. It does the unblinding crypto in
server-delivered Javascript so is not secure as-is. Do not use for
anything in practice.

Here's the paper:
http://saweis.net/pdfs/pseudoid-pets2010.pdf

Here's the source:
https://code.google.com/p/pseudoid/

Here's a demo video:
https://www.youtube.com/watch?feature=player_embedded&v=fCBPuGsO_I4

Here's a site that was the private ID provider demo:
http://private-idp.appspot.com/

Here was the blind-signer demo, which is broken since we accidentally let the 
pseudoid.net domain lapse:
http://blind-signer.appspot.com/



On Sun, Aug 18, 2013 at 1:08 AM, Jake j...@spaz.org wrote:
  Hello everybody,

  I am trying to form an anonymous opining system based on a single 
Registrar, whose signatures deify users' public keys
  with the mark of a Participant.  But to protect the users from an evil 
registrar, blinding must be used.

  I have been told that blinding is already implemented internally to deter 
timing-based attacks, so this would be a
  matter of implementing a command-line option to blind a blob and save the 
blinding salts.

  I am not a cryptographer so I can only repeat what i've heard on this.

  
http://en.wikipedia.org/wiki/Blind_signature#Blind_RSA_signatures.5B2.5D:235

  Basically, a Participant generates a key pair (only for use in opining, 
not with their real identity) and wants to be
  able to prove, in public signed cleartext postings, that their public key 
has been signed by the Registar as an
  endorsement of Participation.  But they don't want the Registrar to see 
their public key and correlate it with their
  real identity (their proof of eligibility for participation) because that 
would compromise their anonymity.

  So the Participant blinds their public key, presents that blob to the 
Registrar (along with their real identity)
  and receives the Registrar's signature of the blob.  Then they take the 
blob home, and unblind it, revealing a
  perfect Registrar's signature of their public key.
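The blind/sign/unblind round trip described above can be checked with toy RSA numbers (no padding, illustrative parameters only; this is the textbook scheme from the linked Wikipedia section, not GPG internals):

```python
# Textbook blind RSA: blind with r^e, have the blob signed, then
# strip the blinding factor. Toy key: p=61, q=53 => n=3233, and
# d = e^-1 mod (p-1)(q-1). Requires Python 3.8+ for pow(r, -1, n).
from math import gcd

n, e, d = 3233, 17, 2753

def blind(m, r):
    assert gcd(r, n) == 1
    return (m * pow(r, e, n)) % n             # m' = m * r^e mod n

def sign_blob(blob):                          # Registrar sees only the blob
    return pow(blob, d, n)                    # s' = (m')^d mod n

def unblind(s_blinded, r):
    return (s_blinded * pow(r, -1, n)) % n    # s = s' * r^-1 = m^d mod n

m, r = 1234, 7
s = unblind(sign_blob(blind(m, r)), r)
assert s == pow(m, d, n)                      # identical to a direct signature
assert pow(s, e, n) == m                      # verifies under the public key
```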

  Please write if you can help me make this happen.  I believe that the 
system i'm trying to create could have a very
  positive effect on democracy in the world, and hopefully make politicians 
into simple clerks whose job is simply to
  count the opinions and follow the will of the people.

  take care,
  -jake





Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-21 Thread Mansour Moufid
On 2013-08-17, at 1:50 PM, Jon Callas wrote:

 On Aug 17, 2013, at 12:49 AM, Bryan Bishop kanz...@gmail.com wrote:
 
 Would providing (signed) build vm images solve the problem of
 distributing your toolchain?
 
 Maybe. The obvious counterexample is a compiler that doesn't
 deterministically generate code, but there's lots and lots of hair in
 there, including potential problems in distributing the tool chain
 itself, including copyrighted tools, libraries, etc.
 
 But let's not rathole on that, and get to brass tacks.
 
 I *cannot* provide an argument of security that can be verified on its
 own. This is Gödel's second incompleteness theorem. A set of
 statements S cannot be proved consistent on its own. (Yes, that's a
 minor handwave.)
 
 All is not lost, however. We can say, Meh, good enough and the
 problem is solved. Someone else can construct a *verifier* that is
 some set of policies (I'm using the word policy but it could be a
 program) that verifies the software. However, the verifier can only be
 verified by a set of policies that are constructed to verify it. The
 only escape is decide at some point, meh, good enough.

Gitian can build projects deterministically such that the result can be
corroborated by many parties:

http://gitian.org/

I don't know if it can be used with the app stores but it shows that the
process is doable for those who really care. Personally I think time is
better spent on static analysis for example.



Re: [cryptography] urandom vs random

2013-08-21 Thread Sandy Harris
grarpamp grarp...@gmail.com wrote:

 The subject thread is covering a lot about OS implementations
 and RNG various sources. But what are the short list of open
 source tools we should be using to actually test and evaluate
 the resulting number streams?

Two good ones are listed and linked here:
http://en.citizendium.org/wiki/Random_number#Testing_for_Randomness

My system is running Xubuntu. Randomness testers that are in its
repositories, and presumably quite a few others, are ent(1) and
dieharder(1).

See other posts in the thread for limitations of such testing.


Re: [cryptography] urandom vs random

2013-08-21 Thread Aaron Toponce
On Tue, Aug 20, 2013 at 05:33:05PM -0400, grarpamp wrote:
 The subject thread is covering a lot about OS implementations
 and RNG various sources. But what are the short list of open
 source tools we should be using to actually test and evaluate
 the resulting number streams?

As already mentioned in the thread, you can't fully verify a random source:
to be truly random, it must come from some chaotic physical process, such
as radioactive decay. However, you can make statistical judgements on the
output, to determine if the source is 'random enough'. This is where the
Die Hard and FIPS 140-2 checks come into play. The trick is sampling for a
long period of time, rather than a few minutes here and there.

# timeout 1h rngtest < /dev/random
rngtest 2-unofficial-mt.14
Copyright (c) 2004 by Henrique de Moraes Holschuh
This is free software; see the source for copying conditions.  There is
NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE.

rngtest: starting FIPS tests...
rngtest: bits received from input: 79369360032
rngtest: FIPS 140-2 successes: 3965374
rngtest: FIPS 140-2 failures: 3094
rngtest: FIPS 140-2(2001-10-10) Monobit: 378
rngtest: FIPS 140-2(2001-10-10) Poker: 393
rngtest: FIPS 140-2(2001-10-10) Runs: 1205
rngtest: FIPS 140-2(2001-10-10) Long run: 1128
rngtest: FIPS 140-2(2001-10-10) Continuous run: 0
rngtest: input channel speed: (min=419.675; avg=25223.970; 
max=28892.382)Kibits/s
rngtest: FIPS tests speed: (min=6.227; avg=143.700; max=155.069)Mibits/s
rngtest: Program run time: 360102 microseconds

~0.078% failure rate for these tests.
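For what it's worth, that failure rate checks out against the counters in the transcript above:

```python
# FIPS 140-2 failures as a fraction of all test runs reported by
# rngtest in the transcript above.
successes, failures = 3965374, 3094
rate = failures / (successes + failures)
assert abs(rate - 0.00078) < 1e-4   # ~0.078%
```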

-- 
. o .   o . o   . . o   o . .   . o .
. . o   . o o   o . o   . o o   . . o
o o o   . o .   . o o   o o .   o o o

