Re: [cryptography] skype backdoor confirmation
Gmail only keeps in the clear what you leave in the clear.

s/a hostile act/less useful to power users than filter-but-notify/

On Mon, May 20, 2013 at 8:48 PM, James A. Donald jam...@echeque.com wrote:

On 2013-05-21 3:08 AM, Mark Seiden wrote:
(i know that at least jake and ian understand all the nuances here, probably better than me.) but still, i would like you to consider, for a moment, this question: suppose there were a service that intentionally wanted to protect recipients of communications from malicious traffic? when i was at $big_provider, i spent an awful lot of time and energy communicating with colleagues and sharing threat intelligence about bad guys.

Gmail is very efficient at filtering out malicious traffic. It also spies on all its customers and keeps all their mail in the clear forever. For this reason I use mail services that perform absolutely no filtering, and do my own filtering. If I get filtered, I want to know it. Furtive filtering is a hostile act.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

--
Kyle Creyts
Information Assurance Professional
BSidesDetroit Organizer
Re: [cryptography] chaos-based cryptosystem with quantum crypto similarities
While my institution does appear to have access to the publications of World Scientific, World Scientific doesn't list this paper as one of their publications in the journal. They have works from all three authors listed, but not that paper. It appears to have been accepted for publication, but not yet published, per http://www.ee.cityu.edu.hk/~gchen/IJBC/IJBC_accepted.htm

On Sat, Sep 29, 2012 at 5:54 AM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

d...@geer.org writes:
I clearly need to read something else first. Suggestions?

One of Underwood Dudley's books perhaps?

Peter.
Re: [cryptography] TEC Fibonacci Version
I am of the opinion that removing absolute words such as "uncrackable" and "impervious," and replacing them with more flexible language like "highly resistant to traditional cryptanalytic attacks" or "a resistance to cracking exceeding that of currently employed methods," could yield excellent results for this paper. So could a few rounds of discussion and proofreading with a willing and helpful colleague whose first language is English (ideally one who is extremely familiar with the venue in which you are trying to have your submission reviewed), and/or submitting to a different publication venue with a different scope.

On Wed, Aug 22, 2012 at 12:04 PM, Jeffrey Walton noloa...@gmail.com wrote:

On Wed, Aug 22, 2012 at 1:41 PM, Givonne Cirkin givo...@37.com wrote:
Hi. For those interested, the demo version of the codec for the Fibonacci implementation of TEC is now available at www.givonzirkind.weebly.com on the download page. The article describing the techniques involved: http://arxiv.org/abs/0912.4080
My colleagues agree with me. But I have not been able to get past peer review and publish this paper.

Apparently, your colleagues don't agree with you (or you agree that the system has flaws, which seems like it could be a problem).

Jeff
Re: [cryptography] Why do scammers say they're from Nigeria?
Emphasis on _most profitable_ here. Clearly not the only strategy employed. Also, this mode applies mostly to spam; there are a number of other ways of filtering for the victims who will take interest, be more gullible, or get hooked that do not require being obviously dubious.

On Wed, Jun 20, 2012 at 1:56 PM, Tim Dierks t...@dierks.org wrote:

This is an interesting paper that presumably has implications for other social engineering schemes besides financial scams: http://research.microsoft.com/pubs/167719/WhyFromNigeria.pdf

ABSTRACT: False positives cause many promising detection technologies to be unworkable in practice. Attackers, we show, face this problem too. In deciding who to attack, true positives are targets successfully attacked, while false positives are those that are attacked but yield nothing. This allows us to view the attacker's problem as a binary classification. The most profitable strategy requires accurately distinguishing viable from non-viable users, and balancing the relative costs of true and false positives. We show that as victim density decreases, the fraction of viable users that can be profitably attacked drops dramatically. For example, a 10× reduction in density can produce a 1000× reduction in the number of victims found. At very low victim densities the attacker faces a seemingly intractable Catch-22: unless he can distinguish viable from non-viable users with great accuracy, the attacker cannot find enough victims to be profitable. However, only by finding large numbers of victims can he learn how to accurately distinguish the two. Finally, this approach suggests an answer to the question in the title. Far-fetched tales of West African riches strike most as comical. Our analysis suggests that is an advantage to the attacker, not a disadvantage. Since his attack has a low density of victims, the Nigerian scammer has an over-riding need to reduce false positives. By sending an email that repels all but the most gullible, the scammer gets the most promising marks to self-select, and tilts the true-to-false-positive ratio in his favor.

- Tim
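The density argument can be illustrated with a toy expected-profit calculation. The model and all the numbers below are mine, not the paper's: with a fixed classifier, a 10x drop in victim density flips the campaign from profitable to loss-making, so the attacker must tighten his filter, which in turn shrinks the number of victims found.

```python
def profit(n_targets, density, tpr, fpr, gain, cost):
    """Expected profit when the attacker attacks everyone his
    classifier flags as viable. Toy model with made-up numbers."""
    viable = n_targets * density
    attacked = viable * tpr + (n_targets - viable) * fpr
    victims = viable * tpr
    return victims * gain - attacked * cost, victims

# Same classifier (tpr/fpr), gain, and per-attack cost;
# only the victim density changes between the two runs.
for d in (0.01, 0.001):
    p, v = profit(n_targets=1_000_000, density=d, tpr=0.8, fpr=0.01,
                  gain=100, cost=10)
    print(f"density={d}: victims={v:.0f}, profit=${p:,.0f}")
```

At density 0.01 the false positives are affordable; at 0.001 the same classifier attacks mostly non-viable users and the campaign loses money, which is exactly the pressure toward a self-selecting (repellent) pitch.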
Re: [cryptography] Intel RNG
On Mon, Jun 18, 2012 at 7:12 PM, Marsh Ray ma...@extendedsubset.com wrote:

On 06/18/2012 12:20 PM, Jon Callas wrote:
A company makes a cryptographic widget that is inherently hard to test or validate. They hire a respected outside firm to do a review. What's wrong with that? I recommend that everyone do that. Un-reviewed crypto is a bane.

Let's accept that the review was competent, thorough, and independent. Here's what I'm left wondering:

How do I know that the circuit that was reviewed is actually the thing producing the random numbers on my chip? Why should I assume it doesn't have any backdoors, bugdoors, or engineering revisions that make it different from what was reviewed?

Is RDRAND driven by reprogrammable microcode? If not, how are they going to address bugs in it? If so, what algorithms are used by my CPU to authenticate the microcode updates that can be loaded? What kind of processes are used to manage the signing keys for it?

Let's take a look at the actual report: http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf

Page 12: "At an 800 MHz clock rate, the RNG can deliver post-processed random data at a sustained rate of 800 MBytes/sec. In particular, it should not be possible for a malicious process to starve another process."

Wait a minute... that second statement doesn't follow from the first. We're talking about chips with 25 GB/s of *external* memory bus bandwidth; why can't a 4-core 2 GHz processor request on the order of 64 bits per core*clock, for 4*64*2e9 = 512e9 b/s = 60 GiB/s?

So 800 MiB/s @ 800 MHz
= 7.62 clocks per 64-bit RDRAND result (if they mean 1 core)
= 30.5 clocks per 64-bit RDRAND (if they mean 4 cores)
= 61.0 clocks per 64-bit RDRAND (if they mean 8 hyperthreaded cores).
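The arithmetic above can be checked directly. This assumes the report's "800 MBytes/sec" means MiB and that each RDRAND returns one 64-bit result; if it instead means 10^6 bytes, the 1-core figure comes out to exactly 8.0 clocks.

```python
# Check of the clocks-per-RDRAND figures quoted above
# (MiB interpretation of the report's "800 MBytes/sec").
rate_bytes = 800 * 2**20           # 800 MiB/s sustained output
results_per_sec = rate_bytes / 8   # 64-bit (8-byte) results per second
clock_hz = 800e6                   # 800 MHz RNG clock

clocks_per_result = clock_hz / results_per_sec
for cores in (1, 4, 8):
    print(f"{cores} core(s): {clocks_per_result * cores:.1f} clocks per RDRAND")
```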
More info: http://software.intel.com/en-us/articles/intel-digital-random-number-generator-drng-software-implementation-guide/

"Data taken from an early engineering sample board with a 3rd generation Intel Core family processor, code-named Ivy Bridge, quad core, 4 GB memory, hyper-threading enabled. Software: LINUX* Fedora 14, GCC version 4.6.0 (experimental) with RDRAND support, test uses p-threads kernel API."

Why does an Intel Software Implementation Guide have more information about the actual device under test than the formal report?

"Measured Throughput: Up to 70 million RDRAND invocations per second"
Implies:
4 cores @ 2 GHz: 114 clocks per RDRAND
8 cores @ 2 GHz: 229 clocks per RDRAND

"500+ million bytes of random data per second"
Implies:
rate = 62.5e6 RDRAND/s
t(RDRAND) = 16 ns = 32 clocks on 1 core @ 2 GHz = 256 core*clocks on 8 cores @ 2 GHz

"RDRAND Response Time and Reseeding Frequency: ~150 clocks per invocation. Note: varies with CPU clock frequency since the constraint is the shared data path from DRNG to cores. Little contention until 8 threads (or 4 threads on a 2-core chip). Simple linear increase as additional threads are added."

So when the statement "it should not be possible for a malicious process to starve another process" is given without justification, it leads us to ask: why should it not be possible to starve another process? Maybe this is our answer: because this hardware instruction is slow!

150 clocks (Intel's figure) implies 18.75 clocks per byte. Then perhaps this is the proper graph to examine: http://bench.cr.yp.to/graph-sha3/8-thumb.png Maybe this is the case RDRAND is intended for: producing random 8-byte seeds?

It would appear that the instruction is actually a blocking operation that does not return until the request has been satisfied. Or does it? Can we then expect it will never result in an out-of-entropy condition? What happens when 16 cores are put on a chip?
Will some future chip begin occasionally returning zeroes from RDRAND when an attacker fires off 31 simultaneous threads requesting entropy? Or will RDRAND take 300 clocks to execute?

Note that Skein-512 in pure software costs only about 6.25 clocks per byte: three times faster! If RDRAND were entered in the SHA-3 contest, it would rank in the bottom third of the remaining contestants. http://bench.cr.yp.to/results-sha3.html

So perhaps we should not throw away our software-stirred entropy pools just yet. If RDRAND is present, it should be used to contribute 128 bits or so at a time, as just one of several sources of entropy. It could certainly help to kickstart the software RNG in those critical first seconds after cold boot (you know, when the SSH keys are being generated).

- Marsh
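The "one of several sources" approach can be sketched as follows. This is a toy illustration only: os.urandom stands in for a 128-bit RDRAND read (real code would issue the instruction itself, e.g. via a compiler intrinsic), and a plain hash chain stands in for a proper pool construction.

```python
import hashlib
import os
import time

def read_hw_rng(n=16):
    # Stand-in for 128 bits from RDRAND; a real implementation
    # would use the CPU instruction, not os.urandom.
    return os.urandom(n)

class EntropyPool:
    """Toy software-stirred pool: every source is hashed into the
    state, so no single source (including the hardware RNG) fully
    determines the output."""
    def __init__(self):
        self._state = hashlib.sha512(b"pool-init").digest()

    def mix(self, data: bytes):
        self._state = hashlib.sha512(self._state + data).digest()

    def extract(self, n=32) -> bytes:
        out = hashlib.sha512(b"out" + self._state).digest()[:n]
        self.mix(b"extracted")  # ratchet the state forward
        return out

pool = EntropyPool()
pool.mix(read_hw_rng())                          # 128 bits from the hardware RNG
pool.mix(time.time_ns().to_bytes(8, "little"))   # timing jitter
pool.mix(os.getpid().to_bytes(4, "little"))      # other environment data
key = pool.extract(32)
```

Even if the hardware source later turns out to be weak (or backdoored), the pool output is no worse than what the remaining sources provide.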
Re: [cryptography] Master Password
Password security applications must store entropy on behalf of the user; there's no way around it.

I'm not quite convinced that things would be worse. Master Password's algorithm was always designed with the knowledge that individual passwords can easily be compromised, and in fact was conceived as a way of addressing this problem specifically.

If we compare the Master Password solution being used by a large user-base against that same user-base using their own individual password management solutions, I believe we'll find that since users easily give up on security and simply reuse or rehash a small subset of passwords for all accounts, an attacker who obtains a user's credentials for one site will have a much easier time figuring out the user's credentials for their other sites than they would if they first had to brute-force that user's master password.

Note that an attacker gains very little from discovering a user's password for a site, and very little more by discovering that multiple users are using the same password. As far as I understand, there's no way to reverse the site's password and determine the master password from it, and knowing that multiple users have used the same master password does not aid in this attempt either. Knowing that 2, 5, or 10 users are using the same master password might make it a more profitable target for brute-forcing, but I'm currently still convinced that an attacker will opt for a user-chosen password before he'll have a go at brute-forcing Master Password. But please re-iterate your point if I've completely missed it.

- Marsh
Re: [cryptography] Master Password
Which is not to say that I find the single case, or the cryptographic strength, to be superior to other systems. But it certainly complicates the job of an attacker seeking to exploit large numbers of passwords, or cross-service password reuse. Imperfect, but not a terrible step.

On Wed, May 30, 2012 at 11:23 AM, Kyle Creyts kyle.cre...@gmail.com wrote:

I would hazard a guess that this system would stand up well against mass attacks, at the very least making them much less economically desirable or feasible for attackers who benefit most from password dumps. Most architectures fail in single cases anyway, due to poor user awareness, poor user decisions, or bad implementations.

On Wed, May 30, 2012 at 9:06 AM, Maarten Billemont lhun...@lyndir.com wrote:

First of all, thanks for your time and very valuable feedback.

On 30 May 2012, at 07:20, Marsh Ray wrote:

On 05/29/2012 06:01 PM, Maarten Billemont wrote:
Dear readers, I've written an iOS / Mac application whose goal it is to produce passwords for any purpose. I was really hoping for the opportunity to receive some critical feedback or review of the algorithm used[1].
[1] http://masterpassword.lyndir.com/algorithm.html

"Master Password. So how does it work? The theory behind Master Password is simple. The user remembers a single, secure password. The user only ever uses that password to log into the Master Password application. This master password is then used as a seed to generate a different password based on the name of"

How about just "used to generate"? Usually the term "seed" is used when it completely determines a pseudorandom sequence rather than being combined with other information. [edit: OIC, the master password generates a seed based on the name of the site. This was unclear.]

"the site to generate a password for. The result is that each master password generates its own unique sequence of passwords for any site name."
"Since the only input data is the master password and the site name (along with a password counter, see below), there is no need for any kind of storage to recreate a site's password. All that's needed is the correct master password and the correct algorithm implementation."

How do you know how far along in the password sequence you are?

The summary descriptions of the algorithm are simplified (perhaps too much so) by assuming default values for those inputs that have them. Technically, there are these inputs:
- The master password
- The site name
- The password counter (default 0)
- The password type (default "Long Password")

"What that does for you is make it almost impossible to lose your passwords."

To "lose" a password has multiple meanings. How about "much easier to create and remember strong passwords"?

The intended meaning was: get into a situation where you can no longer obtain the data necessary for accessing your passwords. Additionally, you can no longer get into a situation where this data can be lost to a third party, such as when someone steals your notebook full of passwords.

"It also makes it nearly impossible for hackers to steal your online identity."

I don't think this claim is well substantiated. :-)

The basis of this claim is the theory that attacking your password for a single site remains as vulnerable as password authentication has ever been, but this does not risk your global identity as reusing or rehashing the same password on multiple sites would. It's phrased awkwardly for an audience of professionals, but was targeted at users. This page may not be the right location for such language.

"The Algorithm. In short, the algorithm is comprised of the following steps: determining the master key, determining the cipher seed, encoding a user-friendly password."

"The Master Password. The user chooses a single master password, preferably sufficiently long to harden against brute-force attacks."
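The stateless derivation described above (master key from the master password, a per-site seed from the site name and counter, then encoding into a user-friendly password) can be sketched roughly as follows. This is a hypothetical illustration: the KDF parameters, seed construction, template, and character sets here are invented for the example and do not match Master Password's actual specification (see the linked algorithm page).

```python
import hashlib
import hmac

TEMPLATE = "CvcvnoCvcvCvcv"  # hypothetical "Long Password" template
CHARS = {
    "C": "BCDFGHJKLMNPQRSTVWXZ",   # consonant, upper
    "c": "bcdfghjklmnpqrstvwxz",   # consonant, lower
    "v": "aeiou",                  # vowel
    "n": "0123456789",             # digit
    "o": "@&%?,=[]_:-+*$#!'^~;",   # symbol
}

def master_key(master_password: str, user_name: str) -> bytes:
    # Expensive memory-hard KDF, so brute-forcing the master
    # password stays costly (parameters chosen for the example).
    return hashlib.scrypt(master_password.encode(),
                          salt=user_name.encode(),
                          n=2**14, r=8, p=1, dklen=64)

def site_password(key: bytes, site_name: str, counter: int = 0) -> str:
    # Deterministic per-site seed: the same four inputs always
    # regenerate the same password, so nothing needs to be stored.
    seed = hmac.new(key, f"{site_name}:{counter}".encode(),
                    hashlib.sha256).digest()
    return "".join(CHARS[t][b % len(CHARS[t])]
                   for t, b in zip(TEMPLATE, seed))

key = master_key("correct horse battery", "Robert")
pw1 = site_password(key, "example.com")
pw2 = site_password(key, "example.com", counter=1)  # bumped counter
```

Bumping the counter yields a fresh password for the same site (e.g. after a breach) without any stored state beyond remembering the counter value.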
"Master Password recommends absurd two- or three-word sentences, as they're easily remembered and generally sufficiently high in entropy."

I believe this is not good advice, because two- and three-word sentences will not provide enough entropy. Wikipedia cites a Harvard/Google study indicating English has 1,022,000 words: https://en.wikipedia.org/wiki/Number_of_words_in_English#Number_of_words_in_English

Users have great difficulty picking original passwords; even security pros tend to be overconfident in the uniqueness of their password schemes. For example, http://www.imperva.com/docs/WP_Consumer_Password_Worst_Practices.pdf: "the 5000 most popular passwords [...] are used by a share of 20% of the users."

So even if users chose two completely independent passwords from the RockYou statistical distribution to form their sentence, more than 4% of users would use one of the top 25 million two-password combinations. But that's when users are asked to pick unguessable passwords, asking