Re: PGPfreeware 8.0: Not so good news for crypto newcomers
On Monday, Dec 9, 2002, at 05:18 Europe/London, Bill Frantz wrote:

> At 2:34 PM -0800 12/8/02, John Doe Number Two wrote:
>> For all the whining from the 'free beer' crowd, no one had bothered to make PGP/gnupg compatible with OSX and Windoze XP. Looks like PGP Inc was trying to fill a hole in the market by doing just that.
>
> My wife is using GPG on OS X. She has integrated it with Mail in GUI mode using a package called PGPMail. She says, "It seems to be working OK." (I remember spending some time helping her get it up and running. Knowledge of the Unix shell helps.)

I can also vouch for GPG on OS X, and the GPGMail plug-in for the Apple Mail application works fairly well for GPG use, though not for key generation or administrative activities. Still, I moved over to PGP 8.0 when the beta came out, and while I'm no novice with these things I greatly appreciate the slick user interface (and the disk encryption, which is a few times faster than Apple's), so I am now running the PGP 8.0 Personal edition rather than GPGMail.

I think this comes down to a classic time/money tradeoff. PGP 8.0 Personal edition is currently priced at $39. Even as an experienced Unix and PGP user, I think that the GUI on PGP 8.0 will save me at least an hour of effort over the lifetime of the product, which means it saves me money in the long run.

	Nicko
Re: QuizID?
On Thursday, Oct 17, 2002, at 19:39 Europe/London, Rich Salz wrote:

> Marc Branchaud wrote:
>> Any thoughts on this device? At first glance, it doesn't seem particularly impressive... http://www.quizid.com/
>
> Looks like hardware S/Key, doesn't it? If I could fool the user into entering a quizcode, then it seems like I could get the device and the admin database out of sync and lock the user out of the system.

[Note: I have an interest, since QuizID use nCipher hardware.]

Their device has a neat way of synchronizing the sequence number with the server, which both avoids the clock drift problems that trouble RSA SecurID and means that you'd have to get the user to pass you a large number of codes before you got them out of sync with the server. It also helps them avoid some of RSA's later patents, which deal with their troublesome clock sync problems.

	Nicko
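[The post doesn't describe QuizID's actual mechanism, but the shape of a sequence-number scheme with a bounded server-side look-ahead window (essentially what was later standardised as HOTP in RFC 4226) can be sketched in a few lines of Python. All names and parameters below are illustrative assumptions, not QuizID's design:

    import hmac
    import hashlib

    def code(secret: bytes, counter: int, digits: int = 6) -> str:
        """Derive a one-time code from a shared secret and a sequence number."""
        msg = counter.to_bytes(8, "big")
        mac = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                     # dynamic truncation, as in HOTP
        value = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    class Server:
        """Verifier that resynchronises within a bounded look-ahead window."""
        def __init__(self, secret: bytes, window: int = 10):
            self.secret = secret
            self.counter = 0          # last counter value successfully used
            self.window = window      # how far ahead the token may have drifted

        def verify(self, submitted: str) -> bool:
            for c in range(self.counter + 1, self.counter + 1 + self.window):
                if hmac.compare_digest(code(self.secret, c), submitted):
                    self.counter = c  # resync: jump forward to the token's position
                    return True
            return False              # beyond the window: token and server desync

    # Token and server share a secret; the token just increments its counter.
    secret = b"shared-seed"
    server = Server(secret)
    token_counter = 3                 # user pressed the button a few times idly
    assert server.verify(code(secret, token_counter))

The window is what addresses Rich's desync worry: a few idly burned codes are absorbed as long as the gap stays within the window, there is no clock to drift, and an attacker would indeed have to extract a large number of codes from the user before pushing the token past the window and locking the user out.]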
Re: objectivity and factoring analysis
Anonymous wrote:

> Nicko van Someren writes:
>> The estimate of the cost of construction I gave was some hundreds of millions of dollars, a figure by which I still stand.
>
> But what does that mean, to specify (and stand by) the cost of construction of a factoring machine, without saying anything about how fast it runs? Heck, we could factor 1024 bit numbers with a large abacus, if we don't care about speed. A cost figure is meaningless except in the context of a specific performance goal.

You'd need a large abacus and a very large stack of paper...

If you had read the Bernstein proposal in detail you would understand that (among other things) it details the conceptual design of a machine for computing kernels in a large, sparse matrix. The design talks of the number of functional units and the nature of the communication between these units. What I set out to do was look at how complex those units are and how hard it would be to connect them, and to place a cost on this.

>> I was then asked how fast this machine would run and I tried to do the calculation on the spot without a copy of the proposal to hand, and came up with a figure on the order of a second based on very conservative hardware design. This figure is *wildly* erroneous as a result of both not having the paper to hand and also not even having an envelope on the back of which I could scratch notes.
>
> And yet here you say that it took you completely by surprise when someone asked how fast the machine would run. In all of your calculations on the design of the machine, you had apparently never calculated how fast it would be.

I'm sorry, but I don't think at infinite speed. I started this process after lunch and the panel session started at 1:30pm. I did say that this was an impromptu session.

> How could this be? Surely in creating your hundreds of millions of dollars estimate you must have based that on some kind of speed consideration. How else could you create the design? This seems very confusing.

See my comments above. The costing was based on transistor count and engineering costs. The design suggested in the Bernstein proposal does not have a simple size/time trade-off, since the size of the system is prescribed by the algorithm.

> And, could you clarify just a few more details, like what was the size you were assuming for the factor base upper bounds, and equivalently for the size of the matrix?

I used the number 10^9 for the factor base size (compared to about 6*10^6 for the break of the 512 bit challenge) and 10^11 for the weight of the matrix (compared to about 4*10^8 for RSA-512). Again, these were guesses and they could certainly be out by an order of magnitude.

> This would give us a better understanding of the requirements you were trying to meet. And then, could you even go so far as to discuss clock speeds and numbers of processing and memory elements? Just at a back of the envelope level of detail?

OK, here are the numbers I used. Again, I preface this all with it being order-of-magnitude estimates, not engineering results. It's based on a proposal, not a results paper, and there are doubtless numerous engineering details that will make the whole thing more interesting.

The matrix reduction cells are pretty simple and my guess was that we could build the cells plus inter-cell communication in about 1000 transistors. I felt that, for a first-order guess, we could ignore the transistors in the edge drivers, since for a chip with N cells there are only order N^(1/2) edge drivers.
Thus I guessed 10^14 transistors, which might fit onto about 10^7 chips, which in volume (if you own the fabrication facility) cost about $10 each, or about $10^8 for the chips. Based on past work in estimating the cost of large systems, I then multiplied this by three or four to get a build cost.

As far as the speed goes, this machine can compute a dot product in about 10^6 cycles. Initially I thought that the board-to-board communication would be slow and we might only have a 1MHz clock for the long-haul communication, but I messed up the total time and got that out as a 1 second matrix reduction. In fact, computing a kernel takes about 10^11 times longer than a single dot product. Fortunately it turns out that you can probably drive from board to board at a few GHz or better (using GMII-type interfaces from the back planes of network switches). If we can get this up to 10GHz (we do have lots to spend on R&D here) we should be able to find a kernel in somewhere around 10^7 seconds, which is 16 weeks or about 4 months.

There are, of course, a number of other engineering issues here. One would want to work out how to build this machine to be tolerant of hardware faults, since getting 10^7 chips to all run faultlessly for months at a time is tough to say the least. Secondly, we are going to need some neat power reduction techniques in these chips to dissipate the huge power that the machine needs, which will likely run to 10^8 watts or so.
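[For readers who want to check the arithmetic, here is the calculation above rendered as a short Python script. Every input is one of the order-of-magnitude guesses quoted in this post, not an engineering figure, and the variable names are mine:

    # Order-of-magnitude estimates from the post; none are engineering results.
    matrix_weight        = 1e11  # non-zero entries guessed for a 1024-bit matrix
    transistors_per_cell = 1e3   # matrix reduction cell plus inter-cell wiring
    transistors_per_chip = 1e7   # assumed density, giving ~1e7 chips in total
    chip_cost_usd        = 10    # volume price if you own the fabrication plant
    build_multiplier     = 4     # past experience: build cost is 3-4x parts cost

    total_transistors = matrix_weight * transistors_per_cell   # ~1e14
    chips = total_transistors / transistors_per_chip           # ~1e7
    build_cost = chips * chip_cost_usd * build_multiplier      # ~4e8 USD

    cycles_per_dot_product = 1e6
    kernel_vs_dot_product  = 1e11   # the factor missed in the on-stage estimate
    clock_hz = 1e10                 # optimistic 10 GHz board-to-board signalling

    kernel_seconds = cycles_per_dot_product * kernel_vs_dot_product / clock_hz
    print(f"chips: {chips:.0e}, build cost: ${build_cost:.0e}")
    print(f"kernel time: {kernel_seconds:.0e} s "
          f"(~{kernel_seconds / 86400 / 7:.0f} weeks)")

Run as-is it prints roughly 10^7 chips, a build cost around $4*10^8, and a kernel time of about 10^7 seconds, matching the "some hundreds of millions of dollars" and "16 weeks or about 4 months" figures above.]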
Re: objectivity and factoring analysis
- Forwarded message from Adam Back [EMAIL PROTECTED] -

To: Cryptography [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
From: Adam Back [EMAIL PROTECTED]
Subject: objectivity and factoring analysis
Date: Fri, 19 Apr 2002 14:51:59 +0100
Sender: [EMAIL PROTECTED]

> I'd just like to make a few comments about the apparently unnoticed or unstated conflicts of interest and bias in the analysis surrounding Bernstein's proposal. ...
>
> - I'm not sure any of the respondents so far except Bernstein have truly understood the math -- there are probably few who do, factoring being such a narrow research area.

I'm inclined to agree with you there ...

> - Nicko van Someren -- the person credited with originally making the exaggerated, or at least highly worst-case, interpretation at the FC02 panel -- has a conflict of interest -- hardware accelerator gear that nCipher sell will be more markedly needed if people switch to 2048 or larger keys. Nicko has made no public comments in the resulting discussion.

Since you mention it, I will make a comment, for the purpose of clearing up a number of misunderstandings about what I said and the context in which the comments were made.

At the Financial Cryptography 2002 conference a small and impromptu panel was convened to discuss the Bernstein proposal. Since I'm in the business of building hardware, I was asked to comment on the cost of building some of the hardware described therein. I limited myself to comments about the design for the engine that could be used to take the results of the sieve process and compute values leading to a pair of roots, and furthermore I prefaced and qualified my comments with strong statements about any numbers being very rough back-of-an-envelope calculations.

The estimate of the cost of construction I gave was some hundreds of millions of dollars, a figure by which I still stand. I was then asked how fast this machine would run; I tried to do the calculation on the spot, without a copy of the proposal to hand, and came up with a figure on the order of a second based on very conservative hardware design. This figure is *wildly* erroneous, as a result of both not having the paper to hand and not even having an envelope on the back of which I could scratch notes. The number was based on a miscalculation, by a factor of 10^11, of the number of clock ticks the circuit would need; this vast error is only slightly moderated by the fact that on further analysis I concluded that the hardware could be made to operate on a clock between 10^3 and 10^4 times faster, since the inter-circuit communication did not need to be as slow as I had originally thought. Thus I think that with care a matrix reduction machine of the sort described could be built to run in a few weeks or months for 1024 bit keys.

Despite all the qualifying of these statements, I felt then, and still feel now, that if you have keys that you think rich governments might genuinely be interested in then you should use ones that are longer than 1024 bits. If you have personal information that you want to keep secret from rich governments for many years to come then you should probably use longer keys anyway. After all, we can expect on past form that the security agencies are some years ahead of the academic state of the art in this field.
On the other hand, if you are moving general commercial data around on an everyday basis, I don't think that there is much wrong with 1024 bit keys, and I would not, and have not, suggested that there is anything insecure about such key lengths for, say, electronic banking or e-commerce using SSL.

I have to say that Adam's suggestion that there was some sort of ulterior motive for my comments is both disingenuous and somewhat insulting. In the context of a short and impromptu discussion on a topic which people felt was a live one, I made a back-of-the-envelope calculation, in which I used the time figure as the cost figure, and came up with totally the wrong number. I wasn't expecting that this was going to be used as the basis for anything other than perhaps an excuse for looking into the problem a little more deeply. The critical problem here seems to be that Lucky then quoted this number on a mailing list before I'd had a chance to look more closely. I don't think that there was any conflict of interest involved at all, given both the nature of the discussion and the fact that these days cryptographic acceleration is pretty peripheral to the nature of my business.

> - Lucky on the other hand suggested a practical security engineering approach: to start to plan for the possibility of migrating to larger key sizes. Already one SSH implementation added a configuration option to select a minimum key size accepted by servers as a result. This seems like a positive outcome. Generally the suggestion to move to 2048 bit keys seems like a good idea to me. Somewhat like MD5 -> SHA1, MD5 isn't broken for most
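[The message doesn't name the SSH implementation or its option, so the following is a hypothetical Python sketch of what such a "minimum accepted key size" policy amounts to: the verifier simply refuses RSA public keys whose modulus falls below a configured floor. All names here are invented for illustration:

    # Hypothetical minimum-key-size policy check; names are illustrative.
    MIN_RSA_MODULUS_BITS = 2048   # policy floor chosen by the administrator

    def accept_rsa_key(modulus: int, min_bits: int = MIN_RSA_MODULUS_BITS) -> bool:
        """Return True only if the peer's RSA modulus meets the policy floor."""
        return modulus.bit_length() >= min_bits

    # A 1024-bit key is rejected under a 2048-bit floor; a 2048-bit key passes.
    assert not accept_rsa_key((1 << 1023) | 1)   # 1024-bit modulus
    assert accept_rsa_key((1 << 2047) | 1)       # 2048-bit modulus

The point of making it a configuration option rather than a hard-coded limit is exactly the migration planning Lucky suggested: sites can raise the floor as their threat model demands without waiting for a new release.]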
Financial Cryptography 2002: Discount rate ends 1st February
Please note: the discount registration rate is only available until the 1st of February 2002. After this date registration will be charged at the full rate.

                     Financial Cryptography 2002
                          March 11-14, 2002
                         Southampton, Bermuda

                        Call for Participation

Financial Cryptography is the only international conference dedicated to the understanding of cryptography and its relevance to all aspects of the world of finance. The conference aims to bring together cryptographers, technologists, businesses, bankers, lawyers and policy makers to further the understanding of what can be done with cryptography and what needs to be done for the world of finance. Topics for the conference range from Anonymity to Authentication, from Digital Cash to Digital Rights Management, from Legal and Regulatory Issues to Loyalty Mechanisms, and from Payment Systems to Privacy issues.

The program is a combination of peer-reviewed papers, panel discussions and invited talks, and the proceedings will be published in the Springer-Verlag Lecture Notes in Computer Science series.

Registration for Financial Cryptography 2002 is now open; details and online registration can be found at http://www.fc02.ai/ along with information about discounted hotel accommodation.

Financial Cryptography is organized by the International Financial Cryptography Association (IFCA), a non-profit company dedicated to the same goals as the conference. More information can be found at the IFCA web site, http://www.ifca.ai/, or by contacting the conference general chair, Nicko van Someren, at [EMAIL PROTECTED] or by phone on +44 1223 723600.