Re: [EMAIL PROTECTED]: Skype security evaluation]
- Original Message - Subject: [Tom Berson Skype Security Evaluation] Tom Berson's conclusion is incorrect. One need only look at the publicly available information. I couldn't find an immediate reference directly on the Skype website, but Skype uses 1024-bit RSA keys, and the coverage of efforts to break 1024-bit RSA has been substantial. In the end, the security is flawed. Of course, I told them this years ago, when I told them that 1024-bit RSA should be retired in favor of larger keys, and several other people told them as well. Joe
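For context on why 1024-bit RSA draws this criticism, the standard heuristic cost of factoring with the General Number Field Sieve can be computed directly. This is my own back-of-envelope sketch (not from the original post); it ignores constant factors and memory costs, which matter enormously in practice:

```python
import math

def gnfs_log2_cost(bits: int) -> float:
    """Heuristic GNFS complexity L_n[1/3, (64/9)^(1/3)],
    returned as the base-2 logarithm of the operation count."""
    ln_n = bits * math.log(2)
    c = (64.0 / 9.0) ** (1.0 / 3.0)
    ln_cost = c * (ln_n ** (1.0 / 3.0)) * (math.log(ln_n) ** (2.0 / 3.0))
    return ln_cost / math.log(2)

print(f"1024-bit RSA: ~2^{gnfs_log2_cost(1024):.0f} operations")
print(f"2048-bit RSA: ~2^{gnfs_log2_cost(2048):.0f} operations")
```

The 1024-bit estimate lands around 2^87 operations, far below the security margin of the symmetric ciphers it is paired with, which is the usual basis for retiring 1024-bit keys.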
Re: SHA1 broken?
- Original Message - From: Dave Howe [EMAIL PROTECTED] Subject: Re: SHA1 broken? Indeed so. However, the argument "in 1998, an FPGA machine broke a DES key in 72 hours, therefore TODAY..." assumes that (a) the problems are comparable, and (b) that Moore's law has applied to FPGAs as well as CPUs.

That misreads my statements and misses a very large portion where I specifically stated that the new machine would need to be custom instead of semi-custom. The proposed system was not based on FPGAs; it would need to be based on ASICs engineered with modern technology, much more along the lines of a DSP. The primary gains available actually come from the larger wafers in use now, along with transistor shrinkage. Combined, these have approximately kept the cost in line with Moore's law, and the benefits of custom engineering account for the rest. As for the exact details of the calculations: I assumed Moore's law for speed, and an additional 4x improvement from custom chips instead of off-the-shelf parts. To verify the calculations I also redid them assuming DSPs capable of processing the data (specifically from TI); I came to a cost within a couple orders of magnitude, although the power consumption would be substantially higher. Joe
Re: SHA1 broken?
- Original Message - From: Dave Howe [EMAIL PROTECTED] Sent: Thursday, February 17, 2005 2:49 AM Subject: Re: SHA1 broken? Joseph Ashwood wrote: I believe you are incorrect in this statement. It is a matter of public record that RSA Security's DES Challenge II was broken in 72 hours by $250,000 worth of semi-custom machine; for the sake of solidity let's assume they used 2^55 work to break it. Now moving to a completely custom design, bumping the cost up to $500,000, and moving forward 7 years delivers ~2^70 work in 72 hours (give or take a couple orders of magnitude). This puts the 2^69 work well within the realm of realizable breaks, assuming your attackers are smallish businesses; and if your attackers are large businesses with substantial resources, the break can be assumed in minutes if not seconds. 2^69 is completely breakable. Joe

It's fine assuming that Moore's law will hold forever, but without that you can't really extrapolate a future tech curve. With *today's* technology, you would have to spend an appreciable fraction of the national budget to get a one-per-year break. It's not as if anything that has been hashed with SHA-1 can now be considered broken (though the break would allow you to, for example, forge a digital signature given an example). This of course assumes that the break doesn't match the criteria of the previous breaks by the same team - i.e., that you *can* create a collision, but have little or no control over the plaintext of the colliding elements - there is no way to know, as the paper hasn't been published yet.

I believe you substantially misunderstood my statements: 2^69 work is doable _now_. 2^55 work was performed in 72 hours in 1998; scaling forward the 7 years to the present (and hence through known data) leads to a situation where the 2^69 work is achievable today in a reasonable timeframe (3 days), assuming reasonable quantities of available money ($500,000 US). 
There is no guessing about what the future holds for this; the 2^69 work is achievable NOW. - Original Message - From: Trei, Peter [EMAIL PROTECTED] To: Dave Howe [EMAIL PROTECTED]; Cypherpunks [EMAIL PROTECTED]; Cryptography cryptography@metzdowd.com Actually, the final challenge was solved in 23 hours, about 1/3 by Deep Crack and 2/3 by Distributed.net. They were lucky, finding the key after only 24% of the keyspace had been searched. More recently, RC5-64 was solved about a year ago. It took d.net 4 *years*. 2^69 remains non-trivial.

What you're missing in this is that Deep Crack was already a year old at the time it was used, and I was assuming that the most recent technologies would be used, so the 1998 point for Deep Crack was the critical point. Also, if you check the real statistics for RC5-64 you will find that Distributed.net suffered from a major lack of optimization on the workhorse of the DES cracking effort (the DEC Alpha processor), even to the point where running the x86 code in emulation was faster than the native code. Since an Alpha processor had been the breaking force for DES Challenge I, and a factor of 1/3 for Challenge III, this crippled the performance, with the Alphas running at only ~2% of their optimal speed and the x86 systems at only about 50%. Based on just this, 2^64 should have taken only 1.5 years. Additionally, virtually the entire Alpha community pulled out because we had better things to do with our processors (e.g., IIRC the same systems rendered Titanic), and Distributed.net was effectively sucked dry of workhorse systems, so a timeframe of 4-6 months is more likely, without any custom hardware and with rather sad software optimization. Assuming that the new attacks can be pipelined (the biggest problem with the RC5-64 optimizations was pipeline breaking), it is entirely possible to use modern technology along with a GaAs substrate to generate chips in the 10-20 GHz range, about 10x the speed available to Distributed.net. 
Add targeted hardware to the mix, deep pipelining, and massive multiprocessing, and my numbers still hold, give or take a few orders of magnitude (the 8% of Challenge III done by Deep Crack in 23 hours is only a little over 2 orders of magnitude off, so within acceptable bounds). 2^69 is achievable; it may not be pretty, and it certainly isn't kind to the security of the vast majority of secure infrastructure, but it is achievable, and while the cost bounds may have to be shifted, that is achievable as well. It is still my view that everyone needs to keep a close eye on their hashes and make sure the numbers add up correctly; it is simply my view now that SHA-1 needs to be put out to pasture, and the rest of the SHA line needs to be heavily reconsidered because of its close relation to SHA-1. The biggest unknown surrounding this is the actual amount of work necessary to perform the 2^69; if the workload is all XOR, then the costs and timeframe I gave are reasonably pessimistic.
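The extrapolation argued over in this thread can be made explicit. A back-of-envelope sketch (my own restatement, assuming an 18-month Moore's-law doubling period plus the 2x budget increase and 4x custom-silicon factor claimed above):

```python
import math

# Baseline: DES Challenge II, 1998 -- ~2^55 work in 72 hours
# for ~$250,000 of semi-custom hardware.
baseline_log2 = 55

years = 7                                     # 1998 -> 2005
doublings = years / 1.5                       # 18-month doubling (assumption)
budget_factor = math.log2(500_000 / 250_000)  # doubled budget
custom_factor = math.log2(4)                  # claimed fully-custom ASIC gain

achievable_log2 = baseline_log2 + doublings + budget_factor + custom_factor
print(f"~2^{achievable_log2:.1f} work in 72 hours")
```

Under these assumptions the result lands near 2^63 rather than 2^70; the gap of roughly 2^7 is exactly the "give or take a couple orders of magnitude" slack claimed in the thread, which is why the two sides can read the same numbers so differently.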
Re: SHA1 broken?
- Original Message - From: James A. Donald [EMAIL PROTECTED] Subject: Re: SHA1 broken? 2^69 is damn near unbreakable. I believe you are incorrect in this statement. It is a matter of public record that RSA Security's DES Challenge II was broken in 72 hours by $250,000 worth of semi-custom machine; for the sake of solidity let's assume they used 2^55 work to break it. Now moving to a completely custom design, bumping the cost up to $500,000, and moving forward 7 years delivers ~2^70 work in 72 hours (give or take a couple orders of magnitude). This puts the 2^69 work well within the realm of realizable breaks, assuming your attackers are smallish businesses; if your attackers are large businesses with substantial resources, the break can be assumed in minutes if not seconds. 2^69 is completely breakable. Joe
Re: Dell to Add Security Chip to PCs
- Original Message - From: Shawn K. Quinn [EMAIL PROTECTED] Subject: Re: Dell to Add Security Chip to PCs Isn't it possible to emulate the TCPA chip in software, using one's own RSA key, and thus sign whatever you damn well please with it instead of whatever the chip wants to sign? So in reality, as far as remote attestation goes, it's only as secure as the software driver used to talk to the TCPA chip, right?

That issue has been dealt with. They do this by initializing the chip at the production plant and generating the certs there, so making your software TCPA work would actually involve faking out the production facility for some chips. This prevents the re-init that I think I saw mentioned a few messages ago (unless there's some re-signing process within the chip to allow back-registering - entirely possible, but unlikely). It gets worse from there, because the TCPA chip actually verifies the operating system on load, and then the OS verifies the drivers: a solid chain of verification. Honestly, Kaminsky has the correct idea about how to get into the chip and break the security - one small unchecked buffer and all the security disappears forever. Joe Trust Laboratories Changing Software Development http://www.trustlaboratories.com
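The "chain of verification" described here is, in TCPA/TPM terms, a chain of measurements: each stage hashes the next component before handing off control, folding the result into a platform configuration register (PCR). A minimal sketch of the extend operation, using SHA-1 as the spec's hash; the boot stages named below are hypothetical:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new PCR = SHA1(old PCR || SHA1(component))."""
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

# Hypothetical boot chain: firmware measures the loader, the loader
# measures the OS, the OS measures its drivers.
pcr = bytes(20)  # PCR starts at all zeros
for component in (b"boot loader", b"operating system", b"driver set"):
    pcr = extend(pcr, component)

# A single tampered stage changes the final PCR value, so remote
# attestation over the PCR detects it -- unless, as noted above, an
# unchecked buffer lets code run *after* it has been measured.
tampered = bytes(20)
for component in (b"boot loader", b"patched OS", b"driver set"):
    tampered = extend(tampered, component)
assert pcr != tampered
```

This is why the unchecked-buffer attack is the interesting one: it subverts a stage after measurement, leaving the PCR chain looking clean.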
Re: Mixmaster is dead, long live wardriving
- Original Message - From: Major Variola (ret) [EMAIL PROTECTED] Subject: Mixmaster is dead, long live wardriving At 07:47 PM 12/9/04 -0800, Joseph Ashwood wrote: If the Klan doesn't have a right to wear pillowcases what makes you think mixmaster will survive? Well, besides the misinterpretation of the ruling, which I will ignore, what makes you think MixMaster isn't already dead? OK, substitute wardriving email injection (when wardriving is otherwise legal) for Mixmastering, albeit the former is less secure since the injection lat/long is known. And you need to use a disposable WiFi card, or at least one with a mutable MAC.

Wardriving is also basically dead. Sure, there are a handful of people who do it, but the number is so small as to be irrelevant. Checking the logs for my network (which does run WEP, so the number of attacks may be reduced compared with an unprotected one): in the last 2 years someone other than those authorized has attempted to connect about 1000 times; of those, only 4 made repeated attempts, and 2 succeeded and hit the outside of the IPSec server (I run WEP as a courtesy to the rest of the connection attempts). That means that in the last 2 years there have been at most 4 attempts at wardriving my network, and I live in a population-dense part of San Jose. Glancing at the wireless networks visible from my computer, I currently see 6, all using at least WEP (earlier there were 7, still all encrypted). I regularly drive down through Los Angeles; when I have stopped for gas or food and checked, I rarely see an unprotected network. The WEP message has gotten out, and the higher-security versions are getting the message out as well. Now all it will take is a small court ruling that you are responsible for whatever comes out of your network, and the available wardriving targets will quickly drop to almost 0. Wardriving is either dead or dying. Or consider a Napster-level popular app which includes mixing or onion routing. 
Now we're back to the MixMaster argument. MixMaster was meant to be a Napster-level popular app for emailing, but people just don't care about anonymity. Such an app would need to have a separate primary purpose. The problem with this is that, as we've seen with Freenet, the extra security layering can actually undermine the usability, leading to a functional collapse. If a proper medium can be struck, such an application could become popular; I don't expect this to happen any time soon. Joe
Re: punkly current events
- Original Message - From: Major Variola (ret) [EMAIL PROTECTED] Subject: punkly current events If the Klan doesn't have a right to wear pillowcases what makes you think mixmaster will survive? Well, besides the misinterpretation of the ruling, which I will ignore, what makes you think MixMaster isn't already dead?

MixMaster is only being used by a small percentage of individuals. Those individuals like to claim that everyone should send everything anonymously, when in truth communication cannot happen with anonymity, and trust cannot be built anonymously. This leaves MixMaster as useful only to a small percentage of normal people, and to those using it to avoid being identified as they communicate with other known individuals. The result is rather the opposite of what MixMaster is supposed to create: a small group to investigate for any actions which are illegal, or deemed worth investigating. In fact, it is arguable that for a new face it is probably easier to get away with the actions in question by sending the information in the clear to their compatriots than by using MixMaster, simply because being part of the group using MixMaster immediately flags them as potential problems. In short, except for those few people who have some use for MixMaster, MixMaster was stillborn. I'm not arguing whether such a situation is the correct way for things to have happened, but that is the way things happened. Joe
Re: A National ID: AAMVA's Unique ID
- Original Message - From: John Gilmore [EMAIL PROTECTED] [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Thursday, June 17, 2004 10:31 AM Subject: Re: A National ID: AAMVA's Unique ID The solution then is obvious: don't have a big central database. Instead use a distributed database. Our favorite civil servants, the Departments of Motor Vehicles, are about to do exactly this to us. They call it Unique ID and their credo is: One person, one license, one record. They swear that it isn't national ID, because national ID is disfavored by the public. But it's the same thing in distributed-computing clothes.

I think you misunderstood my point. My point was that it is actually _easier_, _cheaper_, and more _secure_ to eliminate all the silos. There is no reason for the various silos, and there is less reason to tie them together. My entire point was to put my entire record on my card. This allows faster look-up (O(1) time versus O(lg(n))), greater security (I control access to my record), lower cost (the cards have to be bought anyway), greater ease (I've already done most of the work on defining them), and easier administration (no one has to care about duplication). This sure smells to me like national ID. I think they are drawing the line a bit finer than either of us would like. They don't call it a national ID because a national ID would be run by the federal government; being run instead by state governments, it is a state ID, linked nationally. As I said in the prior message, I disagree with any efforts to create forced ID. This, like the MATRIX program, is the brainchild of the federal Department of inJustice. But those wolves are in the sheepskins of state DMV administrators, who are doing the grassroots politics and the actual administration. It is all coordinated in periodic meetings by AAMVA, the American Association of Motor Vehicle Administrators (http://aamva.org/). 
Draft bills to join the Unique ID Compact, the legally binding agreement among the states to do this, are already being circulated in the state legislatures by the heads of state DMVs. The idea is to sneak them past the public, and past the state legislators, before there's any serious public debate on the topic. They have lots of documents about exactly what they're up to. See http://aamva.org/IDSecurity/. Unfortunately for us, the real documents are only available to AAMVA members; the affected public is not invited. Robyn Wagner and I have tried to join AAMVA numerous times, as freetotravel.org. We think that we have something to say about the imposition of Unique ID on an unsuspecting public. They have rejected our application every time -- does this remind you of the Hollywood copy-prevention standards committees? Here is their recent rejection letter: Thank you for submitting an application for associate membership in AAMVA. Unfortunately, the application was denied again. The Board is not clear as to how FreeToTravel will further enhance AAMVA's mission and service to our membership. We will be crediting your American Express for the full amount charged. Please feel free to contact Linda Lewis at (703) 522-4200 if you would like to discuss this further. Dianne Dianne E. Graham Director, Member and Conference Services AAMVA 4301 Wilson Boulevard, Suite 400 Arlington, VA 22203 T: (703) 522-4200 | F: (703) 908-5868 www.aamva.org http://www.aamva.org/ At the same time, they let in a bunch of vendors of high security ID cards as associate members. Well then create a High-Security ID card company, build it on the technology I've talked about. 
It's fairly simple: file the paperwork to create an LLC with you and Robyn; the LLC acquires a website, which can be co-located at your current office location; the website talks about my technology, how it allows the unique and secure identification of every individual, blah, blah, blah; get a credit card issued in the correct name. They'll almost certainly let you in - you'll look and smell like a valid alternative (without lying, because you could certainly offer the technology) - and if you really want to make it look good, I'm even willing to work with you on filing a patent, something they'd almost certainly appreciate. AAMVA, the 'guardians' of our right to travel and of our identity records, doesn't see how listening to citizens concerned with the erosion of exactly those rights and records would enhance their mission and service. Of course it won't; their mission and service is to offer the strongest identity link possible in the ID cards issued nationwide, so the citizens' course of action has to be to govern the states issuing these identification papers. However, if you offer them technology that actually makes their mission and service cheaper, more effective, and, as a side benefit, better for their voters, they may well listen. Besides, if you can't beat them (you
Re: [cdr] Re: Digital cash and campaign finance reform
- Original Message - From: Tim May [EMAIL PROTECTED] Subject: [cdr] Re: Digital cash and campaign finance reform There are too many loopholes to close. I think that's the smartest thing any one of us has said on this topic. Joe
Re: Batter Up! (Was Re: Ex-Intel VP Fights for Detainee)
First let me say that I am anti-war. Maybe it is just because I've changed from being purely a tech player to now owning Trust Laboratories, and so primarily being a businessman, but I see things slightly differently from the WSJ. http://online.wsj.com/article_print/0,,SB1049616100,00.html excerpts: Of course, the largest benefit -- a more stable Mideast -- is huge but unquantifiable. A second plus, lower oil prices, is somewhat more measurable. (Oil prices fell again yesterday on the prospect of victory.) The premium on 11.5 million barrels imported every day by the U.S. is a transfer from us to producing countries. Postwar, with Iraqi production back in the pipeline and calmer markets, oil prices will fall even further. If they drop to an average in the low $20s, the U.S. economy will get a boost of $55 billion to $60 billion a year.

I don't think the stable Mideast is the largest benefit. The largest benefit comes from having a US-friendly government in the Mideast. This has several benefits, the most important of which are that it provides a stable center of power for the US in the Mideast, and provides the US with priority oil. The center of power is not currently important, but with the growing disruption that the Israel-Palestine problem presents, I have a strong suspicion that military force in the Middle East will become increasingly necessary. The foundation for this is rather simple to find; it was bin Laden himself who said something like "until the people of Palestine know safety, the US will not." To counter this we need only have a friendly country in the Middle East where we can temporarily position our armaments; this will vastly reduce the cost of troop movement the next time our presence is to be felt. The priority oil is not a current problem, but with the world oil supply quickly becoming depleted (some estimates put us at only 30 years left), the availability of a consistent oil supply can be economically justified rather easily. 
Not that this will make much difference for your average person, but the military purposes of oil are many. Militarily, these end benefits are enormous. The interim benefits to the general populace are substantial as well, but I don't feel they are as impactful. Already the general populace is beginning to see hybrid cars, fuel-cell cars are only a few years away, and at least GM and BMW are experimenting with internal-combustion hydrogen engines (a few years ago BMW had running experimental 7-series cars using internal-combustion hydrogen that travelled parts of Europe). With these advances, the general usage of oil is likely to diminish over the next couple of decades, spurred on by the vastly increasing cost of purchasing gasoline. There will of course be the necessary, temporary dip in oil pricing as the Iraqi oil fields are pushed into higher production. Over time, though, this dip will mysteriously disappear, blamed on market forces if anyone actually notices.

But perhaps the best way to look at the economics of the war has been suggested by John Cogan. The Hoover Institution economist says the war is an investment. The proper question then becomes what resources we are willing to invest to achieve peace and stability, and a diminished threat from terrorism and terrorist-supporting states. At 1% of GDP, the war looks like a bargain. I very much agree with John Cogan: this war is an investment. I disagree, though, with the WSJ's conclusion that it is an investment in the stability of the Middle East and the end of Iraqi containment. Instead I believe it is an investment in US stability and military ability. As such it will pay off enormously, but I believe the costs to be far in excess of helping the Middle East address the Israel problem in a diplomatic way, which would cost less, undermine much of the terrorist action, make the US look like more of a beneficial monopoly, and certainly put us in better favor throughout the Middle East. 
Before anyone feels free to jump on me about this, I would like to remind everyone that I am anti-war. I believe that war should only be used in situations where it is truly unavoidable. Joe Trust Laboratories http://www.trustlaboratories.com
Digital Certificates
I was just wondering if anyone has a digital certificate issuing system I could get a few certificates issued from. Trust is not an issue since these are development-only certs, and won't be used for anything except testing purposes. The development is for an open source PKCS #11 test suite. Joe Trust Laboratories http://www.trustlaboratories.com
Re: Re: Digital Certificates
- Original Message - From: Eric Murray [EMAIL PROTECTED] Subject: CDR: Re: Digital Certificates On Tue, Feb 18, 2003 at 01:22:21PM -0800, Joseph Ashwood wrote: I was just wondering if anyone has a digital certificate issuing system I could get a few certificates issued from. Trust is not an issue since these are development-only certs, and won't be used for anything except testing purposes. Whenever I need some test certs I use openssl to generate them. (Or an ingrian box, but not many people have one of those.) There's instructions in the openssl docs. For test purposes you don't need openca, its only needed if you want to issue a lot of certs automagically. Thank you for the input. I think I've got that working well enough to do it. The development is for an open source PKCS #11 test suite. Let me know when its done, I could use it. The next hurdle I have to overcome is getting a reference PKCS #11 module, although this shouldn't take too long if I can ever get the Gnu PKCS #11 to compile. I'll make sure I tell you when it's done. Joe
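Eric Murray's suggestion of using openssl directly works well for throwaway development certs. A sketch of the approach driven from Python (my own wrapper around the stock `openssl req` command; fine for testing, worthless for real trust), guarded so it only runs where the CLI is installed:

```python
import os
import shutil
import subprocess
import tempfile

def make_test_cert(directory: str) -> tuple[str, str]:
    """Generate a throwaway self-signed key/cert pair with the openssl CLI."""
    key = os.path.join(directory, "test_key.pem")
    cert = os.path.join(directory, "test_cert.pem")
    subprocess.run(
        ["openssl", "req", "-x509", "-newkey", "rsa:2048",
         "-keyout", key, "-out", cert, "-days", "30",
         "-nodes", "-subj", "/CN=dev-test-only"],
        check=True, capture_output=True)
    return key, cert

if shutil.which("openssl"):  # only attempt this where openssl is installed
    with tempfile.TemporaryDirectory() as d:
        key_path, cert_path = make_test_cert(d)
        with open(cert_path) as f:
            print(f.read().splitlines()[0])  # -----BEGIN CERTIFICATE-----
```

For a PKCS #11 test suite this keeps trust entirely out of the picture: every cert is self-signed, short-lived, and generated on demand.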
Re: Re: Shuttle Diplomacy
- Original Message - From: Thomas Shaddack [EMAIL PROTECTED] To: Harmon Seaver [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Saturday, February 01, 2003 4:42 PM Subject: CDR: Re: Shuttle Diplomacy [snip conspiracy theory] Especially in this case, I'd bet my shoes on Murphy; Columbia was an old lady that had her problems even before the launch itself. I'd bet on something stupid, like loosened tiles or computer malfunction (though more likely the tiles, as the computers are backed up). Remember Challenger, where the fault was a stupid O-ring. One of the current theories floating around has to do with a piece of debris that flew off the booster rocket during take-off and collided with the left wing (where the problems began). The video of the take-off was reviewed in great detail and the debris strike was initially judged innocent, but considering the proximity of the problems to where the debris hit, there appears to be at least something worth investigating. Joe Trust Laboratories http://www.trustlaboratories.com
Re: Clarification of challenge to Joseph Ashwood:
Sorry, I didn't bother reading the first message, and I won't bother reading any of the messages further in this thread either. Kong lacks critical functionality and is fatally insecure for a wide variety of uses; in short, it is beyond worthless, ranging into being a substantial risk to the security of anyone or any group that makes use of it. - Original Message - From: James A. Donald [EMAIL PROTECTED] Subject: Clarification of challenge to Joseph Ashwood: Joseph Ashwood: So it's going to be broken by design. These are critical errors that will eliminate any semblance of security in your program. James A. Donald: I challenge you to fool my canonicalization algorithm by modifying a message so as to change the apparent meaning while preserving the signature, or by producing a message that verifies as signed by me, while in fact being a meaningfully different message from any that was genuinely signed by me.

That's easy; remember that you didn't limit the challenge to text files. It should be a fairly simple matter to create a JPEG file containing a number of 0xA0 and 0x20 bytes; by simply swapping the values of those bytes one can create a file that will pass your verification, but will obviously be corrupt. Your canonicalization is clearly and fatally flawed. Three quarters of the user hostility of other programs comes from their attempt to support true names, and the rest comes from the cleartext signature problem. Kong fixes both problems. Actually, Kong pretends the first problem doesn't exist, and corrects the second one in such a way as to make it fatally broken. Joseph Ashwood must produce a message that is meaningfully different from any of the numerous messages that I have sent to cypherpunks, but which verifies as sent by the same person who sent past messages. 
Thus for Kong to be broken one must store a past message from that prolific poster supposedly called James Donald in the Kong database, bring up a new message hacked up by Joseph Ashwood, and have Kong display it in the signature verification screen. To verify that, I would of course have to download and install Kong, something I will never do; I don't install software I already know is broken and that fails to address even the most basic of problems. Joe
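The attack sketched in this exchange can be made concrete. Assuming, as the post describes, a canonicalization that treats 0xA0 (non-breaking space) and 0x20 (space) as equivalent before hashing, any binary file can be altered without changing its canonical form, so a signature over the canonical digest still verifies. This is my reading of the post, not Kong's actual code:

```python
import hashlib

def canonicalize(data: bytes) -> bytes:
    # Hypothetical rule per the post: non-breaking space (0xA0)
    # and ordinary space (0x20) are treated as the same character.
    return data.replace(b"\xa0", b"\x20")

def canonical_digest(data: bytes) -> bytes:
    return hashlib.sha1(canonicalize(data)).digest()

# Stand-in for a binary file (e.g. a JPEG) that contains 0x20 bytes.
original = b"\xff\xd8\xff\xe0" + b"\x20\x41\x20\x42" + b"\xff\xd9"
# Corrupt it by flipping the spaces to 0xA0 -- different bytes on disk...
tampered = original.replace(b"\x20", b"\xa0")

assert original != tampered
# ...but an identical canonical form, so a signature computed over the
# canonical digest verifies for both the intact and the corrupted file.
assert canonical_digest(original) == canonical_digest(tampered)
```

For text the equivalence is arguably harmless; for binary formats, where 0x20 and 0xA0 are meaningful data, it breaks integrity outright, which is the substance of the objection.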
Re: What email encryption is actually in use?
- Original Message - From: James A. Donald [EMAIL PROTECTED] What email encryption is actually in use? In my experience PGP is the most used. When I get a PGP encrypted message, I usually cannot read it -- it is sent to my dud key or something somehow goes wrong. Then you are obviously using PGP wrong. When you chose your 768-bit key in 1996 (I checked the key servers), you should have considered the actual lifetime that the key was going to have. In 1996 a 768-bit key was considered borderline secure, and it was just about time to retire them. Instead of looking at this and setting an expiration date on your key, you chose to make it live forever. Your other alternative would have been to revoke that key before you retired it. You made critical mistakes, and you blame them on PGP.

As to its dependability: I've seen two problems when someone could not decrypt a PGP message: 1) they shouldn't have access to it (someone else's key, forgotten passphrase, etc.), or 2) they didn't have any clue how to use PGP, and these people generally have trouble turning on their computer. On rare occasions there will be issues with versions, but in my experience these are exceptionally rare. Kong encrypted messages usually work, because there is only one version of the program, and key management is damn near non-existent by design, since my experience as key manager for various companies shows that in practice keys just do not get managed. After I release the next upgrade, doubtless fewer messages will work. Maybe you should have considered designing the system so that it could be upgraded. A properly designed system can detect when an incompatible version was used for encryption, and can inform the user of the problem. Additionally, I think there is one core reason why Kong decryptions always work: no one uses it. Without key management it is basically worthless. 
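The version-detection point is a simple framing-design matter. A sketch of version-tagged message framing (my illustration, not any real PGP or Kong wire format): the first byte names the format version, so an incompatible decryptor can fail with a clear error instead of producing garbage:

```python
# Hypothetical framing: one leading version byte before the payload.
FORMAT_VERSION = 2
SUPPORTED = {1, 2}

def wrap(payload: bytes) -> bytes:
    """Prefix the payload with the current format version."""
    return bytes([FORMAT_VERSION]) + payload

def unwrap(message: bytes) -> bytes:
    """Strip the version byte, refusing versions we don't understand."""
    if not message:
        raise ValueError("empty message")
    version = message[0]
    if version not in SUPPORTED:
        raise ValueError(f"message uses unsupported format version {version}; "
                         "please upgrade")
    return message[1:]

assert unwrap(wrap(b"hello")) == b"hello"
try:
    unwrap(bytes([9]) + b"future format")
except ValueError as e:
    print(e)  # the user is told why, rather than seeing a silent failure
```

Both OpenPGP and S/MIME carry version fields in their packet formats for exactly this reason; the complaint here is about tools that ignore them rather than formats that lack them.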
Fortunately, because there is no userbase you can change it dramatically for the next release; maybe this time it'll be worth using. The most widely deployed encryption is of course that which is in Outlook -- which we now know to be broken, since impersonation is trivial, making it fortunate that seemingly no one uses it. If you did some research, you'd find that it is called S/MIME; it is a standard - a broken standard, but a standard (admittedly Outlook implemented it poorly, and that is a major source of the breakage). The only non-standard encryption Outlook uses is in the file storage, which has nothing to do with email. Repeating the question, so that it does not get lost in the rant: To the extent that real people are using digitally signed and/or encrypted messages for real purposes, what is the dominant technology? Or is use so sporadic that no network effect is functioning, so nothing can be said to be dominant? The two big players are PGP and S/MIME. The chief barrier to use of Outlook's email encryption, aside from the fact that it is broken, is the intolerable cost and inconvenience of certificate management. Actually, the chief barrier is psychological: people don't feel they should side with the criminals by using encryption. Certificate management is actually quite easy and cheap. It is the mistakes of people who lack any understanding of how the system actually works that make it expensive and inconvenient. The same applies to PGP. We have tools to construct any certificates we damn well please. The same applies everywhere; in fact in your beloved Kong the situation is worse, because the identities can't be managed. Though the root signatures will not be recognized unless the user chooses to put them in. That's right, blame your own inadequacies on everyone else; that seems to be the standard American way now. 
Is it practical for a particular group, for example a corporation or a conspiracy, to whip up its own damned root certificate, without buggering around with Verisign? Of course it is; in fact there are about 140 root certificates that Internet Explorer recognises, and the majority of these have absolutely nothing to do with Verisign. Getting a new one into the systems is a bit more problematic. (Of course fixing Microsoft's design errors is never useful, since they will rebreak their products in new ways that are more ingenious and harder to fix.) And this has nothing whatsoever to do with root certificates. I intended to sign this using Network Associates' command line PGP, only to discover that pgp -sa file produced unintelligible gibberish that could only be made sense of by PGP, so that no one would be able to read it without first checking my signature. Which would of course demonstrate once more that you have no clue how to use PGP. It also demonstrates what is probably your primary source of "I can't decrypt it": you are using a rather old version of PGP. While the rest of the world has updated PGP to try to remain secure, you have managed to forgo all semblance of security, in favor of
Re: Re: Startups, Bubbles, and Unemployment
- Original Message - From: Eric Cordian [EMAIL PROTECTED] Although I appear to have been the final catalyst for the discussion of unemployment, I agree with pretty much everything Eric Cordian said. In fact my current lack of work has little to do with lack of employment: I am officially employed as a Substitute Teacher in the Gilroy Unified School District (Gilroy, CA), but due to summer I am lacking in work. If you simply need a job and have a bachelor's degree (or higher), sub teaching pays; barely enough to live on, but it does pay. Additionally I have several factors working in my favor: I do consulting when available, I have some investments from when I was more enjoyably employed, but most importantly I'm using the spare time I do have (sub teaching doesn't take much actual work) to actually do #4: "4. Don't expect anyone to pay you to sit around and figure out what the Next Big Thing is going to be." At which point I will attempt to get funding (if needed), work to make the world a better place, and most importantly make a great deal of money for the people funding the product/product line. A quick word of warning: in California it takes on the order of 6 months to get the piece of paper that says you can sub teach, so be prepared for a bit of a wait; find a job at McDonald's or whatever for a while. Joe
Re: Re: Overcoming the potential downside of TCPA
- Original Message - From: Ben Laurie [EMAIL PROTECTED]

> > The important part for this is that TCPA has no key until it has an owner, and the owner can wipe the TCPA at any time. From what I can tell this was designed for resale of components, but is perfectly suitable as a point of attack.
>
> If this is true, I'm really happy about it, and I agree it would allow virtualisation. I'm pretty sure it won't be for Palladium, but I don't know about TCPA - certainly it fits the bill for what TCPA is supposed to do.

I certainly don't expect many people to believe me simply because I say it is so. Instead I'll supply a link to the authority on TCPA, the 1.1b specification; it is available at http://www.trustedcomputing.org/docs/main%20v1_1b.pdf . There are other documents; unfortunately the main spec gives substantial leeway, and I haven't had time to read the others (I haven't fully digested the main spec yet either). I encourage everyone who wants to decide for themselves to read the spec, all 332 pages of it. If you reach different conclusions than I have, feel free to comment; I'm sure there are many people on these lists who would be interested in justification for either position. Personally, I believe I've processed enough of the spec to state that TCPA is a tool, and like any tool it has both positive and negative aspects. Provided the requirement to be able to turn it off remains (and, for my preference, they should add a requirement that the motherboard continue functioning even when the TCPA module(s) is/are physically removed from the board), the current spec does seem to bend toward being as advertised: primarily a tool for the user. Whether this will remain in the version 2.0 that is in the works I cannot say, as I have no access to it, although if someone is listening with an NDA nearby, I'd be more than happy to review it. Joe
Re: Overcoming the potential downside of TCPA
- Original Message - From: Ben Laurie [EMAIL PROTECTED]

> Joseph Ashwood wrote:
> > There is nothing stopping a virtualized version being created.
>
> What prevents this from being useful is the lack of an appropriate certificate for the private key in the TPM.

Actually that does nothing to stop it. Because of the construction of TCPA, the private keys are registered _after_ the owner receives the computer; this is the window of opportunity for the attack as well. The worst case for cost of this is to purchase an additional motherboard (IIRC Fry's has them as low as $50), giving the ability to present proof of purchase. The virtual private key is then created, and registered using the credentials borrowed from the second motherboard. Since TCPA doesn't allow for direct remote queries against the hardware, the virtual system will actually have first shot at the incoming data. That's the worst case. The expected case: you pay a small registration fee claiming that you accidentally wiped your TCPA. The best case: you claim you accidentally wiped your TCPA, they charge you nothing to remove the record of your old TCPA, and replace it with your new (virtualized) TCPA. So at worst this will cost $50. Once you've got a virtual setup, that virtual setup (with all its associated purchased rights) can be replicated across an unlimited number of computers. The important part for this is that TCPA has no key until it has an owner, and the owner can wipe the TCPA at any time. From what I can tell this was designed for resale of components, but is perfectly suitable as a point of attack. Joe
Overcoming the potential downside of TCPA
Lately on both of these lists there has been quite some discussion about TCPA and Palladium: the good, the bad, the ugly, and the anonymous. :) However there is something that is very much worth noting, at least about TCPA. There is nothing stopping a virtualized version being created. There is nothing that stops, say, VMWare from synthesizing a system view that includes a virtual TCPA component. This makes it possible to (if desired) remove all cryptographic protection. Of course such software would need to be sold as a development tool, but we all know what would happen. Tools like VMWare have been developed by others, and as I recall didn't take all that long to do. As such they can be anonymously distributed, and can almost certainly be stored entirely on a boot CD, using the floppy drive to store the keys (although floppy drives are no longer a cool thing to have in a system). Boot from the CD, and it runs a small kernel that virtualizes and allows debugging of the TPM/TSS, which allows the viewing, copying and replacement of private keys on demand. Of course this is likely to quickly become illegal, or may already be, but that doesn't stop the possibility of creating such a system. For details on how to create this virtualized TCPA please refer to the TCPA spec. Joe
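The point above can be sketched in a few lines of code. This is a toy model, not the real TCPA interface; every class and method name here is invented for illustration. The only point is that a software TPM has no physical barrier keeping its "protected" keys away from the host that runs it:

```python
import os

class VirtualTPM:
    """Toy stand-in for a hardware TPM, as a virtualizer might present it.
    All names are hypothetical; the real TCPA 1.1b interface is far larger."""

    def __init__(self):
        self.keys = {}  # handle -> private key bytes, fully visible to the host

    def create_key(self, handle):
        # A real TPM generates the key internally and never releases it;
        # a virtual TPM has no such physical barrier.
        self.keys[handle] = os.urandom(32)
        return handle

    def export_key(self, handle):
        # The operation the hardware is supposed to make impossible:
        # viewing/copying the private key on demand.
        return self.keys[handle]

vtpm = VirtualTPM()
vtpm.create_key("identity")
assert len(vtpm.export_key("identity")) == 32
```

Any guest software talking to this object sees a normal-looking TPM, while the host can read, copy, or replace every key.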
Re: Is TCPA broken?
I need to correct myself. - Original Message - From: Joseph Ashwood [EMAIL PROTECTED]

> Suspiciously absent though is the requirement for symmetric encryption (page 4 is easiest to see this). This presents a potential security issue, and certainly a barrier to its use for non-authentication/authorization purposes. This is by far the biggest potential weak point of the system. No server designed to handle the quantity of connections necessary to do this will have the ability to decrypt/sign/encrypt/verify enough data for the purely theoretical universal DRM application.

I need to correct this: DES and 3DES are requirements, AES is optional. This functionality appears to be in the TSS. However I can find very few references to the usage, and all of those seem to be thoroughly wrapped in numerous layers of SHOULD and MAY. Since this is solely the realm of the TSS (which had its command removed July 12, 2001, making this certainly incomplete), it is only accessible through a few commands (I won't bother with VerifySignature). However, looking at TSS_Bind, it says explicitly on page 157 "To bind data that is larger than the RSA public key modulus it is the responsibility of the caller to perform the blocking", indicating that the expected implementation is RSA only. The alternative is wrapping the key, but that is clearly targeted at using RSA to encrypt a key. The Identity commands appear to use a symmetric key, but deal strictly with TPM_IDENTITY_CREDENTIAL. Regardless, the TSS is a software entity (although it may be assisted by hardware); this in and of itself presents some interesting side-effects on security. Joe
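The caller-side blocking that page 157 leaves to the application can be sketched as follows. The figures are illustrative assumptions, not values mandated by the spec: a 256-byte modulus corresponds to 2048-bit RSA, and 42 bytes is the payload overhead of OAEP with SHA-1.

```python
def block_for_bind(data: bytes, modulus_bytes: int = 256, overhead: int = 42) -> list:
    """Split data into chunks small enough for one RSA encryption each.
    modulus_bytes=256 assumes 2048-bit RSA; overhead=42 assumes
    OAEP-with-SHA-1 padding. The TCPA spec leaves the exact scheme
    to the caller, so this is only one plausible sketch."""
    chunk = modulus_bytes - overhead  # usable bytes per block: 214
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]

blocks = block_for_bind(b"x" * 1000)
assert all(len(b) <= 214 for b in blocks)
assert b"".join(blocks) == b"x" * 1000
```

Each chunk would then be passed to a single TSS_Bind call; the chunk arithmetic is the whole of what the spec delegates to the caller.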
Is TCPA broken?
- Original Message - From: Mike Rosing [EMAIL PROTECTED]

> Are you now admitting TCPA is broken?

I freely admit that I haven't made it completely through the TCPA specification. However it seems to be, at least in effect although not exactly, a motherboard-bound smartcard. Because it is bound to the motherboard (instead of the user) it can be used for various things, but at heart it is a smartcard. Also, because it supports the storage and use of a number of private RSA keys (no other type is supported), it provides some interesting possibilities. Because of this I believe that there is a core that is fundamentally not broken. It is the extensions to this concept that pose potential breakage. In fact, looking at page 151 of the TCPA 1.1b spec, it clearly states (typos are mine) that "the OS can be attacked by a second OS replacing both the SEALED-block encryption key, and the user database itself". There are measures taken to make such an attack cryptographically hard, but it requires the OS to actually do something. Suspiciously absent though is the requirement for symmetric encryption (page 4 is the easiest place to see this). This presents a potential security issue, and certainly a barrier to its use for non-authentication/authorization purposes. This is by far the biggest potential weak point of the system. No server designed to handle the quantity of connections necessary to do this will have the ability to decrypt/sign/encrypt/verify enough data for the purely theoretical universal DRM application. The second substantial concern is that the hardware is limited to 2048-bit private keys; additionally, it is bound to SHA-1. Currently these are both sufficient for security, but in the last year we have seen realistic claims that 1500-bit RSA may be subject to viable attack (or alternately may not, depending on who you believe).
While attacks on RSA tend to be spread a fair distance apart, this nevertheless puts 2048-bit RSA fairly close to the limit of security; it would be much preferable, from a security standpoint, to support 4096-bit RSA. SHA-1 is also currently near its limit. SHA-1 offers 2^80 security, a value that, it can be argued, may be too small for long-term security. For the time being TCPA seems to be unbroken: 2048-bit RSA is sufficient, and SHA-1 is used as a MAC for important points. For the future, though, I believe these choices may prove to be a weak point in the system; for those who would like to attack the system, these are the prime targets. The secondary target would be forcing debugging to go unaddressed by the OS, which, since there is no provision for smartcard-style execution (except in extremely small quantities, just as in a smartcard), would reveal very nearly everything (including the data desired). Joe
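The 2^80 figure quoted above is just the generic birthday bound on collision-finding for a 160-bit hash, which a couple of lines make explicit:

```python
# Collision resistance of an n-bit hash is bounded by the birthday
# attack at roughly 2^(n/2) operations, independent of the hash design.
def collision_security_bits(digest_bits: int) -> int:
    return digest_bits // 2

assert collision_security_bits(160) == 80   # SHA-1, the figure cited above
assert collision_security_bits(256) == 128  # a 256-bit hash, by comparison
```

This is why moving to a longer digest is the standard remedy: the bound scales with half the output length, no matter how strong the compression function is.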
Re: Seth on TCPA at Defcon/Usenix
- Original Message - From: AARG! Anonymous [EMAIL PROTECTED] [brief description of Document Revocation List]

> Seth's scheme doesn't rely on TCPA/Palladium.

Actually it does, in order to make it valuable. Without a hardware assist, the attack works like this:
1. Hack your software (which is in many ways almost trivial) to reveal its private key.
2. Watch the protocol.
3. Decrypt the protocol.
4. Grab the decryption key.
5. Use the decryption key.
Problem solved.

With hardware assist, trusted software, and a trusted execution environment it (doesn't) work like this:
1. Hack your software. DOH! The software won't run; revert back to the stored software.
2. Hack the hardware (extremely difficult).
3. Virtualize the hardware at a second layer, using the grabbed private key.
4. Hack the software.
5. Watch the protocol.
6. Decrypt the protocol.
7. Grab the decryption key.
8. Use the decryption key.
Once the file is released, the server revokes all trust in your client, effectively removing all files from your computer that you have not decrypted yet. Problem solved? Only for valuable files. Of course if you could find some way to disguise which source was hacked, things change.

Now about the claim that MS Word would not have this feature: it almost certainly would. The reason being that business customers are of particular interest to MS, since they supply a large portion of the money for Word (and everything else). Businesses would want to be able to configure their network in such a way that critical business information couldn't be leaked to the outside world. Of course this removes the advertising path of conveniently leaking carefully constructed documents to the world, but for many companies that is a trivial loss. Joe
Re: Re: Challenge to TCPA/Palladium detractors
- Original Message - From: Eugen Leitl [EMAIL PROTECTED]

> Can anyone shed some light on this?

Because of the sophistication of modern processors there are too many variables to be optimized easily, and doing so can be extremely costly. Because of this diversity, many compilers use semi-random exploration. Because of this random exploration the compiler will typically compile the same code into a different executable. With small programs it is likely to find the same end-point, because of the simplicity. The larger the program, the more points for optimization, so for something as large as, say, PGP you are unlikely to find the same point twice; however, the performance is likely to be eerily similar. There are bound to be exceptions, and sometimes the randomness in the exploration appears non-existent, but I've been told that some versions of the DEC GEM compiler used semi-randomness a surprising amount, because it was a very fast way to narrow down to an approximate best (hence the extremely fast compilation and execution). It is likely that MS VC uses such techniques. Oddly, extremely high-level languages don't have as many issues: each command spans so many instructions that a pretuned set of command instructions will often provide very close to optimal performance. I've been told that gcc does not apparently use randomness to any significant degree, but I admit I have not examined the source code to confirm or deny this. Joe
Re: Closed source more secure than open source
- Original Message - From: Anonymous [EMAIL PROTECTED]

> Ross Anderson's paper at http://www.ftp.cl.cam.ac.uk/ftp/users/rja14/toulouse.pdf has been mostly discussed for what it says about the TCPA. But the first part of the paper is equally interesting.

Ross Anderson's approximate statements: Closed source: "the system's failure rate has just dropped by a factor of L, just as we would expect." Open source: "bugs remain equally easy to find." Anonymous's statements: "For most programs, source code will be of no benefit to external testers, because they don't know how to program. Therefore the rate at which (external) testers find bugs does not vary by a factor of L between the open and closed source methodologies, as assumed in the model. In fact the rates will be approximately equal. The result is that once a product has gone into beta testing and then into field installations, the rate of finding bugs by authorized testers will be low, decreased by a factor of L, regardless of open or closed source." I agree and disagree with both, due in part to the magnitudes involved. It is certainly true that once beta testing (or some semblance of it) begins there will be users who cannot make use of source code, but what Anonymous fails to realize is that there will also be beta testers who can make use of the source code. Additionally there are certain tendencies in the open and closed source communities that Anonymous and Anderson have not addressed in their models. The most important tendencies are that in closed source, beta testing is generally handed off to a separate division and the original author does little if any testing, while in open source the authors have a much stronger connection with the testing, with the authors' duty extending through the entire testing cycle. These tendencies lead to two very different positions than generally realized.
First, closed source testing, beginning in the late alpha testing stage, is generally done without any assistance from source code, by _anyone_; this significantly hampers the testing. This has led to observed situations where QA engineers sign off on products that don't even function, let alone have close to 0 bugs, with the software engineers believing that because the code was signed off, it must be bug-free. This is a rather substantial problem. To address this problem one must actually correct the number of testers for the ones that are effectively doing nothing. So while L is the extra difficulty in finding bugs without source code, it is magnified by something approximating (testers)/(testers not doing anything). It's worth noting that (testers) > (testers not doing anything), causing the result, K = L*(testers)/(testers not doing anything), to tend towards infinite values. In open source we have very much the opposite situation. The authors are involved in all stages of testing, giving another value. This value is used to adjust L as before, but the quantities involved are substantially different. It must be observed, as was done by Anonymous, that there are testers who have no concept of what source code is, and certainly no idea how to read it; call these harassers. In addition, though, there are also testers who read source code, and even the authors themselves are doing testing; call these coders. So in this case K = L*(harassers)/(harassers+coders), where it's worth noting that K will now tend towards 0. It is also very much the case that different projects have different quantities of testers. In fact as the number of beta testers grows, the MTBD(iscovery) of a bug must not increase, and will almost certainly decrease.
In this case each project must be treated separately, since obviously WindowsXP will have more people testing it (thanks to bug reporting features) than QFighter3 (http://sourceforge.net/projects/qfighter3/ the least active development on sourceforge). This certainly leads to problems in comparison. It is also worth noting that the actual difficulty in locating bugs is probably related to the maximum of (K/testers) and (K^(1/testers)). Meaning that WindowsXP is likely to have a higher ratio of bugs uncovered in a given time period T than QFighter3. However, due to the complexity of the comparisons, QFighter3 is likely to have fewer bugs than WindowsXP, simply because WindowsXP is several orders of magnitude more complex. So while the belief that source code makes bug hunting easier on everyone is certainly not purely the case (Anonymous's observation), it is also not the case that the tasks are equivalent (Anonymous's claim), with the multiplier in closed source approaching infinite, and in open source approaching 0. Additionally the quantity of testers appears to have more of an impact on bug-finding than the question of open or closed source. However, as always, complexity plays an enormous role in the number of bugs available to find, anybody with a few days programming experience
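The multipliers discussed in this exchange can be encoded directly. The numeric inputs below are invented for illustration; only the limiting behavior matters, namely that the closed-source multiplier grows without bound while the open-source multiplier falls toward zero.

```python
# Direct encoding of the two multipliers from the model above.

def k_closed(L, testers, testers_doing_nothing):
    # K = L * (testers) / (testers not doing anything); grows without
    # bound as the idle fraction shrinks relative to the team size.
    return L * testers / testers_doing_nothing

def k_open(L, harassers, coders):
    # K = L * (harassers) / (harassers + coders); tends toward 0 as
    # the source-reading contingent grows.
    return L * harassers / (harassers + coders)

L = 10  # assumed extra difficulty of bug-finding without source
assert k_closed(L, 100, 1) > L    # closed source: K >> L
assert k_open(L, 100, 900) < L    # open source: K << L
```

Plugging in different team compositions shows the asymmetry the text describes: the same base factor L is amplified in one setting and damped in the other.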
Re: Re: maximize best case, worst case, or average case? (TCPA
- Original Message - From: Ryan Lackey [EMAIL PROTECTED]

> I consider DRM systems (even the not-secure, not-mandated versions) evil due to the high likelihood they will be used as technical building blocks upon which to deploy mandated, draconian DRM systems.

The same argument can be applied to just about any tool. A knife has a high likelihood of being used in such a manner that it causes physical damage to an individual (e.g. you cut yourself while slicing your dinner) at some point in its useful lifetime. Do we declare knives evil? A hammer has a high likelihood of at some point in its useful life causing physical damage to both an individual and property. Do we declare hammers evil? DRM is a tool. Tools can be used for good, and tools can be used for evil, but that does not make a tool inherently good or evil. DRM has a place where it is a suitable tool, but one should not declare a tool evil simply because an individual or group uses the tool for purposes that have been declared evil. Joe
Re: Piracy is wrong
Subject: CDR: Piracy is wrong

> This shouldn't have to be said, but apparently it is necessary.

Which is a correct statement, but an incorrect line of thinking. Piracy is an illegitimate use of a designed-in hole in the security: the ability to copy. The right to copy for personal use is well founded, and there are even Supreme Court cases to support it. DRM removes this right, without due representation, and it is thinking like yours that leads down this poorly chosen path. The other, much harsher reality is that DRM cannot work; all it can do is inconvenience legitimate consumers. There is massive evidence of this, and you are free to examine it in any way you choose.

> Piracy - unauthorized copying of copyrighted material - is wrong. It inherently involves lying, cheating and taking unfair advantage of others. Systems like DRM are therefore beneficial when they help to reduce piracy. We should all support them, to the extent that this is their purpose.

> When an artist releases a song or some other creative product to the world, they typically put some conditions on it. These include the expectation that the artist will be paid according to whatever deal they have signed with their label.

Inherent in this deal is the consumer's right to copy for personal use, and to resell their purchased copy, as long as all copies that the consumer has made are destroyed. DRM attempts to revoke this right to personal copying and resale.

> If you want to listen to and enjoy the song, you are obligated to agree to those conditions. If you can't accept the conditions, you shouldn't take the creative work.

And if the artist cannot accept the fundamental rights specifically granted, they should not produce art.

> The artist is under no obligation to release their work. It is like a gift to the world. They are free to put whatever conditions they like on that gift, and you are free to accept them or not.
Last time I checked, the giver is supposed to remove the price tag from the gift before giving it. By a similar argument, everyone should be happy that the WTC flying occurred; after all, they were kind enough not to kill anyone that's still alive. The logic simply doesn't hold.

> If you take the gift, you are agreeing to the conditions. If you then violate the stated conditions, such as by sharing the song with others, you are breaking your agreement. You become a liar and a cheat.

In fact one of the specifically granted rights is the right to share the music with friends and family, so this has nothing to do with being a liar and a cheat; it has to do with exercising not just rights, but rights that have been specifically granted.

> If you take the song without paying for it, you are again receiving this gift without following the conditions that were placed on it as part of the gift being offered. You are taking advantage of the artist's creativity without them receiving the compensation they required.

Because of that specifically granted right, that copies can be made for friends and family, it is also a specifically granted right to accept those copies. So it is merely exercising a specifically granted right. You clearly have not read or understood the implications and complexities of your statements, with regard to either logic or the law.

> This isn't complicated.

Apparently it is too complicated for you.

> It's just basic ethics.

It's just basic rights and the exercising of those rights.

> It's a matter of honesty and trust.

If the record companies were prepared to trust, why do they employ a substantial army of lawyers? Why do they pursue every p2p network? Why are they pushing for DRM? Trust is not a one-way street. The recording labels have demonstrated that they cannot be trusted in any form; what delusion makes you think they can be trusted now?

> When someone makes you an offer and you don't find the terms acceptable, you simply refuse.
Exactly: I refuse to accept a DRM-limited environment which does not allow me full ownership of something I purchased.

> You don't take advantage by taking what they provide and refusing to do your part. That's cheating.

No, that's a fundamental misunderstanding of everything involved; from law to basic logic, you have misunderstood it all. Joe
Re: RE: Harry Potter released unprotected
- Original Message - From: Lucky Green [EMAIL PROTECTED]

> Joseph Ashwood wrote:
> > This looks like just a pilot program. Watch the normal piracy channels though; if Harry Potter shows up stronger than other releases, Macrovision will be around a while. But if Harry Potter isn't substantially hit by piracy, then you might want to start shorting Macrovision; they'll start losing customers.
>
> I am confused. AFAICT, the majority of movie piracy today takes place via DivX from DVD's. How does Macrovision even play a role in this?

In its realistic form, Macrovision has nothing to do with any of it. However, since it is current industry protocol to use Macrovision copy-protection, Macrovision is of interest. In truth, this isn't even a question of copy-protection; there's plenty of evidence that none of that works. Instead this is about a technology and a company: the technology is the Macrovision copy-protection technology, and the company explicitly involved is Macrovision. Macrovision makes the bulk of their profits from this copy-protection technology, and since it is a copy-protection technology it is of general interest to many cypherpunks, even if not in any real way (see the other reply regarding picture corrections). Because of Macrovision's heavy reliance on the copy-protection technology for profits, an undermining of that critical asset will greatly diminish the value of the company, and so diminish the stock price. For any other purpose, there's basically no reason for this thread at all. Hope this helped a bit. Joe
Re: CDR: RE: Degrees of Freedom vs. Hollywood Control Freaks
- Original Message - From: [EMAIL PROTECTED] Subject: Re: CDR: RE: Degrees of Freedom vs. Hollywood Control Freaks

> Ok, somebody correct me if I'm wrong here, but didn't they officially cease production of vinyl pressings several years ago? As in *all* vinyl pressings???

They stopped selling them to the general public, but you only have to stop by a DJ record shop (as opposed to the consumer shops) to see a wide selection of vinyl albums. DJs prefer vinyl primarily because it allows beat matching by hand, scratching, etc. The only disadvantage I know of for vinyl is that it degrades as it is played; for a DJ this isn't much of a problem, since tracks have a lifespan measured in days or weeks, which is about how long the vinyl lasts at good quality. Joe
Re: FC: Hollywood wants to plug analog hole, regulate A-D
- Original Message - From: Neil Johnson [EMAIL PROTECTED] To: Joseph Ashwood [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Friday, May 31, 2002 6:59 PM Subject: Re: FC: Hollywood wants to plug analog hole, regulate A-D

> On Sunday 02 June 2002 08:24 pm, Joseph Ashwood wrote:
> > The MPAA has not asked that all ADCs be forced to comply, only that those in a position to be used for video/audio be controlled by a cop-chip. While the initial concept for this is certainly to bloat the ADC to include the watermark detection on chip, there are alternatives, and at least one that is much simpler to create, as well as more beneficial for most involved (although not for the MPAA). Since I'm writing this in text I cannot supply a wonderful diagram, but I will attempt anyway. The idea looks somewhat like this:
> >
> >   analog source --> ADC --> CopGate --> digital
> >
> > Where the ADC is the same ADC that many of us have seen in undergrad electrical engineering, or any suitable replacement. The CopGate is the new part, and will not normally be as much of a commodity as the ADC. The purpose of the CopGate is to search for watermarks, and if found, disable the bus that the information is flowing across; this bus disabling is again something that is commonly seen in undergrad EE courses, the complexity being in the watermark detection itself. The simplest design for the CopGate looks somewhat like this (again a bad diagram):
> >
> >   in ---+---[buffer gates]---> out
> >         |
> >      CopChip
> >
> > Where the buffer gates are simply standard buffer gates. This overall design is beneficial for the manufacturer because the ADC does not require redesign, and may already include the buffer gates. In the event that the buffer needs to be off-chip, the gate design is well understood and commodity parts are already available that are suitable. For the consumer there are two advantages to this design: 1) the device will be cheaper, 2) the CopChip can be disabled easily.
> > In fact disabling the CopChip can be done by simply removing the chip itself, and tying the output bit to either PWR or GND. As an added bonus for manufacturing this leaves only a very small deviation in the production lines for inside and outside the US. This seems to be a reasonable way to design to fit the requirements, without allowing for software disablement (since it is purely hardware). Joe
>
> Bz! Wrong Answer! How do you prevent some hacker/pirate (digital rights freedom fighter) from disabling the CopGate (by either removing the CopChip, finding a way to bypass it, or figuring out how to make it think it's in "Government Snoop mode")?

To quote myself, "the CopChip can be disabled easily" (last paragraph; the sentence begins "For the consumer..."). As has been pointed out by numerous people, there is no solution to this. With a minimal amount of electrical engineering knowledge it is possible for individuals to easily construct a new ADC anyway.

> Then the watermark can be removed.

Which can and should be done after conversion.

> Remember it only requires ONE high-quality non-watermarked analog to digital copy to make it on the net and it's all over.

You seem to be of the mistaken opinion that I believe this to be a good thing, when the design I presented was designed to minimize cost of design, manufacture, and removal. I am of the fundamental opinion that this is not a legal problem; the MPAA and anyone else that requires a law like this to remain profitable is advertising incorrectly. The Hollywood studios have already found the basic solution: sell advertising space _within_ the program. In fact some movies are almost completely subsidized by the ad space within the movie. By moving to that model for primary revenue it is easy to accept that a massive number of copies will be made, since that improves the value of the ad space in your next movie/episode. Of course I'm not involved with any studio, so they don't ask my opinion. Joe
Re: RE: FC: Hollywood wants to plug analog hole, regulate A-D
Everything I'm about to say should be taken purely as an analytical discussion of possible solutions in light of the possibilities for the future. For various reasons I discourage performing the analyzed alterations to any electronic device; it will damage certain parts of the functionality of the device, and may cause varying amounts of physical, psychological, monetary and legal damages to a wide variety of things. There seems to be a rather significant point that is being missed by a large portion of this conversation. The MPAA has not asked that all ADCs be forced to comply, only that those in a position to be used for video/audio be controlled by a cop-chip. While the initial concept for this is certainly to bloat the ADC to include the watermark detection on chip, there are alternatives, and at least one that is much simpler to create, as well as more beneficial for most involved (although not for the MPAA). Since I'm writing this in text I cannot supply a wonderful diagram, but I will attempt anyway. The idea looks somewhat like this:

  analog source --> ADC --> CopGate --> digital

Where the ADC is the same ADC that many of us have seen in undergrad electrical engineering, or any suitable replacement. The CopGate is the new part, and will not normally be as much of a commodity as the ADC. The purpose of the CopGate is to search for watermarks, and if found, disable the bus that the information is flowing across; this bus disabling is again something that is commonly seen in undergrad EE courses, the complexity being in the watermark detection itself. The simplest design for the CopGate looks somewhat like this (again a bad diagram):

  in ---+---[buffer gates]---> out
        |
     CopChip

Where the buffer gates are simply standard buffer gates. This overall design is beneficial for the manufacturer because the ADC does not require redesign, and may already include the buffer gates.
In the event that the buffer needs to be off-chip, the gate design is well understood and suitable commodity parts are already available. For the consumer there are two advantages to this design: 1) the device will be cheaper, and 2) the CopChip can be disabled easily. In fact disabling the CopChip can be done by simply removing the chip itself and tying the output bit to either PWR or GND. As an added bonus for manufacturing, this leaves only a very small deviation in the production lines for inside and outside the US. This seems to be a reasonable way to design to fit the requirements, without allowing for software disablement (since it is purely hardware). Joe
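To make the described data path concrete, here is a toy software model of the CopGate (purely illustrative: the watermark detector, which is the actual hard part, is reduced to a boolean flag, and the hypothetical `disabled` parameter models removing the CopChip and tying its control line to a fixed level):

```python
def cop_gate(samples, watermark_detected, disabled=False):
    """Toy model of the CopGate: buffer gates pass digitized samples
    through unless the CopChip raises its watermark flag.  Tying the
    control line to PWR/GND (chip removed) makes the gate transparent."""
    if disabled:
        # Control bit tied to a fixed level: the gate always passes data.
        return list(samples)
    if watermark_detected:
        # CopChip tri-states the bus: no data reaches the output.
        return [None] * len(samples)
    return list(samples)
```

The point of the model is only to show why the design is cheap to remove: all of the policy lives in one control line, not in the ADC itself.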
Re: Re: Two ideas for random number generation
- Original Message - From: [EMAIL PROTECTED] To: Tim May [EMAIL PROTECTED]; Eugen Leitl [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Sunday, April 21, 2002 1:33 PM Subject: CDR: Re: Two ideas for random number generation

Why would one want to implement a PRNG in silicon, when one can easily implement a real RNG in silicon?

Because with a pRNG we can sometimes prove very important things, while with an RNG we can prove very little (we can't even prove that entropy actually exists, let alone that we can collect it).

And if one is implementing a PRNG in software, it is trivial to have lots of internal state (asymptotically approaching one-time pad properties).

The problem is not having that much internal state, but what you do with it. Currently the best options on that front involve using block ciphers in various modes; these have a rather small state, but again we can quite often prove things about the construct. Joe
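As a sketch of the "block cipher in a mode" approach mentioned above — small state (just a key and a counter), deterministic, and easy to reason about — here is a minimal CTR-style generator. SHA-256 stands in for the block cipher because the Python standard library has no AES; a real construction would use a vetted cipher:

```python
import hashlib

class CtrPrng:
    """Minimal CTR-mode-style generator: the entire state is a secret
    key plus a counter that never repeats.  SHA-256 plays the role of
    the block cipher purely for illustration."""
    def __init__(self, key: bytes):
        self.key = key      # secret state
        self.counter = 0    # non-repeating counter

    def next_block(self) -> bytes:
        # "Encrypt" the counter under the key to get one output block.
        block = hashlib.sha256(self.key + self.counter.to_bytes(16, "big")).digest()
        self.counter += 1
        return block

prng = CtrPrng(b"an example seed, kept secret")
stream = b"".join(prng.next_block() for _ in range(4))  # 128 bytes of output
```

This is exactly the trade Joe describes: the state is tiny, but the construction is simple enough that one can actually argue about its properties, unlike an ad-hoc entropy pool.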
Re: Re: Two ideas for random number generation
- Original Message - From: Eugen Leitl [EMAIL PROTECTED]

On Mon, 22 Apr 2002, Tim May wrote: What real-life examples can you name where Gbit rates of random digits are actually needed?

Multimedia streams, routers. If I want to secure a near-future 10 GBit Ethernet stream with a symmetric cypher for the duration of a few years (periodic rekeying from a RNG might help?) I need both lots of internal state (the PRNG can't help leaking information about its state in the cypher stream, though the rate of leakage is a function of the smarts of the attacker) and a high data rate.

Actually that's not necessarily the case. Let's use your example of a multimedia stream server that is filling a 10 Gbit/s connection. Right now the practical minimum per stream seems to be 56 kbit/s. So even if every available connection were opened in the same second, the server would only need a rate of roughly 23 Mbit/s from its RNG to build a 128-bit key for each (10 Gbit/s divided by 56 kbit/s is about 178,000 simultaneous connections, times 128 bits apiece). A good design has the client doing most of the random number choosing anyway, where the only purpose of the server's random number is to prevent the client from biasing the result, so 128 bits is more than sufficient. So ~23 Mbit/s seems to be the peak for that. Finding situations where a decent design yields a need for an RNG to run at about 1 Gbit/s is extremely difficult. With poor designs it's actually rather easy: take an RNG that is poor enough (or a situation where that is a basic assumption) that it has to be distilled to one billionth its size; supporting that multimedia stream server would then require roughly 23 million gigabits per second.

In any case, if someone wants Gbits per second of random numbers, it'll cost 'em, as it should. Not something I think we need to worry much about.

Maybe, but it's neat trying to see how the constraints of 2d and 3d layout of cells, signal TOF and fanout issues influence PRNG design if lots of state bits and a high data rate are involved.
It is not very useful right now, agreed. Still, I think it would be a good exercise to develop a design for one, or at least a basic outline of how it could be done. The basic idea that comes to mind looks a lot like /dev/random, but run in parallel, collecting from several sources including a custom hardware pool similar to the Intel RNG. Joe
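The stream-server arithmetic can be checked directly (56 kbit/s per stream and 128 bits of server randomness per connection are the assumptions from the post):

```python
# Worst-case server RNG demand for a saturated 10 Gbit/s link,
# assuming every possible connection is opened in the same second.
link_bps = 10 * 10**9        # 10 Gbit/s link
stream_bps = 56 * 10**3      # 56 kbit/s minimum per client stream
key_bits = 128               # server's random contribution per connection

max_streams = link_bps // stream_bps       # connections at saturation -> 178571
rng_demand_bps = max_streams * key_bits    # -> 22857088, i.e. ~23 Mbit/s
```

Even this worst case is orders of magnitude below 1 Gbit/s, which is the point of the argument; the "23 million gigabits per second" figure then follows by multiplying by the 10^9 distillation factor of the hypothetical terrible RNG.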
Re: Re: Two ideas for random number generation: Q for Eugene
- Original Message - From: gfgs pedo [EMAIL PROTECTED]

Oh surely you can do better than that - making it hard to guess the seed is also clearly a desirable property (and one that the square root RNG does not have).

You can choose any arbitrary seed (greater than 100 bits, as he (I forgot who) mentioned earlier), then subject it to the Rabin-Miller test. Since the seed value is a very large number, it would be impossible to determine the actual value. The chance that the intruder finds the correct seed, or the prime number hence generated, is practically very low.

You act like the only possible way to figure it out is to guess the initial seed. The truth is that the number used leaves a substantial amount of residue in its square root, and there are various rules that can be applied to square roots as well. Since with high likelihood you will have a lot of small factors but few large ones, it's a reasonable beginning to simply store the roots of the first many primes; this gives you a strong network to work from when looking for those leftover signatures. With decent likelihood the first 2^32 primes would be sufficient for this when you choose 100-bit numbers, and this attack will be much faster than brute force. So while you have defeated brute force (no surprise there, brute force is easy to defeat), you haven't developed a strong enough generation sequence to really get much of anywhere.

Of course, finding the square root of a 100-digit number to a precision of hundreds of decimal places is a lot of computational effort for no good reason.

Yes, the effort is going to be large, but why "no good reason"?

Because it's a broken pRNG that is extremely expensive to run. If you want a fast pRNG you look to ciphers in CTR mode, or stream ciphers; if you want one that's provably good you go to BBS (which is probably faster than your algorithm anyway). So there's no good reason to implement such an algorithm.
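For reference, the BBS (Blum Blum Shub) generator mentioned above fits in a few lines. Its security argument rests on the hardness of the quadratic residuosity problem, provided p and q are large secret primes congruent to 3 mod 4; the small primes below are illustration-sized only and offer no security:

```python
def bbs_bits(p, q, seed, n_bits):
    """Blum Blum Shub sketch.  p and q must be primes congruent to
    3 mod 4 (both 10007 and 10039 are); in a real deployment they
    would be large and secret.  Emits one bit per squaring."""
    n = p * q
    x = (seed * seed) % n      # square the seed so the state is a residue
    out = []
    for _ in range(n_bits):
        x = (x * x) % n        # x_{i+1} = x_i^2 mod n
        out.append(x & 1)      # output the least-significant bit
    return out

bits = bbs_bits(p=10007, q=10039, seed=123456, n_bits=16)
```

Unlike the square-root scheme, the hard problem here (distinguishing quadratic residues modulo a Blum integer) is well studied, which is what "provably good" refers to.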
BTW, the original poster seemed to be under the delusion that a number had to be prime in order for its square root to be irrational, but every integer that is not a perfect square has an irrational square root (if A and B are coprime, A^2/B^2 can't be simplified, so it can only equal an integer when B = 1).

Nope, I'm under no such delusion :)

Just the delusion that your algorithm was good. Joe
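The perfect-square observation is easy to check for any integer, using integer square roots so no floating-point precision is involved:

```python
import math

def has_irrational_sqrt(n: int) -> bool:
    """True iff the positive integer n is not a perfect square,
    which is exactly when sqrt(n) is irrational."""
    r = math.isqrt(n)           # exact integer floor of sqrt(n)
    return r * r != n

# Primality is irrelevant: 8 is composite yet sqrt(8) is irrational,
# while 49 = 7^2 has a rational (integer) root.
```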