Re: [Cryptography] funding Tor development
On 14/10/2013 14:36, Eugen Leitl wrote:
> Guys, in order to minimize Tor Project's dependence on federal funding and/or increase what they can do, it would be great to have some additional funding, ~10 kUSD/month.

I would say what is needed is not one source at $10K/month but 10K sources at $1/month. A single source of funding is *always* a single source of control.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] AES [was NSA and cryptanalysis]
On 16/09/2013 23:39, Perry E. Metzger wrote:
> On Mon, 16 Sep 2013 11:54:13 -1000, Tim Newsham <tim.news...@gmail.com> wrote:
>> - A backdoor that leaks cryptographic secrets. Consider, for example, applications using an Intel chip with hardware assist for AES. You're feeding your AES keys directly into the CPU. Any attacker controlling the CPU has direct access and doesn't have to do any fancy pattern matching to discover the keys. Now if that CPU had a way to export some or all of the bits through some channel that would also be passively observable, the attacker could pull off an offline passive attack. What about RNG output? What if some bits were redundantly encoded in some of the RNG output bits, which were then used directly for TCP initial sequence numbers? Such a backdoor would be feasible.
>
> It might be feasible in theory (and see the Illinois Malicious Processor as an example), but I think it would be hard to pull off well -- too hard to account for changes in future code, too hard to avoid detection of what you've done.

Not sure this is true. If, instead of leaking via the RNG, you leak via the cryptographic libraries *and* the Windows socket libraries, then while there are probably two different teams involved, there is only one manufacturer: Microsoft. OK, that would exclude non-Windows systems, which in this world of BYOD means an increasing number of iOS or Android devices - but the odds of one end or the other of any given exchange being an MS platform are good. Provided the cryptographic libraries are queried in a specific manner for TCP sequence numbers (which can be enforced), the winsock team need never know how those are generated, leaving just the cryptographic library holding both the input and the output.

> On the other hand, we know from the press reports that several hardware crypto accelerators have been either backdoored or exploited. In those, leaking key material to observers in things like IVs or choices of nonces might be quite feasible. Such devices are built to be tamper resistant, so no one will even notice if you add features to try to conceal the extra functionality of the device. For the Intel chips, I suspect that if they've been gimmicked, it will be more subtle, like a skew in the RNG that could be explained away as a manufacturing or design error. That said, things like the IMP do give one pause. And *that* said, if you're willing to go as far as what the IMP does, you no longer need to simply try to leak information via the RNG or other crypto hardware; you can do far, far worse. (For those not familiar with the Illinois Malicious Processor: https://www.usenix.org/legacy/event/leet08/tech/full_papers/king/king_html/ )
>
> Perry
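To make the ISN scenario concrete, here is a toy sketch (in Python, purely illustrative - the names, the 16-connection schedule, and the "trapdoor" constant are all invented for the example) of how a backdoored sequence-number generator could redundantly encode key bytes in output that still looks random to anyone without the trapdoor:

```python
import hashlib
import os

SECRET_KEY = os.urandom(16)            # key material the backdoor exfiltrates
TRAPDOOR = b"hypothetical-trapdoor"    # constant known only to the attacker

def _pad(conn_id: int) -> int:
    # per-connection whitening byte only the trapdoor holder can regenerate
    return hashlib.sha256(TRAPDOOR + conn_id.to_bytes(4, "big")).digest()[0]

def backdoored_isn(conn_id: int) -> int:
    """32-bit 'initial sequence number': 24 genuinely random bits, plus one
    whitened key byte in the low 8 bits. Without the trapdoor constant the
    whole value is indistinguishable from random output."""
    leak = SECRET_KEY[conn_id % len(SECRET_KEY)] ^ _pad(conn_id)
    return int.from_bytes(os.urandom(3), "big") << 8 | leak

def passive_recover(observed: list[int]) -> bytes:
    """What a passive observer holding the trapdoor does with sniffed ISNs."""
    return bytes((isn & 0xFF) ^ _pad(i) for i, isn in enumerate(observed))

# 16 observed connections suffice to recover the whole 16-byte key
isns = [backdoored_isn(i) for i in range(16)]
assert passive_recover(isns) == SECRET_KEY
```

The point of the sketch is only that the channel is offline and passive: nothing in the observed traffic is malformed, and recovery needs no interaction with the victim.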
Re: Source for Skype Trojan released
Stephan Neuhaus wrote:
> On Aug 31, 2009, at 13:20, Jerry Leichter wrote:
>> It can "...intercept all audio data coming and going to the Skype process."
>
> Interesting, but is this a novel idea? As far as I can see, the process intercepts the audio before it reaches Skype and after it has left Skype. Isn't that the same as calling a keylogger a PGP Trojan?

Not really. More generically, you could call it a VoIP trojan, or even an audio-monitoring trojan - presumably a more advanced version could listen to the mic stream even when the VoIP application is not in use, in order to obtain information. However, in context, this was designed to be used by law enforcement to bug a Skype VoIP session, so the name reflects the design goal; yes, it is a more generalized attack than that, but not in intent or (presumed) usage.

---
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com
Re: X.509 certificate overview + status
Travis wrote:
> Hello, Recently I set up certificates for my server's SSL, SMTP, IMAP, XMPP, and OpenVPN services. Actually, I created my own CA for some of the certificates, and in other cases I used self-signed. It took me substantially more time than I had anticipated, and I'm left with feelings of unease.

Odd. The OpenSSL installations I am familiar with came with example config files that were perfectly functional; it took me about ten minutes to figure out what needed doing purely from the man pages and the example config. If ten minutes is too long, just go with xca (http://sourceforge.net/projects/xca), which does it all in a nice, pretty GUI for you. A few distros (SUSE, for example) also have a GUI for certificate issuing in their central admin tool.
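For reference, the whole "own CA plus a signed server cert" exercise is a handful of OpenSSL commands. This is a minimal sketch - file names, subjects, and validity periods are examples, and a production CA would want a config with proper extensions (basicConstraints, key usage) rather than these bare defaults:

```shell
# 1. Create a self-signed CA key + certificate
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -subj "/CN=Example Internal CA" -days 365

# 2. Create a server key and a certificate signing request (CSR)
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
    -subj "/CN=mail.example.com"

# 3. Sign the CSR with the CA
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out server.crt -days 365

# 4. Check the result chains back to the CA
openssl verify -CAfile ca.crt server.crt   # expect: server.crt: OK
```

Import ca.crt into each client's trust store and every cert the CA signs is accepted without warnings.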
Re: once more, with feeling.
Darren J Moffat wrote:
> Warnings aren't enough in this context [where they already exist]; the only thing that will work is stopping the page being seen - replacing it with a clearly worded explanation with *no* way to pass through and render the page (okay, maybe with a debug build of the browser, but not in the shipped product).

One thing that concerns me is that in the new release of Firefox, there appears to be NO way to get to a site that has a bad certificate (or a self-signed certificate) other than overriding the warning permanently - no "OK, let me see it; I have seen the warning and want to look just this once" option of the kind the Remember Mismatched Domains plugin for 2.x gave you.
Re: once more, with feeling.
Paul Hoffman wrote:
> At 11:21 PM +0100 9/9/08, Dave Howe wrote:
>> Darren J Moffat wrote:
>>> Warnings aren't enough in this context [where they already exist]; the only thing that will work is stopping the page being seen - replacing it with a clearly worded explanation with *no* way to pass through and render the page (okay, maybe with a debug build of the browser, but not in the shipped product).
>>
>> One thing that concerns me is that in the new release of Firefox, there appears to be NO way to get to a site that has a bad certificate (or a self-signed certificate) other than overriding the warning permanently - no "OK, let me see it; I have seen the warning and want to look just this once" option of the kind the Remember Mismatched Domains plugin for 2.x gave you.
>
> That may concern you, but I consider it a feature. Instead of teaching users to always click through the damn dialog boxes, FF3 says "if you fell for it once, you're going to always fall for it, so we won't teach you bad habits." There are arguments for either strategy.

True enough, but the clickthru bandits will just see a button that reads to them as "make this error go away"; next time they will forget they did it, and will take the fact that they went straight into the site to mean the problem was fixed - or simply not remember there ever was a problem. In the meantime, a choice I *used to have* is now taken from me, in the interests of selling more EV certificates.

> Given that few or none of us on this list are actually trained interface experts, I'm sure we could debate this until Perry pulls the moderator switch again. The salient point is that people who have more stake in the game (Mozilla Inc.) have spent longer thinking about this than we give them credit for, and have come to the design decisions that they have.
Re: Ransomware
The Fungi wrote:
> On Tue, Jun 10, 2008 at 11:41:56PM +0100, Dave Howe wrote:
>> The key size would imply PKI; that being true, then the ransom may be for a session key (specific per machine) rather than the master key it is unwrapped with.
>
> Per the computerworld.com article: "Kaspersky has the public key in hand - it is included in the Trojan's code - but not the associated private key necessary to unlock the encrypted files."
> http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9094818
> This would seem to imply they already verified the public key was constant in the trojan and didn't differ between machines (or that I'm giving Kaspersky's team too much credit with my assumptions).

Sure. However, if the virus (once it has infected the machine) generated a random session key, symmetric-encrypted the files, then encrypted the session key with the public key as part of the ransom note, that would allow a single public key to be used to issue multiple ransom demands without the unlocking of any one machine revealing the master key that could unlock all of them. Giving away your entire extortion capability to the first person to pay up doesn't seem sane, if you could as easily make each machine a unique proposition...
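The hybrid scheme described above can be sketched in a few lines. This is a toy illustration only - it uses textbook RSA with deliberately tiny primes and a throwaway XOR keystream in place of a real symmetric cipher, and all names are invented:

```python
import hashlib
import secrets

# Toy "master" RSA keypair (textbook RSA, tiny primes -- illustration only)
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: held only by the extortionist

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Throwaway symmetric cipher: XOR with a SHA-256 counter keystream."""
    out, ctr = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

def infect(plaintext: bytes) -> tuple[bytes, int]:
    """Per-machine step: pick a fresh session secret, encrypt the files with
    it, and leave only the public-key-wrapped secret in the ransom note."""
    secret = secrets.randbelow(n)                         # unique per machine
    key = hashlib.sha256(str(secret).encode()).digest()
    return stream_xor(key, plaintext), pow(secret, e, n)  # ciphertext, wrapped secret

def ransom_unlock(ciphertext: bytes, wrapped: int) -> bytes:
    """Extortionist's step: unwrap with d. Releasing one session secret does
    not expose d, so every other victim stays locked."""
    secret = pow(wrapped, d, n)
    key = hashlib.sha256(str(secret).encode()).digest()
    return stream_xor(key, ciphertext)

files = b"my irreplaceable documents"
ct, note = infect(files)
assert ransom_unlock(ct, note) == files
```

A constant public key in the binary is thus perfectly consistent with per-machine ransoms: the key Kaspersky found only wraps session secrets, and paying one ransom reveals nothing reusable.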
Re: Ransomware
Jim Youll wrote:
> If there's just one key, then Kaspersky could get maximum press by paying the ransom and publishing it. If there are many keys, then Kaspersky still has reached its press-coverage quota, just not as dramatically.

The key size would imply PKI; that being true, then the ransom may be for a session key (specific per machine) rather than the master key it is unwrapped with.
Re: Can we copy trust?
Ben Laurie wrote:
> Ed Gerck wrote:
>> Ben Laurie wrote:
>>> But doesn't that prove the point? The trust that you consequently place in the web server because of the certificate _cannot_ be copied to another webserver. That other webserver has to go out and buy its own copy, with its own domain name in it.
>>
>> A copy is something identical. So, in fact you can copy that server cert to another server that has the same domain (load balancing), and it will work. Web admins do it all the time. The user will not notice any difference in how the SSL will work.
>
> Obviously. Clearly I am talking about a server in a different domain.

Up until recently, you could buy a cert for one domain, use *it* to issue a cert for another domain, and the major web browsers wouldn't kick at the traces, provided you sent both certs in the SSL handshake. Thankfully, they fixed that before *too* many phishers figured it out.
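That historical browser bug boils down to a missing basicConstraints check when walking the chain. A minimal sketch of the logic, using a hypothetical `Cert` record rather than a real X.509 parser (signature verification is elided):

```python
from dataclasses import dataclass

@dataclass
class Cert:                  # hypothetical stand-in for a parsed X.509 cert
    subject: str
    issuer: str
    is_ca: bool              # the basicConstraints CA flag
    # signature checking elided; assume each link is cryptographically valid

def chain_ok(chain: list["Cert"], trusted_roots: set[str]) -> bool:
    """Walk leaf -> root. The crucial line is the is_ca check: without it,
    any end-entity cert (e.g. a cheap domain cert) can mint further certs,
    which is exactly the bug described above."""
    for child, parent in zip(chain, chain[1:]):
        if child.issuer != parent.subject:
            return False
        if not parent.is_ca:          # the check early browsers skipped
            return False
    return chain[-1].subject in trusted_roots

root = Cert("Example Root CA", "Example Root CA", is_ca=True)
site = Cert("www.victim.example", "Example Root CA", is_ca=False)
phish = Cert("www.bank.example", "www.victim.example", is_ca=False)

assert chain_ok([site, root], {"Example Root CA"})
assert not chain_ok([phish, site, root], {"Example Root CA"})  # leaf-signed leaf rejected
```

With the `is_ca` line removed, the second chain validates, and a $20 domain cert becomes a universal phishing CA.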
Re: How is DNSSEC
James A. Donald wrote:
> From time to time I hear that DNSSEC is working fine, and on examining the matter I find it is working fine except that ...

DNSSEC is working fine as a technology. However, it is worth remembering that it works based on digitally signing an entire zone - and, the state of the world being what it is, most people prohibit zone transfers, so any technology that would allow a zone walk is not going to be deployed. As far as I can tell, this is a basic design flaw, so it isn't going to be rectified anytime soon.
Re: delegating SSL certificates
John Levine wrote:
> | Presumably the value they add is that they keep browsers from popping
> | up scary warning messages
> | Apple's Mail.app checks certs on SSL-based mail server connections. It has the good - but also bad - feature that it *always* asks for user approval if it gets a cert it doesn't like.
>
> Good point -- other mail programs such as Thunderbird also pop up the scary warnings. I've paid the $15 protection money for the certs on my mail servers.

I have found that just adding the cert to the local keystore has pretty much the same effect. There is a nice addon for Thunderbird/Firefox (which will apparently be a native ability in v3 of the latter) called Remember Mismatched Domains that lets you suppress an error for a specific cert/domain mismatch.
Re: delegating SSL certificates
[EMAIL PROTECTED] wrote:
> So at the company I work for, most of the internal systems have expired SSL certs, or self-signed certs. Obviously this is bad.

Sorta. TLS gets along with self-signed just fine, though, and obviously you can choose to accept a root or unsigned cert on a per-client basis.

> I know that if we had IT put our root cert in the browsers, that we could then generate our own SSL certs.

Sure. For IE it's just a registry key, trivial to push out using login scripts etc.

> Are there any options that don't involve adding a new root CA?

Buying an intermediate cert from an existing CA? Buying a wildcard cert for your domain, and using the same wildcard cert on all nodes?

> I would think this would be rather common, and I may have heard about certs that had authority to sign other certs in some circumstances...

At one point, you could use *any* cert to sign another cert; IE didn't bother checking. I believe they have fixed that now.
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Philipp Gühring wrote:
> I once implemented SSL over a GSM data channel (without PPP and without TCP), and discovered that SSL needs better integrity protection than raw GSM delivers. (I am quite sure that's why people normally run PPP over GSM channels ...) SSH has the same problems. It also assumes an active attack in case of integrity problems of the lower layer, and terminates the connection.

TBH I can't see the problem - the Unix philosophy of doing one thing well, and chaining simple tools to make complex ones, works well here. We have:

TCP - well understood, has crude integrity and reliability checks built in, and works reasonably well at converting a bunch of packets leaving and arriving via your network connection into something vaguely like a stream point-to-point connection. Provided by every ISP across the planet; problems at this level can be handed off to experienced network engineers who will at least understand the problem.

SSL - kludge thrown together by a browser manufacturer, probably to create a market for a bunch of companies who generated two prime numbers and now sell the answers to simple math queries involving the numbers. However, it works reasonably well, and has some crude authentication of the server built in (via the aforementioned bunch of companies), which at least limits potential hackers to those whose money the bunch of companies will accept ;) Again, works well in its domain, but requires a reasonably reliable channel to talk over, and a message to carry. Effectively turns an unencrypted channel into an encrypted one; it would work as well over a serial link as a TCP link (modulo the domain-name check in the cert).

HTTP - pretty basic file-transfer protocol, with limited scope for negotiation, but designed largely to move text files from a server to a client. Requires a transport; can use TCP, SSL-over-TCP, serial - whatever your server will listen on and your client can request on.

Add them together and you get HTTPS. Leave out the SSL, and you get HTTP as normally spoken - so the SSL and HTTP are pretty much drop-in modules. You could define HTTPG (HTTP over a security protocol other than SSL), and if a browser could support it, both TCP and HTTP would still be happy. You could also define HTTPS-over-Aldis-lamp and, provided the operators were sufficiently accurate, securely download your web page from a server on a nearby hilltop after dark by replacing the TCP layer :)
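The drop-in-modules point can be shown in a few lines: as long as each layer exposes the same send/recv surface, the layer above cannot tell what it is stacked on. A toy sketch (the in-memory "transport", the XOR "crypto" layer, and the one-line "HTTP" are all stand-ins, not real protocol implementations):

```python
class Loopback:
    """Toy stand-in for a TCP stream: whatever is sent can be received."""
    def __init__(self):
        self.buf = bytearray()
    def send(self, data: bytes):
        self.buf += data
    def recv(self) -> bytes:
        data, self.buf = bytes(self.buf), bytearray()
        return data

class ToyCryptoLayer:
    """Stand-in for SSL: same send/recv interface, so the layer above cannot
    tell whether it talks to the transport directly or through the crypto."""
    def __init__(self, lower, key: int):
        self.lower, self.key = lower, key
    def send(self, data: bytes):
        self.lower.send(bytes(b ^ self.key for b in data))
    def recv(self) -> bytes:
        return bytes(b ^ self.key for b in self.lower.recv())

def http_get(transport, path: str) -> bytes:
    """Minimal 'HTTP': only needs send/recv -- the point of the layering."""
    transport.send(b"GET " + path.encode() + b" HTTP/1.0\r\n\r\n")
    return transport.recv()

# "HTTP": straight over the raw transport
assert http_get(Loopback(), "/index.html").startswith(b"GET /index.html")

# "HTTPS": identical HTTP code, crypto layer dropped in underneath
secure = ToyCryptoLayer(Loopback(), key=0x5A)
assert http_get(secure, "/index.html").startswith(b"GET /index.html")
```

Swap `Loopback` for a serial line or a signal lamp and `http_get` is none the wiser, which is exactly the argument above.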
Re: patent of the day
Perry E. Metzger wrote:
> http://www.google.com/patents?vid=USPAT6993661
> Hat tip to a party who prefers to remain anonymous who sent me the patent number.

Interesting. He patented E4M, then two years old or so...
Re: Bid on a SnakeOil Crypto Algorithm Patent
Saqib Ali wrote:
> http://www.freepatentauction.com/patent.php?nb=950

Google Patents gives me: http://www.google.com/patents?id=HaN6EBAJ&dq=7,088,821
Re: Seagate announces hardware FDE for laptop and desktop machines
Leichter, Jerry wrote:
> First off, it depends on how the thing is implemented. Since the entire drive is apparently encrypted, and you have to enter a password just to boot from it, some of the support is in an extended BIOS or some very early boot code, which is below any OS you might actually have on the disk.

If I had to guess, I would suggest they were using the ATA secure hard-drive password API, and really providing security rather than the firmware lock usually associated with such passwords. That would allow you to retrofit it to a lot of laptops which already use that functionality, in a plug-and-play manner.
Re: A crazy thought?
Allen wrote:
> Hi Gang, In a class I was in today, a statement was made that there is no way that anyone could present someone else's digital signature as their own, because no one else has their private key to sign it with. This was in the context of a CA certificate which had it inside. I tried to suggest that there might be scenarios that could accomplish this, but was told "impossible." Not being totally clear on all the methods that bind the digital signature to an identity, I let it be; however, the "impossible" mantra got me to thinking about it and wondering what vectors might make this possible.

Awareness of the failure models of various PKI solutions is an important part of using and designing uses for them. There are many, many failure modes for the current X.509/Certification Authority model used by SSL. (Everyone already familiar with the failure modes should probably hit "next message" now, unless they want to double-check I am not giving out bad advice; this email is going to get rather long. :)

Consider the following steps. I will predefine three actors here:

[SITE] - which for email is the *recipient*, and for web traffic is the server owner.
[USER] - which is the mail sender and/or site user - the originator of protected data.
[CA] - which is the certificate authority.

1. [CA] generates and stores securely a private key.

This is a once-in-a-decade event, but even so, there are failure modes. One possible mode is to use political pressure (or just bad coding) to force one of the two primes used in RSA to be either fixed or drawn from a very small subset of possible primes (aka "canned" primes). As you can imagine, finding the private key becomes near trivial if you know one of the two primes in advance... We can move on to the security of the key later.

2. [CA] generates and stores a public certificate using the private key.

This at least is without any real issues (except security of the private key, of course). In practice, this would be the same operation as (1), but it need not be.

3. [CA] transmits the public key verifiably to the end recipients.

This is actually more complex than it sounds - I would guess 99% of the keys everyone has on their machine (if not 100%) were supplied to them with the browser or, in the case of IE, preinstalled on the machine. The vast majority of users have no idea how even to display those keys, never mind check them. To verify, ask yourself this question. For each web browser or email package installed on your machine:

a) Where are the root keys stored?
b) How do I view them?
c) Where is the public key or hash I should check?
d) Where do I obtain a known-good copy of that, so I can verify it?

The answers to some of those might surprise you (for instance, IE stores its root certs unprotected in the registry, and your AD administrator can override them at will; IE keys are used by almost everything supplied by Microsoft, including execution digital signatures and email - Outlook or OE). All are trivially overridable by an attacker with write access to your machine.

4. [SITE] generates and stores securely a private key.

Pretty much the same provisos apply here as did for the CA. Do you know, and can you trust, your key-generation software? IIS, for instance, relies on a tool supplied by Microsoft for the purpose; Apache usually suggests OpenSSL; email clients usually use their associated web browser for an interactive generation of both key and CSR while connected to the CA's website. However, as another exercise - for each, where (and how) is the private key stored and protected?

5. [SITE] generates and forwards to the CA a certificate signing request (CSR).

Modulo the usual private-key concerns, this is usually trouble-free (and again, is usually a combined step with key generation).

6. [CA] receives and (for a payment) signs the CSR with its private key.

This is where things get interesting. The certificate generated at this stage may or may not use exact copies of the data in the CSR; it may or may not be signed directly by the CA master key (for many CAs, the master key is kept offline in a bank vault and used to sign an intermediate key which is used for actual CSRs). In fact, it may sign *multiple* intermediate keys, for a number of good reasons (which we won't go into at this stage) but which also introduce another possible attack vector for a TLA with the power to force a CA of its choice (or someone with access to a private key there) to do selected tasks. Several potential attacks require that this transmission to the CA be intercepted and fulfilled by someone other than the CA themselves. Conventional wisdom says that there is little or no risk caused by site-certificate substitution, and to a great extent this is correct - other than the possible forcing of the symmetric encryption method to one breakable by the TLA, there is little or no benefit to such a substitution.

7.
Re: the return of key escrow?
Chris Olesch wrote:
> Ok, the lurker posts... Can someone explain to me why security specialists think this: "The system uses BitLocker Drive Encryption through a chip called TPM (Trusted Platform Module) in the computer's motherboard" is going to stop authorities from retrieving data? I ask this question on the basis of the encrypted hard drive on the old Xbox. It supposedly used a secure key so the hard drive couldn't be upgraded, yet this fact didn't slow down the mod scene. It's not as if they are hardware-encrypting tightly, is it?

The old Xbox didn't encrypt the data on the hard drive - instead, it used a password on the drive firmware that almost all modern hard drives support (your home PC's drive almost certainly supports the same thing, even if your BIOS doesn't). Defeating the password requires one of:

a) obtaining the password
b) replacing the drive BIOS or controller
c) using an already-unlocked drive
d) defeating the OS on a running system to allow writes to the drive

All known Xbox hacks used method c) or d) - using a game to bypass the write protection, or disconnecting the IDE cable after the drive was unlocked and using a standard USB-IDE adaptor to write to the drive.
Re: Hiding data on 3.5 using 40 track mode
Travis H. wrote:
> In the FBI's public statement about Hanssen, they relate how he used a 3.5in floppy in 40-track mode to store data, but if it was read in the ordinary way it would appear blank. IIRC, high-density floppies are 80 tracks per inch, and double-density were 40 tpi. So, how do you suppose this trick works? The official details are, of course, vague.

It would have to be a guess. Back in the 5.25in days, we would frequently use a disk on both standard and 1.2MB drives; on the 1.2s, the head was literally half the width of a standard 5.25in drive's, so you got the occasional problem due to this. For virgin disks, reading a file written on a 1.2 on a standard drive was no problem, and writing *any* disk on a standard always worked. After a bit of use, though, an interesting but predictable problem emerged - if you wrote a file on a standard, then overwrote that file on a 1.2, only half the track (the lowest half) would be overwritten; the other half would retain its original data, and a standard drive attempting to read back the data would in fact read unreliably.

Applying this to the problem would seem to suggest that, if you format a standard 1.44MB floppy as a 720K, only *alternate* tracks are actually formatted, and the intervening tracks are left blank. If you wrote and installed a special driver, you could read and write those *alternate* tracks independently of the formatted tracks; even in a classic 720K 3.5in drive, the worst you could expect would be an unreliable read, and the best would be a reliable read from the real tracks, ignoring the interleaved alternates. Of course, reading this floppy on a normal 1.44MB drive would show nothing wrong, and it would read as a perfectly usable 720K floppy. Of course, *why* you would want to do that is another issue.

Oh - before I forget, I was thinking about covert channels and CDs a few days ago and realised there is already one - CDs support a special mode called CD+G. This is used in making karaoke CDs, to support the video data stream; the vast majority of PC drives cannot read this data - there are exceptions, of course. However, karaoke players (and many low-end DVD players) CAN, and by design display it on the screen of the playback device. This is pretty much STO, but could trivially conceal a message that normal examination of the CD would not reveal, but which the recipient could display (again, trivially) using nothing more than a TV set and a cheap mass-produced DVD player. Needless to say, you could always write or read data from the low bits of the audio too, provided you got a reliable read of that data... the software to do that could be considered suspicious, though, while a CD that has a short text message embedded in track #12 of a 20-track audio collection would be harder to detect (but of course, for even vague security, it would have to be treated as a steg channel and encrypted in addition, with something decodable by hand like a book code).
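The "low bits of the audio" channel is classic LSB steganography. A minimal sketch over a list of 16-bit PCM samples (the sample data and message are invented; a real use would spread and encrypt the bits, as noted above, rather than write them contiguously from sample 0):

```python
def embed(samples: list[int], message: bytes) -> list[int]:
    """Hide message bits in the least-significant bit of 16-bit audio
    samples; a 1-LSB change is far below audibility on playback."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(samples: list[int], length: int) -> bytes:
    """Read length bytes back out of the sample LSBs."""
    return bytes(
        sum((samples[b * 8 + i] & 1) << i for i in range(8))
        for b in range(length)
    )

pcm = [1000 + 7 * i for i in range(256)]        # stand-in for CD audio samples
hidden = embed(pcm, b"meet at dawn")
assert extract(hidden, 12) == b"meet at dawn"
assert max(abs(a - b) for a, b in zip(pcm, hidden)) <= 1   # at most 1 LSB changed
```

The "provided you got a reliable read" caveat is the real constraint: audio extraction is not bit-exact across drives, so a practical version needs error correction on top.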
Re: [EMAIL PROTECTED]: Re: thoughts on one time pads]
Eugen Leitl wrote:
> Sudden thermal stress (liquid nitrogen, etc.) might be good enough to delaminate, leaving clear disks behind.

Not sure what the data surface is made from, but surely a suitable organic solvent could remove the "paint" into suspension, leaving a clear plastic disc and no trace of organized data?
Re: thoughts on one time pads
John Denker wrote:
> Dave Howe wrote:
>> Hmm. Can you selectively blank areas of CD-RW?
>
> Sure, you can. It isn't so much different from rewriting any other type of disk.

Yeah, I know. I'm just unsure how effective blanking is on CD-RW for (say) a pattern that has been in residence for two years, but now must be unrecoverable.

> There are various versions of getting rid of a disk file. 5) Grinding the disk to dust. AFAIK this is the only NSA-approved method. A suitable grinder costs about $1400.00. http://cdrominc.com/product/1104.asp

For most, scratching off the carrier substrate is usually enough - I *might* be persuaded some trace remains on the plastic disc afterwards, but I can't imagine anyone recovering from a disk that had been a) scraped clean, then b) thrown into a blast furnace containing liquid iron, or even a small home smelter. However, I am more interested in methods to destroy just a single track at a time, and I doubt you could deface the disk reliably *and* still retain read ability on the remaining tracks.
Re: a crypto wiki
Anton Stiglic wrote:
> I agree. The cryptodox page looks nice, but I would rather see the content go into Wikipedia, which is worked on, and looked at, by many more people - a really beautiful community work.

There is also the Wikibooks cryptography book, which is sort of a co-production and shares a lot of text with the Wikipedia crypto entries. The idea is to get a slightly more fixed view of the pages, which can then be published in paper form. http://en.wikibooks.org/wiki/Cryptography
Re: thoughts on one time pads
Jonathan Thornburg wrote:
> 1. How do you ensure physical security for the N years between when you exchange CDs and the use of a given chunk of keying material? The single-CD system is brittle -- a single black-bag burglary to copy the CD, and poof, the adversary has all your keys for the next N years.

Hmm. Can you selectively blank areas of CD-RW?
Re: [Clips] Sony to Help Remove its DRM Rootkit
R.A. Hettinga wrote:
> http://www.betanews.com/article/print/Sony_to_Help_Remove_its_DRM_Rootkit/1130965475

Unfortunately, this is an exaggeration of what Sony have agreed to do - they have issued an installable which removes the filename-cloaking component while leaving the rest (primarily, the CD-ROM driver-chain filters) in place. It is still not possible to remove these other than manually, and yes, the system as a whole still uses up CPU and memory for no benefit other than to Sony (and even then, it's a trivial hack to prevent the DRM from installing in the first place - just disable autorun, which anyone halfway paranoid does anyhow).

Mind you, Sony seem to have added another wrinkle to this story with their new DRM - which is aimed not at preventing p2p copies, but at isolating Sony CDs from iTunes: http://bigpicture.typepad.com/comments/2005/10/drm_crippled_cd.html
Re: Is there any future for smartcards?
Eugen Leitl wrote: On Sun, Sep 11, 2005 at 06:49:58PM -0400, Scott Guthery wrote: 1) GSM/3G handsets are networked card readers that are pretty successful. They are I'd wager about as secure as an ATM or a POS, particularly with respect to social attacks. The smartphones not secure at all, because anything you enter on the keypad and see on the display can be compromised, so the tamper-proof cryptographic goodness locked inside the SIM smartcard will cheerfully approve whatever the code running on the smartphone will tell it to approve, regardless of what is being displayed to the user. TBH I don't think the smartcard approach will work - really, everything needed to verify what you are signing or encrypting needs to be within your secure boundary, so the only sensible approach is for a mobile-sized cryptographic device to be autonomous, but accept *dumb* storage cards for reading and writing; that dumb card can then be used to transfer a unsigned document to the cryptographic device, which when inserted uses a relay or switch to assume control of the keyboard and screen; person wishing a digital signature stores the document to be signed onto the card; signer inserts into his device, uses the device's display to assure himself this is really what he wants to sign and then keys his access code. The device then produces a digital signature certificate (possibly deliberately adding some harmless salt value to the end before signing, which is noted in the detached certificate's details) and copies that to the dumb card, retaining a copy for the user's own records. by using a switch controlled by the cryptographic module, the display can be then used by an alternate system when not in use - for example, a mobile phone - while providing an airgap between the secure module and the insecure (and yes, this would mean if you received a contract via email, you would have to write it to a card, remove that card from a slot, insert it into a different slot, then check it. 
I can't see how the system can be expected to work otherwise) - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
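The salted detached-signature step described above can be sketched in a few lines. This is a hypothetical illustration, not any real device's firmware: HMAC-SHA256 stands in for whatever signing primitive the device would actually use, and the field names are invented.

```python
import hashlib
import hmac
import os

def sign_detached(device_key: bytes, document: bytes) -> dict:
    """Sketch of the device's signing step: append a random salt to the
    document before signing, and record the salt in the detached
    certificate so a verifier can reproduce the exact signed input."""
    salt = os.urandom(16)
    sig = hmac.new(device_key, document + salt, hashlib.sha256).hexdigest()
    return {"salt": salt.hex(), "signature": sig}

def verify_detached(device_key: bytes, document: bytes, cert: dict) -> bool:
    """Recompute the signature over document + recorded salt and compare
    in constant time."""
    salt = bytes.fromhex(cert["salt"])
    expected = hmac.new(device_key, document + salt, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])
```

Because the salt travels in the detached certificate rather than in the document, the document on the dumb card stays byte-identical to what the recipient sees, while no two signing operations ever sign the same input.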
Re: Another entry in the internet security hall of shame....
Peter Gutmann wrote: TLS-PSK fixes this problem by providing mutual authentication of client and server as part of the key exchange. Both sides demonstrate proof-of-possession of the password (without actually communicating the password); if either side fails to do this then the TLS handshake fails. Its only downside is that it isn't widely supported yet; it's only just been added to OpenSSL, and who knows when it'll appear in Windows/MSIE, Mozilla, Konqueror, Safari, ... So, the solution to nobody using the existing (but adequate) solution is another existing (but barely implemented and also unused) solution?
Re: Another entry in the internet security hall of shame....
James A. Donald wrote: SSL works in practice, X509 with CA certs does not work in practice. People have been bullied into using it by their browsers, but it does not give the protection intended, because people do what is necessary to avoid being nagged by browsers, not what is necessary to be secure. Indeed so - however, if Google makes it just work, then there will be a large swathe of people out there wondering what does this DIGITAL SIGNATURE button do in gmail? plus a smaller subset who have Google Talk and can perform secure e2e VoIP using X509 certs that they don't even know they have. It's not ideal, but it's not a bad thing either - a little more security, using a known method, without any individual user having to know or care how it works (and let's face facts here: no solution that requires an end user to get his finger out and do something without being forced to, no matter how trivial the task, ever had decent uptake).
Re: Another entry in the internet security hall of shame....
Nicolas Williams wrote: Yes, a challenge-response password authentication protocol, normally subject to off-line dictionary attacks by passive and active attackers, can be strengthened by throwing in channel binding to, say, a TLS channel, such that: a) passive attacks are not possible, b) MITMs below TLS get nothing that can be attacked off-line, and c) server impersonators can be detected heuristically when the attacker can't retrieve the password in real-time (such an attack is indistinguishable from password incorrect situations, but...). Indeed. The main problem with TLS is lack of PKI support; in principle, this isn't true - TLS uses X509 certs, just like any other SSL-based protocol - but in practice, everyone uses self-signed certificates and nobody checks them, or even caches them to see if they change. So - interesting idea time. What if:
1) Talk strongly authenticated *all* connections, even p2p ones, using a GoogleMail master certificate and a Googletalk.Googlemail single-use certificate to authenticate the GoogleMail server.
2) Google got into the CA business; namely, all GoogleMail owners suddenly found they could send and receive S/MIME messages from their googlemail accounts, using a certificate that just appeared and was signed by the GoogleMail master cert. Given the GoogleMail user base, this could make GoogleMail a de facto CA in days.
3) This certificate was downloaded to your GoogleTalk client on login, and NEVER cached locally.
OK, from a security professional's POV this would be a horror - certificates all generated by the CA (with no guarantee they aren't available to third parties) - but it *would* bootstrap X509 into common usage, and take-up of S/MIME certificates was always the bottleneck for getting encrypted mail to go mainstream (PGP has the same problem, but in addition has the WoT issues and, until recently, the actual obtaining of the software to contend with). I can only hope that if this *is* in the game plan, the certificates are marked autogenerated, so that in the longer term a more conventional, client-side-generated certificate can be used instead.
Re: Another entry in the internet security hall of shame....
Ian G wrote: none of the above. Using SSL is the wrong tool for the job. For the one task mentioned - transmitting the username/password pair to the server - TLS is completely appropriate. However, hash-based verification would seem to be more secure, require no encryption overhead on the channel at all, and really, connections and crypto should be primarily P2P (and not server-relayed) anyhow. It's a chat message - it should be encrypted end to end, using either OpenPGP or something like OTR. And even then, you've only covered about 10% of the threat model - the server. Yeah - you have an unencrypted interchange point: the server. There are aspects to that which make it both a good and a bad thing, mostly bad. For example, you allow interception at the server (which may be a requirement for an American-based company, but is still bad), and you provide a single point of failure for hackers (very bad). Most of the good aspects revolve around only having to support one client cert you can embed in your own client (or make available on your website) and not an entire PKI infrastructure.
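The hash-based verification mentioned above can be sketched as a minimal challenge-response exchange. To be clear, this is not the actual Google Talk or SASL mechanism - the function names and construction here are purely illustrative:

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    # Server picks a fresh random nonce for each login attempt,
    # so a captured response can never be replayed.
    return os.urandom(16)

def client_response(password: str, nonce: bytes) -> str:
    # The password itself never crosses the wire; only an HMAC keyed
    # with a password-derived value over the server's nonce does.
    key = hashlib.sha256(password.encode()).digest()
    return hmac.new(key, nonce, hashlib.sha256).hexdigest()

def server_verify(stored_key: bytes, nonce: bytes, response: str) -> bool:
    # Server stores only the derived key, recomputes the expected
    # response, and compares in constant time.
    expected = hmac.new(stored_key, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

Note that a passive observer who captures both nonce and response can still mount an offline dictionary attack against a weak password - which is exactly the weakness that binding such an exchange to a TLS channel is meant to blunt.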
Re: solving the wrong problem
Ilya Levin wrote: John Denker [EMAIL PROTECTED] wrote: So, unless/until somebody comes up with a better metaphor, I'd vote for one-picket fence. Nonsense fence may be less metaphoric but more clear. I disagree - one-picket fence gives a clear impression of a protective device that is hardened at but one point, leaving the rest insecure. Nonsense fence doesn't give any real image.
Re: aid worker stego
Peter Fairbrother wrote: I don't think there is much danger of severe torture, but I don't think innocent-until-proven-guilty applies either, and suspicion should be minimised or avoided. Depends on what you want to avoid. The best solution for software is dual-use - 7-Zip for file encryption, standard S/MIME-capable email software (such as Thunderbird or even Outlook Express) for PKI. However, encrypted emails are *always* going to stick out like a sore thumb if intercepted, and even the output of most stego packages will look suspect (unless your aid worker is in the habit of sending large numbers of digital photos by email). This could be arranged - get him to take new, original photos of what he sees while doing his work, use them exactly once for stego, then keep the stegoed versions around on the HD so that any comparison later will show the original version identical to the intercepted email version.

Probably the best overall solution would be a bootable mini-CD; a mini Linux distro would give a GUI and still leave room for conventional encryption packages, stego packages and the user's secret/public keyring, would leave no trace on the HD at all (no matter how good the forensic package), can be hidden in a wallet amongst credit cards, and can be destroyed trivially by simply scratching off the printed surface with the back of a key or against a rough surface such as a wall or stone paving slab (i.e., drop it face down, then stand on it and move your foot back and forth until you have an oblong of worthless plastic and a slightly messy walkway). Assuming stego, you could load digicam photos (either via a driver on the mini-CD or via Windows, whichever you happen to be using at the time) not long after they were taken, for later stego purposes, and the space they use on the digicam could be reused for more photos before the first set were used for stego (or again, if in a hurry, just remove and discard the SD card from the cam).
Re: SHA1 broken?
Joseph Ashwood wrote: I believe you are incorrect in this statement. It is a matter of public record that RSA Security's DES Challenge II was broken in 72 hours by $250,000 worth of semi-custom machine; for the sake of solidity let's assume they used 2^55 work to break it. Now moving to a completely custom design, bumping up the cost to $500,000, and moving forward 7 years, delivers ~2^70 work in 72 hours (give or take a couple orders of magnitude). This puts the 2^69 work well within the realm of realizable breaks, assuming your attackers are smallish businesses, and if your attackers are large businesses with substantial resources the break can be assumed in minutes if not seconds. 2^69 is completely breakable. Joe It's fine assuming that Moore's law will hold forever, but without that you can't really extrapolate a future tech curve. With *today's* technology, you would have to spend an appreciable fraction of the national budget to get a one-per-year break - though note that anything that has been hashed with SHA-1 would then have to be considered breakable (and a break would allow you to, for example, forge a digital signature given a sample). This of course assumes that the break doesn't match the criteria of the previous breaks by the same team - i.e., that you *can* create a collision, but have little or no control over the plaintext of the colliding elements. There is no way to know, as the paper hasn't been published yet.
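As a sanity check on the arithmetic being quoted, here is the extrapolation redone under explicit assumptions (an 18-month Moore's-law doubling period and a doubled budget - both assumptions mine, not necessarily Ashwood's exact model). Those two factors alone only get from 2^55 to roughly 2^61 of work in 72 hours; the balance of the claimed ~2^70 has to come from the semi-custom-to-fully-custom move, which is exactly where the "give or take a couple orders of magnitude" lives.

```python
import math

# DES Challenge II baseline: ~2^55 work in 72 hours for $250K (1998).
base_log2_work = 55

years = 7                           # 1998 -> 2005
moore_doublings = years * 12 / 18   # one throughput doubling every 18 months
budget_doublings = 1                # $250K -> $500K doubles throughput again

log2_work = base_log2_work + moore_doublings + budget_doublings
print(f"~2^{log2_work:.1f} work in 72 hours from Moore's law + budget alone")
```

Running this shows how much of the gap between 2^55 and 2^70 the stated factors actually cover, and how much is left to hand-waving about custom silicon.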
Re: Quantum cryptography gets practical
Dave Howe wrote: I think this is part of the purpose behind the following paper: http://eprint.iacr.org/2004/229.pdf which I am currently trying to understand and failing miserably at *sigh* Nope - finally struggled to the end, only to find a section pointing out that it does *not* prevent MITM attacks. Anyone seen a paper on a scheme that does?
Re: IBM's original S-Boxes for DES?
Steven M. Bellovin wrote: It was only to protect against differential cryptanalysis; they did not know about linear cryptanalysis. More accurately, they didn't protect against linear cryptanalysis - there is no way to know whether they knew about it and either didn't want to make changes to protect against it (they weakened the key, so may have wished to keep *some* attacks viable to weaken it still further), had to choose (protecting against *either* differential or linear, as they didn't know how to protect against both), or whether the people doing the evaluation on DES simply didn't know, as it was rated above their clearance level. We only have a single event to go on (that DES was indeed protected against one and not the other), so we can't really judge motivation or knowledge.
Re: They Said It Couldn't Be Done
R. A. Hettinga wrote: Nevada has taken the lead on paper trails not only in its own elections, but also in Congress. Its senators - John Ensign, a Republican, and Harry Reid, a Democrat - have co-sponsored the bipartisan Voting Integrity and Verification Act, one of a number of pending bills that would require that all electronic voting machines produce voter-verifiable paper trails. Congress should pass such legislation right away so all Americans can have the same confidence in their elections as Nevadans now have. I must admit I am surprised a new law is needed. Under the Help America Vote Act 2002, electronic voting machines appear to have the following audit requirement: TITLE III--UNIFORM AND NONDISCRIMINATORY ELECTION TECHNOLOGY AND ADMINISTRATION REQUIREMENTS Subtitle A--Requirements SEC. 301. NOTE: 42 USC 15481 VOTING SYSTEMS STANDARDS. (a) Requirements.--Each voting system used in an election for Federal office shall meet the following requirements: (2) Audit capacity.-- (A) In general.--The voting system shall produce a record with an audit capacity for such system. (B) Manual audit capacity.-- (i) The voting system shall produce a permanent paper record with a manual audit capacity for such system. (ii) The voting system shall provide the voter with an opportunity to change the ballot or correct any error before the permanent paper record is produced. (iii) The paper record produced under subparagraph (A) shall be available as an official record for any recount conducted with respect to any election in which the system is used. (taken from http://www.fec.gov/hava/law_ext.txt ) So unless there is an amendment to that law (that I am obviously unaware of), it isn't up to individual States to add this as an additional requirement - it's already required. Perhaps someone could enlighten me?
Re: How a Digital Signature Works
R. A. Hettinga wrote: The publisher first has to obtain a digital certificate from a recognized certificate authority or CA (VeriSign (VRSN) is the largest and best known CA in the U.S.). The publisher receives a private and a public key, each of which is a long number of about 300 digits. These are used to create a digital signature for each program (see BW Online, 8/10/04, Windows of Vulnerability No More?). And which will guarantee to... erm... *try* not to sell the same certificate to someone else, or at least to notice if they do (provided it has a famous name on it like Microsoft, of course). And what is new about MS's signed-executable support? It's been around long enough...
Re: Use cash machines as little as possible
Anne Lynn Wheeler wrote: ONE of Britain's biggest banks is asking customers to use cash machines as little as possible to help combat soaring card fraud. That's odd - given a deliberate policy of encouraging cash machine use over the last few years, as cash machine costs plus fraud still come to less than the running costs of sufficient local branches to allow you to obtain *your* money back from them when needed.
Re: A National ID
R. A. Hettinga wrote: If we're going to move to a national identification card, we can't afford to do it badly. Now is the time to figure out how to create a card that helps identify people but doesn't rob them of a huge swath of their civil liberties in the process. Just watch how the British do it - then don't do it that way. I am still trying to figure out how over a decade of terrorist bombings in mainland UK didn't justify introducing a national ID card - but the Americans wanting biometric passports for visitors does.
Re: Software Helps Rights Groups Protect Sensitive Information
R. A. Hettinga wrote: To prevent loss or theft, the data is backed up automatically and redundantly on dedicated Martus servers in Manila, Toronto, Seattle and Budapest. Nobody can read the files without access to the original user's cryptography key and password -- with the exception of sophisticated code-cracking organizations such as the U.S. National Security Agency or China's Public Security Bureau. I might be missing something here, but - exactly how does a system insecure enough that interested governments can crack it help protect people who are releasing information concealed by those governments?
Re: Yahoo releases internet standard draft for using DNS as public key server
Ian Grigg wrote: Dave Howe wrote: No - it means you might want to consider a system that guarantees end-to-end encryption - not just first link, then maybe if it feels like it. That doesn't mean TLS is worthless - on the contrary, it adds an additional layer of both user authentication and session encryption that are both beneficial - but *relying* on it to protect your messages is overoptimistic at best, dangerous at worst. This I believe is a bad way to start looking at cryptography. There is no system that you can put in place that you can *rely* upon to protect your message. No, there are plenty that you can rely on to protect your message while still in transit. If you can ensure that the only possible points of vulnerability are at the two endpoints, then you and your correspondent take control of your security - it won't be perfect, as you point out - but you won't be reliant on the goodwill and efforts of some third party whose most economic option is to accidentally or deliberately neglect TLS between your local smart host and your correspondent's email spooler, or indeed, to supply minimal security to the email spools at smarthost or destination. (Adi Shamir again: #1 there are no secure systems, ergo, it is not possible to rely on them, and to think about relying will take one down false paths.) Secure systems exist - but are rarely worth the effort involved. Many PDAs can handle PGP or S/MIME traffic these days - certainly, you could offload your message (already encrypted) to flash media, insert it into the sending host, receive it (from the email spool) at the destination and transfer it to flash media, then insert that into the decoding PDA. To compromise either PDA would require access - so if you keep it about your person (and within sight when you bathe), you should be safe against anything but a midnight intrusion with sleeping gas. But regardless - the level of defence required is proportional to the likely threat.
It is entirely possible that it would be worthwhile for some hacker to compromise a router between your ISP's mail server and your correspondent's spool, or that spool itself. It is less likely that it would be worth someone's while to break into your home with exquisite timing and tracelessly alter software on your trusted airgapped machine while you shower (and if that *is* your threat model, I envy the income you must get to justify being in such a position, or bow to the value of your information to some repressive regime). Otherwise, we adopt what military people call tactical security: strong enough to keep the message secure enough so that most of the time it does the job. Indeed so. The principle which needs to be hammered time and time again is that cryptography, like all other security systems, should be about risk and return - do what you can and put up with the things you can't. Again, true. I suspect we differ in what we consider an acceptable risk - I don't consider any setup where the security of the channel is against the best interests of the people controlling that channel acceptable - especially where I have no way to discover if that channel was compromised. I have what I hope is an acceptably secure system at home - and I also hope my correspondents do likewise. If our messages are compromised (not that they contain anything worth stealing) then it is my fault or theirs - not an admin at the ISP, or some minimum-wage employee on a helpdesk bribed to let someone take a peek at my mailspool. This extra security comes free, gratis, not a penny does it cost - beyond the effort of learning how to use it - and while I was used to hotkeying my way into the current window, my recent switch to Enigmail means I don't even have to do that. Why would I settle for less? Applying the specifics to things like TLS and mail delivery - yes, it looks very ropey.
Why, for example, people think that they need CA-signed certs for such a thing when (as you point out) the mail is probably totally unprotected for half the journey is just totally mysterious. And indeed I had a conversation with someone who was interested in a secure mailing list only a few days ago. I suggested he not bother and just set up an HTTPS website with any one of a dozen BBoard systems and local certificate support - because that was free, and all the complexity (and most of the vulnerabilities) are at the server side - while setting up a secure email burster would be almost impossible and would rely on not only training the end users, but ensuring they have the right software installed.
Acoustic Cryptanalysis for RSA?
Opinions? http://www.wisdom.weizmann.ac.il/~tromer/acoustic/
Vulnerability in the WinZip implementation of AES?
http://www.cse.ucsd.edu/users/tkohno/papers/WinZip/ Abstract: WinZip is a popular compression utility for Microsoft Windows computers, the latest version of which is advertised as having easy-to-use AES encryption to protect your sensitive data. We exhibit several attacks against WinZip's new encryption method, dubbed AE-2 or Advanced Encryption, version two. We then discuss secure alternatives. Since at a high level the underlying WinZip encryption method appears secure (the core is exactly Encrypt-then-Authenticate using AES-CTR and HMAC-SHA1), and since one of our attacks was made possible because of the way that WinZip Computing, Inc. decided to fix a different security problem with its previous encryption method AE-1, our attacks further underscore the subtlety of designing cryptographically secure software.
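The Encrypt-then-Authenticate core the abstract names can be sketched with the standard library alone. One substitution to note: Python's stdlib has no AES, so a SHA-256 counter-mode keystream stands in for AES-CTR here - the structural point (the MAC covers the ciphertext and is verified before any decryption) is what the sketch shows, not the cipher itself.

```python
import hashlib
import hmac

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Stand-in for AES-CTR: a counter-mode keystream built from SHA-256.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # Encrypt-then-Authenticate: the MAC is computed over the *ciphertext*,
    # so tampering is detected before any decryption happens.
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct + tag

def open_sealed(enc_key: bytes, mac_key: bytes, nonce: bytes, sealed: bytes) -> bytes:
    ct, tag = sealed[:-32], sealed[-32:]
    if not hmac.compare_digest(hmac.new(mac_key, nonce + ct, hashlib.sha256).digest(), tag):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

As the paper's point illustrates, even with a sound core like this, the surrounding format decisions (what the MAC covers, how metadata is bound in) are where the attacks live.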
Re: Do Cryptographers burn?
Hadmut Danisch wrote: - He didn't find any single mistake. He just says that everything is already known and taken from literature. Certainly possible - if he didn't know (or deliberately ignored) that it had been written in 1988 :) How much of it is *still* new, or at least hard to find in the literature? How much of it would be known *today* out of hand by someone familiar with the state of the art? If the university had instructed him to take a look at your work in that context, he may well not have found anything new or novel in there - because your work had since been duplicated, and after 16 years I would expect it to have been duplicated several times. If he had been instructed to find pre-1987 published work that duplicated yours, that would be different - but I would assume the university neglected that direction while instructing him. Maybe it's a minority writing false expert reports. But it's a majority accepting that. We have the same problem with expert witnesses in court here in the UK - after a while, prosecutors learn which experts can be relied on to give the answer they want rather than admit it is a matter of opinion and either case could be correct. Such experts get a lot more work from the prosecution for their "unbiased" opinions than those who gave an unbiased opinion the prosecution didn't like (it isn't unknown for the prosecution to approach three or four experts and take the most favourable return to court). So my doubt is not so much about that someone found the magic way to factorize. It's about someone intentionally selling snake-oil or backdoors, and others keeping their mouths shut and tolerating this, as they do it here. No, it isn't. It is about someone deliberately choosing to concentrate on the worst aspects of a 16-year-old dissertation (almost certainly, that it is 16 years out of date) and ignoring the context. I am sure if I paid 100 experts to evaluate *anything* I could find at least one whose resulting report I liked.
I am not too surprised either - for the reasons I have detailed above. I know it is hard to have fought this way through the legal system only to find the university has tried to throw money at the problem to make it go away - but it happens, and I can only assume you will eventually prove it in court. What you have here is a legal problem with some individuals, whom their employer has chosen to back against a student, and in doing so bent any or all rules it could to win. This says little about the individual who wrote the new examination and more about your opponents in the university's legal team. BTW, is there any way you can find out how many experts were asked to evaluate your work before they found one whose answer they liked?
Re: Do Cryptographers burn?
Do Cryptographers burn? Sometimes they blush hard enough to ignite, if that helps :) Cryptography is a lot about math, information theory, proofs, etc. But there's a certain level where all this is too complicated and time-consuming to follow all those theories and claims. At a certain point cryptography is based on trusting the experts. This is universally true, though - nobody can live long enough to work through the theory and practice of almost anything. Consider for example a classic (no computerised engine management or braking control) automobile. You would need to understand every part of chemical theory that relates to petroleum fractions and additives commonly found in car fuel; thermodynamic, materials and physical theory and engineering design that relates to the functions of the engine and its mechanical coupling to the drive wheels; the differential, the gearing, the steering, the braking, the materials that form the tyres, the ergonomic design of the seating area, vision angles through the windshield, glass (materials) theory for the material that forms the windshield (toughness, resistance to random impacts, refraction through the medium), the materials of the road surface and how they behave under different conditions of wet, dry, temperature... and that is just the generic stuff. Once you get down to a specific instance, you have to decide if your instance of car meets the theoretical data you learnt on an abstract car, and if the instance of road you are driving on meets similar data you have on abstract driving surfaces. Even those who work in that field can take decades to reach the point they could design one component of a modern automobile - the engine, the gearbox, the chassis and so on.
It would be insane to learn all that if you just wanted to drive to work in the mornings. At a basic level, you have to define basic functions in terms that an expert can verify and say yes, that is a truism - then you can drop them into more complex systems in different patterns, not knowing if the system will work but able to rely on the components to perform within their design parameters. The same is true of cryptography; an accepted algo will have been hammered on and peer-reviewed by dozens of people - and as even a minor predictability under extreme conditions using a simplified form of an algo is worth writing up a paper on, there is a world of pre-established work to review and verify for even the most ambitious would-be cryptanalyst to cut his teeth on (and use as training examples to apply similar techniques to algos that have not yet had that attack publicly attempted on them). So, if the basic level for a mathematician is the maths within the algorithm, then the basic level for a programmer is the algorithm itself, as a process to produce ciphertext from plain, or plaintext from crypto. Normally, a programmer won't worry about verifying the algo - he will accept that as part of the design he is to implement, and if he has a choice of several suitable algos, will simply implement them all as alternative settings. Programmers love to re-use code though - it saves a *lot* of work, and as you become familiar with and improve the code you can feed back changes to earlier software, improving its efficiency for free. Of course, for this to work well, the code must be independent of the body of the software, have clearly defined interfaces (so that you can't mess up an earlier implementation when improving a later one, by forgetting a side effect you earlier relied on but don't need any more) and indeed act as a basic component itself - forming a subprogram that you hand data to and receive data back from as a black-box operation.
Another strength of doing your crypto as such a library of pre-written code is that you can test and prove it independently of your main program - you can hand your crypto library to your peers for their review, you can encourage its use (thus ensuring good crypto in a wide range of software, improving compatibility, and not incidentally improving your peer reputation :) and generally maintain your library as a product in its own right. At the end of the chain is a programmer who doesn't want to know about how the algo works, would rather not have to know the details of how to make it work, but likes the idea of being able to write something like

  use MyFirstCryptoLibrary;
  ask user for message, store in [messagedata]
  ask user for key, store in [key]
  DoCrypto(chosenalgo, [key], [messagedata]), store in [encrypted message]
  ask user for destination, store in [EmailToName]
  SendEmailTo([EmailToName], [encrypted message])

and have it work. And indeed, until the average programmer *can* use a library that easily to include decent crypto in his product, the majority of the products out there will either not support crypto at all, or do it badly. Is there anyone on this list who can claim to have read and understood all those publications about cryptography? I can
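The pseudocode above might render into real code along these lines - a hypothetical facade whose names are all invented here, with a deliberately toy repeating-key XOR registered purely so the interface is runnable; a real library would register vetted ciphers behind the same two calls.

```python
from itertools import cycle

def _xor(key: bytes, data: bytes) -> bytes:
    # Toy placeholder cipher (repeating-key XOR) - NOT secure; it only
    # exists so the facade below has something to dispatch to.
    return bytes(d ^ k for d, k in zip(data, cycle(key)))

# Registry of (encrypt, decrypt) callables, keyed by algorithm name.
# Adding a new algorithm is one new entry; callers never change.
_ALGOS = {"demo-xor": (_xor, _xor)}

def do_crypto(chosen_algo: str, key: bytes, message: bytes) -> bytes:
    encrypt, _ = _ALGOS[chosen_algo]
    return encrypt(key, message)

def undo_crypto(chosen_algo: str, key: bytes, message: bytes) -> bytes:
    _, decrypt = _ALGOS[chosen_algo]
    return decrypt(key, message)
```

The design choice matches the black-box argument above: the calling program names an algorithm and hands over key and data, and everything behind the registry can be reviewed, improved and swapped without touching the caller.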
Re: PKI root signing ceremony, etc.
Peter Gutmann wrote: Dave Howe [EMAIL PROTECTED] writes: Key management and auditing is pretty much external to the actual software regardless of which solution you use I would have thought. Not necessarily. I looked at this in an ACSAC'2000 paper (available from http://www.acsac.org/2000/abstracts/18.html). This uses a TP-capable database as its underlying engine, providing the necessary auditing capabilities for all CA operations. This was designed to meet the security/auditing requirements in a number of PKI standards (see the paper for full details, I've still got about 30cm of paper stacked up somewhere from this). The paper is based on implementation experience with cryptlib; you can't do anything without generating an audit trail, provided you have proper security on the TP system (that is, a user can't inject arbitrary transactions into the system or directly access the database files). I tested the setup by running it inside a debugger and resetting/halting the program at every point in a transaction, and it recovered from each one. It can be done, it's just a lot of work to get right. *nods* I meant in this context - certainly, a well-designed CA package would enforce security and audit trailing (I can easily visualise one that uses a composite (split) access key, n of m, and could probably code up such a tool in a day or so) but Rich's original design had no audit or key management other than that imposed externally on the (essentially flatfile) structure of the OpenSSL command-line tools. I should mention after having done all that work that most CAs rely on physical and personnel security more than any automatic logging/auditing. Take a PC and an HSM, lock it in a back room somewhere, and declare it a secure CA. *nods* and that is probably as secure as any other method, and a *lot* more secure than a safe exe running on insecure hardware.
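The "composite (split) access key, n of m" idea mentioned above is easy to sketch for the all-shares-required case: an XOR split gives n-of-n sharing, where any subset short of all n shares reveals nothing about the key. A true n-of-m threshold needs something like Shamir's secret sharing; this simpler variant is only meant to show the shape.

```python
import os
from functools import reduce

def _xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list:
    # n-of-n XOR split: n-1 purely random shares, plus one final share
    # that XORs the randomness back out. Every share is required to
    # recover the key; any n-1 shares are indistinguishable from noise.
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(reduce(_xor_bytes, shares, key))
    return shares

def join_key(shares: list) -> bytes:
    # XOR of all shares cancels the randomness, leaving the key.
    return reduce(_xor_bytes, shares)
```

Handing one share each to separate officers gives exactly the dual-control property a CA ceremony wants: no single custodian (or stolen share) compromises the signing key.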
Re: Cryptophone locks out snoopers
Ian Grigg wrote: (link is very slow:) http://theregister.co.uk/content/68/34096.html Cryptophone locks out snoopers By electricnews.net Posted: 20/11/2003 at 10:16 GMT I see the source release has been put back... again.
Re: Test of BIOS Spyware
Ralf-P. Weinmann wrote: This is *NOT* the interesting part. The interesting part is the payload it is to deliver. The claim This enables the software to spy on the user and remain hidden to the operating system. rather interests me. How do they achieve this in an OS-agnostic fashion? They won't even try - I am under the impression this is for use as a black bag job, possibly even remotely; they can target the machine with a specific update for the currently running OS.
Re: Easy VPNs?
Ian Grigg wrote: I'm curious - my understanding of a VPN was that it set up a network that all applications could transparently communicate over. spot on. Port forwarding appears not to be that, in practice each application has to be reconfigured to talk to the appropriate port, or, each port has to be forwarded. also correct Am I missing something here? If there is an easy SSH based strategy for VPNs, what is it? what you are missing is joining the dots. the VPN part requires that a server process be running that intercepts packets destined for the remote end of the VPN (usually a virtual network card or ip stack shim). That says nothing about how the data gets from *that* intercept server to the matching server at the receiving end - the transport method. IPSec uses an assortment of custom IP protocol types and standard tcp/udp connections. ssl vpn uses an ssl encrypted tcp/ip connection, but there is no reason why the two intercept servers couldn't talk to each other over (for example) a ssh tunnel, zebedee, or whatever else takes its author's fancy. In practice, you want the tunnel to have low overhead, so udp is often used; tcp however traverses NAT and PAT servers much more easily, and the additional convenience of ssh transport (being an existing, established standard that uses only a single port and that firewalls - and their admins - are already familiar with) may be of more value than the more complex and less well understood (and damned hard to get through anything, including firewalls) IPSec. so as I say - think of vpn as two components - intercept (the virtual network functionality) and transport (a secure, authenticated, encapsulated communications standard) and vpn over *anything* becomes more clear.
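As a toy illustration of that two-component split: the transport half is conceptually just a byte-copier between the intercept layer and whatever secure channel carries the traffic. A minimal sketch in Python, where the socketpairs stand in for the virtual NIC and the encrypted tunnel - the names and setup here are invented for illustration, not any real VPN's code:

```python
import socket
import threading

def relay(src, dst, bufsize=4096):
    """Copy bytes from src to dst until src closes -- the 'transport' half
    of a VPN. It neither knows nor cares what the intercept layer captured."""
    while True:
        data = src.recv(bufsize)
        if not data:
            break
        dst.sendall(data)

# One direction of a tunnel: intercept -> transport.
a1, a2 = socket.socketpair()   # stands in for the virtual network card (intercept)
b1, b2 = socket.socketpair()   # stands in for the secure channel (e.g. an ssh tunnel)

threading.Thread(target=relay, args=(a2, b1), daemon=True).start()
a1.sendall(b"packet destined for the far end")
received = b2.recv(4096)       # the transport delivers it unchanged
print(received)
```

Swap the socketpairs for a tun device on one side and an ssh/ssl/udp channel on the other and you have the shape of a real VPN.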
Re: Monoculture
slightly ranting, you might want to hit del now :) Ian Grigg wrote: What is written in these posts (not just the present one) does derive from that viewpoint and although one can quibble about the details, it does look very much from the outside that there is an informal Cryptographers Guild in place [1]. I don't think the jury has reached an opinion on why the cryptography group looks like a guild as yet, and it may never do so. A guild, of course, is either a group of well-meaning skilled people serving the community, or a cartel for raising prices, depending on who is doing the answering. To me it seems more like an academic community - particularly the way many can't handle the concept of good enough but look for theoretically perfect solutions that may be unworkable in the Real World. And yes, I *am* an outsider - I dabble a little, and I am a programmer, but I am the first to admit my math skills are nowhere near adequate to make any meaningful contribution to the field. It seems to me there is no more a cryptography guild than a linux guild - yes, you get advocates who foam at the mouth if you say the wrong thing, but the majority seem more interested in getting it to work. From my POV as a programmer, learning the field consists of identifying the available building blocks (hash, symmetric, asymmetric), standards (openpgp, x509, ssl, ssh, ipsec) and prior implementations (paying particular attention to what had to be patched due to discovered vulnerabilities, so as to avoid the same errors in my own code). It also seems the crypto community is very open to questions, very hostile to statements - so often knowing how to phrase something to them is as important as the content of the question. Stating I am doing $FOO will not be as productive as If I were to do $FOO, what vulnerabilities would that introduce?
- remembering that any good advice you get back for free would have probably cost you weeks of study or possibly thousands of dollars trying to obtain a security certification for your solution later on. Just ignore any posts of because it isn't done that way unless they give a good reason why your way isn't better (note as good isn't good enough - you always need a good reason to stray from a tested and known path, and it is often worth putting up with a few minor inconveniences to stay on it). Oh - and make sure you can recognise a good reason when you see it :) The guild would like the application builder to learn the field. They would like him to read up on all the literature, the analyses. To emulate the successes and avoid the pitfalls of those protocols that went before them. The guild would like the builder to present his protocol and hope it will be taken seriously. The guild would like the builder of applications to reach acceptable standards. I would certainly expect a house builder to know how to lay bricks - but if he insisted on designing the house too, I would expect him to know how to do that (and not just start putting up walls and hoping it will all work out later). Design requires a fair understanding of what you are designing and what the capabilities and limitations of the materials are - this is why SAs get paid more than their programming teams (not that I like that, given I am a programmer not an SA). If you aren't willing to learn how to do that, you can still follow someone else's design - or take a modular approach and just drop pre-built units (normally libraries) into those parts of the code that need them. Libraries can be surprisingly good - if the designer put in enough effort, they can have sufficient inline machine code for the timing-critical parts that they are noticeably more efficient than implementing your own code in a medium or high level language.
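In that drop-in-a-library spirit, at least two of the building blocks mentioned above ship with Python's standard library; a minimal sketch of using the vetted primitives (hashlib, hmac) instead of reimplementing them - the message and key here are just placeholders:

```python
import hashlib
import hmac

# Hash: fingerprint a message (one of the basic building blocks).
digest = hashlib.sha256(b"attack at dawn").hexdigest()

# MAC: keyed hash for integrity/authentication. Use the library's
# HMAC construction rather than rolling your own H(key || msg).
tag = hmac.new(b"shared-key", b"attack at dawn", hashlib.sha256).hexdigest()

print(digest[:16], tag[:16])
```

The symmetric and asymmetric blocks would come from a vetted external library in exactly the same way - identify the block, drop it in, don't reinvent it.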
And, the guild would like the builder to take the guild seriously, in recognition of the large amounts of time guildmembers invest in their knowledge. That does tend to happen - in any community, you get those who get used to being authorities, and react badly to being challenged. At least in this community most of them have the sense to back down when proved wrong :) None of that is likely to happen. The barrier to entry into serious cryptographic protocol design is too high for the average builder of new applications [2]. He has, after all, an application to build. Indeed so - that is why using a prebuilt standard (or better yet, a library) as your base is such a good idea. However, a lot of programmers don't like doing that because they feel it is either cheating or means all their hard work is going to be dismissed as just an implementation of someone else's idea rather than something original and novel. However, the odds of someone rolling their own protocol getting something more efficient or effective than work that has already been done are low - and if the package you put together is sufficiently good, no users will care that it uses SSH (protocol) for comms or someone else's AES library for
Re: Monoculture
Jill Ramonsky wrote: Is it possible for Bob to instruct his browser to (a) refuse to trust anything signed by Eve, and (b) to trust Alice's certificate (which she handed to him personally)? (And if so, how?) I am very much hoping that you can answer both (a) and (b) with a yes, ok then yes :) What it comes down to is a browser will trust any certificate either a) explicitly marked as trusted or b) signed by a root CA in its root certificate store so the correct procedure for (a) is for Bob to delete Eve's root certificate from his root store. for (b) he can either explicitly mark Alice's cert as accepted, or (technically more interesting) if he trusts her as introducer add her root cert - which is the same thing if she self-signed her cert - to his root store, so that *any* cert she signs is accepted.
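Those two acceptance rules reduce to a one-line check. A toy model in Python - this is not real X.509 chain validation (no chains, expiry, or signature checking), and every field name and value is invented for illustration:

```python
def browser_trusts(cert, root_store, explicit_trust):
    """A cert is accepted if it is explicitly marked trusted,
    or if its issuer's root certificate is in the root store."""
    return cert["fingerprint"] in explicit_trust or cert["issuer"] in root_store

root_store = {"Eve's Root CA"}
explicit_trust = set()
alice = {"fingerprint": "aa:bb", "issuer": "Alice"}

# (a) Bob deletes Eve's root: nothing Eve signs is trusted any more.
root_store.discard("Eve's Root CA")

# (b) Bob explicitly marks Alice's cert as accepted...
explicit_trust.add("aa:bb")
print(browser_trusts(alice, root_store, explicit_trust))  # True
# ...or, trusting her as an introducer, he adds her self-signed root
# to root_store instead, so that *any* cert she signs is accepted.
```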
Re: why are CAs charging so much for certs anyway? (Re: End of the line for Ireland's dotcom star)
Joel Sing wrote: Hi Adam, I believe they have, at least to a large degree. InstantSSL (www.instantssl.com) sell 128-bit certificates for $49USD/annum. Certainly far cheaper than the VeriSign or Thawte equivalent. This is their 'base' level service which comes with a $50USD warranty, email based support and a 30 day refund/reissue policy. One of our clients uses one of their certificates and we haven't had an issue with it. What is their browser coverage like?
Re: Pre-cursor to Non-Secret Encryption
John Young wrote: James Ellis, GCHQ, in his account of the development of non-secret encryption credits a Bell Laboratories 1944 report on Project C-43 for stimulating his conception: However the concept seems familiar enough - unless I am missing something, a PRNG (n for noise rather than number this time) in sync with a similar PRNG at the recipient end is mixed with the plaintext signal to give a cryptotext; the matching unit subtracts the same values from the received signal to give the original plaintext. If it were digital we would probably xor it :)
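If it were digital, the synchronized-noise scheme would look something like the following sketch. Note that random.Random is emphatically not a cryptographic PRNG - this illustrates the C-43 concept only, not a usable cipher, and the seed and message are placeholders:

```python
import random

def keystream(seed, n):
    """The synchronized PRNG at each end ('n' for noise)."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

def mix(data, seed):
    """Mix the noise into the signal. Applying it a second time subtracts
    it back out, since XOR is its own inverse -- the digital analogue of
    the analogue add/subtract."""
    return bytes(a ^ b for a, b in zip(data, keystream(seed, len(data))))

ciphertext = mix(b"attack at dawn", 43)   # sender adds the noise
recovered = mix(ciphertext, 43)           # receiver, in sync, removes it
print(recovered)  # b'attack at dawn'
```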
Re: An attack on paypal
in a world where there are repeated human mistakes/failures at some point it is recognized that people aren't perfect and the design is changed to accommodate people's foibles. in some respects that is what helmets, seat belts, and air bags have been about. The problem here is that we are blaming the protective device for not being able to protect against the deliberate use of an attack that bypasses, rather than challenges, it - by exploiting the gullibility or path-of-least-resistance tendency of the user. The real weakness in HTTPS is the tendency of certificates signed by Big Name CAs to be automagically trusted - even if you have never visited that site before. yes, you can fix this almost immediately by untrusting the root certificate - but then you have to manually verify each and every site at least once, and possibly every time if you don't mark the cert as trusted for future reference. To blame HTTPS for an attack where the user fills in a web form received via html-rendering email (no https involved at all) is more than a little unfair though. in the past systems have designed long, complicated passwords that are hard to remember and must be changed every month. that almost worked when a person had to deal with a single shared-secret. when it became a fact of life that a person might have tens of such different interfaces it became impossible. It wasn't the fault of any specific institution, it was a failure of humans being able to deal with large numbers of extremely complex, frequently changing passwords. Because of known human foibles, it might be a good idea to start shifting from an infrastructure with large numbers of shared-secrets to a non-shared-secret paradigm. I am not aware of one (not that that means much, given I am a novice in this field). Even PKI relies on something close to a shared secret - a *trustworthy* copy of the public key, matching a secret copy of the private key.
In x509, this trustworthiness is established by an Ultimately Trusted CA; in pgp, by the Web Of Trust, in a chain leading back to your own key; in SSH, by your placing of the public key into your home dir manually (using some other form of authentication to presumably gain access). in each of these cases, the private key will almost invariably be protected by a passphrase; at best, you can have a single passphrase (or even single private key) to cover all bases... but that just makes that secret all the more valuable. at a recent cybersecurity conference, somebody made the statement that, of the current outsider internet exploits, approximately 1/3rd are buffer overflows, 1/3rd are network traffic containing a virus that infects a machine because of automatic scripting, and 1/3rd are social engineering (convincing somebody to divulge information). As far as I know, eavesdropping on network traffic doesn't even show as a blip on the radar screen. That is pretty much because defence occupies the position of the interior - attackers will almost invariably attack weak points, not strong ones. It is easy to log and calculate how many attacks happen on weak points, but impossible to calculate how many attacks *would* have happened had the system not been in place to protect against such attacks, so the attackers moved on to easier targets. It makes little sense to try and break one https connection (even at 40 bit) if by breaking into the server you get that information, hundreds of others (until discovered) and possibly thousands of others inadvisedly stored unprotected in a database. snip The types of social engineering attacks then become convincing people to insert their hardware token and do really questionable things or mailing somebody their existing hardware token along with the valid pin (possibly as part of an exchange for replacement). The cost/benefit ratio does start to change, since there is now much more work on the crook's part for the same or less gain.
One could also claim that such activities are just part of child-proofing the environment (even for adults). On the other hand, it could be taken as analogous to designing systems to handle observed failure modes (even when the failures are human and not hardware or software). Misc. identity theft and credit card fraud reference: Which again matches well to the Nigerian analogy. Everyone *knows* that handing over your bank details is a Bad Thing - yet they still do it.