RE: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Dave Korn
Eric Rescorla wrote on 08 August 2008 16:06:

 At Fri, 8 Aug 2008 11:50:59 +0100,
 Ben Laurie wrote:
 However, since the CRLs will almost certainly not be checked, this
 means the site will still be vulnerable to attack for the lifetime of
 the certificate (and perhaps beyond, depending on user
 behaviour). Note that shutting down the site DOES NOT prevent the attack.
 
 Therefore mitigation falls to other parties.
 
 1. Browsers must check CRLs by default.
 
 Isn't this a good argument for blacklisting the keys on the client
 side?

  Isn't that exactly what "Browsers must check CRLs" means in this context
anyway?  What alternative client-side blacklisting mechanism do you suggest?

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Dave Korn
Eric Rescorla wrote on 08 August 2008 17:58:

 At Fri, 8 Aug 2008 17:31:15 +0100,
 Dave Korn wrote:
 
 Eric Rescorla wrote on 08 August 2008 16:06:
 
 At Fri, 8 Aug 2008 11:50:59 +0100,
 Ben Laurie wrote:
 However, since the CRLs will almost certainly not be checked, this
 means the site will still be vulnerable to attack for the lifetime of
 the certificate (and perhaps beyond, depending on user
 behaviour). Note that shutting down the site DOES NOT prevent the
 attack. 
 
 Therefore mitigation falls to other parties.
 
 1. Browsers must check CRLs by default.
 
 Isn't this a good argument for blacklisting the keys on the client
 side?
 
   Isn't that exactly what "Browsers must check CRLs" means in this
 context anyway?  What alternative client-side blacklisting mechanism do
 you suggest? 
 
 It's easy to compute all the public keys that will be generated
 by the broken PRNG. The clients could embed that list and refuse
 to accept any certificate containing one of them. So, this
 is distinct from CRLs in that it doesn't require knowing
 which servers have which cert...

*scurries off to read CRL format in RFC*

  Oh, you can't specify them solely by key, you have to have all the
associated metadata.  That's annoying, yes, I understand your point now.

  IIRC various of the vendors' sshd updates released in the immediate wake
of the Debian catastrophe do indeed block all the weak keys.
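
  For what it's worth, here is a minimal sketch of that client-side idea (not
any vendor's actual code): ship a sorted table of fingerprints of the keys the
broken PRNG can emit, and refuse any certificate whose key fingerprint appears
in the table.  The fingerprint computation is elided, and the table entries
below are placeholders, not real Debian-weak-key fingerprints.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sorted list of hex fingerprints of known-weak (Debian PRNG) public keys.
   These entries are placeholders, not real fingerprints. */
static const char *weak_key_fprs[] = {
  "0123456789abcdef0123456789abcdef01234567",
  "89abcdef0123456789abcdef0123456789abcdef",
};

static int cmp_fpr(const void *key, const void *elem)
{
  return strcmp((const char *)key, *(const char *const *)elem);
}

/* Return nonzero if the presented key's fingerprint is blacklisted. */
static int key_is_blacklisted(const char *fpr)
{
  return bsearch(fpr, weak_key_fprs,
                 sizeof weak_key_fprs / sizeof weak_key_fprs[0],
                 sizeof weak_key_fprs[0], cmp_fpr) != NULL;
}

int main(void)
{
  const char *fpr = "89abcdef0123456789abcdef0123456789abcdef";
  printf("%s: %s\n", fpr,
         key_is_blacklisted(fpr) ? "REJECT (weak key)" : "ok");
  return 0;
}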


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: how bad is IPETEE?

2008-07-11 Thread Dave Korn
John Ioannidis wrote on 10 July 2008 18:03:

 Eugen Leitl wrote:
 In case somebody missed it,
 
 http://www.tfr.org/wiki/index.php?title=Technical_Proposal_(IPETEE)
 
 
 If this is a joke, I'm not getting it.
 
 /ji

  I thought the bit about "Set $wgLogo to the URL path to your own logo
image" was quite funny.  But they did misspell 'teh' in "Transparent
end-to-end encryption for teh internets".

  It does sound a lot like SSL/TLS without certs, i.e. SSL/TLS weakened to
make it vulnerable to MitM.  Then again, if no Joe Punter ever knows the
difference between a real and spoofed cert, we're pretty much in the same
situation anyway.

  And of course those supposedly transparent fails-and-reconnects will turn
out to be anything but, in practice...


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Ransomware

2008-06-11 Thread Dave Korn
Dave Howe wrote on 11 June 2008 19:13:

 The Fungi wrote:
 On Tue, Jun 10, 2008 at 11:41:56PM +0100, Dave Howe wrote:
 The key size would imply PKI; that being true, then the ransom may
 be  for a session key (specific per machine) rather than the
 master key it  is unwrapped with.
 
 Per the computerworld.com article:
 
Kaspersky has the public key in hand -- it is included in the
Trojan's code -- but not the associated private key necessary to
unlock the encrypted files.
 

http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9094818
 
 This would seem to imply they already verified the public key was
 constant in the trojan and didn't differ between machines (or that
 I'm giving Kaspersky's team too much credit with my assumptions).
 
 Sure. however, if the virus (once infecting the machine) generated a
 random session key, symmetric-encrypted the files, then encrypted the
 session key with the public key as part of the ransom note then that
 would allow a single public key to be used to issue multiple ransom
 demands, without the unlocking of any one machine revealing the master
 key that could unlock all of them.

  Why are we wasting time even considering trying to break the public key?

  If this thing generates only a single session key (rather, a host key)
per machine, then why is it not trivial to break?  The actual encryption
algorithm used is RC4, so if they're using a constant key without a unique
IV per file, it should be trivial to recover the XOR of the plaintexts -- and,
given one known plaintext, the keystream itself -- by XORing any two large
files that have been encrypted by the virus on the same machine.

  This thing ought to be as easy as WEP to break open, shouldn't it?
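
  To spell out the keystream-reuse point with a toy sketch (made-up buffers,
not the virus's actual data): if two files are encrypted with the same RC4
keystream at the same offsets, XORing the ciphertexts cancels the keystream,
and one known plaintext then yields the other.

#include <stdio.h>

int main(void)
{
  /* Stand-ins: 'ks' is whatever keystream the RC4 instance produced; p1 is a
     file we happen to know, p2 is the victim file.  All values made up. */
  unsigned char ks[16] = "0123456789abcdef";
  unsigned char p1[16] = "known plaintext.";
  unsigned char p2[16] = "secret document.";
  unsigned char c1[16], c2[16], rec[16];
  for (int i = 0; i < 16; i++) { c1[i] = p1[i] ^ ks[i]; c2[i] = p2[i] ^ ks[i]; }
  /* c1 ^ c2 == p1 ^ p2: the keystream cancels out.  Knowing p1 gives p2. */
  for (int i = 0; i < 16; i++) rec[i] = c1[i] ^ c2[i] ^ p1[i];
  printf("recovered: %.16s\n", (char *)rec);
  return 0;
}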

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Ransomware

2008-06-11 Thread Dave Korn
Leichter, Jerry wrote on 11 June 2008 20:04:

   Why are we wasting time even considering trying to break the public
 key? 
 
   If this thing generates only a single session key (rather, a host
 key) per machine, then why is it not trivial to break?  The actual
 encryption algorithm used is RC4, so if they're using a constant key
 without a unique IV per file, it should be trivial to reconstruct the
 keystream by XORing any two large files that have been encrypted by the
 virus on the same machine. 
 This is the first time I've seen any mention of RC4.  *If* they are
 using RC4, 

  According to this entry at viruslist.com:
http://www.viruslist.com/en/viruses/encyclopedia?virusid=313444
which I found linked from the analyst's diary blog, 

The virus uses Microsoft Enhanced Cryptographic Provider v1.0 (built into
Windows) to encrypt files. Files are encrypted using the RC4 algorithm. The
encryption key is then encrypted using an RSA public key 1024 bits in length
which is in the body of the virus.

  According to this thread on the gpcode forum:
http://forum.kaspersky.com/index.php?s=49bd69fb414610c700170b115d0730fa&showtopic=72322
the readme.txt files containing the ransom key are identical in every
directory on the infected computer, suggesting that there is indeed a unique
per-host RC4 key.

  According to 
http://forum.kaspersky.com/index.php?s=72050db4cb7d54c17e3b6b134d060269&showtopic=72409
every file encrypted by the virus grows by 8 bytes, so it looks like it uses
an IV.  But that didn't help with WEP...
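
  For illustration only, this is the WEP-style construction that comparison
alludes to: RC4 keyed with per-file (per-packet, in WEP) key = IV || master
key.  Whether Gpcode's 8 extra bytes really are such a prefix is pure
guesswork on my part; the point is only that bolting an IV onto RC4 this way
is exactly what WEP did, and it didn't save WEP.  All key material below is
made up.

#include <stdio.h>
#include <string.h>

/* Plain RC4: key scheduling followed by keystream generation. */
static void rc4(const unsigned char *key, size_t keylen,
                unsigned char *out, size_t outlen)
{
  unsigned char S[256];
  for (int i = 0; i < 256; i++) S[i] = (unsigned char)i;
  for (int i = 0, j = 0; i < 256; i++) {
    j = (j + S[i] + key[i % keylen]) & 0xff;
    unsigned char t = S[i]; S[i] = S[j]; S[j] = t;
  }
  int i = 0, j = 0;
  for (size_t n = 0; n < outlen; n++) {
    i = (i + 1) & 0xff;
    j = (j + S[i]) & 0xff;
    unsigned char t = S[i]; S[i] = S[j]; S[j] = t;
    out[n] = S[(S[i] + S[j]) & 0xff];
  }
}

int main(void)
{
  /* Hypothetical WEP-style construction: per-file RC4 key = 8-byte IV ||
     master key.  A guess at what the 8-byte growth *could* mean, not a
     description of Gpcode's actual format. */
  unsigned char master[16] = "0123456789abcdef";
  unsigned char perfile[24], ks[4];
  for (unsigned f = 0; f < 2; f++) {
    unsigned char iv[8] = {0};
    iv[7] = (unsigned char)f;                  /* different IV per file */
    memcpy(perfile, iv, 8);
    memcpy(perfile + 8, master, 16);
    rc4(perfile, sizeof perfile, ks, sizeof ks);
    printf("file %u: first keystream bytes %02x %02x %02x %02x\n",
           f, ks[0], ks[1], ks[2], ks[3]);
  }
  return 0;
}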


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: RIM to give in to GAK in India

2008-05-27 Thread Dave Korn
Perry E. Metzger wrote on 27 May 2008 16:14:

 Excerpt:
 
In a major change of stance, Canada-based Research In Motion (RIM)
may allow the Indian government to intercept non-corporate emails
sent over BlackBerrys.
 

http://economictimes.indiatimes.com/Telecom/Govt_may_get_keys_to_your_BlackBerry_mailbox_soon/articleshow/3041313.cms
 
 Hat tip: Bruce Schneier's blog.

  Although on the other hand:

Excerpt:

  Research In Motion (RIM), the Canadian company behind the BlackBerry
  handheld, has refused to give the Indian government special access to
  its encrypted email services.   [ ... ]
  
  According to the Times of India, the company said in a statement:

The BlackBerry security architecture for enterprise customers is
  purposefully designed to exclude the capability for RIM or any third
  party to read encrypted information under any circumstances. We regret
  any concern prompted by incorrect speculation or rumours and wish to
  assure customers that RIM is committed to continue serving security-
  conscious business in the Indian market.

http://www.theregister.co.uk/2008/05/27/indian_gov_blackberry_blackball/


  [  Hmm, two contradictory stories, whoever woulda thunk it?  There's
probably some politicking going on, mixed up with marketeering and
FUD-spinning.  ]

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: RIM to give in to GAK in India

2008-05-27 Thread Dave Korn
Florian Weimer wrote on 27 May 2008 18:49:

 * Dave Korn:
 
In a major change of stance, Canada-based Research In Motion (RIM)
may allow the Indian government to intercept non-corporate emails
sent over BlackBerrys.
 
 
   Research In Motion (RIM), the Canadian company behind the BlackBerry
   handheld, has refused to give the Indian government special access to
 **
   its encrypted email services.   [ ... ]
 
   According to the Times of India, the company said in a statement:
 
 The BlackBerry security architecture for enterprise customers is
   purposefully designed to exclude the capability for RIM or any third
   party to read encrypted information under any circumstances. We regret
 
   [  Hmm, two contradictory stories, whoever woulda thunk it?  There's
 probably some politicking going on, mixed up with marketeering and
 FUD-spinning.  ]
 
 If you look closely, there's no contradiction.

  Well spotted.  Yes, I guess that's what Jim Youll was asking.  And I
should have said "seemingly-contradictory".  This is, of course, what I
meant by "marketeering": when someone asks if your service is insecure and
interceptable, you don't say "Yes, our ordinary service will give you up to
the filth at the drop of a hat", you spin it as "No, our enterprise service
is completely secure [...other details elided...]".


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Firewire threat to FDE

2008-03-21 Thread Dave Korn
Hagai Bar-El wrote on 18 March 2008 10:17:

 All they
 need to do is make sure (through a user-controlled but default-on
 feature) that when the workstation is locked, new Firewire or PCMCIA
 devices cannot be introduced. That hard?

  Yes it is, without redesigning the PCI bus.  A bus-mastering capable
device doesn't need any interaction with or acknowledgement from the host,
it doesn't need any driver to be loaded and running, it just needs
electrical connectivity in order to control the entire system.  (I suppose
you could disable the BAR mappings when you go to locked mode, but that's
liable to mess up any integrated graphics set that uses system memory for
the frame buffer, and you'd better not lock your terminal while your SCSI
drives are in operation...)



cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Open source FDE for Win32

2008-02-14 Thread Dave Korn
On 11 February 2008 04:13, Ali, Saqib wrote:
 I installed TrueCrypt on my laptop and ran some benchmark tests.
 
 Benchmark Results:
 http://www.full-disk-encryption.net/wiki/index.php/TrueCrypt#Benchmarks

  Thanks for doing this!

 Cons:
 1) Buffered Read and Buffered Transfer Rate was almost halved after
 TrueCrypt FDE was enabled :-(.

  Yes, to almost the exact same rate as sequential reads.  I'm guessing it
simply doesn't implement look-ahead decryption.  It might even be a positively
good idea to not decrypt anything until you're specifically asked.

 3) The initial encryption of the 120 GB HDD took 2 hours.

  You think a 1GB/min encryption rate is so slow as to count as a con?  I
think that's fairly reasonable.  My lightly loaded AMD64x2 box just took 48s
to copy a 584MB file from one place to another on a first trial, and between
26s and 39s on 'hot' retests.

  Or are you suggesting that it could encrypt each block OTF when it's first
accessed, or run the encryption in the background while the system was still
live, instead of converting the whole drive in one big bite?


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Dutch Transport Card Broken

2008-01-30 Thread Dave Korn
On 30 January 2008 17:01, Jim Cheesman wrote:

 James A. Donald:
 SSL is layered on top of TCP, and then one layers
 one's actual protocol on top of SSL, with the result
 that a transaction involves a painfully large number
 of round trips.
 
 Richard Salz wrote:
   Perhaps theoretically painful, but in practice this is
   not the case; commerce on the web is the
   counter-example.
 
 James A. Donald:
 
 The delay is often humanly perceptible.  If humanly
 perceptible, too much.
 
 I respectfully disagree - I'd argue that a short wait is actually more
 reassuring to the average user (Hey! The System's checking me out!) than an
 instantaneous connection would be.


  I also disagree.  It's not like anyone says to themselves "Hey, this website
is taking me several seconds to access - I'll spend a couple of hours
physically going to the shop instead."  It's economics again: what amount of
time or money constitutes "too much" depends on what the alternative choices are.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Dutch Transport Card Broken

2008-01-30 Thread Dave Korn
On 30 January 2008 17:03, Perry E. Metzger wrote:

 My main point here was, in fact, quite related to yours, and one that
 we make over and over again -- innovation in such systems for its own
 sake is also not economically efficient or engineering smart. 

  Hear hear!  This maxim should be burned into the frontal lobes of every
single member of Microsoft's engineering (and marketing) teams with a red-hot
poker[*].

[  Over-engineered solutions to non-problems and gratuitous marketing-driven
featuritis have been the root cause of almost every windows security disaster
ever - e.g., email featuring 'rich content' such as scripts; web browsers that
download and locally run active-x from random websites; lots of vulnerable RPC
services installed and enabled by default on home user PCs; ... etc etc.;
certainly they have far outnumbered the occasional flaws in core kernel
services.  But - economics again, and a tip o' the hat to Schneier and his
externalities argument - as long as the extra sales go to Microsoft's coffers,
and the extra costs are all imposed on their victims^Wusers, there's no
incentive for them to do otherwise.  Hence my suggestion that they need a
red-hot one (incentive, that is).  ]

cheers,
  DaveK

[*] - or red-hot Gutmann soundwave 
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-01-30 Thread Dave Korn
On 30 January 2008 17:03, Eric Rescorla wrote:


 We really do need to reinvent and replace SSL/TCP,
 though doing it right is a hard problem that takes more
 than morning coffee.
 
 TCP could need some stronger integrity protection. 8 Bits of checksum isn't
 enough in reality. (1 out of 256 broken packets gets injected into your TCP
 stream)  Does IPv6 have a stronger TCP?
 
 Whether this is true or not depends critically on the base rate
 of errors in packets delivered to TCP by the IP layer, since
 the rate of errors delivered to SSL is 1/256th of those delivered
 to the TCP layer. 

  Out of curiosity, what kind of TCP are you guys using that has 8-bit
checksums?

 Since link layer checksums are very common,
 as a practical matter errored packets getting delivered to protocols
 above TCP is quite rare.

  Is it not also worth mentioning that TCP has some added degree of protection
in that if the ACK sequence num isn't right, the packet is likely to be
dropped (or just break the stream altogether by desynchronising the seqnums)?
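
  For reference, the checksum TCP actually carries is the 16-bit one's-
complement Internet checksum.  A standalone sketch of the RFC 1071 sum, not
lifted from any particular stack's source:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* RFC 1071 Internet checksum: 16-bit one's-complement sum of 16-bit words.
   This is the checksum TCP (and IP, UDP, ICMP) uses -- 16 bits, not 8. */
static uint16_t inet_checksum(const uint8_t *data, size_t len)
{
  uint32_t sum = 0;
  while (len > 1) {
    sum += ((uint32_t)data[0] << 8) | data[1];
    data += 2;
    len -= 2;
  }
  if (len)                       /* odd trailing byte, padded with zero */
    sum += (uint32_t)data[0] << 8;
  while (sum >> 16)              /* fold carries back into the low 16 bits */
    sum = (sum & 0xffff) + (sum >> 16);
  return (uint16_t)~sum;
}

int main(void)
{
  const uint8_t msg[] = "an example TCP segment payload";
  printf("checksum = 0x%04x\n", inet_checksum(msg, sizeof msg - 1));
  return 0;
}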


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: SSL/TLS and port 587

2008-01-23 Thread Dave Korn
On 22 January 2008 18:38, Ed Gerck wrote:

 It is misleading to claim that port 587 solves the security problem of
 email eavesdropping, and gives people a false sense of security. It is
 worse than using a 56-bit DES key -- the email is in plaintext where it is
 most vulnerable.   

  Well, yes: it would be misleading to claim that end-to-end security protects
you against an insecure or hostile endpoint.  But it's a truism, and it's not
right to say that there is a security gap that is any part of the remit of
SSL/TLS to alleviate; the insecurity - the untrusted endpoint - is the same
regardless of whether you use end-to-end security or not.

  It's probably also not inaccurate to say that SSL/TLS protects you against
warrantless wiretapping; the warrantless wiretap program is implemented by
mass surveillance of backbone traffic; even AT&T doesn't actually forward the
traffic to their mail servers, decrypt it and then send it back to the tap
point - as far as we know.  When the spooks want your traffic as decrypted by
your ISP server, that's when they *do* go get a warrant, but the broad mass
warrantless wiretapping program is just that, and it's done by sniffing the
traffic in the middle.  SSL/TLS *does* protect you against that, and the only
time it won't is if you're singled out for investigation.

  This is not to say that it wouldn't be possible for all ISPs to collaborate
with the TLAs to log, sniff or forward the decrypted traffic from their
servers, but if they can't even set up central tapping at a couple of core
transit sites of one ISP without someone spilling the beans, it seems
improbable that every ISP everywhere is sending them copies of all the traffic
from every server...

cheers,
  DaveK
-- 

Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: patent of the day

2008-01-23 Thread Dave Korn
On 23 January 2008 04:45, Ali, Saqib wrote:

 can anyone please shed more light on this patent. It seems like a
 patent on the simple process of cryptographic erase..


  As far as I can tell, they're describing a hardware pass-through OTF
encryption unit that plugs inline with a hard drive (or similar) and contains
a secure and destroyable keystore.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Foibles of user security questions

2008-01-14 Thread Dave Korn
On 07 January 2008 17:14, Leichter, Jerry wrote:

 Reported on Computerworld recently:  To improve security, a system
 was modified to ask one of a set of fixed-form questions after the
 password was entered.  Users had to provide the answers up front to
 enroll.  One question:  Mother's maiden name.  User provides the
 4-character answer.  System refuses to accept it:  Answer must have
 at least 6 characters.

  See also "Favorite Color" ("RED is not a valid option") at
http://thedailywtf.com/Articles/Banking-So-Advanced.aspx

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: More on in-memory zeroisation

2007-12-14 Thread Dave Korn


  I've been through the code.  As far as I can see, there's nothing in
expand_builtin_memset_args that treats any value differently, so there can't be
anything special about memset(x, 0, y).  Also as far as I can tell, gcc doesn't
optimise out calls to memset, not even thoroughly dead ones: for example -


/artimi/software/firmware $ cat memstst.c

#include <string.h>
int foo (void);
int main (int argc, const char **argv)
{
  int var[100];
  memset (var, 0, sizeof var);
  foo ();
  return 0;
}

int foo (void)
{
  int var[100];
  memset (var, 0, sizeof var);
  return 0;
}

/artimi/software/firmware $ gcc -O2 memstst.c -o mt
/artimi/software/firmware $ gcc -O2 memstst.c -S -o memstst.s
/artimi/software/firmware $ grep memset memstst.s
	call	_memset
	call	_memset
	.def	_memset;	.scl	3;	.type	32;	.endef
/artimi/software/firmware $


  This is not entirely unexpected; memset, even when expanded inline as a
builtin, still has libcall behaviour.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: More on in-memory zeroisation

2007-12-10 Thread Dave Korn
On 09 December 2007 06:16, Peter Gutmann wrote:

 Reading through Secure Programming with Static Analysis, I noticed an
 observation in the text that newer versions of gcc such as 3.4.4 and 4.1.2
 treat the pattern:
 
   memset(?, 0, ?)
 
 differently from any other memset in that it's not optimised out.

 Can anyone who knows more about gcc development provide more insight on
 this? Could it be made an official, supported feature of the compiler?

  I'm sure it could; why not raise it on the GCC mailing list?  It sounds like
all it would involve would be a patch to the documentation and maybe a comment
in the source.
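
  In the meantime, one workaround that doesn't depend on gcc special-casing
memset(x, 0, y) at all is to call memset through a volatile function pointer,
which the optimiser can't prove still points at memset and therefore can't
elide.  A minimal sketch of that idiom follows; nothing here is promised by
the gcc documentation, it's just a widely used generic trick.

#include <string.h>

/* Volatile function pointer: the compiler must reload it at the call site
   and cannot assume it is still memset, so the call cannot be optimised
   away as dead. */
static void *(*volatile memset_vp)(void *, int, size_t) = memset;

static void burn(void *p, size_t n)
{
  memset_vp(p, 0, n);
}

int main(void)
{
  char secret[32] = "hunter2";
  /* ... use secret ... */
  burn(secret, sizeof secret);
  return 0;
}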

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Scare tactic?

2007-09-20 Thread Dave Korn
On 19 September 2007 22:01, Nash Foster wrote:

 http://labs.musecurity.com/2007/09/18/widespread-dh-implementation-weakness/
 
 Any actual cryptographers care to comment on this? 

  IANAAC.

 I don't feel qualified to judge.

  Nor do I, but I'll have a go anyway.  Any errors are all my own work.  AIUI,
the weakness is that if you control one end of the DH exchange, you can force
a weak key to be selected for the subsequent encrypted exchange, one that an
external observer can easily guess.  I would summarize the main findings as:


  If you are one participant in a DH key exchange, it is possible for you to
reveal the agreed-upon shared secret.

  If you pwn an IKE server, you can decrypt and read all the traffic it is
exchanging with peers.  The clients of that server won't know that it's giving
up all their data.


  Whether you do it by forcing the implementation to choose a weak key, or by
just revealing the information that in each situation you already have under
your control, seems to me like a mere technicality.  I can't envisage any
situation under which this would actually *increase* your exposure.  However
it is an implementation flaw and should be addressed just for the sake of
tying up loose ends and doing things properly.
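
  One way this can play out -- an assumption on my part about the mechanism,
not something stated in the advisory text above -- is an implementation
accepting a degenerate public value from its peer.  A toy sketch of why that
is fatal (tiny made-up group, not the advisory's actual parameters): if the
peer sends 1 as its public value, the shared secret is 1 no matter what your
secret exponent is, so any observer knows it too.

#include <stdio.h>
#include <stdint.h>

/* Toy square-and-multiply modular exponentiation; small numbers only. */
static uint64_t modexp(uint64_t b, uint64_t e, uint64_t m)
{
  uint64_t r = 1;
  b %= m;
  while (e) {
    if (e & 1) r = (r * b) % m;
    b = (b * b) % m;
    e >>= 1;
  }
  return r;
}

int main(void)
{
  /* Toy group: p = 2^31 - 1 (prime), generator 7.  Real IKE groups are of
     course 1024 bits or more; these numbers are purely illustrative. */
  const uint64_t p = 2147483647ULL, g = 7;
  uint64_t a = 123456789ULL;              /* honest side's secret exponent */
  uint64_t A = modexp(g, a, p);           /* honest side's public value */
  uint64_t B = 1;                         /* degenerate value sent by peer */
  uint64_t secret = modexp(B, a, p);      /* == 1 regardless of a */
  printf("our public value %llu, resulting shared secret %llu\n",
         (unsigned long long)A, (unsigned long long)secret);
  return 0;
}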


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Another Snake Oil Candidate

2007-09-13 Thread Dave Korn
On 13 September 2007 04:18, Aram Perez wrote:

   to circumvent keylogging spyware - More on this later...

   The first time you plug it in, you initialize it with a password -
 Oh, wait until I disable my keylogging spyware.
   You enter that password to unlock your secure files - Did I
 disable my keyloggin spyware?
 
 Protected by a password that is entered on whatever PC you plug the
 IronKey into and that is somehow auto-magically protected against all
 keylogging spyware that may exist on that PC.

 Decrypting your files is then as easy as dragging and dropping them
 onto the desktop and by any malware that detects that the IronKey is
 present and has been unlocked and copies the files to a hidden folder.

  So by your exacting standards, PGP, gpg, openssh, in fact basically
_everything_ is snake oil.  Endpoint security is a real issue, but it's not
within the remit of this product to address.  I feel your complaint is
overblown.  Marketspeak alone doesn't make a product snake oil; its security
has to actually be bogus too.


  Encryption Keys
 
  The encryption keys used to protect your data are generated
  in hardware by a FIPS 140-2 compliant True Random Number
 
 As opposed to a FIPS 140-2 compliant False Random Number Generator.

  No, as opposed to a *Pseudo* Random Number Generator.  This is a really
silly thing to attempt to complain about; they're correctly using technical
terminology that you should be perfectly familiar with.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Rare 17th century crypto book for auction.

2007-09-12 Thread Dave Korn
On 12 September 2007 19:28, Steven M. Bellovin wrote:

 On Wed, 12 Sep 2007 09:28:51 -0400
 Perry E. Metzger [EMAIL PROTECTED] wrote:
 
 
 A rare 17th century crypto book is being auctioned.
 
 http://www.liveauctioneers.com/item/4122383/
 
 As I commented to Bruce, see what Kahn says about it:  But the work,
 while containing some cipher systems, mainly defends the occultism of
 Trithemius.

  Sure, that's what the *ciphertext* says <g>

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Seagate announces hardware FDE for laptop and desktop machines

2007-09-09 Thread Dave Korn
On 07 September 2007 21:28, Leichter, Jerry wrote:

 Grow up.  *If* the drive vendor keeps the mechanism secret, you have
 cause for complaint.  But can you name a drive vendor who's done
 anything like that in years?  

  All DVD drive manufacturers.  That's why nobody could write a driver for
Linux until CSS was cracked, remember?


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: debunking snake oil

2007-09-01 Thread Dave Korn
On 31 August 2007 02:44, travis+ml-cryptography wrote:

 I think it might be fun to start up a collection of snake oil
 cryptographic methods and cryptanalytic attacks against them.

  I was going to post about crypto done wrong after reading this item[*]:
http://www.f-secure.com/weblog/archives/archive-082007.html#1263

  I can't tell exactly what, but they have to be doing *something* wrong if
they think it's necessary to use file-hiding hooks to conceal... well,
anything really.  The hash of the fingerprint should be the symmetric key used
to encrypt either files and folders directly on the thumbdrive, or perhaps a
keyring file containing ADKs of some description, but if you do crypto right,
you shouldn't have to conceal or obfuscate anything at all.


cheers,
  DaveK
[*] - See also 
http://www.f-secure.com/weblog/archives/archive-082007.html#1264
http://www.f-secure.com/weblog/archives/archive-082007.html#1266 
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: debunking snake oil

2007-09-01 Thread Dave Korn
On 02 September 2007 01:13, Nash Foster wrote:

 I don't think fingerprint scanners work in a way that's obviously
 amenable to hashing with well-known algorithms. Fingerprint scanners
 produce an image, from which some features can be identified. But, not
 all the same features can be extracted identically every time an image
 is obtained.  I know there's been research into fuzzy hashing schemes,
 but are they sufficiently secure, fast, and easy to code that they
 would be workable for this?

  Well, if fingerprint scanners aren't reliable enough to identify the same
person accurately twice, it's all the more snake oil to suggest they're
suitable for crypto... or even biometric authentication, for that matter.

  (I wonder if the level of variability is manageable enough that you could
generate a set of the most-probable variations of the trace of a given
fingerprint and then use a multiple key/N-out-of-M technique.)


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: more reports of terrorist steganography

2007-08-20 Thread Dave Korn
On 20 August 2007 16:00, Steven M. Bellovin wrote:

 http://www.esecurityplanet.com/prevention/article.php/3694711
 
 I'd sure like technical details...


  Well, how about 'it can't possibly work [well]'?

  [ ... ] The article provides a detailed example of how 20 messages can be
hidden in a 100 x 50 pixel picture [ ... ] 

  That's gotta stand out like a statistical sore thumb.


  The article is pretty poor if you ask me.  It outlines three techniques for
stealth: steganography, using a shared email account as a dead-letter box, and
blocking or redirecting known IP addresses from a mail server.  Then all of a
sudden, there's this conclusion ...

 Internet-based attacks are extremely popular with terrorist organizations
because they are relatively cheap to perform, offer a high degree of
anonymity, and can be tremendously effective. 

... that comes completely out of left-field and has nothing to do with
anything the rest of the article mentioned.  I would conclude that someone's
done ten minutes worth of web searching and dressed up a bunch of
long-established facts as 'research', then slapped a "The sky is falling!
Hay-ulp, hay-ulp" security dramaqueen ending on it and will now be busily
pitching for government grants or contracts of some sort.



  So as far as technical details, I'd say you take half-a-pound of security
theater, stir in a bucket or two of self-publicity, season with a couple of
megabucks of goverment pork, and hey presto!  Tasty terror-spam!

  BTW, I can't help but wonder if "Secrets of the Mujahideen" refuses to allow
you to use representational images for stego?  ;-)

  (BTW2, does anyone have a download URL for it?  The description makes it
sound just like every other bit of crypto snakeoil; it might be fun to reverse
engineer.)

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Free Rootkit with Every New Intel Machine

2007-06-26 Thread Dave Korn
On 26 June 2007 00:51, Ian Farquhar (ifarquha) wrote:

 It seems odd for the TPM of all devices to be put on a pluggable module as
 shown here.  The whole point of the chip is to be bound tightly to the
 motherboard and to observe the boot and initial program load sequence.
 
 Maybe I am showing my eternal optimist side here, but to me, this is how
 TPM's should be used, as opposed to the way their backers originally wanted
 them used.  A removable module whose connection to a device I establish
 (and can de-establish, assuming the presence of a tamper-respondent barrier
 such as a sensor-enabled computer case to legitimize that activity) is a
 very useful thing to me, as it facilitates all sorts of useful
 applications.  The utility of the original intent has already been widely
 criticised, so I won't repeat that here.  :)   

  If you can remove it, what's to stop you plugging it into another machine
and copying all your DRM-encumbered material to that machine?

  It's supposed to identify the machine, not the user.  Sounds to me like what
you want is a personally identifying cert that you could carry around on a usb
key...


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Blackberries insecure?

2007-06-21 Thread Dave Korn
On 21 June 2007 04:41, Steven M. Bellovin wrote:

 According to the AP (which is quoting Le Monde), French government
 defense experts have advised officials in France's corridors of power
 to stop using BlackBerry, reportedly to avoid snooping by U.S.
 intelligence agencies.
 
 That's a bit puzzling.  My understanding is that email is encrypted
 from the organization's (Exchange?) server to the receiving Blackberry,
 and that it's not in the clear while in transit or on RIM's servers.
 In fact, I found this text on Blackberry's site:
 
   Private encryption keys are generated in a secure, two-way
   authenticated environment and are assigned to each BlackBerry
   device user. Each secret key is stored only in the user's secure
   regenerated by the user wirelessly.
 
   Data sent to the BlackBerry device is encrypted by the
   BlackBerry Enterprise Server using the private key retrieved
   from the user's mailbox. The encrypted information travels
   securely across the network to the device where it is decrypted
   with the key stored there.
 
   Data remains encrypted in transit and is never decrypted outside
   of the corporate firewall.
 
 Of course, we all know there are ways that keys can be leaked.

  And work factors reduced.  And corporations who want to do business in the
US have been known to secretly collaborate with the US.gov before to sabotage
encryption features on exported devices (e.g. Lotus, Crypto AG, Microsoft,
Netscape).  So there's no reason to take the assurances on the BlackBerry
website at face value, and if you're a government or other .org that really
takes security /properly/ seriously, you've got to account for the very real
risk.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: stickers can deter car theft

2007-05-26 Thread Dave Korn
On 26 May 2007 04:33, James Muir wrote:

 
 Anyone heard of this before?  

  Been happening all over the place for several years now.  Many references at
http://www.schneier.com/blog/archives/2006/10/please_stop_my.html

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Russian cyberwar against Estonia?

2007-05-23 Thread Dave Korn
On 22 May 2007 14:51, Trei, Peter wrote:

 In fairness, its worth noting that the issue is also mixed up
 in Estonian electoral politics:
 
 http://news.bbc.co.uk/1/hi/world/europe/6645789.stm
 
 The timing of the electronic attacks, and the messages left by
 vandals, leave little doubt that the 'Bronze Soldier' affair is
 the motivating factor. Whether Russian Government agents were
 involved in the attacks is not proven, but certainly seems possible.

  Patriotic script-kiddies have been taking it upon themselves to contribute
botnet-driven DDoSen to pretty much every international incident going over
the past few years, from the US-vs-China hacker wars back in Code Red days, to
the Arab-Israeli conflict, to ... well, everything really.  The fact that
there's a real diplomatic incident going on may well be their motivation, but
it's not evidence that they are in any meaningful sense 'state actors'.
Occam's razor suggests that since the script kiddies will do this
/regardless/, i.e. spontaneously and unprovoked, there's no need to posit
additional sources of DDoS deliberately organized by the government (though of
course it doesn't exclude the possibility).  Why get your hands dirty when
some unpaid volunteer will provide you plausible (because truthful)
deniability? 

  Perhaps I should coin the phrase "Useful Skiddiots"!


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Russian cyberwar against Estonia?

2007-05-18 Thread Dave Korn
On 18 May 2007 05:44, Alex Alten wrote:

 This may be a bit off the crypto topic,

  You betcha!

  but it is interesting nonetheless.
 
 Russia accused of unleashing cyberwar to disable Estonia
 http://www.guardian.co.uk/print/0,,329864981-103610,00.html
 
 Estonia accuses Russia of 'cyberattack'
 http://www.csmonitor.com/2007/0517/p99s01-duts.html


  *shrugs*  Any IP address you find in a packet of a DDoS coming towards you
is pretty likely not to be the source of the attack.  So far there's no
evidence to show anything other than that the russian .gov is just as liable
to have virused and botted machines on its internal nets as the US .gov.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: can a random number be subject to a takedown?

2007-05-02 Thread Dave Korn
On 01 May 2007 22:33, Jon Callas wrote:

 On May 1, 2007, at 12:53 PM, Perry E. Metzger wrote:

 unsigned char *guess_key(void)
 {
   static unsigned char key[] = {0x0a, 0xFa, 0x12, 0x03,
                                 0xD9, 0x42, 0x57, 0xC6,
                                 0x9E, 0x75, 0xE4, 0x5C,
                                 0x64, 0x57, 0x89, 0xC1};
 
   return key;
 }
 
 (Or it would if I'd put the actual AACS key in there.)

  Heh, that's a bit like the old issue of whether you can publish an OTP that
has certain interesting properties when used to en/decrypt some other public
domain information.
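
  A toy sketch of that old party trick (all bytes made up): pick any
"innocent" public-domain bytes P and any target bytes C of the same length;
the pad K = P XOR C looks like random noise, yet P XOR K reproduces C exactly,
so the pad is only as innocent as what you choose to decrypt with it.

#include <stdio.h>

int main(void)
{
  /* Made-up target bytes and an innocuous cover text of the same length. */
  unsigned char target[8]   = {0x0a, 0xFa, 0x12, 0x03, 0xD9, 0x42, 0x57, 0xC6};
  unsigned char innocent[8] = "Mary had";
  unsigned char pad[8], out[8];
  for (int i = 0; i < 8; i++) pad[i] = target[i] ^ innocent[i];  /* "the pad" */
  for (int i = 0; i < 8; i++) out[i] = innocent[i] ^ pad[i];     /* == target */
  for (int i = 0; i < 8; i++) printf("%02x ", out[i]);
  printf("\n");
  return 0;
}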

  See also http://preview.tinyurl.com/3dcse6 
   http://preview.tinyurl.com/2d3hm3
   http://preview.tinyurl.com/2ey2mj

for more variations on this theme.  Wonder if you can issue a take-down notice
for a 301 redirect?

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Randomness

2007-04-28 Thread Dave Korn
On 27 April 2007 20:34, Eastlake III Donald-LDE008 wrote:

 See http://xkcd.com/c221.html.
 
 Donald

http://web.archive.org/web/20011027002011/http://dilbert.com/comics/dilbert/archive/images/dilbert2001182781025.gif


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: DNSSEC to be strangled at birth.

2007-04-07 Thread Dave Korn
On 06 April 2007 00:50, Paul Hoffman wrote:

 because, with it, one can sign the appropriate
 chain of keys to forge records for any zone one likes.
 
 If the owner of any key signs below their level, it is immediately
 visible to anyone doing active checking. 

  Only if they get sent that particular forged DNS response.  It's more likely
to be targeted.  DHS man shows up at suspect's ISP, with a
signed-below-its-level dns record (or a whole hierarchy of normally signed
records) to install on just their servers and perhaps even to serve up to just
one of their customers.  Nobody else gets to see it.

 Plus, now that applications are keeping public keys for services in
 the DNS, one can, in fact, forge those entries and thus conduct man in
 the middle surveillance on anyone dumb enough to use DNS alone as a
 trust conveyor for those protocols (e.g. SSH and quite possibly soon
 HTTPS).
 
 ...again assuming that the users of those keys don't bother to look
 who signed them.

  I think that's a safe assumption.  How are these users meant to look?
Little lock-icon in the status bar?

 Because I believe that ISPs, not just security geeks, will be
 vigilant in watching whether there is any layer-hopping signing and
 will scream loudly when they see it. AOL and MSN have much more to
 lose if DHS decides to screw with the DNS than anyone on this list
 does. 

  Can I point out that large telecomms corporations have been making a habit
of silently acquiescing to whatever illegal and spuriously-motivated requests
the DHS or anyone else invoking the magic words "war on terror" is capable of
dreaming up?

 Having said that, it is likely that we will be the ones to
 shoot the signal flares if DHS (or ICANN, for that matter) misuses
 the root signing key. But it won't be us that causes DHS to stand
 down or, more likely, get thrown off the root: it's the companies who
 have billions of dollars to lose if the DNS becomes untrusted.

  We already had this with PKI and SSL, and it basically failed.  Works fine
on a small scale in a tightly-disciplined organisation; fails totally to scale
to Joe Internet-User.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: WEP cracked even worse

2007-04-05 Thread Dave Korn
On 04 April 2007 00:44, Perry E. Metzger wrote:

 Not that WEP has been considered remotely secure for some time, but
 the best crack is now down to 40,000 packets for a 50% chance of
 cracking the key.
 
 http://www.cdc.informatik.tu-darmstadt.de/aircrack-ptw/


  Sorry, is that actually better than "The final nail in WEP's coffin", which
IIUIC can get the entire keystream (who needs the key?) in log2(nbytes) packet
exchanges (to oversimplify a bit, but about right order-of-magnitude)?

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


DNSSEC to be strangled at birth.

2007-04-05 Thread Dave Korn

 Afternoon all,

  This story is a couple of days old now but I haven't seen it mentioned
on-list yet.

  The DHS has requested the master key for the DNS root zone.

http://www.heise.de/english/newsticker/news/87655
http://www.theregister.co.uk/2007/04/03/dns_master_key_controversy/
http://yro.slashdot.org/article.pl?sid=07/03/31/1725221


  Can anyone seriously imagine countries like Iran or China signing up to a
system that places complete control, surveillance and falsification
capabilities in the hands of the US' military intelligence?  I could see some
(but probably not even all) of the European nations accepting the move at face
value and believing whatever assurances of safeguards the DHS might offer, but
the rest of the world?  No way.

  Surely if this goes ahead, it will mean that DNSSEC is doomed to widespread
non-acceptance.  And unless it's used everywhere, there's very little point
having it at all.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: DNSSEC to be strangled at birth.

2007-04-05 Thread Dave Korn
On 05 April 2007 16:48, [EMAIL PROTECTED] wrote:

 Dave,
 
 For the purposes of discussion,
 
 (1) Why should I care whether Iran or China sign up?

  I think it would be consistent to either a) care that *everybody* signs up,
or b) not care about DNSSEC at all, but I think that a fragmentary uptake is
next to useless.  As indeed the current situation provides evidence may be the
case.

 (2) Who should hold the keys instead of the only powerful
 military under democratic control?
 
 (a) The utterly porous United Nations?
 
 (b) The members of this mailing list, channeling
 for the late, lamented Jon Postel?
 
 (c) The Identrus bank consortium (we have your
 money, why not your keys?) in all its threshhold
 crypto glory?
 
 (d) The International Telecommunication Union?
 
 (e) Other: _

 Hoping for a risk-analytic model rather than an
 all-countries-are-created-equal position statement.

 Strawman.  Not what I said at all.

 FWIW, however, I would like to see them held by a multinational civilian
organisation.  That could be a UN or ITU body, or an ICANN or IETF/IANA
offshoot, there are many possibilities.

  The *important* point is that we have strategies and techniques available to
us in democracies to prevent corruption or abuse of power: we have separation
of powers, and bodies that bring together conflicting interests to share power
in the theory that if anyone tries to get up to anything, the others will be
watching, and since they have conflicting interests they are unlikely to
collude.  This seems to me to be a viable principle for management of internet
infrastructure.

  Placing it all in the hands of a single interest group - whether that be the
US (or anybody else's) military, the RIAA, or Bun-Bun the mini-lop, is a
single point of failure for corruption/abuse resistance.

  BTW, there are lots of other reasons not to trust a military: lack of
accountability and oversight.  You were the first to mention democracy: just
because the US army is the army of a democracy does not mean that it is in
itself democratic.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: link fest on fingerprint biometrics

2006-09-09 Thread Dave Korn
On 08 September 2006 00:38, Travis H. wrote:


 At home I have an excellent page on making fake fingerprints, but I
 cannot find it
 right now.  It used gelatin (like jello) and was successful at fooling a
 sensor. 

http://search.theregister.co.uk/?q=gummi should be a start.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Impossible compression still not possible. [was RE: Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]]

2006-08-30 Thread Dave Korn
On 28 August 2006 15:30, Ondrej Mikle wrote:

 Ad. compression algorithm: I conjecture there exists an algorithm (not
 necessarily *finite*) that can compress large numbers
 (strings/files/...) into small space, more precisely, it can
 compress number that is N bytes long into O(P(log N)) bytes, for some
 polynomial P. 

[  My maths isn't quite up to this.  Is it *necessarily* the case that /any/
polynomial of log N grows slower than N?  If not, there's nothing to discuss,
because this is then a conventional compression scheme in which some inputs
lead to larger, not smaller, outputs.  My first thought of a counter-example,
P(x)==e^(2x), doesn't actually count: that isn't a polynomial, and P(log N)
would only be N^2 in any case.  For any genuine polynomial P, P(log N) does
grow more slowly than N, whatever base of logarithm we use, so this really is
a claim of compression.  I'll discuss the problem with the theory on that
assumption anyway.  ]


  There are many, but there are no algorithms that can compress *all* large
numbers into small space; for all compression algorithms, some sets of input
data must result in *larger* output than input.

  There is *no* way round the sheer counting theory aspect of this.  There are
only 2^N unique files of N bits.  If you compress large files of M bits into
small files of N bits, and your decompression algorithm produces deterministic
output, then you can only possibly generate 2^N files from the compressed
ones.
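
  To make the counting concrete, here is a brute-force toy (tiny sizes,
arbitrary compressor, nothing special about the function chosen): any
deterministic map from 3-bit inputs to 2-bit outputs must send at least two
inputs to the same output, so no decompressor can recover both of them.

#include <stdio.h>

/* Pigeonhole demonstration: any deterministic "compressor" from 3-bit inputs
   to 2-bit outputs must map at least two inputs to the same output.  The
   compressor here is arbitrary; the argument doesn't depend on which one. */
static unsigned compress3to2(unsigned x)
{
  return (x * 5 + 3) & 3;     /* any deterministic function of x will do */
}

int main(void)
{
  unsigned owner[4] = {8, 8, 8, 8};   /* 8 means "no input seen yet" */
  for (unsigned x = 0; x < 8; x++) {
    unsigned c = compress3to2(x);
    if (owner[c] != 8)
      printf("collision: inputs %u and %u both compress to %u\n",
             owner[c], x, c);
    else
      owner[c] = x;
  }
  return 0;
}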

 Take as an example group of Z_p* with p prime (in another words: DLP).
 The triplet (Z, p, generator g) is a compression of a string of p-1
 numbers, each number about log2(p) bits.
 
 (legend: DTM - deterministic Turing machine, NTM - nondeterministic
 Turing machine)
 
 There exists a way (not necessarily fast/polynomial with DTM) that a
 lot of strings can be compressed into the mentioned triplets. There
 are \phi(p-1) different strings that can be compressed with these
 triplets. Exact number of course depends on factorization of p-1.
 
 Decompression is simple: take generator g and compute g, g^2, g^3,
 g^4, ... in Z_p*

  This theory has been advanced many times before.  It's the "Oh, if I can
just find the right parameters for a PRNG, maybe I can get it to reconstruct
my file as if by magic" theory.  (Or substitute FFT, or wavelet transform, or
key-expansion algorithm, or ... etc.)

  However, there are only as many unique generators as (2 to the power of the
number of bits it takes to specify your generator) in this scheme.  And that
is the maximum number of unique output files you can generate.

  There is ***NO*** way round the counting theory.

 I conjecture that for every permutation on 1..N there exists a
 function that compresses the permutation into a short
 representation.

  I'm afraid to tell you that, as always, you will find the compression stage
easy and the decompression impossible.  There are many many many more
possible permutations of 1..N than the number of unique short
representations in the scheme.  There is no way that the smaller number of
unique representations can possibly produce any more than the same (smaller)
number of permutations once expanded.  There is no way to represent the other
(vast majority) of permutations.

  It is possible that only NTM, possibly with infinite
 algorithm (e.g. a human) can do it in some short finite time. 

  Then it's not really an algorithm, it's an ad-hoc collection of different
schemes.  If you're allowed to use a completely different compression scheme
for every file, I can do better than that: for every file, define a
compression scheme where the bit '1' stands for the content of that file, and
everything else is represented by a '0' bit followed by the file itself.
Sure, most files grow 1 bit bigger, but the only file we care about is
compressed to just a single bit!  Great!

  Unfortunately, all you've done is moved information around.  The amount of
information you'd have to have in the decompressor to know which file to
expand any particular '1' bit into is equal to  the amount you've saved by
compressing the original to a single bit.

  There is ***NO*** way round the counting theory.

 Again,
 I could've/should've proven or disproven the conjecture, I just don't
 want to do it - yet ;-)

  I seriously advise you not to waste much time on it.  Because ...






 There is ***NO*** way round the counting theory.







cheers,
  DaveK
-- 
Can't think of a witty .sigline today


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Impossible compression still not possible. [was RE: Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]]

2006-08-30 Thread Dave Korn
On 28 August 2006 17:12, Ondrej Mikle wrote:

 We are both talking about the same thing :-)

  Oh!
 
 I am not saying there is a finite deterministic algorithm to compress
 every string into small space, there isn't. BTW, thanks for There
 is ***NO*** way round the counting theory. :-)
 
 All I wanted to say is:
 For a specific structure (e.g. movie, picture, sound) there is some
 good compression algorithm.
 
 E.g.: if you take a GIF 65536x65536, all white, with just one pixel
 black, it can be compressed into 35 bytes, see here:
 http://i.iinfo.cz/urs-att/gif3_6-115626056883166.gif
 If you wanted to compress the same picture using JPEG (i.e. discrete
 cosine transform), then two things would happen:
 
 The compressed jpeg file would be a) much bigger b) decompressed image
 would have artifacts, because Fourier transform of a pulse is sinc
 (infinitely many frequencies). Sure, JPEG is a lossy compression, but
 good enough for photos and images that don't have a lot of high
 frequencies.

  Absolutely, absolutely!  

  A compression algorithm achieves the best results if it is designed with
statistical knowledge of the target domain taken into account.  In any
compression scheme you're balancing the set of inputs that grow smaller on
compression against the necessary counterpart of inputs that grow larger.
Whatever you gain in the first set, you lose in the second.  The secret is to
arrange that the inputs that tend to grow larger are the ones that are
less-common in real-world usage, and thus that the ones that are more common
tend to grow smaller.  In practice, this means eliminating 'redundancy', where
'redundancy' is defined as 'whatever similar properties you can tease out from
the more-common-real-world cases'.

  Of course, I could point out that there is precisely *1* bit of information
in that huge GIF, so even compressing it to 35 bytes isn't a great
achievement... it's one of the set of less-common inputs that grow bigger as a
compromise so that real pictures, which tend to have at least *some*
variation, grow smaller.
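
  A toy run-length coder makes the trade-off visible (this is an illustration
of the principle, not the GIF algorithm): input with long runs, like the
one-black-pixel image, collapses dramatically, while input with no runs at
all comes out twice the size.

#include <stdio.h>

/* Toy run-length coder over bytes: each run becomes a (count, value) pair.
   Mostly-uniform input collapses; input with no runs doubles in size. */
static size_t rle_size(const unsigned char *in, size_t n)
{
  size_t out = 0, i = 0;
  while (i < n) {
    size_t run = 1;
    while (i + run < n && in[i + run] == in[i] && run < 255) run++;
    out += 2;               /* one (count, value) pair per run */
    i += run;
  }
  return out;
}

int main(void)
{
  unsigned char uniform[1024] = {0};          /* "all white, one black pixel" */
  uniform[500] = 1;
  unsigned char noisy[16] = {3,1,4,1,5,9,2,6,5,3,5,8,9,7,9,3};  /* no runs */
  printf("uniform: %zu -> %zu bytes\n",
         sizeof uniform, rle_size(uniform, sizeof uniform));
  printf("noisy:   %zu -> %zu bytes\n",
         sizeof noisy, rle_size(noisy, sizeof noisy));
  return 0;
}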


cheers,
  DaveK
-- 
Can't think of a witty .sigline today


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]

2006-08-28 Thread Dave Korn
On 24 August 2006 03:06, Ondrej Mikle wrote:

 Hello.
 
 We discussed with V. Klima about the recent bug in PGPdisk that
 allowed extraction of key and data without the knowledge of passphrase.
 The result is a *very*wild*hypothesis*.
 
 Cf. http://www.safehack.com/Advisory/pgp/PGPcrack.html
 
 Question 1: why haven't anybody noticed in three months? Why has not
 there been a serious notice about it?

  Because it is completely incorrect.  Utterly wrong.  This was explained on
this list just a couple of days ago; look for the thread "A security bug in
PGP products?" in the list archives.

 According to the paper, both standard .pgd and self-extracting SDA
 (self-decrypting archives) are affected. Systematic backdoor maybe?

  No, the paper is wrong.  They aren't affected, you can't break the
encryption on them, and therefore there is no backdoor.
 
 Possibilities:
 1) it is a hoax. Though with very low probability. The text seems to
 include a lot of work and makes perfect sense (REPE CMPS, all the
 assembly), i.e. we suppose it is highly improbable that somebody would
 make such hoax.

  It is not a hoax.  It is the work of an incompetent.  Like many of those who
invent perpetual motion machines, he genuinely believes that what he has done
is correct, but it isn't.  Unfortunately, but also very much like many of
those who invent perpetual motion machines, when this is pointed out to him he
assumes that everyone else is either stupid or malicious, rather than accept
that his theory has a massive flaw which completely undermines it.

  This can be either proven or disproven simply by
 checking the Win program using hex editor/debugger (using an already
 downloaded copy). I haven't had the time to check it yet (no Win).

  Actually, it can't, because the instructions he has given are not sufficient
to follow.  At the critical point, he says you must replace the bytes where
the disk encryption key is stored.  Unfortunately, he cannot tell you what to
replace them with, unless you already happen to have a copy of the bytes
representing that *exact* *same* disk encryption key stored *under* *a*
*known* *passphrase*, and that is why the only example on his website that
works is the one where you change the passphrase on a disk but don't
re-encrypt it.  He even admits that in all other cases you will extract
crap.

  Examine the instructions at
http://www.safehack.com/Advisory/pgp/PGPcrack.html#Two_Ways_to_bypass_PGP_SDA_Authentication

--quote--
Two Ways to bypass PGP SDA Authentication and EXTRACT with success


After spending a lot of time debugging and analyzing PGP SDA, we came up with
a conclusion that we can successfully extract the contents of PGP SDA in 2
ways.

1) Modifying the contents of the address 00890D70. (Screen Capture)

The modification should be done in:
0040598F |. E8 AC3D CALL filename_s.00409740

At: 00409740 /$ 8B4424 0C MOV EAX,DWORD PTR SS:[ESP+C]

At this point change the contents of 00890D70.

After the bytes change, you will have to bypass authentication. After
bypassing authentication you will be able to extract.

2) Modifying the contents of the address 00BAF670. (Screen Capture)

The Modification should be done in:
0040595F FF15 90324100 CALL DWORD PTR DS:[413290]

At: 004019DA /$ FF7424 08 PUSH DWORD PTR SS:[ESP+8]

At this point change the contents of 00BAF670.
NOTE: At this point if you change the contents of 00BAF670, you won't have to
bypass authentication, it will work like a charm, and it will grant
auth/extract.  
--quote--

  Notice the crucial phrases "At this point change the contents of 00890D70"
and "At this point change the contents of 00BAF670".  He gives you absolutely
no information about what it is that you need to change those bytes to.  Well, I can
tell you.  You have to change them to be the value of the disk encryption key
as encrypted by whatever passphrase you chose to enter.  You cannot do this
unless you already know the disk encryption key.

  In other words, if you already know the key to decrypt the disk, you can
decrypt the disk.  If you don't, however, you can't.

  Examine the writing a bit further down the page, where it says

--quote--
Accessing ANY PGP VIRTUAL Disk . (Need more testing and free time, Check
Debugging Notes at the end)

At this point you can add users change existing users passphrase Re-encrypt
disk and do other stuff. But when you try to access the disk you will get Disk
is not formatted. This is when you need to use your debugger.

--quote--

  Notice how he doesn't say what you need to *do* with the debugger, so let me
explain what he has skipped over:  Using only your debugger, you need to guess
the decryption key for the disk.  Think that's something you can do with a
debugger?

  

Fw: A security bug in PGP products?

2006-08-27 Thread Dave Korn
[ Originally tried to post this through gmane, but it doesn't seem to work;
apologies if this has been seen before. ]

Max A. wrote:
 Hello!
 
 Could anybody familiar with PGP products look at the following page
 and explain in brief what it is about and what are consequences of the
 described bug?


1.  The disk is encrypted using a long, secure, random, symmetric
encryption/decryption key (EDK for short).
2.  The EDK is encrypted with a passphrase and stored in a header at the
start of the encrypted disk.
3.  If you change the passphrase on the disk, it simply reencrypts the EDK
using the new passphrase.  It does not generate a new EDK and it does not
re-encrypt the entire disk.
4.  Therefore the EDK itself is still the same, and if you overwrite the new
header (with the EDK encrypted by the new passphrase) using a stored copy of
the old header (with the same EDK encrypted under the old passphrase), you
have effectively changed the passphrase back - without having to have
knowledge of the new passphrase - and can now regain access using the old
passphrase.  (A short illustration of this follows.)
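
  To make steps 1-4 concrete, here's a toy sketch in C.  It is purely
illustrative: the structure, the field names, and the XOR "cipher" and
additive "hash" are stand-ins of my own so that the fragment compiles; they
are not PGP's actual on-disk format or primitives.

#include <stdio.h>
#include <string.h>

#define EDK_LEN 16

struct header {
    unsigned char edk_enc[EDK_LEN];  /* EDK, encrypted under the passphrase */
    unsigned char pass_check;        /* "hash" of the passphrase            */
};

static unsigned char toy_hash(const char *pass)
{
    unsigned char h = 0;
    while (*pass) h += (unsigned char)*pass++;
    return h;
}

static void toy_crypt(unsigned char *out, const unsigned char *in,
                      const char *pass)
{
    unsigned char k = toy_hash(pass);
    for (int i = 0; i < EDK_LEN; i++) out[i] = in[i] ^ k;
}

/* Step 3: changing the passphrase only rewrites the header; the EDK itself,
 * and hence the bulk of the encrypted disk, never changes. */
static void change_passphrase(struct header *h, const unsigned char *edk,
                              const char *new_pass)
{
    toy_crypt(h->edk_enc, edk, new_pass);
    h->pass_check = toy_hash(new_pass);
}

int main(void)
{
    const unsigned char edk[EDK_LEN] = "0123456789abcde"; /* the random EDK */
    struct header hdr, saved;
    unsigned char recovered[EDK_LEN];

    change_passphrase(&hdr, edk, "old passphrase");
    saved = hdr;                 /* attacker keeps a copy of the old header */

    change_passphrase(&hdr, edk, "new passphrase");

    hdr = saved;                 /* step 4: write the old header back...    */
    toy_crypt(recovered, hdr.edk_enc, "old passphrase");

    printf("EDK recovered with old passphrase: %s\n",
           memcmp(recovered, edk, EDK_LEN) == 0 ? "yes" : "no");
    return 0;
}

  The point being that change_passphrase() only ever touches the header, so a
saved copy of the old header is all an attacker needs.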

  The guy who wrote that page posted a thread about it a while ago, I think
it was on FD or perhaps Bugtraq.  His interpretation is somewhat coloured by
his transparent belief that these are big corporate monstrosities and hence
must be evil.  His website is full of significant
exaggerations/inaccuracies; for instance, when he claims that you can break
the decryption using a debugger, he forgets to mention that this only
applies to a disk where you originally knew the passphrase and have since
changed it.  It's more of a usage/documentation issue, really; an end-user
might believe that changing the passphrase re-encrypted the entire disk
beyond their ability to retrieve it.


cheers,
  DaveK
--
Can't think of a witty .sigline today


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: A security bug in PGP products?

2006-08-27 Thread Dave Korn
Ondrej Mikle [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 Max A. wrote:
  Hello!
  
  Could anybody familiar with PGP products look at the following page
  and explain in brief what it is about and what are consequences of the
  described bug?
  
  http://www.safehack.com/Advisory/pgp/PGPcrack.html
  
 
 It seemed a bit obscure to me at first, but it says basically:
 
 PGPdisk does not use key derived from passphrase, just does simply this:
 
 if (somehash(entered_password) == stored_password_hashed) then 
 access_granted();
 
 That's the REPE CMPS chain instruction (string comparison). The check 
 can be simply skipped using debugger by interrupting the program, 
 changing CS:EIP (i.e. the place of execution) to resume after 
 successful check. The text probably implies that the key is stored 
 somewhere in the PGPdisk file and key's successful extraction does not 
 depend on knowledge of the passphrase.

  Nope.  Well, yes, the text does imply that, but the text is seriously wrong.
See my previous post for the full mechanism.  (Assuming the moderator lets it
through.)

  Given that whatever passphrase you enter will decrypt the EDK block into
/something/ that looks like a key, this comparison of hashes is a sanity
test.  If you bypass it but enter the wrong passphrase, you'll get an
incorrectly-decrypted EDK, which will make your disk look as though every
sector is full of random garbage.  Rather than decrypt the entire disk and
run chkdsk to see if it looks sane, comparing the hash of the passphrase is a
quick and dirty way of testing whether the resulting EDK is going to be the
correct one.
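
  As a sketch of that mount logic (again with toy stand-ins of my own for the
real KDF and cipher; none of these names are PGP's):

#include <string.h>

#define EDK_LEN 16

struct header {
    unsigned char edk_enc[EDK_LEN];  /* EDK encrypted under the passphrase   */
    unsigned char pass_check;        /* stored passphrase hash (the REPE CMPS
                                        comparison the advisory fixates on)  */
};

static unsigned char toy_hash(const char *p)
{
    unsigned char h = 0;
    while (*p) h += (unsigned char)*p++;
    return h;
}

static void toy_decrypt(unsigned char *out, const unsigned char *in,
                        const char *p)
{
    unsigned char k = toy_hash(p);
    for (int i = 0; i < EDK_LEN; i++) out[i] = in[i] ^ k;
}

/* Whatever passphrase is entered, the decryption below always yields *some*
 * bytes; the hash comparison only decides whether they are worth trusting. */
int mount_disk(const struct header *h, const char *entered_pass,
               unsigned char edk_out[EDK_LEN])
{
    toy_decrypt(edk_out, h->edk_enc, entered_pass);

    if (toy_hash(entered_pass) != h->pass_check)
        return -1;               /* sanity test failed                       */

    return 0;                    /* EDK is (almost certainly) correct        */
}

  Patching out the comparison with a debugger turns that -1 into a 0, but if
the passphrase was wrong, edk_out is still garbage, and every sector of the
disk "decrypts" to random-looking junk.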

  The author of the website did have this explained to him by someone from PGP
corp. on FD when he first reported this, but he failed to understand it, or
perhaps just refused to believe it.  Bypassing this check doesn't decrypt the
disk.

 So if you change passphrase, the disk won't get re-encrypted, just by 
 copypasting the old bytes you will revert to the old passphrase or you 
 can create another disk with passphrase chosen by you and use 
 copypasting method to decrypt other PGPdisk protected with passphrase.

  Yes to the first one, but no to the second, because when you create a disk
it will have an entirely new EDK, so replacing the header block with one from
a different disk will mean that, yes, you can enter the old passphrase, and
yes, that will pass the hash-comparison check, but the old EDK (that you
correctly decrypt with the correct passphrase) doesn't actually apply to the
encrypted data on the new disk, and the disk will look like gibberish.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Creativity and security

2006-03-24 Thread Dave Korn
J. Bruce Fields wrote:
 On Thu, Mar 23, 2006 at 08:15:50PM -, Dave Korn wrote:
   So what they've been doing at my local branch of Marks  Spencer
 for the past few weeks is, at the end of the transaction after the
 (now always chip'n'pin-based) card reader finishes authorizing your
 transaction, the cashier at the till asks you whether you actually
 /want/ the receipt or not; if you say yes, they press a little
 button and the till prints out the receipt same as ever and they
 hand it to you, but if you say no they don't press the button, the
 machine doesn't even bother to print a receipt, and you wander away
 home, safe in the knowledge that there is no wasted paper and no
 leak of security information  ...

   ... Of course, three seconds after your back is turned, the
 cashier can still go ahead and press the button anyway, and then
 /they/ can have your receipt.  With the expiry date on it.  And the
 last four digits of the card number.  And the name of the card
 issuer, which allows you to narrow the first four digits down to
 maybe three or four possible combinations.  OK, 10^8 still ain't
 easy, but it's a lot easier than what we started with.

 If all that information's printed on the outside of the card, then
 isn't this battle kind of lost the moment you hand the card to them?

1-  I don't hand it to them.  I put it in the chip-and-pin card reader 
myself.  In any case, even if I hand it to a cashier, it is within my sight 
at all times.

2-  If it was really that easy to memorize a name and the equivalent of a 
23-digit number at a glance without having to write anything down, surely 
the credit card companies wouldn't need to issue cards in the first place?

  IOW, unless we're talking about a corrupt employee with a photographic 
memory and telescopic eyes, the paper receipt I leave behind is the only 
place they could get any information about my card details.  This was of 
course not the case in the old days when your card was rolled over a receipt 
with multiple carbons, one of which was the retailer's copy that they needed 
to deposit with their bank; but things are a lot more secure now.  A debit
card transaction, authorised and completed online, leaves far less exposure;
so nowadays I reckon the remaining risks are the ones worth worrying about:
they /were/ relatively minor back when the retailer kept a hard copy of your
card details, but now that /that/ particular risk has been eliminated, they
are relatively the larger ones.

  Of course, a corrupt employee could conceivably replace the card reader
with a doctored one of their own, but since it would take major carpentry to
detach one from the cash till and counter to which it is firmly fixed,
I think that's a lot more likely to be noticed than an employee craftily 
pressing a little button and palming a receipt.  YMMV!

cheers,
  DaveK
-- 
Can't think of a witty .sigline today 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Creativity and security

2006-03-23 Thread Dave Korn
Olle Mulmo wrote:
 On Mar 20, 2006, at 21:51, [EMAIL PROTECTED] wrote:

 I was tearing up some old credit card receipts recently - after all
 these years, enough vendors continue to print full CC numbers on
 receipts that I'm hesitant to just toss them as is, though I doubt
 there
 are many dumpster divers looking for this stuff any more - when I
 found
 a great example of why you don't want people applying their
 creativity
 to security problems, at least not without a great deal of review.

 You see, most vendors these days replace all but the last 4 digits of
 the CC number on a receipt with X's.  But it must be boring to do the
 same as everyone else, so some bright person at one vendor(*) decided
 they were going to do it differently:  They X'd out *just the last
 four
 digits*.  After all, who could guess the number from the 10,000
 possibilities?

 Ahem.
  -- Jerry

 (*) It was Build-A-Bear.  The receipt was at least a year old, so for
 all I know they've long since fixed this.

 Unfortunately, they haven't. In Europe I get receipts with different
 crossing-out patterns almost every week.

 And, with they I mean the builders of point-of-sale terminals: I
 don't think individual store owners are given a choice.

 Though I believe I have noticed a good trend in that I get receipts
 where *all but four* digits are crossed out more and more often
 nowadays.

  In the UK, that is now the almost universal practice.  And it's almost
universally the /last/ four digits that are left visible, across all
retailers.  Which is good.
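
  For what it's worth, the sane masking rule is trivial to get right.  A toy
sketch, not any real POS terminal's code:

#include <stdio.h>
#include <string.h>

/* Mask everything *but* the last four digits, leaving spacing intact. */
static void mask_pan(char *pan)
{
    size_t len = strlen(pan);
    for (size_t i = 0; i + 4 < len; i++)
        if (pan[i] != ' ')
            pan[i] = 'X';
}

int main(void)
{
    char pan[] = "4929 1234 5678 9012";    /* made-up example card number */
    mask_pan(pan);
    printf("%s\n", pan);                   /* XXXX XXXX XXXX 9012         */
    return 0;
}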

  What is not so good, however, is another example of a
not-as-clever-as-it-thinks-it-is 'clever new idea' for addressing the problem
of receipts.

  As we all know, when you pay with a credit or debit card at a store, it's 
important to take the receipt with you, because it contains vital 
information - even when most of the card number is starred out, the expiry 
date is generally shown in full.  So we're all encouraged to take them with 
us, take them home, and shred or otherwise securely dispose of them under 
our own control.

  Of course, this is a) a nuisance and b) wasteful of paper.  And obviously 
enough, someone's been trying to come up with a 'bright idea' to solve these 
issues.

  So what they've been doing at my local branch of Marks  Spencer for the 
past few weeks is, at the end of the transaction after the (now always 
chip'n'pin-based) card reader finishes authorizing your transaction, the 
cashier at the till asks you whether you actually /want/ the receipt or not; 
if you say yes, they press a little button and the till prints out the 
receipt same as ever and they hand it to you, but if you say no they don't 
press the button, the machine doesn't even bother to print a receipt, and 
you wander away home, safe in the knowledge that there is no wasted paper 
and no leak of security information  ...

  ... Of course, three seconds after your back is turned, the cashier can 
still go ahead and press the button anyway, and then /they/ can have your 
receipt.  With the expiry date on it.  And the last four digits of the card 
number.  And the name of the card issuer, which allows you to narrow the 
first four digits down to maybe three or four possible combinations.  OK, 
10^8 still ain't easy, but it's a lot easier than what we started with.
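
  For anyone who wants the arithmetic behind that 10^8, a back-of-envelope
sketch; the number of candidate issuer prefixes is a guess, as above:

#include <stdio.h>

int main(void)
{
    int total_digits   = 16;    /* assumed length of the card number         */
    int shown_on_slip  = 4;     /* last four, printed on the receipt         */
    int known_prefix   = 4;     /* narrowed down from the issuer's name      */
    int prefix_guesses = 4;     /* "maybe three or four" candidate prefixes  */

    int unknown = total_digits - shown_on_slip - known_prefix;  /* 8 digits  */

    double per_prefix = 1;
    for (int i = 0; i < unknown; i++)
        per_prefix *= 10;                           /* 10^8 per prefix guess */

    printf("per candidate prefix: %.0f\n", per_prefix);
    printf("times %d candidate prefixes: %.0f\n",
           prefix_guesses, prefix_guesses * per_prefix);
    return 0;
}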

  The risk could perhaps be fixed with an interlock which makes it 
impossible to print the receipt out after the card has been withdrawn from 
the reader, but I think the better solution would still be for the receipt 
to be printed out every single time and the staff trained in the importance 
of not letting customers leave without taking their receipts with them.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: GnuTLS (libgrypt really) and Postfix

2006-02-15 Thread Dave Korn
Werner Koch wrote:
 On Mon, 13 Feb 2006 03:07:26 -0500, John Denker said:

 Again, enough false dichotomies already!  Just because error codes
 are open to abuse doesn't mean exiting is the correct thing to do.

 For Libgcrypt's usage patterns I am still convinced that it is the
 right decision.


  Then you should warn people that it is not safe to use in any 
high-privilege server application.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: GnuTLS (libgrypt really) and Postfix

2006-02-12 Thread Dave Korn
Werner Koch wrote:
 On Sat, 11 Feb 2006 12:36:52 +0100, Simon Josefsson said:

   1) It invoke exit, as you have noticed.  While this only happen
  in extreme and fatal situations, and not during runtime,
  it is not that serious.  Yet, I agree it is poor design to
  do this in a library.

 I disagree strongly here.  Any code which detects an impossible state
 or an error clearly due to a programming error by the caller should
 die as soon as possible.

  :-) Then what was EINVAL invented for?

 Sure, for many APIs it is possible to return an error code but this
 requires that the caller properly checks error codes.  We have all
 seen too many cases where return values are not checked and the
 process goes ahead assuming that everything went well - this might be
 okay for games but definitely not for cryptographic applications.

  Really it's never ok for anything, not even games, and any program that 
fails to check error return values is simply not properly coded, full stop.

  But abort()-ing in a library is also a big problem, because it takes 
control away from the main executable.  That can be a massive security 
vulnerability on Windows.  If you can get a SYSTEM-level service that
listens on a well-known pipe or LPC port to abort(), you can often steal
its pipe or port and escalate your privileges.  It would be far preferable
for the service to remain running in a main loop that operates as
...

... receive request from client
... fail to service it because libgcrypt returns an error
... return error to caller

... rather than for it to abort.
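
  Concretely, I mean a loop shaped something like the toy sketch below.  The
request/reply structures and crypto_op() are placeholders of mine, not any
real pipe/LPC server API or libgcrypt function; the point is only where the
library's *returned* error ends up.

#include <stdio.h>

struct request { int op; };
struct reply   { int status; };  /* 0 = ok, nonzero = error sent to client  */

/* Stand-in for a crypto-library call that can fail and reports it through a
 * return value instead of calling abort(). */
static int crypto_op(const struct request *req, struct reply *rep)
{
    (void)rep;
    return req->op == 0 ? 0 : -1;        /* pretend op 1 is the failing case */
}

int main(void)
{
    struct request queue[] = { {0}, {1}, {0} };  /* pretend incoming requests */

    for (size_t i = 0; i < sizeof queue / sizeof queue[0]; i++) {
        struct reply rep = {0};

        /* A failure inside the library becomes an error *reply*; the service
         * keeps running, so nobody gets to grab its pipe or port after an
         * abort(). */
        if (crypto_op(&queue[i], &rep) != 0)
            rep.status = 1;

        printf("request %zu -> status %d\n", i, rep.status);
    }
    return 0;
}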

 Libgcrypt tries to minimize these coding errors; for example there are
 no error returns for the RNG - if one calls for 16 bytes of random one
 can be sure that the buffer is filled with 16 bytes of random.  Now,
 if the environment is not okay and Libgcrypt can't produce that random
 - what shall we do else than abort the process.  This way the errors
 will be detected before major harm might occur.

  I'm afraid I consider it instead a weakness in your API design that you 
have no way to indicate an error return from a function that may fail.

 It is the same rationale why defining NDEBUG in production code is a
 Bad Thing.

  Perhaps libgcrypt could call abort in debug builds and return error codes 
in production builds?
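
  Something along these lines, say; this is just a sketch of the pattern, not
libgcrypt's actual macros or API:

#include <assert.h>
#include <stddef.h>

#define EXAMPLE_ERR_INTERNAL (-1)

#ifdef NDEBUG                          /* production build                   */
#  define CHECK(cond) do { if (!(cond)) return EXAMPLE_ERR_INTERNAL; } while (0)
#else                                  /* debug build: blow up immediately   */
#  define CHECK(cond) assert(cond)
#endif

/* A hypothetical library entry point using the pattern. */
int example_get_random(unsigned char *buf, size_t n)
{
    CHECK(buf != NULL);                /* caller bug: assert in debug builds, */
                                       /* error code in production builds     */

    for (size_t i = 0; i < n; i++)     /* placeholder "RNG": NOT random, just */
        buf[i] = (unsigned char)i;     /* here so the sketch does something   */

    return 0;
}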

cheers,
  DaveK
-- 
Can't think of a witty .sigline today 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: FWD: [IP] Encrypting Bittorrent to take out traffic shapers

2006-02-09 Thread Dave Korn
Alexander Klimov wrote:
 On Tue, 7 Feb 2006, Adam Fields wrote:
 Over the past months more Bittorrent users noticed that their ISP is
 killing all Bittorrent traffic.  ISP's like Rogers are using bit-
 shaping applications to throttle the traffic that is generated by
 Bittorrent.

 A side note is that they're using known insecure encryption methods
 as a cpu tradeoff because it doesn't matter if the traffic is
 decrypted eventually, as long as it can't be revealed in realtime.
 That's possibly shortsighted, but still interesting.

 Since one can easily encrypt 60 Mb/s with AES on a modern computer it
 does not matter what algorithm to use (unless, of course, you have a
 wider than 60 MB/s connection).

  Ah, but you haven't allowed for the fact that the ISP has to do this for 
/all/ the traffic from /all/ of their customers if they want to know if it's 
BT or not.

 BTW, if ISP really wants to slow down bittorrent it can use some other
 methods: there is usually constant port (6881, IIRC), and quite
 specific communication pattern.

  Indeed, they're likely to find a traffic-analysis method of doing this 
sooner or later, and then they won't have to bother about the encryption.

  cheers,
 DaveK
-- 
Can't think of a witty .sigline today 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]