Re: encrypted tapes (was Re: Papers about "Algorithm hiding" ?)

2005-06-09 Thread astiglic
> [EMAIL PROTECTED] writes:
>
>> I saw a lot of requirements by security auditors that looked pretty
>> silly.
>
> "Must use 128-bit RSA encryption" has to be the all-time favourite.
>
> One I saw recently was a requirement for using X9.17 key management... in
> SSL.
>
> Peter.

One of my favourites was that "PINs had to be hashed" (these were PINs
for authentication in a proprietary application/system).  The justification
given by the auditor was that people who had access to the database
should not be able to see the PINs in the clear.  These were 4-digit PINs,
so the developers just SHA-1'ed the PINs.  Later on, the developers had to
export the PINs into another application that had its own way to protect
the PINs, so they launched a brute-force attack on all of the PINs; of
course this was easy because the input space was very small and the hash
function did not involve any secret key, no salt, no iterations...  Talk
about protection!
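For illustration, that attack is trivial to reproduce: with no salt, no key,
and no iteration count, the 10,000-value PIN space falls in milliseconds.
A minimal sketch (the `crack_pin` helper is hypothetical, not from the thread):

```python
import hashlib

def crack_pin(stored_hash):
    """Exhaust all 10,000 4-digit PINs against an unsalted SHA-1 hash."""
    for pin in range(10000):
        candidate = f"{pin:04d}"
        if hashlib.sha1(candidate.encode()).hexdigest() == stored_hash:
            return candidate
    return None

# "Protecting" a PIN the way the developers did:
stored = hashlib.sha1(b"4923").hexdigest()
print(crack_pin(stored))  # recovers "4923" essentially instantly
```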

--Anton



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: encrypted tapes (was Re: Papers about "Algorithm hiding" ?)

2005-06-09 Thread astiglic
> [EMAIL PROTECTED] wrote:
>>>| Oracle, for example, provides encryption functions, but the real
>>> problem
>>>| is the key handling (how to make sure the DBA can't get the key,
>>> cannot
>>>| call functions that decrypt the data, key not copied with the backup,
>>>| etc.).
>>>| There are several solutions for the key management, but the vendors
>>>should
>>>| start offering them.
>>>
>>>I would argue that the real problem is that encryption slows large
>>>searches (is perceived to slow large searches, anyway.)
>>>
>>>Adam
>>
>>
>> Yes, encrypting indexed columns for example is a problem.  But if you
>> limit yourself to encrypting sensitive information (I'm talking about
>> stuff like SINs, bank account numbers, data that serves as an index to
>> external databases and is sensitive with respect to identity theft),
>> this sensitive information should not be the basis of searches.
>> If it is not the basis of searches, there will be no performance
>> problems related to encrypting it.
>
> If they are indexes to external databases, then they are the basis of
> searches in those databases, by definition.

My terminology might have been misleading.  By "indexes to external
databases", I don't mean that the application that uses the database
actually talks to the external databases (it doesn't use the info as a
key into those external databases).


Re: encrypted tapes (was Re: Papers about "Algorithm hiding" ?)

2005-06-09 Thread astiglic
> On Wednesday 08 June 2005 21:20, [EMAIL PROTECTED] wrote:
>> Yes, encrypting indexed columns for example is a problem.  But if you
>> limit yourself to encrypting sensitive information (I'm talking about
>> stuff like SINs, bank account numbers, data that serves as an index to
>> external databases and is sensitive with respect to identity theft),
>> this sensitive information should not be the basis of searches.
>> If it is not the basis of searches, there will be no performance
>> problems related to encrypting it.
>
> I can name at least one obvious case where "sensitive" data -- namely
> credit
> card numbers -- is in fact something you want to search on: credit card
> billing companies like CCbill and iBill.  Without the ability to search by
> CC#, customers are pretty screwed.
>
> That said, I will never buy the "only encrypt sensitive data" argument.
> In my
> experience, you *always* end up leaking something that way.

There are exceptions, I grant you that, but my hypothesis is that in most
cases you can do without indexing on the "sensitive" data you have.

Encrypting everything in your database, I say that will never work.  If
you do that, then you will have performance trouble, and nobody wants
that.  You can do things like encrypt everything at the OS level, but that
doesn't help protect database backups, and it doesn't prevent your DBA
from looking at data he's not supposed to see.  If you encrypt everything
at the DBMS level or at the application (client or middleware) level, then
you cannot encrypt indexed data.

--Anton





Re: encrypted tapes (was Re: Papers about "Algorithm hiding" ?)

2005-06-09 Thread Charles M. Hannum
On Thursday 09 June 2005 17:37, Charles M. Hannum wrote:
> If we assume that the last 4 digits have been exposed somewhere -- and they
> usually are -- then this gives you at most 38 bits -- i.e. 2^38 hashes to
> test -- to search (even a couple less if you know a priori which *brand* of
> card it is).  How long do you suppose this would take?

On reconsideration, given the presence of the check digit, I think you have at 
most 2^34 tests (or 2^32 if you know the brand of card).  And this assumes 
there aren't additional limitations on the card numbering scheme, which there 
always are.
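The check-digit factor is just the Luhn mod-10 algorithm: for any fixed
prefix, exactly one final digit in ten validates, which is where roughly
three more bits disappear from the search.  A quick sketch of standard
Luhn (not anything specific from the thread):

```python
def luhn_valid(number: str) -> bool:
    """Standard Luhn mod-10 check used by payment card numbers."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9       # same as summing the two digits of d
        total += d
    return total % 10 == 0

# Exactly one tenth of any candidate space passes the check:
hits = sum(luhn_valid(f"{n:016d}") for n in range(10000))
print(hits)  # 1000
```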

I guess you could use a keyed hash.  Remember, though, you can't use random 
padding if this is going to be searchable with a database index, so the 
amount of entropy you're putting in is pretty limited.
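A keyed hash along those lines could look like the sketch below
(HMAC-SHA256; the key name and where it lives are assumptions, not
anything from the thread).  As noted, it has to be deterministic to work
as a database index, so equal card numbers always yield equal tokens:

```python
import hashlib
import hmac

# Hypothetical server-side secret.  It must live outside the database
# (HSM, application config), or the keyed hash adds nothing.
INDEX_KEY = b"example-index-key"

def cc_index_token(cc_number: str) -> str:
    """Deterministic keyed token for indexing: without INDEX_KEY an
    attacker cannot mount the offline search at all."""
    return hmac.new(INDEX_KEY, cc_number.encode(), hashlib.sha256).hexdigest()

# Equal inputs give equal tokens (indexable), unlike randomized encryption:
print(cc_index_token("4111111111111111") == cc_index_token("4111111111111111"))
```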




Re: encrypted tapes (was Re: Papers about "Algorithm hiding" ?)

2005-06-09 Thread Charles M. Hannum
On Thursday 09 June 2005 16:41, you wrote:
> From: "Charles M. Hannum" <[EMAIL PROTECTED]>
>
> > I can name at least one obvious case where "sensitive" data -- namely
> > credit card numbers -- is in fact something you want to search on: credit
> > card billing companies like CCbill and iBill.  Without the ability to
> > search by CC#, customers are pretty screwed.
>
> Is there a good reason for not searching by the hash of a CC# ?

Are you joking?

If we assume that the last 4 digits have been exposed somewhere -- and they 
usually are -- then this gives you at most 38 bits -- i.e. 2^38 hashes to 
test -- to search (even a couple less if you know a priori which *brand* of 
card it is).  How long do you suppose this would take?

(Admittedly, it's pretty sketchy even if you have to search the whole CC# 
space -- but this is why you need to prevent the data being accessed in any 
form!)



Re: encrypted tapes (was Re: Papers about "Algorithm hiding" ?)

2005-06-09 Thread lists

From: "Charles M. Hannum" <[EMAIL PROTECTED]>

> I can name at least one obvious case where "sensitive" data -- namely credit 
> card numbers -- is in fact something you want to search on: credit card 
> billing companies like CCbill and iBill.  Without the ability to search by 
> CC#, customers are pretty screwed.

Is there a good reason for not searching by the hash of a CC# ?

http://www.wayner.org/books/td/

http://www.unixwiz.net/techtips/secure-cc.html
I think the author is planning further work on this site and
would be happy to receive constructive comments.



Re: "Retailers Experiment With Biometric Payment" article

2005-06-09 Thread Eugen Leitl
On Thu, Jun 09, 2005 at 12:02:20PM -0400, Adam Shostack wrote:

> Has anyone ever studied the reversibility of these algorithms?  It
> seems to me that you could make some plausible guesses and generate
> fingerprints from certain representations.  I don't know how likely
> those guesses are to be right.

The fingerprint hash (the fingerprint's fingerprint) has to be resistant
to rotation/translation, to differences in area size and subpattern
presence, and has to tolerate some skin-lesion noise, so it's the very
opposite of a cryptographic hash.

Probably quite easy to reverse.

-- 
Eugen* Leitl  http://leitl.org
__
ICBM: 48.07100, 11.36820  http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE




Re: "Retailers Experiment With Biometric Payment" article

2005-06-09 Thread Adam Shostack
On Thu, Jun 09, 2005 at 11:17:59AM -0400, Heyman, Michael wrote:
| From
| :

|   share its biometric data with government agencies, and 
|   in fact, the full fingerprints are not stored in the 
|   system. Instead, a complex mathematical algorithm is 
|   created to represent identifying characteristics of 
|   the fingerprint, which are matched to the real thing 
|   when a user shows up at a checkout counter.
|
| No discussion on the threat of finger removal...
| 

Has anyone ever studied the reversibility of these algorithms?  It
seems to me that you could make some plausible guesses and generate
fingerprints from certain representations.  I don't know how likely
those guesses are to be right.

Adam



"Retailers Experiment With Biometric Payment" article

2005-06-09 Thread Heyman, Michael
From
:

  "You can always get a new Social Security number, but 
  you certainly can't get a new thumbprint...," Lee [of 
  EFF] said...Robinson, of BioPay, argues that a personal 
  check written at a grocery store passes through eight 
  people before it is cashed, a process he considers much 
  less secure than a biometric payment, in which the 
  fingerprint image is connected immediately to the 
  user's bank account. "What can I do to hurt you if I 
  have a picture of the tip of your finger? Not much," 
  Robinson said, contending that associating fingerprints 
  with legal troubles is unwarranted. BioPay does not 
  share its biometric data with government agencies, and 
  in fact, the full fingerprints are not stored in the 
  system. Instead, a complex mathematical algorithm is 
  created to represent identifying characteristics of 
  the fingerprint, which are matched to the real thing 
  when a user shows up at a checkout counter.

No discussion on the threat of finger removal...

-Michael



Re: encrypted tapes

2005-06-09 Thread Richard Stiennon
I spent several years as just such a security auditor, for PwC.  While yes,
they do hire a bunch of kids out of MBA school, they also have extremely
experienced senior managers supervising them.  We always delved into
business processes as well as using "off the shelf" tools.  Invariably I
would find major flaws in the way security was implemented at utilities,
railroads, major banks, and computer manufacturers.


At Gartner I always advised my clients that if the purpose of the audit was
to find a bunch of stuff and fix it, then you should select a local boutique
firm, which will do a faster, more in-depth assessment and give you
actionable items to address at a very reasonable cost.  If your purpose in
doing a security audit is to convince the board of directors that you need
to invest more in security, then go with a big audit firm, because their
opinion carries much more weight.


Stiennon
blog:  www.threatchaos.com

At 10:14 AM 6/8/2005, Perry E. Metzger wrote:


[EMAIL PROTECTED] writes:
> One thing that irritates me is that most security audits (that verify
> compliance with regulations) are done by accountants.  No disrespect for
> accountants here, they are smart people, but most of them lack the
> security knowledge needed to really help with the security posture of a
> company, and often they don't work with a security expert.  I saw a lot of
> requirements by security auditors that looked pretty silly.
> I believe a mix of accountants and security experts should be used for
> security audits.

It is worse than that. At least one large accounting company sends new
recruits to a "boot camp" where they learn how to conduct "security
audits" by rote. They then send these brand new 23 year old "security
auditors" out to conduct security "audits", with minimal supervision
from a partner or two. The audits are inevitably of the lowest
possible quality -- they run automated security scanners no better
than open source ones you could download on your own, and they run
through checklists.  If an automated tool doesn't say there is a
problem, or if you obey the mindless checklist items, you pass.

Of course, for all the good such an "audit" does, you would as well
roll dice and claim that the output was somehow correlated with the
quality of your security infrastructure. Such an "audit" is totally
worthless except as a bureaucratic dodge. "We hired a world class
accounting company to check our security!" the executives can cry, "so
these security problems aren't our fault!" (Would that "fiduciary
responsibility" were not so often equated with "make sure there is
enough window dressing that we can't be blamed.")

By the way, selling such "audits" is extremely profitable, given the
discrepancy between the pay for the kids doing the audits and the
price the customer is charged. What is pathetic is not that companies
would try to foist such worthless services upon their customers, but
that their customers would willingly buy.

Incidentally, my understanding is that at least some accounting
companies use similar techniques for doing audits of the bookkeeping
practices at their customers, which makes them at least somewhat
consistent, if nearly useless to relying parties. When you hear things
to the effect that accounting audits can only detect unintended bad
process and not deliberate malfeasance, that's part of the reason why.

Perry





Richard Stiennon
The blog: http://www.threatchaos.com 






Re: AmEx unprotected login site

2005-06-09 Thread Amir Herzberg

Perry E. Metzger wrote:

> When I go to the SSL protected page, I can look at the URL and the
> lock icon in the corner before typing in my password.

Bless you for being so careful.  I, instead, look at the logo of the site
and of the CA as displayed in TrustBar.  This is much easier, and
protects me from subtle changes in the URL, e.g. homographic attacks,
from spoofed address bars, and from certificates granted without proper
validation, e.g. `domain validated` certificates.  I would expect every
security expert to use TrustBar (or another appropriate browser or browser
extension - but check they don't send each URL to their server).

> When you type in
> your password BEFORE the SSL connection, by the time you realize that
> it went to the wrong place, it is way too late.

If you realize it at all.  A phisher can easily keep you from noticing.

> I admit that not everyone will check the URL and the lock icon, but at
> least it is *possible* to train people to do the right thing on
> that. There is no way, effectively, to train people to be safe given
> the way that Amex is set up.

But you can protect your users with a proxy or a local TrustBar
installation, which, as argued above, can protect even naive or
unsuspecting users reasonably well.

--
Best regards,

Amir Herzberg

Associate Professor
Department of Computer Science
Bar Ilan University
http://AmirHerzberg.com

New: see my Hall Of Shame of Unprotected Login pages: 
http://AmirHerzberg.com/shame.html




Re: encrypted tapes

2005-06-09 Thread Adam Shostack
On Thu, Jun 09, 2005 at 08:57:51AM +0100, [EMAIL PROTECTED] wrote:
| 
| From: "Perry E. Metzger" <[EMAIL PROTECTED]>
| 
| > It is worse than that. At least one large accounting company sends new
| > recruits to a "boot camp" where they learn how to conduct "security
| > audits" by rote. They then send these brand new 23 year old "security
| > auditors" out to conduct security "audits", with minimal supervision
| > from a partner or two. The audits are inevitably of the lowest
| > possible quality -- they run automated security scanners no better
| 
| The worst security audit point I have ever seen came from KPMG and
| said that logging on as a particular non-root unix account got root
| access, based on the "WARNING! YOU ARE SUPERUSER" message seen at login.
| What they'd never done was check something like "sum /etc/shadow" to
| see whether it was permitted or denied, nor run "id" or similar checks.
| So when this user's home directory is absent and he ends up using
| / and /.profile (where the warning was in an echo statement) he gets
| this message on the screen.  So where they should be writing
| "misleading warning in some circumstances" they write "root access
| immediately available for common users".
| 
| I'm planning to teach a class of 5 existing internal auditors
| next month on some security s/w and I am going to include:
|- focusing on the more important stuff
|  (a long-running problem where I work)
|- you must prove it before you can report it
|- you must be able to state what is wrong with the observed state;
|  usually expressed as the policy point(s) violated
|  (just appearing in scanner output is not enough)
|- you should have some idea of one reasonable way to fix it

"Oh, no, that's a reasonable treatment of those revenues.  You have to
prove it's not before you can report on it."

So, while I am sympathetic to what you are saying, the job of audit is
to audit.  If the system says "You're root," fine, note it and move
on.

If, as an auditor, I need to "prove" each problem I find, then I'm
going depth-first, not breadth-first, and will miss important stuff.

I suggest a better fix is to have an interim audit report, which, with
the participation of senior technical people on both sides, becomes a
final audit report.  In that process, you could probably win the
/.profile argument.  However, auditors MUST be allowed to point out
whatever the hell they want.

Adam







Re: encrypted tapes

2005-06-09 Thread Florian Weimer
>- you must prove it before you can report it

I don't think this is a good policy in general.  Often, it's more
cost-effective to fix a potential vulnerability than to investigate it
in detail, construct a proof that it's real, and fix it.  This is
especially true in environments where changes can be deployed at
moderate cost.  (I know that there are others.)

To sum it up, I think it's fine to report potential problems as well,
but they have to be labeled as such (so that they receive the right
priority).



Re: AmEx unprotected login site

2005-06-09 Thread Ben Laurie

Perry E. Metzger wrote:

> Ben Laurie <[EMAIL PROTECTED]> writes:
>> Perry E. Metzger wrote:
>>> "Steven M. Bellovin" <[EMAIL PROTECTED]> writes:
>>>>> They're still doing the wrong thing. Unless the page was transmitted
>>>>> to you securely, you have no way to trust that your username and
>>>>> password are going to them and not to someone who cleverly sent you an
>>>>> altered version of the page.
>>>> They're doing the wrong thing, and probably feel they have no
>>>> choice.  Setting up an SSL session is expensive; most people who go
>>>> to their home page do not log in, and hence do not (to Amex)
>>>> require cryptographic protection.
>>> That's why Citibank and most well run bank sites have you click on a
>>> button on the front page to go to the login screen. There are ways to
>>> handle this correctly.
>> Why is this better? The button you click can just as easily take you
>> to a site other than the one intended.
>
> When I go to the SSL protected page, I can look at the URL and the
> lock icon in the corner before typing in my password. When you type in
> your password BEFORE the SSL connection, by the time you realize that
> it went to the wrong place, it is way too late.
>
> I admit that not everyone will check the URL and the lock icon, but at
> least it is *possible* to train people to do the right thing on
> that. There is no way, effectively, to train people to be safe given
> the way that Amex is set up.


But even if you have seen the lock and the URL, you are still vulnerable
to homograph attacks and to names that simply look right but aren't.  I
notice that AmEx have registered a _lot_ of names to make this hard, but
even they don't win, for example:


$ whois americanexpresscard.co.uk

Domain Name:
americanexpresscard.co.uk

Registrant:
Lantec Corporation

Registrant's Address:
8 Copthall
Roseau
Commonwealth of Dominica
00152
DM

Oops.

Cheers,

Ben.

--
>>>ApacheCon Europe<<<   http://www.apachecon.com/

http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff



Re: AmEx unprotected login site

2005-06-09 Thread Perry E. Metzger

Ben Laurie <[EMAIL PROTECTED]> writes:
> Perry E. Metzger wrote:
>> "Steven M. Bellovin" <[EMAIL PROTECTED]> writes:
>>
>>>> They're still doing the wrong thing. Unless the page was transmitted
>>>> to you securely, you have no way to trust that your username and
>>>> password are going to them and not to someone who cleverly sent you an
>>>> altered version of the page.
>>>
>>> They're doing the wrong thing, and probably feel they have no
>>> choice.  Setting up an SSL session is expensive; most people who go
>>> to their home page do not log in, and hence do not (to Amex)
>>> require cryptographic protection.
>> That's why Citibank and most well run bank sites have you click on a
>> button on the front page to go to the login screen. There are ways to
>> handle this correctly.
>
> Why is this better? The button you click can just as easily take you
> to a site other than the one intended.

When I go to the SSL protected page, I can look at the URL and the
lock icon in the corner before typing in my password. When you type in
your password BEFORE the SSL connection, by the time you realize that
it went to the wrong place, it is way too late.

I admit that not everyone will check the URL and the lock icon, but at
least it is *possible* to train people to do the right thing on
that. There is no way, effectively, to train people to be safe given
the way that Amex is set up.


Perry



Re: AmEx unprotected login site

2005-06-09 Thread Perry E. Metzger

"R. Hirschfeld" <[EMAIL PROTECTED]> writes:
>> From: "Perry E. Metzger" <[EMAIL PROTECTED]>
>> Date: Wed, 08 Jun 2005 19:01:37 -0400
>
>> The other major offenders are organizations (such as portions of
>> Verizon) that subcontract payment systems to third parties. They are
>> training their users to expect to be directed to a site they don't
>> recognize to enter in their credit card information. "Really! This is
>> your vendor's payment site! Pay no attention to the URL and
>> certificate!"
>> 
>> That one in particular takes amazing brains...
>
> For Verizon maybe, but there are plenty of Mom and Pop internet
> merchants for which it is arguably more secure to do it this way.  The
> merchant never sees the customer's payment information and thus
> needn't know how to properly protect it, and one-time shoppers may not
> know/trust the merchant anyway.  If the redirect is from a secure
> merchant site to a secure payment provider site, and the merchant site
> informs users where they will be redirected, is this so bad?

If the merchant site is secured by SSL, and prominently says that you
will be redirected to a given provider, it is perhaps not so bad in
theory. However, in practice, this fails the "simple rules my mom can
follow" test. I'd rather that they hand a short term cert and DNS
delegation to their processing partner.

What I want to be able to do is tell my mom something dead simple,
like "never enter your username and password or credit card
information unless the web page is the one you are expecting, and it
has the "lock icon" in the corner and the lock icon doesn't look like
someone was faking it."

Now, we face two major problems here.

1) Every complication you add on top of that means that you're
training lots and lots of very naive users to do things that are
potentially unsafe. Training users to expect to do unsafe things (like
what Amex or what Verizon are doing) is bad, because then they won't
notice in the future when they are asked to do something unsafe by a
bad guy. 

Fidelity, to my mind, is a model of good user training. They have a
set of very good web pages (see
http://personal.fidelity.com/accounts/services/findanswer/content/security/minimize_risk.shtml
and others) that give users excellent advice on never entering
passwords in on pages that didn't arrive encrypted, never emailing
personal information, etc. They allow customers to avoid ever exposing
social security numbers to customer service reps, encourage users to
use those services, etc. Their login page itself comes SSL
encrypted. There may be other security problems they have, but
encouraging users to do unsafe things isn't one of them.

Now, here they (and I and others) go, trying hard to educate users
about what the right thing is, and others go around forcing users to
do the wrong thing just to get their day to day business done! After a
while, people's defenses drop because they're being constantly trained
to do the wrong thing.

2) The other issue is that the browser accepts certs from so many CAs,
many of which have effectively no security. There are ways to fix this
long term, but that is a whole separate discussion.

-- 
Perry E. Metzger[EMAIL PROTECTED]



Re: de-identification

2005-06-09 Thread Matt Crawford

On Jun 8, 2005, at 15:19, [EMAIL PROTECTED] wrote:

> I'd like to come up to speed on the state of the
> art in de-identification (~=anonymization) of data
> especially monitoring data (firewall/hids logs, say).

I don't know the state of the art, but I can tell you the state of the
artless.  I had a request to share our border router traffic logs
(Cisco netflow) with a university, so they could try out some anomaly
detection schemes they were working on.


(Bkgnd: We don't consider our network topology sensitive. Our traffic 
logs are subject to a general respect for privacy.)


Since they could send us packets of their choosing, I deemed it useless 
to obfuscate our own IP addresses.  I chose to anonymize all the 
external addresses.  My design note is below.


But then, as fate would have it, the university said they needed the 
true external addresses.  That left me a bit stumped.  Perhaps a less 
chaotic mapping, like one that is bijective between classful network 
numbers, would do.



obfuscation filter program

  Parameters
    Blocks of IP addresses deemed internal.  Internal includes multicast
    addresses and RFC 1918 "private use" addresses.

  Working data preserved across runs
    For each date, a database of (true address, substituted address)
    pairings.

  Algorithms
    Substituted addresses are pseudo-random, formed by MD5-hashing a
    string (S | D | A | N) and taking the first 32 bits.
      S = fixed secret hash seed, long term
      D = date of data, in MMDD format
      A = the true address being substituted
      N = integer, starting at 0 and incremented if resulting address
          is an internal one or a collision.

to obfuscate an IP address: {
  if it's internal, return it unchanged.  otherwise
   is a substitute already assigned?  If so, return it.  otherwise
for ( done = N = 0; !done; N++ ) {
  generate substitute address by hashing as above
  if ( !collision ) done = 1
}
save forward & reverse mappings
}

for each netflow record {
  i = 0
  if ( src is external ) {
obfuscate src; i++
  }
  if ( dst is external ) {
obfuscate dst; i++
  }
  if ( i != 1 ) log an unusual condition
  write output
}

Scripts:

  generator loops over input files, applying obfuscator, writing temp-named
  output file, then renaming completed output file to permanent name.

  mover looks for completed output files, copies them to destination, then
  looks for more, sleeping and retrying if there are none.

Other notes:

  The obfuscated mappings can be regenerated at will if exactly the same data
  is processed in the same sequence, and the secret hash seed is known.
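As a runnable illustration of the design note above (file handling and the
per-date persistence are omitted; the secret, date string, and internal
blocks are placeholder values, not the real ones):

```python
import hashlib
from ipaddress import ip_address, ip_network

SECRET = b"long-term-secret"   # S: placeholder seed
DATE = b"20050609"             # D: placeholder date string
INTERNAL = [ip_network("10.0.0.0/8"), ip_network("172.16.0.0/12"),
            ip_network("192.168.0.0/16"), ip_network("224.0.0.0/4")]

forward = {}   # true address -> substituted address
reverse = {}   # substituted address -> true address

def is_internal(addr):
    return any(addr in net for net in INTERNAL)

def obfuscate(addr_str):
    addr = ip_address(addr_str)
    if is_internal(addr):
        return addr_str                  # internal: return unchanged
    if addr_str in forward:
        return forward[addr_str]         # substitute already assigned
    n = 0
    while True:
        # hash S | D | A | N and take the first 32 bits as the substitute
        h = hashlib.md5(SECRET + DATE + addr.packed + str(n).encode()).digest()
        sub = ip_address(int.from_bytes(h[:4], "big"))
        if not is_internal(sub) and str(sub) not in reverse:
            forward[addr_str] = str(sub)
            reverse[str(sub)] = addr_str
            return str(sub)
        n += 1                           # internal or collision: retry

print(obfuscate("10.1.2.3"))                          # internal: unchanged
print(obfuscate("8.8.8.8") == obfuscate("8.8.8.8"))   # deterministic: True
```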




Re: AmEx unprotected login site

2005-06-09 Thread Amir Herzberg

Ivars Suba responded to me:

>> 1. This doesn't have any effect on non-SSL-protected sites (e.g.
>> AmEx,... see `Hall of Shame`). And of course assumes users will notice
>> the use of a non-SSL site...
>
> Vowww.. I didn't know that AmEx is not SSL protected ;))
>
> Before user credentials are passed to the site, the site certificate is
> sent to the client browser, then the certificate is accepted/denied, and
> the SSL tunnel is established/access denied.

As is clearly stated in the messages you referred to, we all know Amex's
site invokes SSL to encrypt the password.  The problem is that a fake Amex
site would not, and the user has no way to distinguish.  Essentially,
Amex's site is secure against the (unlikely) eavesdropper, but not against
the (much more likely) spoofer or the stronger (but possible) MITM.

> Is this site SSL protected? The Hall of Shame isn't so "shamy" ;)

So, my claims in the Hall of Shame remain.  Or do you want to defend the
Amex process?  That will be interesting.

> ...
> Keep a CA whitelist in the SSL termination proxy, and deny all others
> (including self-signed site certs).

You could of course do this filtering without also terminating the
tunnel at your proxy.  I agree that such filtering (without breaking the
tunnel) is an advisable thing to do.

> 80% of users don't know what the certificate is.  Imho, much better to
> trust this task to an SSL termination proxy...

I agree most users don't know what a CA does and what a PK cert is.
But my intuition - and research - show that they can learn very quickly
if we use simple words instead of jargon.  In TrustBar, we display the
name/logo of the site, followed by the words `identified by` and the
name/logo of the CA.  Our (limited) testing shows users understand this
very well.  And of course this does not prevent you from also blocking in
a proxy any CAs you don't trust.  Let the user decide among those you
can't rule out.

--
Best regards,

Amir Herzberg

Associate Professor
Department of Computer Science
Bar Ilan University
http://AmirHerzberg.com

New: see my Hall Of Shame of Unprotected Login pages: 
http://AmirHerzberg.com/shame.html




Re: AmEx unprotected login site

2005-06-09 Thread Ben Laurie

Perry E. Metzger wrote:

> "Steven M. Bellovin" <[EMAIL PROTECTED]> writes:
>>> They're still doing the wrong thing. Unless the page was transmitted
>>> to you securely, you have no way to trust that your username and
>>> password are going to them and not to someone who cleverly sent you an
>>> altered version of the page.
>> They're doing the wrong thing, and probably feel they have no choice.
>> Setting up an SSL session is expensive; most people who go to their
>> home page do not log in, and hence do not (to Amex) require
>> cryptographic protection.
> That's why Citibank and most well run bank sites have you click on a
> button on the front page to go to the login screen. There are ways to
> handle this correctly.

Why is this better? The button you click can just as easily take you to
a site other than the one intended.


--
>>>ApacheCon Europe<<<   http://www.apachecon.com/

http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff



Re: encrypted tapes

2005-06-09 Thread Dirk-Willem van Gulik


On Wed, 8 Jun 2005, Perry E. Metzger wrote:

> Dan Kaminsky <[EMAIL PROTECTED]> writes:

> > Yes, because key management is easy or free.

Eh - my experience is that that is where 99% of the cost is - in the whole
human procedures and vetting around it. The paperwork, the auditing,
dealing with corporate power shuffles, getting 'hard' retention rules out
of the responsible people and their conflicting advisors, etc.

> If you have no other choice, pick keys for the next five years,
> changing every six months, print them on a piece of paper, and put it
> in several safe deposit boxes. Hardcode the keys in the backup

We've been doing systems much like this, with the added twists that a) data
is encrypted under a key matching its retention policy, b) every
month or so certain private keys are destroyed as the data keyed to them
has reached its limit, and c) the keys are stored (with a recovery scheme)
on tamperproof Dallas iButtons (which have a reliable clock), which
simplifies both operations (destroying keys at the right time) and trust
(no need to trust the key maker).
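
A minimal sketch of this retention-by-key-destruction idea (often called crypto-shredding), assuming Python's third-party `cryptography` package; the in-memory vault and bucket naming are hypothetical stand-ins for the iButton storage described above:

```python
from cryptography.fernet import Fernet, InvalidToken

# Hypothetical key vault: one symmetric key per retention bucket
# (e.g. one per expiry month). In the scheme above these would live
# on tamperproof hardware with a reliable clock.
vault: dict[str, bytes] = {}

def key_for(bucket: str) -> Fernet:
    # Create the bucket key on first use.
    if bucket not in vault:
        vault[bucket] = Fernet.generate_key()
    return Fernet(vault[bucket])

def store(record: bytes, bucket: str) -> bytes:
    return key_for(bucket).encrypt(record)

def shred(bucket: str) -> None:
    # Destroying the bucket key makes every record encrypted under it
    # unrecoverable -- the retention limit enforced cryptographically.
    vault.pop(bucket, None)

ct = store(b"customer record", "2005-12")
shred("2005-12")
# A fresh key for the same bucket cannot decrypt the old ciphertext:
try:
    key_for("2005-12").decrypt(ct)
except InvalidToken:
    print("record unrecoverable")
```

The operational win is the one Dirk-Willem describes: "delete at the right time" becomes "destroy one small key", rather than tracking down every tape that holds the data.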

> Er, no. An error in CBC wipes out only the following block. Errors do
> not propagate past that in CBC. This is not especially worse than the
> situation right now.

And in actual practice we do not see this in the real world. We -do- see
serious issues with the compression used inside the drives, though.
Specialists can help you - and the data you get back from them can then be
decrypted. The fact that it is opaque is not a problem for those recovery
experts.

Dw.



Re: AmEx unprotected login site

2005-06-09 Thread Amir Herzberg

Few comments on what Ivars Suba wrote:

How to fight against phishing in organization enviroment?
Quite easy- put SSL termination Proxy between client browser and SSL
server:

Sure, but:
1. This doesn't have any effect on non-SSL-protected sites (e.g. 
AmEx... see `Hall of Shame`). And of course it assumes users will notice 
the use of a non-SSL site...


2. This assumes that the problem is `untrusted site certificates`. Is 
it? Which CAs would you NOT accept anymore? In particular, would you now 
reject all `domain validated certificates` (about 25% of SSL sites, I've 
heard)? Much better, imho, to give the information to the user, possibly 
warning against (or blocking) certs from a CA you know to be bad.


3. This solution takes advantage of the fact that users don't have any 
idea which CA they trust... which is true but very bad, breaking the 
trust model. I think it is better to make the CA visible to the user 
(but in a way users can understand - I believe we have that with TrustBar).






Re: AmEx unprotected login site

2005-06-09 Thread R. Hirschfeld
> From: "Perry E. Metzger" <[EMAIL PROTECTED]>
> Date: Wed, 08 Jun 2005 19:01:37 -0400

> The other major offender are organizations (such as portions of
> Verizon) that subcontract payment systems to third parties. They are
> training their users to expect to be directed to a site they don't
> recognize to enter in their credit card information. "Really! This is
> your vendor's payment site! Pay no attention to the URL and
> certificate!"
> 
> That one in particular takes amazing brains...

For Verizon maybe, but there are plenty of Mom and Pop internet
merchants for which it is arguably more secure to do it this way.  The
merchant never sees the customer's payment information and thus
needn't know how to properly protect it, and one-time shoppers may not
know/trust the merchant anyway.  If the redirect is from a secure
merchant site to a secure payment provider site, and the merchant site
informs users where they will be redirected, is this so bad?

Ray



Re: encrypted tapes

2005-06-09 Thread lists

From: "Perry E. Metzger" <[EMAIL PROTECTED]>

> It is worse than that. At least one large accounting company sends new
> recruits to a "boot camp" where they learn how to conduct "security
> audits" by rote. They then send these brand new 23 year old "security
> auditors" out to conduct security "audits", with minimal supervision
> from a partner or two. The audits are inevitably of the lowest
> possible quality -- they run automated security scanners no better

The worst security audit point I have ever seen came from KPMG. It
said that logging on as a particular non-root unix account got root
access, based on the "WARNING! YOU ARE SUPERUSER" message seen at login.
What they never did was check something like "sum /etc/shadow" to
see whether it was permitted or denied, nor run "id" or similar checks.
The user's home directory was absent, so he ended up using
/ and /.profile (where the warning was in an echo statement) and got
this message on the screen.  So where they should have written
"misleading warning in some circumstances", they wrote "root access
immediately available for common users".
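
The checks the auditors skipped are easy to script. A minimal sketch in Python (the helper name is illustrative; `os.geteuid` is Unix-only):

```python
import os

def really_root() -> bool:
    """A login banner proves nothing; check the effective UID (the
    equivalent of running "id") and whether a root-only file is
    actually readable (the "sum /etc/shadow" test)."""
    if os.geteuid() == 0:
        return True
    try:
        # /etc/shadow is unreadable by ordinary users on most systems;
        # a non-root account should get "permission denied" here.
        with open("/etc/shadow", "rb"):
            return True
    except OSError:
        return False

print("root access:", really_root())
```

Either check alone would have turned the KPMG finding from "root access immediately available" into "misleading warning in some circumstances".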

I'm planning to teach a class of 5 existing internal auditors
next month on some security s/w and I am going to include:
   - focussing on the more important stuff
 (a long-running problem where I work)
   - you must prove it before you can report it
   - you must be able to state what is wrong with the observed state;
 usually expressed as the policy point(s) violated
 (just appearing in scanner output is not enough)
   - you should have some idea of one reasonable way to fix it



Re: AmEx [add: and other] unprotected login site

2005-06-09 Thread Amir Herzberg
Perry: I share your feelings in this matter, great message (but I made 
some comments, see below). I'll appreciate the relevant Verizon URL so 
I'll add them to the Hall of Shame. Notice I already have several banks 
there, including Chase (which you also mentioned), and brokers, 
including CitiGroup's SmithBarney... And security companies including MS 
 Passport and EquiFax... More examples welcome (I'll also add 
`contributors` gladly). Thanks, Amir


Perry E. Metzger wrote:

"Steven M. Bellovin" <[EMAIL PROTECTED]> writes:


That's why Citibank and most well run bank sites have you click on a
button on the front page to go to the login screen. There are ways to
handle this correctly.


There's an attack there, too -- one can divert the link to the login 
screen.


Certainly, but at least then, the URL and the certificate won't point
at Amex (or whomever). If you train your users properly, then they can
avoid trouble even then.


Agreed. SSL is designed to protect against a MITM attacker, not a mere 
eavesdropper (for protecting only against eavesdroppers, we don't need 
certificates, DH would suffice, right?). Indeed, current browser 
security indicators are terrible, but that's why we do all this work on 
secure usability, resulting in improved indicators (our TrustBar, and 
a host of others by now; every competent security person should use one 
and take care it's doing a good job and not violating privacy). I firmly 
hope and believe this will soon be adopted by browsers - the IE people 
essentially told me they will, and some new browsers (NS, Opera) have 
already improved to some extent.


In the current case, by the time you see that there is a problem, it
is too late. Furthermore, you're training your users to engage in a
bad behavior. This is no different than Microsoft training their users
to mindlessly open .exe files for years and years, only to reap the
whirlwind when email viruses came along.


Well, of course, Microsoft are also training their users to enter 
passwords into an unprotected login page, just like Amex - see the entry 
for MS Passport in the Hall of Shame... And btw, I had a long dialog 
with an exec in MS about it, and she actually _agreed_ and promised to 
fix it long ago; I'll ask her again how come nothing has happened...


The right behavior to encourage for people is "never enter in your
userid and password for an important account on a page that you don't
trust". They're training people to do the opposite.


The other major offender are organizations (such as portions of
Verizon) that subcontract payment systems to third parties. They are
training their users to expect to be directed to a site they don't
recognize to enter in their credit card information. "Really! This is
your vendor's payment site! Pay no attention to the URL and
certificate!"

That one in particular takes amazing brains...

Examples will be added to the Hall of shame...




It's a tough problem: they want to outsource the payment processing, 
but don't have the infrastructure to do so properly.



They could delegate a "payments.verizon.com" DNS entry and hand the
processor a "payments.verizon.com" certificate, with an expiry date
quite similar to the date when their contract is up for renewal.

I'd like to make my position on one thing here really clear, by the
way.

Since when is it considered acceptable to slack on fiduciary
responsibility on the excuse that it is annoying and requires effort?
No one would accept a bank saying "accounting is boring, and hard to
do right, so we aren't going to keep track of your balance very well
any more." No one would accept "we've decided that paying for a proper
vault is expensive, so we're keeping your safe deposit box in the men's
room." How is proper network security any different? This is a
BANK. Keeping your money secure is what they are paid to do!


Absolutely, and I've also confirmed this with a few lawyers...


Yes, it takes thought, planning, and some skill to have online
security for a financial institution, but no one is obligated to own
or run a bank. If you run a mortuary, you will have to deal with
corpses. If you run a bank, you have to be mindful of security in
handling money.

As for merchants like Verizon, there is really no excuse
for being unable to figure out how to process online credit
card payments safely, whether on their own or through a contractor. No
one obligates them to be in business, but if they're going to be, they
have a duty to do things like keeping accurate customer accounts,
paying their taxes, keeping track of who their shareholders are, and,
yes, making sure that they deal with credit card acceptance
non-hazardously. I know it is all a pain in the ass, but if one wants
an easier life, one should be a subsistence farmer instead of a
multinational corporation.

Sure, I'd love not to have to deal with the annoying things I have to
deal with, and I'd love not to have to pay my mortgage on time, and
I'd love a pony and a mountain of gold.

Re: AmEx unprotected login site (was encrypted tapes, was Re: Papers about "Algorithm hiding" ?)

2005-06-09 Thread Amir Herzberg
Ken, you are correct (see below). And in fact, if the page came from the 
right source (as validated by SSL and a secure browser extension such as 
TrustBar), I don't think there is any need to inspect the page source 
(which is impractical even for the geekiest geek). After all, if a site is 
so clueless as to send you corrupted scripts, it may as well publish your 
password directly...


Best, Amir Herzberg

Ken Ballou wrote:
Unless I misunderstand, the problem is that I can not determine where my

login information will go without examining the source of the login
page.  Sure, the form might be posted to a server using https.  But,
without examining the source of the login page, I won't be able to look
at the certificate for the site to which my credentials have been sent
until it's too late.

It's still the case that if I retrieve the original login form via
https, I have to examine the page source to see to which server the form
will be posted.  But I can examine the certificate of the site from
which I got the form originally to determine whether this is a phishing
attack.  If the login form itself can be shown to have come from an AmEx
server, I'm probably more comfortable trusting that my credentials are
going to the right server.

Do I completely misunderstand?

- Ken








Re: AmEx unprotected login site

2005-06-09 Thread Peter Gutmann
"Perry E. Metzger" <[EMAIL PROTECTED]> writes:
>"Steven M. Bellovin" <[EMAIL PROTECTED]> writes:
>>>They're still doing the wrong thing. Unless the page was transmitted
>>>to you securely, you have no way to trust that your username and
>>>password are going to them and not to someone who cleverly sent you an
>>>altered version of the page.
>>
>> They're doing the wrong thing, and probably feel they have no choice.
>> Setting up an SSL session is expensive; most people who go to their
>> home page do not log in, and hence do not (to Amex) require
>> cryptographic protection.
>
>That's why Citibank and most well run bank sites have you click on a button
>on the front page to go to the login screen. There are ways to handle this
>correctly.

I was just going to mention this myself because I've noticed local banks doing
it: you click on some "log in for online banking" link and get to an HTTPS
login page that's distinct from the HTTP main page.  For Mozilla/Firefox
users, grab a copy of the TargetAlert extension and you'll see this on the
originating page: TargetAlert will tag the login link with the "opens in new
window" indicator and the "HTTPS" indicator (the usual yellow padlock).  When
you've got TargetAlert installed, go to e.g. http://www.asbbank.co.nz/ to see
this.

Peter.



Re: encrypted tapes (was Re: Papers about "Algorithm hiding" ?)

2005-06-09 Thread Peter Gutmann
[EMAIL PROTECTED] writes:

>I saw a lot of requirements by security auditors that looked pretty silly.

"Must use 128-bit RSA encryption" has to be the all-time favourite.

One I saw recently was a requirement for using X9.17 key management... in SSL.

Peter.



Re: encrypted tapes (was Re: Papers about "Algorithm hiding" ?)

2005-06-09 Thread Ben Laurie

[EMAIL PROTECTED] wrote:

| Oracle, for example, provides encryption functions, but the real problem
| is the key handling (how to make sure the DBA can't get the key, cannot
| call functions that decrypt the data, key not copied with the backup,
| etc.).
| There are several solutions for the key management, but the vendors
should
| start offering them.

I would argue that the real problem is that encryption slows large
searches (is perceived to slow large searches, anyway).

Adam



Yes, encrypting indexed columns for example is a problem.  But if you
limit yourself to encrypting sensitive information (I'm talking about
stuff like SINs, bank account numbers, data that serves as an index to
external databases and is sensitive with respect to identity theft),
this sensitive information should not be the basis of searches.
If it is not the basis of searches, there will be no performance
problems related to encrypting it.


If they are indexes to external databases, then they are the basis of 
searches in those databases, by definition.



So my answer to people who have the perception you mentioned is that if
you want to encrypt sensitive information and that would cause performance
problems, then there are problems with your data architecture, privacy-wise
(you should re-structure your data, use it differently, etc.).


Not a very satisfactory answer.

--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff



Re: encrypted tapes

2005-06-09 Thread Bill Frantz
On 6/8/05, [EMAIL PROTECTED] (Perry E. Metzger) wrote:

>If you have no other choice, pick keys for the next five years,
>changing every six months, print them on a piece of paper, and put it
>in several safe deposit boxes. Hardcode the keys in the backup
>scripts. When your building burns to the ground, you can get the tapes
>back from Iron Mountain and the keys from the safe deposit box.

I think I would be tempted to keep a private key in those safe deposit boxes, 
and when writing the backup tape, pick a "random" (as best you can with the 
hardware and software available) session key, encrypt it using the public key 
hard-coded in the backup procedure, and write the encrypted result as the first 
part of the backup.  This procedure allows you to keep your secrets hidden 
away, at least until you need to use one of the tapes.
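
A sketch of this envelope scheme, assuming Python's third-party `cryptography` package; the key size, Fernet cipher, and length-prefix framing are illustrative choices, not a vetted backup format:

```python
import struct

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# RSA-OAEP padding for wrapping the session key.
OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def write_backup(public_key, plaintext: bytes) -> bytes:
    # Fresh session key per tape; only the wrapped copy is written out.
    session_key = Fernet.generate_key()
    wrapped = public_key.encrypt(session_key, OAEP)
    body = Fernet(session_key).encrypt(plaintext)
    # Length-prefixed wrapped key first, then the encrypted backup body.
    return struct.pack(">I", len(wrapped)) + wrapped + body

def read_backup(private_key, tape: bytes) -> bytes:
    # Unwrap the session key with the safe-deposit-box private key.
    (n,) = struct.unpack(">I", tape[:4])
    session_key = private_key.decrypt(tape[4:4 + n], OAEP)
    return Fernet(session_key).decrypt(tape[4 + n:])

priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
tape = write_backup(priv.public_key(), b"backup contents")
assert read_backup(priv, tape) == b"backup contents"
```

The backup script only ever holds the public key, so compromising the backup host (or the tapes) reveals nothing; the private key stays in the boxes until a restore is actually needed.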

Cheers - Bill

IP note:  This technique is so obvious to any practitioner skilled in the art 
as to be non-patentable (except in the USA, where obviousness is no barrier).  
In any case I put it into the public domain.

---
Bill Frantz| gets() remains as a monument | Periwinkle 
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032



Re: encrypted tapes

2005-06-09 Thread Jason Holt


On Wed, 8 Jun 2005, Perry E. Metzger wrote:


Dan Kaminsky <[EMAIL PROTECTED]> writes:

2) The cost in question is so small as to be unmeasurable.


Yes, because key management is easy or free.


In this case it is. As I've said, even having all your tapes for six
months at a time use the same key is better than putting the tapes in
the clear.

If you have no other choice, pick keys for the next five years,
changing every six months, print them on a piece of paper, and put it
in several safe deposit boxes. Hardcode the keys in the backup
scripts. When your building burns to the ground, you can get the tapes
back from Iron Mountain and the keys from the safe deposit box.

[...]

If in-transit attacks are the real problem, just email/fax/phone the key when 
you ship the tapes, and have them stick it in the box when it arrives.


-J



Re: encrypted tapes (was Re: Papers about "Algorithm hiding" ?)

2005-06-09 Thread Jason Holt


On Wed, 8 Jun 2005, David Wagner wrote:
[...]

That said, I don't see how adding an extra login page to click on helps.
If the front page is unencrypted, then a spoofed version of that page
can send you to the wrong place.  Sure, if users were to check SSL
certificates extremely carefully, they might be able to detect the funny
business -- but we know that users don't do this in practice.

Dan Bernstein has been warning of this risk for many years.
http://cr.yp.to/djbdns/bugtraq/[EMAIL PROTECTED]
http://cr.yp.to/dnscache/bugtraq/[EMAIL PROTECTED]

As far as I can tell, if the front page is unencrypted, and if the
attacker can mount DNS cache poisoning, "pharming", or other web spoofing
attacks -- then you're hosed.  Did I get something wrong?


Well, yes.  TLS guarantees that you're talking to the website listed in the 
location bar.  Knowing what domain you *wanted* is up to you, and Dan handles 
that by suggesting that perhaps you have a paper brochure from the bank which 
lists their domain.


So, it's fine to have http://amex.com link to https://amex.com (or 
whatever.com) for forms requesting anything sensitive as long as amex.com (or 
whatever.com) is what's printed in the brochure.  As Dan points out, 
examination of the certificate is generally pointless as long as it's signed 
by a trusted CA, since the attacker can get a perfectly valid cert for 
hackers-r-us.com anyway.  The big question is just whether the domain asking 
for your account info corresponds with the organization you trust with it.


Of course, brochures aren't exactly hard to spoof (cf. Verisign's fraudulent 
domain renewal postcards).  And then there are the dozens of CAs your browser 
accepts, the CA staff who issue microsoft.com certs to random passersby, 
international domain names that look identical to, er, national ones.  All 
those gotchas apply even in the "correct" implementation outlined by Dan.


-J



Re: encrypted tapes (was Re: Papers about "Algorithm hiding" ?)

2005-06-09 Thread Charles M. Hannum
On Wednesday 08 June 2005 21:20, [EMAIL PROTECTED] wrote:
> Yes, encrypting indexed columns for example is a problem.  But if you
> limit yourself to encrypting sensitive information (I'm talking about
> stuff like SINs, bank account numbers, data that serves as an index to
> external databases and is sensitive with respect to identity theft),
> this sensitive information should not be the basis of searches.
> If it is not the basis of searches, there will be no performance
> problems related to encrypting it.

I can name at least one obvious case where "sensitive" data -- namely credit 
card numbers -- is in fact something you want to search on: credit card 
billing companies like CCbill and iBill.  Without the ability to search by 
CC#, customers are pretty screwed.
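
One common compromise for exactly this case is a keyed "blind index": equality searches run against an HMAC of the card number while the number itself stays encrypted. This sketch is an illustration of that trade-off, not anything proposed in the thread; the key name is hypothetical, it only helps if the index key is kept out of the database, and it still leaks equality of values:

```python
import hashlib
import hmac

# Hypothetical server-side secret, stored outside the database
# (if it sits next to the table, this degenerates into a plain hash).
INDEX_KEY = b"blind-index key, not in the DB"

def blind_index(card_number: str) -> str:
    """Deterministic keyed hash of the card number.  Stored in an
    indexed column next to the encrypted number, it supports
    exact-match lookups without exposing the number; without
    INDEX_KEY, an attacker who dumps the table can't simply
    brute-force the card-number space as with an unsalted hash."""
    return hmac.new(INDEX_KEY, card_number.encode(), hashlib.sha256).hexdigest()

# To search: compute the index of the query and match on the column.
lookup = blind_index("4111111111111111")
```

This gives CCbill-style lookup-by-card-number while the stored numbers remain encrypted, at the cost of revealing which rows share a card number.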

That said, I will never buy the "only encrypt sensitive data" argument.  In my 
experience, you *always* end up leaking something that way.
