Re: Phishers Defeat 2-Factor Auth

2006-07-12 Thread James A. Donald

Lance James wrote:

The site asks for your user name and password, as well as the
token-generated key. If you visit the site and enter bogus information to
test whether the site is legit -- a tactic used by some security-savvy
people -- you might be fooled. That's because this site acts as the man in
the middle -- it submits data provided by the user to the actual
Citibusiness login site. If that data generates an error, so does the
phishing site, thus making it look more real.


So long as logins are registered and performed in a web page, rather 
than in the chrome, we are hosed.


Creating a login, and logging into it, has to be a browser and email 
client function, not a web page function.
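
A minimal sketch of the chrome-side idea in Python (the store, the
function, and the domains are invented for illustration, not any real
browser API).  A credential registered in the chrome is keyed to an
exact origin, so a pixel-perfect man-in-the-middle site never receives
anything to relay:

    # Hypothetical chrome-level login, keyed strictly by origin.
    CREDENTIAL_STORE = {
        "https://bank.example": ("alice", "s3cret"),
    }

    def chrome_login(origin):
        # Exact-origin match only: no fuzzy matching, no user judgment.
        cred = CREDENTIAL_STORE.get(origin)
        if cred is None:
            raise LookupError("no credential registered for " + origin)
        return cred  # released by the browser itself, never typed into a page

    # A MITM at https://bank-login.evil.example never triggers a release,
    # however convincing its login page looks.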






Re: Interesting bit of a quote

2006-07-12 Thread Anne Lynn Wheeler

[EMAIL PROTECTED] wrote:

I can corroborate the quote in that much of SarbOx and
other recent regs very nearly have a guilty unless proven
innocent quality, that banks (especially) and others are
called upon to prove a negative: X {could,did} not happen.
California SB1386 roughly says the same thing: If you cannot
prove that personal information was not spilled, then you
have to act as if it was.  About twenty states have followed
California's lead.  The surveillance requirements of both
SEC imposed-regulation and NYSE self-regulation seem always
to expand.  One of my (Verdasys) own customers failed a
SarbOx audit (by a big four accounting firm) because it
could not, in advance, *prove* that those who could change
the software (sysadmins) were unable in any way to change
the financial numbers and, in parallel, *prove* those who
could change the financial numbers (CFO & reports) were
unable to change the software environment.


my slightly different perspective is that audits in the past have 
somewhat been looking for inconsistencies from independent sources. this 
worked in the days of paper books from multiple different corporate 
sources. my claim is that with the current reliance on IT technology ... 
the audited information can all be generated from a single IT source ... 
invalidating any assumption that audits are able to look for 
inconsistencies from independent sources. a reasonably intelligent 
hacker could make sure that all the information was consistent.


a counter-example is the IRS, where individually reported income is 
correlated with other sources of reported financial information 
(cross-checking information across multiple independent sources). 
however, i don't know how that could possibly work in the current 
environment, where the corporation being audited is responsible for 
paying the auditors.


some past posts on the subject
http://www.garlic.com/~lynn/2006h.html#33
http://www.garlic.com/~lynn/2006i.html#1



Factorization polynomially reducible to discrete log - known fact or not?

2006-07-12 Thread David Wagner
Ondrej Mikle  wrote:
I believe I have the proof that factorization of N=p*q (p, q prime) is 
polynomially reducible to discrete logarithm problem. Is it a known fact 
or not?

Be careful: when most people talk about the assumption that the
discrete log problem is hard, they usually are referring to the
hardness of discrete logs modulo a large prime.  In contrast, you
seem to be talking about the hardness of discrete logs modulo an
RSA modulus.  Those two things are not the same.

It is well-known that if you can solve discrete logs modulo an RSA
modulus N in polytime, then you can factor N in polytime.  This is
a standard result that is well-known to anyone who studies this field.
If you've re-discovered this result, you haven't got anything new.

The algorithm is very simple:
1. Choose a big random value x from some very broad range
   (say, {1,2,..,N^2}).
2. Pick a random element g (mod N).
3. Compute y = g^x (mod N).
4. Ask for the discrete log of y to the base g, and get back some
   answer x' such that y = g^x' (mod N).
5. Compute x-x'.  Note that x-x' is a multiple of phi(N), and
   it is highly likely that x-x' is non-zero.  It is well-known
   that given a non-zero multiple of phi(N), you can factor N in
   polynomial time.
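
For concreteness, here is a sketch of the whole reduction in Python,
with the discrete-log oracle left as a black-box parameter (dlog_oracle
is assumed, not implemented).  As the follow-ups below point out, x-x'
is strictly a multiple of the order of g rather than of phi(N), so the
sketch simply retries with a fresh g whenever the factoring step fails:

    import math, random

    def factor_from_multiple(N, M, tries=64):
        # Given a non-zero M that is (with luck) a multiple of phi(N) or
        # lambda(N), hunt for a non-trivial square root of 1 mod N; its
        # gcd with N is a proper factor.  Returns None if M was unlucky.
        s, d = 0, abs(M)
        while d % 2 == 0:
            s, d = s + 1, d // 2
        for _ in range(tries):
            a = random.randrange(2, N - 1)
            g = math.gcd(a, N)
            if g > 1:
                return g                       # a already shares a factor
            x = pow(a, d, N)
            for _ in range(s):
                y = pow(x, 2, N)
                if y == 1 and 1 < x < N - 1:
                    return math.gcd(x - 1, N)  # non-trivial sqrt of 1
                x = y
        return None

    def factor_via_dlog(N, dlog_oracle):
        # dlog_oracle(y, g, N) returns some x' with pow(g, x', N) == y.
        while True:
            g = random.randrange(2, N - 1)
            if math.gcd(g, N) > 1:
                return math.gcd(g, N)          # pathological luck
            x = random.randrange(1, N * N)     # step 1: broad range
            y = pow(g, x, N)                   # step 3
            M = x - dlog_oracle(y, g, N)       # steps 4 and 5
            if M != 0:
                f = factor_from_multiple(N, M)
                if f:
                    return f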

There is no known proof that if you can factor N in polytime, you
can solve discrete logs modulo N in polynomial time.  (In practice,
if N is a 2048-bit RSA modulus that is a product of two 1024-bit
primes, if you can factor N, you can solve discrete logs modulo N
more efficiently by solving two discrete log problems modulo 1024-bit
prime numbers and then applying the Chinese remainder theorem.  But
the latter is still asymptotically superpolynomial.)
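
A hedged sketch of that CRT recombination (dlog_mod_prime is a
hypothetical sub-solver; for simplicity g is assumed to have maximal
order modulo each prime, and the generalized CRT below handles the
non-coprime moduli p-1 and q-1):

    from math import gcd

    def crt2(a, m, b, n):
        # Solve x = a (mod m) and x = b (mod n) for possibly non-coprime
        # m and n; the answer is unique modulo lcm(m, n).
        g = gcd(m, n)
        assert (b - a) % g == 0, "inconsistent residues"
        t = ((b - a) // g * pow(m // g, -1, n // g)) % (n // g)
        return (a + m * t) % (m // g * n)

    def dlog_mod_N(y, g, p, q, dlog_mod_prime):
        # dlog_mod_prime(y, g, p) returns x with pow(g, x, p) == y.
        a = dlog_mod_prime(y % p, g % p, p)
        b = dlog_mod_prime(y % q, g % q, q)
        return crt2(a, p - 1, b, q - 1)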

There is no known proof that if you can solve discrete logs modulo
a prime p in polytime, then you can factor an RSA modulus N in polytime.

There is no known proof that if you can factor an RSA modulus N in
polytime, then you can solve discrete logs modulo a prime p in polytime.

If you can solve any of the latter three problems, then you've got
something new, and many cryptographers will be interested.



Re: Interesting bit of a quote

2006-07-12 Thread dan

You're talking about entirely different stuff, Lynn,
but you are correct that data fusion at IRS and everywhere
else is aided and abetted by substantially increased record
keeping requirements.  Remember, Poindexter's TIA thing did
*not* posit new information sources, just fusing existing
sources and that alone blew it up politically.  As a security
matter relevant here, we can't protect un-fused data so
fused data is indeed probably worse.

On the prove-a-negative area, every time I say this in
front of CISO-level audiences I get nodding assent.  Ain't
making it up, in other words.  Innocent until proven
guilty seems now to be true in criminal matters; guilty
until proven innocent holds sway in the civil arena.

On the idea that our version of it is just one of many
versions of the same phenomenon in all fields, not just
the crypto-security one, today (literally) I was ordered
by the State of Rhode Island to install smoke and fire
detectors with direct tie-in to the Fire Department in
my farm's riding arena (a steel frame building with dirt
floor and three doors big enough for a semi).  Why?  Because
the regulators couldn't figure out whether I was a place of
assembly or not; therefore, I must be a place of assembly,
and my next hearing is whether I need sprinklers.  Mind you,
klaxons & strobes, now required, guarantee killing any
non-expert riders who are in the ring when they go off.
But since the regulators cannot prove to themselves that
they don't have to impose the same requirements as on a
movie theater, to protect their own asses it is me who now
has to prove to them that I am not covered -- which appears
to mean getting the Legislature to specifically exempt
riding arenas; if the Legislature is silent, the regulators
will assume the worst, and that means their ass versus mine.

The core issue here is thus runaway positive feedback loops.
When you hold regulators (fire inspectors, financial auditors,
whatever) liable for not having proven that their clients
cannot have anything wrong (which is why Arthur Andersen
went out of business, e.g.), then you get prove-a-negative
from the regulators and auditors -- madness on the same
scale as tulip mania or the defenestration of Prague.

--dan




Re: Interesting bit of a quote

2006-07-12 Thread David Wagner
[EMAIL PROTECTED]
 Been with a reasonable number of General Counsels
 on this sort of thing.  Maybe you can blame them
 and not SB1386 for saying that if you cannot prove
 the data didn't spill then it is better corporate
 risk management to act as if it did spill.

Well, are you sure you haven't confused what they're saying about SOX, vs
what they're saying about SB1386?  It's easy for me to believe that they'd
say this about SOX, but the plain language of SB1386 seems pretty clear.

(It would also be easy for me to believe that a General Counsel would
say that if you have knowledge of a breach of security in one of your
systems and reason to believe that an unauthorized individual gained
access to personal information as a result, then you must assume that
you have to notify every person whose data was stored in the system and
who may have been affected by the breach, unless you can prove that those
persons weren't affected by that breach.  But that's very different from
how you characterized SB1386.)

If General Counsels are really saying that SB1386 requires you to act
as if data has spilled, even in the absence of any reason whatsoever to
think there has been any kind of security breach or unauthorized access,
merely because you don't have proof that it hasn't spilled -- then yes,
that does sound strange to me.  That is not my understanding of the
intent of SB1386, and it is not what the language of SB1386 seems to say.

Then again, maybe your General Counsels know something that I don't;
it's always possible that the text of the law is misleading, or that
I'm missing something.  They're the legal experts, not me.

Personally, my suggestion is as follows: The next time that a General
Counsel claims to you that SB1386 requires you to assume data has spilled
(even in the absence of any reason to believe there has been a security
breach) until you can prove to the contrary, I suggest you quote from
the text of SB1386, and let us know how they respond.



Re: Interesting bit of a quote

2006-07-12 Thread Anne Lynn Wheeler

[EMAIL PROTECTED] wrote:

You're talking about entirely different stuff, Lynn,
but you are correct that data fusion at IRS and everywhere
else is aided and abetted by substantially increased record
keeping requirements.  Remember, Poindexter's TIA thing did
*not* posit new information sources, just fusing existing
sources and that alone blew it up politically.  As a security
matter relevant here, we can't protect un-fused data so
fused data is indeed probably worse.


but this is the security issue dating back to before the 80s ... when 
they decided they could no longer guarantee a single point of security 
... in part because of insider threats ... they added multiple 
independent sources as a countermeasure. the crooks responded with 
collusion ... so you started to see countermeasures to collusion 
appearing in the early 80s.


the advent of the internet sort of refocused attention on outsider 
attacks ... even tho the statistics continue to hold that the major 
source of fraud is still insiders ... including thru the whole internet 
era. the possibility of outsiders may have helped insiders obfuscate the 
true source of many insider vulnerabilities.


the issue with auditing to prove no possible vulnerability at a single 
point ... leading to the extreme of having to prove a negative ... can 
possibly be interpreted as an attempt to preserve the current audit 
paradigm.


independent operations/sources/entities have been used for a variety of 
different purposes. however, my claim has been that auditing has been 
used to look for inconsistencies. this has worked better in situations 
where there were independent physical books from independent sources 
(even in the same corporation).


As IT technology has evolved ... my assertion is that a complete set of 
(consistent) corporate books can be generated from a single IT 
source/operation. The IRS example involves having multiple independent 
sources of the same information (so that you have independent sources to 
check for inconsistencies).


The fusion scenarios tend to involve multiple independent sources of 
at least some different data ... so the aggregation is more than the 
individual parts (as opposed to the same data used to corroborate).


ref:
http://www.garlic.com/~lynn/aadsm24.htm#35 Interesting bit of a quote
http://www.garlic.com/~lynn/2006h.html#58 Sarbanes-Oxley
http://www.garlic.com/~lynn/2006l.html#1 Sarbanes-Oxley



Re: Interesting bit of a quote

2006-07-12 Thread Travis H.

On 7/11/06, Adam Fields [EMAIL PROTECTED] wrote:

On Tue, Jul 11, 2006 at 01:02:27PM -0400, Leichter, Jerry wrote:
 Business ultimately depends on trust.  There's some study out there -
Trust is not quite the opposite of security (in the sense of an
action, not as a state of being), but certainly they're mutually
exclusive. If you have trust, you have no need for security.


Quoting Ross Anderson's TCPA comments:
A trusted [entity] is one that can break your security.

Quoting John Carroll in Computer Security:
Just because it is trusted, doesn't mean it's trustworthy.
--
Resolve is what distinguishes a person who has failed from a failure.
Unix guru for sale or rent - http://www.lightconsulting.com/~travis/ --
GPG fingerprint: 9D3F 395A DAC5 5CCC 9066  151D 0A6B 4098 0C55 1484



Re: switching from SHA-1 to Tiger ?

2006-07-12 Thread alex

 - Original Message -
 From: Zooko O'Whielacronx [EMAIL PROTECTED]
...
 The AES competition resulted in a block cipher that was faster as 
 well as safer than the previous standards.  I hope that the next 
 generation of hash functions achieve something similar, because for 
 my use cases speed in a hash function is more important than speed 
 in encryption.
 

I believe that this will be more and more the case.  Hashes will 
probably become slower relative to ciphers.  CPUs are becoming 
multi-core, and data pipelining from RAM into a CPU on-chip cache 
is now common.  Ciphers can exploit both trends (parallelizable 
modes spread independent blocks across cores), while hashing a 
single message is an inherently serial chain of compression steps.  
Both of these semiconductor trends will therefore make existing 
hashes become bottlenecks and will make it harder to design a fast 
new hash.

- Alex 



Re: Factorization polynomially reducible to discrete log - known fact or not?

2006-07-12 Thread Max A.

On 7/9/06, Ondrej Mikle [EMAIL PROTECTED] wrote:


I believe I have the proof that factorization of N=p*q (p, q prime) is
polynomially reducible to discrete logarithm problem. Is it a known fact
or not? I searched for such proof, but only found that the two problems
are believed to be equivalent (i.e. no proof).


Take a look at this paper: http://portal.acm.org/citation.cfm?id=894497

Eric Bach  Discrete Logarithms and Factoring

ABSTRACT: This note discusses the relationship between the two
problems of the title. We present probabilistic polynomial-time
reductions that show: 1) To factor n, it suffices to be able to compute
discrete logarithms modulo n. 2) To compute a discrete logarithm
modulo a prime power p^e, it suffices to know it mod p. 3) To compute
a discrete logarithm modulo any n, it suffices to be able to factor
and compute discrete logarithms modulo primes. To summarize: solving
the discrete logarithm problem for a composite modulus is exactly as
hard as factoring and solving it modulo primes.

Max



Re: Interesting bit of a quote

2006-07-12 Thread leichter_jerrold
On Tue, 11 Jul 2006, Anne & Lynn Wheeler wrote:
| ...independent operation/sources/entities have been used for a variety of
| different purposes. however, my claim has been that auditing has been used to
| look for inconsistencies. this has worked better in situations where there were
| independent physical books from independent sources (even in the same
| corporation).
| 
| As IT technology has evolved ... my assertion is a complete set of
| (consistent) corporate books can be generated from a single IT
| source/operation. The IRS example is having multiple independent sources of
| the same information (so that you can have independent sources to check for
| inconsistencies)
Another, very simple, example of the way that the assumptions of
auditing are increasingly at odds with reality can be seen in receipts.
Whenever I apply for a reimbursement of business expenses, I have to
provide original receipts.  Well ... just what *is* an original
receipt for an Amazon purchase?  Sure, I can print the page Amazon
gives me.  Then again, I can easily modify it to say anything I like.

Hotel receipts are all computer-printed these days.  Yes, some of them
still use pre-printed forms, but as the cost of color laser printers
continues to drop, eventually it will make no sense to order and stock
that stuff.  Restaurant receipts are printed on little slips of paper by
one of a small number of brands of printer with some easily set
customization, readily available at low cost to anyone who cares to buy one.

Back in the days when receipts were often hand-written or typed on
good-quality letterhead forms, original receipts actually proved
something.  Yes, they could be faked, but doing so was difficult and
hardly worth the effort.  That's simply not true any more.

Interestingly, the auditors at my employer - and at many others, I'm
sure - have recognized this, and now accept fax images of all receipts.
However, the IRS still insists on originals in case of an audit.
Keeping all those little pieces of paper around until the IRS loses
interest (I've heard different ideas about how long is safe - either 3
or 7 years) is now *my* problem.  (If the IRS audits my employer, and
comes to me for receipts I don't have, the business expense
reimbursements covered by those missing receipts suddenly get
reclassified as ordinary income, on which *I*, not my employer, now owe
taxes - and their good friends interest and penalties.)
-- Jerry




Re: hashes in p2p, was Re: switching from SHA-1 to Tiger ?

2006-07-12 Thread Ondrej Mikle

Travis H. wrote:

On 7/11/06, Zooko O'Whielacronx [EMAIL PROTECTED] wrote:

I hope that the hash function designers will be aware that hash
functions are being used in more and more contexts outside of the
traditional digital signatures and MACs.  These new contexts include
filesystems like ZFS [3], decentralized revision control systems like
Monotone [4], git [5], mercurial [6] and bazaar-ng [7], and peer-to-peer
file-sharing systems such as Direct Connect, Gnutella, and Bitzi [6].


MD4/5 are commonly used as a unique fixed-size identifier of an
arbitrarily-chosen* length of data in p2p file systems, and we are all
aware of the collision attacks.  They bring up some interesting points
to consider:

1) What semantics can one induce by using a collision attack, given
the existing protocols/clients?  There are some rumors the MPAA or
RIAA is using protocol-level attacks to poison p2p networks like
bittorrent and KaZaa.  Can cryptanalysis results be playing a part?


RIAA uses only very basic cryptanalytic attacks. Specifically, Kazaa 
(the FastTrack protocol) uses the very stupid UUHash algorithm, which 
works like this: the first 300 kB are hashed with MD5, then 300 kB at 
the 1 MB boundary are added to the hash, then 300 kB at the 2 MB 
boundary, and so on with 300 kB at each 2^n MB boundary. It is clear 
that it is very easy to generate collisions for UUHash.
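
To make the weakness concrete, here is a simplified sketch of that
sparse-sampling structure (illustrative only, not the exact UUHash
specification): any two files that agree on the sampled windows
collide, so a poisoner can serve garbage in the unsampled gaps.

    import hashlib

    def sparse_hash(data, window=300 * 1024):
        # Only the first 300 kB plus 300 kB windows at the 1 MB, 2 MB,
        # 4 MB, ... boundaries enter the digest; everything in between
        # is ignored.
        h = hashlib.md5(data[:window])
        offset = 1024 * 1024
        while offset < len(data):
            h.update(data[offset:offset + window])
            offset *= 2
        return h.hexdigest()

    # Two "files" that differ only between sampled windows collide:
    a = bytes(3 * 1024 * 1024)
    b = bytearray(a)
    b[700 * 1024] = 0xFF        # lies in an unsampled gap
    assert sparse_hash(a) == sparse_hash(bytes(b))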


For other networks, mostly Sybil attacks are used (spawning a lot of 
fake nodes and fake files with the same name, so that searches turn 
them up).



2) How do we refactor these widely deployed systems with a new,
stronger hash function?


An example of how to handle this might be the aMule and eMule clients. 
The basic ed2k protocol only uses MD4 hashes (it is a hash list; the 
hash in a magnet link is the MD4 hash of the MD4 hashes of the file 
blocks). These two clients add a protocol extension called AICH 
(basically a Merkle tree of SHA-1 hashes), sketched below. The root 
hash may be part of the magnet link, but does not have to be (since it 
is not part of the original protocol).
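
A minimal sketch of such a Merkle tree over SHA-1 block hashes (in the
spirit of AICH; the real eMule block size and tree layout may differ,
so treat this as illustrative):

    import hashlib

    def sha1(b):
        return hashlib.sha1(b).digest()

    def merkle_root(blocks):
        # Leaves are SHA-1 hashes of the file blocks; each level hashes
        # adjacent pairs (duplicating a lone last node) until a single
        # root hash remains.
        level = [sha1(b) for b in blocks]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            level = [sha1(l + r) for l, r in zip(level[::2], level[1::2])]
        return level[0]

With the root hash pinned in the magnet link, a corrupted block can be
pinpointed by checking the sibling hashes along its path, instead of
re-downloading the whole file.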


If the root hash is part of the magnet link, then the received tree can 
be checked. If it is not part of the magnet link, then the client 
attempts to retrieve it from clients that support AICH. If at least 10 
clients send the same root hash, and if that is at least 92% of all 
received root hashes, the root hash is considered trusted for the 
current session. It is definitely not 100% secure, but the more clients 
support AICH, the more often you will find the root hash in the magnet 
link (and thus be able to check the received tree). Even in the absence 
of a root hash in the magnet link, the attacker needs to control a 
significant portion of the network to be able to spoof the root hash 
with high probability.
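
The acceptance rule just described fits in a few lines (a sketch; the
names are mine, not from any client's source):

    from collections import Counter

    def trusted_root_hash(received):
        # received: root hashes reported by other clients this session.
        # Trust the majority value only if at least 10 clients sent it
        # AND it accounts for at least 92% of all answers received.
        if not received:
            return None
        value, count = Counter(received).most_common(1)[0]
        if count >= 10 and count >= 0.92 * len(received):
            return value
        return None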


A simple concept that would allow replacing the hash functions would be 
adding optional parameters to the protocol specification. Then users can 
switch clients gradually, i.e. users with old clients will not be 
cut off from the network each time the hash function changes.





Re: Factorization polynomially reducible to discrete log - known

2006-07-12 Thread David Wagner
  The algorithm is very simple:
  1. Choose a big random value x from some very broad range
(say, {1,2,..,N^2}).
  2. Pick a random element g (mod N).
  3. Compute y = g^x (mod N).
  4. Ask for the discrete log of y to the base g, and get back some
answer x' such that y = g^x' (mod N).
  5. Compute x-x'.  Note that x-x' is a multiple of phi(N), and
it is highly likely that x-x' is non-zero.  It is well-known
that given a non-zero multiple of phi(N), you can factor N in
polynomial time.
 
 Not exactly. Consider N = 3*7 = 21, phi(N) = 12, g = 4, x = 2, x' = 5. 
 You'll only get a multiple of phi(N) if g was a generator of the 
 multiplicative group Z_N^*.

When N is a large RSA modulus, there is a non-trivial probability that g
will be a generator (or that g will be such that x-x' lets you factor N).
The above is good enough for a polytime reduction.



Re: Factorization polynomially reducible to discrete log - known

2006-07-12 Thread Peter Kosinar

Not exactly. Consider N = 3*7 = 21, phi(N) = 12, g = 4, x = 2, x' = 5.
You'll only get a multiple of phi(N) if g was a generator of the
multiplicative group Z_N^*.


When N is a large RSA modulus, there is a non-trivial probability that g
will be a generator (or that g will be such that x-x' lets you factor N).
The above is good enough for a polytime reduction.


You're absolutely right, although the probability actually does not depend 
on the size of the modulus per se (in fact, the provable lower bound on 
this probability goes down with the size of the modulus); it depends only 
on the factorization of phi(N), which, in turn, might depend on the 
process used to choose the factors of the modulus (e.g. the 
sometimes-suggested approach of using Sophie Germain primes creates an 
abundance of generators, whereas some primorial-like construction might 
decrease it).


Peter

--
[Name] Peter Kosinar   [Quote] 2B | ~2B = exp(i*PI)   [ICQ] 134813278




Re: Factorization polynomially reducible to discrete log - known

2006-07-12 Thread Ondrej Mikle

David Wagner wrote:

The algorithm is very simple:
1. Choose a big random value x from some very broad range
  (say, {1,2,..,N^2}).
2. Pick a random element g (mod N).
3. Compute y = g^x (mod N).
4. Ask for the discrete log of y to the base g, and get back some
  answer x' such that y = g^x' (mod N).
5. Compute x-x'.  Note that x-x' is a multiple of phi(N), and
  it is highly likely that x-x' is non-zero.  It is well-known
  that given a non-zero multiple of phi(N), you can factor N in
  polynomial time.
Not exactly. Consider N = 3*7 = 21, phi(N) = 12, g = 4, x = 2, x' = 5. 
You'll only get a multiple of phi(N) if g was a generator of the 
multiplicative group Z_N^*.


When N is a large RSA modulus, there is a non-trivial probability that g
will be a generator (or that g will be such that x-x' lets you factor N).
The above is good enough for a polytime reduction.



Actually, you can never get a generator of Z_N^* unless p=q, because 
when p != q, it is not a cyclic group (this error was in my proof, too). 
Though with great probability you get an element of high order. It is 
enough to take the lcm() of several such multiples to get a multiple of 
phi(N) (strictly, of lambda(N), which suffices for the factoring step), 
and it still runs in polynomial time.
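
A sketch of that combining step (assuming the non-zero x-x' values from
several oracle runs have been collected; math.lcm needs Python 3.9+):

    from math import lcm

    def combine_multiples(ms):
        # ms: non-zero values x - x' from several runs, each a multiple
        # of the order of its random base g.  Their lcm is, after a few
        # runs, very likely a multiple of lambda(N) = lcm(p-1, q-1),
        # which serves in the factoring step just like a multiple of
        # phi(N).
        acc = 1
        for m in ms:
            acc = lcm(acc, abs(m))
        return acc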


I noted that the reduction of IFP(N) to DLOG in Z_N^* is indeed 
mentioned in the Handbook of Applied Cryptography, but without proof or 
algorithm, just two lines (I guess that's why I missed/didn't remember it).




Re: Interesting bit of a quote

2006-07-12 Thread Anton Stiglic
 David Wagner writes:
 SB1386 says that if a company conducts business in California and
 has a system that includes personal information stored in unencrypted form
 and if that company discovers or is notified of a breach of the security of
 that system, then the company must notify any California resident whose
 unencrypted personal information was, or is reasonably believed to have
 been, acquired by an unauthorized person. [*]


 [*] This is pretty close to a direct quote from Section 1798.82(a)
 of California law.  See for yourself:
   
 http://info.sen.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html

Does that mean that you (the company) are safe if all of the personal
information in the database is simply encrypted, with the decryption key
lying right there alongside the data?  A lot of solutions do this; some go
to different lengths in trying to obfuscate the key.

--Anton






Re: Interesting bit of a quote

2006-07-12 Thread Abe Singer
On Tue, Jul 11, 2006 at 05:50:06PM -0700, David Wagner wrote:
 
 No, it doesn't.  I think you've got it backwards.  That's not what SB1386
 says.  SB1386 says that if a company conducts business in California and
 has a system that includes personal information stored in unencrypted form
 and if that company discovers or is notified of a breach of the security of
 that system, then the company must notify any California resident whose
 unencrypted personal information was, or is reasonably believed to have
 been, acquired by an unauthorized person. [*]

A small, but very significant correction.  The law says any breach of the
"security of the data", not "security of the system".

The more explicit paragraph is in 1798.82(b)

   (b) Any person or business that maintains computerized data that
   includes personal information that the person or business does not
   own shall notify the owner or licensee of the information of any
   breach of the security of the data immediately following discovery,
   if the personal information was, or is reasonably believed to have
   been, acquired by an unauthorized person.


And even though the code has already stated such, it further goes on
to define "breach of the security of the system" in 1798.82(d):

   (d) For purposes of this section, breach of the security of the
   system means unauthorized acquisition of computerized data that
   compromises the security, confidentiality, or integrity of personal
   information maintained by the person or business. [...]

 If you know or are notified that the security of your system has been
 breached and if you know or have some reason to believe that someone
 has received unauthorized access to unencrypted personal information
 about California residents, then sure, you have to act on the presumption
 that the personal information was spilled.  So what?  That seems awfully
 reasonable to me.

reasonable is for a judge or jury to decide.  A lawyer's job is to act
in the best interests of the client and, in this circumstance, to make
a determination of what will be considered reasonable in court.  And
ask three lawyers a question, you'll get at least four opinions (the
same can be said for security geeks).

But ultimately, what the lawyer is deciding is what's going to cost the
client less: disclosure, or the possible penalty of non-disclosure.  They'll
often opt for the former to avoid the possibly high cost of the latter.

I've been on and around the pointy end of this stick (and no,
not any publicized events).  If unauthorized access cannot clearly
be substantiated, it becomes a judgement call, based on a variety of
factors.  Factors might include duration between compromise and discovery
(e.g. they've been on the system so long that we just can't tell anymore),
intruder activities, etc.

 In short, my reading of SB1386 is that companies only have to notify
 customers if (a) they know or are notified of a security breach and
 (b) they know or have reason to believe that this breach led to an
 unauthorized disclosure of personal information.  In other words, SB1386
 treats companies as innocent until there is some reason to believe that
 they are guilty.  I don't know anything about SOX, but I think you've
 mis-characterized SB1386.  Don't tar SB1386 with SOX-feathers.

SB1386 doesn't spell out guilt or innocence.  It just provides a liability
shield for a company who complies with it, and spells out punitive
damages for failing to comply.

A company could make the decision that the penalty for non-disclosure
is less than it would cost otherwise, and choose to keep quiet and hope
for the best.


 [*] This is pretty close to a direct quote from Section 1798.82(a)
 of California law.  See for yourself:
   
 http://info.sen.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html

Better yet, go directly to the California Code (Civil Code Section):

http://www.leginfo.ca.gov/cgi-bin/displaycode?section=civgroup=01001-02000file=1798.80-1798.84
