Re: Circle Bank plays with two-factor authentication

2006-10-03 Thread leichter_jerrold
| Have you seen the technique used at http://www.griddatasecurity.com ?  Sounds
| a lot like your original idea.
Nah - more clever than what I had (which was meant for an age when you
couldn't carry any computation with you, and things you interacted with
on a day by day basis didn't have displays).

GridCode's idea is quite clever, but the fact that it's ultimately a
simple substitution - a varying simple substitution, but of a fixed
value - seems dangerous.  No obvious (to me!) attacks, though.

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Recovering data from encrypted disks, broken CD's

2006-07-29 Thread leichter_jerrold
From a Computerworld blog.
--Jerry


When encryption doesn't work

By Robert L. Mitchell on Wed, 07/26/2006 - 12:00pm

In my interview with Ontrack Data Recovery this week (see
Recovery specialists bring data back from the dead:

http://www.computerworld.com/action/article.do?command=printArticleBasic&articleId=112460),

quite a bit hit the cutting room floor, including these three nuggets by
Mike Burmeister, director of engineering for data recovery:

Encryption can be broken
I was surprised to learn that Ontrack regularly recovers encrypted data
on systems where the user has lost the key.  "There's only a couple of
technologies where we would run into a roadblock, [such as] some of the
new laptops that have passwords that are tied to the media and to the
BIOS," says Burmeister.  That raises the question: if they can do it, who
else can?

On encrypted systems that are more difficult to crack, Ontrack also has
a secret weapon.  "Certain situations involve getting permission to get
help from the manufacturer," he says.

Broken CDs still yield data
Ontrack can also reassemble and recover data from CD-ROM discs that have
been broken into pieces. If you're using CDs for backups of sensitive
data, it's probably best to shred them.

Tapes work. People fail
Among the tape problems Ontrack sees most often are those related to
human errors, such as accidentally erased or formatted tapes.

"Formatting the wrong tapes is the most common [problem] by far.  The
other one is they back up over a tape that has information on it.  The
general thing is they back up the wrong data.  We'll get the tape in and
they'll say, 'The data I thought was on this tape is not on it.'"

While those failures can be attributed to confusion, another failure is
the result of just plain laziness.  "People run these backup processes
and they're not simple anymore.  They run these large, complex tape
libraries and they call that good enough.  They don't actually go
through the process of verifying [the tape]," Burmeister says.  The
result: disaster strikes twice: once when the primary storage goes down
and again when the restore fails.

For more on how the technical challenges of recovery have raised the
stakes and what you can do to protect your data, see the story above.

Filed under : Security | Software | Storage
Robert L. Mitchell's blog



James Earl wrote:

It's really too bad that ComputerWorld deems to edit these
explanations.  Especially when you consider it's all ELECTRONIC paper.

Posted on Thu, 07/27/2006 - 4:12pm| reply

Security Skeptic wrote:

CDs (and DVDs) are very effective targets for recovery, because they
have massive error correction and the data is self-identifying because
of the embedded sector IDs. It's quite possible to recover a CD that has
been shredded, not just broken.

A few years ago, there was academic research describing automated
reassembly of shredded documents by scanning the bits and matching the
rough edges along the cuts.  I'm sure that technology has improved,
too.

The moral of the story is that physical destruction is hard. Grinding to
powder and heating past the Curie point are pretty reliable, but short
of that, it's tough. You're better off encrypting, as long as the key
actually is secret.

Posted on Thu, 07/27/2006 - 4:44pm| reply

Security Skeptic wrote:

Computer BIOS passwords: easy to recover by resetting or other direct
access to CMOS. You can do this at home.

Disk drive media passwords: hard to recover, but possible by direct
access to flash memory on the drive. This is tough to do at home, but
probably a breeze for OnTrack.

Disk drive built-in hardware encryption (which as far as I know is only
a Seagate feature so far) should be essentially impossible to recover,
unless Seagate has built in a back door, has fumbled the implementation,
or the password is simple enough to guess. Same is true for software-
based full-disk encryption: it can be invulnerable in the absence of
errors. Use it properly, and you'll never have to worry about your data
if the computer is lost or stolen.

Posted on Thu, 07/27/2006 - 4:54pm| reply

Iain Wilkinson wrote:

Surely it's far more common to use the BIOS to prevent a hard drive
being mounted in another device than to encrypt it.

As one of the other commentators says, the BIOS is pretty easy to get
into if you know what you are doing. Basing an encryption system on this
would inherit all its weaknesses.

Posted on Fri, 07/28/2006 - 7:53am| reply



Re: Interesting bit of a quote

2006-07-13 Thread leichter_jerrold
On Thu, 13 Jul 2006, John Kelsey wrote:
| From: Anne & Lynn Wheeler [EMAIL PROTECTED]
| ...
| my slightly different perspective is that audits in the past have 
| somewhat been looking for inconsistencies from independent sources. this 
| worked in the days of paper books from multiple different corporate 
| sources. my claim with the current reliance on IT technology ... that 
| the audited information can be all generated from a single IT source ... 
| invalidating any assumptions about audits being able to look for 
| inconsistencies from independent sources. A reasonable intelligent 
| hacker could make sure that all the information was consistent.
| 
| It's interesting to me that this same kind of issue comes up in voting
| security, where computerized counting of hand-marked paper ballots (or
| punched cards) has been and is being replaced with much more
| user-friendly DREs, where paper poll books are being replaced with
| electronic ones, etc.  It's easy to have all your procedures built
| around the idea that records X and Y come from independent sources,
| and then have technology undermine that assumption.  The obvious
| example of this is rules for recounts and paper record retention which
| are applied to DREs; the procedures make lots of sense for paper
| ballots, but no sense at all for DREs.  I wonder how many other areas
| of computer and more general security have this same kind of issue.   
That's a very interesting comparison.  I think it's a bit more subtle:
We have two distinct phenomena here, and it's worth examining them more
closely.

Phenomenon 1:
    Computerized records are malleable, and it's in general impossible
    to determine if someone has changed them, when they changed them,
    what the previous value was, and so on.  Further, changing computer
    records scales easily - it costs about as much to change a million
    records as it does to change one record.  Contrast this to
    traditional record-keeping systems, where forging even one record
    was quite difficult, and forging a million was so difficult and
    expensive that it was probably never done in history.  Even
    *destroying* a million paper records is quite difficult.

    This phenomenon is present in both the auditing and voting
    examples.  It's not so much that the DRE doesn't, or can't, keep a
    record just as the paper ballot system does; it's that the record
    is just something in memory, or maybe written to a disk, and we
    simply have no faith in our ability to detect tampering with such
    media.  Similarly, as long as the books were physical books on
    paper, it was quite difficult to tamper with them.  Now that they
    are in a computer database somewhere, it's very easy.

Phenomenon 2:
    The only way to merge the information from paper records is to
    create new, combined paper records.  The only way to filter out
    some of the data from paper records is to make new, redacted paper
    records.  These are expensive, time-consuming operations.  As a
    result, record-keeping systems based on paper tend to keep the
    originals distinct and only produce rare roll-ups for analysis.
    This lets you compare distinct sources for the same piece of
    information.

    Computerized systems, on the other hand, make it easy to merge,
    select, and reformat data.  It's so easy that a central tenet of
    database design is to avoid storing the same information more than
    once (thus avoiding the problem of keeping multiple copies in
    sync).  But when this principle is applied to data relevant to
    auditing, it discards exactly the redundancy that has always been
    used to detect problems.  Sure, you can produce the traditional
    double-entry reports, but if you generate them on the fly from a
    single database that just records transactions, sure enough, all
    the amounts will tally - always, regardless of what errors or
    shenanigans have occurred.

    This has no obvious analogue in voting systems, except I suppose
    in those that keep only totals, not individual votes.  (Of course,
    that was the case with the old mechanical voting machines, too;
    but their resistance to Phenomenon 1 made that acceptable.)

-- Jerry

| 
| --John Kelsey, NIST
| 



Re: Interesting bit of a quote

2006-07-12 Thread leichter_jerrold
On Tue, 11 Jul 2006, Anne & Lynn Wheeler wrote:
| ...independent operation/sources/entities have been used for a variety of
| different purposes. however, my claim has been that auditing has been
| used to look for inconsistencies. this has worked better in situations
| where there were independent physical books from independent sources
| (even in the same corporation).
| 
| As IT technology has evolved ... my assertion is a complete set of
| (consistent) corporate books can be generated from a single IT
| source/operation. The IRS example is having multiple independent
| sources of the same information (so that you can have independent
| sources to check for inconsistencies)
Another, very simple, example of the way that the assumptions of
auditing are increasingly at odds with reality can be seen in receipts.
Whenever I apply for a reimbursement of business expenses, I have to
provide original receipts.  Well ... just what *is* an original
receipt for an Amazon purchase?  Sure, I can print the page Amazon
gives me.  Then again, I can easily modify it to say anything I like.

Hotel receipts are all computer-printed these days.  Yes, some of them
still use pre-printed forms, but as the cost of color laser printers
continues to drop, eventually it will make no sense to order and stock
that stuff.  Restaurant receipts are printed on little slips of paper by
one of a small number of brands of printer with some easily set custom-
ization, readily available at low cost to anyone who cares to buy one.

Back in the days when receipts were often hand-written or typed on
good-quality letterhead forms, original receipts actually proved
something.  Yes, they could be faked, but doing so was difficult and
hardly worth the effort.  That's simply not true any more.

Interestingly, the auditors at my employer - and at many others, I'm
sure - have recognized this, and now accept fax images of all receipts.
However, the IRS still insists on originals in case of an audit.
Keeping all those little pieces of paper around until the IRS loses
interest (I've heard different ideas about how long is safe - either 3
or 7 years) is now *my* problem.  (If the IRS audits my employer, and
comes to me for receipts I don't have, the business expense reimburse-
ments covered by those missing receipts suddenly get reclassified as
ordinary income, on which *I*, not my employer, now owe taxes - and
their good friends interest and penalties.)
-- Jerry




Interesting bit of a quote

2006-07-11 Thread leichter_jerrold
...from a round-table discussion on identity theft in the current
Computerworld:

IDGNS: What are the new threats that people aren't thinking
about?

CEO Dean Drako, Sana Security Inc.: There has been a market
change over the last five-to-six years, primarily due to
Sarbanes-Oxley. It used to be that you actually trusted your
employees. What's changed -- and which is really kind of morally
and socially depressing -- is that now, the way the auditors
approach the problem, the way Sarbanes-Oxley approaches the
problem, is you actually put in systems assuming that you can't
trust anyone.  Everything has to be double-signoff or a
double-check in the process of how you organize all of the
financials of the company

-- Jerry



Re: Use of TPM chip for RNG?

2006-07-04 Thread leichter_jerrold
| On 7/3/06, Leichter, Jerry [EMAIL PROTECTED] wrote:
|  You're damned if you do and damned if you don't.  Would you want to
|  use a hardware RNG that was *not* inside a tamper-proof package -
|  i.e., inside of a package that allows someone to tamper with it?
| 
| Yes.  If someone has physical access to your equipment, they could
| compromise it.  On the other hand, if you have access to it, you can
| establish a baseline and check it for changes.
This assumes an odd definition of tamper-proof:  I can't look inside,
but the bad guys can change it without my knowing.  There are such
things around - all too many of them; your typical Windows PC, for
most people, is a great exemplar of the class - but no one describes
them as tamper-proof.  Tamper-proof means that *no one* can change
the thing.  Obviously, this is a matter of degree, and tamper-resistant
is a much better description.  But there are devices considered
tamper-resistant against very well-funded, very technologically
adept adversaries.

|I recall the book
| titled Computer Security by Carroll suggested taking polaroids of
| all your equipment, and from each window, and other even more paranoid
| things
which is yet another issue, that of tamper-evident design.  If your
design isn't tamper-evident - which again is a matter of degree -
it's unlikely your pictures will do you much good against even a
moderately sophisticated attacker.  With physical access and no
tamper evidence, a couple of minutes with a USB stick is all that's
necessary to insert some rather nasty code, which you have little
hope of detecting, whether by physical or software means.

-- Jerry




Re: Chinese WAPI protocol?

2006-06-14 Thread leichter_jerrold
|  The specification is secret and confidential.  It uses the SMS4
|  block cipher, which is secret and patented. [*]
| 
| Secret and patented are mutually exclusive.
Actually, they are not.  There is a special provision in US patent law
(the Invention Secrecy Act) under which something submitted to the
patent office can be placed under a secrecy order.  You as the inventor
are then no longer allowed to talk about it.  I think you are granted
the patent, but it cannot be published.

This provision has been applied in the past - we know about it because
the secrecy order was later (years later) lifted.  I don't believe
there is any way for someone on the outside to know how many patents
may have tripped over this provision.

Needless to say, this is a disaster for you if you are the patent
applicant and want to sell your product.  But there isn't much of
anything you can do about it.  I'm not sure what happens to the term
of a patent hidden in this way.

The above description is of US law.  It's likely that similar provisions
exist in other countries.
-- Jerry




Re: complexity classes and crypto algorithms

2006-06-13 Thread leichter_jerrold
| What kind of problems do people run into when they try to make
| cryptographic algorithms that reduce to problems of known complexity?
| I'm expecting that the literature is full of such attempts, and one
| could probably spend a lifetime reading up on them, but I have other
| plans and would appreciate a summary.
| 
| In particular, it seems like you should be able to make a respectable
| one-way function out of 3SAT.
This is an idea that keeps coming up.

Suppose you had such a thing - for example, a one-way hash for which you
could prove that computing a preimage is NP-complete.

First off you have a basic problem in definition:  You have to specify
*one* hash with *one* output size, but NP-completeness has to do with
asymptotic behavior.  For any hash producing a fixed-size output string,
there is a deterministic machine that runs in time O(1) that computes a
pre-image.  It's a rather large machine that does a table lookup.
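A toy demonstration of that observation (the "hash" below is an invented stand-in for any function with a fixed output size):

```python
# For any fixed-output-size function, a precomputed table yields an O(1)
# preimage "machine" - absurd to build at real sizes, but it exists.

def toy_hash(x: int) -> int:
    """Invented 8-bit stand-in hash over 16-bit inputs."""
    return (x * 31 + 7) % 256

# Enumerate the finite input space once; lookups are then constant time.
preimage = {}
for x in range(2 ** 16):
    preimage.setdefault(toy_hash(x), x)

h = toy_hash(12345)
assert toy_hash(preimage[h]) == h    # constant-time preimage lookup
```

For a real 256-bit hash the table is unbuildable in practice, but asymptotic complexity statements cannot see that distinction for any single fixed function.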

So, suppose you took the obvious approach - which is all you'd get out
of 3SAT - and said that you had a family of hash functions H_i, each
producing an i-bit output; and you made an NP-completeness argument
about that.  Then the family H_i' defined by:

    H_i'(X) = 0       for all i < 10^100 and all X
    H_i'(X) = H_i(X)  otherwise

would satisfy the same NP-completeness predicates, but would be of no
conceivable use to anyone.

Finally, even if you get around all of that, NP-completeness is a
statement about *worst case* behavior.  All it says is that *some*
instances of the problem are hard.  It could well be that almost all
instances are easy!  The simplex algorithm, for example, is actually
known to have instances that require exponential time, but is widely 
used because in practice it's always fast - an empirical
observation that has been confirmed by analysis which shows that
not only is it polynomial time on average (for some appropriate
notion of randomized inputs), but it's even true in a stronger sense 
(smoothed complexity, which beyond the description as being somewhere 
between average- and worst-case complexity I haven't looked at).

-- Jerry




Re: Trusted path (was: status of SRP)

2006-06-06 Thread leichter_jerrold
| ...This is the trusted-path problem.  Some examples of proposed
| solutions to trusted-path are:
| 
| - Dim the entire screen.
| - Use special window borders.
| - Use flashing window borders.
| - Use specially shaped windows.
| - Attach a warning label to all untrusted windows.
| - Display a customized word or name.
| - Display a customized image.
| - Overlay a semitransparent customized image.
| - Require the user to press a secure attention key.
| - Require the user to click a customized button.
| 
| I'm interested in people's thoughts on what works better or
| might work better.  (Feel free to add to the list.)
I'm going to give a pessimistic answer here:  None of the above.

You're fighting the entire direction of development of display technologies
on end-user machines.  There are no fixed standards - everything is subject
to change and (we hope) improvement.  Applications regularly improve on
the standards, or imitate what others have done.

Use a specially shaped window?  Soon, other applications will start
imitating that as a flag for important data.  Customized images?  How
many people will set one?  And how hard will it be to fool them with a
nice-looking new app that tells you it has a whole library of images you
can use for your customized image?

There is simply no precedent for people making trust distinctions based
on user elements on the screen.  They see the screen as similar to a
piece of paper, and draw distinctions using the same kinds of rules
we've traditionally applied to paper:  Does it look professionally
done?  Is it well written?  Does it have all the right logos on it?
*None* of these are helpful on the Web, but that doesn't change how
people react.

The only trusted path most people ever see is the Windows Ctrl/Alt/Delete
to enter a password.  That's not a good example:  The *dialog* it produces
is indistinguishable from other Windows dialogs.  You should only trust it
to the degree that you know you typed Ctrl/Alt/Delete, and haven't yet hit
enter.  There's no way to generalize this.

This is a human factors issue.  You have to look at what people actually
use to make trust distinctions.  As far as I can see, the only thing that
will really work is specialized hardware.  Vendors are already moving in
this kind of direction.  Some are adding fingerprint scanners, for example.
However, any *generally accessible* device is useless - an attacker can
get at them, too.  What's needed is some physically separate device, with
a trusted path between it and something controlled.  A physical button,
with a small LCD near it, with enough room for a simple prompt, and you
are probably fine.  Make *that* part of the browser chrome and you have
something.
-- Jerry



Re: statistical inferences and PRNG characterization

2006-05-22 Thread leichter_jerrold
| Hi,
| 
| I've been wondering about the proper application of statistics with
| regard to comparing PRNGs and encrypted text to truly random sources.
| 
| As I understand it, when looking at output, one can take a
| hypothetical source model (e.g. P(0) = 0.3, P(1) = 0.7, all bits
| independent) and come up with a probability that the source may have
| generated that output.  One cannot, however, say what probability such
| a source had generated the output, because there is an infinite number
| of sources (e.g. P(0) = 0.2999..., P(1) = 0.7000...).  Can one say
| that, if the source must be A or B, what probability it actually was A
| (and if so, how)?
That's not the way it's done.  Ignore for a moment that we have a sequence
(which is probably irrelevant for this purpose, but might not be).  Instead,
just imagine we have a large collection of values generated by the PRNG -
or, looked at another way, a large collection of values alleged to have been
drawn from a population with P(0) = 0.3 and P(1) = 0.7.  Now take a truly
random sample from that collection and ask the question:  What is the
probability that I would have seen this result, given that the collection
I'm drawing from is really taken from the alleged distribution?  You don't
need any information about *other* possible distributions.  (Not that there
aren't other questions you can ask.  Thus, if the collection could have
been drawn from either of two possible distributions, you can ask which
is more probable to have resulted in the random sample you saw.)

The randomness in the sampling is essential.  When you have it, you wipe out
any underlying bias in the way the collection was created.
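A small worked example of that procedure (the counts are invented; exact binomial probabilities via math.comb):

```python
# Sketch: probability of an observed count under the alleged source, and
# a likelihood comparison between two candidate sources A and B.
from math import comb

def binom_pmf(k, n, p):
    """Exact probability of k ones in n independent draws with P(1) = p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n, k = 1000, 712          # hypothetical sample: 712 ones in 1000 draws

# How likely is a count at least this large if the source really has
# P(1) = 0.7?  (A one-sided tail probability.)
tail = sum(binom_pmf(j, n, 0.7) for j in range(k, n + 1))

# If the source must be A (P(1) = 0.7) or B (P(1) = 0.5), compare how
# probable each is to have produced the observed sample.
likelihood_ratio = binom_pmf(k, n, 0.7) / binom_pmf(k, n, 0.5)
print(tail, likelihood_ratio)    # a ratio >> 1 favors source A
```

Note that neither number is "the probability the source is A"; that would require a prior over sources, which is exactly the distinction drawn above.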

| Also, it strikes me that it may not be possible to prove something
| cannot be distinguished from random, but that proofs must be of the
| opposite form, i.e. that some source is distinguishable from random.
Actually, what one tends to prove are things like:  If X is uniformly
randomly distributed over (0,1), then 2X is uniformly randomly
distributed over (0,2).  (On the other hand, the sum of two independent
copies of X, while still random, is *not* uniformly distributed.)
That's about as close as you are going to get to a proof of randomness.
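A quick simulation (an illustration, not a proof) of both claims; the non-uniform case here sums two independent draws:

```python
# 2X remains uniform on (0,2); the sum of two independent uniforms piles
# up in the middle (the triangular distribution).
import random

random.seed(1)
N = 100_000
doubled = [2 * random.random() for _ in range(N)]
summed = [random.random() + random.random() for _ in range(N)]

def frac_in(xs, lo, hi):
    return sum(lo <= x < hi for x in xs) / len(xs)

# Uniform on (0,2): any interval of width 0.5 holds about 25% of the mass.
print(frac_in(doubled, 0.0, 0.5))    # close to 0.25
# Triangular: the central interval (0.75, 1.25) holds about 44%, not 25%.
print(frac_in(summed, 0.75, 1.25))   # close to 0.4375
```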

| Am I correct?  Are there any other subtleties in the application of
| statistics to crypto that anyone wishes to describe?  I have yet to
| find a good book on statistics in these kinds of situations, or for
| that matter in any.
Statistics in general require subtle reasoning.

| As an aside, it's amusing to see the abuse of statistics and
| probability in the media.  For example, when people ask what's the
| probability of some non-repeating event or condition?
That may or may not be a meaningful concept.  If I toss a coin, and
depending on the result, blow up a building - there is no way to repeat
the blowing up of the building, but still it's meaningful to say that
the probability that the building gets blown up is 50%.

-- Jerry


| -- 
| Curiousity killed the cat, but for a while I was a suspect -- Steven Wright
| Security Guru for Hire http://www.lightconsulting.com/~travis/ --
| GPG fingerprint: 9D3F 395A DAC5 5CCC 9066  151D 0A6B 4098 0C55 1484
| 



Re: Piercing network anonymity in real time

2006-05-15 Thread leichter_jerrold
|The Locate appliance sits passively on the network and
|analyzes packets in real time to garner ID info from sources
|like Active Directory, IM and e-mail traffic, then associates
|this data with network information.
| 
| This is really nothing new -- I've been seeing systems like these,
| though home brewed, in use for years. The availability of good tools as
| a foundation (things like Snort, the layer7 iptables patch, and so on)
| makes building decent layer 8 inference not far from trivial. Calling
| this piercing network anonymity in real time is highly misleading; in
| reality, it's more like making it bloody obvious that there's no such
| thing as network anonymity.
| 
| The best one can hope for today is a bit of anonymous browsing and IM
| with Tor, and that only insofar as you can trust a system whose single
| point of failure -- the directory service -- was, at least until
| recently, Roger's personal machine sitting in an MIT dorm room.
There's a difference between can be done by someone skilled and
your IT can buy a box and have it running on your network this
afternoon.  The first basically means that most people, most of
the time, effectively have anonymity because it isn't worth anyone's
bother to figure out what they are up to.  With the second, information
about who you are, who you talk to, etc., etc., becomes a commodity -
a very *cheap* commodity.  Safety in numbers disappears.

It's always been possible to go to town hall and look up public records
like deeds - which often contain things like Social Security numbers,
bank account numbers, etc.  Skilled experts - PI's - have made use of
this information for years.  There's no difference, in principle, when
that same information goes up on the web.  But that's not how most
people feel about it.
-- Jerry




Re: the meaning of linearity, was Re: picking a hash function to be encrypted

2006-05-15 Thread leichter_jerrold
|  - Stream ciphers (additive)
| 
| This reminds me, when people talk about linearity with regard to a
| function, for example CRCs, exactly what sense of the word do they
| mean?  I can understand f(x) = ax + b being linear, but how exactly
| does XOR get involved, and are there +-linear functions and xor-linear
| functions?  Are they disjoint?  etc.
XOR is the same as addition mod 2.  The integers mod 2 form a field
with XOR as the addition operation and integer multiplication (mod 2,
though that has no effect in this case) as the multiplication.

If you think of a stream of n bits as a member of the vector space
of dimension n over the integers mod 2 treated as a field, then
adding two of these - the fundamental linear operation - is XOR'ing
them bit by bit.
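In code, the correspondence is direct (a minimal self-contained check):

```python
# Bitwise XOR of two n-bit values equals componentwise addition mod 2 of
# the corresponding vectors in (GF(2))^n.

def to_bits(x: int, n: int) -> list:
    """Expand an n-bit integer into its bit-vector (little-endian)."""
    return [(x >> i) & 1 for i in range(n)]

def add_mod2(u: list, v: list) -> list:
    """Vector addition over GF(2): componentwise sum mod 2."""
    return [(a + b) % 2 for a, b in zip(u, v)]

n = 8
for a in (0b10110010, 0b00001111, 0b11111111):
    for b in (0b01010101, 0b10000001):
        assert to_bits(a ^ b, n) == add_mod2(to_bits(a, n), to_bits(b, n))
```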

The thing I've always wondered about stream ciphers is why we only
talk about linear ones.  A stream cipher is fundamentally constructed
of two things:  A stream of bits (alleged to be unpredictable) as
long as the plaintext; and a combining function that takes one
plaintext bit and one stream bit and produces a ciphertext bit.
The combining function has to conserve information.  If you only
combine single bits, there are only two possible functions:  XOR
and the complement of XOR.  But consider RC4:  It actually generates
a byte at a time.  We just choose to use that byte as a vector of
8 bits.  For plaintexts that are multiples of 8 bits long - just
about everything these days - there are many possible combining
functions.  Most aren't even close to linear.
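As one concrete example of a non-XOR combiner (my own illustration, with made-up keystream bytes): byte-wise addition mod 256 conserves information, since for each keystream byte k the map x -> (x + k) mod 256 is a bijection, yet it is not linear over GF(2) because of the carries:

```python
# A stream-cipher combining function other than XOR: addition mod 256.

def combine(plain: bytes, keystream: bytes) -> bytes:
    return bytes((p + k) % 256 for p, k in zip(plain, keystream))

def uncombine(cipher: bytes, keystream: bytes) -> bytes:
    return bytes((c - k) % 256 for c, k in zip(cipher, keystream))

ks = bytes([0x3A, 0xC1, 0x07, 0xEE])     # stand-in keystream bytes
msg = b"text"
assert uncombine(combine(msg, ks), ks) == msg   # information is conserved

# With an XOR combiner, the XOR-difference of two ciphertexts equals the
# XOR-difference of the plaintexts (the keystream cancels); with modular
# addition it generally does not - the mark of GF(2) non-linearity.
xor_combine = lambda m, k: bytes(x ^ y for x, y in zip(m, k))
diff = lambda u, v: bytes(x ^ y for x, y in zip(u, v))
a, b = b"\x01\x01\x01\x01", b"\x03\x03\x03\x03"
assert diff(xor_combine(a, ks), xor_combine(b, ks)) == diff(a, b)
assert diff(combine(a, ks), combine(b, ks)) != diff(a, b)
```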

Other than a post by a guy - Terry someone or other - on sci.crypt
a number of years ago, I've never seen any work in this direction.
Is there stuff I'm not aware of?
-- Jerry




Consumers Losing Trust in Internet Banking

2006-05-13 Thread leichter_jerrold
Summary:  The deluge of reports of problems at on-line banks is having
an effect.  Customer attitudes are increasingly negative, and customers
mention concerns about security as worrying them.  The adoption rate
for internet banking has dropped to only 3.1% for the last quarter
of 2005, about matching the rate at which people drop their accounts.
Overall, 38% of Americans use Internet banking - compared to 75% of
Europeans.  (Europeans report a much higher level of confidence in
on-line banking.)

The full report is at

http://www.marketwire.com/mw/release_html_b1?release_id=128505

Eventually, all those voting feet will have an effect.  Perhaps we
don't need to despair of the market forcing better security.

-- Jerry




Re: Get a boarding pass, steal someone's identity

2006-05-08 Thread leichter_jerrold
| I got this pointer off of Paul Hoffman's blog. Basically, a reporter
| uses information on a discarded boarding pass to find out far too much
| about the person who threw it away
| 
|   http://www.guardian.co.uk/idcards/story/0,,1766266,00.html
| 
| The story may be exaggerated but it feels quite real. Certainly I've
| found similar issues in the past.
| 
| These days, I shred practically anything with my name on it before
| throwing it out. Perhaps I'm paranoid, but then again...
I've actually gone in the opposite direction:  I shred less than I used
to.  Grabbing this kind of information off stray pieces of paper in a
garbage can is buying retail.  It's so much easier these days to buy
wholesale, stealing hundreds of thousands to tens of millions of on-line
records in one shot.

It would be useful to get some idea of the chances one takes in throwing
identifying material out.  Everything in security is cost vs. benefit,
and the cost of shredding, while it appears low on a single-item basis,
adds up in annoyance.  And all too many of the companies I deal with
seem to make it ever harder.  Just yesterday, I threw out a couple of
letters having to do with incidental matters (e.g., an incorrect charge)
from a credit card provider.  Every one of them had my full card number
on it.  Some of them looked like the routine junk you get every month
and don't even look at twice before discarding.

Meanwhile, my statements contain my credit card number, in small but
easily readable numbers, *vertically* on the page - next to what appears
to be a bar code with the same information.  Even a cross-cut shredder
probably isn't sufficient to render that unreadable.

The entire infrastructure we've built based on shared pseudo-secrets
is one of the walking dead.  For credit cards, the responsibility for
loss is on the card companies, where it belongs - and I let it stay
there.  I take basic reasonable care, but I'm unwilling to go any
further, since it can't possibly help me and I'm paying indirectly for
all the costs the credit card companies assume anyway (since they push
them off on the vendors, who then raise their prices).  As far as
identity theft as a general issue:  What little evidence there is as to
the way the identity thieves work today implies that nothing I'm likely
to do - absent obvious dumb moves - will change my odds of being
successfully hit by very much.
-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Linux RNG paper

2006-05-05 Thread leichter_jerrold
|  I guess perhaps the reason they don't do integrity checking is that it
|  involves redundant data, so the encrypted volume would be smaller, or
|  the block offsets don't line up, and perhaps that's trickier to handle
|  than a 1:1 correspondence.
| 
| Exactly, many file systems rely on block devices with atomic single block
| (sector) writes. If sector updates are not atomic, the file system needs
| to be substantially more complex (unavoidable transaction logs to support
| roll-back and roll-forward). Encrypted block device implementations that
| are file system agnostic, cannot violate block update atomicity and so
| MUST not offer integrity.
That's way too strong.  Here's an implementation that preserves
block-level atomicity while providing integrity:  Corresponding to each
block, there are *two* checksums, A and B.

Read algorithm:  Read Block, A and B.  If checksum matches
either of A or B, return the value of the block;
otherwise, declare the block invalid.

Write algorithm:  Read current value of block.  If its
checksum matches A, write the checksum of the
new data to B; otherwise, write the checksum of
the new value to A.  After the checksum data is
known to be on the disk, write the data block.

Writes to a given block must be atomic with respect to each other.
(No synchronization is needed between reads and writes.)
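
The two-checksum scheme above can be sketched in a few lines of Python.
This is purely an in-memory model for illustration (block size, checksum
function, and the CheckedBlock class are all invented here); a real
implementation would store slots A and B on disk alongside the block:

```python
import hashlib

BLOCK_SIZE = 512  # illustrative sector size


def checksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


class CheckedBlock:
    """One block with two checksum slots, A and B."""

    def __init__(self):
        self.data = bytes(BLOCK_SIZE)
        self.a = checksum(self.data)  # A initially matches the data
        self.b = b""                  # B starts unused

    def read(self) -> bytes:
        c = checksum(self.data)
        if c in (self.a, self.b):
            return self.data
        raise IOError("block fails integrity check")

    def write(self, new_data: bytes) -> None:
        # Writes to a given block must be serialized with each other.
        # Step 1: put the new checksum in whichever slot is NOT in use.
        if checksum(self.data) == self.a:
            self.b = checksum(new_data)
        else:
            self.a = checksum(new_data)
        # Step 2: only after the checksum is durable, write the data.
        # A crash between the steps leaves the old data still matching
        # its old checksum, so the block remains readable.
        self.data = new_data
```

Note how the write alternates slots: the stale checksum left in the
unused slot is simply ignored by the read algorithm.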

Granted, this algorithm has other problems.  But it shows that the three
requirements - user block size matches disk block size; block level
atomicity; and authentication - are not mutually exclusive.  (Actually,
I suppose one should add a fourth requirement, which this scheme also
realizes:  The size of a user block identifier is the same as the size
of the block id passed to disk.  Otherwise, one can keep the checksum
with each block identifier.)
-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: PGP master keys

2006-05-01 Thread leichter_jerrold
|  issues did start showing up in the mid-90s in the corporate world ... 
|  there were a large number of former gov. employees starting to show up 
|  in different corporate security-related positions (apparently after 
|  being turfed from the gov). their interests appeared to possibly reflect
|  what they may have been doing prior to leaving the gov.
| 
| one of the issues is that corporate/commercial world has had much more 
| orientation towards prevention of wrong doing. govs. have tended to be 
| much more preoccupied with evidence and prosecution of wrong doing. the 
| influx of former gov. employees into the corporate world in the 2nd half 
| of the 90s, tended to shift some of the attention from activities 
| related to prevention to activities related to evidence and prosecution 
| (including evesdropping).
What I've heard described as the bull in the china shop theory of
security:  You can always buy new china, but the bull is dead meat.
(I'm pretty sure I heard this from Paul Karger, who probably picked it
up during his time at the Air Force.)

| for lots of drift ... one of the features of the work on x9.59 from the 
| mid-90s
| http://www.garlic.com/~lynn/x959.html#x959
| http://www.garlic.com/~lynn/subpubkey.html#x959
| 
| was its recognition that insiders had always been a major factor in the 
| majority of financial fraud and security breaches. furthermore that with 
| various financial functions overloaded for both authentication and 
| normal day-to-day operations ... that there was no way to practical way 
| of eliminating all such security breaches with that type of information. 
| ... part of this is my repeated comment on security proportional to risk
| http://www.garlic.com/~lynn/2001h.html#61
The dodge of creating phantom troops and then collecting their pay
checks has been around since Roman times.  No one has ever found a
way of detecting it cost-effectively.  However, it's also been known
forever that it's just about impossible to avoid detection indefinitely:
The officer who created the troops gets transferred, or retires, and
he has no way to maintain the fiction.  Or the troops themselves are
transferred ... other events intervene.  So armies focus on making sure
they *eventually* find and severely and publicly punish anyone who tries
this, no matter how long it takes.  A large enough fraction of the
population is deterred to keep the problem under control.

A similar issue occurs in a civilian context, sometimes with fake
employees, other times with fake bills.  Often, these get found
because they rely on the person committing the fraud being there
every time a check arrives:  It's the check sitting around with no
one speaking for it that raises the alarm.  The long-standing
policy has been to *require* people in a position to handle those
checks to take their vacation.  (Of course, with direct deposit
of salaries, the form of the fraud, and what one needs to do to
detect it, have changed in detail - but probably not by much.)

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


VoIP and phishing

2006-04-27 Thread leichter_jerrold
From Computerworld:


New phishing scam model leverages VoIP
Novelty of dialing a phone number lures in the unwary
  News Story by Cara Garretson

APRIL 26, 2006
(NETWORK WORLD) - Small businesses and consumers aren't the only ones
enjoying the cost savings of switching to voice over IP
(VoIP). According to messaging security company Cloudmark Inc., phishers
have begun using the technology to help them steal personal and
financial information over the phone.

Earlier this month, San Francisco-based Cloudmark trapped an e-mailed
phishing attack in its security filters that appeared to come from a
small bank in a big city and directed recipients to verify their account
information by dialing a certain phone number. The Cloudmark user who
received the e-mail and alerted the company knew it was a phishing scam
because he's not a customer of this bank.

Usually phishing scams are e-mail messages that direct unwitting
recipients to a Web site where they're tricked into giving up their
personal or financial information. But because much of the public is
learning not to visit the Web sites these messages try to direct them
to, phishers believe asking recipients to dial a phone number instead is
novel enough that people will do it, says Adam O'Donnell, senior
research scientist at Cloudmark.

And that's where VoIP comes in. By simply acquiring a VoIP account,
associating it with a phone number and backing it up with an interactive
voice-recognition system and free PBX software running on a cheap PC,
phishers can build phone systems that appear as elaborate as those used
by banks, O'Donnell says. "They're leveraging the same economies that
make VoIP attractive for small businesses," he says.

Cloudmark has no proof that the phishing e-mail it snagged was using a
VoIP system, but O'Donnell says it's the only way that staging such an
attack could make economic sense for the phisher.

The company expects to see more of this new form of phishing. Once a
phished e-mail with a phone number is identified, Cloudmark's security
network can filter inbound e-mail messages and block those that contain
the number, says O'Donnell.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: VoIP and phishing

2006-04-27 Thread leichter_jerrold
| the other point that should be made about voip is that callerid is
| trivial to spoof.
| 
| so if you are counting on the calling party being who they say the
| are, or even within your company, based on callerid, don't.
| 
| i predict a round of targeted attacks on help desks and customer
| service, as well as more general scams with callerid set to (say)
| Visa Security.
To open a trouble ticket with IT where I work, you go to a Web page; or,
if you have problems using the network, you can use the phone.  When the
phone is replaced by one that uses VoIP, just how will one report network
outages?  I can't wait...

| does anyone know if time ANI from toll free services is still
| unspoofable?
The last I heard, it was fairly easy to *suppress* ANI (using games that
redirected calls the network saw as going to toll-free numbers), but
still difficult to *spoof* it.  Since ANI drives Telco billing - unlike
Caller ID, which is simply delivered to customers - the Telco's have an
interest in making it difficult to fake.  On the other hand, LD revenues
have been falling for years, so the funding to attack LD fraud has
probably been falling, too - given how many people now have all you
can eat plans, there's less and less reason to worry about them
stealing.

| some of my clients have been receiving targeted phishes recently that
| correctly name their bank and property address and claim to be about
| their mortgage.  this is information obtainable from public records.
I probably get an offer to refinance my mortgage every other week or
so.  The letters cite real information about me and my mortgage:  They
know its size, or at least the know the amount at the time I took out
the mortgage.

In low-income areas, there's a long history of fraudulent refinancing -
claiming you are getting a better loan for the person but really getting
him deeper and deeper in the hole while you pocket various fees.  I
wouldn't want to bet that all the come-on letters I receive are legitimate!
The only difference between some of this stuff and phishing is the
medium used.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: webcam encryption beats quasar encryption

2006-03-31 Thread leichter_jerrold
| I think the Rip Van Winkle cipher was mentioned in Schneier's Applied 
| Cryptography.  Also, I vaguely recall another news story (1999?) that 
| reported on an encryption technique that hypothesized a stream of random 
| bits generated by an orbiting satellite.
Probably Rabin's work on beacons.  It explored the results of assuming
a universally available oracle providing the same stream of random bits
to everyone.  (If you think of the randomized Turing machine model
as a TM plus an oracle giving that machine a random bit stream, you
can think of this as a bunch of communicating TM's that get *the
same* random bit stream.)
-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Linux RNG paper

2006-03-24 Thread leichter_jerrold
| Min-entropy of a probability distribution is 
| 
| -lg ( P[max] ), 
| 
| minus the base-two log of the maximum probability.  
| 
| The nice thing about min-entropy in the PRNG world is that it leads to
| a really clean relationship between how many bits of entropy we need
| to seed the PRNG, and how many bits of security (in terms of
| resistance to brute force guessing attack) we can get.
Interesting; I hadn't seen this definition before.  It's related to a
concept in traditional probability theory:  The probability of ruin.  If
I play some kind of gambling game, the usual analysis looks at the
value of the game strictly as my long-term expectation value.  If,
however, I have finite resources, it may be that I lose all of them
before I get to play long enough to make long-term a useful notion.
The current TV game show, Deal Or No Deal, is based on this:  I've yet
to see a banker's offer that equals, much less exceeds, the expected
value of the board.  However, given a player's finite resources - they
only get to play one game - the offers eventually become worth taking,
since the alternative is that you walk away with very little.  (For
that matter, insurance makes sense only because of this kind of
analysis:  The long-term expectation value of buying insurance *must*
be negative, or the insurance companies would go out of business -
but insurance can still be worth buying.)
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Creativity and security

2006-03-24 Thread leichter_jerrold
|  If all that information's printed on the outside of the card, then
|  isn't this battle kind of lost the moment you hand the card to them?
| 
| 1-  I don't hand it to them.  I put it in the chip-and-pin card reader 
| myself.  In any case, even if I hand it to a cashier, it is within my
| sight at all times.
| 
| 2-  If it was really that easy to memorize a name and the equivalent of a 
| 23-digit number at a glance without having to write anything down, surely 
| the credit card companies wouldn't need to issue cards in the first place?
| 
|   IOW, unless we're talking about a corrupt employee with a photographic 
| memory and telescopic eyes, the paper receipt I leave behind is the only 
| place they could get any information about my card details
You're underestimating human abilities when there is a reward present.
Back in the days when telephone calling cards were common, people used
to shoulder surf, watching someone enter the card number and
memorizing it.  A traditional hazing in the military is to give the new
soldier a gun, then a few seconds later demand that he tell you the
serial number from memory.  Soldiers caught out on this ... only get
caught out once.

Besides, there's a lot less to remember than you think.  I don't know
how your chip-and-pin card encoding is done, but a credit card number is
16 digits, with the first 4 (6?) specifying the bank (with a small
number of banks covering most of the market - if you see a card from
an uncommon bank, you can ignore it) and the last digit a check digit.
So you need to remember one of a small number of banks, a name, and
11 digits - for the few seconds it takes for the customer to move on
and give you the chance to scrawl it on a piece of paper.  Hardly very
challenging.
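
The check digit mentioned above is the Luhn digit, which anyone - the
shoulder surfer included - can recompute; a short sketch:

```python
def luhn_check_digit(partial: str) -> int:
    """Compute the Luhn check digit for the first 15 digits of a card number."""
    total = 0
    # Walk right to left; these positions get doubled once the check
    # digit is appended to the right.
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9  # same as summing the two digits of d
        total += d
    return (10 - total % 10) % 10


def luhn_valid(number: str) -> bool:
    return luhn_check_digit(number[:-1]) == int(number[-1])


print(luhn_valid("4111111111111111"))  # a well-known test number -> True
```

So the check digit adds nothing the memorizer has to retain: it is
derived entirely from the other 15 digits.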
-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread leichter_jerrold
| Let me rephrase my sequence. Create a sequence of 256 consecutive  
| bytes, with the first byte having the value of 0, the second byte the  
| value of 1, ... and the last byte the value of 255. If you measure  
| the entropy (according to Shannon) of that sequence of 256 bytes, you  
| have maximum entropy.
Shannon entropy is a property of a *source*, not a particular sequence
of values.  The entropy is derived from a sum of equivocations about
successive outputs.

If we read your create a sequence..., then you've described a source -
a source with exactly one possible output.  All the probabilities will
be 1 for the actual value, 0 for all other values; the equivocations are
all 0.  So the resulting Shannon entropy is precisely 0.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


PayPad

2006-03-22 Thread leichter_jerrold
PayPad (www.paypad.com) is an initiative that seems to have JPMorgan
Chase behind it to provide an alternative method for paying transactions
on line.  You buy a PayPad device, a small card reader with integrated
keypad.  It connects to your PC using USB.  To pay using PayPad at
a merchant that supports it, you select that as an option, swipe your
card, enter your PIN, and the data is (allegedly) sent encrypted
from the PayPad device direct to the merchant.

Advantage to the merchant:  It's a debit card transaction, and they
claim the transaction fees are half those of a credit card. Of course,
the consumer pays for everything:  The device itself (about $60), the
lack of float.  It's not clear what kind of recourse you might have
in case of fraud.

It's sold as the secure alternative to using your credit card
online.  Unfortunately, it has the problems long discussed on
this list:  The PayPad itself has no display.  It authorizes a
transaction the details of which are on your computer screen.
You have only the software's word for it that there is any
connection between what's on the screen and what's sent to the
merchant (or to someone else entirely).

Realistically, it's hard to see how this is any more secure than
a standard credit card transaction in an SSL session.  It's not
even clear that the card data is encrypted in the device - for
all we know, card data and pin are transfered over the USB to the
application you have to run on your PC, ready to be stolen by,
say, a targeted virus.  They do claim that you are protected in
another way:  "Your sensitive data never goes to the merchant or
into a database that can be hacked.  The encrypted transaction
is handled directly with your bank."  (I guess banks don't
keep databases...)

Anyone know anything more about this effort?

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: pipad, was Re: bounded storage model - why is R organized as 2-d array?

2006-03-21 Thread leichter_jerrold
| Anyone see a reason why the digits of Pi wouldn't form an excellent
| public large (infinite, actually) string of random bits?
| 
| There's even an efficient digit-extraction (a/k/a random access to
| fractional bits) formula, conveniently base 16:
| http://mathworld.wolfram.com/BBPFormula.html
| 
| I dub this pi pad.
The issue would be:  Are there any dependencies among the bits of
pi that would make it easier to predict where an XOR of n streams of
bits taken from different positions actually come from - or, more
weakly, to predict subsequent bits.

I doubt anyone knows.  What would worry me is exactly the existence
of the algorithm that would make this approach workable:  A way to
compute the i'th digit of pi without computing all the earlier ones.
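
That algorithm is the Bailey-Borwein-Plouffe (BBP) formula linked in the
quoted message.  A direct (unoptimized) Python sketch of extracting hex
digits of pi at an arbitrary position, without computing the earlier ones:

```python
def bbp_pi_hex_digit(n: int) -> int:
    """Hex digit of pi at 0-indexed fractional position n, via BBP:
    pi = sum_k 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))."""

    def series(j: int) -> float:
        # Fractional part of sum_k 16^(n-k)/(8k+j).  For k <= n the
        # exponent is non-negative, so use modular exponentiation to
        # keep only the fractional contribution; then add a short tail.
        s = 0.0
        for k in range(n + 1):
            denom = 8 * k + j
            s = (s + pow(16, n - k, denom) / denom) % 1.0
        for k in range(n + 1, n + 20):
            s += 16.0 ** (n - k) / (8 * k + j)
        return s % 1.0

    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(16 * x)


# pi = 3.243F6A88... in hex
print("".join(f"{bbp_pi_hex_digit(i):X}" for i in range(8)))
```

The modular-exponentiation trick is what makes random access cheap: cost
grows only roughly linearly in the position n, never requiring the
preceding digits.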

As a starter problem, how about a simpler version:  Take n=1!  That
is, the key is simply a starting position in pi - taken from a
suitably large set, say the first 2^256 bits of pi - and we use
as our one-time pad the bits of pi starting from there.  An
attacker's problem now turns into:  Given a sequence of k successive
bits of pi taken from among the first 2^256 bits, can you do better
than chance in predicting the k+1'st bit?  The obvious approach of
searching through pi for matches doesn't look fruitful, but perhaps
we can do better.  Note that if pi *isn't* normal to base 2 - and
we still don't know if it is - this starter problem is soluble.

BTW, Bailey and Crandall's work - which led to this discussion -
ties the question of normality to questions about chaotic
sequences.  If the approach of using pi as a one-time pad
works, then all the systems based on chaotic generators
will suddenly deserve a closer look!  (Many fail for much
simpler reasons than relying on such a generator, but some
are untrustworthy not because we don't know of an attack
but because we have no clue how to tell if there is one.)


| Is this idea transcendental or irrational?
Mathematician's insult:  You're transcendental (dense and totally
irrational).
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Creativity and security

2006-03-20 Thread leichter_jerrold
I was tearing up some old credit card receipts recently - after all
these years, enough vendors continue to print full CC numbers on
receipts that I'm hesitant to just toss them as is, though I doubt there
are many dumpster divers looking for this stuff any more - when I found
a great example of why you don't want people applying their creativity
to security problems, at least not without a great deal of review.

You see, most vendors these days replace all but the last 4 digits of
the CC number on a receipt with X's.  But it must be boring to do the
same as everyone else, so some bright person at one vendor(*) decided
they were going to do it differently:  They X'd out *just the last four
digits*.  After all, who could guess the number from the 10,000
possibilities?

Ahem.
-- Jerry

(*) It was Build-A-Bear.  The receipt was at least a year old, so for
all I know they've long since fixed this.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Study shows how photonic decoys can foil hackers

2006-03-01 Thread leichter_jerrold
Does anyone have an idea of what this is about?  (From Computerworld):

-- Jerry


FEBRUARY 23, 2006 (NETWORK WORLD) - A University of Toronto professor
and researcher has demonstrated for the first time a new technique for
safeguarding data transmitted over fiber-optic networks using quantum
cryptography.

Professor Hoi-Kwong Lo, a member of the school's Centre for Quantum
Information and Quantum Control, is the senior author of a study that
sheds light on using what's called a photonic decoy technique for
encrypting data.

Quantum cryptography is starting to be used by the military, banks and
other organizations that seek to better protect the data on their
networks.  This sort of cryptography uses photons to carry encryption
keys, which is considered safer than protecting data via traditional
methods that powerful computers can crack. Quantum cryptography is
based on fundamental laws of physics, such that merely observing a
quantum object alters it.

Lo's team used modified quantum key distribution equipment from Id
Quantique and a 9.3-mile fiber-optic link to demonstrate the use of
decoys in data transmissions and to alert receiving computers about
which photons were legit and which were phony.  The technique is
designed to support high key generation rates over long distances.

Lo's study is slated to appear in the Feb. 24 issue of Physical Review
Letters.

Lo notes that existing products, such as those from Id Quantique and
MagiQ Technologies, are for point-to-point applications used by the
military and security-sensitive businesses.  "In the long run, one can
envision a global quantum cryptographic network, either based on
satellite relays or based on quantum repeaters," he says.

University researchers are fueling many advances in network
security. A University of Indiana professor recently revealed
technology for thwarting phishing and pharming culprits by using a
technique called active cookies.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


DHS: Sony rootkit may lead to regulation

2006-02-28 Thread leichter_jerrold
DHS: Sony rootkit may lead to regulation
U.S. officials aim to avoid future security threats caused by copy
protection software

News Story by Robert McMillan

FEBRUARY 16, 2006 (IDG NEWS SERVICE) - A U.S. Department of Homeland
Security official warned today that if software distributors continue to
sell products with dangerous rootkit software, as Sony BMG Music
Entertainment recently did, legislation or regulation could follow.

"We need to think about how that situation could have been avoided in
the first place," said Jonathan Frenkel, director of law enforcement
policy for the DHS's Border and Transportation Security Directorate,
speaking at the RSA Conference 2006 in San Jose. "Legislation or
regulation may not be appropriate in all cases, but it may be warranted
in some circumstances."

Last year, Sony began distributing XCP (Extended Copy Protection)
software in some of its products. The digital rights management
software, which used rootkit cloaking techniques normally employed by
hackers, was later found to be a security risk, and Sony was forced to
recall millions of its CDs.

The incident quickly turned into a public relations disaster for
Sony. It also attracted the attention of DHS officials, who met with
Sony a few weeks after news of the rootkit was first published, Frenkel
said. "The message was certainly delivered in forceful terms that this
was certainly not a useful thing," he said.

While Sony's software was distributed without malicious intent, the DHS
is worried that a similar situation could occur again, this time with
more-serious consequences. "It's a potential vulnerability that's of
strong concern to the department," Frenkel said.

Though the DHS has no ability to implement the kind of regulation that
Frenkel mentioned, the organization is attempting to increase industry
awareness of the rootkit problem, he said. "All we can do is, in
essence, talk to them and embarrass them a little bit," Frenkel said.

In fact, this is not the first time the department has expressed
concerns over the security of copy protection software. In November, the
DHS's assistant secretary for policy, Stewart Baker, warned copyright
holders to be careful of how they protect their music and DVDs. "In the
pursuit of protection of intellectual property, it's important not to
defeat or undermine the security measures that people need to adopt in
these days," Baker said, according to a video posted to The Washington
Post Web site.

Despite the Sony experience, the entertainment industry's use of
rootkits appears to be an ongoing problem. Earlier this week, security
vendor F-Secure Corp. reported that it had discovered rootkit technology
in the copy protection system of the German DVD release of the American
movie "Mr. and Mrs. Smith." The DVD is distributed in Germany by
Kinowelt GmbH, according to the Internet Movie Database.

Baker stopped short of mentioning Sony by name, but Frenkel did
not. "The recent Sony experience shows us that we need to be thinking
about how to ensure that consumers aren't surprised by what their
software is programmed to do," he said.

Sony BMG officials could not immediately be reached for comment.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: GnuTLS (libgrypt really) and Postfix

2006-02-14 Thread leichter_jerrold
|  I disagree strongly here.  Any code which detects an impossible state
|  or an error clearly due to a programming error by the caller should
|  die as soon as possible.  
| 
| That is a remarkably unprofessional suggestion.  I hope the people
| who write software for autopilots, pacemakers, antilock brakes,
| etc. do not follow this suggestion.
| 
| This just shows the dangers of over-generalization.
And *this* shows the danger of false dichotomies.

| Of course, we have to decide which is more important: integrity,
| or availability.  I suspect that in the overwhelming majority (perhaps
| all) of the cases where libgcrypt is used, integrity is more important
| than availability.  If that is true, well, if in doubt, it's better to
| fail closed than to fail open.
| 
| You rightly points out that there are important applications where
| availability is more important than integrity.  However, I suspect
| those cases are not too common when building Internet-connected desktop
| applications.
A library can't possibly know what kind of applications it will be part of!

| I think the attitude that it's better to die than to risk letting an
| attacker take control of the crypto library is defensible, in many cases.
| Of course, it would be better for a crypto library to document this
| assumption explicitly than to leave it up to users to discover it the
| hard way, but I would not agree with the suggestion that this exit before
| failing open stance is always inappropriate.
No, "the library thinks it can call exit()" is *always* inappropriate.

There are reasonable ways to deal with this kind of thing that are just as
safe, but allow general-purpose use.  For example:

1.  On an error like this, put the encrypted connection (or whatever
    it is) into a permanent error state.  Any further calls act as
    if the connection had been closed.  Any incoming or outgoing
    data is erased and discarded.  Any keying material is
    immediately erased and discarded.  Of course, return error
    statuses to the caller appropriately.  (You don't return
    error statuses?  Then you're already talking about a poor
    design.  Note that there's a world of difference between
    returning an error status *locally* and sending it over the
    wire.  The latter can turn your code into an oracle.  The
    former ... well, unless you're writing a closed-source
    library for a secret protocol and you assume your code and
    protocol can't be reverse-engineered, the local user can
    *always* get this information somehow.)

2.  When such an error occurs, throw an exception.  In a language
    that supports exceptions as such (C++, Java), use the native
    mechanism.  For languages that don't support exceptions, you
    can call a function through a pointer.  By default, the
    function can call, or simply be, exit(); but the user can
    specify his own function.  The function *must* be allowed
    to do something other than call exit()!

    In general, this technique has to be combined with technique 1.
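
The two techniques combine naturally.  Here is a minimal sketch in
Python; the class, method names, and "empty record" trigger are all
invented for illustration and are not any real library's API:

```python
import sys


class CryptoError(Exception):
    pass


class SecureChannel:
    """Technique 1 (poison the connection, scrub keys) plus technique 2
    (caller-supplied fatal-error handler instead of a hard-coded exit)."""

    def __init__(self, key: bytes, on_fatal=None):
        # Default preserves fail-closed behavior: abort the process.
        self._on_fatal = on_fatal or (lambda msg: sys.exit(msg))
        self._key = bytearray(key)
        self._dead = False

    def _fatal(self, msg: str) -> None:
        # Technique 1: enter a permanent error state, erase keying
        # material first...
        self._dead = True
        for i in range(len(self._key)):
            self._key[i] = 0
        # ...then technique 2: let the *caller* decide what fatal means.
        self._on_fatal(msg)

    def send(self, data: bytes) -> None:
        if self._dead:
            raise CryptoError("channel is in a permanent error state")
        if len(data) == 0:
            self._fatal("impossible state: empty record")  # example trigger
        # ... real encryption and transmission would go here ...


def raising_handler(msg: str) -> None:
    raise CryptoError(msg)


# A long-running server opts out of process death:
ch = SecureChannel(b"\x13" * 16, on_fatal=raising_handler)
```

Even when the caller chooses to survive the error, the channel is
already dead and its keys already scrubbed, so nothing an attacker can
do with the poisoned object leaks anything further.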

Granted, a user *could* write code that leaked important information upon
being informed of an error.  But he would have to try.  And, frankly,
there's not a damn thing you can do to *prevent* that.  Most Unix systems
these days allow you to interpolate functions over standard library
functions.  You think you're calling exit(), or invoking kill()?  Hah,
I've replaced them with my own functions.  So there.  (No interpolation?
Patching compiled code to change where a function call goes is pretty
easy.)

Of course, all this is nonsensical for an open-source library anyway!

You're kidding yourself if you think *any* programming practice will
protect you against a programmer who needs his program to do something
that you consider a bad idea.  But the whole approach is fundamentally
wrong-headed.  The user of your library is *not* your enemy.  You should
be cooperating with him, not trying to box him in.  If you treat him as
your enemy, he'll either choose another library - or find a way to work
around your obstinacy.

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Nonrepudiation - in some sense

2006-02-10 Thread leichter_jerrold
From a description of the Imperva SecureSphere technology.  Imperva makes 
firewalls that can look inside SSL sessions:

SSL Security that Maintains Non-Repudiation

SecureSphere can inspect the contents of both HTTP and HTTPS
(SSL) traffic.  SecureSphere delivers higher HTTPS performance
than competing reverse proxy point solutions because
SecureSphere decrypts SSL encrypted traffic but does not
terminate it. Therefore SecureSphere simply passes the encrypted
packets unchanged to the application or database server. This
eliminates the overhead of re-packaging (i.e. changing) the
communications, re-negotiating a new SSL connection to the
server, and re-encrypting the information. Moreover, it
maintains the non-repudiation of transactions since the
encrypted communication is between client and application with
no proxy acting as middleman.

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: thoughts on one time pads

2006-01-31 Thread leichter_jerrold
[CD destruction] 
| You missed the old standby - the microwave oven.
| 
| The disk remains physically intact (at least after the
| 5 seconds or so I've tried), but a great deal of pretty
| arcing occurs in the conductive data layer. Where the
| arcs travel, the data layer is vapourized. 
| 
| The end result is an otherwise intact disk in which the
| data layer is broken up into small intact islands 
| surrounded by clear channels. It might be interesting
| to try a longer burn, in which case you might also
| want to put a glass of water in with the disk(s) to
| preserve the microwave's electronics.
| 
| This is probably less effective than the other methods
| you've described, but it's very fast and leaves no noxious
| residues. It also uses a very commonly available tool.
As always, who are you defending against?  There are commercial CD shredders
whose effect - preserved islands with some destroyed material - is produced
by a much more prosaic approach:  The surface is covered with a grid of pits.
Only a small fraction of the surface is actually damaged, but no standard
device will have any chance of reading the disk.  I suppose specialized
hardware might do so, but even if it could, there's the question of the
encoding format.  CD's are written with error-correcting codes which can
recover from fairly significant damage - but if the damage exceeds their
correction capability, they provide no information about what was there to
begin with.

If you want to go further down the same route, grinding the whole surface of
the disk should work even better.

Of course, all this assumes that there's no way to polish or otherwise smooth
the protective plastic.  Polishing should work if the scratches aren't too
deep.  (The pits produced by the CD shredder I've seen look deep enough to
make this difficult, but that's tough to do over the whole surface.)

Probably the best approach would be better living through chemistry:  It
should be possible to dissolve or otherwise degrade the plastic, leaving the
internal metallic surface - very thin and delicate - easy to destroy.  One
would need to contact a chemist to determine the best way to do this.  (If
all else fails, sulfuric acid is likely pretty effective - if not something
you want to keep around.)

Realistically, especially given the error-correcting code issues, anything 
that breaks the CD into a large number of small pieces probably puts any 
recovery into the national lab range - if even they could do it.
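The error-correcting-code point above can be illustrated with a toy example.
A Hamming(7,4) code is far weaker than the Reed-Solomon-based CIRC coding
actually used on CDs - the choice here is purely for brevity - but it shows
the same cliff-edge behaviour: any single-bit error per codeword is repaired
exactly, while two errors are silently "corrected" to the wrong data, with no
hint of what was originally there.

```python
# Toy illustration: Hamming(7,4) corrects any one flipped bit per
# 7-bit codeword, but two flips are miscorrected with no warning.
# (Real CDs use much stronger Reed-Solomon/CIRC codes, but they fail
# in the same all-or-nothing way once their capability is exceeded.)

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    # Syndrome = 1-based position of a single flipped bit, 0 if none.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)

one_err = list(code); one_err[3] ^= 1
print(hamming74_decode(one_err) == data)   # one error: recovered

two_err = list(code); two_err[0] ^= 1; two_err[5] ^= 1
print(hamming74_decode(two_err) == data)   # two errors: silently wrong
```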

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: quantum chip built

2006-01-19 Thread leichter_jerrold
| I'm fairly ignorant of quantum computers, 
I'm no expert myself.  I can say a few things, but take them with a grain of
salt.

|   having had the opportunity
| to see Schor lecture at a local university but unfortunately finding
| myself quickly out of my depth (I still don't understand the weird
| notation they use for representing [superpositions of?] states in
| Bell inequalities and his lecture was full of diagrams that I didn't
| grok at all).  So, I have a few questions:
| 
| 1) Are there quantum encryption algorithms that we will use on quantum
| computers to prevent quantum cryptanalysis?  Not just key
| distribution; ID Quantique is commercially selling units for that
| already.
I don't recall seeing any quantum encryption algorithms proposed.  Someone
may have done so, of course - the field is moving quickly.  Our understanding
of quantum computation is very limited so far.  Quantum key exchange is one
pretty well-developed area.  The main other algorithms are variations of
search.  A number of years down the road, I'm sure both will be seen as
obvious applications of ideas that had been around for years.  (Quantum key
exchange is the practical application of ideas from thought experiments going
back to the birth of quantum mechanics.  Search algorithms are pretty
straightforward applications of the basic idea of quantization.  There was
never a reason to look at these things as computational mechanisms until
recently.)

| 2) Can't they superimpose more than two states on a particle, such
| that the precision of the equipment is the limiting factor and not the
| number of entangled particles?
There is actually a limit to the number of distinct quantum states that
any system can have, based mainly on the *area*, not volume, of the system.
(In some sense, we seem to have a 2-space-dimensional universe!)  The limit
for an elementary particle is pretty small.

BTW, this has some interesting implications.  We usually argue that some
computation, while beyond our current reach, is in principle possible.  But
in fact one can compute a bound on the number of primitive computational
events that could have taken place since the creation of the universe.  If
a computation required more than that number of steps - think bit flips, if
you like - then in principle it would seem to be impossible, not possible!
One can flip this around:  Suppose you wanted to do a brute-force attack
against a 128-bit key.  OK, that requires at least 2^128 computational steps.
Suppose you wanted the result in 100 years.  Then the computation can't
require a volume of space more than 100 light-years across.  (Well, really
50.)  You can compute how many bit flips could take place in a volume of
space-time 100 light-years by 100 years across.  If it's less than 2^128,
then even in principle, no such attack is possible.

I did some *very* rough calculations based on some published results - I
didn't have enough details or knowledge to do more than make a very rough
estimate - and it turns out that we are very near the "not possible in
principle" point.  If I remember right, a 128-bit key is, in principle, just
barely attackable in 100 years; but a 256-bit key is completely out of
bounds.  So much for the snake-oil "my 1500-bit key is much more secure than
your 256-bit key" claims!
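The flavor of this argument can be reproduced with ordinary arithmetic.  The
10^30 operations/second figure below is an arbitrary, deliberately generous
assumption for illustration - not a derived physical limit like the rough
calculation described above.  The point is only how differently 2^128 and
2^256 fare against *any* such budget:

```python
# Back-of-envelope sketch of the brute-force bound.  The attacker
# speed is an assumed, wildly generous figure, not a physical result.
ops_per_second = 1e30                     # assumed aggregate attacker speed
seconds = 100 * 365.25 * 24 * 3600        # one century

budget = ops_per_second * seconds
print(f"century budget : {budget:.2e} operations")
print(f"2^128 keyspace : {2**128:.2e} -> "
      f"{'within budget' if budget > 2**128 else 'out of reach'}")
print(f"2^256 keyspace : {2**256:.2e} -> "
      f"{'within budget' if budget > 2**256 else 'out of reach'}")
```

Even with this absurdly optimistic budget, a 128-bit search is only barely
covered, while 2^256 exceeds it by dozens of orders of magnitude.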


| 3) Does anyone remember the paper on the statistical quantum method
| that uses a large source of molecules as the computing device?  I
| think it was jokingly suggested that a cup of coffee could be used as
| the computing device.  What became of that?  All this delicate mucking
| about with single atoms is beyond my means for the forseeable future. 
| I still have hopes of progress on the classical system but if that
| doesn't work out my second bet is on computation en masse.
There are some very recent - last couple of weeks - results on creating
entangled systems of 100's of thousands of particles.  (Hence my suggestion
that we are doing quantum transistors, but will eventually do quantum IC's.)
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: long-term GPG signing key

2006-01-18 Thread leichter_jerrold
| Even though triple-DES is still considered to have avoided that
| trap, its relatively small block size means you can now put the
| entire decrypt table on a dvd (or somesuch, I forget the maths).
|  
|  
|  This would need 8 x 2^{64} bytes of storage which is approximately
|  2,000,000,000 DVD's (~ 4 x 2^{32} bytes on each).
|  
|  Probably, you are referring to the fact that during encryption of a
|  whole DVD, say, in CBC mode two blocks are likely to be the same
|  since there are an order of 2^{32} x 2^{32} pairs.
| 
| Thanks for the correction, yes, so obviously I
| muffed that one.  I saw it mentioned on this list
| about a year ago, but didn't pay enough attention
| to recall the precise difficulty that the small
| block size of 8 bytes now has.
| 
| The difficulty with 3DES's small blocksize is the 2^32 block limit when 
| using CBC -- you start getting collisions, allowing the attacker to 
| start building up a code book.  The amount of data is quite within 
| reach at gigabit speeds, and gigabit Ethernet is all but standard 
| equipment on new computers.  Mandatory arithmetic: 2^32 bytes 
But the collisions are after 2^32 *blocks*, not *bytes*.  So the number to
start with is 2^35 bytes.

|   is 2^38 
So this correspondingly is 2^41.

| bits, or ~275 * 10^9.  At 10^9 bits/sec, that's less than 5 minutes.  
And this is about 2 * 10^12 bits - roughly 40 minutes.

| Even at 100M bps -- and that speed *is* standard today -- it's less 
| than an hour's worth of transmission.  The conclusion is that if you're 
8 hours.

| encrypting a LAN,
Realistically, rekeying every half an hour is probably acceptable.  In fact,
even if an attacker built up a large fraction of a codebook, there is no
known way to leverage that into the actual key.  So you could rekey using
some fixed procedure, breaking the codebook attack without requiring any
changes to the underlying protocols (i.e., no extra data to transfer).
Something like running the key through a round of SHA should do the trick.
If it's agreed that this is done after the 2^30-th block is sent/received,
on a 1 Gb/s network you're doing this every 20 minutes, with essentially no
chance of a practical codebook attack.

(Not that replacing 3-DES with AES isn't a good idea anyway - but if you
have a fielded system, this may be the most practical alternative.)
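The fixed rekeying procedure suggested above can be sketched as a simple hash
ratchet.  SHA-256 stands in here for "a round of SHA", and the interval and
key handling are illustrative only (a real 3DES key would also need its
parity bits adjusted, which is omitted):

```python
# Sketch of ratcheting a session key through a hash every 2^30 blocks.
# Because the step is one-way, a codebook collected under one key is
# useless against later keys, and capturing a later key does not reveal
# the keys (or codebooks) of earlier traffic.
import hashlib

REKEY_INTERVAL = 2**30             # blocks between ratchet steps

def ratchet(key: bytes) -> bytes:
    # One-way step: earlier keys cannot be computed from later ones.
    return hashlib.sha256(key).digest()[:24]   # 24 bytes of 3DES keying material

k0 = bytes(24)                     # placeholder initial key for illustration
k1 = ratchet(k0)
k2 = ratchet(k1)
print(k1 != k2, len(k2))           # distinct keys, still 24 bytes
```

Both sides apply the same step at the agreed block count, so no extra data
crosses the wire - which is exactly why no protocol change is needed.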

|   you need AES or you need to rekey fairly often.
Perhaps I'm being a bit fuzzy this morning, but wouldn't using counter mode
avoid the problem?  Now the collisions are known to be exactly 2^64 blocks
apart, regardless of the initial value for the counter.  Even at 10 Gb/second,
that will take some time to become a problem.  (Of course, that *would*
require redoing the protocol, at which point using AES might be more
reasonable.)
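The counter-mode arithmetic above is easy to check.  Assuming a 64-bit block
whose keystream repeats only when the counter wraps, the time to transmit
2^64 blocks at plausible line rates works out to millennia:

```python
# Rough check of the counter-mode observation: with a 64-bit block,
# the counter - hence the keystream block - repeats only after 2^64
# blocks, no matter what value it starts from.
BLOCK_BITS = 64
BLOCKS_TO_WRAP = 2**64

def years_to_wrap(rate_bits_per_sec: float) -> float:
    # Time to transmit 2^64 blocks at the given line rate, in years.
    seconds = BLOCKS_TO_WRAP * BLOCK_BITS / rate_bits_per_sec
    return seconds / (365.25 * 24 * 3600)

for rate, label in [(1e9, "1 Gb/s"), (1e10, "10 Gb/s")]:
    print(f"{label:>7}: counter repeats after ~{years_to_wrap(rate):,.0f} years")
```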
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: quantum chip built

2006-01-18 Thread leichter_jerrold
|  From what I understand simple quantum computers can easily brute-force 
|  attack RSA keys or other
|  types of PK keys.  
|  
|  My understanding is that quantum computers cannot easily do anything.
|  
| 
| Au contraire, quantum computers can easily perform prime factoring or 
| perform discrete logarithms - this is Shor's algorithm and has been 
| known for more than a decade.  The difficulty is in making a QC.
| 
|  
|  Is ECC at risk too?  And are we at risk in 10, 20 or 30 years from now?
|  
| 
| ECC is also at risk because it relies on the difficulty of discrete 
| logarithms which are victim to a quantum attack.  Are we at risk in 10, 
| 20 or 30 years?  Well, as John said, it's hard to say.  The first 
| working 2 qbit computers were demonstrated in 1998, then 3 qbits in the 
| same year.  7 qbits were demonstrated in 2000.  8 in December 2005.  As 
| you can see, adding a qbit is pretty hard.  In order to factor a 1024 
| bit modulus you'd need a 1024 bit QC.  Perhaps if there were some sudden 
| breakthrough it'd be a danger in a decade - but this is the same as the 
| risk of a sudden classical breakthrough: low.
There is little basis for any real estimates here.  First off, you should
probably think of current qbit construction techniques as analogous to
transistors.  If you looked at the number of transistors in a computer and
didn't know that IC's were on the way, you would make much smaller estimates
as to the sizes of practical machines in 1980, much less 2006.

But more fundamentally, qbits don't necessarily scale linearly.  Yes,
current algorithms may need some number of qbits to deal with a key of n
bits, but the tradeoff between time and q-space is not known.  (Then again,
the tradeoff between time and space for *conventional* computation isn't
known, except for some particular algorithms.)  I believe there's a result
that if any of some broad class of quantum computations can be done using n
qbits, it can also be done with just one (plus conventional bits).
 
| My assessment: nothing to worry about for now or in the immediate 
| future. A key valid for 20 years will face much greater dangers from 
| expanding classical computer power, weak implementations, social 
| engineering etc.  The quantum chip is just a new housing, not anything 
| that puts RSA or ECC at risk.
I'm not sure I would be that confident.  There are too many unknowns - and
quantum computation has gone, in a very short period of time, from "neat
theoretical idea, but there's no possible way it could actually be done,
because of many plausible-sounding arguments" to "well, yes, it can be done
for a small number of bits, but they can't really scale it".

-- Jerry

| Regards,
| 
| Michael Cordover
| -- 
| http://mine.mjec.net/
| 
| -
| The Cryptography Mailing List
| Unsubscribe by sending unsubscribe cryptography to
[EMAIL PROTECTED]
| 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: phone records for sale.

2006-01-09 Thread leichter_jerrold
| 18 USC 2702(c) says
| 
|   A provider described in subsection (a) may divulge a record or
|   other information pertaining to a subscriber to or customer of
|   such service (not including the contents of communications
|   covered by subsection (a)(1) or (a)(2)) ...
| 
|   (6) to any person other than a governmental entity.
| 
| The first time I read that last clause, I couldn't believe it; I
| actually went and looked up the legislative history.  I found that
| Congress wanted to permit sale for marketing or financial reasons, but
| wanted to limit the power of the government.  (The Supreme Court had
| ruled previously that individuals had no expectation of privacy for
| phone numbers they'd dialed, since they were being given voluntarily to
| a third party -- the phone company.)
Where two parties exchange information voluntarily, deciding who ought to
have control of what can get ... interesting.  Here's a more complex case:
Vendors have long claimed the right to use their own customer lists for
marketing purposes.  But suppose you buy using a credit card.  Then
information about your purchase is known not just to you and the vendor you
dealt with, but to the credit card company (construed broadly - there's the
issuing bank, the vendor's bank, various clearing houses...).  Can the
credit card company use the same information for marketing - selling, say,
a list of a vendor's customers who used a credit card to the vendor's
competitors?  The same vendors who claim that you have no right to tell them
what they can do with the transaction information incidental to you doing
business with them make a very different set of arguments when it's their
information being sold by someone else.

This issue came up a number of years ago, but I haven't heard anything
recent about it.  I'm not sure how it came out - the credit card companies
may have decided to back off because the profit wasn't worth the conflicts.
We're in the midst of battles, not yet resolved as far as I know, about
whether a search engine can let company A put ads up in response to searches
for competitor company B.  Can an ISP sell lists of people who visited
ford.com from among their customers to GM?

Information doesn't want to be free - in today's economy, information wants
to be charged for everywhere, from everyone.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-23 Thread leichter_jerrold
| | But is what they are doing wrong?
| |
| | The users?  No, not really, in that given the extensive conditioning
| | that they've been subject to, they're doing the logical thing, which is
| | not paying any attention to certificates.  That's why I've been taking
| | the (apparently somewhat radical) view that PKI in browsers is a lost
| | cause - apart from a minute segment of hardcore geeks, neither users nor
| | web site admins either understand it or care about it, and no amount of
| | frantic turd polishing will save us any more because it's about ten
| | years too late for that - this approach has been about as effective as
| | "Just say no" has for STD's and drugs.  That's why I've been advocating
| | alternative measures like mutual challenge-response authentication,
| | it's definitely still got its problems but it's nothing like the mess
| | we're in at the moment.  PKI in browsers has had 10 years to start
| | working and has failed completely, how many more years are we going to
| | keep diligently polishing away before we start looking at alternative
| | approaches?
| I agreed with your analysis when I read it - and then went on to my next
| mail message, also from you, which refers to your retrospective on the
| year and had a pointer to a page at financialcryptography.  So ... I try
| to download the page - using my trusty Netscape 3.01, which with tons of
| things turned off (Java, Javascript, background images, autoloading of
| images) remains my work-a-day browser, giving decent performance on an
| old Sun box.
|
| Well, guess what:
|
|   Netscape and this server cannot communicate securely
|   because they have no common cryptographic algorithm(s).
|
| So ... we have the worst possible combination:  A system that doesn't
| work, which is forced on you even when you don't care about it (I can
| live with the possibility that someone will do a MITM attack on my
| attempt to read your article).
|
| Sigh.
BTW, illustrating points made here, the cert is for
financialcryptography.com but your link was to
www.financialcryptography.com.  So of course Firefox generated a warning.
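The mismatch noted above is exactly what browser hostname checking is meant
to catch.  A rough sketch of RFC 2818-style matching (the function below is
illustrative, not any particular browser's implementation, and real
validators handle more corner cases) shows why a certificate for
financialcryptography.com does not cover www.financialcryptography.com:

```python
# Sketch of RFC 2818-style certificate hostname matching: a bare domain
# never covers a subdomain, and a "*" wildcard matches exactly one label.

def hostname_matches(cert_name: str, hostname: str) -> bool:
    cert_labels = cert_name.lower().split(".")
    host_labels = hostname.lower().split(".")
    if len(cert_labels) != len(host_labels):
        return False              # bare domain cannot cover a subdomain
    for c, h in zip(cert_labels, host_labels):
        if c != "*" and c != h:
            return False          # "*" stands in for one label only
    return True

print(hostname_matches("financialcryptography.com",
                       "www.financialcryptography.com"))   # False -> warning
print(hostname_matches("*.financialcryptography.com",
                       "www.financialcryptography.com"))   # True
```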


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-21 Thread leichter_jerrold
|  Imagine an E-commerce front end:  Instead of little-guy.com buying a
|  cert which you are supposed to trust, they go to e-commerce.com and pay
|  for a link.  Everyone trusts e-commerce.com and its cert.  e-commerce
|  provides a guarantee of some sort to customers who go through it, and
|  charges the little guys for the right.
| 
| Do you mean like Amazon Marketplace and Amazon zShops? I think it's been
| done already:
| 
| http://www.amazon.com/exec/obidos/tg/browse/-/1161232/103-4791981-1614232
Well, yes, and eBay provides the same service.  But how much protection are
they providing for buyers?  I think Amazon will cover the first $100 a
customer paid.  eBay gives you a bit of protection if you go with PayPal,
but not a whole lot - they rely on their reputation system.

e-commerce.com would bring up a page saying:  "We guarantee that
transactions up to $nnn with this site will be to your satisfaction or your
money back."  The merchant would specify the maximum dollar value, and pay
e-commerce.com based on the limit and, presumably, his reputation with
e-commerce.  (This is one way it might be set up - there are certainly other
ways.  And, even in this style, the entire wording of the guarantee would be
something agreed upon between the seller and e-commerce.)
-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-18 Thread leichter_jerrold
| 2) the vast majority of e-commerce sites did very few number of
| transactions each. this was the market segment involving e-commerce
| sites that aren't widely known and/or represents first time business. it
| is this market segment that is in the most need of trust establishment;
| however, it is this market segment that has the lowest revenue flow to
| cover the cost of creating a trust value.
...which raises the interesting question of whether there is a role here for
banks in their traditional role:  As introducers and trusted third parties.
Imagine an E-commerce front end:  Instead of little-guy.com buying a cert
which you are supposed to trust, they go to e-commerce.com and pay for a
link.  Everyone trusts e-commerce.com and its cert.  e-commerce provides a
guarantee of some sort to customers who go through it, and charges the
little guys for the right.

| there is actually a third issue for the vast numbers of low traffic
| e-commerce merchants ... the lack of trust can be offset by risk
| mitigation. it turns out that this market segment, where there is
| possibly little reason for the customer to trust the merchant, has had
| trust issues predating the internet ... at least going back to the
| introduction of credit financial transactions. as opposed to trust, risk
| mitigation was addressed in this period with things like reg-e and the
| customer having a high level of confidence that disputes tended to
| heavily favor the customer. these characteristics of risk mitigation, in
| lieu of trust, then carried over into the internet e-commerce realm.
Yup.  This is the role E-commerce.com would play.

Since e-commerce.com would actually be present in the transaction - as
opposed to a distant cert authority - in principle it could charge in a way
that made sense.  If it's mitigating risk, the cost should be proportional
to the risk - i.e., the size of the transaction and what e-commerce knows
about little-guy and its history.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: crypto for the average programmer

2005-12-12 Thread leichter_jerrold
On Mon, 12 Dec 2005, Steve Furlong wrote:
|  My question is, what is the layperson supposed to do, if they must use
|  crypto and can't use an off-the-shelf product?
| 
| When would that be the case?
| 
| The only defensible situations I can think of in which a
| non-crypto-specialist programmer would need to write crypto routines
| would be an uncommon OS or hardware, or a new or rare programming
| language which doesn't have libraries available from SourceForge etc.
| Or maybe implementing an algorithm that's new enough it doesn't have a
| decent free implementation, but I'm not sure such an algorithm should
| be used in production code.
I can tell you a situation that applied in one system I worked on:  You
could go with SSL, which gets you into GPL'ed code, not to mention the known
complexities of using the SSL libraries correctly (steep learning curve); or
we could go with commercial code that had fairly steep license fees.  The
decision was to use off-the-shelf where completely unencumbered (e.g.,
Gladman's AES implementation), and build the rest ourselves.

BTW, there are other issues with SSL.  We needed to fit this implementation
into an already-running system with minimal effect - but try to get people
to use it.  Having to get certificates for SSL was a big hurdle.  Even
creating self-signed certs was a hassle.  The existing code ran directly
over TCP, and assumed a byte stream.  SSL is record-oriented.  This shows
up, for example, when your 1-byte ACK (of which we send many) turns into a
32-byte block (or even larger).

We weren't interested in complete security - we just needed to raise the
level considerably.  Given the nature of the application, message
authentication was not *that* big a deal - it could be put off.

SSL is a fine protocol, and in theoretical terms, yes, you probably want
everything it provides.  But in practice it's too much.

BTW, there are some interesting social issues.  Before we implemented our
own crypto layer, we recommended people go through ssh tunnels.  The product
was set up to allow that.  I made the argument "Do you really want us to
provide your crypto?  We're not crypto experts."  But this was perceived as
clunky, complicated ... it didn't make it look as if the *product* itself
provided the security.  Those factors were ultimately seen as more important
than the very highest level of security.  You can *still* use ssh, of
course.  (In fact, I was in a discussion with a military contractor who
wanted to use the product.  The question came up of exactly how our crypto
worked, whether it would be approvable for their application, etc.  My
comment was:  "Isn't NSA providing you guys with encrypted links anyway?"
Answer - sure, you're right; we don't need to do application-level
encryption.  If IPSEC were actually out there, all sorts of nasty issues
would just magically go away.)

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Malicious chat bots

2005-12-08 Thread leichter_jerrold
[From Computerworld - see
http://www.computerworld.com/securitytopics/security/story/0,10801,106832,00.html?source=NLT_PM&nid=106832
]

   Security firm detects IM bot that chats with you

   Bot replies with messages such as 'lol no its
   not its a virus'

   News Story by Nancy Gohring

   DECEMBER 07, 2005
   (IDG NEWS SERVICE) - A
   new form of malicious instant-message bot is on the loose
   that talks back to the user, possibly signifying a
   potentially dangerous trend, an instant messaging security
   firm said.

   IMlogic Inc. issued the warning late yesterday after
   citing a recent example of such a malicious bot. On
   Monday, the company first published details of a new
   threat known as IM.Myspace04.AIM. Once the computer of an
   America Online Inc. IM user is infected, the bot sends
   messages to people on the infected user's buddy list,
   making the messages appear to come from the infected user.
   The user isn't aware that the messages are being sent. If
   recipients click on a URL sent with a message, they will
   also become infected and start spreading the virus.

   A bot is a program that can automatically interact with
   people or other programs. AOL, for example, has bots that
   let users ask questions via IM, such as directory queries,
   and the bot responds.

   The unusual part of this malicious bot is that it replies
   to messages. If a recipient responds after the initial
   message, the bot replies with messages such as "lol no its
   not its a virus" and "lol thats cool". Because the bot
   mimics a live user interaction, it could increase
   infection rates, IMlogic said.

   IMlogic continues to analyze this threat but so far it
   seems to only be propagating and not otherwise affecting
   users.

   An AOL spokesman said today that the company's IT staff
   has not yet seen the bot appear on its network. The
   company said it reminds its users not to click on links
   inside IM messages unless the user can confirm that he
   knows the sender and what is being sent.

   Some similar IM worms install spybots or keyloggers onto
   users' computers, said Sean Doherty, IMlogic's director of
   services in Europe, the Middle East and Africa. Such
   malicious programs record key strokes or other user
   activity in an effort to discover user passwords or other
   information.

   "What we're seeing with some of these worms is they vary
   quickly, so the initial one may be a probe to see how well
   it infected users, and then a later variant will be one
   that may put a spybot out," Doherty said. The initial worm
   could be essentially a proof of concept coming from the
   malware writers, he said.

   Computerworld staff writer Todd Weiss contributed to this
   article.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Proving the randomness of a random number generator?

2005-12-05 Thread leichter_jerrold
| There's another definition of randomness I'm aware of, namely that the
| bits are derived from independent samples taken from some sample space
| based on some fixed probability distribution, but that doesn't seem
| relevant unless you're talking about a HWRNG.  As another poster
| pointed out, this definition is about a process, not an outcome, as
| all outcomes are equally likely.
That's not a definition of randomness except in terms of itself.  What does
"independent samples" mean?  For that matter, what's a "sample"?  It's an
element chosen at random from a sample space, no?

"All outcomes equally likely" is again simply a synonym:  "Equally likely"
comes down to "any of them could come out, and the one that does is chosen
at random."

Probability theory isn't going to help you here.  It takes the notion of
randomness as a starting point, not something to define - because you really
can't!  Randomness is defined by its properties within the theory; it
doesn't need anything else.

One can, in fact, argue plausibly that randomness doesn't really exist:
It's simply a reflection of lack of knowledge.  Even if you get down to the
level of quantum mechanics, it's not so much that when an atom decays is
random, it's that we don't - and, in fact, perhaps *can't* - have the
knowledge of when that decay will happen ahead of time.  Once the decay has
occurred, all the apparent randomness disappears.  If it was "real", where
did it go?  (It's easy to see where our *ignorance* went.)
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: [Clips] Banks Seek Better Online-Security Tools

2005-12-04 Thread leichter_jerrold
| You know, I'd wonder how many people on this
| list use or have used online banking.  
| 
| To start the ball rolling, I have not and won't.
Until a couple of months ago, I avoided doing anything of this sort at all.
Simple reasoning:  If I know I never do any financial stuff on-line, I can
safely delete any message from a bank or other financial institution.

Now, I pay some large bills - mortgage, credit cards - on line.  I just got
tired of the ever-increasing penalties for being even a day late in paying -
coupled with ever-more-unpredictable post office delivery times.  (Then
again, who can really say when the letter arrived at the credit card
company?  You have to accept their word for it, and they have every
incentive to err in their own favor.)

I have consistently refused on-line delivery of statements, automated
paying, or anything of that sort.  I cannot at this point foresee a world in
which I would trust these systems enough to willingly move in that
direction.  (It doesn't help that, for example, one credit-card site I use -
AT&T Universal - sends an invalid certificate.  AT&T Universal has its own
URL, but they are owned by Citibank, so use the citibank.com certificate.)

Of course, increasingly one has little choice.  My employer doesn't provide
an option:  Pay stubs are on-line only.  Reimbursement reports likewise.

There are increasing hints of various benefits if you use the on-line
systems for banking and credit cards and such.  The next step - it won't
be long - will be charges for using the old paper systems.  How many people
here still ask for paper airline tickets?  (I gave up on this one.)
-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Proving the randomness of a random number generator?

2005-12-03 Thread leichter_jerrold
| Hi,
| Apologies if this has been asked before.
| 
| The company I work for has been asked to prove the randomness of a random
| number generator. I assume they mean an PRNG, but knowing my employer it
| could be anything.. I've turned the work down on the basis of having
another
| gig that week. However, it raised the issue of just how this could be
| achieved. As far as I'm aware there are no strong mathematicians in the
| team, so it will get thrown out to the first available person (cool idea,
| eh?). There will most likely be very little time allocated to do it.
| 
| So, the question is, how can the randomness of a PRNG be proved within 
| reasonable limits of time, processing availability and skill?
It can't be *proved*, for any significant sense of that word, regardless of 
the availability of resources.  At best, you can - if you are lucky - prove 
*non-randomness*.  In practice, one makes attempts to prove non-randomness 
and, if enough of those fail - enough being determined by available 
resources - one just asserts randomness.

There are basically two kinds of tests one can do:

- Various kinds of statistical tests.  These look at things like
average numbers of 0's and 1's (assume a series of random
bits), correlations between successive bits, and so on.
There are a number of test suites out there, the best known
of which is probably the Diehard suite.  (I don't have a
link, but you should have no trouble finding it.)

  Testing like this looks for statistical randomness:  That is,
the random number generator produces outputs that have the
same statistical properties as random numbers.  They say
*nothing* about predictability by someone who knows how the
numbers have been generated.  For example, any good PRNG
will pass most or all of these tests, but if you know the
starting key, you can predict the outputs exactly.  So if
your interest is *cryptographic security*, statistical
randomness tells you little (though *lack* of it is obviously
a red flag).

- Attack attempts.  This is mainly relevant for cryptographic random
number generation, and is like cryptanalysis:  Look at the
generator and try to break it, i.e., predict its output.
The techniques and expertise needed are as varied as the
techniques used to construct the generators.  If the
generator uses measurements of system events, you need to
know, at a deep level, what causes those system events,
how an attacker might observe them, and how an attacker
might
influence them.  If the generator is based on some
electronic
circuit, e.g., a noise diode, you need to understand the
physics and electronics.  In almost all cases, you need to
understand how one attacks electronics, at various levels of
abstraction.

   A thorough analysis like this is likely to be very expensive, and
is prone to miss things - it's just the nature of the beast.
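As a concrete illustration of the first kind of test, here is a minimal
sketch (mine, not from the original post) of the frequency or "monobit"
test, one of the simplest statistical tests.  It also illustrates the
caveat above: a perfectly alternating 0101... stream passes the test with
flying colors, yet is trivially predictable.

```python
import math

def monobit_test(bits):
    """Frequency (monobit) test: in a random bit sequence, the counts
    of 0s and 1s should be roughly equal.  Returns a p-value; values
    below ~0.01 are strong evidence of statistical non-randomness."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)      # +1 per 1-bit, -1 per 0-bit
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))     # two-sided p-value

biased = [1] * 900 + [0] * 100                 # heavily biased stream
alternating = [i % 2 for i in range(1000)]     # 0101... - fully predictable

print(monobit_test(biased) < 0.01)   # True: bias detected
print(monobit_test(alternating))     # 1.0: passes, yet totally predictable
```

The second print is the point of the passage above: statistical
randomness says nothing about predictability by someone who knows the
generator.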

-- Jerry



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Broken SSL domain name trust model

2005-12-02 Thread leichter_jerrold
| ...basically, there was suppose to be a binding between the URL the user
| typed in, the domain name in the URL, the domain name in the digital
| certificate, the public key in the digital certificate and something
| that certification authorities do. this has gotten terribly obfuscated
| and loses much of its security value because users rarely deal directly
| in actual URLs anymore (so the whole rest of the trust chain becomes
| significantly depreciated)
One can look at this in more general terms.  For validation to mean anything,
what's validated has to be the semantically meaningful data - not some
incidental aspect of the transaction.  The SSL model was based on the
assumption that the URL was semantically meaningful, and further that any
other semantically meaningful data was irreversibly bound to it, so that if
the URL were valid, anything you read using that URL could also be assumed
to be equally valid.

This fails today in (at least) two different ways.  First, as you point out,
URLs are simply not semantically meaningful any more.  They are way too
complex, and they're used in ways nothing like what was envisioned when SSL
was designed.  In another dimension, things like cache poisoning attacks
lead to a situation in which, even if the URL is valid, the information
you actually get when you try to use it may not be the information that was
thought to be irreversibly bound to it.

Perhaps the right thing to do is to go back to basics.  First off, there's
your observation that for payment systems, certificates have become a
solution in search of a problem:  If you can assume you have on-line access
- and today you can - then a certificate adds nothing but overhead.

The SSL certificate model is, I contend, getting to pretty much the same
state.  Who cares if you can validate a signature using entirely off-line
data?  You have to be on-line to have any need to do such a validation, and
you form so many connections to so many sites that another one to do a
validation would be lost in the noise anyway.

Imagine an entirely different model.  First off, we separate encryption
from authentication.  Many pages have absolutely no need for encryption
anyway.  Deliver them in the clear.  To validate them, do a secure hash,
and look up the secure hash in an on-line registry which returns to you
the registered owner of that page.  Consider the page valid if the
registered owner is who it ought to be.  What's a registered owner?  It
could be the URL (which you never have to see - the software will take
care of that).  It could be a company name, which you *do* see:  Use a
Trustbar-like mechanism in which the company name appears as metadata
which can be (a) checked against the registry; (b) displayed in some non-
alterable form.

The registry can also provide the public key of the registered owner, for use
if you need to establish an encrypted session.  Also, for dynamically created
pages - which can't be checked in the registry - you can use the public key to
send a signed hash value along with a page.

Notice that a phisher can exactly duplicate a page on his own site, and it may
well end up being considered valid - but he can't change the links, and he
can't change the public key.  So all he's done is provide another way to get
to the legitimate site.

The hash registries now obviously play a central role.  However, there are a
relatively small number of them and this is all they do.  So the SSL model
should work well for them:  They can be *designed* to match the original
model.
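A minimal sketch of the lookup flow being proposed (my illustration; the
names are hypothetical, and a plain in-memory dict stands in for the
on-line hash registry):

```python
import hashlib

# Toy stand-in for the on-line registry: secure page hash -> registered owner.
REGISTRY = {}

def register_page(content: bytes, owner: str) -> None:
    REGISTRY[hashlib.sha256(content).hexdigest()] = owner

def validate_page(content: bytes, expected_owner: str) -> bool:
    """Hash the page, look the hash up in the registry, and consider the
    page valid if the registered owner is who it ought to be."""
    return REGISTRY.get(hashlib.sha256(content).hexdigest()) == expected_owner

register_page(b"<html>Welcome to Example Bank</html>", "Example Bank Inc.")

# An exact copy on a phisher's site still validates as the legitimate owner...
print(validate_page(b"<html>Welcome to Example Bank</html>", "Example Bank Inc."))
# ...but any altered page (changed links, say) fails validation.
print(validate_page(b"<html>Welcome to Evil Bank</html>", "Example Bank Inc."))
```

This captures the observation in the text: duplicating a page exactly just
provides another route to the legitimate content, while any modification
breaks the hash.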
-- Jerry





-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: timing attack countermeasures (nonrandom but unpredictable delays)

2005-11-30 Thread leichter_jerrold
|  Why do you need to separate f from f+d?  The attack is based on a timing
|  variation that is a function of k and x, that's all.  Think of it this way:
|  Your implementation with the new d(k,x) added in is indistinguishable, in
|  externally visible behavior, from a *different* implementation f'(k,x)
|  which has the undesired property:  That the time is a function of the
|  inputs.
| 
| Suppose that the total computation time was equal to a one way
| function of the inputs k and x.  How does he go about obtaining k?
Why would it matter?  None of the attacks depend on inverting f in any
analytical sense.  They depend on making observations.  The assumption is not
that f is invertible, it's that it's continuous in some rough sense.
 
| It is not enough that it is a function, it must be a function that can
| leak k given x and f(k,x) with an efficiency greater than a
| brute-force of the input space of k (because, presumably, f and the
| output are known to an attacker, so he could simply search for k that
| gives the correct value(s)).
Well, yes ... but the point is to characterize such functions in some useful
way other than they don't leak.  I suppose if d(k,x) were to be computed
as D(SHA1(k | x)) for some function D, timing information would be lost
(assuming that your computation of SHA1 didn't leak!); but that's a very
expensive way to do things:  SHA1 isn't all that much cheaper to compute than
an actual encryption.

| In reality, the time it takes to compute the crypto function is just
| another output to the attacker, and should have the same properties
| that any other output has with respect to the inputs one wishes to
| keep secret.  It does not have to be constant.
Agreed.  The problem is to (a) characterize those properties; (b) attain them
at acceptable cost.
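For concreteness, a sketch of that D(SHA1(k | x)) construction (my
illustration only; as noted above, it's an expensive way to do things):

```python
import hashlib

def keyed_delay_us(k: bytes, x: bytes, max_us: int = 256) -> int:
    """d(k,x) derived as D(SHA1(k || x)): a delay that is a deterministic
    function of the inputs but, without knowing k, is unpredictable and
    carries no information about the real computation time.  Because it
    is deterministic, averaging repeated queries with the same (k, x)
    cannot strip it out - it just averages to itself."""
    h = hashlib.sha1(k + x).digest()
    return int.from_bytes(h[:2], "big") % max_us   # microseconds to sleep

# Same inputs -> same delay; different inputs -> (generally) different delay.
print(keyed_delay_us(b"key", b"msg-1") == keyed_delay_us(b"key", b"msg-1"))  # True
```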
-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: timing attack countermeasures (nonrandom but unpredictable delays)

2005-11-17 Thread leichter_jerrold
|  In many cases, the observed time depends both on the input and on some
|  other random noise.  In such cases, averaging attacks that use the same
|  input over and over again will continue to work, despite the use of
|  a pseudorandom input-dependent delay.  For instance, think of a timing
|  attack on AES, where the time to compute the map X |-- AES(K,X) depends only
|  on K and X, but where the measured time depends on the computation time
|  (which might be a deterministic function of K and X) plus the network
|  latency (which is random).  Indeed, in this example even the computation
|  time might not be a deterministic function of K and X: it might depend
|  on the state of the cache, which might have some random component.
| 
| I don't follow; averaging allows one to remove random variables from
| the overall time, but you're still left with the real computation time
| plus the the deterministic delay I suggested as a function of the
| input.
| 
| Specifically, time t(k,x) = f(k,x) + d(k,x) + r
| 
| Where r is a random variable modelling all random factors, f is the
| time to compute the function, and d is the deterministic delay I
| suggested that is a function of the inputs.  Averaging with repeated
| evaluations of the same k and x allows one to compute the mean value
| of r, and the sum f+d, but I don't see how that helps one separate f
| from d.  What am I missing?
Why do you need to separate f from f+d?  The attack is based on a timing 
variation that is a function of k and x, that's all.  Think of it this way:
Your implementation with the new d(k,x) added in is indistinguishable, in
externally visible behavior, from a *different* implementation f'(k,x)
which has the undesired property:  That the time is a function of the
inputs.
Any attack that works against such an implementation works against yours.

Now, your model is actually general enough to allow for effective d(k,x)'s.
For example, suppose that d(k,x) = C - f(k,x), for some constant C.  Then
t(k,x) is just C - i.e., the computation is constant-time.

One can generalize this a bit:  f(k,x) in any real application isn't going to
have a unique value for every possible (k,x) pair (or even for every possible
x for fixed k, or k for fixed x).  Even if this were true in a theoretical
sense, you couldn't possibly measure it finely enough.  The real attack arises
because of a combination of things:  f(k,x) is actually a function of k and x
(or can be made so by averaging); the size of f's range is a significant
fraction of the size of the domain of k, x, or (k,x), depending on what you
are attacking; and, finally, that the inverse images of the elements of f's
range are fairly even in size.  These all arise because the nature of the
attack is to use f(k,x) to determine that k (or x or (k,x)) is actually a
member of some subset of the range of k (or ...), namely, the inverse image of
the observed value under f.  (The need for the last one can be seen by
considering a function that sends f(0,x) to x and every other pair of values
to 1.  Then it's easy to attack the 0 key by computing the timing, but no
information about any other key can be gained by timing attacks.)
  
If we think of your d() function as a compensation function, then

d(k,x) = C - f(k,x)

is an ideal compensation function, which it may be impractical to use.
(The ideal compensation function is always available *in principle* because
we can set C = max over k,x f(k,x), compute naturally, then compute d(k,x)
by looking at the time elapsed for the function we just finished and delay
for C less that value.)  However, the analysis above shows that there may
be other useful compensation functions which, while they can't by their nature
provide the degree of security of the ideal compensation function, may
still be effective.  For example, suppose I have several different ways to
compute the function to be protected, with differing timing characteristics,
but it's certain that for no input values do all the calculations take the
maximum amount of time.  If I run all the algorithms in parallel and deliver
the first result that is available, I've reduced the range of f by eliminating
some of the largest values.  (Of course, one has to get the details right!)
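The "ideal compensation function" described above can be sketched directly:
run the computation, then pad the elapsed time out to a fixed budget C.
(My illustration; only sound if C really exceeds f's worst-case running
time over all inputs.)

```python
import time

def with_constant_time(f, k, x, budget_s: float):
    """Compute f(k, x), then sleep for whatever remains of a fixed time
    budget, so the total elapsed time is approximately C regardless of
    the inputs - i.e., the compensation d(k,x) = C - f(k,x)."""
    start = time.monotonic()
    result = f(k, x)
    remaining = budget_s - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)
    return result

# A toy f whose running time deliberately depends on its inputs:
def toy_f(k, x):
    time.sleep(0.001 * ((k ^ x) % 5))   # input-dependent timing leak
    return k ^ x

start = time.monotonic()
r = with_constant_time(toy_f, 0b1010, 0b0110, budget_s=0.02)
elapsed = time.monotonic() - start
print(r, elapsed >= 0.02)   # result is correct; total time padded to the budget
```

In practice one must also worry about the timer's resolution and the
scheduler's jitter, but the principle is exactly the C - f(k,x) form above.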

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: the effects of a spy

2005-11-16 Thread leichter_jerrold
On Tue, 15 Nov 2005, Perry E. Metzger wrote:
| Does the tension between securing one's own communications and
| breaking an opponent's communications sometimes drive the use of COMSEC
| gear that may be too close to the edge for comfort, for fear of
| revealing too much about more secure methods? If so, does the public
| revelation of Suite B mean that the NSA has decided it prefers to keep
| communications secure to breaking opposition communications?
Remember Clipper?  It had an NSA-designed 80-bit encryption algorithm.  One
interesting fact about it was that it appeared to be very aggressively
designed.  Most published algorithms will, for example, use (say) 5 rounds
beyond the point where differential cryptanalysis stops giving you an
advantage.  Clipper, on the other hand, falls to differential cryptanalysis
if you use even one less round than the specification calls for.

Why the NSA would design something so close to the edge has always been a bit
of a mystery (well, to me anyway).  One interpretation is that NSA simply
has a deeper understanding than outsiders of where the limits really are.
What to us looks like aggressive design, to them is reasonable and even
conservative.

Or maybe ... the reasoning Perry mentions above applies here.  Any time you
field a system, there is a possibility that your opponents will get hold of
it.  In the case of Clipper, where the algorithm was intended to be published,
there's no possibility about it.  So why make it any stronger than you have
to?

Note that it still bespeaks a great deal of confidence in your understanding
of the design to skate *that* close to the edge.  One hopes that confidence is
actually justified for cryptosystems:  It turned out, on the key escrow side
of the protocol design, NSA actually fell over the edge, and there was a
simple attack (Matt Blaze's work, as I recall).

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-l ike Payment Systems

2005-10-25 Thread leichter_jerrold
| U.S. law generally requires that stolen goods be returned to the
| original owner without compensation to the current holder, even if
| they had been purchased legitimately (from the thief or his agent) by
| an innocent third party.
This is incorrect.  The law draws a distinction between recognized sellers
of the good in question, and other sellers.  If you buy a washer from a guy
who comes up to you and offers you a great deal on something from the back of
his truck, and it turns out to be stolen, you lose.  If you go to an appliance
store and buy a washer that turned out to be stolen, it's yours.  Buy a gold
ring from the salesman at the same store, and you better hope he didn't steal
it.

As in any real-world situation, there are fuzzy areas at the edges; and there
are exceptions.  (Some more expensive objects transfer by title - mainly
houses and cars.  You don't get any claim on the object unless you have a
state-issued title.)  But the general intent is clear and reasonable.

|  Likewise a payment system with traceable
| money might find itself subject to legal orders to reverse subsequent
| transactions, confiscate value held by third parties and return the
| ill-gotten gains to the victim of theft or fraud. Depending on the
| full operational details of the system, Daniel Nagy's epoints might be
| vulnerable to such legal actions.
This is no different from the case with cash today.  If there is a way to
prove - in the legal sense, not some abstract mathematical sense - that a
transfer took place, the legal system may reverse it.  This comes up in
contexts like improper transfers of assets before a bankruptcy declaration, or
when people try to hide money during a divorce.
-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: semi-predictable OTPs

2005-10-25 Thread leichter_jerrold
| I recall reading somewhere that the NSA got ahold of some KGB numeric
| OTPs (in the standard five-digit groups).  They found that they
| contained corrections, typos, and showed definite non-random
| characteristics.  Specifically, they had a definite left-hand
| right-hand alternation, and tended to not have enough repeated digits,
| as though typists had been told to type random numbers.  Despite this,
| the NSA wasn't able to crack any messages.
| 
| My question is, why?   I think I know the reason, and that is that any
| predictability in a symbol of the OTP correlated to a predictability
| in only one plaintext symbol.  In other words, there was no leverage
| whereby that plaintext could then be used to derive other symbols. 
| Can anyone explain this better (or more accurately)?  Is this lack of
| diffusion?  Or does it have something to do with the unicity distance?
To get perfect security in an OTP system, you need to add as much equivocation
from the keystream as is being removed by the plaintext.  It's generally
calculated that each letter in English text adds between 2 and 3 bits of
information.  Hence you only need to add 3 or so bits of randomness from each
key input to make the system secure.  Even with the biases, there was probably
easily enough randomness in the OTPs to make recovery at least impractical
(e.g., information leaks but so slowly that you never see enough input to get
any useful decryptions) and perhaps even be theoretically impossible.
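The point about residual randomness can be made quantitative with a
per-symbol Shannon entropy estimate.  (The biased distribution below is
invented for illustration, not the actual KGB pad statistics.)

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Bits of entropy per symbol for an observed distribution."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A perfectly uniform digit stream carries log2(10) ~ 3.32 bits per digit.
uniform = [d for d in range(10) for _ in range(100)]

# A biased stream - typists under-repeating digits, favoring some digits -
# still retains most of that entropy.
biased = ([0]*150 + [1]*150 + [2]*120 + [3]*110 + [4]*100
          + [5]*90 + [6]*80 + [7]*70 + [8]*70 + [9]*60)

print(round(shannon_entropy(uniform), 2))  # 3.32
print(shannon_entropy(biased) > 3.0)       # True: still over 3 bits per digit
```

Even a noticeably biased key symbol here carries more than the ~3 bits of
equivocation needed per plaintext letter, which is consistent with the
NSA's inability to crack the messages despite the visible non-randomness.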

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]