FBI wiretap worries slow satellite phones

1999-08-04 Thread Eric Blossom

Good article.

http://www.news.com/News/Item/0%2c4%2c40048%2c00.html?dd.ne.txt.0803.03

FBI wiretap worries slow satellite phones 
By John Borland
Staff Writer, CNET News.com
August 3, 1999, 4:00 a.m. PT 

The Federal Bureau of Investigation is putting the brakes--at least
temporarily--on the satellite phone industry.

The FBI and other U.S. law enforcement agencies are worried that new
space-based telephone systems, which theoretically allow a person to
use a wireless phone from virtually anywhere on earth, will undermine
their ability to wiretap telephone calls and trace criminals through
cellphones.

[snip]

Eric



Product Evaluations (was: Re: House committee ditches...)

1999-08-04 Thread Rick Smith

At 02:19 AM 8/3/99, Peter Gutmann wrote:

>[1] There isn't any rule of thumb for the work involved in attaining the higher
>assurance levels because it's done so rarely, although in terms of cost and
>time I've seen an estimate of $40M for an A1 Multics (it never eventuated)
>and DEC's A1 security kernel took nearly a decade to do, with 30-40 people
>working on it at the end (just before it was cancelled).  A lot of this
>overhead was due to the fact that this hadn't been done much and there was
>a lot of research work involved, an estimate I've had for doing a
>commercial-product A1 system now would be about 3-5 years (probably closer
>to 5), ramping up from an initial 10 to 30 people at the end, and costing
>maybe $15-20M.

ObCrypto: we all face the problem of judging whether or not a particular
implementation meets particular security objectives. Evaluation techniques
like formal assurance provide a candidate set of tools, so they are worth
examining here. There is a particular bias towards formal methods in
several communities.

I'm currently putting together a paper that outlines in detail the labor
costs of the LOCK program, a government-funded project to build a
Unix-compatible A1 system. It was started in the late 80s, about when the
VMS program died, and sucked up between $20M and $30M before a descendant
was put into operation as the Standard Mail Guard. It was never formally
evaluated at A1 or anything else, though large chunks of assurance evidence
were reviewed by Govt representatives.

The A1 formal assurance stuff added a 58% premium to the development of
LOCK TCB code. That premium focused almost entirely on the effectiveness of
multilevel security (MLS) mechanisms. MLS has not been useful enough to
find its way into many applications, military or non-military.
Unfortunately, the processes developed for A1 assurance are extremely
difficult to adapt to non-MLS applications. 

In other words, developers of a non-MLS mechanism need to do R&D into how
to formally model and specify their security requirements. But there is
nobody out there with the clout to review this newly created model and
judge its fidelity to reality. The Orange Book/TCSEC/NCSC approach provided
somewhat canned answers to basic things and defined the review process for
more complex things. But you have nowhere to go outside of this. At
present there's no way to evaluate anything past EAL4 except perhaps by
going to NSA, which will probably make demands that no commercial product
can justify.

ObCrypto: I've seen occasional mentions of strategies for verifying
cryptographic protocols, but I've never seen anything really practical
published about it other than Gus Simmons' paper in CACM. Unfortunately,
his conclusion was that we still need to develop such techniques, which
should perhaps be characterized as "formalized paranoia." The only things
I've seen in practice are lists of rules like the ones I published in
"Internet Cryptography" as "Security Requirements" for various techniques,
products, and sites. NSA's "Functional Security Requirements
Specifications" for crypto devices tend to take that approach, and rely on
point-by-point explanations of how a given thing complies with each
requirement.


Rick.
[EMAIL PROTECTED]
"Internet Cryptography" at http://www.visi.com/crypto/




Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-04 Thread Henry Spencer

On Tue, 3 Aug 1999, bram wrote:
> The goal is to make it so that any time someone wants random numbers they
> can go to /dev/random, with no required studying of entropy and threat
> models and all that yadda yadda yadda which most developers will
> rightfully recoil from getting into when all they want is a few random
> bytes.

That, surely, is what /dev/urandom is for.  (Maybe /dev/random ought to
be mode rw-------, so that only root applications can use it?)

  Henry Spencer
   [EMAIL PROTECTED]
 ([EMAIL PROTECTED])




Re: Summary re: /dev/random

1999-08-04 Thread bram

On Mon, 2 Aug 1999 [EMAIL PROTECTED] wrote:

> Linux's /dev/random uses a very different design, in that it uses a
> large pool to store the entropy.  As long as you have enough entropy
> (i.e., you don't overdraw on the pool's entropy), /dev/random isn't
> relying on the cryptographic properties as much as Yarrow does.

The problem is that the one-bit-of-entropy-for-one-bit-of-output rule
creates the potential for lots of denial of service attacks where the
entropy gets used up. There is no application which needs that amount of
entropy. John Kelsey put it pretty well earlier in this thread:

  Suppose God, in a fit of budget-consciousness, decides to get
  rid of all this wasteful hardware for generating random
  numbers that are necessary for quantum mechanics, and
  instead replaces them with a PRNG with a 256-bit seed.  In
  this case, all hardware noise sources are ultimately tapping
  into this same seed and PRNG. How will you, or anyone, tell
  the difference?

Most people don't know the fine-grained distinction between /dev/random
and /dev/urandom. In fact, I'll bet most developers don't even know that
/dev/urandom exists. As a result, lots of programs which require very
large amounts of random numbers suck data out of /dev/random, creating a
very large potential for unknown numbers of present and future problems.
This entire class of problems can be eliminated completely by altering
/dev/random to only block at bootup until it has enough entropy (or not at
all if there was some stored on disk) and thereafter to spit out data as
soon as it's requested.
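
(For concreteness, a minimal userspace sketch of that policy: block on the
kernel's /dev/random exactly once to get seeded, then serve every later
request from a hash-based generator without ever blocking again.  The names
here are hypothetical, it assumes OpenSSL's SHA1 for the mixing step, and it
is an illustration of the behavior rather than a proposed patch.)

/* Sketch: block only until first seeded, then never block again.
 * Hypothetical illustration; assumes OpenSSL's SHA1 (compile with -lcrypto). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <openssl/sha.h>

static unsigned char pool[SHA_DIGEST_LENGTH];   /* 20-byte internal state */
static unsigned long counter;
static int seeded;

/* Blocks (once) on /dev/random until the pool is filled with real entropy. */
static int seed_once(void)
{
    int fd;
    ssize_t n = 0;

    if (seeded)
        return 0;
    fd = open("/dev/random", O_RDONLY);
    if (fd < 0)
        return -1;
    while (n < (ssize_t)sizeof(pool)) {
        ssize_t r = read(fd, pool + n, sizeof(pool) - n);   /* may block */
        if (r <= 0) {
            close(fd);
            return -1;
        }
        n += r;
    }
    close(fd);
    seeded = 1;
    return 0;
}

/* Never blocks after the initial seeding: output block i is SHA1(pool || i). */
static void get_random(unsigned char *out, size_t len)
{
    unsigned char block[SHA_DIGEST_LENGTH + sizeof(counter)];
    unsigned char digest[SHA_DIGEST_LENGTH];
    size_t take;

    while (len > 0) {
        memcpy(block, pool, sizeof(pool));
        memcpy(block + sizeof(pool), &counter, sizeof(counter));
        counter++;
        SHA1(block, sizeof(block), digest);
        take = len < sizeof(digest) ? len : sizeof(digest);
        memcpy(out, digest, take);
        out += take;
        len -= take;
    }
}

int main(void)
{
    unsigned char buf[32];
    size_t i;

    if (seed_once() != 0)
        return 1;
    get_random(buf, sizeof(buf));
    for (i = 0; i < sizeof(buf); i++)
        printf("%02x", buf[i]);
    putchar('\n');
    return 0;
}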

The complete threat model for RNGs, admittedly, has a number of attacks
which seem very impractical under current circumstances, but since those
attacks can be completely eliminated now, prudence indicates doing so. That
way, when circumstances arise in which one of those attacks is practical,
we can make a little academic note about it which nobody cares about,
rather than having to deal with a disaster.

The other reason for changing the way /dev/random currently works is that
the long-output version of RIPEMD-160 would make it just plain faster,
since it would halve the amount of hashing done per byte of output.

The goal is to make it so that any time someone wants random numbers they
can go to /dev/random, with no required studying of entropy and threat
models and all that yadda yadda yadda which most developers will
rightfully recoil from getting into when all they want is a few random
bytes.

-Bram




Re: Subject: Re: Security Lab To Certify Banking Applications (was Re: ECARM NEWS for July 23,1999 Second Ed.)

1999-08-04 Thread Marty Levy

Keeping an ITSEC TOE confidential is not unusual.  It would be more
unusual not to keep it confidential, or at least restrict its distribution,
given the contents.  It is a major flaw of the scheme...you are trusting
the certifier to enforce a "good" TOE if they are going to give an
E3-High rating.

In the ITSEC scheme, saying something is certified as "E3" says nothing
substantial anyway.  (E levels refer to correctness of implementation,
which is quite important, but not the whole story.)  You also need to
know the rating for the "strength of mechanism", which is Basic, Medium
or High.  In other words, there's another work-around that is at least
as simple as what you stated:

1.  Define your TOE as tough or easy as you like. 
2.  Do a reasonably good job of documenting your process and doing
configuration management. Don't worry about how secure your product is.
3.  Do the certification process, pay the $$.  Get an E3-Basic (lowest
level) rating from the certifier.
4.  Tell your customers that you "passed ITSEC E3", but don't tell them
at what strength.  Rely on their ignorance to not ask the most important
question.

   - ml

Peter Gutmann wrote:
> 
> 
> Actually there's a way you can manage this (which was used by MS to get NT's
> ITSEC E3 certification in the UK):
> 
>   1. Define your own TOE (target of evaluation) for the certification
>  (translation: lower your expectations to the point where they're already
>  met).
>   2. Have the product certified to your own TOE.
>   3. Mark the TOE "Microsoft Confidential" and don't let anyone see it
>  (leading to considerable speculation about how you could possibly manage
>  to write a TOE which would allow NT to get an E3 certification).
>   4. Tell everyone you have an E3 certified OS and sell it to government
>  departments as secure.
> 
> This isn't to say that the certification process is a bad thing.  If it's done
> properly it can lead to a reasonable degree of assurance that you really do
> have a secure product, which is exactly what was intended.  Unfortunately if
> all you're interested in is filling a marketing checkbox, you can do this as
> well.  This was the Orange Book's strength (and weakness), it told you exactly
> what you had to do to get the certification so you couldn't work around it
> with fancy footwork.  OTOH it was also inflexible and had requirements which
> didn't make sense in many instances, which is what led to the development of
> alternatives like ITSEC/the Common Criteria.  For all its failings I prefer
> the Orange Book (if it can be made to apply to the product in question)
> because that way at least you know what you're getting.
> 
> (Given that NT now has a UK E3 certification, I don't think you need to get
> it recertified in the US, since it's transferable to all participating
> countries, so I don't think it'd have to be certified by the above lab).
> 
> Peter.




Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-04 Thread Paul Koning

> "Osma" == Osma Ahvenlampi <[EMAIL PROTECTED]> writes:

 Osma> Looking at this discussing going round and round, I'm very
 Osma> inclined to fetch the latest freeswan-snapshot, grep for
 Osma> /dev/random, and replace all reads with a routine that has its
 Osma> own internal Yarrow-like SHA mixer that gets reseeded from
 Osma> /dev/random at semi-frequent intervals, and in the meantime
 Osma> returns random numbers from the current SHA value. That's how I
 Osma> believe /dev/random was intended to be used, anyway...

No, that's how /dev/urandom was intended to be used.

What you describe duplicates the functionality of /dev/urandom.  Why
do it?

I agree with Ted that there may well be people who misuse
/dev/random.  If so, the obvious comment is RT*M.  Perhaps the
documentation should emphasize the intended use of /dev/random
more strongly.  (Come to think of it, it's not clear to me, especially
after reading the Yarrow paper, that there really *are* cases where the 
use of /dev/random rather than /dev/urandom is actually warranted.)

Re Henry Spencer's comment:
>On Tue, 3 Aug 1999, bram wrote:
>> The goal is to make it so that any time someone wants random numbers they
>> can go to /dev/random, with no required studying of entropy and threat
>> models and all that yadda yadda yadda which most developers will
>> rightfully recoil from getting into when all they want is a few random
>> bytes.

> That, surely, is what /dev/urandom is for.  (Maybe /dev/random ought to
> be mode rw---, so that only root applications can use it?)

That may reduce the number of applications that blindly use
/dev/random without knowing why this isn't the right thing to do.  On
the other hand, it won't prevent applications that read /dev/urandom
from causing those that use /dev/random to block (so long as both
continue to use the same pool).

Then again, if the valid uses of /dev/random are somewhere between
rare and non-existent, which seems to be the case, this is a
non-issue.

Finally, from Bram:

> 5) a (very small) amount of persistent memory to keep pool state in (or at
> least periodically put some random bytes in to put in the pool at next
> reboot.) It would have to be plugged into a trusted piece of hardware to
> give it real randomness at least once, of course, but that wouldn't be a
> big deal.

That doesn't solve the issue of entropy sources on diskless UI-less
systems.  All it does is let you carry whatever you got across
reboots.  If you have none to carry, you still have an issue.

I do agree that using any available NV memory for keeping pool state
across reboots is a good thing.  
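
For reference, roughly what a shutdown/boot hook could do to carry a seed
across reboots (a sketch only; the seed-file path is hypothetical, and on
Linux writing bytes into /dev/random mixes them into the pool without
crediting the entropy estimate):

/* Carry pool state across reboots: save a seed at shutdown, feed it back
 * at boot.  Sketch only; the seed-file location is hypothetical. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define SEED_FILE  "/var/lib/random-seed"   /* hypothetical location */
#define SEED_BYTES 512

static int copy(const char *from, const char *to)
{
    char buf[SEED_BYTES];
    ssize_t n;
    int in  = open(from, O_RDONLY);
    int out = open(to, O_WRONLY | O_CREAT | O_TRUNC, 0600);

    if (in < 0 || out < 0)
        return -1;
    n = read(in, buf, sizeof(buf));
    if (n > 0)
        (void)write(out, buf, n);
    close(in);
    close(out);
    return n > 0 ? 0 : -1;
}

int main(int argc, char **argv)
{
    if (argc > 1 && strcmp(argv[1], "save") == 0)
        return copy("/dev/urandom", SEED_FILE);   /* at shutdown */
    else
        return copy(SEED_FILE, "/dev/random");    /* at boot */
}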

paul



Re: linux-ipsec: /dev/random

1999-08-04 Thread Bill Frantz

At 12:35 PM -0700 8/2/99, John Denker wrote:
>2) Network timing may be subject to observation and possibly manipulation
>by the attacker.  My real-time clocks are pretty coarse (10ms resolution).
>This subthread started with a discussion of software to estimate the
>entropy of a bitstream, and I submit that this attack scenario is a perfect
>example of a situation where no software on earth can provide a useful
>upper bound on the entropy of the offered bit-stream.

Most modern chips have some sort of "cycle counter" built into the chip.
These counters offer high resolution.  The initial value depends on when
the system was started, which may not be available to an attacker.  The
value is also dependent on the cache behavior of the system, another value
possibly not available to an attacker.
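
For concreteness, a small x86-specific sketch of sampling such a cycle
counter around an external event (the helper name is hypothetical; it uses
GCC's __rdtsc() intrinsic from <x86intrin.h>):

/* Sample the CPU cycle counter around an I/O event and keep only the
 * hard-to-predict bits.  x86-specific sketch. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

/* Hypothetical: mix a timing sample into some entropy pool. */
static void add_timing_sample(uint32_t sample)
{
    printf("timing sample: 0x%08x\n", sample);
}

int main(void)
{
    uint64_t before = __rdtsc();
    getchar();                    /* wait for an external event (a keypress) */
    uint64_t after = __rdtsc();

    /* The high bits mostly encode elapsed time and are predictable;
       the low bits depend on cache behavior and scheduling jitter. */
    add_timing_sample((uint32_t)(after ^ before));
    return 0;
}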


-
Bill Frantz | The availability and use of secure encryption may |
Periwinkle  | offer an opportunity to reclaim some portion of   |
Consulting  | the privacy we have lost. - B. FLETCHER, Circuit Judge|





Re: linux-ipsec: /dev/random

1999-08-04 Thread John Denker

At 10:08 AM 8/4/99 -0400, D. Hugh Redelmeier wrote:
>
>I think that this description reflects an inappropriate understanding
>of entropy.  Entropy is in some sense spread throughout the whole
>output of /dev/urandom.  You don't use entropy up, you spread it over
>more and more bytes of output.  This view, of course, depends on
>trusting the hashing/mixing to do what it is supposed to do.

What matters here is not your understanding or my understanding of what
entropy is.  What matters to me is /dev/random's opinion of how much
entropy it has on hand.  Reads from /dev/urandom deplete this quantity,
byte for byte, so that heavy demands on /dev/urandom cause blockage of any
processes that make any use of /dev/random.  I renew my assertion that this
constitutes, shall we say, an opportunity for improvement.
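
(For anyone who wants to watch this happen, a Linux-specific sketch using
the RNDGETENTCNT ioctl; the numbers it prints are only the kernel's own
estimate, and the depletion described above is the 2.2-era pool behavior:)

/* Observe the kernel's entropy estimate dropping as /dev/urandom is read.
 * Linux-specific sketch; illustrative only. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>

int main(void)
{
    int rnd  = open("/dev/random", O_RDONLY);
    int urnd = open("/dev/urandom", O_RDONLY);
    int before, after;
    char buf[4096];

    if (rnd < 0 || urnd < 0)
        return 1;

    ioctl(rnd, RNDGETENTCNT, &before);   /* entropy estimate, in bits */
    (void)read(urnd, buf, sizeof(buf));  /* heavy /dev/urandom use */
    ioctl(rnd, RNDGETENTCNT, &after);

    printf("entropy estimate: %d bits before, %d bits after\n", before, after);
    /* Once the estimate hits zero, readers of /dev/random block until
       more entropy arrives. */
    return 0;
}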




Re: linux-ipsec: /dev/random

1999-08-04 Thread John Denker

At 11:42 AM 8/4/99 -0400, D. Hugh Redelmeier wrote:
>
>Pluto is a "bad guy" in that it is using up the entropy-estimate.  

Your modesty is charming.  But I wouldn't say that pluto is the bad guy.
There "ought" to be a system service (call it /dev/vrandom or whatever)
that provides the sort of bits that pluto needs, without this unfortunate
side effect on /dev/random.

>Is
>there some other software that you are running that is suffering
>because of this?

Yes and no.  I cobbled up a hardware RNG for my server, so nobody here is
suffering at the moment.  And the machines in the field (the moats) don't
have any great need for /dev/random.

But it's easy to foresee other folks getting into trouble as the user
community gets larger.

Cheers --- jsd




IP: Security of on-line banking studied

1999-08-04 Thread Robert Hettinga


--- begin forwarded text


From: [EMAIL PROTECTED]
Date: Wed, 04 Aug 1999 11:10:49 -0500
To: [EMAIL PROTECTED]
Subject: IP: Security of on-line banking studied
Cc: [EMAIL PROTECTED]
Sender: $[EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]

Source:  Washington Times
http://www.WashTimes.com/business/business2.html

Security of on-line banking studied

   By Julie Hyman
   THE WASHINGTON TIMES

Congressional investigators said yesterday that the 6 million
  Americans who bank on line may be getting convenience
  at the expense of security.

According to the General Accounting Office, 44 percent of
   banks, thrifts and credit unions it surveyed have not enacted
   strict enough measures to keep their computer systems safe
   from hackers.

The report was released at a hearing of the House banking
   subcommittee on monetary policy. Lawmakers shied away from
   suggesting regulation as a solution to on-line banking security, but
   said both banks and consumers must address the risks.

"We don't want to overregulate the activity to the point that
   we unduly dampen it or retard its growth," said Rep. Spencer
   Bachus, Alabama Republican. "At the same time, the public has
   the right to safety and soundness in Internet banking, so we can't
   walk away from it."

Consumers who bank over the Internet use Web sites to
   transfer money between accounts, pay bills, check account or
   investment balances and apply for loans.

The GAO report concluded that Internet banking is by nature
   riskier than conventional banking. Its review of banking
   regulators' examinations of 81 financial institutions found that 35
   of them, about 44 percent, hadn't taken all the risk-limiting steps
   regulators have said are needed.

Mr. Bachus said Internet banking is projected to grow 20 to
   25 percent by 2004, making it necessary to be vigilant about
   hackers.

"All the banking representatives agreed that we need to
   prosecute [hackers who break into on-line accounts] and we
   need to publicize it."

He noted that the hearing was just the first stage in a
   congressional look at on-line banking that could help increase
   Internet security before consumer use explodes.

The study also said that in some cases, on-line banking
   operations were begun at companies without the approval of
   boards of directors or chief executive officers. If problems arise,
   the report cautions, senior management will not have the
   foreknowledge to deal with them.

The banking community is responding to the challenges of
   on-line banking. The Financial Services Roundtable, a District of
   Columbia trade group, formed a technology division in 1996 to
   foster the development of Internet banking and to test
   electronic-security measures.

But Catherine A. Allen, the division's CEO, said banks alone
   cannot ensure security.

"We would like to emphasize that security is a shared
   activity," she said at the hearing. Consumers should be aware of
   risk, and should choose on-line banks that are insured by the
   Federal Deposit Insurance Corporation, she said.

John Hall, an American Bankers Association spokesman, said
   that the bottom line of banking, whether it be on-line or the more
   traditional, in-person fashion, is trust.

"The banks' No. 1 attribute they sell is trust. Their customers
   have to feel completely comfortable that they are secure."

For that reason, he said, the banking industry is vigorously
   pursuing security measures.

Even with the explosive growth of electronic commerce and
   on-line investing, most consumers are still somewhat hesitant
   about conducting financial transactions on the Internet, and even
   more so when it comes to managing their finances.

According to a June report by investment firm Goldman
   Sachs, only about 4 percent of U.S. households currently use
   on-line banking products.

  This article is based in part on wire service reports.

   Copyright © 1999 News World Communications, Inc.


**
To subscribe or unsubscribe, email:
  [EMAIL PROTECTED]
with the message:
  (un)subscribe ignition-point email@address
**

**

--- end forwarded text


-
Robert A. Hettinga 
The Internet Bearer Underwriting Corporation 
44 Farquhar Street, Boston, MA 02131 USA
"... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



more than linear algebra?

1999-08-04 Thread staym

I have a set of unit vectors, but don't know their coordinates, or even
the dimension of the space they span.  I'm given the angle between each
pair of vectors in units of some unknown "unit angle".  I'd like to find
the smallest dimension into which the set fits, as well as the range of
values the "unit angle" is restricted to.  Do I need anything more than
linear algebra to solve this?
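
For concreteness, the setup can be put in matrix form (a sketch only; here
\theta is the unknown unit angle and n_{ij} are the given integer multiples):

    Let G(\theta) be the Gram matrix of the unit vectors v_1, \dots, v_k:

        G(\theta)_{ij} = \langle v_i, v_j \rangle = \cos(n_{ij}\,\theta),
        \qquad G(\theta)_{ii} = 1.

    The smallest dimension the set fits into is \operatorname{rank} G(\theta),
    and a value of \theta is admissible exactly when G(\theta) is positive
    semidefinite (the vectors can then be recovered from any factorization
    G(\theta) = V^{\mathsf{T}} V).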
-- 
Mike Stay
Cryptographer / Programmer
AccessData Corp.
mailto:[EMAIL PROTECTED]



Re: Proposed bill for tax credit to develop encryption with covert access

1999-08-04 Thread David Jablon

At 05:44 PM 8/2/99 -0400, Radia Perlman - Boston Center for Networking wrote:
>http://thomas.loc.gov/cgi-bin/bdquery/z?d106:h.r.02617:
>
>I'm sure you'll all be enthusiastic about the chance to save your
>company tax money.

Amazing!  Despite the title, this seems to be a retroactive tax break for
all developers of snake-oil and other poorly conceived or poorly
implemented cryptography.

>  Tax Relief for Responsible Encryption Act of 1999 (Introduced in the
>  House) H. R. 2617
>
>  To amend the Internal Revenue Code of 1986 to allow a tax credit for
>  development costs of encryption products with plaintext capability
>  without the user's knowledge.
>  [...]
>  (1) IN GENERAL- The term `encrypted product-plaintext capability
>  development costs' means amounts paid or incurred in connection with
>  the development of computer software allowing for a plaintext access
>  capability without the user's knowledge of such access at the time
>  such access occurs through any method, including the following
>  methods:
>  [...]
>   (D) [***>] Any other technique or methodology [<***] that may be
>   created that allows timely access to plaintext or decryption
>   information.

Truly, a cleverly worded bill.  Does anyone know what vendors are behind
it?  :-)


David P. Jablon
[EMAIL PROTECTED]
www.IntegritySciences.com




Re: Proposed bill for tax credit to develop encryption with covert access

1999-08-04 Thread Russell Nelson

-- BEGIN 2rot-13

David Jablon writes:
 > Amazing!  Despite the title, this seems to be a retroactive tax
 > break for all developers of snake-oil and other poorly conceived or
 > poorly implemented cryptography.

Or for that matter, poorly selling.  There's nothing in the bill that
requires that the encryption be otherwise unbreakable (e.g. I could
get a tax credit for implementing a backdoor'ed rot-13) or that anyone
actually buy or use the snake-oil^H^H^H^H^H^H^H^H^Hencryption
software.

Just how *would* you put a backdoor into rot-13?  Ah!  I've got it!
Implement a new, higher security 2rot-13 (apply rot-13 twice, for
double the encryption value).

-- END 2rot-13 
-- 
-russ nelson <[EMAIL PROTECTED]>  http://crynwr.com/~nelson
Crynwr sells support for free software  | PGPok | Government schools are so
521 Pleasant Valley Rd. | +1 315 268 1925 voice | bad that any rank amateur
Potsdam, NY 13676-3213  | +1 315 268 9201 FAX   | can outdo them. Homeschool!