At 10:10 PM 05/24/2000 -0400, Arnold G. Reinhold wrote:
>Maybe this is where our outlooks differ the most. I view a
>"localized thing with limited effects" as *more* sophisticated than
>some big lump of snuck-in code that searches your hard drive and
>sends periodic e-mail to [EMAIL PROTECTED] Another example of
>a more subtle approach might be a race condition that causes the
>memory segment that contains the secret key to be unprotected every
>so often. Or a key-pair generator that sometimes forgets to check for
>primality. Think subtle, not brute force: leave doors open a crack,
>don't bust through walls.
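For concreteness, that "forgets to check primality" failure mode might look something like the hypothetical sketch below. The function names, the skip rate, and the test parameters are all invented for illustration; this is not anyone's shipping code:

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # definitely composite
    return True

def backdoored_prime_candidate(bits, skip_rate=0.001):
    """Return an odd candidate for an RSA prime, but 'forget'
    the primality check roughly one call in a thousand
    (hypothetical planted flaw)."""
    while True:
        # Force the top and bottom bits so the candidate is odd
        # and has the full requested size.
        n = random.getrandbits(bits) | 1 | (1 << (bits - 1))
        if random.random() < skip_rate:
            return n  # unchecked: may well be composite
        if is_probable_prime(n):
            return n
```

A statistical audit of generated keys would almost never catch this, since the overwhelming majority are sound; an attacker who knows the flaw exists simply waits for the rare weak modulus, which factors easily. That asymmetry is exactly what makes the subtle approach more dangerous than a big lump of snuck-in code.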
As usual with such discussions, lots of traffic hides substantial amounts
of agreement with touches of disagreement.
While I recognize that predicting the behavior of any entity is a perilous
thing, my fundamental opinion is that NSA is risk-averse when it comes to
domestic operations like that, especially when they're talking about a
product that's running too much of the government's computing
infrastructure (warships, for example).
There would be profound political and legal ramifications for NSA if word
got out. Given the level of risk, I doubt they'd do this unless they had an
opportunity to implement something bulletproof, that is, something that
won't spring open by accident, but that they can spring open when they need
it. That rules out these "security by obscurity" backdoors like "Joshua"
passwords, buffer overflows, and even the weakened RNGs. Think of the
fallout in the DOD, the Administration, and Congress, if it was discovered
that NSA had installed a backdoor that, once identified, could be used by
any hostile entity to spy on or even shut down critical national systems.
The theorized upside just isn't worth it to them.
Other entities might willingly take such risks, notably organized crime and
foreign intelligence agencies. So it's certainly worthwhile to explore the
possibilities.
>I dare say that NSA has a number of competent programmers on staff.
>Rotating a few through Redmond every couple of years would be simple
>enough.
I can see it now --
Microserf 1: Who's the new guy?
Microserf 2: The latest whiz kid from NSA. Don't tell anyone he's here.
Microserf 1: Yeah, right. Did he bring his shoe phone?
>I lost a programmer to NSA once. He gave up stock options, took a 28%
>pay cut and had to go through 3 lie detector tests. Yet we couldn't
>come close to talking him out of it. NSA motivates people with a
>combination of patriotism, important cutting edge work, and a strong
>sense of community.
True, but patriotism doesn't automatically turn someone into an effective
deep cover agent. The point is that they'd need a combination of technical
savvy to do the software work and the ability to continuously deceive their
co-workers about their real identity, motivation, and intent. While not all
of my colleagues have been completely candid over the years, I argue that
such a combination is very rare.
One reason NSA successfully keeps lots of secrets is because the people
usually work at facilities with incredible physical security and among
people who share a common vision of what's supposed to be secret and how
secrets are to be handled. That's very different from working somewhere
else, among people who focus on keeping other kinds of things secret. And
it's going to be hard to try to maintain the thought processes of the one
environment when working under cover in another environment.
> Note that there have been no leaks of any US
>classified ("Type 1") ciphers since World War II. Not one. That's
>long term secrecy.
I'm not sure how to interpret this. I'm sure that the security of some
aspects of our cipher systems has remained intact, but it seems really
unlikely that all aspects of Type 1 cipher systems have resisted disclosure.
In any case, we're not trying to keep something secret within an
institution like NSA where secrecy is such a well-funded obsession. While
Microsoft itself probably puts a good deal of effort into its own culture
of secrecy, I doubt the two are comparable. Besides, we have two cultures
with different secrecy goals, and that makes it much less likely that the
backdoor secret would be kept. The combination of secrecy cultures won't
protect everything uniformly, and the backdoor is the sort of thing to fall
through the cracks.
>As for being able to classify backdoors, I believe they would be
>considered intelligence gathering methods and would receive the
>highest levels of protection (up to TS/SCI).
There are strict limits to what the government can classify. The government
CAN classify instructions that direct certain people working at Microsoft
to put certain functions into Microsoft's code base. But the resulting code
belongs to Microsoft. The code is NOT classified. A person can describe the
code's behavior without violating laws on government secrecy, treason, or
espionage (trade secret is something else, but that's likely to be a civil
action). But such a description will indeed disclose the backdoor.
> The administration has
>asked for new legislation to ensure that methods such as these would
>not have to be revealed in court in the event of civil litigation or
>criminal prosecution. If I recall correctly, there is a provision
>to allow the government to get a suppression order even if both
>parties to a civil suit wanted to reveal the information.
The whole point in our case is to keep the mere existence of the backdoor a
secret, especially if it's a "security by obscurity" backdoor. I seriously
doubt that the laws will successfully suppress news reports that say there
is a backdoor, or indicate that it was installed by the government, though
the laws might suppress public distribution of the technical details.
>The fact
>that they are asking for such laws is a strong indication of the
>government's intent.
It certainly indicates that the government wants to use backdoors, and milk
whatever ones it can for as long as it can by discouraging public exposure.
This is no surprise. They're exploiting poor software engineering.
Rick.
[EMAIL PROTECTED]