Re: did you really expunge that key?

2002-11-09 Thread Simon Josefsson
[EMAIL PROTECTED] (Peter Gutmann) writes:

Which operating systems leak memory between processes in this way?

 Win32 via ReadProcessMemory.  

The documentation for the function says it will check read access
permissions.  Isn't this permission check done properly?  I.e.,
disallow memory reads across processes owned by different users.  If
so, this should be reported and fixed.  The remaining situation seems
to be if ReadProcessMemory() on a running process leaks data
initialized by dead processes owned by other users; any pointers to
information on this case would be appreciated.

 Most Linux systems which set up the user as root when they install
 the OS.  The combined total would be what, 97%? 98%? 99%? of the
 market?

If you can run a program as root, aren't there easier ways to discover
passwords than allocating memory initialized by other processes?
E.g., attaching a debugger to /bin/login.

Which operating systems write core dumps that can be read by non-privileged
users?

 Watson under Win32, any Unix system with poor file permissions (which means a
 great many of them).  Again, that's most of the market.

 This *is* a serious issue, which is why any security software worth its salt
 takes care to zeroise memory after use.

My point is that the software in general cannot solve this without
help from the operating system.  In particular, software cannot
protect itself from operating systems bugs that reveal secret data
handled by the software.  If you run security software on an insecure
host, you won't achieve security no matter how good the security
software is.  A pair of functions secure_memory_allocate() and
secure_memory_zeroize() that handle volatile char* data, together
with a compiler that respects the volatile property, seems like a
useful interface.  No doubt, this already exists.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: did you really expunge that key?

2002-11-09 Thread Peter Gutmann
Simon Josefsson [EMAIL PROTECTED] writes:

 [EMAIL PROTECTED] (Peter Gutmann) writes:
 Which operating systems leak memory between processes in this way?

 Win32 via ReadProcessMemory.

 The documentation for the function says it will check read access
 permissions.  Isn't this permission check done properly?  I.e., disallow
 memory reads across processes owned by different users.

Almost all Win32 systems (except for a few Citrix-style systems) are single-
user, so the check is irrelevant.  Even if it's running in a different user
context, for Win9x systems that's meaningless, and for NT systems it's pretty
safe to assume the user is Admin so you can get to anything anyway.

 If you can run a program as root, aren't there easier ways to discover
 passwords than allocating memory initialized by other processes? E.g.,
 attaching a debugger to /bin/login.

The problem is someone running a program 3 days later and finding keys in
memory, not active attacks.

 My point is that the software in general cannot solve this without help from
 the operating system.

It can do a pretty good job of it.  Zeroising a key after use on a system
which isn't currently thrashing gives you a pretty good chance of getting rid
of it.

(Yes, you can hypothesise all sorts of weird places where data could end up if
 you're not careful, but to date multiple demonstrated attacks have pulled
 plaintext keys from memory where they were left by programs, and not from
 keyboard device driver buffers or whatever).

 If you run security software on an insecure host, you won't achieve security
 no matter how good the security software is.

Right, so we'll just give up even trying then, and wait for the day when
secure systems are readily available.

 A pair of functions secure_memory_allocate() and secure_memory_zeroize() that
 handle volatile char* data, together with a compiler that respects the
 volatile property, seems like a useful interface.  No doubt, this already
 exists.

Nope.  NT (not Win9x) has VirtualLock(), but there are special issues
surrounding this which are too complex to go into here, and Unix doesn't have
anything (mlock() won't cut it).

BTW I misattributed the previous message in my reply (I'm posting from another
system and had to manually edit the reply), apologies for any confusion this
caused.

Peter.




Re: did you really expunge that key?

2002-11-08 Thread John S. Denker
1) This topic must be taken seriously.  A standard technique
for attacking a system is to request a bunch of memory or
disk space, leave it uninitialized, and see what you've got.

2) As regards the volatile keyword, I agree with Perry.
The two punchlines are:

 if, for example, gcc did not honor [the volatile keyword],
 the machine I am typing at right now would not work because
 the device drivers would not work.

 If they haven't implemented volatile right, why should
 they implement the pragma correctly?

3) However, a discussion of compilers and keywords does not
complete the analysis.  A compiler is only part of a larger
system.  At the very least, we must pay attention to:
 -- compiler
 -- operating system
 -- hardware architecture
 -- hardware physics

At the OS and hardware-architecture levels, note that a
device driver accesses a volatile device register only
after beseeching the OS to map the register to a certain
address in the driver's logical address space. In contrast,
for some address that points to ordinary storage, the OS and
the hardware could (and probably do) make multiple copies:
Swap space, main memory, L2 cache, L1 cache, et cetera.
When you write to some address, you have no reason to assume
that it will write through all the layers.

Swap space is the extreme case: if you were swapped out
previously, there will be images of your process on the
swap device.  If you clear the copy in main memory somehow,
it is unlikely to have any effect on the images on the swap
device.  Even if you get swapped out again later (and there's
no guarantee of that), you may well get swapped out to a
different location on the swap device, so that the previous
images remain.

The analogy to device drivers is invalid unless you have
arranged to obtain a chunk of memory that is uncacheable and
unswappable.

To say the same thing in other words: a compiler can only do
so much.  It can generate instructions to be executed by the
hardware.  Whether that instruction affects the real
physical world in the way you desire is another question
entirely.

4) In the effort to prevent the just-mentioned attack, a
moderately-good operating system will expunge memory right
before giving it to a new owner.  It would be more secure
(but vastly less efficient) to expunge it right after the
previous owner is finished with it.

To see this in more detail, consider swap space again: a
piece of used swap space need not be expunged, unless you
are fastidious about security, because the operating system
knows that it will write there before it reads there.  Clearing
it immediately would be a waste of resources.  Leaving it
uncleared is potentially a security hole, because of the risk
that some agent unknown to the operating system will (sooner or
later) open the swap-space as a file and read everything.

5) We turn now to the hardware-physics layer.  Suppose
you really do manage to overwrite a disk file with zeros.
That does not really guarantee that the data will be
unrecoverable.  As Richard Nixon found out the hard way,
the recording head never follows exactly the same path, so
there could be little patches of magnetism just to the left
and/or just to the right of the track.  An adversary with
specialized equipment and specialized skills may be able
to recover your data.

6) To reduce the just-mentioned threat, a good strategy is
to overwrite the file with random numbers, not zeros.  Then
the adversary has a much harder time figuring out what is old
data and what is new gibberish.  (To do a really good job
requires writing your valuable data always in the middle,
and overwriting gibberish twice, once offset left and once
offset right.)

This is one of the reasons why you might need an industrial-
strength stretched random symbol generator:
  http://www.monmouth.com/~jsd/turbid/paper/turbid.htm#sec-srandom

Note that the random-number trick can be used for main
memory (not just disks) to ensure that the compiler + OS +
hardware system doesn't optimize away a block of zeros.
This actually happened to me once: I was doing some timing
studies, and I wanted to force something out of cache by
making it too big, so I allocated a large chunk of memory
and set it to zero.  But no matter how big I made it, it fit
in cache.  The system was using the memory map to give me
unlimited copies of one small page of zeros (with the
copy-on-write bit set).

7) Terminology:  I use the word expunge to denote doing
whatever is necessary to utterly destroy all copies of
something.  Clearing a memory location is sometimes far
from sufficient.





Re: did you really expunge that key?

2002-11-08 Thread Simon Josefsson
John S. Denker [EMAIL PROTECTED] writes:

 1) This topic must be taken seriously.  A standard technique
 for attacking a system is to request a bunch of memory or
 disk space, leave it uninitialized, and see what you've got.

I find that this thread doesn't discuss the threat model behind
expunging keys, and this statement finally triggered my question.
On which systems is all this really an issue, and when?  Which
operating systems leak memory between processes in this way?  Which
operating systems swap out processes to disk that can be read by
non-privileged users?  Which operating systems write core dumps that
can be read by non-privileged users?  My gut feeling tells me that if
you can allocate memory on a system, there are easier ways to attack it.

