Cryptography-Digest Digest #824, Volume #13 Wed, 7 Mar 01 00:13:01 EST
Contents:
Re: super strong crypto, phase 3 (David Wagner)
Re: super strong crypto, phase 3 (John Savard)
Re: Super strong crypto (David Wagner)
Re: Again on key expansion. (Benjamin Goldberg)
Re: super strong crypto, phase 3 ("Douglas A. Gwyn")
Re: PKI and Non-repudiation practicalities (Benjamin Goldberg)
Re: Super strong crypto ("Douglas A. Gwyn")
Re: The Foolish Dozen or so in This News Group (Eric Lee Green)
Re: => FBI easily cracks encryption ...? (CR Lyttle)
Re: => FBI easily cracks encryption ...? (Paul Rubin)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: super strong crypto, phase 3
Date: 7 Mar 2001 02:10:08 GMT
Reply-To: [EMAIL PROTECTED] (David Wagner)
Douglas A. Gwyn wrote:
>David Wagner wrote:
>> ... but the keys surely are related, no?
>
>The specific example I gave earlier shipped only a half batch of
>new key, but I also suggested that if one is uneasy about that,
>an entire new key could be shipped.
But even if entire new keys are shipped, the keys are still related.
They're related by the fact that the new key is the decryption of
an observable ciphertext under the old key.
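To make the relationship concrete, here is a quick sketch (a toy XOR "cipher"
stands in for the real block cipher; only the structure matters): if any one
key in the chain is ever compromised, every later key can be read straight off
the observed key-update ciphertexts.

# Sketch of the key-chaining relationship (toy XOR decryption standing in
# for the real block cipher; illustrative only).

def decrypt(key: bytes, ct: bytes) -> bytes:
    """Toy decryption: XOR the ciphertext with an equal-length key."""
    return bytes(c ^ k for c, k in zip(ct, key))

def recover_key_chain(compromised_key: bytes, update_ciphertexts: list) -> list:
    """If one key leaks, each observed key-update message yields the next:
    new_key = D_old_key(observed ciphertext)."""
    keys = [compromised_key]
    for ct in update_ciphertexts:
        keys.append(decrypt(keys[-1], ct))
    return keys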
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: super strong crypto, phase 3
Date: Wed, 07 Mar 2001 02:01:18 GMT
On Wed, 07 Mar 2001 00:53:01 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote, in part:
>John Savard wrote:
>> But it's still transmitted under the old key, so *that* relationship
>> still exists.
>
>Ah, but that doesn't seem to be an exploitable relationship,
>since the rate of introduction of new unknowns (key) equals
>the rate of accumulation of information about them.
But this isn't true. With each additional key introduced, one also has
a sample of plaintext encrypted under that key. So, while a single block
under a single key can't be decrypted unambiguously, that stops being
true as more blocks accumulate.
This can be shown most simply by noting that brute-force search is
possible.
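A toy illustration (hypothetical 16-bit key acting on 8-bit blocks, chosen
only so the search stays small): one known block leaves many candidate keys,
but each further known block under the same key shrinks the set.

# Toy brute-force illustration: candidate keys consistent with observed
# (plaintext, ciphertext) pairs under a made-up 16-bit-key, 8-bit-block cipher.

def encrypt(key: int, block: int) -> int:
    a = (key & 0xFF) | 1       # odd multiplier taken from the low key byte
    b = key >> 8               # additive constant taken from the high key byte
    return (block * a + b) & 0xFF

def candidates(pairs):
    """All 16-bit keys consistent with every observed (pt, ct) pair."""
    return [k for k in range(1 << 16)
            if all(encrypt(k, pt) == ct for pt, ct in pairs)]

secret = 0x3A7C                                 # the "unknown" key
pairs = [(0x41, encrypt(secret, 0x41))]
print(len(candidates(pairs)))                   # many keys still possible
pairs.append((0x42, encrypt(secret, 0x42)))
print(len(candidates(pairs)))                   # far fewer remain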
But, given that brute-force search is impractical, it is not clear
whether this makes other attacks any more difficult than, say, double
encryption does.
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Super strong crypto
Date: 7 Mar 2001 02:27:55 GMT
Reply-To: [EMAIL PROTECTED] (David Wagner)
Douglas A. Gwyn wrote:
>David Wagner wrote:
>> Suppose for my block cipher I use the following particularly
>> dumb choice: E_k(x) = k xor x. ...
>
>I don't understand that, since size(k) < size(x).
Ok, let me modify my counterexample, then. Take
E_k(x) = (k,k,k,..) xor x, where (k,k,k,..) denotes k
repeatedly concatenated with itself until you get something
exactly as long as x. This gives a cipher where your
mode is insecure, which shows that (as you say) one must
make some kind of assumptions on the block cipher for your
mode to be applicable. The $64,000 question is: *What*
assumptions are needed, and does there exist any cipher
where we can show those assumptions are satisfied?
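For concreteness, a sketch of why that counterexample is fatal (toy sizes
only): one known plaintext exposes the repeating key, and with it any new
key shipped under the old one.

# Sketch of the counterexample E_k(x) = (k,k,k,..) xor x (toy sizes):
# one known plaintext exposes the repeating key, hence any key-update
# message encrypted under it.
from itertools import cycle

def xor_repeat(key: bytes, data: bytes) -> bytes:
    """E_k(x) = (k,k,k,..) xor x; the map is its own inverse."""
    return bytes(d ^ k for d, k in zip(data, cycle(key)))

key = b"\x13\x37\xc0\xde"                 # hypothetical short key
known_pt = b"ATTACK AT DAWN ON MONDAY"    # plaintext the attacker knows
ct = xor_repeat(key, known_pt)
recovered = bytes(p ^ c for p, c in zip(known_pt, ct))[:len(key)]
assert recovered == key                   # known-plaintext attack succeeds
new_key = b"\xde\xad\xbe\xef"             # "fresh" key shipped under the old one
assert xor_repeat(recovered, xor_repeat(key, new_key)) == new_key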
>contrary to the stipulation that the block encryption have
>reasonable mixing properties.
Could you define "reasonable mixing properties", please?
I suspect that everything has been swept under the rug of
this innocuous-sounding phrase. In particular, I'd like to
be sure that, when you say "reasonable mixing properties",
you haven't simply assumed the security of the block cipher
against all known-text attacks (which would make your
construction rather trivial and uninteresting, of course).
Could you help convince me that this isn't a problem?
Probably the most convincing way would be to state a precise
definition of what it means for a cipher to have "reasonable
mixing properties" (and, if you feel especially ambitious,
point to one cipher -- any one, pick your favorite -- that
provably has these nice properties).
>I have already stated that
>there *are* deductive gaps; in filling them in one might turn up
>fresh insight into necessary conditions such as using a
>nontrivial block function (although I already recognized the
>necessity of that).
Well, ok. I don't want to throw water on the flames of
creativity, and I certainly agree that it would be exciting
to have any direction (even if incomplete) towards the goals
you're hoping for. But nonetheless, let me observe that at
present we're lacking not only a proof of our desired security
theorem but even a precise statement of what such a theorem might
claim. Without understanding even what the claims might be,
I find it difficult to identify the deductive gaps, let alone
help with trying to fill them. I apologize if I'm missing
the whole point of your ideas.
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Again on key expansion.
Date: Wed, 07 Mar 2001 03:18:43 GMT
Cristiano wrote:
>
> > > > [...]
> > > > Two methods that both take a second offer the same strength.
> > >
> > > This point is not clear (for me). I'd like better understand what
> > > you tell me: to add 13 bits of entropy to my key my method takes
> > > 0.079 s, SALEK's method takes 1.26 s (on *my* computer with *my*
> > > program).
> > > Why the two methods offer the same strength?
> > > In the same time my method add more entropy to the key.
> > > I don't want to blame SALEK's method, I want only understand and
> > > learn!
> >
> > The disparity is due to your miscounting the number of rounds.
> > To multiply a point by a 512 bit integer is 512 doublings, and 256
> > adds. To perform sha512 once, is 64 internal rounds.
> >
> > (Note, I forgot the adds in my earlier estimate).
>
> Miracl library use the addition/subtraction method: to multiply a
> point by a 512 bit integer is 511 (or 512) doublings, and 40 adds (I
> think the subtraction is not important for the entropy).
>
> > Your method, looping 16 times adds log2(16*(512+256+64)) bits of
> > work.
> > SALEK's method, looping 2^13 times adds log2(2^13*64) bits of work.
> >
> > So your method takes .079s to add 13.7 bits of work.
> > SALEK's method takes 1.26s to add 19 bits of work.
>
> I forgot to specify in my message that I have used sha256.
In which cases? Both? If you are multiplying a point by a 256-bit
integer, it's 255 or 256 doublings, and some number of adds.
> With sha512 my method takes .378 s to add log2(16*(512+40+64)) = 13.3
> bits of work (but "work" is "entropy"?), SALEK's method takes .026 s
> to add log2(2^7*64) = 13 bits of work.
> With sha512, my method needs 2^e/616 iterations to add e bits of work,
> SALEK's method needs log2(2^(e-6)) iterations to add e bits of work
> (and SALEK's method is very fast!).
Your math seems a bit confused. Let e = bits, i = iterations.
Your method:
e = log2( i * 616 ); 2^e = i * 616; i = 2^e / 616, i.e. roughly 2^(e-9.3);
SALEK method:
e = log2( i * 64 ); 2^e = i * 64; i = 2^e / 64; i = 2^(e-6);
You've got an extra log2 in your calculation.
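In code, the bookkeeping looks like this (round counts as assumed in this
thread):

# Work/iteration bookkeeping, with the per-iteration round counts assumed above.
from math import log2

def bits_of_work(iterations: int, rounds_per_iteration: int) -> float:
    """e = log2(iterations * rounds_per_iteration)"""
    return log2(iterations * rounds_per_iteration)

def iterations_needed(e: float, rounds_per_iteration: int) -> float:
    """Inverse: i = 2**e / rounds_per_iteration (note: no extra log2)."""
    return 2 ** e / rounds_per_iteration

print(bits_of_work(16, 512 + 40 + 64))    # Cristiano's method: ~13.3 bits
print(bits_of_work(2 ** 7, 64))           # SALEK's method: exactly 13 bits
print(iterations_needed(13, 64))          # 128 iterations for 13 bits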
> I have only a doubt: I don't think that a single round of sha is the
> same as a single doubling of a point in GF(p).
True. Doubling or adding a point is slower than a single round of sha.
> In other words, a doubling of a point in GF(p) should be compared with
> the whole sha, not with a single round of sha. What do you think?
That's kinda overdoing it. Do you think doubling a point in GF(p) takes
the same time as the full 64 rounds in sha512? I seriously doubt it.
Perhaps you should consider doubling a point as the same as x (maybe 8?)
internal rounds of sha. Or better yet, consider the operations which
make up a point doubling as each being one round. In the ECC
point-add implementation I have, I see 6 subtracts, 2 multiplies,
one square (squaring an integer is faster than multiplying it by
itself), one modinverse, and 2 mods. In point double, I see 3
subtracts, 5 multiplies, 2 squares, one modinverse, and 2 mods. Having
written this implementation myself, I am 100% certain that it is not
optimal.
If we measured the number of integer operations that point doubles and
adds take with my implementation, it would be misleading, since a smart
implementor will (1) use Montgomery operations throughout, rather than a
mod at the end, and (2) avoid doing divisions during a point multiply
until the end, by converting points from (x, y) to (x, y, z) (where the
original (x,y) is (x/z,y/z)). Any variant of SALEK depends on measuring
the fastest possible version -- if we do a slow version, and our enemy
uses a faster but equivalent algorithm, they are doing less work than we
have measured.
> > Thus, by attempting to measure rounds more accuratly, we end up with
> > a more accurate measurement of how much the two methods should
> > differ.
>
> Thank you very much for your clear explanation!
>
> Cristiano
I have an idea you might like, for making measurement easier: instead
of multiplying a point by your 512-bit integer, why not exponentiate
some primitive element (3, I suppose) with a 512-bit modulus?
Anyway, it doesn't matter whether you use sha as your hash, or point
multiplications, or something else... as long as you use a method for
which there is no shortcut to do it faster, the time it takes you to
strengthen your key is the same as the time it takes the enemy to redo
the operation, and thus he has to do that much more work per guess.
Oh, and accurate measurements are everything!
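As a minimal sketch of the idea (iterated hashing with arbitrary parameters;
any shortcut-free operation would do equally well):

# Key strengthening by iterated hashing: iterating 2**e times adds
# roughly e bits of work per guess, provided no shortcut exists.
import hashlib

def strengthen(passphrase: bytes, salt: bytes, extra_bits: int) -> bytes:
    digest = hashlib.sha256(salt + passphrase).digest()
    for _ in range(2 ** extra_bits):
        digest = hashlib.sha256(digest).digest()
    return digest

key = strengthen(b"my passphrase", b"per-user salt", 13)
# Time it on the target machine: the enemy pays the same cost per guess.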
--
The difference between theory and practice is that in theory, theory and
practice are identical, but in practice, they are not.
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: super strong crypto, phase 3
Date: Wed, 07 Mar 2001 04:08:21 GMT
David Wagner wrote:
> >since the rate of introduction of new unknowns (key) equals
> >the rate of accumulation of information about them.
> Why should this imply that there is no exploitable relationship?
Starting with an unrecoverable situation due to insufficient
information, the above implies that the situation remains
unrecoverable (always insufficient information). True, it's
hand-waving, but plausible hand-waving.
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: PKI and Non-repudiation practicalities
Date: Wed, 07 Mar 2001 04:10:56 GMT
Anne & Lynn Wheeler wrote:
>
> "Lyalc" <[EMAIL PROTECTED]> writes:
>
> > I've always taken the view that non-repudiation, at a commercial
> > level, is implementation dependent, regardless of the underlying
> > technology.
> > Certificates/PKI use a shared 'secret' such as a password or
> > biometric. The 'weakest link' rule means that shared
> > secrets/passwords are the thing to focus on for really getting PKI
> > secure, even after the expertise devoted to PKI comes up with the
> > answer to the PKI part of the puzzle.
>
> pins/passwords/biometrics for activating a hardware token is
> significantly different from pins/passwords/biometrics used as
> shared-secrets.
>
> it is possible to do 3-factor authentication with no shared-secret
>
> 1) something you have
> 2) something you know
> 3) something you are
>
> using hardware token that requires both pin & biometric ....
> where the business process of a pin activated card is significantly
> different from a business process shared-secret PIN (even if they are
> both PINS).
>
> hardware token with a pin & biometric requirement can be used to meet
> 3-factor authentication ... and a shared secret doesn't exist.
Even if your authentication is via a token, which in turn is activated
via biometrics, there still is a secret. It's the data in the token!
If all tokens were identical (allowing for their needing different
biometrics to activate), they would be useless. The token needs to
contain something that uniquely identifies it electronically and
authenticates that identity. Identification is simple; give each token a
unique id. Authentication, however, requires some sort of secret --
either a private key, or a shared secret.
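To make that concrete, here is a small sketch of challenge-response in the
shared-secret flavour (HMAC; a private key and a signature would fill exactly
the same role): the token can only prove its identity by exercising a secret
it carries.

# Challenge-response with a token-held secret (shared-secret flavour).
import hashlib, hmac, os

token_secret = os.urandom(32)     # lives inside the (tamper-resistant) token

def token_respond(challenge: bytes) -> bytes:
    return hmac.new(token_secret, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes, known_secret: bytes) -> bool:
    expected = hmac.new(known_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)
assert verifier_check(challenge, token_respond(challenge), token_secret)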
The weakest link is still access to the secret. An attacker merely
needs to get hold of a token, open it up, and defeat the tamper
resistance, and he has the secret. This is conceptually no different
from beating a password out of the user with a rubber hose.
> and doing it w/o impacting the business process infrastructure
> .... separate issue from impacting the technology implementation;
> pilots tend to be technology issues ... real deployment frequently are
> business process issues.
--
The difference between theory and practice is that in theory, theory and
practice are identical, but in practice, they are not.
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Super strong crypto
Date: Wed, 07 Mar 2001 04:26:50 GMT
David Wagner wrote:
> Could you define "reasonable mixing properties", please?
Essentially as indicated by the name -- each bit of the output
is a function of *every* input (PT and key) and vice-versa for
the (generalized) inverse, such that there are 2^sizeof(k)
solutions distributed fairly evenly among all the input bits,
for on the order of (2^sizeof(k))/sizeof(x) possibilities for
*any* given PT bit. I.e., what one would want anyway for any
block cipher with sizeof(k)<sizeof(x).
You may want to further require that general inversion is
expensive, but given the above property I'm not sure that is
necessary.
------------------------------
From: [EMAIL PROTECTED] (Eric Lee Green)
Crossposted-To: alt.hacker
Subject: Re: The Foolish Dozen or so in This News Group
Reply-To: [EMAIL PROTECTED]
Date: 6 Mar 2001 22:21:47 -0600
On Tue, 06 Mar 2001 12:31:47 -0800, Anthony Stephen Szopa
<[EMAIL PROTECTED]> wrote:
>Eric Lee Green wrote:
>>
>> On Tue, 06 Mar 2001 05:20:22 -0800, Anthony Stephen Szopa
>> <anthony@ciphile.com> wrote:
>> >Are you qualifying this to be restricted to hard drives only?
>> >
>> >Or do you mean all this applies to floppy disks, too?
>>
>> On Windows 98 SE, at least, the default for removable drives is that the
>> OS does NOT do write-behind caching. This is a control panel setting.
>> Floppies are a removable drive.
>>
>> You can disable write-behind caching for hard drives on Windows 98 SE
>> via the control panel, as described in an earlier post. This does not,
>> however, address the problem of the buffer inside SCSI disk drives, or
>> the problem of NTFS not operating the way you think it should on
>> Windows 2000/Windows NT, or Windows 2000/NT in general.
...
>> you get with Windows. An application, however, has no way of knowing
>> whether the write-behind buffer caching is turned on or not.
>Thanks for agreeing that the OverWrite program works as described
>for floppies. At least this is out of the way.
Well, from the OS point of view you're not going to have stuff in the
buffer cache, because write-behind buffer caching is turned off by
default for removable media in Windows. However, there may be
hardware-level buffering issues. I'm not familiar with the controller
chips used in x86 machines, but on some older machines, writes to the
floppy were actually to a track buffer. While the controller was
waiting for the sector that needed writing, it would accept other
writes intended for that track. Once the disk rotated around to the
first sector in the track buffer, the controller would write all the
sectors intended for that track. It had in the meantime already told
the OS that those sectors had been written, even though they
had not been. Now, as long as the data you're overwriting is
larger than a track (let's see, 1.44mb, 80 tracks, that's 18K per
track), you don't have to worry about buffer effects. And as long as
you make sure to read or write to some other track during or between
your writes to that file, you don't have to worry about buffer
effects. But if the entire file fits on one track, you have a problem.
On the Linux re-implementation of the DOS filesystem, your
fopen()/fclose() pair will not move the floppy disk's head -- the
information needed for fopen() is cached internally in what's known as
a "dentry" cache. I believe that Windows 9x does not have the
equivalent of the Linux "dentry" cache, but without source code to
Windows, I cannot verify this.
In any event, if your file on a 1.44mb floppy is bigger than 18K, you
get the behavior you expect out of your fopen()/write()/fclose()
triplet, guaranteed -- assuming that the user has not gone into his
preferences and turned on the write-behind caching for removable
media. So I would suggest writing out a 32K or so "dummy" file (to get
the 2.88mb floppies too) between writes to a file that is smaller than
32K. On Windows 9x, with floppies, you MAY get the expected behavior
even if the file is smaller than 18K, depending upon how smart the
Windows 9x buffer manager is -- I personally would experiment,
creating a file that's 1K long and repeatedly writing it and seeing
whether Windows has to go fetch the dentry information again (the
Linux version of the Windows filesystem doesn't move the head, but the
Linux implementation of the Windows filesystem is built on top of the
already-existing Unix buffer cache layer). And finally, with ZIP disks and
other removable media, you don't know how big the on-disk track buffer
is, so you're kind of screwed when dealing with small files -- the best
solution I can think of is to go write a big file somewhere else between
your writes to the file you're trying to erase, and hope you overrun its
track buffer. This is a case of where even though the OS is not buffering
the data, the SCSI or IDE controller embedded in the ZIP or JAZZ drive
may very well be buffering it internally, totally invisible to you.
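Something like the following sketch is about the best you can do at the
application level (it assumes a filesystem that rewrites blocks in place,
and none of it is guaranteed against drive-level caches, as discussed above):

# Multi-pass overwrite with a dummy file pushed through between passes.
import os

def overwrite_pass(path: str, pattern: int) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:            # rewrite the existing blocks
        f.write(bytes([pattern]) * size)
        f.flush()
        os.fsync(f.fileno())                # push OS buffers toward the device

def write_dummy(path: str, size: int = 32 * 1024) -> None:
    with open(path, "wb") as f:             # try to overrun any track buffer
        f.write(os.urandom(size))
        f.flush()
        os.fsync(f.fileno())

def scrub(path: str, dummy: str = "dummy.bin", passes: int = 3) -> None:
    for n in range(passes):
        overwrite_pass(path, 0xFF if n % 2 == 0 else 0x00)
        write_dummy(dummy)
    os.remove(dummy)
    os.remove(path)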
>I am of the opinion that it would be best to insure that the hard
>drive is written to regardless of if the write behind caching as MS
>calls it is disabled or not.
>
>All of this becomes academic if we finally address the issue of
>when the write will or must take place instead of when the write
>will not necessarily take place.
Well, there are three layers to consider here: filesystem, buffer cache,
and underlying hardware.
First of all, there is the file system layer. The file system layer,
insofar as writes go, has only one goal -- to make sure that data
gets written to the underlying hardware in an order that keeps the
filesystem consistent enough to recover from an interrupted operation
(albeit probably with loss of data, but at least your whole filesystem
isn't corrupted). Some filesystems, such as the traditional
DOS/Windows 9x filesystem, do this basically by starting at the
beginning of the hard drive and going upwards (which is why you must
occasionally run the disk optimizer to keep your performance from
going to the toilet). Since the underlying buffer cache operates in
the same order, this ensures that the data written to disk at least is
written in a consistent order. Some filesystems, such as Linux EXT2 or
the traditional Unix filesystem, don't care -- they write hither and
thither, don't care what order the underlying buffer cache writes data
in, and use redundant 'root' blocks and a very smart 'fsck' disk
recovery program to get the filesystem back in order. Finally, there
are filesystems such as the Reiser filesystem on Linux and the FFS
filesystem on BSD Unix. These depend upon writes taking place in a
transactional manner -- metadata gets written first, then the actual
blocks, then metadata gets updated again to indicate that the
transaction is completed. This means that they must give 'hints' to
the underlying buffer cache to tell it what order things need to be
written in, in order to maintain a consistent filesystem on the hard
drive. This is why the Reiser filesystem was not in the Linux 2.4.0
kernel -- Linus did not like the extensive modifications needed to the
underlying buffer cache for it to deal with Reiser's "hints".
Whatever the case is, the filesystem's job is finished the moment it
hands the block off to the buffer cache. The filesystem assumes that
the buffer cache is going to "do the right thing". If the system is in
"sync" mode (either because you've mounted it that way in Linux, or
because you went and turned on that "no write-behind caching" setting
in the Windows preference), the buffer cache may very well then send
the block straight to the hardware. But the filesystem does not
necessarily know that.
There's another issue regarding overwriting blocks: Whether blocks are
re-used or not depends upon the filesystem. The Windows filesystem
will reuse the same blocks for the file. The traditional Unix and Linux
filesystems will reuse the same blocks for the file. However, the Reiser
filesystem, and probably NTFS (since NTFS also claims to be a transactional
filesystem), will *NOT* reuse the same blocks for the file. It will
first write the changed block to a different place on disk, then
mark it as the new block 5 of the file "/foo/bar" (just to give an example).
Then the old block is released back to the free pool, with your data
still in it, recoverable via forensics mechanisms until such time as it
is re-used off of the free pool.
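A toy model (purely illustrative -- this is not the actual NTFS or Reiser
layout) of why in-place overwriting fails on such a filesystem:

# Toy copy-on-write "filesystem": updating a block allocates a fresh block
# and frees the old one, with the old data left intact on the free pool.
class ToyCowFS:
    def __init__(self, nblocks):
        self.blocks = [b""] * nblocks          # raw block contents
        self.free = list(range(nblocks))       # free-block pool
        self.files = {}                        # name -> list of block numbers

    def write_block(self, name, index, data):
        new = self.free.pop(0)                 # always pick a fresh block
        self.blocks[new] = data
        blocklist = self.files.setdefault(name, [])
        while len(blocklist) <= index:
            blocklist.append(None)
        old = blocklist[index]
        blocklist[index] = new                 # commit: point file at new block
        if old is not None:
            self.free.append(old)              # old block freed, data NOT erased

fs = ToyCowFS(8)
fs.write_block("/foo/bar", 0, b"secret data")
fs.write_block("/foo/bar", 0, b"XXXXXXXXXXX")            # "overwrite" in place
print(any(b"secret data" in blk for blk in fs.blocks))   # True: still on disk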
Now, assuming that you're using a filesystem that reuses blocks when
you write to an already-existing file in r+a mode (i.e., you're *NOT*
using Windows NT or Windows 2000 with an NTFS filesystem): there is a
call on Unix called 'fsync', which supposedly guarantees that all
blocks currently in the buffer cache will be flushed out to disk
before 'fsync()' returns. Since I am not a Windows programmer, I do
not know whether Windows has a similar call, and if so, what it is
named. Use of 'fsync()' or its Windows equivalent would thus be needed
to flush your buffer cache between your passes over the file. But even
that may not be enough. To wit:
Our NFS server at the office has an ICP-Vortex SCSI RAID-5 controller
in it. This RAID controller has up to 128 megabytes of cache memory in
it (we happen to have 64 megabytes of PC-133 memory in it). This is a
normal x86-based system running Linux, BTW, this is not some kind of
super-high-priced server system (we probably paid under $10,000 for
this system, including the hard drives). We have the NFS server
attached to a UPS system, so we have write caching turned on. If you
write a 1 megabyte file to this thing, then write the 1 megabyte file
with different data once again to this thing (repeat a few dozen
times), it will generally write to the actual physical hard drive only
once, with the last version of the file. Now, there does exist a BIOS
call on this beast to force it to flush its internal buffer. But you
must first know that a) this beast is connected, and b) how to call
that BIOS call. Right now, the Linux driver for this card, when its
"shutdown" routine is called (when the system is shutting down), knows
how to call this BIOS call (so that all data gets written to the disk
at shutdown), but a normal "fsync()" call doesn't touch it, because
the buffer cache knows nothing about the underlying hardware (that's
the driver's job).
Then finally, the SCSI hard drive has its own internal cache. The SCSI
hard drive may be entirely inaccessible, though. For example, the ICP-Vortex
card works by making the three to five drives of a RAID-5 array look, to
the operating system, as if they were a single SCSI hard drive. SCSI commands
that the operating system tries to send to that "fake" hard drive are
intercepted by the ICP-Vortex controller, which does what it thinks is
appropriate with them. Thus you have no way of actually going into individual
SCSI hard drives and turning off their disconnect or their caching.
>Do you have any thoughts on this since this is going for the issue's
>jugular?
My thought is that on Windows 9x you can probably get by with the
Windows equivalent of the 'fsync()'. Windows 9x machines generally do
not have RAID controllers attached to them (which are especially bad
for buffering stuff), and the Windows 9x filesystem always overwrites
the file when you fopen() it with r+a. I would put some important
caveats into the documentation though to turn off the write caching on
RAID controllers and to be aware of caching effects on hard drives
(especially for small files). It's not as if any other product can
get around those effects, and it would make you seem more honest than
your competitors.
Windows NT and Windows 2000 are a lost cause. There is basically no
way, other than going directly to the hard drive and interpreting the
NTFS filesystem and then directly issuing low-level IDE or SCSI READ
and WRITE instructions, to force it to overwrite a block. If it is a
transactional file system the way that Microsoft says it is, it will
always write a changed block to another place on disk, and only then
mark the original (unchanged) block as a "free" block. Since I do not
have source code to NTFS I cannot verify that this is how it
operates. However, this is how other transactional filesystems such as
SGI's XFS and Linux's Reiser FS work, and I seriously doubt that
Microsoft's engineers are less competent than SGI's or Reiser's (I may
not like the company's policies, and I may limit my use of the
company's products as a result, but they have some damn good engineers
working there -- the fact that they could get Windows 2000 as stable
as it is, despite the inherent difficulties of making a multi-threaded
C++ program stable, should be evidence enough of that).
My basic thought: Eventually, everything is going to Windows 2000 or
Linux and to transactional filesystems such as NTFS or Reiser FS. So
in the long run, it becomes impossible to securely erase data. It's
almost impossible now for anything other than floppies, due to hard
drive caches getting larger and larger (I remember when IDE hard
drives had a 128K cache internally, now many have a 2 megabyte or
larger cache internally!). My philosophy is to instead move towards
keeping sensitive data on an encrypted filesystem such as a
"scramdisk" on Windows or an encrypted loopback device on Linux (both
of which create a large "container" file on disk, which is then used
as an "encrypted" hard drive -- but it's inaccessible unless you know
the key). Then the only thing you must securely erase is the key.
Probably the only place where secure erase is going to remain
important is on floppy disks -- you need someplace to store the key
for your "scramdisk", after all.
Note that this is all mostly hypothetical on my part, since I
personally have no need either for an encrypted "scramdisk" or for a
secure erase (especially a Windows one, since I am typing this via my
Linux system). Too much of me is public record nowadays for me to
worry about my privacy. That horse is already out of the barn :-(.
--
Eric Lee Green [EMAIL PROTECTED] http://www.badtux.org
AVOID EVIDENCE ELIMINATOR -- for details, see
http://badtux.org/eric/editorial/scumbags.html
====== Posted via Newsfeeds.Com, Uncensored Usenet News ======
http://www.newsfeeds.com - The #1 Newsgroup Service in the World!
======= Over 80,000 Newsgroups = 16 Different Servers! ======
------------------------------
From: CR Lyttle <[EMAIL PROTECTED]>
Crossposted-To: alt.security.pgp,talk.politics.crypto
Subject: Re: => FBI easily cracks encryption ...?
Date: Wed, 07 Mar 2001 04:28:37 GMT
Jerry wrote:
>
> On Mon, 05 Mar 2001 20:06:21 GMT, "Mxsmanic" <[EMAIL PROTECTED]> wrote:
> > "Joe H. Acker" <[EMAIL PROTECTED]> wrote in message
> > news:[EMAIL PROTECTED]...
> >
> > > Breaking strong crypto is the most expensive
> > > path of several dozens of paths that lead to
> > > your private information. Both government
> > > agencies and crooks are more likely to break
> > > into your apartment ...
> >
> > An excellent point, often overlooked. Aside from breaking in in an
> > obvious way, experts could defeat even the fanciest lock and sneak in
> > undetected _far_ more easily than anyone could crack any decent
> > encryption scheme.
> >
> > And even that isn't necessary. The spooks can just park a van across
> > the street from your house and watch what you type on your screen. That
> > would be a million times cheaper than trying to break your encryption
> > the hard way.
> >
> TEMPEST "eavesdropping" is very resource intensive and not something that's done at
>random. If that van's
> parked across the street, you did something to bring it
> there.
I've seen and built systems for less than $100 that can read your monitor
from across the street. Several countries have regular patrols checking,
from the street, what their citizens are watching on TV or listening to
on radios. (Does England still do that?). Such technology has been
available for over 50 years. It just keeps getting cheaper.
--
Russ
<http://home.earthlink.net/~lyttlec>
Home of the Universal Automotive Test Set
Linux Open Source (GPL) Project
------------------------------
From: Paul Rubin <[EMAIL PROTECTED]>
Crossposted-To: alt.security.pgp,talk.politics.crypto
Subject: Re: => FBI easily cracks encryption ...?
Date: 06 Mar 2001 20:43:33 -0800
CR Lyttle <[EMAIL PROTECTED]> writes:
> I've seen and built systems for less than $100 that can read your monitor
> from across the street. Several countries have regular patrols checking,
> from the street, what their citizens are watching on TV or listening to
> on radios. (Does England still do that?). Such technology has been
> available for over 50 years. It just keeps getting cheaper.
Can you post details about this? I've always thought it was an urban
myth except under lab conditions.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************