Cryptography-Digest Digest #143, Volume #10 Mon, 30 Aug 99 19:13:03 EDT
Contents:
Re: Can I export software that uses encryption as copy protection? ("Trevor Jackson, III")
Re: public key encryption - unlicensed algorithm (David A Molnar)
Re: What if RSA / factoring really breaks? (David A Molnar)
Re: Q: Cross-covariance of independent RN sequences in practice (Mok-Kong Shen)
Re: Cryptography Items and Issues (JPeschel)
Re: On employing message-decoys (Jim Dunnett)
Re: On employing message-decoys (Mok-Kong Shen)
Re: On employing message-decoys (Mok-Kong Shen)
Re: 512 bit number factored (Bob Silverman)
----------------------------------------------------------------------------
Date: Mon, 30 Aug 1999 16:45:37 -0400
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Crossposted-To: misc.legal.computing
Subject: Re: Can I export software that uses encryption as copy protection?
Eric Lee Green wrote:
> "Trevor Jackson, III" wrote:
> > Eric Lee Green wrote:
> > > Timur Tabi wrote:
> > > Yes, that is legal, but note that I could crack this "registration"
> > > scheme within minutes using a normal binary editor to change the output
> > > of your verification routine to always say "verified!".
> >
> > What if there is no such routine? What if the security routine simply decrypts the
> > operable binary image?
>
> Yes, that was a common copy protection scheme back in the early 80's.
> They would encrypt major portions of the binary image, including the
> portions that examined the copy protection on the disk. All it did was
> make it a nuisance, because we had to set a breakpoint at the end of the
> decrypt routine and then save the decrypted binary before continuing. In
> retaliation, the copy protection vendors added multiple levels of
> decryption, i.e., they decrypted the binary, but then other portions of
> the binary were similarly encrypted and had to be decrypted using a
> different key, etc. This made it a nuisance, but the same process could
> be done.
>
> > To circumvent this you have to intercept the "plaintext binary" and replay it.
> > That can be made difficult.
>
> Not if it is physically on my disk. I don't even necessarily have to
> replay it. The first major program that I ever wrote was a commenting
> disassembler (i.e., you could add comments that went with various memory
> addresses), and then I could patch the binary directly on the disk prior
> to loading it.
OF COURSE it's easy to patch a disk image. That's like solving a
monoalphabetic cipher. Trivial. Your success in patching programs is to your
credit, but solving easy problems does not support your contention that all
problems are as easy to solve.
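For concreteness, a minimal sketch (in Python) of the kind of one-byte patch
being discussed; the file name, offset, and patch byte are all hypothetical:

    # Hypothetical sketch: overwrite one byte on disk so a verification
    # routine always reports success. Offset and byte value are made up.
    PATCH_OFFSET = 0x1A2B        # assumed location of the "verified?" test
    PATCH_BYTE = b"\x01"         # assumed value meaning "always verified"

    with open("app.bin", "r+b") as f:
        f.seek(PATCH_OFFSET)
        f.write(PATCH_BYTE)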
> > > Back in the early 80's software publishers tried to create "unbreakable"
> > > copy protection schemes.
This lends weight to my position, not to yours. Anyone attempting to produce
theoretically "unbreakable" security in pure software is an idiot or an amateur.
> > > They failed. If I have physical access to your
> > > software, I can load it into a binary debugger, trace its execution, and
> > > 'break' it.
You might be able to, but it wouldn't be as simple as the sentence above. For
instance, if you are tracing from point to point with breakpoints, you'll have
a bit of trouble with the code that stomps the breakpoint trap vector and the
breakpoint trap handler. If you are single stepping, you'll have a bit of
trouble with the code that stomps the trace trap vector and handler.
Of course you'll be able to detect these and correct them. But that will make
it hard to execute the app, because the app has hundreds of dispatch routines,
some of which utilize addresses and/or code stored in the vectors and/or the
trap handlers.
The point of this is that while "cracking" an app is always possible, it isn't
always easy.
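As an illustration of the detect-and-stomp idea, here is a rough modern
analogue in Python; this is only an analogue, not the trap-vector tricks
described above:

    # A tracing debugger installs a trace hook; the program can inspect
    # that hook and stomp it, much as the anti-debug code above stomps
    # the trap vectors.
    import sys

    def stomp_tracer():
        if sys.gettrace() is not None:   # a tracer/debugger is attached
            sys.settrace(None)           # stomp the hook
            raise SystemExit("tracing detected")

    stomp_tracer()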
> >
> > In theory this is always possible, but in some cases it requires enormous hardware
> > support.
>
> It requires enormous hardware support only if it's not on your disk
> drive. As I mentioned, I can disassemble it while it's not running, and
> patch the binary directly to put a breakpoint after the end of the
> decryption routine that jumps into the debugger.
If the app fights the debugger hard enough, your patch effort will be larger
than the effort required to write the application from scratch. It will still
be possible to crack the application, but it wouldn't be sane to do so unless
you wanted bragging rights.
> > a few of the more interesting interrupt vectors. An application can make this
> > difficult by using those vectors for other purposes
>
> Not on most modern operating systems. For example, Unix runs all
> programs in a virtual machine,
Really. How many versions of Unix have you used? Most don't use virtual
machines, because virtual machines became common far later than Unix did.
Of the versions that do use virtual machines, there are always ways to
escape the virtualization.
> and the debugger operates outside of that
> virtual machine.
Usually it is outside. But it always needs a tiny footprint inside the virtual
environment, usually for efficiency reasons. Those footprints can be detected and
stomped upon.
> But even on more primitive operating systems, all you
> have done is inconvenience me, not stop me.
In principle you are correct. You cannot be stopped, because you can add
additional resources and time, whereas the application has a fixed set of
defenses. But your effort level may be quite high given a thorough set of
defenses.
As the attacker you enjoy the attacker's fundamental advantage of selecting
the point of attack, whereas the defender, the author of the application, has
to defend "everywhere". This fact does not lead to a dismissal of defense as
a concept.
> > In theory one can provide a perfect virtualization of any environment, single
> > stepping the application as necessary. But this will skew the instruction rate in
> > a manner detectable to the application. In order to prevent the skew the attacker
> > needs the equivalent of an extremely fast in-circuit-emulator.
>
> Poppycock. All I need is access to the program on my hard drive, a
> commenting disassembler, and a debugger. How many programs have you
> cracked anyhow? (I'll take the 5th on how many I cracked back in my
> childhood).
No comment.
> > It's not impossible, but, like most modern crypto, it can be made unreasonably
> > expensive. The failure of the software vendors around the time PCs were introduced
> > does not indicate the difficulty of creating debug-proof software. It indicates
> > the amateurishness of the software vendors.
>
> Again: How many programs did you attempt to crack?
See above.
> How can you tell us
> that the software vendors were amateurish, when you have zero (zilch)
> experience in cracking programs?
How did you conclude that? My statements were not a personal attack. Yours
are starting to look like one. Jumping to conclusions about my experience
won't advance your arguments.
> If you are talking about programs that
> run directly on the hardware you may be correct. But programs which are
> on disk and which run under the control of an operating system can
> always be attacked.
Sure. But "always be attacked" <> "always be easily broken".
> The fact of the matter is that if I have physical access to the actual
> decryption part of the program, I can always (ALWAYS) rig it to spit out
> the plaintext to disk. Assuming the decryption key is embedded in the
> program somewhere, I have your plaintext. The most you can do is encrypt
> the program using a key that must be received from external sources, but
> that is either a distribution nightmare or useless (i.e., either each
> copy of the program is encrypted with a different key, or all copies are
> encrypted with the same key, i.e. one person posting the key on alt.2600
> just rendered your encrypted license stuff useless).
I'm glad your simple attacks work for you on simple defenses. "Assuming the
decryption key is embedded in the program somewhere" is equivalent to
assuming the application author is an idiot. Or an amateur, as I originally
stated.
> Cryptographic systems can only secure communications. They cannot stop
> an attacker from viewing the plaintext by "tapping" the decryption
> engine. Given physical access to the decryption engine, it can be rigged
> to spit out the plaintext to me at the same time that you view it.
> Without understanding this, you will never be able to create a secure
> cryptographic system.
If you can undetectably "tap" the decryption engine you can certainly obtain the
plaintext. If you cannot, the rest of your argument becomes moot.
> My own licensing system uses strong authentication to make sure that
> script kiddies cannot type just anything for the license key and have
> the program work, but I have no illusions about this being secure
> against crackers with binary editors, disassemblers, and debuggers.
All such defensive systems have to be justified in terms of cost/benefit.
I'll take your word for it that the cost of more "hardening" of your code is
not justified. But that has nothing to do with the possibility of such
hardening being useful.
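For readers unfamiliar with the idea, a minimal sketch of license-key
authentication of the general kind Eric describes. The scheme and secret
here are hypothetical, and, as the thread notes, a cracker with a debugger
can still patch the final comparison:

    import hmac, hashlib

    VENDOR_SECRET = b"hypothetical-vendor-secret"   # assumption for the sketch

    def issue_license(user: str) -> str:
        # vendor side: the license key is a MAC over the user name
        return hmac.new(VENDOR_SECRET, user.encode(), hashlib.sha1).hexdigest()

    def check_license(user: str, key: str) -> bool:
        # program side: recompute and compare
        return hmac.compare_digest(issue_license(user), key)

    print(check_license("alice", issue_license("alice")))  # True
    print(check_license("alice", "0" * 40))                # False

Script kiddies cannot guess a valid key, which is all such a scheme claims.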
------------------------------
From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: public key encryption - unlicensed algorithm
Date: 30 Aug 1999 21:36:57 GMT
shivers <[EMAIL PROTECTED]> wrote:
> The main purpose is for the development of a _very_ secure online credit
> card submission system - where the details stay encrypted all the way from
> the user's desktop to the serving company's payment processing desk.
Have you looked at the SET protocol?
-David
------------------------------
From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: What if RSA / factoring really breaks?
Date: 30 Aug 1999 21:31:06 GMT
Bob Silverman <[EMAIL PROTECTED]> wrote:
>> reduction from one to the other. If there is, then the rest of this post
>> is wrong.
> There is a reduction from discrete logs over Z/NZ to factoring N.
Thank you for the pointer!
>> > What would this contibute to the NP vs. P problem?
>>
>> It shows us that a problem not known to be NP-complete is in P.
^^^^^^^^^
> Huh? Justify this statement!!
> If P != NP, it is well known that there are infinitely many
> problems that are neither in P nor in NPC.
> Why would showing that factoring is in P show that any
> problem not in NPC must be in P? [hint: It wouldn't]
I'm sorry, I wasn't clear. By "a problem" I meant "the single problem
'factoring' ". If factoring is shown to be in P (my interpretation of the
original poster's word "easy"), then it is "a problem not known to be
NP-complete" which has been shown to be in P.
I did not mean to imply that any problem not NPC would
necessarily be in P. Apologies.
Thanks,
-David Molnar
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Q: Cross-covariance of independent RN sequences in practice
Date: Mon, 30 Aug 1999 21:53:10 +0200
Douglas A. Gwyn wrote:
>
> Mok-Kong Shen wrote:
> > Perhaps I have not expressed myself clearly enough. I meant what
> > magnitude of the value of computed cross-covariance can be safely
> > considered to be 0 in practice (even though that is non-zero) and
> > hence assume that there is indeed independence.
>
> That's not proper statistical practice. The question that *can*
> be answered by such statistics is "How likely is it that I would
> have seen a statistic this large if the hypothesis of independence
> is actually true?" Fortunately, such a model (independence) makes
> computing the expected values of such statistics fairly easy.
> This is treated in many textbooks on communications.
I am not yet quite convinced that there is a way to satisfactorily
evaluate a value of cross-covariance computed in practice without
more or less reference to 'experience'. What would a theoretical model
of independence look like? Exact zero of cross-covariance is required
by independence. But how can one numerically define certain 'more or
less independence' (i.e. not absolute independence)? Is one going
to employ cross-covariance for that? I suppose one can't do that,
because one would be fuzzy, if not circular, in definition. More
concretely, if given a cross-covariance of, say, 0.05 % (I am using
a value mentioned in a response elsewhere), can one refer to a
well-defined theoretical model and say that the hypothesis of
independence is not rejected at such and such a level? I very much
doubt that. If you happen to know a test statistic for the
independence in question as well as its distribution, I should like
very much to obtain a literature reference.
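For reference, the standard large-sample approximation does supply such a
statistic: under independence, the sample cross-correlation r of two white
sequences of length N is approximately normal with mean 0 and variance 1/N,
so |r| > 1.96/sqrt(N) occurs only about 5% of the time. A sketch under that
textbook assumption (this says nothing about any particular generator):

    import math, random

    N = 10000
    x = [random.gauss(0, 1) for _ in range(N)]
    y = [random.gauss(0, 1) for _ in range(N)]

    mx, my = sum(x) / N, sum(y) / N
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / N
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / N)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / N)
    r = cov / (sx * sy)                # sample cross-correlation at lag 0

    threshold = 1.96 / math.sqrt(N)    # ~5% level under independence
    print(r, threshold)                # reject independence if |r| > threshold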
M. K. Shen
------------------------------
From: [EMAIL PROTECTED] (JPeschel)
Subject: Re: Cryptography Items and Issues
Date: 30 Aug 1999 21:54:52 GMT
> [EMAIL PROTECTED] (John Savard) writes:
>If there *are* encryption programs out there with backdoors put in at
>the request of intelligence agencies - I will not hazard an opinion on
>this, except to note that it isn't technically infeasible -
>
>then, wouldn't applications that are sold for money,
>
>the source code of which is, for obvious reasons (or at least as is
>the usual practice - the necessity of which is sometimes debated), not
>available,
>
>be more likely to be in this category than ones whose source code is
>out there too - which are usually the free ones?
Man, John, that is one long sentence.
Not only would it be more likely, as you suggest, for commercial programs
to have backdoors; that was in fact the case. A few years ago, while ITAR
was in effect, I was looking at some commercial crypto. No source
code was provided. The program used, as I recall, DES. In the
international version of the program, 16 bits of the DES key were known
so recovery meant searching for the remaining 40 bits.
The company still claimed, wrongly, I think, that its international
version used 56-bit DES.
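(The arithmetic behind that weakening: revealing 16 of the 56 key bits cuts
the search by a factor of 2^16. A quick check in Python:)

    print(2 ** 56)             # 72057594037927936 full DES keys
    print(2 ** 40)             # 1099511627776 keys left to search
    print(2 ** 56 // 2 ** 40)  # 65536 = 2^16 reduction factor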
Though it's been quite a while, I seem to remember this sort of "backdoor"
as fairly common, no matter what the crypto algorithm. Wasn't the technique
once used by Netscape in the international versions of its browser?
Joe
__________________________________________
Joe Peschel
D.O.E. SysWorks
http://members.aol.com/jpeschel/index.htm
__________________________________________
------------------------------
From: [EMAIL PROTECTED] (Jim Dunnett)
Subject: Re: On employing message-decoys
Date: Mon, 30 Aug 1999 19:01:32 GMT
Reply-To: Jim Dunnett
On Mon, 30 Aug 1999 05:43:37 GMT, [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
wrote:
>In article <[EMAIL PROTECTED]>, Mok-Kong Shen <[EMAIL PROTECTED]>
>wrote:
>>On the assumption that absolute security is neither attainable nor
>>(often) necessary/affordable, that no adversary is omnipotent (having
>>infinite resources) and that no real-life message needs to be kept
>>secret till eternity, let's consider the following scenario:
>>
>>Alice wants to send a message to Bob. The content of the message
>>needs to be kept secret only within 24 hours (e.g. stuff concerning
>>decision making in commercial negotiations), and she estimates that
>>this time frame is just within the capability of the adversary. She
>>sends in addition to the true message 9 dummy messages with random
>>or bogus contents, employing different keys. Doesn't this increase
>>the security of her message tenfold because the chance of analysis
>>is reduced to 1/10 of the original? If she sends 99 dummy messages,
>>doesn't she have a legitimate hope that the adversary would probably
>>give up?
> This is a good method; the problem, though, is that one has to be
>very careful about the content of the messages, since it may be possible
>to weed out incorrect messages. This creates another problem: to make the
>dummy messages look the same from a mathematical viewpoint, must she send
>just random stuff?
It just might also give the legitimate recipient the problem
of recognising the bogus messages ... !!!!
--
Regards, Jim. | We have decided not to go to France
amadeus%netcomuk.co.uk | this summer as it is too full of
dynastic%cwcom.net | unattractive, shirtless Englishmen
| talking into mobile 'phones.
PGP Key: pgpkeys.mit.edu:11371 | - Auberon Waugh.
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On employing message-decoys
Date: Mon, 30 Aug 1999 23:48:26 +0200
Mike Bell wrote:
>
> The underlying assumption here is that the adversary does not have
> sufficient resources to analyze all the messages. e.g. If I have
> 100 DES crackers currently sitting idle, then this approach adds
> no security - I just use more of my spare resources.
>
> Also this assumes that there are no speed-ups available to me from
> processing N messages in parallel. Assuming a brute force known
> plaintext attack (searching the entire key space), then attacking
> N messages simultaneously takes little longer than attacking a
> single message, since each guessed key can be checked against all N
> messages for little extra cost.
It is assumed that there is a definite upper bound on the resources
that the adversary can (or can afford to) employ.
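Mike's parallel-attack point can be made concrete with a sketch; the toy
cipher and plausibility test below are stand-ins, not a real attack:

    def toy_decrypt(key, ct):                # toy single-byte XOR "cipher"
        return bytes(b ^ key for b in ct)

    def looks_plausible(pt):                 # stand-in known-plaintext test
        return pt.startswith(b"ATTACK")

    def attack_all(ciphertexts, keyspace):
        hits = []
        for key in keyspace:                 # ONE sweep of the keyspace...
            for i, ct in enumerate(ciphertexts):
                if looks_plausible(toy_decrypt(key, ct)):  # ...N cheap checks
                    hits.append((i, key))
        return hits

    msgs = [bytes(b ^ 7 for b in b"ATTACK AT DAWN"),
            bytes(b ^ 42 for b in b"random decoy msg"),
            bytes(b ^ 99 for b in b"ATTACK AT DUSK")]
    print(attack_all(msgs, range(256)))      # [(0, 7), (2, 99)]

Both real messages fall out of a single sweep; the decoy costs the attacker
almost nothing.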
>
> An interesting sidebar to this are the "numbers" stations which
> were/are used for communication to/from agents in Cuba/USA.
> Here an interminable sequence of random numbers is read over the
> radio in a continuous broadcast. At some point, known only to
> the sender/receiver, the random numbers are replaced with an
> encrypted message. The attacker has to guess where the message
> starts and stops (if anywhere).
This is the busy channel I referred to.
>
> Even this approach (on its own) may not help much. Assuming 1 message
> is passed per day, and 1 number is uttered per second, there are only
> 86,400 possible start points. Assuming there is a better attack than
> brute force (where all 86,400 possibilities can be attacked in
> parallel) - then this is only equivalent to extending the key length
> by some 16 bits. (Assumes ciphertext indistinguishable from random,
> which it should be).
>
> Unless a weak algorithm must be used (e.g. to fit in with US export
> policy), then extending the keylength is probably a better
> way to add security.
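(Checking the quoted figure: 86,400 one-second start points per day are
worth log2(86400) bits of extra work for the attacker:)

    import math
    print(math.log2(86400))   # 16.398..., i.e. "some 16 bits"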
It is an unstated (my apology!) assumption of mine that one is using
a common type of symmetric algorithm where the key length cannot be
changed by the user. With a variable key length, or more generally a
parametrized algorithm, one can of course very easily render the
cracking time of a single message far beyond the capability of the
analyst. I used to puzzle over why user-choosable parametrized algorithms
appear never to have been favoured in major symmetric crypto system
designs to date.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On employing message-decoys
Date: Tue, 31 Aug 1999 00:23:31 +0200
[EMAIL PROTECTED] wrote:
>
> 1. How does Bob know which message is correct? There needs to be a way
> for Bob to know what the correct message is without it being too obvious.
> Beginning a message with "Hey Bob, this is the right one!" is not very
> effective.
Agreeing on a certain hint to help is not necessary. The recipient of course
has much more work to do than in the case of no dummy messages. If he tries
the correct key and only one plaintext comes out readable (the keys
encrypting the dummy messages need not be known to him), that must be
(theoretically: almost certainly is) the true message.
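As a sketch of that procedure (the cipher and the readability test are
stand-ins; a real design would more likely attach a MAC or other redundancy
to the true message):

    def pick_true_message(key, messages, toy_decrypt):
        def readable(pt):                    # crude stand-in: mostly printable
            return sum(32 <= b < 127 for b in pt) > 0.95 * len(pt)
        # Bob tries his one key on every message; only the true one
        # should decrypt to something readable.
        return [pt for pt in (toy_decrypt(key, m) for m in messages)
                if readable(pt)]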
> If you can encrypt with different algorithms, you could try something
> like this.
The availability of different algorithms would certainly greatly
improve the position of the communication partners relative to the
adversary. As was discussed recently in another thread, one can
switch among the algorithms to encrypt the (proper) messages. Another
possibility is to do superencipherment with the algorithms applied
in different (permutation) order.
M. K. Shen
------------------------------
From: Bob Silverman <[EMAIL PROTECTED]>
Subject: Re: 512 bit number factored
Date: Mon, 30 Aug 1999 21:47:38 GMT
In article <[EMAIL PROTECTED]>,
Anton Stiglic <[EMAIL PROTECTED]> wrote:
> > Your facts are confused.
> > > Are you talking about RSA labs?
> > > First of all, Pomerance (1982) came up with QS
> >
> > Try 1980.
> >
>
> These are the refs I have for Pomerance QS:
>
> @InCollection{Po82,
> author = "C. Pomerance",
<snip>
> What ref do you have that states 1980?
(0) Carl Pomerance himself.
(1) Joe Gerver was already working on an implementation before
Carl's article appeared (using a 16-bit HP machine).
(2) Articles always take a year or two to appear in print. Date
of publication is not date of discovery.
(3) I was *there*. Discussion was already going on over internet
mail (using old-style internet addresses) (e.g. mail linus!ihnp4! etc.)
about the algorithm prior to its publication.
[some nostalgia: anyone else remember ihnp4 as a major node on
the net prior to 1984's revision of internet addressing?]
>
> > > Pollard came up with NFS
> > > (1993).
> >
> > Try 1989.
> I was thinking GNFS
Once again, once Pollard sent out his short note, "factoring with cubic
integers", [in 1989] it was immediately obvious to everyone working on
factoring algorithms that the algorithm could be generalized. It was clear,
even to me with limited knowledge of algebraic number theory, that the
method would generalize for any extension which was a UFD and whose units
were not too large. [I did not know how to make it work for non-UFD's] It
took a while for others to work out the details of not having to explicitly
find generators of the units and ideals, but the method was known (in rough
preprints) long before anything was published.
--
Bob Silverman
"You can lead a horse's ass to knowledge, but you can't make him think"
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************