Cryptography-Digest Digest #146, Volume #10      Tue, 31 Aug 99 01:13:04 EDT

Contents:
  Re: Can I export software that uses encryption as copy protection? (Eric Lee Green)
  Re: Can I export software that uses encryption as copy protection? ("Timur Tabi")
  Re: public key encryption - unlicensed algorithm (Paul Rubin)
  Re: public key encryption - unlicensed algorithm (Paul Rubin)
  original source code for robert morris crypt.c circa 1970's (dan braun)
  Which of these books are better ? ("JaeYong Kim")
  Re: WT Shaw temporarily sidelined (Anthony Stephen Szopa)

----------------------------------------------------------------------------

From: Eric Lee Green <[EMAIL PROTECTED]>
Crossposted-To: misc.legal.computing
Subject: Re: Can I export software that uses encryption as copy protection?
Date: Mon, 30 Aug 1999 20:50:14 -0700

"Trevor Jackson, III" wrote:
> Eric Lee Green wrote:
> > Yawn. That's the code you binary-patch a "JMP" around. I know (former)
> > crackers who used to do that in their sleep. (Or at least at a time of
> > night when they SHOULD have been asleep!).
> 
> Yup.  And then you find the app silently fails to operate because the footprint of the stomp is
> necessary to its continued functionality.

Yes, those are the tricky ones (grin). They require doing a
search-and-replace of all the places that do a "movem" from the stomped
place to instead have a movem from wherever we hid our copy of the
stomped place (grin). 
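That sort of patch amounts to a byte-level search-and-replace over the binary image. A toy sketch of the idea (the opcode bytes and addresses here are invented purely for illustration, not a real instruction encoding):

```python
# Hypothetical sketch of patching every reference to a "stomped" address
# so it reads from a hidden copy instead. The opcode bytes and addresses
# are made up for illustration; a real patch would target the actual
# instruction encoding of the program being modified.

def patch_references(image: bytes, old_ref: bytes, new_ref: bytes) -> bytes:
    """Replace every occurrence of old_ref in the binary image with new_ref."""
    if len(old_ref) != len(new_ref):
        raise ValueError("patch must not change instruction length")
    return image.replace(old_ref, new_ref)

# Toy "binary": two loads from the stomped location, separated by a NOP.
OLD = bytes.fromhex("4cdf0010")   # pretend: load from the stomped address
NEW = bytes.fromhex("4cdf2040")   # pretend: load from our hidden copy
image = OLD + b"\x90" + OLD
patched = patch_references(image, OLD, NEW)
```

In practice the hard part is finding all the references, not rewriting them; that's what the disassembler is for.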

> > > If the app fights the debugger hard enough your patch effort will be larger than the
> > > effort required to write the application from scratch.  
> > Congratulations, you just discovered that crackers aren't sane! The more
> > effort it takes, the more prestige that crackers get by breaking it, and
> > the more they'll trumpet the fact and feature your product on "warez"
> > sites.
> No.  You are assuming the software, once stripped of its protection, can be executed on any
> machine.  That is trivially false.

Really? (shrug). Proof by assertion, I guess. Unfortunately, I haven't
found an example of something uncrackable, just things that a
cost-benefit analysis says no sane person wants to crack. Of course, the
problem with that statement is assuming that all people are sane, a
statement which does not hold up (read the local newspaper if you doubt
me :-). 

> > couple more that I haven't recalled at the moment, since I've been
> > "doing" Unix since 1985. I currently work for a Unix software house that
> > has ports to every major Unix platform, most of which we have in our
> > porting lab, and which I refer to regularly in order to, e.g., make sure
> > my software works properly on both little-endian and big-endian
> > machines.
> 
> Then you should know the proper definition of virtual machine.  BSD didn't have a virtual
> machine (except for the embedded PDP-11 mode on VAXen), it had virtual memory.  Privilege levels
> do not constitute a virtual machine.  A virtual machine is privilege levels PLUS an emulator
> that makes the kernel and/or supervisor-mode instructions appear to operate.

Sorry, while what you state is a special case of a virtual machine, the
concept of a virtual machine encompasses a far larger sphere. A virtual
machine can be as completely virtualized as IBM's VM system, which even
goes down to virtual channel controllers and virtual DASD devices, or it
can be as simple as a "sandbox" that runs programs in a controlled
environment. For example, see the Java Virtual Machine (JVM), which
basically presents a simplified processor to the programs that run under
it.

You may not agree that the typical Unix execution environment of memory
plus a very smart "trap" instruction constitutes a virtual machine, but
it inarguably has the same properties as the JVM -- it presents a
simplified processor (that can't run supervisor-mode instructions) in
order to "sandbox" programs. It just doesn't try to virtualize a whole
machine environment the way that IBM's VM does. 

> No, that's pretty much an abstract machine not a virtual machine. 
> A virtual machine is an
> abstract machine that is enforced by the hardware.

Correct, the Unix virtual machine is enforced by the hardware. Just try
accessing memory that doesn't belong to you, or try to execute an "inb"
or "outb" instruction to i/o ports that you don't own (grin). It's a
very SIMPLE virtual machine, one that doesn't do a lot besides let you
call that very smart "trap" instruction, but it's still a virtual
machine. 
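That hardware enforcement is easy to observe on any modern Unix. A small demonstration (assuming a POSIX system and CPython; `ctypes.string_at(0)` is just a convenient way to dereference an unmapped address):

```python
# Demonstrate hardware-enforced memory protection: a child process that
# dereferences address 0 is killed by the kernel with SIGSEGV instead of
# being allowed to read memory it does not own. Assumes a POSIX system.
import signal
import subprocess
import sys

child = subprocess.run(
    [sys.executable, "-c", "import ctypes; ctypes.string_at(0)"],
    capture_output=True,  # swallow the crash noise
)

# A negative return code from subprocess means "killed by that signal".
killed_by = -child.returncode
print(killed_by == signal.SIGSEGV)
```

The program never gets a chance to read the memory; the MMU raises a fault and the kernel delivers the signal, which is exactly the "very SIMPLE virtual machine" being described.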

> And there are always ways for a well-behaved program to obtain the necessary permissions.

Definitely. In the Unix environment, for example, you would make the
program SUID root. In the Multics environment, you'd go through a call
gateway to a lower ring when you wanted to execute privileged
instructions. Both have to be done by the system administrator. On my
machines, I *AM* the system administrator (grin). 

> > >  Most don't use virtual machines because
> > > virtual machines became common far later than Unix did.
> >
> > Err, VM/CMS? Multics? Hmm? (I don't know about VM/CMS, but Multics
> > *certainly* precedes Unix). 

> The systems you are talking about have virtual memory not virtual machines. 

IBM's VM definitely *DOES* virtual machines. The VM environment emulates
a simplified IBM mainframe all the way down to channel controllers and
DASD devices, and will run most IBM operating systems (CMS is the one
traditionally run under the VM environment). 

Multics had a VM system also. For example, you could run the Honeywell
GCOS operating system in a virtual machine under Multics in order to run
GCOS software (why did Honeywell have two operating systems? that's what
their customers wondered too, which is why Honeywell is no longer in the
computer business!). 

> > If you are running SUID root then you can request specific i/o ports and
> > such. But for normal programs you do not have that option.
> 
> Is all of your experience on Unix?  There are legacy systems running applications under up to
> five layers of emulation.  Some of those apps were written in the 50s!

Well, I have a little bit of experience with VM/CMS and Multics, and
back in the early 80's I did a lot with Commodore 64's and later the
Amiga, as well as a little Windows in the mid 90's (when it appeared
that Windows NT would be the wave of the future -- I was HEAVILY pushing
my employer at the time to do an NT port of our software). But yeah,
most of my experience is as a Unix geek (grin). 

> By making it arbitrarily difficult to "crack" an application I can exclude an arbitrarily
> large fraction of users from "cracking" the app.  That fraction will always be less than one,
> but I can get unreasonably close.

Cost-benefit analysis. For games, there's a good cost-benefit for
putting huge amounts of arbitrary hurdles in front of potential
crackers. A game has a shelf life of only a few months, so if you deter
the crackers for those few months, you've probably earned enough in
sales by not having your game on the "warez" sites to justify the cost
of doing the convoluted copy protection. In addition, games run very
close to the hardware in most environments (such as the Win9x/DOS
environment or Mac environment), meaning that there are things you can
do to make it extremely difficult for a cracker, such as taking over the
whole screen so that even if he loads your program into a debugger he
can't see what's going on. 

On the other hand, if I'm selling, say, school administration software,
it's not even worth putting a license key scheme on my software. The
laws and regulations governing schools (which determine which paperwork
and reports must be generated by the system) vary from year to year and
even from quarter to quarter, requiring school districts to have service
contracts in order to get their quarterly updates (and more importantly,
in order to receive the training in the new regulations that's necessary
to properly USE the updated software!). Being tied to their
administrative software vendor that way is far more reliable than a
license key scheme at assuring that all copies in use are actual
paid-for copies. 

Most consumer software is somewhere in between. In my employer's case,
the typical "cracker" is not going to buy it because there are free
programs that do much the same thing (though not as reliably or with as
much ease of use -- but these are folks who laugh at ease of use). Our
customers generally buy our product because there is a company standing
behind it with technical support, not because it's the latest greatest
thing that they "must" have. So unlike a game company, where delaying
the crackers by a month can result in a million dollars in extra sales,
we don't gain anything by delaying the crackers. In our particular case,
a cost-benefit analysis says spending huge amounts of resources
obfuscating our program to make it hard for crackers to strip out the
license key mechanism is not cost effective. Still, we do want to make
it hard for the casual user to pirate our program -- so we do use strong
authentication as part of our license key mechanism to ensure that the
license key was in fact issued by us. If a cracker replaces the
subroutine call with a few NOPs, no big deal -- he's not one
of our customers anyhow. 
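A minimal sketch of that kind of license-key authentication, here using an HMAC tag over the license data (the field names and secret are invented; the post doesn't say which primitive the actual product uses):

```python
# Sketch of a license key carrying an authentication tag issued by the
# vendor. The secret, field names, and format are hypothetical.
import hashlib
import hmac

VENDOR_SECRET = b"known only to the vendor"  # hypothetical

def issue_key(license_data: bytes) -> bytes:
    """Vendor side: append an HMAC-SHA256 tag to the license data."""
    tag = hmac.new(VENDOR_SECRET, license_data, hashlib.sha256).hexdigest()
    return license_data + b":" + tag.encode()

def verify_key(license_key: bytes) -> bool:
    """Program side: accept the key only if the tag checks out."""
    data, _, tag = license_key.rpartition(b":")
    expected = hmac.new(VENDOR_SECRET, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected.encode())

key = issue_key(b"user=alice;seats=5")
```

Note that an HMAC verifier necessarily ships the secret inside the program, so a cracker who extracts it can forge keys as well as NOP out the check; a public-key signature scheme, where the binary embeds only the verification key, is closer to "the license key was in fact issued by us." Either way, the point of the paragraph stands: the check itself is one subroutine call away from being patched out.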
 
> > Thank you. Actually, I think it does not rule out defense as a concept
> > but, rather, defense against crackers as a concept. A certain amount of
> > protection will keep out the script kiddies. Just don't overestimate
> > what can be done on a defensive basis against a determined cracker.
> 
> We will never agree upon this because you are simply asserting your conclusion as fact.

And vice-versa (grin). 

> yours.  And some publicly launched attacks by people who claimed, like yourself, to be
> experienced crackers.

I haven't claimed to be an experienced cracker (grin). In the early 80's
when I was a kid I did a little bit, mostly to strip copy protection off
of legally purchased programs that were irritating me with the huge
variety of idiotic behavior that early copy protection schemes used (the
biggest being that I couldn't put them on a real disk drive but had to
keep them on that idiotic floppy), but it was too big a pain in the rear
to strip off all those layers of encryption (took me a couple of weeks
to do it on the first program) and I didn't get off on the "warez kidz"
bit anyhow. After a couple of months I moved on to more interesting
things. But I kept some of the same friends, some of whom did NOT move
on to more interesting things. They were an easy source of
un-copy-protected versions of the copy-protected software that I
purchased (grin). (The whole problem with copy protection was that it
made the programs so slow to load, and I couldn't copy the data to a
real disk drive... that's why copy protection eventually died as a
concept, to be replaced by license management, a concept where I have no
difficulties because I purchase my software legally in the first place
or else use free alternatives). 

But anyhow, one thing the experience taught me was that you can't be a
successful "white hat" without having at least close contact with "black
hats" at some point in time. I find those kinds of experiences quite
valuable when I'm trying to devise secure systems. You can't be a
successful "white hat" unless you can think like a "black hat". One
thing I find disturbing on the part of so many "white hats" is that they
suffer from a bad case of head bloat, believing that their systems are
uncrackable. Having known some very VERY sharp crackers who could shred
these "white hats'" "uncrackable" systems in short order (well, in a
couple of weeks anyhow), all I can do is shake my head and sigh.  
 
> With enough effort that app could have been cracked.  But it would have taken a substantial
> fraction of the effort required to implement the whole thing.

Granted. A couple of weeks of labor is worth it if it saves you a couple
of weeks of loading time in the future (i.e., stripping off the copy
protection from those early copy-protected programs), but not worth it
if the product itself only costs $300. Heck, we're talking about 80
hours of labor, at 30 dollars an hour (the kind of rates that a sharp
kiddo still in school can get contracting), that's $2400 worth of labor
alone!
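The back-of-the-envelope arithmetic above, spelled out (the hours and rate are the post's own assumptions):

```python
# Cost-benefit check from the paragraph above.
hours = 2 * 40        # "a couple of weeks" of full-time labor
rate = 30             # dollars/hour, the assumed student-contractor rate
labor_cost = hours * rate
product_price = 300

print(labor_cost)                  # 2400
print(labor_cost > product_price)  # cracking costs more than buying
```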

> > Given that the simplest of authentication schemes will suffice to give
> > the script kiddies fits, I made a cost-benefit analysis and decided that
> > further obfuscation was not necessary (don't get me wrong, it's a
> > cryptographically strong license information authentication scheme, but
> > any determined cracker with a binary debugger and binary editor could
> > crack it). My boss, who got his start in a similar way, agrees, saying
> > "the kind of people who can defeat what you've done aren't going to buy
> > our product anyhow."
> 
> Sounds like a reasonable decision.

Just as levels of obfuscation that'll keep crackers out of a hot game
for a few extra weeks are a reasonable decision. But for most programs in
the middle, where they're not "play the game for a few weeks then go get
the next hot game" type programs, it doesn't make sense to spend tens of
thousands of dollars of programmer time in order to keep crackers out of
them for a few extra weeks -- you won't make up in sales what it cost
to keep them out, since a dull boring ole' word processor or tape backup
program isn't going to be a "must have" for the crackers capable of
stripping out your license key mechanism (unlike the latest hot game,
which they "must" have, even if it requires paying money!). 

-- 
Eric Lee Green    http://members.tripod.com/e_l_green
  mail: [EMAIL PROTECTED]
                    ^^^^^^^    Burdening Microsoft with SPAM!

------------------------------

Crossposted-To: misc.legal.computing
From: "Timur Tabi" <[EMAIL PROTECTED]>
Reply-To: "Timur Tabi" <[EMAIL PROTECTED]>
Subject: Re: Can I export software that uses encryption as copy protection?
Date: Tue, 31 Aug 1999 04:13:56 GMT

Excuse me, but would it be possible for one of you to answer my original
question?  There are 20 posts on this thread, and none of them answers my
question!

On Mon, 30 Aug 1999 15:55:04 -0700, Eric Lee Green wrote:

>"Trevor Jackson, III" wrote:
>> Eric Lee Green wrote:
>> >  it is physically on my disk. I don't even necessarily have to
>> > replay it. The first major program that I ever wrote was a commenting
>> > disassembler (i.e., you could add comments that went with various memory
>> > addresses), and then I could patch the binary directly on the disk prior
>> > to loading it.
>> 
>> OF COURSE it's easy to patch a disk image.  That's like solving a monoalphabetic cipher.
>> Trivial.  Your success in patching programs is to your credit, but solving easy problems
>> does not support your contention that all problems are as easy to solve.
>
>All I'm noting is that encrypting the license key portions of your
>program is going to be more of a nuisance factor than anything else.
>It'll stop a few script kiddies at best, but "real" crackers will view
>it as a challenge and swiftly strip out the license portion of your
>code. 
>
>> > They failed. If I have physical access to your
>> > > > software, I can load it into a binary debugger, trace its execution, and
>> > > > 'break' it.
>> 
>> You might be able to, but it wouldn't be as simple as the sentence above.  For instance,
>> if you are tracing from point to point with breakpoints, you'll have a bit of trouble
>> with the code that stomps the breakpoint trap vector and breakpoint trap handler.  If you
>> are single stepping, you'll have a bit of trouble with the code that stomps the trace
>> trap vector and handler.
>
>Yawn. That's the code you binary-patch a "JMP" around. I know (former)
>crackers who used to do that in their sleep. (Or at least at a time of
>night when they SHOULD have been asleep!). 
>
>> > It requires enormous hardware support only if it's not on your disk
>> > drive. As I mentioned, I can disassemble it while it's not running, and
>> > patch the binary directly to put a breakpoint after the end of the
>> > decryption routine that jumps into the debugger.
>> 
>> If the app fights the debugger hard enough your patch effort will be larger than the
>> effort required to write the application from scratch.  It will still be possible to
>> crack the application, but it wouldn't be sane to do so unless you wanted bragging
>> rights.
>
>Congratulations, you just discovered that crackers aren't sane! The more
>effort it takes, the more prestige that crackers get by breaking it, and
>the more they'll trumpet the fact and feature your product on "warez"
>sites. 
>
>> > Not on most modern operating systems. For example, Unix runs all
>> > programs in a virtual machine,
>> 
>> Really. How many versions of Unix have you used? 
>
>Let's see. BSD 4.2, BSD 4.3, FreeBSD, OpenBSD, Sys V.2, Sys V.3, Sys V.4,
>SunOS, HPUX, Solaris, Linux, Xenix, Pyramid OSx, SCO Unix... probably a
>couple more that I haven't recalled at the moment, since I've been
>"doing" Unix since 1985. I currently work for a Unix software house that
>has ports to every major Unix platform, most of which we have in our
>porting lab, and which I refer to regularly in order to, e.g., make sure
>my software works properly on both little-endian and big-endian
>machines. 
>
>I think you are confusing a virtual machine with a virtualized CPU. Unix
>programs run on a virtual machine that consists of: memory (and only
>memory that it has been allocated by the OS), and a "trap" call to enter
>the operating system. That's pretty much it. If a Unix program attempts
>to do things like, e.g., set interrupt vectors, or directly access a
>hard drive controller, an exception will be generated and the execution
>of the program suspended (what the OS does then depends on what handlers
>have been set up but most probably you will NOT be setting an interrupt
>vector or writing bytes to the hard drive). Thus you obviously cannot
>run an operating system within the Unix virtual machine, since it is not
>a virtualized CPU but rather a simplified pseudo-machine that just
>happens to have a VERY intelligent "trap" call (grin). 
>
>>  Most don't use virtual machines because
>> virtual machines became common far later than Unix did.
>
>Err, VM/CMS? Multics? Hmm? (I don't know about VM/CMS, but Multics
>*certainly* precedes Unix). It doesn't really matter, because it appears
>that we are talking about different things. You are talking about a
>virtualized CPU, and I am talking about the virtual memory/system call
>exception scheme that every major Unix variant uses as its virtual
>machine (the only ones that I am aware of that do not are certain older,
>cruder Xenix implementations, and various academic "toys" like Minix). 
>
>> Of the versions that do use virtual machines, there are always ways to escape the
>> virtualization.
>
>If you are running SUID root then you can request specific i/o ports and
>such. But for normal programs you do not have that option. 
>
>> As the attacker you enjoy the attacker's fundamental advantage of selecting the point of
>> attack, where the defender, the author of the application, has to defend "everywhere".
>> This fact does lead to a dismissal of defense as a concept.
>
>
>Thank you. Actually, I think it does not rule out defense as a concept
>but, rather, defense against crackers as a concept. A certain amount of
>protection will keep out the script kiddies. Just don't overestimate
>what can be done on a defensive basis against a determined cracker. 
>
>> > Cryptographic systems can only secure communications. They cannot stop
>> > an attacker  from viewing the plaintext by "tapping" the decryption
>> > engine. Given physical access to the decryption engine, it can be rigged
>> > to spit out the plaintext to me at the same time that you view it.
>> > Without understanding this, you will never be able to create a secure
>> > cryptographic system.
>> 
>> If you can undetectably "tap" the decryption engine you can certainly obtain the
>> plaintext.  If you cannot the rest of your argument becomes moot.
>
>Correct. I'm just pointing out that there's a number of ways to
>undetectably "tap" the decryption engine on most modern operating
>systems. 
>
>> All such defensive systems have to be justified in terms of cost/benefit.  I'll take your
>> word for it that the cost of more "hardening" of your code is not justified.  But that
>> has nothing to do with the possibility of such hardening being useful.
>
>Granted. It'll keep out the script kiddies and casual browsers. But as
>far as hard-core crackers, it'll just be another challenge for them to
>boast about. 
>
>Given that the simplest of authentication schemes will suffice to give
>the script kiddies fits, I made a cost-benefit analysis and decided that
>further obfuscation was not necessary (don't get me wrong, it's a
>cryptographically strong license information authentication scheme, but
>any determined cracker with a binary debugger and binary editor could
>crack it). My boss, who got his start in a similar way, agrees, saying
>"the kind of people who can defeat what you've done aren't going to buy
>our product anyhow." 
>
>-- 
>Eric Lee Green    http://members.tripod.com/e_l_green
>  mail: [EMAIL PROTECTED]
>                    ^^^^^^^    Burdening Microsoft with SPAM!




------------------------------

From: [EMAIL PROTECTED] (Paul Rubin)
Subject: Re: public key encryption - unlicensed algorithm
Date: 31 Aug 1999 04:25:51 GMT

In article <[EMAIL PROTECTED]>,
shivers <[EMAIL PROTECTED]> wrote:
>>Have you looked at the SET protocol ?
>
>no, I've never heard of it - is it any good?  I.e. strong and unlicensed?

SET is a specialized and very complicated protocol being pushed by
Visa for credit card transactions.  See www.setco.org for details.
It is like EDI for online credit card processing, with special message
fields for all kinds of purchase-specific data such as the amount of
gas left in the tank of a rental car when you return it.  It is almost
certainly not what you want.

------------------------------

From: [EMAIL PROTECTED] (Paul Rubin)
Subject: Re: public key encryption - unlicensed algorithm
Date: 31 Aug 1999 04:35:08 GMT

In article <[EMAIL PROTECTED]>,
shivers <[EMAIL PROTECTED]> wrote:
>Further to my original message, some details about what it's for:
>
>The main purpose is for the development of a _very_ secure online credit
>card submission system - where the details stay encrypted all the way from
>the user's desktop to the serving company's payment processing desk.

What you want is server-gated cryptography (SGC), a system that allows
current Netscape and Microsoft browsers (even the 40 bit versions) to
do 128-bit SSL when a special server certificate is installed.  It's
almost certainly not worth the hassle of implementing your own
cryptography in Java, which even if it works is likely to worsen the
user experience by slowing down the transaction with the public key
calculation.  Note also that generating good random session keys in an
applet is slow and/or difficult.

See http://www.verisign.com/server/prd/preq.html for info about
obtaining SGC certificates.

That said, if you, the server, and the users are all in the UK, you're
out of reach of both the RSA patent (RSA is patented only in the US) and
of the US cryptography export restrictions.  The UK has some export
restrictions but from what I understand, they are much more relaxed
than the US's.

------------------------------

From: dan braun <[EMAIL PROTECTED]>
Subject: original source code for robert morris crypt.c circa 1970's
Date: Mon, 30 Aug 1999 23:59:25 -0400
Reply-To: [EMAIL PROTECTED]

Does anybody have a copy of the original (circa 1970?) source code for
Robert H. Morris' crypt.c?
Thanks in advance,
Dan
--
 Dan Braun - Broadcast Engineer
                    Toronto, Ontario, Canada
                    [EMAIL PROTECTED], [EMAIL PROTECTED]



------------------------------

From: "JaeYong Kim" <[EMAIL PROTECTED]>
Subject: Which of these books are better ?
Date: Tue, 31 Aug 1999 04:23:24 GMT

for both conceptual understanding and mathematical understanding..
1. Applied Cryptography, Bruce Schneier
2. Handbook of Applied cryptography, Menezes et al
3. Cryptography: Theory and Practice, Stinson

and I suspect the free electronic distribution of the Handbook is due to the
upcoming publication of the next edition.. what do you think?
please answer

JaeYong Kim
--
[EMAIL PROTECTED]




------------------------------

From: Anthony Stephen Szopa <[EMAIL PROTECTED]>
Subject: Re: WT Shaw temporarily sidelined
Date: Mon, 30 Aug 1999 20:50:19 -0700
Reply-To: [EMAIL PROTECTED]

[EMAIL PROTECTED] wrote:

> In article <[EMAIL PROTECTED]>,
>   [EMAIL PROTECTED] (John Savard) wrote:
> > [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote, in part:
> >
> > >I think he is in Texas
> > >is he not.
> >
> > Yes, I think so too. And I thought you were in New York or
> > thereabouts, so you probably wouldn't get the chance to just drop by.
> >
> > But I do wish him a speedy recovery.
> >
> > John Savard ( teneerf<- )
> > http://www.ecn.ab.ca/~jsavard/crypto.htm
> >
> This may be a dumb question, but what's wrong with him?
>
> Sent via Deja.com http://www.deja.com/
> Share what you know. Learn what you don't.

Odds are he is in the hospital in the same city where his company is
located.



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
