Why bother turning off verification when you can just reverse engineer the
assembly (e.g. with ILDASM), change the strong name to use a key pair of your
own, and recompile with whatever modifications you want in place?
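The round trip is roughly the following (a sketch only; `MyApp.exe` and
`mykey.snk` are placeholder names, and exact switches can vary between SDK
versions):

```shell
REM Disassemble the target assembly to IL text (ildasm ships with the .NET SDK)
ildasm MyApp.exe /out=MyApp.il

REM ...edit MyApp.il to taste, e.g. patch out a licensing check...

REM Generate a fresh key pair and reassemble under the attacker's own strong name
sn -k mykey.snk
ilasm MyApp.il /exe /key=mykey.snk /output=MyApp.patched.exe
```

The result is a validly signed assembly -- just signed with the attacker's
key rather than the original publisher's.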

If your goal is to stop someone modifying your software and then running the
modified version, the only way to achieve this is not to give them your
software in the first place.

This attack works fine even if there are no bugs in the CLR and you don't
have administrative rights on the machine in question.


--
Ian Griffiths
DevelopMentor

----- Original Message -----
From: "John St. Clair" <[EMAIL PROTECTED]>


I wouldn't really refer to this scenario as "hacking" per se. More like
violating the terms of the licensing/stealing/etc.

In fact, what you are sketching is quite trivial. The CLR isn't going to
help at all.

You could, for instance, strongly-name your assemblies and load them
locally (i.e., not from the GAC). This would buy you run-time verification
checking (as opposed to the GAC, which verifies only at install time).
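For reference, a minimal sketch of that setup (file names are placeholders;
in this era's tooling the key file is referenced from source via the
AssemblyKeyFile attribute rather than a compiler switch):

```shell
REM Generate a key pair for strong-name signing
sn -k mykey.snk

REM The assembly picks the key up via an attribute in the source, e.g.:
REM   [assembly: System.Reflection.AssemblyKeyFile("mykey.snk")]

REM After building, confirm the strong-name signature verifies
sn -v MyLib.dll
```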

Unfortunately, since we can assume that the "cracker" owns his own
machine, he could just turn off verification (as discussed previously),
reverse-engineer your code (see Anakrino), remove any licensing checks,
and then re-compile the assemblies. Since verification is turned off,
all bets are off.
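Turning verification off is a one-liner with the SDK's sn.exe (requires
admin rights; the assembly name is a placeholder):

```shell
REM Register the assembly for strong-name verification skipping
sn -Vr MyApp.exe

REM List the current skip entries, and undo the registration later
sn -Vl
sn -Vu MyApp.exe
```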

Re-distributing the "cracked" assembly would be relatively easy as well
-- you could even provide an installer (like Chris Sells does with
Wahoo) that turns off verification. This assumes that the end-users are
local admins with the ability to turn off verification.

For this scenario, you'd have to wait for something like Palladium...

John

John St. Clair
Prosjekt- og teamleder
Reaktor AS

> -----Original Message-----
> From: Moderated discussion of advanced .NET topics. [mailto:ADVANCED-
> [EMAIL PROTECTED]] On Behalf Of Trey Nash
> Sent: Tuesday, October 22, 2002 2:53 AM
> To: [EMAIL PROTECTED]
> Subject: Re: [ADVANCED-DOTNET] tamper proof assembly question
>
> Hi all,
>
> OK, let me explain a little further. :-)  I'm not looking at a
> scenario where I'm trying to avoid people hacking into a machine
> remotely.
>
> Many apps out there require serial numbers.  The compiled code of the
> app, whether it be IL or i386 assembly, typically will boil down to a
> junction point in the code where you can simply replace a 'je/jne'
> with a 'jmp' (in the i386 assembly case).  See what I mean?  Hacker
> finds that weak point, hex edits the exe, and it's hacked.
>
> Now then, suppose we want to rely on the CLR to prevent this.  The CLR
> then becomes the target.  The hacker then would have to hack the CLR
> to allow it to load assemblies without verifying their integrity.  My
> question is, does anyone have a metric as to how difficult this will
> be for a determined hacker?
>
> Thanks,
>
>    -Trey
>
> > One of the worst case scenarios would be for someone to ship a
> > hacked mscorlib then somehow run sn.exe on the deployment machine to
> > turn off verification checking on mscorlib. There are 3 problems the
> > bad guy has to overcome:
> >
> > 1. getting the fake mscorlib onto the machine
> > 2. getting sn.exe onto the machine (it only ships with the SDK and
> >    not the redist)
> > 3. running the application (sn.exe) under an admin account
> >
> > So at least make 2 harder by only putting the redist onto deployment
> > machines.

You can read messages from the Advanced DOTNET archive, unsubscribe from
Advanced DOTNET, or subscribe to other DevelopMentor lists at
http://discuss.develop.com.
