Cryptography-Digest Digest #373, Volume #12       Mon, 7 Aug 00 10:13:01 EDT

Contents:
  Re: Q: Functions that are slow to invert (Tim Tyler)
  Hey I'm a nubee, can anyone help ? (ya_boy)
  Re: Software package locking ("Trevor L. Jackson, III")
  Re: DES implementation woes ([EMAIL PROTECTED])
  Re: Secure Operating Systems ("Trevor L. Jackson, III")
  Re: OTP using BBS generator? (Mark Wooding)
  Just how easy are some codes to crack?! (Stu P)

----------------------------------------------------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Q: Functions that are slow to invert
Reply-To: [EMAIL PROTECTED]
Date: Mon, 7 Aug 2000 12:03:43 GMT

Mok-Kong Shen <[EMAIL PROTECTED]> wrote:

: Are there practically implementable functions that are easy
: to compute but rather expensive (but not comparable to the
: oneway functions) to invert? What is desirable are such that
: the cost factor could be varied to suit one's need by varying,
: say, the size of the function argument.

There's a family of functions with this property described at:

  http://alife.co.uk/ca/largeinverse/

I believe these have the property you're asking after, when considered
from the point of view of hardware implementation.

Iterating forwards is always very rapid - while iterating backwards can be
made arbitrarily slow by increasing the size of the state.

In software, on a fundamentally serial machine, this asymmetry tends to
vanish, though.
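
Even in software, though, one can illustrate the shape of the asymmetry. The sketch below is a toy of my own, not the cellular-automaton construction at the URL above: one forward step is constant-time, while inverting a step by exhaustive search costs time proportional to 2^k, tunable by the state size k.

```python
def step(x, k):
    """One forward iteration: a cheap mixing step on a k-bit state."""
    return (x * x + 1) & ((1 << k) - 1)

def invert_step(y, k):
    """Invert one step by exhaustive search over the k-bit state space.
    Cost grows as 2^k, so the work factor is set by the state size."""
    for x in range(1 << k):
        if step(x, k) == y:
            return x
    return None
```

The step function is many-to-one, so the search may return a different preimage than the one used; for the illustration that doesn't matter.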
-- 
__________  Lotus Artificial Life  http://alife.co.uk/  [EMAIL PROTECTED]
 |im |yler  The Mandala Centre   http://mandala.co.uk/  Namaste.

------------------------------

From: ya_boy <[EMAIL PROTECTED]>
Subject: Hey I'm a nubee, can anyone help ?
Date: Mon, 07 Aug 2000 08:04:12 -0500

Just wondering, without going into all sorts of technical stuff, what
is a good encryption program for encrypting files.  Not necessarily PGP.
I don't want to send them over the net but secure my company's records.
Will PGP encrypt files well, or do you actually have to have the file you
encrypted addressed to someone's public key?
Any help deeply appreciated.


------------------------------

Date: Mon, 07 Aug 2000 09:28:03 -0400
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Software package locking

[EMAIL PROTECTED] wrote:

> Trevor, a couple of questions/comments:
>
> 1) You mention that there are certain tricks you can do that will foil
> debuggers, like overwriting the breakpoint interrupt address, etc.
> Which operating system are you talking about?  It doesn't seem like this
> would work for Win32 programs where the user-level programs don't have
> the privileges that you're talking about.  Since everything is
> practically virtualized, you wouldn't have the control that you
> ultimately require.  It might work for DOS, but who cares?

As I mentioned in the original description software running at less than
maximum privileges is wide open.  Using your example of Win32 programs, you'd
need a VXD or something similar to circumvent the virtualization.  There are
lots of nooks and crannies in the Windows(!tm) API that represent ways to gain
privilege.

>
>
> If you, yourself, implemented this type of "anti-debugging" protection,
> how would you, as a software provider, implement bug fixes?  It seems as
> though you wouldn't be able to debug your own program effectively.

Good question.  The simple answer is that the security features are
configurable at build time.  The fundamental application can be debugged
without any security active.  This then provides a base line for regression
testing of the protected versions.  The components of the security such as
function pro/epilogs and vector table intercepts can be individually
configured.  More interestingly the syndromes for the debuggers to be
inhibited are independent.  So, to use your examples, I can test the Codeview
configuration under Soft-ICE, the Soft-ICE configuration under Watcom, and
Watcom under Codeview.

The situation is purposefully similar to writing a debugger.  Can it debug
itself?  Not if it insists on controlling some unique resource.  Assume that
the debugger replaces the DPMI service layer or something similar.  Then
debugging the debugger implies that there are two competing replacements for
the DPMI service.  This causes problems.  One of the aspects of undebuggable
software is that it utilizes the same set of unique resources as the debuggers
use.

Testing and debugging the security software has to be done carefully, and is,
honestly, a PITA.  But it's not that complicated as the individual components
can be tested very thoroughly in isolation.  It is only the combinations that
can cause head scratching.

>
> 2) Numega makes their living making quality programs such as Soft-ICE.
> If someone comes up with a "way" of foiling them, do you honestly
> believe they would stake their reputation and not try to find a way
> around it?  This is their bread and butter.

No.  I doubt NuMega cares.  It's been a long time since I've updated my
copies.  Does Soft-ICE run under Soft-ICE these days?

>
>
> I'm not sure how old you are, but I was around for the dawn of the PC
> from the earlier 80's onward.

I started a bit earlier than that.

>  Game copying was the biggest problem, and
> no technique, ever, has ever been able to foil crackers from copy
> protection.  Programs such as Copywrite and CopyIIPC would always be
> able to crack the latest and greatest software programs, because it was
> their specialty.

I've been on both sides of this.  Software security always has been a kind of
arms race.  Copy protection never impressed me as viable.  My applications
tend to revert to demo mode (no save) when they are copied.  But they never
complain.  That doesn't _stop_ the crackers, but it certainly slows them down
;-)

You need to understand the process of cracking, which involves backtracking
from the symptoms that are undesirable to the code that produces those
symptoms.  That is the process I've attempted to interrupt by eliminating
symptoms of failure.
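
A minimal sketch of that "no failure symptoms" idea (hypothetical names, not the actual scheme described): instead of refusing to run, which hands the cracker an error path to backtrack from, a failed integrity check silently disables one capability.

```python
def integrity_ok(image, expected_sum):
    """Stand-in integrity check; a real one would be better hidden."""
    return sum(image) & 0xFFFF == expected_sum

def build_app(image, expected_sum):
    """Build the app's save function.  A tampered copy never complains:
    it simply becomes a 'demo' whose save quietly does nothing."""
    save_enabled = integrity_ok(image, expected_sum)

    def save(data):
        # No error message, no obvious branch to set a breakpoint on.
        return data if save_enabled else None

    return save
```

Because the check happens once, far from the save path, there is no visible symptom at the moment of failure to backtrack from.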

>
> I'm no cracker, but I do know that running software on untrusted
> machines is one of the oldest and most unmanageable problems today.  No
> matter what trick you may have up your sleeve, there is nothing,
> absolutely nothing you can do to stop a determined cracker.

Agreed.  But the critical term in that observation is "stop".  There are lots
of things you can do that will "slow" them down.  And slowing them down to a
year or more is just as good as stopping them.

>  No one has
> been able to solve it, not by making software "undebuggable", not by
> having it check for network card MAC addresses, not by checking for BIOS
> info, etc.  What if the entire machine is virtualized, down to the BIOS?

Can't be.  There are always ways to detect the imperfections in the virtual
machine.  An application with connections can do this easily.  The virtual
model then has to include the external end of the connection.  If the
connection is to the software provider, they can refuse to be virtualized ;-)

Let's consider a concrete example.  Company X has poured several million
dollars of research into an application.  They expect it to provide a
competitive advantage for at least a year  but not much more than two years.
They want it to run on the desktop machines (PCs, Macs, etc.), but they don't
want their employees to be able to walk out the door with a functional copy
because it would materially reduce their advantage over their competition.  So
the software architect sequesters a few key components of the software on a
security server that recognizes only console users.  The
client programs on the employee's desks talk to the server over the LAN.

How much security is required in this situation?  There are two bounds on the
extent of the security.  First, one cannot make it more expensive to crack the
application than it would be to recreate it from scratch.  In crypto one
cannot make a cipher more secure than a brute force search of the key space.
So there is a dollar limit on the software security.

One also cannot make it take longer than the lifetime of the software to
crack.  Once the software is obsolete, protecting it is a waste of time.
Credit card numbers expire every ~4 years, so encrypting them with a cipher
that takes 100 years to crack is a waste of resources.  So there is a time
limit on the software security.

In principle a cracker could make the client program run without the server.
But it would take effort.  By careful software design one can inflate that
effort beyond the point of interest.  The application is not uncrackable.  But
all of the value of the software is protected, and then some.
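
The client/server split described above can be sketched in a few lines (a toy of mine, with a stand-in for the sequestered computation): the high-value function lives only on the server, so a copied client is incomplete by construction.

```python
import socket
import threading

def proprietary_transform(x):
    """Hypothetical stand-in for the sequestered, high-value computation."""
    return (x * 2654435761) & 0xFFFFFFFF

def serve_once(sock):
    """Security server: answers exactly one client request."""
    conn, _ = sock.accept()
    with conn:
        x = int.from_bytes(conn.recv(4), "big")
        conn.sendall(proprietary_transform(x).to_bytes(4, "big"))

def client_request(port, x):
    """Desktop client: useless without the server on the LAN."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(x.to_bytes(4, "big"))
        return int.from_bytes(s.recv(4), "big")

# Usage: start a one-shot server on a free port, then query it.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=serve_once, args=(srv,))
t.start()
result = client_request(port, 42)
t.join()
srv.close()
```

The cracker's job then becomes reimplementing `proprietary_transform` from observed traffic, which is the effort the architecture is designed to inflate.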

>
>
> These crackers today can read assembly code like they're reading a book.
>  I don't think anything you can throw at the good ones will be able to
> foil them for very long.

You are entitled to your opinion.

Mine is based on actual experience publishing entertainment software (games)
with a hefty price tag (~$800.00).  I take the position that the protection
effort was successful because it took four years for a competitor to appear,
and in that time no one mentioned having cracked the software.  Several people
mentioned (privately) having attempted to crack the software, but explained
that it was not worth their time to see it through.  From the hints I got
about their progress I'd estimate that the deepest penetration was about 30%
of the way to completion.  I had expected about half, so I was satisfied.


> The only question is whether or not your software is worth their time and
> effort.

Exactly.  My claim is that in almost every case one can make it _not_ worth
their time and effort.  ;-)



------------------------------

From: [EMAIL PROTECTED]
Subject: Re: DES implementation woes
Date: Mon, 07 Aug 2000 13:18:19 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (Mark Wooding) wrote:
> [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> > S-box input is 0, output is 0xD4C27AFE
>
> This should be 0xefa72c4d.  You're applying the S-boxes in the wrong
> order.  DES specs have funny bit numbering: the *most* significant bit
> is labelled 1.  Be careful of this!
Thanks. That was the most troublesome issue. I assumed little-endian
numbering and did not bother to check. Funny thing is, none of the three
references for DES I use define the order of bits.
DES now works fine on 205 test vectors. I think that's enough.
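
The bit-numbering pitfall can be captured in a small helper (a sketch with names of my own choosing): DES permutation tables are 1-based and count from the *most* significant bit, so reading "bit n" means shifting from the top.

```python
def des_bit(value, n, width):
    """Bit n of value under DES-spec numbering: bit 1 is the MOST
    significant bit of a width-bit quantity."""
    return (value >> (width - n)) & 1

def permute(value, table, width):
    """Apply a DES-style permutation table (1-based, MSB-first)."""
    out = 0
    for pos in table:
        out = (out << 1) | des_bit(value, pos, width)
    return out

# Toy 4-bit example: the table [4, 3, 2, 1] reverses the bits.
# permute(0b1010, [4, 3, 2, 1], 4) == 0b0101
```

Treating the tables as little-endian (bit 1 = LSB) gives exactly the wrong-order S-box symptom described above.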

> Also bear in mind that Eric Young's version keeps the block rotated by
> one place, which makes doing the E expansion on the fly slightly
> easier, and that he's reflected the world so that his implementation
> can be little-endian.  Bleugh.
Thanks for that, too. I didn't figure that out by myself.


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

Date: Mon, 07 Aug 2000 09:42:40 -0400
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Secure Operating Systems

Better than I cudda/wudda/shoudda done!  ;-)

Eric Lee Green wrote:

> Mok-Kong Shen wrote:
> > > Actually the original unix was derived from disenchantment with Multics,
> > > which was supposed to be a secure OS in addition to all things to all people.
> >
> > Could you give a reference to that fact? Thanks.
>
> Dennis Ritchie's web site http://www.cs.bell-labs.com/who/dmr/
>
> The problem with Multics, as explained by DMR, was not that Multics was
> secure. The problem was that Multics was slow, bloated, and
> uneconomical, and was not released as an actual commercial product until
> 1975, long after interest had died down regarding the system. In some of
> his history notes DMR notes that he liked using Multics, but five
> million dollars worth of hardware running the Multics simulator on a
> G.E. mainframe (since no real hardware was available until 1975) would
> barely support 8 users. When Multics was actually released with the
> "real" hardware, a triple-processor Level/68 installation would only
> support 100 users before slowing to a crawl (by the time you got to 120
> users, it was unbearably slow, with often a 5 second response time
> between hitting the ENTER key and something happening). By comparison,
> IBM mainframes of the same generation in the same price range (around
> $6M in 1975 dollars, when that was real money) would support several
> hundred users with reasonable response times.
>
> At multicians.org they have some interesting anecdotes from old Multics
> developers and users. The anecdotes from the Building M people (Building
> M was the converted department store in Phoenix where Honeywell exiled
> the Multics development team) are especially interesting. For example,
> one guy once proposed to Honeywell higher-ups that they include a $5
> time-of-day clock chip in the Multics console to get rid of the errors
> where the tape monkey/operator put the wrong time of day at bootup. The
> response he got back was that once the change was spec'ed, approved, had
> fifteen reams of design documentation written for it, etc., it would add
> over $10,000 *per console* to fit a $5 time of day clock into Multics.
> He commented that Honeywell's ability to create outrageously slow and
> expensive hardware that was outdated the moment it hit the streets was
> as astounding as their unwillingness to market those things they had
> that WERE innovative -- like Multics.
>
> Anyhow: One reason Multics was slow was its elaborate and rich security
> model, which was enforced by a "bag on the side" MMU unit. But that was
> by far not the only reason it was slow. I got to read the Multics
> architecture manual and was astounded at just how kludged-up this
> hardware was. For example, there was a bag-on-the-side decimal
> arithmetic unit that would do BCD (Binary Coded Decimal) math for COBOL
> and PL/1. There were dedicated index registers that you could not do
> math upon, and dedicated math registers that you could not do indexing
> with. The architecture was word-addressed, not byte-addressed, so there
> was also a bag-on-the-side character operations unit. The MMU unit
> itself had pointer registers in it, which were 80 bits even though the
> total address space (18 bit segment + 18 bit address) was 36 bits,
> because it used the miscellaneous bits for an indirect bit, 2 bits for a
> byte address for character operations (Multics was word-addressed), and
> assorted other miscellanea. Pointers in memory were 80 bits also, with
> the same cruft, and had to be addressed via a pointer register with the
> indirect bit set. I am undoubtedly forgetting some of the idiocies of
> this architecture in the haze of 20 years since I last looked at it, but
> it is clear that the security baggage was the least of the reasons why
> Multics was slow -- the general insanity of the old G.E. architecture
> that it was bagged onto was at least as responsible for its astounding
> price and lack of performance. Alas, as the Multicians site
> remonstrates, Honeywell, given the chance to build hardware for Multics
> rather than a bag on the side of inherited G.E. hardware, decided they'd
> rather not be in the computer business and cancelled Multics, milked the
> GECOS unit a few more years for cash without making any investments into
> it, then sold the GECOS unit to Bull.
>
> In any event: Not only does DMR specifically say on his web site that
> Unix was a response to the loss of Multics access and that its design
> was in part a reaction to the complexity of Multics, design decisions
> within Unix clearly show that certain aspects of Multics were being
> explicitly rejected. For example, Multics took a long time to start up
> processes -- as you'd imagine, with a half-dozen bag-on-the-side units
> each of which had registers to flush etc., and with the need to create a
> new virtual address space for the new process (which was mapped onto
> hard drive, BTW -- Multics did not properly have the concept of a
> "file", everything was a segment, and file access was a case of mapping
> a segment into the address space and then doing normal loads/stores...
> but creating an address space in turn required creating files on disk to
> swap that segment to!). It is clear that Unix was designed to take a
> short time to start up processes, by having a dedicated swap file rather
> than one (or more) per process. It is also clear that certain aspects of
> Unix, such as pipelines, were designed around the notion of having
> lightweight processes that were chained together to create larger
> systems. By contrast, the Multics way of doing things was much like the
> Windows DLL way of doing things, where if you needed to access some
> functionality, you called it as a subroutine call and waited for the
> result. Almost all Multics commands were a thin "wrapper" around the
> shared libraries that made up the Multics system, much as Internet
> Explorer is a thin "wrapper" around the various DLL's that implement the
> HTML widget, the HTTP protocol stack, etc. By contrast, the original
> Unix did not have even the concept of a shared library, components were
> expected to be stand-alone programs in their own right that used their
> stdin and stdout for communicating between each other. I wonder how much
> of the animosity between the Windows and Unix crowds stems from this
> ancient and long-standing philosophical difference?
>
> --
> Eric Lee Green      There is No Conspiracy
> [EMAIL PROTECTED]     http://www.badtux.org


------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: OTP using BBS generator?
Date: 7 Aug 2000 13:53:29 GMT

Benjamin Goldberg <[EMAIL PROTECTED]> wrote:
> Terry Ritter wrote:
> > 
> > On Thu, 03 Aug 2000 07:01:49 -0700, in
> > <[EMAIL PROTECTED]>, in sci.crypt lordcow77
> > <[EMAIL PROTECTED]> wrote:
> > 
> > >Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> > >>BBS has been a recurring topic in this group. I have little
> > >>knowledge in that but I have the impression that discussions
> > >>about it never led to unanimously accepted results. You
> > >>may try to look into old postings of this group.
> > >
> > >Wrong. Practically everyone accepts that choosing the factors of
> > >the modulus to be congruent to 3 mod 4 and picking a random
> > >starting point are enough to ensure that the resulting BBS
> > >sequence will be secure, based on the computational equivalence
> > >of predicting BBS and deciding quadratic residuosity (and
> > >therefore factoring).
> > 
> > That is false on its face.  You can accept if you want, however.
> > 
> > It is true that using a short cycle would be extremely unlikely.  But
> > *when* that event occurs, all the math proofs you have will not save
> > you, since the resulting system is insecure.
> 
> I'm confused...
> a) If the factors of the modulus ARE congruent to 3 mod 4, then are
> short cycles likely, unlikely, possible, or impossible?

Short cycles will exist.  In particular, four values lead to cycles of
period 1: these are the only values for which x^2 = x (mod n).  (Note
that the equation x(x - 1) = 0 has two solutions, x = 0 and x = 1, in
any field, such as GF(p).  The four values, namely 0, 1, and two
nontrivial combinations, are obtained by combining the solutions to the
equation mod p and mod q using the Chinese Remainder Theorem.)
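
A quick brute-force check with a toy modulus (a sketch of my own, not from the post) shows the four period-1 values concretely:

```python
# Fixed points of x -> x^2 mod n for p = 7, q = 11 (both 3 mod 4), n = 77.
# These are the four CRT combinations of x = 0 and x = 1 taken mod p and
# mod q; note they are 0, 1, 22 and 56, not p and q themselves.
p, q = 7, 11
n = p * q
fixed = [x for x in range(n) if (x * x) % n == x]
# fixed == [0, 1, 22, 56]; 22 is (1 mod 7, 0 mod 11), 56 is (0 mod 7, 1 mod 11)
```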

If the primes are *large* (e.g., 512 bits each) then traversing an
entire cycle of x^2 mod n will allow you to factor n.  This isn't an
efficient way of factoring large numbers, and since even the most
efficient way of factoring numbers of that size is impractical,
accidentally *finding* a cycle short enough to be traversed is
negligibly likely.

At this point, the nature of the primes mod 4 isn't relevant...

> b) If the factors of the modulus AREN'T congruent to 3 mod 4, then
> what?

Then the relationship between predicting the generator's output and the
quadratic residuosity problem doesn't work properly and the proofs given
in the paper don't apply.

> That is, what does that congruence gain, if not (complete?) avoidance of
> short cycles?

It gains the relationship between predicting the generator's output and
solving quadratic residuosity.  Since traversing a cycle enables
factoring, and factoring enables deciding quadratic residuosity, we know
that finding a short cycle is at least as hard as ANY OTHER METHOD OF
BREAKING THE GENERATOR.

> > Using a short cycle is unarguably insecure.  And, unless we
> > specifically prevent it, using a short cycle is an unarguable
> > possibility.
> 
> Making the factors congruent to 3 mod 4 isn't a specific prevention of
> short cycles?

No.  There are two approaches:

  * Acknowledge that finding a traversable cycle is impractical even if
    you try really hard, and just choose random primes.  Note that any
    weakness due to choice of random primes also occurs in RSA, Rabin-
    Williams, and many other IFP-based cryptosystems.

  * Be very paranoid about cycles, choose your modulus very carefully,
    choose your seed very carefully too, and be unable to take advantage
    of the public-key nature of the generator[1].

I prefer the former approach.  Terry prefers the latter.  Read the
arguments and take your pick.
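
For concreteness, here is a minimal sketch of the x^2 mod n generator itself, with toy (utterly insecure) parameters chosen in the spirit of the first approach; real moduli would use primes of hundreds of bits each.

```python
def bbs_bits(p, q, seed, k):
    """Blum-Blum-Shub sketch: p and q must be primes congruent to
    3 mod 4; each step squares mod n and outputs the low bit."""
    n = p * q
    x = pow(seed, 2, n)   # start from a quadratic residue
    out = []
    for _ in range(k):
        x = pow(x, 2, n)
        out.append(x & 1)
    return out

# Toy parameters: 547 and 563 are primes, both congruent to 3 mod 4.
bits = bbs_bits(547, 563, 123456, 16)
```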

[1] My Springer CD has now arrived, containing, among other things, the
    BBS paper.  I note that they explicitly suggest the use of the x^2
    mod n generator as a public-key encryption algorithm.  It's clearly
    impossible to use the generator in this way and choose the seed x_0
    carefully to avoid short cycles.  This can, I believe, be
    interpreted as clear evidence that the authors weren't particularly
    troubled by cycle lengths.

-- [mdw]

------------------------------

From: Stu P <[EMAIL PROTECTED]>
Subject: Just how easy are some codes to crack?!
Date: Mon, 07 Aug 2000 14:54:49 +0100

Hi, I'm new to this group, and as a simple programming exercise I
developed a simple encryptor.

        It takes a 3-digit key (ie. between 000 and 999), and will encode /
decode any simple (<50 chars, say) text message (ie. A-Z and <SPACE>).
The key would be sent with the message. 
        
        I know this is pretty vague, but can anyone suggest how easy this
might be to break? Would anyone like to try?! I'm happy to code any
strings requested. For example

ABCDEFGHIJKLMNOPQRSTUVWXYZ becomes

(000) SDGJYJ RQODN W UFUHZD LDJE

or

(001) KGEIWMQKOL OSXSWSW YYZPE F
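
The cipher above isn't specified, so the sketch below attacks a hypothetical stand-in with the same key space: a 3-digit key seeding a tiny keystream over the 27-character alphabet (A-Z plus space). The point is independent of the algorithm: 1,000 keys fall to exhaustive search instantly, especially when the key travels with the message.

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def keystream(key, length):
    """Toy keystream: a small LCG seeded by the 3-digit key."""
    x = key
    for _ in range(length):
        x = (x * 113 + 251) % 1000
        yield x % 27

def encrypt(key, text):
    return "".join(ALPHABET[(ALPHABET.index(c) + k) % 27]
                   for c, k in zip(text, keystream(key, len(text))))

def decrypt(key, text):
    return "".join(ALPHABET[(ALPHABET.index(c) - k) % 27]
                   for c, k in zip(text, keystream(key, len(text))))

def crack(ciphertext, crib):
    """Try all 1,000 keys; keep those whose plaintext contains the crib."""
    return [k for k in range(1000) if crib in decrypt(k, ciphertext)]
```

Even without a known crib, scoring each of the 1,000 candidate plaintexts by letter frequency picks out the right key in well under a second.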

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
