Cryptography-Digest Digest #504, Volume #11 Thu, 6 Apr 00 21:13:01 EDT
Contents:
Re: Is this code crackable? (Dan Day)
Re: Stolen Enigma (John Savard)
An interesting cryptographic problem (David Hopwood)
Re: Public|Private key cryptography? ("Joseph Ashwood")
Re: Building a stream cipher? (newbie Question) ("Joseph Ashwood")
Re: Looking for Algorithm ("Joseph Ashwood")
Re: Q: Simulation of quantum computing ("Joseph Ashwood")
Re: Examples of topology related to crypto ? (Tim Tyler)
Re: Implementation of Blowfish (Eric Young)
Re: Sunday People 26/3/2000: "FORGET YOUR PASSWORD... END UP IN JAIL" (Jim Crowther)
Factoring Composite Numbers ("Nickolaus Benjamin Padgett")
Re: Public|Private key cryptography? (Tom St Denis)
Re: Factoring Composite Numbers (lordcow77)
Re: Factoring Composite Numbers (Tom St Denis)
Gray code computation? ("Joseph Ashwood")
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Dan Day)
Subject: Re: Is this code crackable?
Date: Thu, 06 Apr 2000 21:07:24 GMT
On Wed, 5 Apr 2000 17:15:21 +0100, "Jethro" <[EMAIL PROTECTED]> wrote:
>
>My intent (and I'm just playing around) was to physically hand the "PAD"
>(I've learned something already) on a floppy to my recipient, whom I see
>about every 6 months. The decrypt program is also on the floppy. He can
>work completely off the floppy, keeping everything off his HD. The decrypt
>program kills the encrypted file upon decryption.
>
>I email him an encrypted file perhaps once a week, he moves it to his floppy
>and decrypts. So he ends up with nothing on his HD (or, at the most the
>encrypted files) and on his floppy only the decrypt program, the decrypted
>file(s) and the PAD.
>
>Everyone is saying you can only use the PAD one time, hence the name "One
>Time PAD". Is this simply because I cannot trust my recipient to follow my
>instructions?
No, it's because if you encrypt two or more messages with the same
portion of the PAD, that makes it reasonably easy for an opponent
to read them. Any given byte in the PAD *must* be used for
encryption ONLY ONCE.
That's the "one time" in the term One Time Pad.
This is secure:
Handing your friend a floppy containing 1,000K of PAD.
Encrypting a 2K message to him using the first 2K of the
PAD.
Encrypting your next, 3K message to him using the NEXT 3K
of the PAD.
Encrypting the following 1K message to him using the NEXT
1K of the pad.
(So far you've used up the first 6K of the PAD, *AND MUST
NEVER USE THOSE 6K AGAIN*.)
You can still send another 994K of messages to him before
the PAD is all used up.
This is *NOT* secure:
Handing your friend a floppy containing 1,000K of PAD.
Encrypting an 800K message to him using the first 800K
of the PAD.
Encrypting your next, 500K message for him, using ANY OF THE
FIRST 800K OF THE PAD OVER AGAIN.
In fact, you can't safely send him the 500K message until
you can safely give your friend at least another 300K of
new PAD.
Any time a portion of the PAD is used to encrypt more than a
single message, you've just destroyed the security of
your encryption. A suitably trained high school student
could break your messages.
Also, ideally, once you encrypt a message using a given section
of your copy of the PAD, you should destroy that section of
your PAD, and similarly once your friend gets that message and
decrypts it, that same portion of his copy of the PAD should
be destroyed. Otherwise, an opponent could just steal either
your PAD or his at a later date and then use it to decrypt
all previously sent (and presumably eavesdropped) traffic.
But this is a separate issue, and is not in any way related
to the origin of the phrase "One Time Pad". The OTP name
simply stresses the cryptographic necessity of using any
given portion of the PAD for encryption exactly once, and then
never using it for encryption again.
A logical consequence of this very necessary requirement is
that you have to give your friend a PAD as large as the
total size of all messages you ever might want to send to him
(at least until the future date that you can meet again and
hand him a fresh PAD).
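For concreteness, the bookkeeping above can be sketched like this (a rough illustration in Python, my own sketch rather than anything from the original exchange; it assumes XOR as the combining operation, and destroys each pad byte after a single use):

```python
# One-time-pad sketch: every pad byte is consumed exactly once,
# then zeroed so it can never be reused (or stolen later).
class OneTimePad:
    def __init__(self, pad: bytes):
        self.pad = bytearray(pad)
        self.offset = 0  # index of the first unused pad byte

    def encrypt(self, message: bytes) -> bytes:
        if self.offset + len(message) > len(self.pad):
            raise ValueError("pad exhausted; deliver a fresh pad first")
        chunk = self.pad[self.offset:self.offset + len(message)]
        ct = bytes(m ^ p for m, p in zip(message, chunk))
        # Destroy the used portion of our copy of the pad.
        for i in range(self.offset, self.offset + len(message)):
            self.pad[i] = 0
        self.offset += len(message)
        return ct
```

Since XOR with the same pad bytes is its own inverse, the recipient runs the identical routine over the ciphertext with his copy of the pad to decrypt; both copies stay in step as long as neither side ever rewinds the offset.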
--
"How strangely will the Tools of a Tyrant pervert the
plain Meaning of Words!"
--Samuel Adams (1722-1803), letter to John Pitts, January 21, 1776
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Stolen Enigma
Date: Thu, 06 Apr 2000 21:29:36 GMT
[EMAIL PROTECTED] (JimD) wrote, in part:
>Anyway, they've arrested the thief, but it isn't clear whether
>the machine was recovered. Seems likely it was, but as it's now
>stale news, until the court case, who knows?
Seems like they did not recover the machine; they only arrested
someone who turned out to be the wrong person.
Frode's web site has links to newspaper articles; the one in the Daily
Telegraph was quite informative.
Apparently, there was an unknown red car parked near the museum at the
time, in an area not available to the public, and so the police are
looking for the woman who drove that car; as well, there was a
considerable amount of conflict at the museum, so it is possible that
one of the same people who were phoning in death threats simply stole
the machine in order to destroy it as an act of vandalism.
John Savard (jsavard<at>ecn<dot>ab<dot>ca)
http://www.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
Date: Thu, 06 Apr 2000 06:44:03 +0100
From: David Hopwood <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: An interesting cryptographic problem
=====BEGIN PGP SIGNED MESSAGE=====
Consider the following problem:
Let N = P.Q
P = 2p + 1
Q = 2q + 1
P, Q, p and q are large primes
x_0..x_{k-1} are randomly chosen on [0, q)
S(b) = product[i = 0..k-1](x_i^b_i) mod q
where b is a vector of k bits
(i.e. S(b) is the product of all the x_i for which b_i is set)
Sf_j = S(f_seed(j)), for j = 0..t-1
f_seed is a deterministic, one-to-one, easily computed function
mapping integers to vectors of k bits (each with at least 2 bits
set), which depends on a random seed.
k is about 20 (it could be larger than that if necessary); t is about
1000. P and Q are long enough that N cannot be factored. N, j, k, t,
and f are known; P, Q, p, q, and the x_i are unknown.
Suppose that all of the Sf_j are known except one, Sf_r (for some
0 <= r < t).
The problem is: how hard is it to find either Sf_r, or any of the
x_i (it would be enough to find an integer congruent to Sf_r or
x_i modulo q)? If this depends on f_seed, what constraints on
f_seed are needed for it to be hard?
========
If there is a subset J of the j values, and some r not in J, such
that
f_seed(r) = vector_sum[j in J](f_seed(j))
then
Sf_r = product[j in J](S(f_seed(j))) (mod q)
Assume that isn't the case (if necessary by choosing a different
seed until it is not).
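A quick numeric sanity check of that relation, with toy parameters far smaller than anything in the problem (the prime and the x_i below are arbitrary illustration values, and vector_sum is taken as the componentwise sum of disjoint bit vectors):

```python
# Toy check of the subset-product relation: if a bit vector is the
# componentwise sum of two disjoint bit vectors, the S values multiply.
q = 1009                    # a small prime standing in for the real q
x = [123, 456, 789, 321]    # the x_i, with k = 4

def S(b):
    """S(b) = product of x_i^b_i mod q, for a bit vector b."""
    r = 1
    for xi, bi in zip(x, b):
        r = r * pow(xi, bi, q) % q
    return r

# [1,1,1,0] = [1,0,1,0] + [0,1,0,0] componentwise, so:
assert S([1, 1, 1, 0]) == S([1, 0, 1, 0]) * S([0, 1, 0, 0]) % q
```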
Obviously if the Sf_j were calculated as integers, it would be easy
to calculate the gcd of various subsets of them, which would give
the x_i. So, the security (if any) lies in the fact that the Sf_j
are calculated mod q, and q is unknown. I can't see any way to solve
it in that case, but I may have missed something, which is why I'm
asking here.
- --
David Hopwood <[EMAIL PROTECTED]>
PGP public key: http://www.users.zetnet.co.uk/hopwood/public.asc
RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5 0F 69 8C D4 FA 66 15 01
=====BEGIN PGP SIGNATURE=====
Version: 2.6.3i
Charset: noconv
iQEVAwUBOOwkEjkCAxeYt5gVAQHz/wgAwKqOxCha5svO4qAoIa4HcywrqO3U5qt8
uGVb5arwBkziuqr02gOPvSC3FXLbp9BtjEyX9P2lz3qpf13nxEEHsvmCzhuz34V7
gH/ExgWkfVYu/MG/W1F+96wrXYhJJrIaWdhO9Yjane4SbGPtuC2/NS2AHfAPJz6C
BK5EfZOwdNnQ7L9BZL56utrrOLLVUB2jXlMzvW+HEYmTEzjRHbvLBXmaoYAbDFmF
e5+yfypUWd7UL8/sEcwfRtFuShnxdSXhUPlxxdnikdF53KUVS0jNOwbkPYw+Ye3M
km6GKscrHmnXR1K+fbThel+9uQuoUbzDPupj873EYMWCMPEShsf2Eg==
=XcMh
=====END PGP SIGNATURE=====
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Public|Private key cryptography?
Date: Thu, 6 Apr 2000 13:16:57 -0700
Depends on who you trust, and the model that you are working
in. If you trust SKIPJACK (from the infamous Clipper series
of chips), then you require only 80 bits; most people
(including myself) don't trust it, since a small amount of
headway has been made on it, reducing the work to around 75
bits. Outside of that, ECC is probably your best bet; a
key size of 160 bits is trusted for now. I would personally
recommend upping the size to 256 bits, which offers
protection against algorithms that have not yet been
publicly found. If you don't trust ECC, then you are
basically restricted to large keys.
Joe
"Simon Johnson" <[EMAIL PROTECTED]> wrote in message
news:8cfnch$c9g$[EMAIL PROTECTED]...
> Every algorithm i have seen so far requires the use of massive
> numbers to be secure. Are there any algorithms of this type that
> deal in small numbers?
>
>
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Building a stream cipher? (newbie Question)
Date: Thu, 6 Apr 2000 01:18:41 -0700
OK, rule number one: we really can't tell you how to make one
that is secure; we can only tell you things that don't work,
and things that have been secure so far.
Well let's start with the basics. It is only the current
convention that XOR is used, you are free to use anything
you want (including whatever feedback methods you can dream
up). Beyond that there lies a vast realm that is actually
quite unexplored.
Assuming that you simply wish to generate a standard XOR
pad, the general rules are:
1) No bias (or as little as possible)
2) No correlation (or as little as possible)
3) Passes DIEHARD, and any other tests you can find (I can
supply DIEHARD if you need it)
To do this it is generally necessary to use only a small
portion of the internal state (or a very large internal
state, that is mapped to a small output size), and to use a
well balanced function.
In reality, correlation cannot be completely eliminated; it
can only be delayed, to at most the cycle point, and
typically no further than the square root of that.
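Purely as an illustration of the "small window onto a large internal state" idea, here is a toy generator (my own sketch; the update constants are arbitrary, and this is NOT a secure cipher -- it would not survive serious analysis):

```python
# Toy keystream generator: 128 bits of internal state, but only the
# top 8 bits are ever exposed per step. Illustration only, NOT secure.
class ToyKeystream:
    def __init__(self, seed: int):
        self.state = seed & (2**128 - 1)  # 128-bit internal state

    def next_byte(self) -> int:
        # Weak LCG-style update; the multiplier is an arbitrary choice.
        self.state = (self.state * 0x5851F42D4C957F2D + 1) & (2**128 - 1)
        return (self.state >> 120) & 0xFF  # expose only the top 8 bits

def xor_encrypt(key: int, data: bytes) -> bytes:
    """XOR the data against the keystream; running it twice decrypts."""
    ks = ToyKeystream(key)
    return bytes(b ^ ks.next_byte() for b in data)
```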
Joe
As a friend of mine once said (edited for content):
"Cryptography is the art of [messing] [stuff] up so no one
else can un[mess] it."
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Looking for Algorithm
Date: Thu, 6 Apr 2000 13:30:28 -0700
Just an attempt, it may or may not work for you.
Split the message into N blocks of data (of whatever size or
various sizes that you desire for however many N you want)
Hash each block using a strong method (SHA-1)
Build a stream of the form l,data,hash where l is the
length of the block
Sign the full stream
Encrypt that stream using your favorite scheme (PGP style,
ECC&MQV, shared secret, etc)
transfer the stream
On the other end
Decrypt
Verify Signature
Separate the blocks
Use them however you want. This is a variant of winnowing
and chaffing (without the chaff, and without using a keyed
hash). If you want increased security after the stream has
been signed, insert some random data (making sure that the
block's hash doesn't match); you can also use a keyed
hash, such as appending a shared secret to the end of each
block during signing and removing the secret afterward.
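A minimal sketch of the l,data,hash packing described above, with SHA-1 per block (the signing and encryption of the whole stream are left out, and the function names are my own):

```python
import hashlib
import struct

def pack_blocks(blocks):
    """Build the l,data,hash stream: 4-byte big-endian length,
    the block data, then its 20-byte SHA-1 digest."""
    out = bytearray()
    for data in blocks:
        h = hashlib.sha1(data).digest()
        out += struct.pack(">I", len(data)) + data + h
    return bytes(out)

def unpack_blocks(stream):
    """Separate the blocks, rejecting any whose hash doesn't match."""
    blocks, i = [], 0
    while i < len(stream):
        (l,) = struct.unpack_from(">I", stream, i)
        data = stream[i + 4:i + 4 + l]
        h = stream[i + 4 + l:i + 4 + l + 20]
        if hashlib.sha1(data).digest() != h:
            raise ValueError("block hash mismatch")
        blocks.append(data)
        i += 4 + l + 20
    return blocks
```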
Joe
<[EMAIL PROTECTED]> wrote in message
news:8carjm$mn9$[EMAIL PROTECTED]...
> Help!
> I'm looking for an algorithm that does the following:
> - splits a message in 2 parts,
> - hashes every line
> - so that key and lock always have the same bit-length.
> Maybe a stupid question, but i am new to crypto.
> I have read about it somewhere on the web, but I can't
> remember the name of it - i need it for one of my projects.
> Many thanks and a nice day
>
>
> Sent via Deja.com http://www.deja.com/
> Before you buy.
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Q: Simulation of quantum computing
Date: Thu, 6 Apr 2000 14:24:53 -0700
> TM + oracle can generate truly random bits
iff the oracle produces true randomness
> QC can generate truly random bits
Generally accepted as gospel, but as with most gospels it
has not been proven; it may or may not be true. The
ability of a classical computer to simulate all other
aspects of a quantum computer suggests that the bits are
merely random enough that we cannot yet compute them;
there may still be a way to compute them.
> TM (alone) can't generate truly random bits
Correct.
> It seems to follow from these that QC is more powerful than
> a TM.
> Is that right?
If it is ever proven that QC can generate truly random bits,
then yes; otherwise it simply has to be taken as gospel:
unproven, but we believe it nonetheless.
Joe
------------------------------
Crossposted-To: sci.math
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Examples of topology related to crypto ?
Reply-To: [EMAIL PROTECTED]
Date: Thu, 6 Apr 2000 21:48:42 GMT
In more than one newsgroup, [EMAIL PROTECTED] wrote:
: [...] A few weeks ago in sci.crypt, Tim Tyler
: started the thread "Cellular Automata (CA)
: based public key cryptography". For this
: purpose, one wants reversible CA. Topology is
: very important in the theory of symbolic
: dynamical systems, and this theory is tied into
: the question of reversibility in CA (symbolic
: dynamics can be an application of automata
: theory, or vice versa).
Unfortunately, I rapidly demolished my proposed public key system :-|
The main automata-based public key cryptosystem remains the one I refer
to at http://alife.co.uk/ca/publickey/biblio/
I have an interest in possible applications of reversible cellular
automata to cryptography.
Much of what I know about constructing reversible automata is available
beneath http://alife.co.uk/ca/
I display a Java applet demonstrating reversible cellular systems
(using partitioning techniques) at http://hex.org.uk/diffusion/
You can set up an initial state, run the automaton forwards, and then
backwards, and watch it recover its original state from an ocean of chaos.
I don't know much about any connections between cellular automata and
topology, though. Perhaps the most /obvious/ connection concerns
automata with differing sorts of boundary conditions - e.g. toroidal
ones.
I tend to try to avoid anything more sophisticated than null boundary
conditions, for reasons concerning efficient hardware implementation
- an issue which I discuss at http://alife.co.uk/ca/dimensions/
Extremely efficient hardware implementation is of course a primary
attraction of cellular automata. Another area of interest appears to
be a body of theory relating to ways of constructing invertible systems -
which owe little to Feistel network constructions.
--
__________ Lotus Artificial Life http://alife.co.uk/ [EMAIL PROTECTED]
|im |yler The Mandala Centre http://mandala.co.uk/ Breast is best.
------------------------------
From: Eric Young <[EMAIL PROTECTED]>
Subject: Re: Implementation of Blowfish
Date: Fri, 07 Apr 2000 08:16:52 +1000
Tom St Denis wrote:
> No, what I am going to do is change all the ciphers to use the standard
> IMACIPHER_ENCRYPT(unsigned char *input, unsigned char *output);
> interface. But in the meantime it should work fine on 32-bit
> platforms. Sorry about that.
>
> > Tom St Denis wrote:
> > > I heard CB is a cool Crypto API; it includes Blowfish among its other
> > > symmetric ciphers.
> > In ciphers.c we find:
> > extern
> > void blowfish_ENCRYPT(const unsigned char *plaintext,
> > unsigned char *ciphertext);
> > That declaration is contradicted by the implementation
> > in BLOWFISH.C, which has the type:
> > void blowfish_ENCRYPT(unsigned long *block,
> > unsigned long *out)
Well, this will almost certainly break on machines that don't like
unaligned memory accesses.
See if the library works when one does
blowfish_ENCRYPT(&(buf[1]),&(buf2[1]))
on a non-intel CPU.
The compiler would align the declared buffers, but often, especially when
encrypting the contents of a message but not the header (e.g. SSL),
the output address is not word aligned.
eric
------------------------------
From: Jim Crowther <[EMAIL PROTECTED]>
Crossposted-To:
uk.media.newspapers,uk.legal,alt.security.pgp,alt.privacy,uk.politics.parliament,uk.politics.crime,uk.politics.censorship
Subject: Re: Sunday People 26/3/2000: "FORGET YOUR PASSWORD... END UP IN JAIL"
Date: Thu, 6 Apr 2000 23:29:58 +0100
Reply-To: Jim Crowther <[EMAIL PROTECTED]>
In article <[EMAIL PROTECTED]>, JimD <dynastic@REMOVE_THIS
cwcom.net> writes
>On Tue, 4 Apr 2000 09:53:15 +0100, "Owen Lewis" <[EMAIL PROTECTED]> wrote:
>
>>Nevertheless and as you surmise, a decision not to vote may be better
>>reasoned and as valid a choice as most votes cast.
>
>Right! I could never vote Tory, and refuse to vote for the New
>Labour Tories again. I think I'll stay at home next time.
>
>(Ex Labour Party member)
>
I have voted in many elections. In a very few of these I could see no
merit in any of the candidates. Rather than not vote at all, I
'spoiled' the ballot paper (allowed) by drawing a line through the lot,
and writing at the bottom 'None of the above'. At least my vote was
counted. This is important, as it is reflected in the percentage of cast
votes that the winner is said to enjoy.
One day an election will be won by 'None of the above'....
--
Jim Crowther
Mailto:[EMAIL PROTECTED]
Public PGP keys at ldap://certserver.pgp.com (and others)
Key ID 0xE0BCE5F1 (DH/DSS 2048/1024), 0x8A673777 (RSA 1024)
------------------------------
From: "Nickolaus Benjamin Padgett" <[EMAIL PROTECTED]>
Subject: Factoring Composite Numbers
Date: Thu, 6 Apr 2000 19:47:31 -0400
As a novice in the field of cryptography I have a few quick questions that
maybe someone could answer. I know a lot of cryptographic systems are based
on a system in which two relatively large prime numbers are multiplied
together to produce a composite number. I further understand the difficulty
of factoring this composite number in comparison to generating it.
So my questions are:
1) How long would it take to factor 'n' using the latest in factorization
methods?
2) The most basic of all factorization methods is to divide 'n' up to
sqrt('n'). If I were to further decrease this space by 99% so that the
total space to search was 1% of sqrt('n'), would this be a viable decrease
or would this expectancy still be too great?
I'd very much appreciate any response you decide to entertain me with.
Thank you in advance.
Nickolaus Benjamin Padgett
[EMAIL PROTECTED]
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Public|Private key cryptography?
Date: Fri, 07 Apr 2000 00:19:17 GMT
Jerry Coffin wrote:
>
> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
> > What you are seriously missing is the fact that the SPACE required to
> > factor +1000 bit numbers is insane [over 2^64] and is not possible.
> > While you could sieve it faster, you couldn't contain the information
> > required to solve the puzzle.
>
> For all practical purposes, all of those are clearly well into the
> future.
>
> > So those statistics are meaningless. It's like saying a 300 bit
> > symmetric-key is more secure then a 256 bit one, you can't search either
> > so what's the point?
>
> Two points. First of all, computers get faster all the time: at one
> time, the 56-bit keyspace of DES wasn't practically searchable, but
> it now is. An 80 or 100 bit keyspace currently isn't, but we're
> right at the point that an 80-bit key is technically feasible, if too
> expensive to be practical right now.
Actually, 56 bits was too small then too. It's not practical to search
an 80-bit key space now, or 10 years from now. Currently RC5des has been
going for what, 3 years? Let's say they finish tomorrow... an 80-bit key,
with a keyspace 65,536 times larger, will take just a bit longer to solve.
Yeah, sure, 25 years from now searching the keyspace *may* become
feasible, but so could quantum computers, distant space travel or new
M*A*S*H episodes. You are assuming computers [classical] will always
get faster, which has yet to be proven; even lately, actual
improvements in throughput [not just clock rate] are not exponential
like we want to think, it's 10% here, 5% there.
> In addition, just as CPUs get faster over time, so also larger
> memories become feasible over time. Right now, I've got 256
> megabytes of RAM in my computer, which is far more than I could have
> afforded even in hard-drive space only a few years ago.
>
> They're not currently available, but a couple of companies have
> already demonstrated that they can produce gigabit RAM chips. That
> would mean being able to plug a gigabyte of RAM into a single DIMM
> (or whatever) socket. At least a couple of companies have plans to
> produce these for the open market within the next couple of years.
>
> Right now, machines with gigabytes of RAM are big and expensive. I'm
> betting that inside of five years I'll be able to buy one off the
> shelf at a local retailer for under $1000US for a machine complete
> with a couple gigabytes of RAM and a CPU that runs at a few
> gigahertz. If you'll remember, about a year ago you posted a message
> saying that machines that ran at a gigahertz were around 10 years
> away.
>
> That was wrong -- they're here now. Larger memory will happen the
> same way: back when I was in college we used a Control Data mainframe
> that had only 18-bit addressing, so it could work with a maximum of
> 256K of 60-bit words -- less than a megabyte total for a system that
> routinely supported 250 to 300 users logged on at a time. The CPU
> weighed a couple of tons and IIRC it ran at something like 10 MHz.
Well, the way Bob Silverman put it, factoring large keys [i.e. 1024 bits]
will take a whopping amount of RAM, more than you can store in a 2^64
space. I dunno, but I doubt computers will have 2^64+ bytes of memory
any time soon. Assuming the memory board were made solely of transistors
of 10^-8 mm^2 each, that memory board would take 4500 sq ft. Let's say
I am clueless [which I am, about hardware] and each memory cell is
around 10^-16 mm^2 [which is really, really small too]; that same
memory board would be 13 cm^2. So we would have to make really dense
memory to have only 2^64 bits of memory, which I don't think is even
enough to hold the matrix from an NFS on a 1000-bit RSA-style number.
> > I do agree that you can get by with smaller keys using ECC such as 160 ~
> > 200 bit keys, but a 200 bit ECC key is not the same as a 20000bit RSA
> > key like they would want you to believe.
>
> Of course they're not "the same." The question is whether they're
> roughly equivalent. It's true that nobody can reasonably plan on
> factoring a 2048 but number today, but it's also true that nobody can
> reasonably plan on doing a 200+ bit ECDL either. TTBOMK, the largest
> ECDL problem solved yet was 108 bits, and that took around 4 months
> with hundreds of computers (and that was so recent, I may be out of
> place mentioning it -- I haven't seen an official, public annoucement
> about it from Rob yet). A 200 bit ECDL problem is still quite a ways
> out of reach.
You contradict yourself. You state "nah, RSA keys have to be big because
later on we can solve them, *but* ECDL is OK at 200 bits, and will be
for a long time, *presumably* because no more advances in ECDL will
occur".
Besides, that's not the point. The argument is that factoring large
numbers is infeasible, thus the original comparison is invalid. The
comparison is valid for, say, up to 600-bit integers [the largest
factored, I think, with NFS] and that's about it. If anything the
companies should sell it on the size of the ciphertext, not 'RSA vs
ECC'. Then again, 1024 bits vs 300 or so bits is not a big deal for
the average Joe like me, unless I were sending 1000s of ECC messages...
> OTOH, the real question in both cases is how soon it'll be practical
> to attack one or the other. At least in my mind, that appears to
> depend on whether CPU speed will grow at about the same speed as
> memory size -- the future in factoring (at least by currently-known
> methods) depends largely on memory size, while the future in ECDL
> (again, by currently-known methods) depends more on CPU speed.
See above.
> Looking back over time, it appears to me that if anything, memory
> size is growing a _little_ faster than CPU speed. Doing a bit of
> extrapolation on my own, it appears that these assumptions are based
> on an assumption that they'll grow at the same speed. As such, it
> looks to me like it's _probably_ making RSA look a little better than
> it really is -- e.g. that it'll _probably_ be practical to break RSA
> using a 2000-bit key a little bit sooner than it's practical to break
> ECC using a 200-bit key. (In fairness, I'm pretty sure they were
> trying to give rough estimates using round numbers, so nobody should
> be surprised if they end up a _little_ off in one direction or the
> other).
You still miss the point: factoring large numbers requires an insane
amount of space that will not be available *any* time soon. Thus, no
matter how fast you can sieve the required information for NFS [on the
composite], you will not be able to store it, let alone solve it.
So really, *for now*, you get a cutoff point where factoring with NFS is
*impossible* for large numbers, whereas Pollard rho could always factor
the number [in an insane amount of *time*], similarly to how ECDL could
be solved for large orders [in an insane amount of time too].
Let's keep our eyes on the reality of the situation and not silly
extrapolations.
Tom
------------------------------
Subject: Re: Factoring Composite Numbers
From: lordcow77 <[EMAIL PROTECTED]>
Date: Thu, 06 Apr 2000 17:22:14 -0700
In article <8cj7o8$7qe$[EMAIL PROTECTED]>, "Nickolaus
Benjamin Padgett" <[EMAIL PROTECTED]> wrote:
>As a novice in the field of cryptography I have a few quick
>questions that maybe someone could answer. I know a lot of
>cryptographic systems are based on a system in which two
>relatively large prime numbers are multiplied together to
>produce a composite number. I further understand the difficulty
>of factoring this composite number in comparison to generating it.
>So my questions are:
>
>1) How long would it take to factor 'n' using the latest in
>factorization methods?
>
>
The elliptic curve method for composites with small prime factors,
or the number field sieve for composites with two [or more] very
large factors. The heuristic complexities (I don't believe anything
has been proven) are subexponential and can be found in any
reference.
>2) The most basic of all factorization methods is to divide 'n'
>up to sqrt('n'). If I were to further decrease this space by 99%
>so that the total space to search was 1% of sqrt('n'), would this
>be a viable decrease or would this expectancy still be too great?
In principle, this wouldn't help matters much at all; an
exponential time complexity divided by a constant factor is,
after all, still exponential. However, if, and this is a big if,
you are able to reduce the space to search by a very large
factor (of order > 2^450 if one wants to have any hope of
factoring a 1024-bit composite) you might have a chance.
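To put rough numbers on that constant-factor point (a back-of-envelope calculation, nothing more):

```python
# For a 1024-bit n, trial division up to sqrt(n) means roughly 2^512
# candidates. Discarding 99% of them still leaves about 2^505, which
# is just as utterly out of reach.
import math

log2_sqrt_n = 1024 / 2                          # log2(sqrt(n))
log2_after_cut = log2_sqrt_n + math.log2(0.01)  # keep only 1%
print(round(log2_after_cut))                    # prints 505
```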
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Factoring Composite Numbers
Date: Fri, 07 Apr 2000 00:26:41 GMT
Nickolaus Benjamin Padgett wrote:
>
> As a novice in the field of cryptography I have a few quick questions that
> maybe someone could answer. I know a lot of cryptographic systems are based
> on a system in which two relatively large prime numbers are multiplied
> together to produce a composite number. I further understand the difficulty
> of factoring this composite number in comparison to generating it.
> So my questions are:
>
> 1) How long would it take to factor 'n' using the latest in factorization
> methods?
>
> 2) The most basic of all factorization methods is to divide 'n' up to
> sqrt('n'). If I were to further decrease this space by 99% so that the
> total space to search was 1% of sqrt('n'), would this be a viable decrease
> or would this expectancy still be too great?
>
> I'd very much appreciate any response you decide to entertain me with.
> Thank you in advance.
Here is the Pollard rho algorithm [I may get something wrong, so pros
be wary]. It takes about sqrt(p) steps, where 'p' is the smallest prime
factor of the composite 'n'.
1. x = 2, y = 2, c = 1
2. x = x^2 + c mod n, y = y^2 + c mod n, y = y^2 + c mod n (yes you do
the latter twice)
3. d = gcd(x-y, n)
4. if (d != 1) and (d != n) then
4.a 'd' is a factor of n, n = n / d
5. if (d = n) then
5.a terminate and assume it failed
6. Goto #2
This method is useful for finding small factors of a composite, and not
so great for RSA-style numbers. A good start is to understand why
x^2 = y^2 (mod n), x != +-y (mod n)
is an important relationship: if this is true, then gcd(x - y, n) is
a nontrivial factor of n. This property is used in QS and NFS.
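The numbered steps above transcribe directly into Python (my own sketch, Brent-style refinements omitted; fine for pulling small factors out of a composite, hopeless for RSA-sized moduli):

```python
from math import gcd

def pollard_rho(n, c=1):
    """Pollard rho with iteration x -> x^2 + c mod n.
    Returns a nontrivial factor of n, or None on failure."""
    x = y = 2
    while True:
        x = (x * x + c) % n      # step 2: advance x once
        y = (y * y + c) % n      # ...and y twice ("yes you do
        y = (y * y + c) % n      #  the latter twice")
        d = gcd(abs(x - y), n)   # step 3
        if d == n:               # step 5: cycle closed, give up
            return None
        if d != 1:               # step 4: nontrivial factor found
            return d
```

For example, `pollard_rho(8051)` quickly turns up one of the factors of 8051 = 83 * 97; retrying with a different `c` is the usual response to a failure.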
Hope that helps.
Tom
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Gray code computation?
Date: Thu, 6 Apr 2000 16:58:37 -0700
A while back someone posted an algorithm for computing Gray
codes in series. I thought it was in the "gray code"-like
thread, but I can't seem to find it on Deja. Could someone
repost the code that was there? I'll be using it to test a
cipher.
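For reference, the standard closed form may do in the meantime: the i-th reflected-binary Gray code is i ^ (i >> 1) (the usual textbook formula, though not necessarily the code from the earlier thread):

```python
def gray(i: int) -> int:
    """i-th reflected-binary Gray code."""
    return i ^ (i >> 1)

codes = [gray(i) for i in range(8)]  # [0, 1, 3, 2, 6, 7, 5, 4]

# The defining property: adjacent codes differ in exactly one bit.
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
```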
Joe
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************