Re: English 19-year-old jailed for refusal to disclose decryption key

2010-10-06 Thread silky
On Thu, Oct 7, 2010 at 5:57 AM, Ray Dillinger b...@sonic.net wrote:
 a 19-year-old just got a 16-month jail sentence for his refusal to
 disclose the password that would have allowed investigators to see
 what was on his hard drive.

 I suppose that, if the authorities could not read his stuff
 without the key, it may mean that the software he was using may
 have had no links weaker than the encryption itself -- and that
 is extraordinarily unusual - an encouraging sign of progress in
 the field, if of mixed value in the current case.

 Really serious data recovery tools can get data that's been
 erased and overwritten several times (secure deletion being quite
 unexpectedly difficult), so if it's ever been in your filesystem
 unencrypted, it's usually available to well-funded investigators
 without recourse to the key.  I find it astonishing that they
 would actually need his key to get it.

Interesting.

It's interesting to think about what some sort of homomorphic
cryptosystem could offer here. It would arguably be useful (from one
point of view) if investigators were able to search the data for
specific items, and only after failing to find items of those types
would the fallback be a sentence like this; otherwise it seems like a
pretty trivial way out for anyone wishing to hide bad activity.
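
Just to make the "search for specific items" idea concrete -- this is not
homomorphic encryption, only a toy keyword-token sketch in Python (all
names made up), but it shows the shape of "test for one item without
revealing the rest":

    # Toy sketch: per-keyword HMAC tokens let a third party test for one
    # specific item without learning anything else. Illustrative only.
    import hmac, hashlib

    def token(key: bytes, keyword: str) -> bytes:
        # deterministic per-keyword token under the owner's key
        return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

    owner_key = b"owner-secret-key"
    # tokens for the words present in the (still-encrypted) data
    index = {token(owner_key, w) for w in ["holiday", "photos", "invoice"]}

    # only the token for the item being searched for is released
    query = token(owner_key, "contraband-term")
    print("present" if query in index else "absent")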


 Rampant speculation: do you suppose he was using a solid-state
 drive instead of a magnetic-media hard disk?

 http://www.bbc.co.uk/news/uk-england-11479831

                                Bear

-- 
silky

http://dnoondt.wordpress.com/

Every morning when I wake up, I experience an exquisite joy — the joy
of being this signature.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Quantum Key Distribution: the bad idea that won't die...

2010-04-21 Thread silky
On Wed, Apr 21, 2010 at 1:31 AM, Perry E. Metzger pe...@piermont.com wrote:

 Via /., I saw the following article on ever higher speed QKD:

 http://www.wired.co.uk/news/archive/2010-04/19/super-secure-data-encryption-gets-faster.aspx

 Very interesting physics, but quite useless in the real world.

Useless now, maybe, but it's preparing for a world where RSA is broken
(i.e. by quantum computers), and it doesn't itself require a quantum
computer; so it's quite practical, in that sense.


 I wonder why it is that, in spite of almost universal disinterest in the
 security community, quantum key distribution continues to be a subject
 of active technological development.

 Perry

-- 
silky

  http://www.programmingbranch.com/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Quantum Key Distribution: the bad idea that won't die...

2010-04-21 Thread silky
On Thu, Apr 22, 2010 at 10:47 AM, Perry E. Metzger pe...@piermont.com wrote:

[...]

   Second, you can't use QKD on a computer network. It is strictly point to
   point. Want 200 nodes to talk to each other? Then you need 40,000
   fibers, without repeaters, in between the nodes, each with a $10,000 or
   more piece of equipment at each of the endpoints, for a total cost of
   hundreds of millions of dollars to do a task ethernet would do for a
   couple thousand dollars.

  Sure, now. That's the point of research though; to find more efficient
  ways of doing things.

 I'm afraid that QKD is literally incapable of being done more
 efficiently than this. The whole point of the protocol is to get
 guarantees of security from quantum mechanics, and as soon as you have
 any intermediate nodes they're gone. I know of no one who claims to have
 any idea about how to extend the protocol beyond that, and I suspect it
 of being literally impossible (that is, I suspect that a mathematical
 proof that it is impossible should be doable.)

What do you mean by intermediate nodes? It's possible to extend the
range of QKD depending on the underlying QKD protocol used; e.g. with
EPR-based QKD, extension is possible.


[...]

 No one is doing that, though. People are working on things like faster
 bit rates, as though the basic reasons the whole thing is useless were
 solved.

I don't think you can legitimately speak for the entire community as
to what they may or may not be doing. It's also interesting to me that
some arguably unrelated fields of research (e.g. quantum repeaters)
may turn out to be useful here.


  Importantly, however, is that if a classical system is used to do
  authentication, then the resulting QKD stream is *stronger* than the
  classically-encrypted scheme.

 Nope. It isn't. The system is only as strong as the classical system. If
 the classical system is broken, you lose any assurance that you aren't
 being man-in-the-middled.

No, it's not only as strong as the classical; it gets stronger if the
classical component works. Quoting from http://arxiv.org/abs/0902.2839v2
(The Case for Quantum Key Distribution):

"If authentication is unbroken during the first round of QKD, even if it
is only computationally secure, then subsequent rounds of QKD will be
information-theoretically secure."
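
To make the key-recycling point concrete, here's a minimal Python sketch
of the bookkeeping (the quantum link is just stubbed with random bytes,
and all names are made up -- it illustrates the argument, not QKD itself):

    # Only the *first* round's authentication relies on the initial,
    # possibly only computationally secure, pre-shared secret; every later
    # round is authenticated with fresh key material from the previous
    # round. The "quantum channel" is stubbed out with os.urandom.
    import os, hmac, hashlib

    def qkd_round(auth_key: bytes) -> bytes:
        # stand-in for one QKD exchange: 64 fresh secret bytes, with the
        # classical sifting messages authenticated under auth_key
        fresh = os.urandom(64)
        transcript = b"basis-choices-and-sifting"
        tag = hmac.new(auth_key, transcript, hashlib.sha256).digest()
        assert hmac.compare_digest(
            tag, hmac.new(auth_key, transcript, hashlib.sha256).digest())
        return fresh

    auth_key = b"pre-shared, merely computationally secure"  # round 1 only
    otp_pool = b""
    for _ in range(3):
        fresh = qkd_round(auth_key)
        auth_key = fresh[:32]   # recycle part of the output as next auth key
        otp_pool += fresh[32:]  # the rest is one-time-pad material
    print(len(otp_pool), "bytes of OTP material after 3 rounds")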


 Perry
 --
 Perry E. Metzger                pe...@piermont.com

-- 
silky

  http://www.programmingbranch.com/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Quantum Key Distribution: the bad idea that won't die...

2010-04-21 Thread silky
On Thu, Apr 22, 2010 at 12:04 PM, Perry E. Metzger pe...@piermont.com wrote:
   No one is doing that, though. People are working on things like faster
   bit rates, as though the basic reasons the whole thing is useless were
   solved.
 
  I don't think you can legitimately speak for the entire community as
  to what or not they may be doing.

 I think I can, actually. I know of very few people in computer security
 who take QKD seriously. I feel pretty safe making these sorts of
 statements.

But QKD is more about physics than computer security. Anyway, it seems
there is little purpose in continuing the discussion.


    Importantly, however, is that if a classical system is used to do
    authentication, then the resulting QKD stream is *stronger* than the
    classically-encrypted scheme.

   Nope. It isn't. The system is only as strong as the classical system. If
   the classical system is broken, you lose any assurance that you aren't
   being man-in-the-middled.

  No, it's not only as strong as the classical; it gets stronger if the
  classical component works. Quoting from:
  http://arxiv.org/abs/0902.2839v2 - The Case for Quantum Key
  Distribution

  If authentication is unbroken during the first round of QKD, even if
  it is only computationally secure, then subsequent rounds of QKD will
  be information-theoretically secure.

 Read what you just wrote.

 IF THE AUTHENTICATION IS UNBROKEN. That is, the system is only secure if
 the conventional cryptosystem is not broken -- that is, it is only as
 secure as the conventional system in use. Break the conventional system
 and you've broken the whole thing.

Yes, and I never stated the opposite (quote tree left intact). You were
saying that it is only as strong as the classical system. It has been
clearly shown that the security of a QKD system *after* authentication
is *stronger* than classical, due to the OTP.

If what you meant to say was "it is broken if authentication is
broken", then the answer is obviously yes. But the strength, in
cryptographic terms, is clearly better.


 Perry
 --
 Perry E. Metzger                pe...@piermont.com

-- 
silky

  http://www.programmingbranch.com/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Trusted timestamping

2009-10-05 Thread silky
On Mon, Oct 5, 2009 at 8:42 AM, Alex Pankratov a...@poneyhot.org wrote:
 Does anyone know what's the state of affairs in this area ?

 This is probably slightly off-topic, but I can't think of
 a better place to ask about this sort of thing.

 I have spent a couple of days looking around the Internet,
 and things appear to be .. erm .. hectic and disorganized.

 There is for example timestamp.verisign.com, but there is
 no documentation or description of it whatsoever. Even the
 website itself is broken. However it is used by Microsoft's
 code signing tool that embeds Verisign's timestamp into
 Authenticode signature of signed executable files.

 There is also a way to timestamp signed PDFs, but the there
 appears to be nothing _trusted_ about available Trusted
 Timestamping Authorities. Just a bunch of random companies
 that call themselves that way and provide no indication why
 they should actually be *trusted*. No audit practicies, not
 even a simple description of their backend setup. The same
 goes for the companies providing timestamping services for
 arbitrary documents, either using online interfaces or a
 downloadable software.

 There are also Digital Poststamps, which is a very strange
 version of a timestamping service, because their providers
 insist on NOT releasing the actual timestamp to the customer
 and then charging for each timestamp verification request.

 I guess my main confusion at the moment is why large CAs of
 Verisign's size not offering any standalone timestamping
 services.

 Any thoughts or comments ?

I have no useful comments other than to point you to a timestamping
service you may or may not have seen (I didn't see you mention it):
http://www.itconsult.co.uk/stamper/stampinf.htm. From what I've
noticed (just in passing), this seems to be the most popular stamping
service.


 Thanks,
 Alex

-- 
noon silky
  http://www.mirios.com.au/
  http://skillsforvilla.tumblr.com/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: What will happen to your crypto keys when you die?

2009-07-04 Thread silky
On Fri, Jul 3, 2009 at 4:37 AM, Jack Lloydll...@randombit.net wrote:
 On Thu, Jul 02, 2009 at 09:29:30AM +1000, silky wrote:
  A potentially amusing/silly solution would be to have one strong key
  that you change monthly, and then, encrypt *that* key, with a method
  that will be brute-forceable in 2 months and make it public. As long
  as you are constantly changing your key, no-one will decrypt it in
  time, but assuming you do die, they can potentially decrypt it while
  arranging your funeral :)

 This method would not work terribly well for data at rest. Copy the
 ciphertext, start the brute force process, and two months later you
 get out everything, regardless of the fact that in the meantime the
 data was reencrypted.

Indeed, hence the reason I suggested encrypting only your real key
with this method. By the time you're done decrypting that, you've only
got a stale key. Of course the approach isn't really practical; it's
only cute.
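
For fun, here's roughly what the wrapping could look like -- a toy Python
sketch with made-up names, using a deliberately tiny 2-byte secret so the
brute force finishes instantly (the hand-wavy part of the real scheme is
calibrating the keyspace to "about two months"):

    # The real key is XORed with a keystream derived from a tiny secret,
    # so anyone can brute-force the wrapper; you re-publish a fresh wrap
    # of the current key each month. Illustrative only.
    import os, hashlib, itertools

    MAGIC = b"KEY:"  # known prefix so the brute-forcer can spot success

    def stream(secret: bytes, n: int) -> bytes:
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:n]

    def wrap(real_key: bytes) -> bytes:
        secret = os.urandom(2)                  # deliberately tiny keyspace
        pt = MAGIC + real_key
        return bytes(a ^ b for a, b in zip(pt, stream(secret, len(pt))))

    def brute_force(blob: bytes) -> bytes:
        for guess in itertools.product(range(256), repeat=2):
            ks = stream(bytes(guess), len(blob))
            pt = bytes(a ^ b for a, b in zip(blob, ks))
            if pt.startswith(MAGIC):
                return pt[len(MAGIC):]
        raise ValueError("not found")

    wrapped = wrap(os.urandom(32))              # publish this each month
    print(brute_force(wrapped).hex())           # the stale key comes back out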


 -Jack

-- 
noon silky
http://lets.coozi.com.au/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: What will happen to your crypto keys when you die?

2009-07-02 Thread silky
On Wed, Jul 1, 2009 at 6:48 PM, Udhay Shankar Nud...@pobox.com wrote:
 Udhay Shankar N wrote, [on 5/29/2009 9:02 AM]:
  Fascinating discussion at boing boing that will probably be of interest
  to this list.
 
 http://www.boingboing.net/2009/05/27/what-will-happen-to.html

 Followup article by Cory Doctorow:

 http://www.guardian.co.uk/technology/2009/jun/30/data-protection-internet

A potentially amusing/silly solution would be to have one strong key
that you change monthly, and then encrypt *that* key with a method
that will be brute-forceable in 2 months, and make the result public.
As long as you are constantly changing your key, no-one will decrypt
it in time, but if you do die, they can potentially decrypt it while
arranging your funeral :)



 Udhay
 --
 ((Udhay Shankar N)) ((udhay @ pobox.com)) ((www.digeratus.com))

-- 
noon silky
http://lets.coozi.com.au/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread silky
On Tue, May 12, 2009 at 10:39 AM, Jerry Leichter leich...@lrw.com wrote:
 On May 11, 2009, at 8:27 PM, silky wrote:
 
  The local version needs access to the last committed file (to compare
  the changes) and the server version only keeps the 'base' file and the
  'changes' subsets.

 a)  What's a committed file.

I'm thinking of an SVN-style backup system. When you're done with all
your editing, you just commit the changes to go into the backup. As
part of the commit operation, it works out how much you've changed and
whether that warrants an entire re-encrypt and upload, or whether just
a segment can be sent.


 b)  As in my response to Victor's message, note that you can't keep a base
 plus changes forever - eventually you need to resend the base.  And you'd
 like to do that efficiently.

As discussed in my original post, the base is reset when the changes
are greater than 50% of the size of the original file.


  So yes, it does increase the amount of space required locally (not a
  lot though, unless you are changing often and not committing),

 Some files change often.  There are files that go back and forth between two
 states.  (Consider a directory file that contains a lock file for a program
 that runs frequently.)  The deltas may be huge, but they all collapse!

In that specific case, say an MS Access lock file, it can obviously
be ignored by the entire backup process.


  and
  will also increase the amount of space required on the server by 50%,
  but you need pay the cost somewhere, and I think disk space is surely
  the cheapest cost to pay.

 A large percentage increase - and why 50%? - scales up with the amount of
 storage.  There are, and will for quite some time continue to be,
 applications that are limited by the amount of disk one can afford to throw
 at them.  Such an approach drops the maximum size of file the application
 can deal with by 50%.

No particular reason for 50%; it can (and should) be configurable. The
point was just to set when the base file gets reset.


 I'm not sure what cost you think needs to be paid here.  Ignoring
 encryption, an rsync-style algorithm uses little local memory (I think the
 standard program uses more than it has to because it always works on whole
 files; it could subdivide them) and transfers close to the minimum you could
 possibly transfer.

The cost of not transferring an entirely new encrypted file just
because of a minor change.

-- 
noon silky

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread silky
How about this.

When you modify a file, the backup system attempts to see if it can
summarise your modifications into a file that is, say, less than 50%
of the file size.

So if you modify a 10kb text file and change only the first word, it
will encrypt that component (the word you changed) on its own, and
upload it separately from the file. On the other end, there will be a
system for merging these changes when a file is decrypted; the merged
file is actually assembled and decrypted there (so all operations of
this nature must be done *within* the system).

Then, when it reaches a critical point in file changes, it can just
upload the entire file anew, and replace its base copy and all the
parts.

It's slightly more difficult with binary files where the changes are
spread out over the file, but if those changes can still be summarised
relatively trivially, it should work.
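
Something like the following, as a rough Python sketch (the names are
made up, the "encryption" is a stand-in, and difflib just provides a
cheap changed-region estimate):

    # Decide between uploading an encrypted delta and a full re-encrypt,
    # using a ~50% changed-bytes threshold. Illustrative only.
    import difflib, pickle

    def encrypt(blob: bytes) -> bytes:
        return blob  # placeholder for a real cipher in an actual tool

    def commit(base: bytes, new: bytes, threshold: float = 0.5):
        sm = difflib.SequenceMatcher(a=base, b=new)
        ops = [op for op in sm.get_opcodes() if op[0] != "equal"]
        changed = sum(j2 - j1 for _, _, _, j1, j2 in ops)
        if changed > threshold * len(new):
            return "full", encrypt(new)          # reset the server's base
        # each delta entry: replace base[i1:i2] with the given new bytes
        delta = [(i1, i2, new[j1:j2]) for _, i1, i2, j1, j2 in ops]
        return "delta", encrypt(pickle.dumps(delta))

    base = b"the quick brown fox jumps over the lazy dog"
    new  = b"the quick brown cat jumps over the lazy dog"
    kind, blob = commit(base, new)
    print(kind, len(blob), "bytes uploaded instead of", len(new))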

--
silky

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread silky
On Tue, May 12, 2009 at 10:22 AM, Jerry Leichter leich...@lrw.com wrote:
 On May 11, 2009, at 7:06 PM, silky wrote:
  How about this.
 
  When you modify a file, the backup system attempts to see if it can
  summarise your modifications into a file that is, say, less then 50%
  of the file size.
 
  So if you modify a 10kb text file and change only the first word, it
  will encrypt that component (the word you changed) on it's own, and
  upload that seperate to the file. On the other end, it will have a
  system to merging these changes when a file is decrypted. It will
  actually be prepared and decrypted (so all operations of this nature
  must be done *within* the system).
 
  Then, when it reaches a critical point in file changes, it can just
  upload the entire file new again, and replace it's base copy and all
  the parts.
 
  Slightly more difficult with binary files where the changes are spread
  out over the file, but if these changes can still be summarised
  relatively trivially, it should work.

 To do this, the backup system needs access to both the old and new version
 of the file.  rsync does, because it is inherently sync'ing two copies,
 usually on two different systems, and we're doing this exactly because we
 *want* that second copy.

The local version needs access to the last committed file (to compare
the changes), and the server version only keeps the 'base' file and
the 'changes' subsets.

So yes, it does increase the amount of space required locally (not a
lot though, unless you are changing often and not committing), and
will also increase the amount of space required on the server by 50%,
but you need to pay the cost somewhere, and I think disk space is
surely the cheapest cost to pay.
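
The reconstruction side is cheap too. A small Python sketch (same
hypothetical delta format as the sketch in my earlier message; the
server only ever stores encrypted blobs, so this runs client-side after
download and decryption):

    # Rebuild the current file from the stored base plus successive deltas.
    import pickle

    def apply_delta(prev: bytes, delta_blob: bytes) -> bytes:
        out = bytearray(prev)
        # apply replacements from the end so earlier offsets stay valid
        for i1, i2, repl in sorted(pickle.loads(delta_blob), reverse=True):
            out[i1:i2] = repl
        return bytes(out)

    def reconstruct(base: bytes, deltas: list) -> bytes:
        current = base
        for d in deltas:
            current = apply_delta(current, d)
        return current

    base = b"hello world"
    deltas = [pickle.dumps([(0, 5, b"goodbye")])]
    print(reconstruct(base, deltas))   # b'goodbye world'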


 If you want the delta computation to be done locally, you need two local
 copies of the file - doubling your disk requirements.  In principle, you
 could do this only at file close time, so that you'd only need such a copy
 for files that are currently being written or backed up.  What happens if
 the system crashes after it's updated but before you can back it up?  Do you
 need full data logging?

I think this is resolved by saving only the last committed version.


 Victor Duchovni suggested using snapshots, which also give you the effect of
 a local copy - but sliced differently, as it were, into blocks written to
 the file system over some defined period of time.  Very useful, but both it
 and any other mechanism must sometimes deal with worst cases - an erase of
 the whole disk, for example; or a single file that fills all or most of the
 disk.

                                                        -- Jerry

-- 
noon silky

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Solving password problems one at a time, Re: The password-reset paradox

2009-02-24 Thread silky
On Tue, Feb 24, 2009 at 8:30 AM, Ed Gerck edge...@nma.com wrote:
[snip]
 Thanks for the comment. The BofA SiteKey attack you mention does not work
 for the web access scheme I mentioned because the usercode is private and
 random with a very large search space, and is always sent after SSL starts
 (hence, remains private).

This is meaningless. What attack is the 'usercode' trying to prevent?
You said it's trying to authenticate the site to the user. It doesn't
do this, because a 3rd-party site can take the usercode and send it to
the 'real' site.


[snip]
 I'm referring to SMTP authentication with implicit SSL. The same
 usercode|password combination is used here as well, but the usercode is
 prepended to the password while the username is the email address. In this
 case, there is no anti-phishing needed.

Eh? This still doesn't make any particular amount of sense.


[snip]
 This case has the  same BofA SiteKey vulnerability. However, if that is
 bothersome, the scheme can also send a timed nonce to a cell phone, which is
 unknown to the attacker. This is explained elsewhere in
 http://nma.com/papers/zsentryid-web.pdf

Anything you do can be simulated by an evil site. Sending a key to a
phone is a good idea, but still, in the end, useless, because the evil
site can simulate it by passing along whatever request the user made
to that site.


[snip]
 If the threat model is that you can learn or know the RNG a given site is
 using then the answer is to use a hardware RNG.

No, it isn't.


 The point is that two passwords would still not have an entropy value that
 you can trust, as it all would depend on user input.

*shrug* make one of them autogenerated. Doesn't matter. You're just
adding complexity for no real benefit.


 That data is just a key that is the same for /all/ users. It is not
 user-specific. its knowledge does not provide information to attack any
 account.

Well, I'm sorry, but you don't understand your own system then.
Obviously it must contain information that can be used to 'attack' a
given account, because you used it to generate something. The function
you used did something, so it can be repeated by anyone who has all
the inputs.


 Sorry if it wasn't clear. Please have a second reading.

Indeed.


 Cheers,
 Ed Gerck

-- 
noon silky
http://www.boxofgoodfeelings.com/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Solving password problems one at a time, Re: The password-reset paradox

2009-02-24 Thread silky
On Tue, Feb 24, 2009 at 12:23 PM, Ed Gerck edge...@nma.com wrote:
[snip]
 What usercode? The point you are missing is that there are 2^35 private
 usercodes and you have no idea which one matches the email address that you
 want to sent your phishing email to.

What you're missing is that it doesn't matter. The user enters the
usercode! So they enter it into the phishing site, which passes it
along.

-- 
noon silky
http://www.boxofgoodfeelings.com/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Solving password problems one at a time, Re: The password-reset paradox

2009-02-23 Thread silky
, overload 0 with O, 1 with
 I, for example), will reduce the entropy that can be added to (say) 35
 bits. Considering that the average poor, short password chosen by users has
 between 20 and 40 bits of entropy, the end result is expected to have from
 55 to 75 bits of entropy, which is quite strong.

Doesn't really matter given it prevents nothing. Sites may as well
just ask for two passwords.


 This can be made larger by,
 for example, refusing to accept passwords that are less than 8 characters
 long, by and adding more characters to the usercode alphabet and/or usercode
 (a 7-character code can still be mnemonic and human friendly).

 The fourth problem, and the last important password problem that would still
 remain, is the vulnerability of password lists themselves, that could be
 downloaded and cracked given enough time, outside the access protections of
 online login (three-strikes and you're out). This is also solved in our
 scheme by using implicit passwords from a digital certificate calculation.
 There are no username and password lists to be attacked in the first place.
 No target, hence not threat.

Eh? So what data was used to do the digital certificate calculation?
That's still around.


 In other words, to solve the fourth password problem we shift the
 information security solution space. From the yet-unsolved security problem
 of protecting servers and clients against penetration attacks to a
 connection reliability problem that is easily solved today.

 This approach of solving password problems one at a time, shows that the
 big problem of passwords is now reduced to rather trivial data management
 functions -- no longer usability or data security functions.

Sorry, you've solved nothing.


 Usability considerations still must be applied, of course, but not to solve
 the security problem. I submit that trying to solve the security problem
 while facing usability restrictions is what has prevented success so far.

 Comments are welcome. More at www.Saas-ST.com

Stop spamming?


 Best regards,
 Ed Gerck
 e...@gerck.com

-- 
noon silky
http://www.boxofgoodfeelings.com/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: PlayStation 3 predicts next US president

2007-12-11 Thread silky
On Dec 11, 2007 5:06 AM, Allen [EMAIL PROTECTED] wrote:
 What puzzles me in all this long and rather arcane discussion is
 why isn't the solution of using a double hash - MD5 *and* SHA
 whatever. The odds of find a double collision go way up.

 Some open source software people are already doing this. I've
 played around with the sample files that are out there and find
 an easy way to do this but I don't have either the horsepower or
 skill to be at all definitive.

 My gut tells me that using two processes that use different
 algorithms, even though compromised, will raise the bar so high
 that it would be secure for a long time.

 At my skill level and horsepower I can't find even a single way
 to do this with CRC32 and MD5. Granted, that certainly doesn't
 mean a whole lot.

Work has actually been done on this exact topic.

One link is here: http://cryptography.hyperlink.cz/2004/otherformats.html

I think there may be more; I'm not sure.
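
For what it's worth, the "double hash" construction itself is trivial to
state -- roughly the following Python, with SHA-256 standing in for "SHA
whatever". The caveat is that known results (Joux-style multicollisions)
suggest the concatenation isn't much stronger against collisions than
the stronger hash on its own:

    # Concatenate two different digests of the same data.
    import hashlib

    def double_hash(data: bytes) -> bytes:
        return hashlib.md5(data).digest() + hashlib.sha256(data).digest()

    print(double_hash(b"example document").hex())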


 But to take a real world example, a safety deposit box, the two
 keys have to work together to open the box. It really does not
 matter is one is a Yale and the other a combination, either one
 of which are easily compromised by themselves, but together you
 would have to find both at the same time to open the box, a lot
 tougher problem.

 Best,

 Allen


 Francois Grieu wrote:
  [EMAIL PROTECTED] wrote:
 
   Dp := any electronic document submitted by some person, converted to its
 canonical form
   Cp := a electronic certificate irrefutably identifying the other person
 submitting the document
   Cn := certificate of the notary
   Tn := timestamp of the notary
   S() := signature of the notary
 
   S( MD5(Tn || Dp || Cp || Cn) ).
 
  In this context, the only thing that guards agains an attack by
  some person is the faint hope that she can't predict the Tn
  that the notary will use for a Dp that she submits.
 
  That's because if Tn is known (including chosen) to some person,
  then (due to the weakness in MD5 we are talking about), she can
  generate Dp and Dp' such that
S( MD5(Tn || Dp || Cp || Cn) ) = S( MD5(Tn || Dp' || Cp || Cn) )
  whatever Cp, Cn and S() are.
 
  If Tn was hashed after Dp rather than before, poof goes security.
 
 
Francois Grieu
 
  -
  The Cryptography Mailing List
  Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
 

 -
 The Cryptography Mailing List
 Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]




-- 
mike
http://lets.coozi.com.au/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: password strengthening: salt vs. IVs

2007-11-01 Thread silky
On Oct 30, 2007 6:24 AM,  [EMAIL PROTECTED] wrote:
 So back in the bad old days when hashing was DES encryption of the
 zero vector with a fixed key, someone came up with salt as a password
 strengthening mechanism.

 I'm not quite sure why it was called salt.

 It perturbed the S-boxes in DES IIRC, but essentially it was a known
 bit of text that was an input to the algorithm that varied between
 entries, like an IV does with encryption.

 If there isn't already a term for this, I'm going to call this
 general concept individuation, or possibly uniquification.

 Nowadays with strong hash algorithms, but rainbow tables and
 low-entropy passwords as the threat, I'm wondering what the best
 practice is.

 I was thinking of simply prepending a block of text to each passphrase
 prior to hashing, and storing it with the hash - similar to salts in
 passwd entries.

well what you're describing is quite classically a salt, imho.


 It should have at least as much entropy as the hash output, maybe a
 little more in case there's collisions.  If it were uniformly random,
 you could simply XOR it with the passphrase prior to hashing and save
 yourself some cycles, right?

well no. i mean, to xor it (or probably what you mean: to otp it)
you'd need a salt whose length is equal to the input. that would then
mean that short inputs would result in short salts. i.e. a password of
'a' may result in a salt of 'x', and hash('a' ^ 'x') is hardly secure
against a rainbow table.

so you're better off maintaining the salt in a separate location
(after all, the threat model is that someone takes the db, has a list
of all the hashes, and then calculates out the passwords) and still
prepending it before the main passphrase.

you may consider, however, that if this salt is as long as one block
of the input to the hash algorithm, it effectively becomes a new iv.
but what that has to do with anything, i don't know ...
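
a minimal sketch of the per-entry-salt approach, in python (a real
system would use a slow kdf like pbkdf2/bcrypt/scrypt rather than one
bare sha-256 call; the salt handling is the point here):

    # fresh random salt per entry, stored in the clear next to the hash,
    # prepended to the passphrase before hashing
    import os, hashlib, hmac

    def make_entry(passphrase: str):
        salt = os.urandom(32)  # at least as long as the hash output
        digest = hashlib.sha256(salt + passphrase.encode()).digest()
        return salt, digest

    def check(passphrase: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.sha256(salt + passphrase.encode()).digest()
        return hmac.compare_digest(candidate, digest)

    salt, digest = make_entry("correct horse")
    print(check("correct horse", salt, digest), check("wrong", salt, digest))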


 Would it be appropriate to call this salt, an IV, or some new term?

 --
 Life would be so much easier if it was open-source.
 URL:http://www.subspacefield.org/~travis/ Eff the ineffable!
 For a good time on my UBE blacklist, email [EMAIL PROTECTED]


-- 
mike
http://lets.coozi.com.au/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]