Re: More on in-memory zeroisation
Leichter, Jerry wrote:

> On Wed, 12 Dec 2007, Thierry Moreau wrote:
> | Date: Wed, 12 Dec 2007 16:24:43 -0500
> | From: Thierry Moreau <[EMAIL PROTECTED]>
> | To: "Leichter, Jerry" <[EMAIL PROTECTED]>
> | Cc: Peter Gutmann <[EMAIL PROTECTED]>, cryptography@metzdowd.com
> | Subject: Re: More on in-memory zeroisation
> |
> | /* testf.c */
> | #include <stdio.h>
> | #include <string.h>
> |
> | typedef void *(*fpt_t)(void *, int, size_t);
> |
> | void f(fpt_t arg)
> | {
> |     if (memset == arg)
> |         printf("Hello world!\n");
> | }
> |
> | /* test.c */
> | #include <stdlib.h>
> | #include <string.h>
> |
> | typedef void *(*fpt_t)(void *, int, size_t);
> |
> | extern void f(fpt_t arg);
> |
> | int main(int argc, char *argv[])
> | {
> |     f(memset);
> |     return EXIT_SUCCESS;
> | }
> |
> | /* I don't want to argue too theoretically.
> |
> | - Thierry Moreau */
>
> I'm not sure what you are trying to prove here. Yes, I believe that in
> most implementations, this will print "Hello world!\n". Is it, however,
> a strictly conforming program (I think that's the right standardese) -
> i.e., are the results guaranteed to be the same on all conforming
> implementations? I think you'll find it difficult to prove that.

If there is a consensus among conforming implementation developers that
the above program is conforming, that's a good enough "proof" for me.

As a consequence of the alleged consensus above, my understanding of the
C standard would prevail, and (memset)(?,0,?) would refer to an external
linkage function, which would guarantee (to the strength of the above
consensus) resetting an arbitrary memory area for secret intermediate
result protection.

Reading ANSI X3.159-1989, I believe there would be such a consensus, and
I find it quite obvious. You may disagree, and I will argue no further.

Regards,

--
- Thierry Moreau

- The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: More on in-memory zeroisation
On Wed, 12 Dec 2007, Thierry Moreau wrote:
| Date: Wed, 12 Dec 2007 16:24:43 -0500
| From: Thierry Moreau <[EMAIL PROTECTED]>
| To: "Leichter, Jerry" <[EMAIL PROTECTED]>
| Cc: Peter Gutmann <[EMAIL PROTECTED]>, cryptography@metzdowd.com
| Subject: Re: More on in-memory zeroisation
|
| /* testf.c */
| #include <stdio.h>
| #include <string.h>
|
| typedef void *(*fpt_t)(void *, int, size_t);
|
| void f(fpt_t arg)
| {
|     if (memset == arg)
|         printf("Hello world!\n");
| }
|
| /* test.c */
| #include <stdlib.h>
| #include <string.h>
|
| typedef void *(*fpt_t)(void *, int, size_t);
|
| extern void f(fpt_t arg);
|
| int main(int argc, char *argv[])
| {
|     f(memset);
|     return EXIT_SUCCESS;
| }
|
| /* I don't want to argue too theoretically.
|
| - Thierry Moreau */

I'm not sure what you are trying to prove here. Yes, I believe that in
most implementations, this will print "Hello world!\n". Is it, however,
a strictly conforming program (I think that's the right standardese) -
i.e., are the results guaranteed to be the same on all conforming
implementations? I think you'll find it difficult to prove that.

BTW, it *might* not even be true in practice if you build your program
as multiple shared libraries!

-- Jerry
Re: More on in-memory zeroisation
/* testf.c */
#include <stdio.h>
#include <string.h>

typedef void *(*fpt_t)(void *, int, size_t);

void f(fpt_t arg)
{
    if (memset == arg)
        printf("Hello world!\n");
}

/* test.c */
#include <stdlib.h>
#include <string.h>

typedef void *(*fpt_t)(void *, int, size_t);

extern void f(fpt_t arg);

int main(int argc, char *argv[])
{
    f(memset);
    return EXIT_SUCCESS;
}

/* I don't want to argue too theoretically.

- Thierry Moreau */
Re: PlayStation 3 predicts next US president
* William Allen Simpson:

> Assuming,
>
>   Dp := any electronic document submitted by some person, converted
>         to its canonical form
>   Cp := an electronic certificate irrefutably identifying the other
>         person submitting the document
>   Cn := certificate of the notary
>   Tn := timestamp of the notary
>   S() := signature of the notary
>
>   S( MD5(Tn || Dp || Cp || Cn) ).
>
> Of course, I'm sure the formula could be improved, and there are
> traditionally fields identifying the algorithms used, etc. -- or
> something else I've forgotten off the top of my head -- but please
> argue about the actual topic of this thread, instead of incessant
> strawmen.

The problem is not the outer MD5 (explicitly mentioned in your
description), but that Dp is typically (well, to the extent such
services have been deployed) some kind of hash. This has the advantage
that the timestamping service does not need to know the contents of the
document. On the other hand, if the timestamping service archives Dp
and can reveal it in a dispute, evil twins can be identified and
analyzed -- which undermines the submitting party's claim that it
submitted the second document instead of the first.

Of course, this is actually cheating by substituting proven protocols
for fragile cryptography. And the result is still open to
interpretation, but all evidence is.
gauging interest in forming an USA chapter of IISP
Would anyone on this list be interested in forming a USA chapter of the
Institute of Information Security Professionals (IISP, www.instisp.org)?
I'm finding it rather difficult to attend events, etc., that are only
in London.

- Alex

--
Alex Alten
[EMAIL PROTECTED]
Google Tech Talk : Theory and Practice of Cryptography
I have yet to watch it.

http://video.google.com/videoplay?docid=2899172465808407804

Description: Topics include: Introduction to Modern Cryptography, Using
Cryptography in Practice and at Google, Proofs of Security and Security
Definitions, and A Special Topic in Cryptography.

This talk is one in a series hosted by Google University: Wednesdays,
11/28/07 - 12/19/07 from 1-2pm.

Speaker: Steve Weis

Steve Weis received his PhD from the Cryptography and Information
Security group at MIT, where he was advised by Ron Rivest. He is a
member of Google's Applied Security (AppSec) team and is the technical
lead for Google's internal cryptographic library, KeyMaster.

-Ryan
Re: More on in-memory zeroisation
| > If the function is defined as I suggested - as a static or inline -
| > you can, indeed, take its address. (In the case of an inline, this
| > forces the compiler to materialize a copy somewhere that it might
| > not otherwise have produced, but not to actually *use* that copy,
| > except when you take the address.) You are allowed to invoke the
| > function using the address you just took. However, what in that
| > tells you that the compiler - knowing exactly what code will be
| > invoked - can't elide the call?
|
| Case of static function definition: the standard says that standard
| library headers *declare* functions, not *define* them.

Where does it say it *can't* define them? How could a
Standard-conforming program tell the difference? If no
Standard-conforming program can tell the difference between two
implementations, it makes no difference what you, as an omniscient
external observer, might know - they are either both compatible with
the Standard, or neither is.

| Case of inline: I don't know if inline definition falls in the
| standard definition of declaration.

It makes no difference.

| Also, the standard refers to these identifiers as external
| linkage. This language *might* not create a mandatory provision if
| there was a compelling reason to have a static or inline
| implementation, but I doubt the very infrequent use of (memset)(?,0,?)
| instead of memset(?,0,?) is a significant optimization opportunity.
| The compiler writer risks a non-compliance assessment in making such a
| stretched reading of the standard in the present instance, for no gain
| in any benchmark or production software speed measurement.
|
| Obviously, a pointer to an external linkage scope function must adhere
| to the definition of the pointer equality (==) operator.

What do you think "the definition of pointer equality" actually is?
Keep in mind that you need to find the definition *in the Standard*.
The *mathematical* definition is irrelevant.

| Maybe a purposely stretched reading of the standard might let you
| make your point. I don't want to argue too theoretically. Peter and I
| just want to clear memory!

Look, I write practical programs all the time - mainly in C++ recently,
but the same principles apply. My programs tend to be broadly portable
across different compilers and OS's. I've been doing this for close to
30 years. I stick to the published standards where possible, but
there's no way to avoid making assumptions that go beyond the standards
in a few cases: Every standard I know of is incomplete, and no
implementation I've ever worked with is *really* 100% compliant.

It's one thing to point out a set of practical techniques for getting
certain kinds of things done. It's another to make unsupportable
arguments that those practical techniques are guaranteed to work.
There's tons of threaded code out there, for example. Given the lack
of any discussion of threading in existing language standards, most of
it skates on thin ice. Some things are broadly agreed upon, and
"quality of implementation" requirements make it unlikely that a
compiler will break them. Other things are widely believed by
developers to have been agreed upon, but have *not* really been agreed
upon by providers. Programs that rely on these things - e.g., that C++
function-scope static initializers will be run in a thread-safe way -
will fail here and there, because in fact compiler developers don't
even try to support them. Because of the ever-growing importance of
threaded programs, this situation is untenable, and in fact the
language groups are starting to grapple with how to incorporate
threads.

Security issues are a similar issue. The fact is, secure programming
sometimes requires primitives that the standards simply don't provide.
Classic example: For a long time, there was *no* safe way to use
sprintf(), since there was no a priori way of determining how long the
output string might be. People had various hacks, but all of them
could be fooled, unless you pretty much re-implemented sprintf()
yourself. snprintf() fixed that.

There is, today, no way to guarantee that memset() will be run, within
the confines of the standard. This is a relatively minor oversight - C
has seen such issues as important since volatile was introduced, well
before the language was standardized. I expect we'll see some help on
this in a future version. In the meanwhile, it would be nice if
compiler developers would agree on some extra-Standard mechanisms. The
gcc hack could be a first step - but it should be written down, not
just something a few insiders know about. Standards are supposed to
grow by standardizing proven practice, not by innovation.

The problem with unsupportable assumptions that some hack or another
provides a solution is that they block *actual* solutions. By all
means use them where necessary - but push for better approaches.

-- Jerry
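For what it's worth, the folk technique most often seen in practice is to perform the stores through a volatile-qualified pointer. The following is a sketch of that workaround, not something any version of the Standard guarantees; the name secure_memzero is invented for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Overwrite n bytes through a volatile-qualified pointer.  Because each
 * store is a volatile access, the compiler must assume it is observable
 * and, in practice, cannot elide it as a dead store.  This is a widely
 * used workaround, not a guarantee from C89/C99. */
static void secure_memzero(void *p, size_t n)
{
    volatile unsigned char *vp = (volatile unsigned char *)p;

    while (n--)
        *vp++ = 0;
}
```

A caller uses it exactly like memset(buf, 0, len); the price is a byte-at-a-time loop in place of an optimised block store.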
PunchScan voting protocol
Hi Folks --

I was wondering to what extent the folks on this list have taken a look
at the PunchScan voting scheme: http://punchscan.org/

The site makes the following claims:

>> End-to-end cryptographic independent verification, or E2E, is a
>> mechanism built into an election that allows voters to take a piece
>> of the ballot home with them as a receipt. This receipt does not
>> allow voters to prove to others how they voted, but it does permit
>> them to:
>>
>>  * Verify that they have properly indicated their votes to election
>>    officials (cast-as-intended).
>>  * Verify with extremely high assurance that all votes were counted
>>    properly (counted-as-cast).
>>
>> Voters can check that their vote actually made it to the tally, and
>> that the election was conducted fairly.

Those seem at first glance to be a decent set of claims, from a
public-policy point of view. If somebody would prefer a different set
of claims, please explain.

PunchScan contains some nifty crypto, but IMHO this looks like a
classic case of too much crypto and not enough real security. I am
particularly skeptical of one of the FAQ answers:
http://punchscan.org/faq-protections.php#5

Several important steps in the process must be carried out in secret,
and if there is any leakage, there is unbounded potential for
vote-buying and voter coercion. The Boss can go to each voter and make
the usual silver-or-lead proposition: Vote as I say, and then show me
your voting receipt. I'll give you ten dollars. But if I find out you
voted against me, I'll kill you. The voter cannot afford to take the
chance that even a small percentage of the ballot-keys leak out.

1) It would be nice to see some serious cryptological protection of
   election processes and results.
2a) I don't think we're there yet.
2b) In particular I don't think PunchScan really solves "the" whole
    problem.
3) I'd love to be wrong about item (2).

Does anybody see a way to close the gaps?
Re: PlayStation 3 predicts next US president
William Allen Simpson wrote:

> The whole point of a notary is to bind a document to a person. That
> the person submitted two or more different documents at different
> times is readily observable. After all, the notary has the
> document(s)!

The notary does not want to have the documents, or to have the
necessary apparatus to produce them on demand. Actually existing
notaries do not keep the documents.

Again, you are trying to invent a protocol that works around the flaws
in MD5. No doubt a competent engineer can create such a protocol, but a
competent engineer would much prefer not to have flaws he needs to work
around. Further, there is a long history of cryptographic disasters,
such as WiFi, where supposedly competent engineers set to working
around flaws, and instead created more and bigger flaws.

Even if someone really is a competent engineer, and perfectly capable
of producing a protocol that works around the known flaws, it is hard
for anyone else to tell if he really is competent enough to work around
the flaws, or has produced, like WiFi and so many Microsoft projects,
an even bigger hole than that which he was trying to fix.
Re: More on in-memory zeroisation
Leichter, Jerry wrote:

> If the function is defined as I suggested - as a static or inline -
> you can, indeed, take its address. (In the case of an inline, this
> forces the compiler to materialize a copy somewhere that it might not
> otherwise have produced, but not to actually *use* that copy, except
> when you take the address.) You are allowed to invoke the function
> using the address you just took. However, what in that tells you that
> the compiler - knowing exactly what code will be invoked - can't
> elide the call?

Case of static function definition: the standard says that standard
library headers *declare* functions, not *define* them.

Case of inline: I don't know if inline definition falls in the standard
definition of declaration.

Also, the standard refers to these identifiers as external linkage.
This language *might* not create a mandatory provision if there was a
compelling reason to have a static or inline implementation, but I
doubt the very infrequent use of (memset)(?,0,?) instead of
memset(?,0,?) is a significant optimization opportunity. The compiler
writer risks a non-compliance assessment in making such a stretched
reading of the standard in the present instance, for no gain in any
benchmark or production software speed measurement.

Obviously, a pointer to an external linkage scope function must adhere
to the definition of the pointer equality (==) operator.

Maybe a purposely stretched reading of the standard might let you make
your point. I don't want to argue too theoretically. Peter and I just
want to clear memory!

Kind regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada H2M 2A1

Tel.: (514)385-5691
Fax: (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]
Re: Intercepting Microsoft wireless keyboard communications
Steven M. Bellovin wrote:

> Believe it or not, I thought of CFB...
>
> Sending keep-alives will do nasties to battery lifetime, I suspect;
> most of the time, you're not typing. As for CFB -- with a 64-bit
> block cipher (you want them to use DES? they're not going to think of
> anything different), it will take 9 keypresses to flush the buffer.
> With AES (apparently your assumption), it will take 17 keypresses.
> This isn't exactly muggle-friendly. Just think of the text in the
> instructions... Redundancy? I wonder how much is needed to avoid
> problems. It has to be a divisor of the cipher block size, which more
> or less means 8 extra bits. How much will that cost in battery life?

Keypress signals, or change-of-keyboard-state signals, do not need to
be a divisor of the cipher block size. At every block boundary, the
keyboard transmits a special signal in the clear that signifies the
block boundary.

Any time that no key has been pressed for a while, then when a key is
finally pressed, the keyboard transmits a bunch of no-ops sufficient to
ensure that the recipient has recently received an entire block,
followed by a complete description of the current keyboard state, so
that the recipient knows what the change-of-keyboard-state signals are
changes from. Conversely, when the receiver has not received any signal
for a while, it expects such a signal, and distrusts anything else.

Muggle unfriendliness only occurs if the user is typing through a boot
up, which is unlikely to terribly surprise the user, who is probably
banging away at the same keys over and over again waiting for a
reaction, or if the user wanders out of range while typing, then
wanders back in range again while still typing, in which case again the
user is unlikely to be very surprised.
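The flush-with-no-ops idea above can be sketched in a few lines of C. BLOCK_LEN and KBD_NOOP are invented names for illustration; the point is only that padding the queued key events out to a cipher-block boundary lets the receiver decrypt a whole block immediately instead of waiting for 8 more keypresses:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define BLOCK_LEN 8      /* e.g. a 64-bit block cipher */
#define KBD_NOOP  0x00   /* reserved "no event" code (assumed) */

/* Pads `pending` (holding `n` queued event bytes; the buffer's capacity
 * must be at least n rounded up to a block, and one block minimum) out
 * to a full cipher block with no-op codes, and returns the padded
 * length.  The keyboard would call this when a key is finally pressed
 * after an idle period, so the receiver always gets whole blocks. */
static size_t pad_to_block(unsigned char *pending, size_t n)
{
    size_t padded = ((n + BLOCK_LEN - 1) / BLOCK_LEN) * BLOCK_LEN;

    if (padded == 0)
        padded = BLOCK_LEN;          /* always flush at least one block */
    memset(pending + n, KBD_NOOP, padded - n);
    return padded;
}
```

The receiver simply discards KBD_NOOP bytes after decryption, so the event encoding need not divide the block size, exactly as argued above.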
Re: Intercepting Microsoft wireless keyboard communications
On 12/10/07, Steven M. Bellovin <[EMAIL PROTECTED]> wrote:

> Believe it or not, I thought of CFB...

What about PCFB to get around the block issue? I remember freenet using
it that way...

--
Taral <[EMAIL PROTECTED]>
"Please let me know if there's any further trouble I can give you."
-- Unknown
Re: More on in-memory zeroisation
| > However, that doesn't say anything about whether f is actually
| > invoked at run time. That comes under the "acts as if" rule: If the
| > compiler can prove that the state of the C (notional) virtual
| > machine is the same whether f is actually invoked or not, it can
| > elide the call. Nothing says that memset() can't actually be
| > defined in the appropriate header, as a static (or, in C99, inline)
| > function.
|
| The standard actually says "... it is permitted to take the address
| of a library function even if it is defined as a macro ...". The
| standard works for me as a source code author who needs an
| execution-aware memcpy function from time to time. Overworked GCC
| contributors should work to comply with the standard, not to address
| Peter, Thierry, and whoever's wildest dreams.

If the function is defined as I suggested - as a static or inline - you
can, indeed, take its address. (In the case of an inline, this forces
the compiler to materialize a copy somewhere that it might not
otherwise have produced, but not to actually *use* that copy, except
when you take the address.) You are allowed to invoke the function
using the address you just took. However, what in that tells you that
the compiler - knowing exactly what code will be invoked - can't elide
the call?

By the way, you might wonder what happens if two different CU's take
the address of memset and we then compare them. In this kind of
implementation, they will be unequal - but in fact nothing in the
Standard says they can't be! A clever compiler could have all kinds of
reasons to produce multiple copies of the same function. All you can
say is that if two function pointers are equal, they point to the same
function. No converse form is provable within the Standard.

You might try something like:

    typedef void *(*memset_ptr)(void *, int, size_t);

    volatile memset_ptr p_memset = &(memset);

Then you can invoke (*p_memset). But if you do this in the same
compilation unit, a smart compiler that does value propagation could
determine that it knows where p_memset points, and that it knows what
the code there is, so it can go ahead and do its deeper analysis.
Using:

    volatile memset_ptr p_memset = &(memset);

in one compilation unit and then:

    extern volatile memset_ptr p_memset;

in another will keep you safe from single-CU optimizations, but nothing
in the Standard says that's all there are. Linker-based optimizations
could have the additional information that nowhere in the program can
p_memset be changed, and further that p_memset is allocated to regular
memory, and in principle the calls could be elided at that point. Mind
you, I would be astounded if any compiler/linker system actually
attempted such an optimization ... but that doesn't make it illegal
within the language of the Standard.

| > Then the compiler can look at the implementation and "prove" that a
| > memset() to a dead variable can be elided
|
| It can't prove much in the case of (memset)()

In principle (I'll grant you, probably not in practice), it can prove
quite a bit - and certainly enough to justify eliding the call.

-- Jerry
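Written out as a self-contained compilation unit, the volatile function-pointer idea looks like this. It is a sketch of the technique being debated, not a guarantee: as the message notes, nothing in the Standard forbids an aggressive whole-program optimiser from defeating it.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A volatile pointer to memset: the compiler must reload p_memset at
 * every call site and so, in practice, cannot prove which function is
 * invoked - which blocks dead-store elimination of the memset.  This
 * defeats single-CU optimisation only; linker-level analysis could
 * still, in principle, elide the call. */
typedef void *(*memset_ptr)(void *, int, size_t);

static volatile memset_ptr p_memset = memset;

static void zeroize(void *buf, size_t len)
{
    (*p_memset)(buf, 0, len);
}
```

Moving the definition of p_memset into a separate compilation unit, as suggested above, strengthens the effect further.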
RE: More on in-memory zeroisation
| > Then the compiler can look at the implementation and "prove" that a
| > memset() to a dead variable can be elided
|
| One alternative is to create zero-ing functions that wrap memset()
| calls with extra instructions that examine some of the memory, log a
| message, and exit the application if the memory is not zero. This has
| two benefits: 1) it guarantees the compiler will leave the memset()
| in place, and 2) it guarantees the memset() worked. It does incur a
| few extra instructions though.
|
| I guess it is possible that the compiler would somehow optimize the
| memset to only zero the elements subsequent code compares... Hmmm
|
| [Of course your application could be swapped out, and just before the
| memset call write your valuable secrets to the system swap file on
| disk... :-( ]

In practice, with an existing compiler you are not in a position to
change, these kinds of games are necessary. If you're careful, you look
at the generated code to make sure it does what you expect. But this is
a very bad - and potentially very dangerous - approach. You're relying
on the stupidity of the compiler - and on the compiler not becoming
more intelligent over time. Are you really prepared to re-check the
generated code every time the compiler is rev'ed?

There sometimes needs to be an explicit way to tell the compiler that
some operation *must* be done in some way, no matter what the compiler
thinks it knows. There's ample precedent for this. For example,
floating point arithmetic doesn't exactly follow the usual laws of
arithmetic (e.g., it's not associative, if you consider overflows), so
if you know what you are doing in constructing an FP algorithm, you
have to have a way to tell the compiler "Yes, I know you think you can
improve my code here, but just leave it alone, thank you very much."
And all programming languages that see numerical programming as within
their rubric provide standardized, documented ways to do just that.

C has "volatile" so that you can tell the compiler that it may not
elide or move operations on a variable, even when those operations have
no effects visible in the C virtual machine. (The qualifier was added
to support memory-mapped I/O, where there can be locations that look
like memory but have arbitrarily different semantics from normal
memory.) And so on.

You can almost, but not quite, get the desired effect for memory
zeroization with "volatile". Something more is needed, and software
that will be used to write cryptographic algorithms needs access to
that "something more" (to be pinned down explicitly).

-- Jerry
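The zero-then-verify wrapper quoted at the top of this message can be sketched as follows. It is a belt-and-braces practical measure, not a Standard-level guarantee: reading the bytes back through a volatile-qualified pointer makes the stores observable in practice, and the check aborts if the memset was somehow skipped.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Zero the buffer, then read every byte back through a volatile
 * pointer and abort if any byte is nonzero.  Because the verification
 * observably depends on the stores, a compiler cannot treat the
 * memset() as a dead store without changing visible behaviour -
 * a practical defence, not a promise extracted from the Standard. */
static void checked_zeroize(void *buf, size_t len)
{
    volatile unsigned char *p = (volatile unsigned char *)buf;
    size_t i;

    memset(buf, 0, len);
    for (i = 0; i < len; i++)
        if (p[i] != 0)
            abort();   /* zeroisation was elided or failed */
}
```

The cost is one extra pass over the buffer, which for key-sized objects is negligible.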
Re: Flaws in OpenSSL FIPS Object Module
| > It is, of course, the height of irony that the bug was introduced
| > in the very process, and for the very purpose, of attaining FIPS
| > compliance!
|
| But also to be expected, because the feature in question is
| "unnatural": the software needs a testable PRNG to pass the
| compliance tests, and this means adding code to the PRNG to make it
| more predictable under test conditions.

Agreed. In fact, this fits with an observation I've made in many
contexts in the past: Any time you introduce a new mode of operation,
you are potentially introducing a new failure mode corresponding to it
as well. Thus, bulkhead doors on sidewalks are unlikely to open under
you, because the only mode of operation they try to support has the
doors opening upward. I would be very leery of stepping on such a door
if it could *ever* be opened downward.

| As the tests only test the predictable PRNG, it is easy to not notice
| failure to properly re-seed the non-test PRNG. One can't easily test
| failure to operate correctly under non-test conditions. And the
| additional complexity of the test harness makes such failure more
| likely.
|
| The interaction of the test harness with the software under study
| needs close scrutiny (thorough and likely multiple independent code
| reviews).

There's a famous story - perhaps apocryphal - from the time IBM
introduced some of the first disk packs. They did great for a while,
but then started experiencing head crashes at a rate much higher than
had ever been seen in the development labs. The labs, of course,
suspected production problems - but packs they brought in worked just
as well as the ones they'd worked with earlier. Finally, someone
sitting there, staring at one of the test packs and at a crashed disk
from a customer, had a moment of insight. There was one difference
between the two packs: The labs pulled samples directly off the
production line. Customers got packs that had gone through QA.

The last thing QA did was put a "Passed" sticker on the top disk of the
pack. So ... take a pack with a sticker and spin it up. This puts G
forces on the sticker. The glue under the sticker slowly begins to
migrate. Eventually, some of it goes flying off into the enclosure. If
it gets under a head ... crash.

| Similar bugs are just as likely in closed-source software and are
| less likely to be discovered.

Actually, now that this failure mode has been demonstrated, it would be
a good idea to test for it. It's harder to do with just binaries, but
possible - look at the recent analyses of the randomization in Vista
ASLR.

-- Jerry
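As a trivial illustration of testing the non-test path, a harness can at least assert both properties of a seedable PRNG: the same seed must reproduce the stream, and re-seeding must actually change it. Here the C library's rand()/srand() merely stands in for the PRNG under test; a FIPS-style known-answer test exercises only the first property, while the bug discussed above lived in the second.

```c
#include <assert.h>
#include <stdlib.h>

/* Same seed must reproduce the first output (guaranteed by the C
 * standard's definition of srand()). */
static int same_seed_repeats(void)
{
    int a, b;

    srand(1); a = rand();
    srand(1); b = rand();
    return a == b;
}

/* Different seeds should change the output stream; a PRNG whose
 * re-seeding silently fails (the failure mode discussed above) would
 * flunk this check. */
static int reseed_changes_stream(void)
{
    int a[4], b[4], i, differs = 0;

    srand(12345);
    for (i = 0; i < 4; i++) a[i] = rand();
    srand(67890);
    for (i = 0; i < 4; i++) b[i] = rand();
    for (i = 0; i < 4; i++)
        if (a[i] != b[i]) differs = 1;
    return differs;
}
```

Neither check proves the PRNG is any good, of course; the point is only that the re-seeding path deserves a test of its own, separate from the known-answer harness.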
Re: More on in-memory zeroisation
Leichter, Jerry wrote:

> | > There was a discussion on this list a year or two back about
> | > problems in using memset() to zeroise in-memory data,
> | > specifically the fact that optimising compilers would remove a
> | > memset() on (apparently) dead data in the belief that it wasn't
> | > serving any purpose.
> |
> | Then, s/memset(?,0,?)/(memset)(?,0,?)/ to get rid of compiler
> | in-lining.
> |
> | Ref: ANSI X3.159-1989, section 4.1.6 (Use of C standard library
> | functions)
>
> I don't have the C89 spec handy (just the C99 spec, which is laid out
> differently), but from what I recall, this construct guarantees
> nothing of the sort. Most standard library functions can be
> implemented as macros. Using the construct (f)(args) guarantees that
> you get the actual function f, rather than the macro f.

Indeed, the actual function, memcpy in the present instance. memcpy,
the actual function, is aware of the execution environment, because it
is part of the run-time library. The compiler is not as deeply aware of
the execution environment. The source code construct (f)(args) is
provided by the standard to allow the program to explicitly rely on the
actual library function.

At least this is my understanding of compiler optimization techniques
as subordinate to the standard definition of the C language. I too
often turned off optimization due to a (suspected) optimized-in crash;
I would like to rely on (f)(args) to locally turn off optimization.

> However, that doesn't say anything about whether f is actually
> invoked at run time. That comes under the "acts as if" rule: If the
> compiler can prove that the state of the C (notional) virtual machine
> is the same whether f is actually invoked or not, it can elide the
> call. Nothing says that memset() can't actually be defined in the
> appropriate header, as a static (or, in C99, inline) function.

The standard actually says "... it is permitted to take the address of
a library function even if it is defined as a macro ...". The standard
works for me as a source code author who needs an execution-aware
memcpy function from time to time. Overworked GCC contributors should
work to comply with the standard, not to address Peter, Thierry, and
whoever's wildest dreams.

> Then the compiler can look at the implementation and "prove" that a
> memset() to a dead variable can be elided

It can't prove much in the case of (memset)()

Regards,

--
- Thierry Moreau
Re: PlayStation 3 predicts next US president
| > The whole point of a notary is to bind a document to a person. That
| > the person submitted two or more different documents at different
| > times is readily observable. After all, the notary has the
| > document(s)!
|
| No, the notary does not have the documents *after* they are
| notarized, nor do they keep copies. Having been a notary I know this
| personally. When I stopped being a notary all I had to submit to the
| state was my seal and my record books.
|
| If I had to testify about a document I would only be attesting that
| the person who presented themselves adequately proved, under the
| prudent businessman's standard, that they were the person that they
| said they were and that I saw them sign the document in question.
| That's it. No copies at all. What would anyone have to testify about
| if a legal battle arose after the notary either died or stopped being
| a notary?
|
| Think for a minute about the burden on a notary if they had to have a
| copy of every document they notarized. What a juicy target they would
| make for thieves and industrial spies. No patent paperwork would be
| safe, no sales contract, no will, or other document. Just think how
| the safe and burglar alarm companies would thrive. Now ask yourself
| how much it costs to notarize a document. Would that pay for the
| copying and storage? I don't know what the current fees are in
| California, but 20 years ago they were limited to $6.00 per person
| per document and an extra buck for each additional copy done at the
| same time. My average was about $14.00 per session. My insurance was
| $50/year. Nowhere near enough to cover my liability if I was to ...

This whole discussion has an air of unreality about it.

Historically, notaries public date from an era when most people
couldn't read or write, and hardly anyone could afford a lawyer. How
does someone who can't read a document and can perhaps only scrawl an X
enter into a contract? In the old days, he took the written contract to
a notary public, who would read it to him, explain it, make sure he
understood it, then stamp his scrawled "X". The notary's stamp asserted
exactly the kind of thing that we discuss on this list as missing from
digital signatures: That the particular person whose "X" was attached
(and who would be fully identified by the notary) understood and
assented to the contents of the contract.

Today, we assume that everyone can read, and where a contract is at all
complex, that everyone will have access to a lawyer. (Of course, this
assumption is often invalid, but that's another story.) The requirement
for a notary public's stamp is a faded vestige. For certain important
documents, we still require that a notary sign off, but what exactly
that proves any more is rather vague. Yes, in theory it binds a
signature to a particular person, with that signature being on a
particular document. The latter is why the notary's stamp is a physical
stamp through the paper - hard(er) to fake. Of course, most of the
time, the stamp is only applied to the last page of a multi-page
contract, so it proves only that that last page was in the notary's
hands - replacing the earlier pages is no big deal. I think I've seen
notaries initial every page, but I've also seen notaries who don't.

In practice, whenever I've needed to have a document notarized, a quick
look at some basic ID is about all that was involved. It's quite easy
to get fake ID past a notary. Given the trivial fee paid to a notary -
I think the limit is $2 in Connecticut - asking the notary to actually
add much of value is clearly a non-starter.

The financial industry has actually created its own system - I forget
the name, something like a Gold Bond Certification - that it requires
for certain "high-importance" transactions (e.g., a document asserting
you own some stock for which you've lost the certificates). I've never
actually needed to get this - it's appeared as a requirement for some
alternative kinds of transactions on forms I've had to fill out over
the years - so I don't know exactly how it works. However, it's
completely independent of the traditional notary public system, and is
run through commercial banks.

Trying to justify anything based on the current role of notaries
public - at least as it exists in the US - is a waste of time.

-- Jerry
Re: PlayStation 3 predicts next US president
Allen wrote:

> William Allen Simpson wrote:
> [snip]
>> The whole point of a notary is to bind a document to a person. That
>> the person submitted two or more different documents at different
>> times is readily observable. After all, the notary has the
>> document(s)!
>
> No, the notary does not have the documents *after* they are
> notarized, nor do they keep copies. Having been a notary I know this
> personally.

Thanks, Allen. Interestingly, digital signatures do provide what
notaries can't provide in this case. Even though a digital signature
binds a document to a key, there are known legal frameworks that can be
used to bind the key to a person.

Cheers,
Ed Gerck