How about just taking an x-byte hunk from the start and the end of the file, then
md5summing both of them? I think the chance of getting two identical MD5
hashes that way is practically zero.

The bonus is that you don't need to compute an MD5 hash over the whole file, so
the server shouldn't suffer any performance loss from this.

Interesting idea - but it should, of course, only be applied to messages bigger than a pre-defined size...
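
As an illustration only (not code from either post), here is a minimal Python
sketch of the idea: MD5 over the first and last hunk of a large message,
falling back to a full hash for small ones. The CHUNK_SIZE and
MIN_PARTIAL_SIZE values and the partial_md5 name are my own assumptions, not
anything defined by DBmail.

import hashlib
import os

CHUNK_SIZE = 64 * 1024          # assumed hunk size (64 KiB)
MIN_PARTIAL_SIZE = 1024 * 1024  # assumed threshold; smaller messages get a full hash

def partial_md5(path, chunk_size=CHUNK_SIZE, min_partial_size=MIN_PARTIAL_SIZE):
    """Return an MD5 hex digest built from the first and last chunk of a file.

    Messages below min_partial_size are hashed in full, matching the
    suggestion to apply the partial hash only to larger messages.
    """
    size = os.path.getsize(path)
    h = hashlib.md5()
    with open(path, "rb") as f:
        if size <= min_partial_size:
            # Small message: hash the whole thing.
            for block in iter(lambda: f.read(chunk_size), b""):
                h.update(block)
        else:
            # Large message: hash only the first and last chunk_size bytes.
            h.update(f.read(chunk_size))
            f.seek(-chunk_size, os.SEEK_END)
            h.update(f.read(chunk_size))
    return h.hexdigest()

Mixing the message size into the digest as well would further reduce the
already tiny chance of two different messages producing the same partial hash.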

--

Best regards,

Charles
_______________________________________________
DBmail mailing list
[email protected]
https://mailman.fastxs.nl/mailman/listinfo/dbmail
