On Sun, Jan 4, 2015 at 10:27 PM, Michael Hamburg <[email protected]> wrote:
> But an automated speaker identification system might be looking at the
> same features that the voice-morphing software changes. In that case, it
> might not be useful except possibly for having a larger training corpus.

In theory, it's possible for Bob's device to store a privacy-preserving biometric template of Alice's voice. Effectively, this is like a password hash in that it allows Bob to verify that a sample really came from Alice, but not to generate samples (without brute force). The practicalities vary across different biometrics, and I'm not sure a concrete scheme has been evaluated for voice. My guess is that there is simply not enough entropy in human voices for this to withstand a malicious Bob brute-forcing the stored template to reconstruct Alice's voice.

In any case, it's not clear how useful this would be even if you could build a privacy-preserving voice template. How does Bob get Alice's template to begin with? Why not just get her public key at the time the template is obtained?

Also, FWIW, voice recognition is pretty weak. As Mike suggested, it's easy to fool with voice synthesis given a small amount of training data. It turns out it's also easy to break by simply testing random people to find somebody who can impersonate a target voice. There was a good paper at SOUPS this year demonstrating this attack using Mechanical Turk [1].

[1] https://www.usenix.org/system/files/conference/soups2014/soups14-paper-panjwani.pdf
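To make the password-hash analogy and the brute-force worry concrete, here is a toy sketch (my own hypothetical construction, not any real voice-template scheme): it pretends Alice's voice quantizes to a small discrete feature vector and stores only a salted hash of it. Bob can verify a candidate sample, but because the feature space is tiny, a malicious Bob can also enumerate it offline — which is exactly the low-entropy concern above. Real biometric matching is fuzzy, so a real scheme would need something like fuzzy extractors rather than an exact hash.

```python
# Hypothetical sketch: a quantized voice template treated like a password.
# FEATURE_LEVELS and NUM_FEATURES are made-up parameters for illustration.
import hashlib
import itertools
import os

FEATURE_LEVELS = range(4)   # assume each feature quantizes to 4 levels
NUM_FEATURES = 8            # 4^8 = 65536 templates: a tiny keyspace

def enroll(features, salt=None):
    """Store only a salted hash of the quantized feature vector."""
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + bytes(features)).hexdigest()
    return salt, digest

def verify(features, salt, digest):
    """Bob checks a candidate sample against the stored template."""
    return hashlib.sha256(salt + bytes(features)).hexdigest() == digest

def brute_force(salt, digest):
    """A malicious Bob enumerates the entire feature space offline."""
    for candidate in itertools.product(FEATURE_LEVELS, repeat=NUM_FEATURES):
        if verify(candidate, salt, digest):
            return candidate
    return None

alice = (1, 3, 0, 2, 2, 1, 0, 3)
salt, digest = enroll(alice)
assert verify(alice, salt, digest)
assert brute_force(salt, digest) == alice  # recovered in at most 65536 tries
```

The point of the sketch is only the asymmetry: verification is cheap for honest Bob, but the hash buys nothing once the underlying entropy is small enough to enumerate.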
_______________________________________________
Messaging mailing list
[email protected]
https://moderncrypto.org/mailman/listinfo/messaging
