Computerised language translation is predicated on good continuous speech recognition, and for a long time that is going to rely on relatively clean audio. Even if automated translation in noise becomes possible, given the amount of noise, fading, and interference already present on an amateur radio circuit, the additional degradation from going digital to analogue and back is likely to be negligible.

There is also a fundamental problem with machine translation: the translation will not stabilise until the end of the sentence, or later. Whilst a human also needs to reach this point for full understanding, they will have partly processed the speech before they get there. In the general case, the machine translator cannot even start passing the translation on to the human until it reaches this point, so there will always be a significant extra processing delay compared with understanding the language directly.
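To put a rough number on that extra delay: if the translator can only start speaking at the end of the sentence, the first translated word lags live speech by roughly the whole sentence duration. A toy sketch, where the per-word timing is an assumed illustrative figure, not a measurement:

```python
# Toy latency sketch: a listener hears word i as it is spoken, but a
# sentence-final machine translator can deliver nothing before the last
# word ends. All timings here are assumed, illustrative figures.

def extra_delay(word_durations):
    """Delay of the first translated word relative to live speech,
    assuming output can only begin at the end of the sentence."""
    return sum(word_durations) - word_durations[0]

# A 10-word sentence at ~0.4 s per word: the translated version starts
# roughly 3.6 s after someone understanding the language directly.
print(round(extra_delay([0.4] * 10), 1))
```

Incremental ("simultaneous") translation systems try to shrink this by committing to partial output early, but for language pairs with very different word order they must still wait for much of the sentence.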

I would also expect any software or hardware that comes onto the market to have been designed for at least the telephone bandwidth of 300 Hz-3.4 kHz, not the narrower bandwidth used for SSB radio. Even telephone bandwidth is not enough to recognise sibilants (s, sh, etc.) accurately.
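The bandwidth point is easy to demonstrate numerically. The sketch below uses synthetic band-limited noise as a stand-in for a sibilant, whose energy sits largely above 4 kHz, and applies an ideal 300 Hz-3.4 kHz telephone channel to it; the band edges are the only figures taken from real telephony, the rest is an illustrative model, not real speech:

```python
import numpy as np

fs = 16000                       # sample rate, Hz
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs)  # 1 s of white noise

# Band-limit the noise to 5-8 kHz to mimic the spectrum of /s/.
freqs = np.fft.rfftfreq(fs, 1 / fs)
spec = np.fft.rfft(noise)
sib = np.fft.irfft(np.where((freqs >= 5000) & (freqs <= 8000), spec, 0))

# Ideal telephone channel: keep only 300 Hz - 3.4 kHz.
spec_sib = np.fft.rfft(sib)
tel = np.fft.irfft(np.where((freqs >= 300) & (freqs <= 3400), spec_sib, 0))

loss = 1 - np.sum(tel**2) / np.sum(sib**2)
print(f"sibilant energy removed by the telephone channel: {loss:.1%}")
```

A 2.4 kHz SSB filter only makes this worse, which is why recognisers end up leaning on context rather than the acoustic evidence for these sounds.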

The main case where direct digital is useful is for digital modes, where phase errors, which have no impact on speech recognition, may be significant.
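A toy example of why: phase-shift-keyed modes decide each bit from the carrier phase, so a phase rotation a listener would never notice in speech can flip bits outright. This is a hypothetical one-line demodulator, not any real mode's implementation:

```python
import numpy as np

bits = np.array([0, 1, 1, 0, 1, 0, 0, 1])
symbols = 1 - 2 * bits           # BPSK mapping: 0 -> +1, 1 -> -1

def demod(symbols, phase_error_deg):
    """Sign decision on the in-phase axis after a phase rotation."""
    rx = symbols * np.exp(1j * np.deg2rad(phase_error_deg))
    return (rx.real < 0).astype(int)

print(np.mean(demod(symbols, 0) != bits))    # no phase error: no bit errors
print(np.mean(demod(symbols, 120) != bits))  # large phase error: every bit wrong
```

Speech recognition, by contrast, works from the magnitude spectrum, so the analogue-to-digital round trip's phase distortion costs it nothing.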

On 30/03/17 19:20, Steve Sergeant wrote:

How about signal processing operations that
are beyond the DSP capability in the radio? How about some
not-so-distant future when spoken language translation might be possible?

______________________________________________________________
Elecraft mailing list
Home: http://mailman.qth.net/mailman/listinfo/elecraft
Help: http://mailman.qth.net/mmfaq.htm
Post: mailto:[email protected]

This list hosted by: http://www.qsl.net
Please help support this email list: http://www.qsl.net/donate.html