Hello Giancarlo,

This is an interesting system; I have a few notes below:

On 12.03.2016 04:59, Giancarlo Murillo wrote:
>
> First, thanks for reading my question, experts.
>
> I'm trying to implement */Continuous Phase Frequency Shift Keying
> Modulation/ (CPFSK)* to stream audio. For the design of the modulator,
> I implemented my modulation system in GNU Radio to simulate it
> prior to using a USRP1 from Ettus Research.
>
> /*Some details*/
>
> I decided to put every variable as a function of the needed bit rate. In
> my example I'm using a bit rate *R = 1024e3 (bits per second)*; this means:
>
>   * Sample_Rate=16*R (Samples/Second)
>
That won't work; 16*1024e3 = 16.384e6 S/s is not a valid sample rate for
the USRP1: all sampling rates must be integer divisors of the 64 MHz
master clock (and 64e6/16.384e6 is not an integer), and also at least
64e6/512 = 125 kS/s.
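To make that constraint concrete, here's a small sketch that checks a requested rate against the 64 MHz master clock and suggests the nearest achievable one. The maximum decimation of 512 is taken from this mail; treat it as an assumption rather than a datasheet value.

```python
# Check whether a requested sample rate is achievable on the USRP1.
# Achievable rates are 64e6 / N for integer decimation N; the limit
# N <= 512 is an assumption taken from the discussion above.

MASTER_CLOCK = 64e6
MAX_DECIM = 512

def valid_usrp1_rate(rate):
    """True if `rate` equals 64e6 / N for an integer N <= MAX_DECIM."""
    decim = MASTER_CLOCK / rate
    return decim.is_integer() and 1 <= decim <= MAX_DECIM

def nearest_valid_rate(rate):
    """Round the implied decimation to the nearest integer, return 64e6/N."""
    decim = max(1, min(MAX_DECIM, round(MASTER_CLOCK / rate)))
    return MASTER_CLOCK / decim

R = 1024e3
requested = 16 * R                    # 16.384 MS/s, as in the mail
print(valid_usrp1_rate(requested))    # False: 64e6/16.384e6 = 3.90625
print(nearest_valid_rate(requested))  # 16000000.0 (decimation 4)
```

So the closest the USRP1 can actually do to 16.384 MS/s is 16 MS/s (decimation 4).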
>
>   * Modulation_Rate=R/2 (Symbols/Second)
>
ok; with R bits per second and R/2 symbols per second, that's 2 bits per
symbol. Note, though, that demodulating FSK with only a few samples per
symbol isn't trivial; think about the timing synchronization
requirements. You might want to apply oversampling at the receiver side
at least.
>
>   * Modulation_index=1.05
>
Note that the modulation index relates the frequency deviation to the
symbol rate, not to the carrier frequency; an index above 1 means your
peak-to-peak deviation exceeds the symbol rate.
>
>   * -/+ FM_Deviation_rate=h*R/2 (Hertz)
>
Does that lead to the same numbers as the modulation index? What's *h*?
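As a quick sanity check on that question, assuming the common textbook definition of the modulation index, h = 2 * peak_deviation / symbol_rate (an assumption; the mail doesn't define *h*):

```python
# Compare the mail's deviation formula h*R/2 against the deviation
# implied by the textbook definition h = 2 * peak_deviation / Rsym.
h = 1.05
R = 1024e3
Rsym = R / 2                 # 512 kSym/s, the modulation rate above

peak_dev = h * Rsym / 2      # peak deviation under that definition
print(peak_dev)              # 268800.0 Hz
print(h * R / 2)             # 537600.0 Hz, i.e. exactly 2 * peak_dev
```

So under that definition, h*R/2 comes out as the peak-to-peak deviation, not the one-sided (+/-) deviation; worth double-checking which one is meant.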
>
>   * Samples_per_Symbol=2*Fs/R (Samples/Symbol)
>
Let's check that against your rates above: 2*Fs/R = 2*(16*R)/R = 32, and
Fs divided by the modulation rate, (16*R)/(R/2), is also 32 samples per
symbol; so the numbers are consistent, as long as you pass the same SPS
value to every block.
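Spelling out that arithmetic with the numbers from the mail:

```python
R = 1024e3          # bit rate
Fs = 16 * R         # sample rate
Rsym = R / 2        # modulation (symbol) rate

sps_from_rates = Fs / Rsym       # samples per symbol from the two rates
sps_from_formula = 2 * Fs / R    # the Samples_per_Symbol formula above

print(sps_from_rates, sps_from_formula)  # 32.0 32.0
```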
>
> The CPFSK block needs 2 variables: *K*, which is the *modulation
> index*, and *SPS*, which is *Samples_per_Symbol*.
>
yep, that makes sense; the absolute frequencies don't matter to the FSK
modulator.
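To illustrate why only those two parameters matter, here's a minimal pure-Python sketch of a binary CPFSK modulator (not the GNU Radio block itself, just the underlying idea): a single phase accumulator whose per-sample step depends only on K and SPS, so the output frequencies scale with whatever sample rate you play it out at.

```python
import math

def cpfsk_modulate(bits, k=1.05, sps=32):
    """Minimal binary CPFSK sketch: one phase accumulator, so the phase
    is continuous across symbol boundaries. The phase advance per symbol
    is +/- pi*k, spread over sps samples; no absolute frequency appears."""
    phase = 0.0
    out = []
    for b in bits:
        step = math.pi * k / sps * (1 if b else -1)  # per-sample phase step
        for _ in range(sps):
            phase += step
            out.append(complex(math.cos(phase), math.sin(phase)))
    return out

iq = cpfsk_modulate([1, 0, 1, 1], k=1.05, sps=32)
```

The constant-envelope, continuous-phase property falls straight out of the accumulator.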
>
> My problem is when I'm trying to import a WAV file to stream, which has
> a frequency spectrum detailed in *music.png*,
>
That almost certainly doesn't look like the spectrum of voice. Useful
voice signal is typically below 4 kHz, and everything above that is
usually filtered out before being fed into the classical vocoders.
The filename also indicates you're trying to apply a vocoder (voice
coder!) to general audio. That's not the worst thing to do, since even
with music a low-pass-filtered signal remains recognizable, but frankly:
you're applying something application-specific to something it is
/explicitly/ not meant for.
>
> *Sample_Rate=44.1 kHz and one channel of audio.*
>
The CPFSK block doesn't care the least about the absolute sample rate;
the modulation index and samples-per-symbol together tell it the ratio
of the maximum deviation to the symbol rate.
>
> I'm using the vocoder CVSD since it works better for my application;
> it needs 2 fields:
>
>   * Resample: To increase the Samples (Interpolation)
>   * Fractional Bandwidth: 0.5 (The best quality)
>
> My question is related to this Resample field (how can I use this
> variable to obtain a bit rate variable?)
>
> *Bitrate=function(R,fa)*
>
Well, the job of a vocoder is to exploit the properties of a digital
speech signal to reduce the bandwidth it occupies, thus making it
possible either to send it using less bandwidth, or to optimize quality
within the same bandwidth by adding robustness.

As the documentation says:
> It converts an incoming float (+-1) to a short, scales it (to 32000;
> slightly below the maximum value), interpolates it, and then vocodes it.
The Resample field really just parameterizes that interpolation.

So I assume you want to achieve a certain fixed bit rate *R*, and though
you don't mention it, I assume *fa* is the audio sampling rate.

Now, the question is what the proper interpolation is; for that, you'd
need to know the bit rate ratio of your vocoder. I haven't read the
"CVSD (raw bits)" encoder's documentation too closely, but if I remember
correctly, CVSD is quite simple: you quantize the difference between
each sample and a running estimate to 1 bit (essentially just the sign),
adapting the step size when you get runs of identical bits.
Because that obviously distorts the signal a lot, you first interpolate
to a higher rate. Of that higher rate, 8 samples are converted to 8 bits
of sign info, which makes one byte of output.
So the higher the interpolation, the more bits you make out of each
input sample. In fact, out of the 16 bits of input information per input
sample, you make (1 bit * interpolation) of output information. So any
interpolation greater than 16 is pointless; you could just as well send
the original signal.

For your target bit rate of ~1000 kb/s and your 44.1 kHz input sampling
rate, I wouldn't even bother with vocoding:
Let's round to 1 Mb/s and divide by the 44.1 kHz input sampling rate:
you have about 22.7 bits of channel bit rate to spend per input sample.
Assuming your audio file is 16-bit quantized, you can simply take the
input as-is and still have almost 7 bits per sample to spend, for
example on channel coding. If you really had a voice signal, you could
virtually losslessly decimate to 8 kHz (or 44.1 kHz/5 = 8.82 kHz for the
sake of integer decimation). That would give 125 (or 113) bits on the
air per input sample. Really, no need for vocoding; if you still applied
an e.g. interpolation=8 vocoder, halving the source bit rate, you'd get
that up to 250 bits (226 bits) per input sample. That's enough for a
rate-1/15 channel coder; I don't think that's what anyone would aim for.
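The bit-budget arithmetic above, spelled out (1 Mb/s is the rounded channel rate from this paragraph):

```python
R_channel = 1e6        # rounded channel bit rate, b/s
fa = 44.1e3            # audio input sampling rate, Hz

bits_per_sample = R_channel / fa   # ~22.7 channel bits per input sample
spare = bits_per_sample - 16       # ~6.7 bits left after raw 16-bit PCM
print(bits_per_sample, spare)

# channel bits per input sample after decimating a voice signal
for f_dec in (8e3, 44.1e3 / 5):
    print(R_channel / f_dec)       # 125.0 and ~113.4
```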

I think you should sit down and design your system's requirements more
carefully; get a proper test signal for a vocoder, and orient yourself
on existing vocoder parametrizations.

Best regards,
Marcus
_______________________________________________
Discuss-gnuradio mailing list
[email protected]
https://lists.gnu.org/mailman/listinfo/discuss-gnuradio
