Peter S, your combative attitude is unwelcome. It seems that you are less
interested in grasping these topics than you are in hectoring me and
other list members. Given that and the dubious topicality of this thread,
this will be my last response to you. I hope that you find a healthy way to
Dear Ethan,
You suggested that I be short and concise.
My kind recommendation to you:
1) Read "A Mathematical Theory of Communication".
2) Try to understand Theorem 2.
3) Try to see that when no p_i = 1, then H != 0.
I hope this exercise will help you grasp this topic.
Best regards,
Peter
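
For reference (my gloss, not part of the exchange): the H in step 3 is
the Shannon entropy of a discrete source with symbol probabilities
p_1 ... p_n,

    H = -SUM [i=1..n] p_i * log2(p_i)

which is zero only in the degenerate case where a single symbol has
p_i = 1. Whenever no symbol is certain, at least two of the p_i are
nonzero and H > 0.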
On 17/07/2015, robert bristow-johnson r...@audioimagination.com wrote:
On 7/17/15 1:26 AM, Peter S wrote:
On 17/07/2015, robert bristow-johnson r...@audioimagination.com wrote:
in your model, is one sample (from the DSP semantic) the same as a
message (from the Information Theory semantic)?
A message can be anything - it can be a sample, a bit, a combination
of these.
A linear predictor[1] tries to predict the next sample as a linear
combination of the previous samples:
x'[n] = SUM [i=1..k] a_i * x[n-i]
where x'[n] is the predicted sample, and a_1, a_2, ..., a_k are the
prediction coefficients (weights). This is often called linear
predictive coding (LPC).
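
A minimal sketch of such a predictor in Python (not from the original
post; the least-squares fit via numpy.linalg.lstsq is just one common
way to obtain the weights):

    import numpy as np

    def predict_next(x, a):
        # x'[n] = SUM [i=1..k] a_i * x[n-i]
        # a[0] corresponds to a_1 (weight of the most recent sample).
        return sum(a[i] * x[-1 - i] for i in range(len(a)))

    def fit_lpc(x, k):
        # Least-squares fit of k prediction coefficients: each row
        # holds the k samples preceding one target sample.
        rows = np.array([x[n - k:n][::-1] for n in range(k, len(x))])
        targets = np.asarray(x[k:])
        a, *_ = np.linalg.lstsq(rows, targets, rcond=None)
        return a

    # A pure sinusoid is exactly predictable with k = 2, since
    # x[n] = 2*cos(w)*x[n-1] - x[n-2]:
    w = 0.1
    x = np.cos(w * np.arange(200))
    a = fit_lpc(x, 2)            # approximately [2*cos(w), -1]
    print(predict_next(x, a))    # close to cos(w * 200)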
I tested a simple, first-order histogram-based entropy estimate idea
on various 8-bit signed waveforms (message = sample, no correlations
analyzed). Only trivial (non-bandlimited) waveforms were tested.
Method:
1) Signal is trivially turned into a histogram.
2) Probabilities assumed based on the histogram's relative frequencies
(count / total number of samples).
3) Entropy estimated as H = -SUM [i] p_i * log2(p_i).
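
A short sketch of that estimate as described (Python; the square-wave
test signal is my own illustration, not one of the waveforms from the
post):

    import numpy as np

    def entropy_estimate(samples):
        # First-order entropy estimate in bits per sample: each
        # sample is treated as an independent message.
        # 1) Histogram over the observed 8-bit values.
        values, counts = np.unique(samples, return_counts=True)
        # 2) Probabilities = relative frequencies.
        p = counts / counts.sum()
        # 3) H = -SUM p_i * log2(p_i); empty bins never appear here.
        return -np.sum(p * np.log2(p))

    # A trivial (non-bandlimited) example: an 8-bit square wave takes
    # two values with p = 1/2 each, so the estimate is 1 bit/sample.
    square = np.tile(np.array([100, -100], dtype=np.int8), 500)
    print(entropy_estimate(square))   # 1.0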