I've used the dsPIC33FJ64GP802 with on-chip stereo audio DAC in a couple of
well-received mass-produced products (Euro-rack synth modules) and the on-chip
DAC is decent enough. It does suffer from some limit cycling at 1/2 the sample
rate if you try to drive constant data, but I always ran it
On 14/10/2014, ro...@khitchdee.com wrote:
Peter,
How would you characterize the impact of your posts on the entropy of this
mailing list, starting with the symbol space that gets defined by the
different perspectives on entropy :-)
I merely showed that:
1) 'entropy'
Again, the minimal number of 'yes/no' questions needed to guess your
message with 100% probability is _precisely_ the Shannon entropy of
the message:
For the case of equal probabilities (i.e. each message is equally
probable), the Shannon entropy (in bits) is just the number of yes/no
questions
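To make the equiprobable case concrete, here is a minimal Python sketch (an
illustration, not code from any poster): for N equally likely messages the
Shannon entropy is log2(N) bits, which is exactly the number of yes/no
questions a bisection search needs.

    import math
    from collections import Counter

    def shannon_entropy_bits(symbols):
        """Empirical Shannon entropy, in bits per symbol."""
        counts = Counter(symbols)
        n = len(symbols)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # 8 equally probable messages: H = log2(8) = 3 bits, i.e. three
    # yes/no questions pin down any one message by repeated halving.
    print(math.log2(8))                    # 3.0
    print(shannon_entropy_bits(range(8)))  # 3.0 (each symbol once, uniform)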
Another way of expressing what my algorithm does: it estimates
'decorrelation' in the message by doing a simple first-order
approximation of decorrelation between bits. The more random a
message is, the more decorrelated its bits are. Otherwise, if the
bits are correlated, that is not random and
Longest discussion thread so far I think!
The discussion reminded me of more general measures of entropy than
Shannon's, examples are the Renyi entropies:
http://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
Some might find it amusing and relevant to this discussion that the
'Hartley entropy' H_0 is
So, instead of academic hocus-pocus and arguing about formalisms, what
I'm rather concerned about is:
- What are the real-world implications of the Shannon entropy problem?
- How could we possibly use this to categorize arbitrary data?
On 14/10/2014, Max Little max.a.lit...@gmail.com wrote:
P.S. Any chance people could go offline for this thread now please?
It's really jamming up my inbox and I don't want to unsubscribe ...
Any chance your mailbox has the possibility of setting up a filter
that moves messages with the
On 14/10/2014, Max Little max.a.lit...@gmail.com wrote:
Some might find it amusing and relevant to this discussion that the
'Hartley entropy' H_0 is defined as the base 2 log of the cardinality
of the sample space of the random variable ...
Which implies that if the symbol space is binary (0
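In code, that definition is nearly a one-liner; a small Python sketch (mine,
for illustration) that also covers the binary case mentioned above:

    import math

    def hartley_entropy_bits(symbols):
        """Hartley entropy H_0 = log2(|sample space|); distribution ignored."""
        return math.log2(len(set(symbols)))

    print(hartley_entropy_bits([0, 1]))         # 1.0: binary symbol space is 1 bit
    print(hartley_entropy_bits("abracadabra"))  # ~2.32 = log2(5): five distinct letters

Note that only the set of distinct symbols matters: "abracadabra" and "abcdr"
score exactly the same.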
Eric,
Are you using the MPLAB IDE? I saw that it runs on Mac OS X as well, which
makes it a bit more attractive.
Best,
Steffan
On 14 Oct 2014, at 08:09, Eric Brombaugh ebrombau...@cox.net wrote:
I've used the dsPIC33FJ64GP802 with on-chip stereo audio DAC in a couple of
well-received
Which is another way of saying: a fully decorrelated sequence of bits
has the maximum amount of entropy.
So if we try to estimate the 'decorrelation' (randomness) in the
signal, then we can estimate 'entropy'.
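Peter's description suggests an estimator along these lines (a Python sketch
of one possible reading, not his actual code): count how often adjacent bits
differ, and take the binary entropy of that transition rate as the estimate.

    import math
    import random

    def binary_entropy(p):
        """H(p) in bits, using the 0*log(0) = 0 convention."""
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

    def first_order_entropy_estimate(bits):
        """Crude entropy-rate estimate (bits per bit) from adjacent bits.

        Fully correlated input (000... or 0101...) scores 0; input whose
        neighbouring bits look independent scores close to 1.
        """
        transitions = sum(a != b for a, b in zip(bits, bits[1:]))
        return binary_entropy(transitions / (len(bits) - 1))

    print(first_order_entropy_estimate([0, 1] * 64))  # 0.0: alternating is predictable
    print(first_order_entropy_estimate(
        [random.getrandbits(1) for _ in range(10000)]))  # ~1.0 for fair coin flips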
Well, it just says that there is a measure of information for which
the actual distribution of symbols is (effectively) irrelevant. Which
is interesting in its own right ...
Max
On 14 October 2014 11:59, Peter S peter.schoffhau...@gmail.com wrote:
On 14/10/2014, Max Little max.a.lit...@gmail.com wrote:
Well, it just says that there is a measure of information for which
the actual distribution of symbols is (effectively) irrelevant. Which
is interesting in its own right ...
Feel free to think outside the box.
Welcome to the real world,
Prescient. Apparently, Kolmogorov sort of, perhaps, agreed:
Discussions of information theory do not usually go into this
combinatorial approach [that is, the Hartley function] at any length,
but I consider it important to emphasize its logical independence of
probabilistic assumptions, from Three Approaches to the Quantitative
Definition of Information.
Yes, I use MPLAB X on Mac, Linux and Windows for developing on Microchip
MCUs. It's... quirky... but it does work.
Eric
On 10/14/2014 04:08 AM, STEFFAN DIEDRICHSEN wrote:
Eric,
Are you using the MPLAB IDE? I saw that it runs on Mac OS X as well, which
makes it a bit more attractive.
On 14/10/2014, Sampo Syreeni de...@iki.fi wrote:
We do know this stuff. We already took the red pill, *ages* ago. Peter's
problem appears to be that he's hesitant to take the plunge into the
math, proper. Starting with the basics
Didn't you recently tell us that you have no clue of 'entropy
Although, it's interesting to me that you might be able to get some
surprising value out of information theory while avoiding any use of
probability ...
Hartley entropy doesn't avoid any use of probability, it simply
introduces the assumption that all probabilities are uniform which greatly
simplifies all of the calculations.
How so? It's defined as the log cardinality of the sample space. It is
independent of the actual distribution of the
On 2014-10-14, Max Little wrote:
Hartley entropy doesn't avoid any use of probability, it simply
introduces the assumption that all probabilities are uniform which
greatly simplifies all of the calculations.
How so? It's defined as the log cardinality of the sample space. It is
independent
Right, and that is exactly equivalent to using Shannon entropy under the
assumption that the distribution is uniform.
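Spelling that equivalence out (a standard textbook step, not from the thread):
with N symbols, all at probability 1/N, the Shannon formula collapses to the
log-cardinality that defines H_0:

    H = -\sum_{i=1}^{N} \frac{1}{N} \log_2 \frac{1}{N} = \log_2 N = H_0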
Well, we'd probably have to be clearer about that. The Hartley entropy
is invariant to the actual distribution (provided all the
probabilities are non-zero, and the sample space remains unchanged).
On 2014-10-14, Max Little wrote:
Hmm .. don't shoot the messenger! I merely said, it's interesting that
you don't actually have to specify the distribution of a random
variable to compute the Hartley entropy. No idea if that's useful.
Math always has this precise tradeoff: more general but
Max Little wrote:
...
Well, we'd probably have to be clearer about that. The Hartley entropy
is invariant to the actual distribution
Without going into the lottery comparison (wanting to be able to influence
the lottery to achieve a higher chance of winning), I looked up the
Shannon/Hartley theorem,
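For reference, since the post is cut off here (the standard statement, not
from the thread): the Shannon/Hartley theorem concerns channel capacity
rather than source entropy,

    C = B \log_2 \left( 1 + \frac{S}{N} \right)

with B the bandwidth in Hz and S/N the linear signal-to-noise ratio; despite
the shared name, it is a different result from the Hartley entropy H_0
discussed above.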
The Hartley entropy
is invariant to the actual distribution (provided all the
probabilities are non-zero, and the sample space remains unchanged).
No, the sample space does not require that any probabilities are nonzero.
It's defined up-front, independently of any probability distribution.
OK yes, 0^0 = 1. Delete the bit about probabilities needing to be
non-zero I guess!
Think you're taking what I said too seriously, I just said it's an
interesting formula! Kolmogorov seemed to think so too.
M.
On 14 October 2014 18:37, Ethan Duni ethan.d...@gmail.com wrote:
The Hartley entropy
Yes, don't have time for a long answer, but all elegantly put.
I'm just reiterating the formula. And saying it's interesting. Maths
is really just patterns, lots of them are interesting to me,
regardless of whether there is any other extrinsic 'meaning' to those
patterns.
M.
On 14 October 2014
On 2014-10-14, Max Little wrote:
Maths is really just patterns, lots of them are interesting to me,
regardless of whether there is any other extrinsic 'meaning' to those
patterns.
In that vein, it might even be the most humanistic of sciences. More so
even than poetry:
If you look at the real audio signals out there, which statistic would you
expect them to follow under the Shannonian framework? A flat one? Or
alternatively, what precise good would it do to your analysis, or your code,
if you went with the equidistributed, earlier, Hartley framework? Would
The relevant limit here is:

    lim_{x -> 0+} x*log(x) = 0
It's pretty standard to introduce a convention of 0*log(0) = 0 early on
in information theory texts, since it avoids a lot of messy delta/epsilon
stuff in the later exposition (and since the results cease to make sense
without it, with empty
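For completeness (standard calculus, not in the original post), the limit
follows from l'Hopital's rule:

    \lim_{x \to 0^+} x \log x
      = \lim_{x \to 0^+} \frac{\log x}{1/x}
      = \lim_{x \to 0^+} \frac{1/x}{-1/x^2}
      = \lim_{x \to 0^+} (-x) = 0

which is what justifies reading 0*log(0) as 0 in the entropy sum.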