What I essentially showed with the compression example is that in a
binary digit sequence, each constant-0 or constant-1 segment of
length N can be represented in about log2(N) bits, because storing
the length of the segment is enough to fully reconstruct it.
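A minimal Python sketch of the idea (the helper names are mine, and a
real coder would also have to delimit the length fields, so log2(N) is
only the approximate per-segment cost):

from itertools import groupby
from math import ceil, log2

def encode(bits):
    # keep the first bit plus each constant segment's length;
    # a segment of length N costs roughly log2(N) bits to store
    runs = [len(list(g)) for _, g in groupby(bits)]
    cost = 1 + sum(max(1, ceil(log2(n + 1))) for n in runs)
    return bits[0], runs, cost

def decode(first, runs):
    out, bit = '', first
    for n in runs:
        out += bit * n
        bit = '0' if bit == '1' else '1'
    return out

first, runs, cost = encode('00000000111101')
assert decode(first, runs) == '00000000111101'  # lengths fully reconstruct it
print(runs, cost)   # [8, 4, 1, 1], ~10 bits vs. 14 raw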
On 11/10/2014, Peter S peter.schoffhau...@gmail.com wrote:
Don't forget that the central point of Shannon entropy is: how many
bits do we (minimally) need to represent this.
Which is (on some level) essentially a 'data compression' problem -
the Shannon entropy is the length of the output of an
On 11/10/2014, Peter S peter.schoffhau...@gmail.com wrote:
On 11/10/2014, Peter S peter.schoffhau...@gmail.com wrote:
Don't forget that the central point of Shannon entropy is: how many
bits do we (minimally) need to represent this.
Which is (on some level) essentially a 'data compression'
Peter, since roughly this time 5 days ago, you've posted 61 public
messages here.
Maybe it's time to give it a rest? Or if not, perhaps your point
(whatever that may be) could be made with only 1 or 2 messages per day?
Please?!
Are you suggesting I should unsubscribe from this mailing list?
If you're not interested in the topic, let me ask, why are you
subscribed to this list?
On 11/10/2014, Paul Stoffregen p...@pjrc.com wrote:
Peter, since roughly this time 5 days ago, you've posted 61 public
messages here.
Maybe
On 11/10/2014, Paul Stoffregen p...@pjrc.com wrote:
Maybe it's time to give it a rest? Or if not, perhaps your point
(whatever that may be) could be made with only 1 or 2 messages per day?
Please?!
Maybe it's time to switch your mailing list subscription to 'daily digest'?
Please?!
You
On 11/10/2014 16:40, Paul Stoffregen wrote:
Peter, since roughly this time 5 days ago, you've posted 61 public
messages here.
Maybe it's time to give it a rest? Or if not, perhaps your point
(whatever that may be) could be made with only 1 or 2 messages per day?
Please?!
Oh hum, lahdeedah,
On 09/10/2014, Ethan Duni ethan.d...@gmail.com wrote:
I did not claim anything about entropy of
continuous signals,
Aren't we talking about impulses in auditory nerves (among other things)?
Those things live in the analog domain.
Nerves fire discrete impulses, so those are definitely not
I saw that the Shannon entropy page on Wikipedia (possibly relevant
for algorithms like Karplus-Strong, though I doubt many here would
like to analyze this) requires verification. That's not surprising,
because it isn't a good idea to give a monkey a sort of
given-probability statistics theory and wait
On 10/10/2014, Theo Verelst theo...@theover.org wrote:
[...] I fail to see the point.
all I meant:
1) Entropy can be estimated (and gave an example of that)
2) Entropy can be extracted (and gave an example of that)
Nerves fire discrete impulses, so those are definitely not continuous
signals.
I don't think that you understand what a continuous signal is. Nerve
impulses are definitely examples of continuous signals; a true
discontinuity there would be physically impossible.
Jon has told us that a nerve
On Fri, Oct 10, 2014 at 8:02 AM, Theo Verelst theo...@theover.org wrote:
So as soon as a communicating pair of neurons can be given an entropy
value that's depending on if the unit is quarters or dimes, I fail to see
the point.
Oh, and meanwhile, because real neurons communicate depending
On 10/10/2014, Ethan Duni ethan.d...@gmail.com wrote:
I'm calling them analog because they are obviously, unequivocally
continuous analog signals.
What do you think the relevance of that is, from the point of view of
transmitting neural signals?
Do you think that if they were not 'continuous',
On 10/10/2014, Ethan Duni ethan.d...@gmail.com wrote:
Your entropy estimator does not estimate Shannon entropy.
Exactly. Which was never claimed in the first place.
You said: You cannot estimate entropy of arbitrary signals!
I said: I can, here is an example.
I gave a function that gives a
On 11/10/2014, Peter S peter.schoffhau...@gmail.com wrote:
(I think we agree that the 'entropy' content of a
signal in the strict sense means the minimal number of bits that
can be used to represent it).
... and that always depends on how we're representing signals. Are we
using a
On 11/10/2014, Peter S peter.schoffhau...@gmail.com wrote:
In essence, this is another way of saying: constant parts do not add
anything to the entropy, the entropy is contained in the transitions. (*)
(*) ...this is not strictly true; the length of the constant part also
contains some
Academic person: We cannot _precisely_ calculate entropy, because we
cannot know and calculate the entire timeline of the universe!
Practical person: Hmm... What if we tried to roughly estimate entropy
with a simple formula instead...
Here's another, practical way of estimating binary (Shannon) entropy
content of an arbitrary digital signal:
Compress it with PKZIP, and check the resulting file size in bits.
Compression ratio will inversely correlate with the Shannon entropy
content of the signal, low entropy signals being
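For example, with Python's zlib standing in for PKZIP (a rough sketch;
any general-purpose compressor shows the same qualitative behaviour):

import os
import zlib

def compressed_bits(data: bytes) -> int:
    # rough upper bound on entropy content: size of the DEFLATE output
    return 8 * len(zlib.compress(data, 9))

silence = bytes(4096)        # constant signal: very low entropy
noise   = os.urandom(4096)   # incompressible: ~8 bits per byte

print(compressed_bits(silence))  # tiny (a few hundred bits at most)
print(compressed_bits(noise))    # close to 8 * 4096, plus header overhead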
On 11/10/2014, Peter S peter.schoffhau...@gmail.com wrote:
32A
16A16B
8A8B8A8B
4A4B4A4B4A4B4A4B
2A2B2A2B2A2B2A2B2A2B2A2B2A2B2A2B
1A1B1A1B1A1B1A1B1A1B1A1B1A1B1A1B1A1B1A1B1A1B1A1B1A1B1A1B1A1B1A1B
Probably I should rather write that in binary form instead of decimal:
10A
1A1B
On 11/10/2014, Peter S peter.schoffhau...@gmail.com wrote:
On 10/10/2014, Ethan Duni ethan.d...@gmail.com wrote:
Your entropy estimator does not estimate Shannon entropy.
Maybe a better formula would be...
number of binary transitions, plus sum of the log2 of the length of
constant parts?
Feel
On 11/10/2014, Peter S peter.schoffhau...@gmail.com wrote:
Maybe a better formula would be...
number of binary transitions, plus sum of the log2 of the length of
constant parts?
Let's test this formula on the original data:
- 0 + 5 = 6
On 11/10/2014, Peter S peter.schoffhau...@gmail.com wrote:
It is 31 instead of 32 because there's a discontinuity at the edge (I
see no trivial way of fixing that, other than maybe just adding +1 to
all values).
... or maybe wrap around and add +1 if first and last bit differ
(dunno if that makes
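A sketch of the whole estimator, assuming '0'/'1' strings, with the
wrap-around '+1' as an optional flag:

from itertools import groupby
from math import log2

def estimate_bits(bits, wrap=False):
    # constant-run lengths, e.g. '00110' -> [2, 2, 1]
    runs = [len(list(g)) for _, g in groupby(bits)]
    transitions = len(runs) - 1
    if wrap and bits[0] != bits[-1]:
        transitions += 1  # wrap-around +1 if first and last bit differ
    return transitions + sum(log2(n) for n in runs)

# the 32-bit test sequences from the earlier post, with A=0, B=1
seqs = ['0' * 32] + [('0' * n + '1' * n) * (16 // n) for n in (16, 8, 4, 2, 1)]
for s in seqs:
    print(estimate_bits(s))  # 5.0, 9.0, 15.0, 23.0, 31.0, 31.0

The alternating 1A1B sequence comes out as 31 (the "31 instead of 32"
case above); with wrap=True it becomes 32.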
The reason why there is no correlation between the time-domain PCM
entropy and the rate of neural firing is this:
One works in the time domain, another works in the frequency domain.
No direct correlation. We mostly agreed that the cochlea acts as a
filterbank, creating a frequency-amplitude
Comparing neural firing rate to a PCM data rate is... just not possible.
Calculating the information/entropy in the auditory nerve is a daunting
task, especially considering that it depends on the sound (with
nonlinearities, masking, etc).
Yeah that whole line of thought seems really
How are the sonic decodings integrated with the
simultaneous spatial localization performed by
the cochlea?
On Oct 9, 2014, at 6:34 AM, Peter S peter.schoffhau...@gmail.com wrote:
The reason why there is no correlation between the time-domain PCM
entropy and the rate of neural firing is this:
On 09/10/2014, mads dyrholm misterm...@gmail.com wrote:
All the ear/neurons have to do is project the stimulus onto long term memory
- Granular synthesis if you will. An entire symphony could in principle be
perceived from a single bit (the bit that says PLAY).
This is an interesting
Peter S peter.schoffhau...@gmail.com wrote:
The reason why there is no correlation between the time-domain PCM
entropy and the rate of neural firing is this:
One works in the time domain, another works in the frequency domain.
No direct correlation.
baloney. (not that either a
On 09/10/2014, r...@audioimagination.com r...@audioimagination.com wrote:
1) The binary entropy of both PCM sine waves is just about the same -
the amplitude of a sinusoidal partial of a signal does not directly
affect the binary entropy. Both PCM sound files are the same size, and
contain
For these reasons, I think digital entropy in the classical sense has
little direct relevance for our neural processes.
Another thought experiment: compare what happens when listening to a
sine wave and a (non-bandlimited) square wave of the same amplitude.
The square wave has a lot more
On Thu, Oct 9, 2014 at 5:34 AM, Peter S peter.schoffhau...@gmail.com wrote:
The reason why there is no correlation between the time-domain PCM
entropy and the rate of neural firing is this:
One works in the time domain, another works in the frequency domain.
No direct correlation. We mostly
On 09/10/2014, Charles Z Henry czhe...@gmail.com wrote:
Your thought experiments are fine, but you're clearly just feeling out
how to define entropy for audio signals.
Since that's what r b-j asked :)
All I did is try to test this analytically.
It's *not* a well defined problem.
Exactly,
On 09/10/2014, Ethan Duni ethan.d...@gmail.com wrote:
Amplitude has a direct, strong relationship to signal entropy (not
information, which is a property of pairs of random variables).
Unless it is a non-bandlimited (naive) square wave.
In that case, that claim is absolutely not true.
That's
On 09/10/2014, Peter S peter.schoffhau...@gmail.com wrote:
On 09/10/2014, Ethan Duni ethan.d...@gmail.com wrote:
Amplitude has a direct, strong relationship to signal entropy (not
information, which is a property of pairs of random variables).
Unless it is a non-bandlimited (naive) square
Unless it is a non-bandlimited (naive) square wave.
In that case, that claim is absolutely not true.
I'm not making claims, I'm just conveying basic results of information
theory. And they definitely apply to square waves as much as anything else.
Again: if I make _very_ loud (= lot of signal
By turning up the volume on these signals, no new entropy is gained.
Again, that statement is wrong. The relationship between entropy and signal
power does not depend on the details of the signal shape.
Again, please spend a few hours learning the basics of information theory
before jumping to
On 09/10/2014, Ethan Duni ethan.d...@gmail.com wrote:
Amplitude has a direct, strong relationship to signal entropy (not
information, which is a property of pairs of random variables).
Let's assume I have a sinusoidal signal.
Let's assume I amplify it to 10x.
Where does new entropy come from?
On 09/10/2014, Ethan Duni ethan.d...@gmail.com wrote:
You need way more than 1 bit to represent any square wave
Correction: 1 bit PER SAMPLE (either 1 or 0, hi or low - a naive
square only has those two states...)
(I thought it was obvious that I meant that)
On 09/10/2014, Peter S peter.schoffhau...@gmail.com wrote:
Let's assume I have a sinusoidal signal.
Let's assume I amplify it to 10x.
Where does new entropy come from?
I make an even better example:
Let's assume I amplify the signal by a power of two ( x2, x4, x8, x16 etc. )
Assuming integer
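In integer terms, a gain of 2^k is just a left shift, which only
appends k perfectly predictable zero bits to each sample. A toy sketch
(sample values made up for illustration):

x  = [3, -5, 7, 0, -2]      # hypothetical integer PCM samples
x4 = [s << 2 for s in x]    # 'amplify' by 2**2 = left shift by 2 bits
print(x4)                   # [12, -20, 28, 0, -8]
# anyone who knows the gain can undo this exactly (s >> 2),
# so the louder signal carries no information the original didn't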
On 2014-10-09, Ethan Duni wrote:
Again: if I make a _very_ loud (= a lot of signal energy) naive,
non-bandlimited square wave, that _still_ has only 1 bit of entropy.
No matter how much I turn up the volume, I do not gain any additional
entropy. Still 1 bit.
No, you are clearly misunderstanding
Last time I listened to a guitar, it didn't have any bits.
How would you define entropy for a single pluck from my guitar?
See all I hear you keep arguing is about bits and quantization...
Seems to be missing the point--you're assuming what the possible sets
of things are by making them into
On 09/10/2014, Charles Z Henry czhe...@gmail.com wrote:
See all I hear you keep arguing is about bits and quantization...
Exactly my point - It's pretty flawed to talk about entropy of bits
here, because as soon as the digital signal leaves your sound card's
D/A converter, you no longer have
i'm not gonna pile onto Peter S. many others have.
Last time I listened to a guitar, it didn't have any bits.
How would you define entropy for a single pluck from my guitar?
See all I hear you keep arguing is about bits and quantization...
Seems to be missing the point--you're
On 09/10/2014, Sampo Syreeni de...@iki.fi wrote:
So, actually, when you talk about entropy, you ought to define the model
it's calculated against,
The entropy estimation I assumed was the number of transitions (either
1-0 or 0-1) in the binary numerical representation of the signal (I
thought
Let's assume I have a sinusoidal signal.
Let's assume I amplify it to 10x.
Where does new entropy come from?
It comes from the amplification.
Look carefully - I'm not speaking about creating _another_ sine wave
with 10x volume. No.
I'm saying that I amplify the _original_ sine wave by 10x
Those
On 09/10/2014, Ethan Duni ethan.d...@gmail.com wrote:
Let's assume I have a sinusoidal signal.
Let's assume I amplify it to 10x.
Where does new entropy come from?
It comes from the amplification.
What is your entropy model?
So you aren't talking about literal sine waves then, you're talking
What is your entropy model?
There is no entropy model. Entropy is a property of statistical
distributions. Are you asking about *signal* models?
Sorry, weren't we talking about digital PCM signals?
This thread seems to cover several different signal types, including
digital audio, analog audio,
... since the original question of r b-j was: how can the human ear
convey the high amount of digital PCM information contained on a CD?
Right, my point is that the digital PCM info on a CD typically contains a
*lot* of data that is redundant to human audio perception, and which gets
On 2014-10-09, Ethan Duni wrote:
Look carefully - I'm not speaking about creating _another_ sine wave
with 10x volume. No. I'm saying that I amplify the _original_ sine
wave by 10x
Those kinds of philosophical distinctions do not have any bearing on
entropy.
Again, in a certain sense, they
On 09/10/2014, Sampo Syreeni de...@iki.fi wrote:
Which of course brings us back to your very point: Peter really should
understand the basics of information theory before applying it.
Could you offer me a reading that you think would clarify the concepts
that you think I applied in an improper
On 2014-10-09, Peter S wrote:
Could you offer me a reading that you think would clarify the concepts
that you think I applied in an improper way?
Any standard textbook will do the job. Google's first recommendation is
as good as any: Cover & Thomas, Elements of Information Theory. It's
cheap
I did not claim anything about entropy of
continuous signals,
Aren't we talking about impulses in auditory nerves (among other things)?
Those things live in the analog domain.
I was only talking about the entropy
content of digital PCM signals that could be estimated using standard
digital,
For you guys who like to think a bit about little networks: how about
the information coming from a sensor, running through some number of
dendrites and axons, making certain neurons fire, and somewhere along
the way of the activation flow, there is a construction where a certain
crucial
it sorta does, but i don't think you're getting this noise-shaping thing,
which is a similar technology used in 1-bit converters. the model that Adams
proposes is one where this noise-shaping affects neighboring channels in
such a way that it models masking in the
frequency domain.
it
On 08/10/2014, r...@audioimagination.com r...@audioimagination.com wrote:
there is actually a difference between digital signals and discrete-time
signals. not the same thing. but with a sufficiently high sample rate,
you can reasonably simulate a
continuous-time signal and the system
On 08/10/2014, Peter S peter.schoffhau...@gmail.com wrote:
On 08/10/2014, r...@audioimagination.com r...@audioimagination.com wrote:
there is actually a difference between digital signals and discrete-time
signals. not the same thing. but with a sufficiently high sample rate,
you
On 08/10/2014, r...@audioimagination.com r...@audioimagination.com wrote:
maybe before you do, check out that Bob Adams paper to make sure you're not
preaching the same sermon from 17 years ago to the choir.
Do you have a link to it?
On 08/10/2014, r...@audioimagination.com r...@audioimagination.com wrote:
but again, compare the total number of neural impulses per second and the
number of bits per second flying at you with high-quality audio. there is
an information reduction going on there.
By chance, have you made some
Further parallels between [A] analog-to-digital conversion and [B]
pulse-rate based neural auditory encoding:
1) Both encode a continuous signal:
---[A] encoding an analog electric signal
---[B] encoding a waveform on a membrane (*)
2) Both encode the signal in discrete form:
---[A] encoding
On 08/10/2014, r...@audioimagination.com r...@audioimagination.com wrote:
i'm not a biologist nor a physiologist. i am only repeating stuff i
remember from a fascinating presentation at the IEEE Mohonk conference in
1997. i've thought that it was about 100 or fewer firings per second when
Assuming mono CD quality audio (16*44100 = 705,600 bits per second),
the number of bits per second per auditory nerve (assuming a total of
30,000 auditory nerves) is:
705,600 / 30,000 = 23.52
That is, to encode CD quality audio in the cochlear nerve, one nerve
fiber needs to
A quick physiologist's perspective (although I am no longer doing
physiology work)...
There are ~30k fibers connected to ~3500 inner hair cells.
After firing, each neuron needs ~1ms to recover. Actually, some take
longer, but let's say the max rate is 1000 spikes per second.
So 30k fibers
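Putting the numbers from these two posts side by side (back-of-envelope
only):

# mono CD rate vs. a naive per-fiber bit budget
cd_bits_per_second = 16 * 44100      # 705,600 bit/s
fibers = 30_000
print(cd_bits_per_second / fibers)   # 23.52 bit/s per fiber

# upper bound on total firing, given ~1 ms recovery per neuron
max_spikes_per_second = fibers * 1000
print(max_spikes_per_second)         # 30,000,000 spikes/s across the nerve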
Lossy codecs are deemed transparent due to perceptibility and annoyance of
artifacts. How do you resolve lossy codecs with HD download shops like
HDtracks?
On Wed, Oct 8, 2014 at 4:05 PM, Ethan Duni ethan.d...@gmail.com wrote:
Comparing neural firing rate to a PCM data
...@audioimagination.com r...@audioimagination.com
Sent: 08/10/2014 15:43
To: A discussion list for music-related DSP music-dsp@music.columbia.edu
Subject: Re: [music-dsp] #music-dsp chatroom invitation
Assuming mono CD quality audio (16*44100 = 705,600 bits per second),
the number of bits per second per
On 07/10/2014, Theo Verelst theo...@theover.org wrote:
I'm fine with someone taking an FFT as a filter or filter bank
It _is_ a filter bank, literally. Each FFT/DFT bin is like an
individual bandpass filter. The FFT spectrum is the sum of these
individual, overlapping bandpass filters.
Graphed
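A small NumPy sketch of that view (frame length, bin index and window
are arbitrary choices here, not anything canonical):

import numpy as np

N, k = 64, 8   # frame length and bin index (arbitrary)
n = np.arange(N)
# the taps of 'bin k as an FIR filter': a windowed complex exponential
taps = np.hanning(N) * np.exp(-2j * np.pi * k * n / N)

t = np.arange(256)
in_band  = np.sin(2 * np.pi * 8  * t / 64)   # tone at bin 8's center
out_band = np.sin(2 * np.pi * 20 * t / 64)   # tone far from bin 8

# sliding the frame along the signal = convolving with the taps
print(np.abs(np.convolve(in_band,  taps, mode='valid')).mean())  # large
print(np.abs(np.convolve(out_band, taps, mode='valid')).mean())  # near zero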
Peter S wrote:
On 07/10/2014, Theo Verelst theo...@theover.org wrote:
I'm fine with someone taking an FFT as a filter or filter bank
That wasn't my remark, it is namely a strange filter, and not easily
related to pretty much all mechanical/physical/electronics filters, as
the interesting
On 10/7/14 8:45 AM, Peter S wrote:
On 07/10/2014, Theo Verelst theo...@theover.org wrote:
I'm fine with someone taking an FFT as a filter or filter bank
It _is_ a filter bank, literally. Each FFT/DFT bin is like an
individual bandpass filter. The FFT spectrum is the sum of these
individual,
I stumbled across this the other day, it may be relevant to this discussion:
Simple fluid waveguide performs spectral analysis in a manner similar to the
cochlea
http://phys.org/news/2014-09-simple-fluid-waveguide-spectral-analysis.html
Best,
Brian
On Oct 7, 2014, at 4:36 PM, robert
On 07/10/2014, Zhiguang Zhang ericzh...@gmail.com wrote:
The FFT relates
to a ‘filter’ in a way in which you can digitally reconstruct the original
frequency by picking out a magnitude bin and doing an inverse FFT. That way
you can get a sine tone back.
Couldn't we still call it an 'analysis
On 07/10/2014, Zhiguang Zhang ericzh...@gmail.com wrote:
The view of the windowing function having bandpass and cutoff regions is
misleading. The windowing function is in the time domain, whereas filters
operate with a frequency response in the frequency domain.
And doesn't the time domain
On 2014-10-07, Peter S wrote:
I only pointed out that both are 'filterbanks', without stating
anything about where exactly the center frequencies are located, nor
assuming that their centers are located at the same frequencies. (and
the number of bands is also different, obviously.)
Talking
Would that create some kind of a 'vocoder' effect?
Yes :-)
What I have done is take several auditory nerve responses, bandpass-filter
them, and add them all up.
It sounds like a vocoded version of the original. Even with just a few
channels, it is easy to understand speech that has been
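For anyone who wants to play with the idea, here is a crude stand-in.
This is an assumed design, not Jon's actual chain: the
cochlear/auditory-nerve model is replaced by half-wave rectified band
signals, and the function name and band edges are invented:

import numpy as np
from scipy.signal import butter, sosfiltfilt

def crude_vocoder(x, fs, edges=(100, 300, 700, 1500, 3000, 6000)):
    # x: float array, fs: sample rate in Hz (assumes fs well above 12 kHz).
    # Split into a few bands, keep a rectified 'firing-rate-like' signal
    # per band, then bandpass-filter those and add them all up.
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos  = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        band = sosfiltfilt(sos, x)
        rate = np.maximum(band, 0.0)   # crude stand-in for nerve firing rate
        out += sosfiltfilt(sos, rate)
    return out

# e.g. y = crude_vocoder(speech.astype(float), 16000)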
On 10/7/14 3:17 PM, Peter S wrote:
On 07/10/2014, Charles Z Henry czhe...@gmail.com wrote:
Also--the cochlea does not create an invertible representation.
What would happen if we connected each auditory nerve to a single
electrode that controls the volume of a sinusoidal oscillator tuned to
On 10/7/14 4:15 PM, Jon Boley wrote:
Would that create some kind of a 'vocoder' effect?
Yes :-)
What I have done is take several auditory nerve responses,
where did you get them?
were all 3 auditory nerve fibers sampled? (i sorta doubt it.)
how did they measure these auditory nerve
On 07/10/2014, Sampo Syreeni de...@iki.fi wrote:
Talking about filterbanks implicitly says that all of the filters are
somehow structurally the same, and linear.
Not for me. That probably depends on how you define the word 'filterbank'.
I looked up Wikipedia, which defines Filter bank as:
In
On 07/10/2014, Jon Boley j...@jboley.com wrote:
What I have done is take several auditory nerve responses, bandpass-filter
them, and add them all up.
It sounds like a vocoded version of the original. Even with just a few
channels, it is easy to understand speech that has been processed in this
On 07/10/2014, robert bristow-johnson r...@audioimagination.com wrote:
What I have done is take several auditory nerve responses, bandpass-filter
them, and add them all up.
where did you get them?
were all 3 auditory nerve fibers sampled? (i sorta doubt it.)
how did they measure these
We are all talking about analogies here for the type of processing
being performed by an organism.
Except Jon, who has mentioned measurements taken from an actual
organism. Everyone else is not to be taken exactly literally.
watch yer tone, Peter. we're not idiots, because we didn't happen
to
where did you get them?
For this, I used the output of a computer model of the cochlea and auditory
nerve
(The model has been shown to be pretty darn accurate at matching real data
measured in lab animals.)
This page shows the progression (and source code) of the model I used:
On 07/10/2014, Peter S peter.schoffhau...@gmail.com wrote:
Regardless of Wikipedia, for me the word 'filterbank' does not imply
linearity. How is that even meant? As linearly spaced in frequency,
or as f(a)+f(b)=f(a+b)?
And if he meant the latter, then say I take a 'linear' filterbank
On 07/10/2014, Bjorn Roche bj...@xowave.com wrote:
I was just on a call with someone who researches hearing and
psychoacoustics. He happened to mention gamma tone filters, which I had
never heard of. I may have misunderstood, since it was a tangent, but I
believe he said it's a commonly used
Original Message
Subject: Re: [music-dsp] #music-dsp chatroom invitation
From: Peter S peter.schoffhau...@gmail.com
Date: Tue, October 7, 2014 7:40 pm
To: A discussion list for music-related DSP music-dsp@music.columbia.edu
that's a book where he presents his theories.
http://www.amazon.com/Auditory-Visual-Sensations-Yoichi-Ando/dp/144190171X/ref=sr_1_2?s=books&ie=UTF8&qid=1412616971&sr=1-2&keywords=ando+acf
you can find more info searching for Ando's papers on JASA or other
scientific magazines.
On Mon, Oct 6, 2014
i have a lot of issues with some of the subjective statements.
first, i think it's *cross-correlation* (between our ears) more than
auto-correlation that is used in our hearing especially for space
perception.
auto-correlation is directly related to the magnitude of the spectrum
(which the
On 06/10/2014, Charles Z Henry czhe...@gmail.com wrote:
--- SO at what levels are sounds represented like a Fourier Transform:
1. The cochlea--for each frequency, there is a point along the
cochlea where the basilar membrane has its largest displacement. The
inner hair cells are most
Jon,
On 06/10/2014, Jon Boley j...@jboley.com wrote:
In the hearing science community, it is well-established that the cochlea
acts as a filterbank (via the resonances that you mentioned) and each
auditory nerve fiber responds to sounds within a limited frequency range.
Thanks for confirming
On 10/5/14 1:39 AM, Peter S wrote:
Hi Everyone,
I was told that my invitation contained more personal information than
it should, thus it will be removed.
it's still living in my computer.
i think your invite was fine, but i have to confess to being an
old-fashioned old-codger, so i haven't
Hi Everyone,
I was told that my invitation contained more personal information than
it should, thus it will be removed. So, here's the short version of it
again:
You are invited to the #music-dsp IRC chatroom on EFNet (with a dash
in the middle, as opposed to the original #musicdsp).
tl;dr
Cool thanks!
On 10/5/2014 1:39:22 AM, Peter S peter.schoffhau...@gmail.com wrote:
Hi Everyone,
I was told that my invitation contained more personal information than
it should, thus it will be removed. So, here's the short version of it
again:
You are invited to the #music-dsp IRC chatroom on