Element Green wrote:
I'm the author of a SoundFont instrument editing application called
Swami (http://swami.sourceforge.net). A while back an interested
developer added a loop finding algorithm which I integrated into the
application. This feature is supposed to generate a list of start/end
robert bristow-johnson wrote:
one thing i might point out is that, when comparing apples-to-apples, an
optimal design program like Parks-McClellan (firpm() in MATLAB) or
Least-Squares (firls()) might do better than a windowed (i presume Kaiser
window) sinc in most cases. this is where you
Alan Wolfe wrote:
I have a future retro revolution (303 clone) and one of the knobs it
has is resonance.
Does anyone know what resonance is in that context or how it's
implemented?
I was reading some online and it seems like it might be some kind of
feedback but I'm not really sure...
In
Department for hosting the list.
Ross Bencina
Andraudio list admin
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music
Dan Stowell wrote:
As Oskari noted, here's hoping the angle of this new list (the low-level
aspects you mention) can help speed Android towards good low-latency i/o!
Yeah, well that's the main thing that got us together so I hope so.
Some have already started looking at bypassing audioflinger
I oppose patent trolls and trivial patents. Beyond that I think it's a bit
more murky. My basic rule of thumb would be: If I can think of a
mathematical or algorithmic solution to some random problem in my field in
less than a month I don't expect that solution to be patented or patentable.
Hi Andy
Andy Farnell wrote:
I don't want to open up a lengthy OT debate here. But
will reply privately to address some of your points in detail.
Fair enough. I guess the main reason I bought into this conversation is that
I do feel like it's something that affects all of us here and I'm
Hi Andy
I wish I were worthy of quoting Blaise Pascal here, but instead I will just
apologise for the rant...
I think it has a bearing on all of us too. And thus you lure
me in. But if people complain that this is getting boring,
off-topic or ill-natured then let's quit it.
(Subject
Hi Andy
Andy Farnell wrote:
AXIOM: Ideas should not be patentable. Period.
Do I need to explain this?
Sorry, you've lost me a bit here. Perhaps you do need to explain it.. see if
I'm twisting your words below or if you find that I'm addressing your
position (of course I don't expect you to
Hi Andy
Are you suggesting by stating the above axiom that algorithms are
_simply_
ideas and that for this reason alone they shouldn't be patentable?
Yes I am, you've got it.
An algorithm is insufficiently concrete to deserve a patent, it is an
abstraction, a generalisation.
Ok...
An
Morgan wrote:
simply plugging unit generators in
to one another, not having to stop and think about how to, for
example, go from a mono oscillator signal to a stereo reverb signal.
I'd like to be able to work more like I work in SuperCollider, writing
higher-level code to create a signal path,
Kevin Dixon wrote:
My EE friend is recommending I go PIC, but the Arduino looks
promising, especially for fast return on effort :) I guess startup
cost is an issue too, I'd like to be up and running for about 50 USD.
Any thoughts/recommendations? Thanks,
I wouldn't usually use microcontroller
weeks, freely available. The first is actually a developer chapter,
the last in the book, on the internals of scsynth, by Ross Bencina. It gives
a good idea of the layout of the book for those who might not have seen
inside yet, and will be particularly useful to devs exploring how
SuperCollider
robert bristow-johnson wrote:
i don't have time now to complete the analysis, but here is my first pass
at getting the z-plane transfer function (something to compare to the DF1
or DF2).
Thanks very much Robert,
I was able to follow your analysis below. Previously I didn't really
Is that not like saying "It is ok to use an illegal copy of software [x]
because it is so expensive they cannot expect a lot of people to buy it"?
Or do you find this situation to be different?
It might be like saying State funded research should be available free
of charge to the scientific
On 12/01/2012 4:01 AM, robert bristow-johnson wrote:
well, i cannot tell that the WP admins are going to do anything about
this other than wait for the page protection to expire (about 26 hours)
and then see what happens. if enough of us converge upon the article,
then the tendentious editor
Hi Linda
Some (possibly spurious) thoughts...
I'm confused about what you're actually trying to achieve by referencing
things relative to 0dBFS (which is measure of signal level relative to
digital full scale). You talk about frequency responses below, which are
expressed in terms of
On 19/01/2012 9:03 AM, Linda Seltzer wrote:
Why and under what circumstances is it advantageous to set up the Y axis
as dBFS rather than dBV, dBSPL,
out of the ones you mention, dBFS is the only one that has any meaning
in the digital domain -- since as we've established, the others don't
Hi Everyone,
Does anyone know if there's a standard way to calculate pan laws for
stereo-wide panning ?
By stereo-wide I mean panning something beyond the speakers by using
180-degree shifted signal in the opposite speaker. For example, for
beyond hard left you would output full gain signal
:20 PM, Ross Bencina wrote:
Hi Everyone,
Does anyone know if there's a standard way to calculate pan laws for
stereo-wide panning ?
By stereo-wide I mean panning something beyond the speakers by using
180-degree shifted signal in the opposite speaker. For example, for
beyond hard left you would
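To make the idea concrete, here is a sketch of one possible extended pan law: ordinary constant-power panning between the speakers, with a polarity-inverted (negative-gain) signal fed to the opposite speaker beyond hard left/right. The taper in the extended region is my own assumption for illustration, not a standard law:

```python
import math

def stereo_wide_pan(pan):
    """Return (left_gain, right_gain) for pan in [-2.0, 2.0].

    In [-1, 1] this is ordinary constant-power panning. Beyond +/-1 the
    near speaker stays at full gain and the opposite speaker receives a
    polarity-inverted signal, as described above. The linear taper used
    in the extended region is a guess, not a standard.
    """
    if -1.0 <= pan <= 1.0:
        theta = (pan + 1.0) * math.pi / 4.0  # 0 (hard left) .. pi/2 (hard right)
        return math.cos(theta), math.sin(theta)
    if pan < -1.0:  # beyond hard left: full left, inverted signal in right
        return 1.0, -(min(-pan, 2.0) - 1.0)
    return -(min(pan, 2.0) - 1.0), 1.0  # beyond hard right
```

At pan = -2 this gives (1.0, -1.0), i.e. full gain left and a 180-degree shifted full-gain signal in the right, matching the description above.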
On 9/02/2012 1:06 AM, Olli Niemitalo wrote:
Now, it would be unreasonable if, compared to input, the
output would have an opposite polarity in L or R.
I'm not sure what you're getting at here, for example, the following is
reasonable:
Considering the left channel only (right is opposite)
This panning law agrees exactly with the panning described by HRTF methods at
the low frequency limit (and only there).
Jerry
On Feb 7, 2012, at 11:10 PM, Ross Bencina wrote:
Thanks for the responses,
Seems like I may have asked the wrong question.
Ralph Glasgal wrote:
There is no valid
On 9/02/2012 11:02 AM, Jerry wrote:
(Good grief, people.) You want the *very famous* Bauer's Law of Sines:
Benjamin B. Bauer, Phasor Analysis of Some Stereophonic Phenomena, IRE
Transactions on Audio, January-February, 1962.
This panning law is mentioned in many introductory books on stereo
On 11/02/2012 2:27 PM, Jerry wrote:
Glad to help. With your set-up, if you try to put a loud low frequency signal
well outside the loudspeaker array, you will notice that your speakers and/or
amplifiers will have melted. To the extent that sin(theta_A) = theta_A
(small-angle approximation),
On 23/02/2012 6:22 PM, Oskari Tammelin wrote:
Come on, it's a perfect visualization of their understanding of audio.
+1
Hi Brad,
On 24/02/2012 3:01 PM, Brad Garton wrote:
Joining this conversation a little late, but what the heck...
Me too...
On Feb 22, 2012, at 9:18 AM, Michael Gogins wrote:
I got my start in computer music in 1986 or 1987 at the woof group at
Columbia University using cmix on a Sun
Hi Charles,
On 24/02/2012 10:45 PM, Charles Turner wrote:
Anything else is just plugging unit generators together, which is limiting
in many situations
Has it escaped me that Audio Mulch supports this kind of interpretation?
Hi Charles,
I'm not exactly sure what you think has escaped
On 25/02/2012 4:50 AM, Adam Puckett wrote:
Is there a minimal example of a complete working program that renders
a sine wave in realtime using Kernel Streaming that will compile with
just a bare MinGW install? (I have the latest GCC 4.5 on Windows XP
Service Pack 3). Which DirectX SDK do I
Hi Andy,
On 25/02/2012 5:05 AM, Andy Farnell wrote:
The problem with plug-the-unit-generators-together languages for me is that they
privilege the process (network of unit generators) over the content
Some really interesting thoughts here Ross. At what level of
granularity does the trade-off of control,
On 25/02/2012 2:38 PM, Adam Puckett wrote:
What is WaveRT? I don't see it in the tarball.
WaveRT is a recent WDM-KS driver sub-model that was introduced in
Windows Vista. It is the version of WDM-KS that people seem to get
excited about as offering the lowest latency and efficiency. I can't
/wiki/Linguistic_relativity
(see Present status section).
Here's a short excerpt from a discussion on the POTAC list last year. I
really liked Dan Stowell's introduction of the idea of long-term bias
effects [1]:
Ross Bencina wrote:
In any case I am not reverting to strong-SW here.. just
On 27/02/2012 1:11 AM, Brad Garton wrote:
We're fooling around with the new Max/MSP gen~ stuff in class, it
seems an interesting alternative model for low-level DSP coding.
Once they figure out how to do proper conditionals it will be really
powerful.
Why anyone would want to use a visual
On 27/02/2012 1:22 AM, Brad Garton wrote:
I would like to agree with you, because I also value all these things
(and am pretty much a dilettante in all four). But I see an analogy
with the "is a DJ *really* a [computer music] composer?" question
that floats around (or in an earlier generation, is
Hi Andy,
Some comments, and questions for clarification...
IIRC, most Music-N line of systems are multi-rate. That means we
have a fast computation rate, on which audio signals are calculated,
and a slower rate (obviously some integer factor of the audio rate),
usually called the control
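The multi-rate scheme described above can be sketched like this (the control period, the fade-in control function, and the sine are placeholders of my own, not anything from an actual Music-N system):

```python
import math

def render(num_samples, control_period=64, sample_rate=44100.0):
    """Toy multi-rate renderer: a gain value is recomputed only once per
    `control_period` samples (the slow control rate), while the audio
    signal itself is computed every sample (the fast audio rate)."""
    out = []
    gain = 0.0
    for n in range(num_samples):
        if n % control_period == 0:
            # control-rate computation: a simple linear fade-in (placeholder)
            gain = min(1.0, n / 4410.0)
        # audio-rate computation: a 440 Hz sine (placeholder)
        out.append(gain * math.sin(2.0 * math.pi * 440.0 * n / sample_rate))
    return out
```

The control rate here is an integer factor (64) of the audio rate, as the message notes is usual.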
Hi Richard,
On 27/02/2012 3:01 AM, Richard Dobson wrote:
On 26/02/2012 11:33, Ross Bencina wrote:
..
Perhaps I'm not being clear. My point is about being able to execute
arbitrary code at an arbitrary time based on the value of some
signal(s). The zero crossing thing was a simple example
On 28/02/2012 8:55 AM, Richard Dobson wrote:
On 27/02/2012 21:00, Ross Bencina wrote:
..
And as I have already said: computer musicians must be programmers *by
definition*. Otherwise they are musicians, using computers.
But there is no single universally agreed definition of Computer
On 29/02/2012 11:41 AM, Richard Dobson wrote:
On 28/02/2012 16:03, Bill Schottstaedt wrote:
I don't think this conversation is useful. The only question I'd
ask is "did this person make good music?", and I don't care at all about
his degrees or grants. One of the best mathematicians I've known
On 29/02/2012 8:00 AM, douglas repetto wrote:
Oh, come on, transistors are for babies. Real composers roll their own
diodes!
http://hackaday.com/2010/03/05/diy-diodes
Etching your own transistors is still pretty cool:
http://www.youtube.com/watch?v=w_znRopGtbE
Might take a while to make
On 28/06/2012 7:05 AM, Michael Gogins wrote:
Sorry, I confused this discussion with a similar one I was trying to
get going on the Csound list, where I do want to get rid of the VST
SDK while still being able to redistribute Csound source code from
SourceForge.
I have no idea how you propose
Hello Shashank,
I'm interested in this stuff too, but I'm no expert. I've tried to give
some pointers below. Hopefully someone else will correct me if I've made
an error:
On 17/11/2012 8:24 PM, Shashank Kumar (shanxS) wrote:
I am a self taught Linux fanatic who is trying to teach himself
On 19/11/2012 9:24 AM, Bjorn Roche wrote:
(Shashank wrote:)
I have one more question:
Why so many people use analog prototypes to get a digital filter
? Why not just put a few constraints on location of poles/zeros
on Z plane and get done with it ?
This is a really great question.
Indeed.
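For what it's worth, the direct pole-placement approach being asked about can be sketched like this (the rough peak-gain normalization is my own approximation, for illustration only):

```python
import math

def resonator_coeffs(freq_hz, radius, sample_rate):
    """Place a conjugate pole pair at radius * e^(+/- j*2*pi*freq/fs).

    Returns (b0, a1, a2) for y[n] = b0*x[n] - a1*y[n-1] - a2*y[n-2].
    The denominator follows directly from the pole positions:
    (1 - p/z)(1 - conj(p)/z) = 1 - 2*r*cos(w)*z^-1 + r^2*z^-2.
    """
    w = 2.0 * math.pi * freq_hz / sample_rate
    a1 = -2.0 * radius * math.cos(w)
    a2 = radius * radius
    b0 = 1.0 - radius  # rough peak-gain normalization (an approximation)
    return b0, a1, a2
```

The catch, as the thread goes on to discuss, is that direct placement gives you less obvious control over passband shape and analog-matched behaviour than starting from an analog prototype.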
On 19/11/2012 6:33 AM, Shashank Kumar (shanxS) wrote:
Why so many people use analog prototypes to get a digital filter ?
Further to this question, I just came across this brief but
enlightening piece by Ken Steiglitz, it discusses the dawn of the use of
the BLT and music-dsp:
Hi Alessandro,
A lot has been written about this. Google "precision of summing floating
point values" and read the .pdfs on the first page for some analysis.
Follow the citations for more.
Somewhere there is a paper that analyses the performance of different
methods and suggests the optimal
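One standard answer from that literature is compensated summation. As a sketch, here is Neumaier's variant of Kahan's algorithm, which also copes with terms larger than the running total:

```python
def neumaier_sum(values):
    """Compensated summation: carry the rounding error of each addition
    in a separate low-order accumulator and fold it back in at the end."""
    total = 0.0
    compensation = 0.0
    for v in values:
        t = total + v
        if abs(total) >= abs(v):
            compensation += (total - t) + v  # low-order bits of v were lost
        else:
            compensation += (v - t) + total  # low-order bits of total were lost
        total = t
    return total + compensation
```

On the classic pathological case [1e16, 1.0, -1e16], naive left-to-right summation returns 0.0 while the compensated sum returns the exact 1.0.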
On 10/12/2012 1:47 PM, Bjorn Roche wrote:
There is something called double double which is a software 128
bit floating point type that maybe isn't too expensive.
long double, I believe
No. long double afaik usually means extended precision, as supported
in hardware by the x86 FPU, and is 80
Hi Everyone,
I have a question which in a broad sense relates to physical modelling
and acoustics:
Under what circumstances does a resonating (acoustic) system move energy
from one frequency to another?
One gross example I can think of would be snares on a snare drum.
But aside from
On 4/01/2013 4:05 AM, Thomas Young wrote:
Is there a way to modify the bandpass coefficient equations in the
cookbook (the one from the analogue prototype H(s) = s / (s^2 + s/Q +
1)) such that the gain of the stopband may be specified? I want to be
able
I'm pretty sure that the BLT bandpass
Hi Thomas,
Replying to both of your messages at once...
On 4/01/2013 4:34 AM, Thomas Young wrote:
However I was hoping to avoid scaling the output since if I have to
do that then I might as well just change the wet/dry mix with the
original signal for essentially the same effect and less
Hello Jeff,
Before I attempt an answer, can I ask: what programming languages do you
know (if any) and how proficient are you at programming?
Ross.
On 21/01/2013 9:49 PM, Jeffrey Small wrote:
Hello,
I'm a recently new computer programmer that is interested in getting into the
world of
Hi Jeff,
At your stage of learning with C the advice to just write some code
seems most pertinent, but I guess it depends on your learning style.
Coming up with an achievable project and seeing it through to completion
is a good way to learn programming.
"Read lots of code" applies, and is
On 21/01/2013 9:49 PM, Jeffrey Small wrote:
I'm a recently new computer programmer that is interested in getting
into the world of Audio Plug Ins. I have a degree in Recording/Music,
as well as a degree in Applied Mathematics. How would you recommend
that I start learning how to program for
Hi Russell,
So to be clear, you're creating a Linkwitz-Riley crossover?
http://en.wikipedia.org/wiki/Linkwitz%E2%80%93Riley_filter
On 8/02/2013 6:05 PM, Russell Borogove wrote:
I have two digital 12dB/octave state-variable filters, each with
lowpass/highpass/bandpass/notch outputs; I'd like
Hi Bram,
A Generalization of the Biquadratic Parametric Equalizer
Christensen, Knud Bank
AES 115 (October 2003)
https://secure.aes.org/forum/pubs/conventions/?elib=12429
Defines equations with a symmetry parameter for smoothly moving
between the states you mention. There are graphs so you can
On 11/02/2013 1:37 AM, robert bristow-johnson wrote:
maybe i shouldn't say this, but someone here likely has a pdf copy of
the paper in case it breaks your bank to buy it from AES.
Unfortunately not me. I lost the pdf in a data loss incident and only
have a printout and don't have an AES
Can someone please explain the scientific basis for this kind of study?
Surely by now it is widely accepted that correlations between music and
mood and emotion are culturally biased and socially acquired?
Does the study below control for cultural bias?
Please explain why an otherwise
Hi Marcelo,
Just came across this, maybe it is helpful:
Rorschach Audio – Art Illusion for Sound On The Art
http://rorschachaudio.wordpress.com/about/
Ross.
On 19/02/2013 9:26 PM, Marcelo Caetano wrote:
Dear list,
I'll teach a couple of introductory lectures on audio and music
Tel: +44 (0)20 7882 7986 - Fax: +44 (0)20 7882 7997
E-mail: mathieu.bart...@eecs.qmul.ac.uk
http://www.elec.qmul.ac.uk/digitalmusic/
From: music-dsp-boun...@music.columbia.edu
[music-dsp-boun...@music.columbia.edu] on behalf of Ross Bencina
[rossb-li
On 22/02/2013 9:54 AM, Richard Dobson wrote:
Listen to each track at least once and then select which track is the
best match with the seed. If you think that none of them match, just
select an answer at random.
Now I am no statistician, but with only four possible answers offered
per test,
, lightbulb or schoolbus? Uh, lightbulb? No!
I'm sorry, schoolbus is more macho than lightbulb.
Best to you,
Ross.
best @ all
Andy
On Fri, Feb 22, 2013 at 10:19:02AM +1100, Ross Bencina wrote:
On 22/02/2013 9:54 AM, Richard Dobson wrote:
Listen to each track at least once and then select which
Stephen,
On 8/03/2013 9:29 AM, ChordWizard Software wrote:
a) additive mixing of audio buffers b) clearing to zero before
additive processing
You could also consider writing (rather than adding) the first signal to
the buffer. That way you don't have to zero it first. It requires having
a
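The write-then-add idea might look like the following sketch (the empty-input special case, where the buffer must still be cleared, is the main cost of skipping the zeroing pass):

```python
def mix_into(out, sources):
    """Mix `sources` into `out` without pre-zeroing: the first source is
    *written* (overwriting whatever stale data is in `out`), and the
    remaining sources are *added*."""
    if not sources:
        for i in range(len(out)):
            out[i] = 0.0  # no inputs at all: buffer must still be cleared
        return out
    for k, src in enumerate(sources):
        if k == 0:
            for i in range(len(out)):
                out[i] = src[i]   # write: no separate zeroing pass needed
        else:
            for i in range(len(out)):
                out[i] += src[i]  # add: accumulate
    return out
```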
On 9/03/2013 9:53 AM, ChordWizard Software wrote:
Maybe you can advise me on a related question - what's the best
approach to implementing attenuation? I'm guessing it is not
linear, since perceived sound loudness has a logarithmic profile - or
am I confusing amplifier wattage with signal
On 9/03/2013 2:55 PM, Ross Bencina wrote:
Note that audio faders are not linear in decibels either, e.g.:
http://iub.edu/~emusic/etext/studio/studio_images/mixer9.jpg
There is some discussion here:
http://www.kvraudio.com/forum/viewtopic.php?t=348751
Ross.
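For reference, the dB/linear conversions underlying the attenuation discussion are short enough to write out:

```python
import math

def db_to_gain(db):
    """Convert a level in decibels to a linear amplitude gain."""
    return 10.0 ** (db / 20.0)

def gain_to_db(gain):
    """Convert a linear amplitude gain to decibels."""
    return 20.0 * math.log10(gain)
```

A fader that were linear in dB would map its position p in [0, 1] to db_to_gain(min_db + p * (max_db - min_db)); as noted above, real fader tapers are not linear in dB either.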
On 10/03/2013 7:01 AM, Tim Goetze wrote:
[robert bristow-johnson]
On 3/9/13 1:31 PM, Wen Xue wrote:
I think one can trust the compiler to handle a/3.14 as a multiplication. If it
doesn't it'd probably be worse to write a*(1/3.14), for this would be a
division AND a multiplication.
there are
On 15/03/2013 6:02 AM, jpff wrote:
Ross == Ross Bencina <rossb-li...@audiomulch.com> writes:
Ross> I am suspicious about whether the mask is faster than the conditional for
Ross> a couple of reasons:
Ross> - branch prediction works well if the branch usually falls one way
Ross> - cmove
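The two wrap-around strategies under comparison might be sketched as follows (the mask version assumes a power-of-two buffer length):

```python
def wrap_branch(i, buffer_len):
    """Wrap a buffer index with a conditional (relies on branch
    prediction, or on the compiler emitting a conditional move)."""
    return i - buffer_len if i >= buffer_len else i

def wrap_mask(i, buffer_len):
    """Wrap a buffer index with a bitwise AND; branch-free, but only
    valid when buffer_len is a power of two."""
    return i & (buffer_len - 1)
```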
On 15/03/2013 7:27 AM, Sampo Syreeni wrote:
Quite a number of processors have/used to have explicit support for
counted for loops. Has anybody tried masking against doing the inner
loop as a buffer-sized counted for and only worrying about the
wrap-around in an outer, second loop, the way we do
On 26/03/2013 4:55 PM, Alan Wolfe wrote:
I just wanted to chime in real quick to say that unless you need to go
multithreaded for some reason, you are far better off doing things
single threaded.
Introducing more threads does give you more processing power, but the
communication between threads
On 26/03/2013 5:28 PM, ChordWizard Software wrote:
Hi Ross,
Thanks, couple more questions then:
- There can be significant jitter in the time at which an audio callback
is called.
Can you define jitter? Callbacks with different frame counts, or dropped
frames?
If you call
On 7/08/2013 2:38 AM, Theo Verelst wrote:
I suppose in EE terms, if you know something about the waves you're
trying to detect
Strikes me that we are talking about perceptual note onset, not
something you could define /easily/ in EE terms.
You would need a definition of note onset that
On 7/08/2013 12:23 PM, charles morrow wrote:
Please explain your reference "Robert's transcription notes" for me.
Robert expressed the following requirement:
On 6/08/2013 6:01 AM, robert bristow-johnson wrote:
the big problem i am dealing with is people singing or humming and
changing notes.
Hi Robert,
I have a question: are you trying to output the pitch and note on/off
information in a real-time streaming scenario with minimum delay? or is
this an off-line process? My impression is that the MIR folk worry less
about minimum-delay/causal processing than us real-time people.
On 3/11/2013 3:22 PM, Laurent de Soras wrote:
Chris Townsend wrote:
Any ideas? Recommendations?
Probably this:
http://cytomic.com/files/dsp/SvfLinearTrapOptimised.pdf
Consider ramping interpolated coefficients at audio rate to smooth out
parameter changes. I'm pretty sure that Andy's
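A minimal sketch of per-sample coefficient ramping (linear interpolation across one block; whether linearly interpolating a given coefficient set is safe is a separate question, and this helper is my own illustration, not from the linked paper):

```python
def ramp_coefficients(old, new, block_size):
    """Yield one interpolated coefficient tuple per sample, moving
    linearly from `old` to `new` across `block_size` samples."""
    for i in range(1, block_size + 1):
        t = i / block_size
        yield tuple(o + (n - o) * t for o, n in zip(old, new))
```

The filter then processes each sample with the current interpolated set; by the last sample of the block the coefficients have reached (up to rounding) the new target.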
On 8/11/2013 4:29 AM, Theo Verelst wrote:
Fine. He insulted run of the mill academic EE insights from decades ago,
i merely stated facts, which should be respected, but here are still not.
The theory is quite right, and I've taken the effort of correcting a lot
of misinterpretations. I suppose
that the lower bound on k
can approach zero as the 2,2 entry approaches zero from below.
Hopefully I'm not imagining things.
Ross.
On 11/11/2013 2:58 AM, Ross Bencina wrote:
Hi Everyone,
I took a stab at converting Andrew's SVF derivation [1] to a state space
representation and followed
Hi Ezra,
A few comments:
On 11/11/2013 3:19 PM, Ezra Buchla wrote:
there seems to be some concern about distortion introduced by the
trapezoidal integration. i've tried the algo in both fixed 32 ands
float, and it seems to sound and look ok to but i have not done a
proper analysis either
On 11/11/2013 12:21 PM, robert bristow-johnson wrote:
but you cannot define your current output sample in terms of the
current output sample.
But that, with all due respect, is what has been done for quite a while.
it's been reported or *reputed* to be done for quite a while.
but when the
On 12/11/2013 7:40 PM, Tim Blechmann wrote:
some real-world benchmarks from the csound community imply a performance
difference of roughly 10% [1].
Csound doesn't have a facility for running multiple filters in parallel
though does it? not even 2 in parallel for stereo.
4 biquads in
response which may be what matters most in audio DSP.
Max
On 14 November 2013 14:06, Ross Bencina rossb-li...@audiomulch.com wrote:
On 14/11/2013 11:41 PM, Max Little wrote:
I may have misread, but the discussion seems to suggest that this
discipline is just discovering implicit finite differencing
On 11/12/2013 4:29 PM, Sol Friedman wrote:
minimum phase would be a likely candidate
Is minimum-phase a well defined property of non-linear time-varying systems?
Ross.
Hi Mark,
I'm not really sure that I understand the problem. Can you be more
specific about the problems that you're facing?
Personally I would avoid managed code for anything real-time (ducks).
You'll need to build a simple audio engine (consider PortAudio or the
ASIO SDK). And write some
Hello Mark,
On 27/02/2014 3:52 PM, Mark Garvin wrote:
Most sample banks these days seem to be in NKI format (Native
Instruments). They have the ability to map ranges of a keyboard into
different samples so the timbres don't become munchkin-ized or
Vader-ized. IOW, natural sound within each
On 28/02/2014 12:16 AM, Michael Gogins wrote:
For straight sample playback, the C library FluidSynth, you can use it via
PInvoke. FluidSynth plays SoundFonts, which are widely available, and there
are tools for making your own SoundFonts from sample recordings.
For more sophisticated synthesis,
On 28/02/2014 2:06 PM, Michael Gogins wrote:
I think the VSTHost code could be adapted. It is possible to mix managed
C++/CLI and unmanaged standard C++ code in a single binary. I think this
could be used to provide a .NET wrapper for the VSTHost classes that C#
could use.
I agree.
Maybe I
On 5/03/2014 7:56 AM, Ethan Duni wrote:
Seems like somebody somewhere should have already thought
through the problem of matching a single biquad stage to an arbitrary
frequency response - anybody?
Pretty sure that the oft-cited Knud Bank Christensen paper does LMS fit
of a biquad over an
On 5/03/2014 2:27 PM, Sampo Syreeni wrote:
Pretty sure that literature has to contain the relevant algorithms if
used with just a single resonance.
I never looked at rational function fitting, but this would be easy
enough to try:
http://www.mathworks.com.au/help/rf/rationalfit.html
The
On 15/03/2014 1:46 AM, Richard Dobson wrote:
But portaudio only states the software i/o buffer latency, it knows
nothing directly of internal codec latencies. You would need to subtract
the (two-way?) buffer latency portaudio reports, and then measure or
compute how much of the remainder is down
On 27/03/2014 3:23 PM, Doug Houghton wrote:
Is that making any sense? I'm struggling with the fine points. I bet
this is obvious if you understand the math in the proof.
I'm following along, vaguely.
My take is that this conversation is not making enough sense to give you
the certainty you
On 19/06/2014 4:52 PM, Rohit Agarwal wrote:
In terms of computational complexity, most of the complexity is in
modelling, tuning the parameters to fit data. However, once you're done
with this offline task, running the result should not be that heavy. That
process should be real-time on new
On 19/06/2014 7:09 PM, Rohit Agarwal wrote:
Enlighten me, does that mean faster tempo or is 10% too much delay for
that?
I think that this conversation is at risk of going off the rails. Make
sure that you're asking the right question.
There are a number of different ways that delays can
Hi Rich,
On 22/06/2014 1:09 AM, Rich Breen wrote:
Just as a data point; Been measuring and dealing with converter and
DSP throughput latency in the studio since the first digital machines
in the early '80's;
Out of interest, what is your latency measurement method of choice?
my own
On 28/11/2014 12:54 AM, Victor Lazzarini wrote:
Thanks everyone for the links. Apart from an article in arXiv written by
viznut, I had no
further luck finding papers on the subject (the article was from 2011, so I
thought that by
now there would have been something somewhere, beyond the code
On 21/12/2014 5:12 PM, Andrew Simper wrote:
and all the other papers (including the SVF version of the same thing I did
a while back) are always available here:
www.cytomic.com/techincal-papers
Actually:
http://www.cytomic.com/technical-papers
On 2/02/2015 9:45 PM, Vadim Zavalishin wrote:
One should be careful not to mix up two different requirements:
- time-varying stability of the filter
- the minimization of modulation artifacts
True.
My logic was thus: One way to minimise artifacts is to band-limit the
coefficient changes.
Hi Ethan,
On 6/02/2015 1:17 PM, Ethan Duni wrote:
There is just no way A/B testing on a sample of listeners,
at loud, but still realistic listening levels, would show that
dithering to 16bit makes a difference.
Well, can you refer us to an A/B test that confirms your assertions?
Personally
Hello Alan,
On 1/02/2015 4:51 AM, Alan O Cinneide wrote:
Dear List,
While filtering an audio stream, I'd like to change the filter's
characteristics.
You didn't say what kind of filter, so I'll assume a bi-quad section.
In order to do this without audible artifacts, I've been filtering a
Hello Ralph,
On 19/06/2015 9:18 AM, Ralph Glasgal wrote:
I used to have AudioMulch 1.0 working fine with Waves IR-1 VST hall
impulse responses. But after a computer crash I can't seem to get
Waves working with either AudioMulch 1.0 or 2.2 due to a lack of
kPlugCategShell support. How do I get
Hey Bjorn, Connor,
On 12/06/2015 1:27 AM, Bjorn Roche wrote:
The important thing is to do anything that might take an unbounded
amount of time outside your callback. For a simple FFT, the rule of
thumb might bethat all setup takes place outside the callback. For
example, as long as you do all
Hi Everyone,
Suppose that I generate a time series x[n] as follows:
>>>
P is a constant value between 0 and 1
At each time step n (n is an integer):
r[n] = uniform_random(0, 1)
x[n] = (r[n] <= P) ? uniform_random(-1, 1) : x[n-1]
Where "(a) ? b : c" is the C ternary operator that takes on the
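Expressed as runnable code (the initial value before n = 0 isn't specified above, so drawing it from the same uniform distribution is my own assumption):

```python
import random

def random_hold(num_samples, P, seed=None):
    """Generate x[n] as described: at each step, with probability P draw
    a fresh uniform(-1, 1) sample, otherwise hold the previous value."""
    rng = random.Random(seed)
    x = []
    prev = rng.uniform(-1.0, 1.0)  # assumed initial condition (not specified)
    for _ in range(num_samples):
        r = rng.uniform(0.0, 1.0)
        prev = rng.uniform(-1.0, 1.0) if r <= P else prev
        x.append(prev)
    return x
```

With P = 1 every sample is a fresh draw (white noise); with P = 0 the initial value is held forever, which is what makes the spectrum P-dependent.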
Thanks Ethan(s),
I was able to follow your derivation. A few questions:
On 4/11/2015 7:07 PM, Ethan Duni wrote:
It's pretty straightforward to derive the autocorrelation and psd for
this one. Let me restate it with some convenient notation. Let's say
there are a parameter P in (0,1) and 3
On 4/11/2015 9:39 AM, robert bristow-johnson wrote:
i have to confess that this is hard and i don't have a concrete solution
for you.
Knowing that this isn't well known helps. I have an idea (see below). It
might be wrong.
it seems to me that, by this description:
r[n] =
art looking.
Ross.
E
On Tue, Nov 3, 2015 at 9:42 AM, Ross Bencina <rossb-li...@audiomulch.com
<mailto:rossb-li...@audiomulch.com>> wrote:
Hi Everyone,
Suppose that I generate a time series x[n] as follows:
>>>
P is a constant value between 0 and 1
On 6/09/2015 8:37 AM, Daniel Varela wrote:
sample rate is part of the audio information so any related message
( AudioSampleBuffer ) should provide it, no need to extend the discussion.
There's more than one concept at play here:
(1) If you consider the AudioSampleBuffer as a stand-alone