Re: [music-dsp] Simulating Valve Amps

2014-06-25 Thread robert bristow-johnson


i forgot for the 2nd or 3rd time, but we need to take the [admin] tag 
offa the Subject: header.


second, at the risk of being pedantic, i realized i didn't declare and 
type everything to make the pseudocode legit with a compiler.



On 6/25/14 11:05 AM, robert bristow-johnson wrote:


 float f(float x1, float x2, float x3)
 {
     float y = some_initial_estimate;
     float y0;

     do
     {
         y0 = y;
         y = g(y0, x1, x2, x3);
     }
     while ( fabsf(y - y0) > epsilon );

     return y;
 }




--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Simulating Valve Amps

2014-06-25 Thread robert bristow-johnson

On 6/25/14 1:40 PM, Ethan Duni wrote:

which is the point.  so these iterations don't have to happen *during* runtime.


At the cost of additional memory overhead, sure. Pretty much any algorithm
can be refactored in various ways to trade off runtime MIPS, memory
overhead, etc. Which implementational choice is best depends on the context
- how precious CPU cycles are vs how precious memory is, etc. In some
cases, trading MIPS for memory is desirable, in others it's not.


similarly to why do real-time additive synthesis when computing wavetables in 
advance will result in the same waveform?

The answer is because memory is more precious than MIPS in your specific
implementational context.



well, in the year 2014, let's consider that relative cost.  how 
expensive is a 1/2 MB in a computer with 8 or more GB?  unlike MIPS, 
which increase linearly with the number of simultaneous voices and such, 
a large lookup table can be used in common with every voice.  (without 
naming names, i am aware of a specific manufacturer who is licensing 
their synth and sample-playback and amp emulation code to another 
unnamed company for cloud use.  and because the first-mentioned 
manufacturer would not restructure their code, the latter company must 
in their servers, *duplicate* *everything* including large tables with 
constant values, for each and every instantiation of this library in the 
servers.  so i would like to assume that we're smart enough to avoid that.)


both speed and memory have been increasing according to, roughly, 
Moore's Law.  but, because of the speed of propagation of electrons, i 
am seeing a wall approaching on speed (not considering parallel or 
multiprocessing which is a complication that hurts my old geezer brain 
to think about - just the scheduling is a fright), but with 64-bit words 
(and 64-bit addresses), i am not seeing a similar wall on memory.


but i am fully aware of the tradeoff.  at the other extreme, there was 
another company i used to work for that had a reasonably cheap 
ARM-like CPU in their synth and they seemed to believe that table lookup 
is the only way to do anything.  they (unnecessarily, in my opinion) 
wasted many megabytes on huge tables to do what could have been easily 
done with a 4 or 5 term power series.  so i have been arguing both sides 
here, in case you think that i'm some sorta neophyte computer guy that 
doesn't understand the cost of memory.


if it's a Freescale 56K chip in a stompbox, your point is well-taken.


so how are the tube curves obtained and represented in memory in the first 
place?


In the present context, my understanding is that they are modeled as affine
combinations of functions like tanh(), which then are folded into the
implementation. So they're baked into the instruction code, and not sitting
in any discrete place in memory at all. Am I missing your point?


no, you're addressing it.  (but if i were a different participant, i 
might answer your question "Yes" or "Ahh, yes" and say later i meant 
"no".)  it leads to the next question of: how is tanh() (or whatever 
family of curves) computed with a CPU that knows how to move and add and 
subtract and multiply and divide floating-point and integer numbers (and 
other basic bit manipulation operations)?



what's the point in straining to get a perfect representation of approximated 
data to begin with?

My point was only that the table approach adds an additional layer of
approximation, which must be accounted for.


but, in the engineering lab (let's say it's a computer with MATLAB or 
whatever numerical computational platform of your heart's desire), you 
*still* have your original tube curve data that was empirically 
obtained.  so the *net* accounting is whatever the end result you get 
from the non-iterative function evaluation to the original data.


in my stripped-down representation of f() and g() in the previous post, 
how do we know that, once we start fitting curves to f(), that it will 
have significantly greater error than fitting curves to g()?  maybe we 
can tweak the coefficients to f() and, instead of aiming to fit g() 
(which was fit to the original empirical data), we can fit to the 
original empirical data?



And which is going to require runtime MIPS. Let's note that a single
interpolation between two table points is about as much MIPS as a single
Newton step (and in high dimensions, that is the *simplest* interpolation
available - it's easy to think of approaches that use more than two points,
by varying different combinations of the independent variables). So if
you're comparing to a non-table iterative solution that uses, on average, 2
Newton steps, then the table based approach is only saving about 1 Newton
step worth of MIPS on average, in the simplest case. That's not a lot, and
so doesn't leave you much room to invent more sophisticated table
techniques to reduce memory footprint while still reducing MIPS.



understood.  so, then again i'll ask, can we 

Re: [music-dsp] Simulating Valve Amps

2014-06-25 Thread Scott L
In an effort to learn more on the topic, are there any suggested articles, 
books, etc. that could help grok that basic machinery?   I've been trying
to follow the thread (and earlier ZDF threads), but am waaay rusty.

I do have the PDF book Vadim shared a while back which I intend to spend 
some quality time with, but any other resources would be helpful.

Thanks,

Scott



From: Andrew Simper a...@cytomic.com
To: A discussion list for music-related DSP music-dsp@music.columbia.edu 
Sent: Tuesday, June 24, 2014 11:20 PM
Subject: Re: [music-dsp] [admin] Re: Simulating Valve Amps




Thanks Ethan for your contribution, I couldn't agree more with
everything in your entire post.  I hope you will have more luck
communicating these concepts than I did. I think a lot of the problem
is lack of familiarity for some people about the basic machinery we
are talking about. To become familiar requires not just time and
effort, but most of all willingness to learn methods that may be
helpful.


Re: [music-dsp] Simulating Valve Amps

2014-06-25 Thread Theo Verelst
Still about the simulation idea: you've got to be aware of what you're 
simulating, and of the proper, theoretically founded simplifications 
you apply, or you risk sounding, well, ehm, the same as a lot of 
others who use similar approximations, or rather simplified compared 
to the good synthesizers, or not as adapted to pro norms as some of 
the (better) digital hardware synths are.


For instance, years ago (more than a decade and a half ago) I used an 
arctan function in an Open Source (non-commercial license) C program 
which does a physical simulation, and the atan() was clearly intended to 
imitate a somewhat smooth distortion effect (like in a guitar amp). 
However, it was part of an active physical simulation signal path (with 
bidirectional components), and only chosen for its general properties; 
there would be many other viable choices known to cunning mathematicians 
(or EEs or physicists). Also, that particular program was supposed to 
run more or less as a prototype on a (for the time) potent Personal 
Computer (Intel kind), which made it unproblematic to do single or 
double precision C function calls as part of a pipelined computation 
with sufficiently many pipeline steps. I wanted to use the prototype 
example software as a basis for non-PC implementations, as a challenge.


To me it isn't all too logical to assume that pure DSP-ish 
implementations, in this time of heavily pipelined, cached and somewhat 
parallel PCs (up to 12 threads in modern machines), will tend toward the 
same solutions as specialist hardware approaches, including new 
(parallel) DSPs with interesting acceleration components, special chips 
and programmable logic, because the nature of the PC architecture is 
more in the direction of the ancient supercomputers, which for instance 
ran Spice fine.


From a mathematical point of view (I'm no stranger to Theoretical 
Physics), talking about iterators for non-linear function 
approximations, and about the number of dimensions for a sub-matrix 
computation that is used neither to compute eigenvalues nor for 
matrix/system inversion, is a bit strange; probably a proper efficiency 
analysis in line with the latest computer means is in order. Of course a 
mathematician may not want to do that too much by nature, but the 
analysis to me sounds not very effective, no matter how fun the 
mathematical approximation woods are.


T.



Re: [music-dsp] Simulating Valve Amps

2014-06-25 Thread Andrew Simper
On 26 June 2014 03:11, robert bristow-johnson r...@audioimagination.com wrote:
 well, in the year 2014, let's consider that relative cost.  how expensive is
 a 1/2 MB in a computer with 8 or more GB?  unlike MIPS, which increase
 linearly with the number of simultaneous voices and such, a large lookup
 table can be used in common with every voice

 both speed and memory have been increasing according to, roughly, Moore's
 Law.  but, because of the speed of propagation of electrons, i am seeing a
 wall approaching on speed (not considering parallel or multiprocessing which
 is a complication that hurts my old geezer brain to think about - just the
 scheduling is a fright), but with 64-bit words (and 64-bit addresses), i am
 not seeing a similar wall on memory.

 but i am fully aware of the tradeoff.  ...

 if it's a Freescale 56K chip in a stompbox, your point is well-taken.


I think the only way to tell is profiling on the target platform. The
point here is that huge table lookups are likely not going to be the
best approach on a modern computer. Most people are aware of keeping
as much as possible in the cache and of trying not to use large LUTs
if you can help it, but here is a quick reminder video of where
things are right now, and where they are headed in the future:

http://channel9.msdn.com/Events/Build/2014/4-587


Re: [music-dsp] Simulating Valve Amps

2014-06-25 Thread Andrew Simper
PS: the keyword I left out here is "memory bound"


On 26 June 2014 12:31, Andrew Simper a...@cytomic.com wrote:
 On 26 June 2014 03:11, robert bristow-johnson r...@audioimagination.com 
 wrote:
 well, in the year 2014, let's consider that relative cost.  how expensive is
 a 1/2 MB in a computer with 8 or more GB?  unlike MIPS, which increase
 linearly with the number of simultaneous voices and such, a large lookup
 table can be used in common with every voice

 both speed and memory have been increasing according to, roughly, Moore's
 Law.  but, because of the speed of propagation of electrons, i am seeing a
 wall approaching on speed (not considering parallel or multiprocessing which
 is a complication that hurts my old geezer brain to think about - just the
 scheduling is a fright), but with 64-bit words (and 64-bit addresses), i am
 not seeing a similar wall on memory.

 but i am fully aware of the tradeoff.  ...

 if it's a Freescale 56K chip in a stompbox, your point is well-taken.


 I think the only way to tell is profiling on the target platform. The
 point here is that huge table lookups are likely not going to be the
 best approach on a modern computer. Most people are aware of keeping
 as much as possible in the cache and of trying not to use large LUTs
 if you can help it, but here is a quick reminder video of where
 things are right now, and where they are headed in the future:

 http://channel9.msdn.com/Events/Build/2014/4-587


Re: [music-dsp] Simulating Valve Amps

2014-06-24 Thread Urs Heckmann

On 24.06.2014, at 10:16, Stefan Stenzel stefan.sten...@waldorfmusic.de wrote:

 
 On 24 Jun 2014, at 0:37 , Urs Heckmann u...@u-he.com wrote:
 
 
 (Odyssee?) - fully analogue synths. That's currently the only way to get 
 something decent in hardware. Proper digital models seem to only make it 
 into software plug-ins.
 
 
 Careful with such an arrogant claim, and maybe consider it might be annoying 
 for those of us who make hardware.

Please accept my apologies, I was generalising a bit too eagerly. I have no idea 
what you (and most others) do in hardware and how it sounds. 

Yet we plug-in guys live with some sort of stigma that software is somehow 
lesser than hardware, hence this is where I'm coming from when someone 
plays the hardware card. It was the wrong moment though, and the wrong 
audience. Sorry for that.

Would be great to hear what concepts you favour to create digital models of 
distinct analogue filters.

Cheers,

- Urs





Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Urs Heckmann

On 23.06.2014, at 06:37, robert bristow-johnson r...@audioimagination.com 
wrote:

 the other thing Urs brought up for discussion is an iterative and recursive 
 process that converges on a result value, given an input.  i am saying that 
 this can be rolled out into a non-recursive equivalent, if the number of 
 iterations needed for convergence is finite.  this is a totally different 
 issue.

Yes, it's a totally different issue, but I arrived there from the same 
assumptions Andy did (actually, I think it was Andy who brought me there):

1. The filter algorithm can be derived from circuit analysis directly, without 
taking a detour over an abstraction layer

2. If done so, non-linear elements of the circuit can be modeled at the right 
place

Regarding the iterative method, unrolling like you did

   y0 = y[n-1]
   y1 = g * ( x[n] - tanh( y0 ) ) + s
   y2 = g * ( x[n] - tanh( y1 ) ) + s
   y3 = g * ( x[n] - tanh( y2 ) ) + s
   y[n] = y3

is *not* what I described in general. It's a subset that won't ever converge in 3 
iterations :^)

Thing is, while starting with y[n-1] is a possibility, it's not the only one 
and in my experience it's hardly ever a good one.

A better solution is to start calculating the boundaries of the solution. In 
this case it can be done easily:

y_max = g * ( x - (-1) ) + s;
y_min = g * ( x - (+1) ) + s;

... because +1 and -1 are the extreme values we'll ever get for tanh( y ).

Then you can immediately find the root of the linear equivalent by taking ( 
y_max + y_min ) / 2

This would be a better starting point, and it would *not* be derived from any 
history.

The actual iteration is then something like this, which is an example of the 
bisection method

while ( ( y_max - y_min ) > 0.1 )
{
y_test = ( y_max + y_min ) / 2;

y_result = g * ( x - tanh( y_test ) ) + s;

if ( y_result > y_test ) y_min = y_test;
else y_max = y_test;
}

http://en.wikipedia.org/wiki/Bisection_method

(not sure about getting the compare right, I need more coffee...)

To make this a filter of course we have to then also update s, which in case of 
Euler or the trapezoidal rule can be done afterwards.

Cheers,

- Urs


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Andrew Simper
On 23 June 2014 17:11, Ivan Cohen ivan.co...@orosys.fr wrote:
 Hello everybody !

 I may be able to clarify a little the confusion here...

Thanks Ivan for your great email contribution. I will only reply to
the one and only correction / clarification to what I have posted
previously.


 The bilinear transform is a tool allowing people to get from a Laplace
 transfer function H(s) an equivalent Z transfer function H(z). It is based
 on the fact that z is precisely equivalent to exp(sT) with T the sampling
 period. So, s equals 1/T * ln(z), which may be approximated with its first
 Taylor series term, 2/T * (z - 1) / (z + 1).

Thanks for pointing this out clearly! In the derivations I've seen of the
bi-linear transform they use trapezoidal integration to get there;
taking the first Taylor series expansion of 1/T ln(z) makes more sense
to distinguish it from trapezoidal integration, even though it results
in the same thing. I would love to see the original derivation by
Tustin, in particular where the idea came from; I've searched but
can't find it. Wikipedia / Oppenheim doesn't help here either:
...where T is the numerical integration step size of the trapezoidal
rule used in the bilinear transform derivation.[1]
[1] Oppenheim, Alan (2010). Discrete Time Signal Processing Third
Edition. Upper Saddle River, NJ: Pearson Higher Education, Inc. p.
504. ISBN 978-0-13-198842-2.

Again thanks for your post, I just hope people not only read it, but
take the time to understand it.


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Theo Verelst
Always good to have a nice interaction about the theoretical basis of 
scientific work, and related practical implementations, isn't it ?


To add a little positive note to the whole story, after maybe having 
bashed some people's work in a theoretically limited corner too much 
for their immediate liking, here is a little (and partial) explanation 
of the subjects I've been working on, in the context of exactly the 
music related signal processing I think is the honorable thing to work 
on in this context.


First, the sampling error issue in many cases can have an easy solution, 
if you take it that nothing much thus far is perfect, and thus a 
solution is only an iteration with better accuracy: raise the 
sampling frequency, make sure your vertical (quantization error) 
resolution in the intermediate result computations is sufficient, and 
your results are going to become more accurate, pretty much in general.


Second, there are various ways to deal with the making and producing of 
digital signals that improve the possibility of getting the final result 
true to nature. For instance, to get natural e-powers in the sampled 
signal that are pretty correct, use a quality analog equalizer, and 
properly sample the output of it; that will put the right impulse 
response in the sample. And the advantage of doing that all with high 
quality is: the resulting samples can be perfectly reconstructed, or 
seriously and with high quality (== much processor burden) upsampled, 
which is a partial reconstruction. I favor the idea of making a sample 
stream either intended for some sort of DA convertor (most DA convertors 
fall into only a few groups of reconstruction filter kinds, and within a 
group are at least alike), or intended to act as reasonably perfect 
samples, such that up-sampling or a serious attempt at perfect 
reconstruction will not fail. My practical experience is that this all 
can work fine, but of course it becomes a more prevalent issue that all 
kinds of digital filters (and other types of processing, including 
non-linear and harmonics inducing) we've gotten accustomed to use 
aren't (yet?) very perfect.


Third, as it is my perception that this has been going on for a very long 
time in the A-grade (studio) materials, it is reasonable to search for 
certain signal properties that aren't much in the sampling error range, 
and that have musical meaning. Taking a medium short signal convolution 
as a starting point, it is possible to detect, and in some cases to 
free, various obvious and less obvious constantly present imperfections, 
and by dynamically working with certain averaged sub-bands (as I've done 
work with), musical, acoustic, and sampling-error-dissociating 
improvements can be achieved.


T.V.



Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread robert bristow-johnson

On 6/23/14 1:18 AM, Andrew Simper wrote:

On 23 June 2014 12:37, robert bristow-johnsonr...@audioimagination.com  wrote:

Andy and Urs, i have been making consistent and clear points and challenges
and the response is not addressing these squarely.

let's do the Sallen-Key challenge, Andy.  that's pretty concrete.

With respect Robert, I have really tried to address your points,
please go and read all the links I've posted so that you get a better
picture of what I am saying.



no, i'm letting *you* make the case.  i'm not accepting deflection to 
someone or something else.


i'm the one that's being clear and concrete so we can make a 
falsifiable test that can go one way or the other and we can all see 
which way it goes.



you pick the circuit (i suggested the one at wikipedia) so we have a common
reference.  then you pick an R1, R2, C1, C2 (or an f0 and Q, i don't care).
let's leave the DC gain at 0 dB.  then we'll have a common and unambiguous
H(s).

In the LTI case, with enough precision, then any implementation
structure (if done properly) will result in the same output.


no it won't!  digital filters implemented from analog prototypes using, 
say, impulse invariant will *not* result in the same output as those 
implemented from the same analog prototype using bilinear transform.



There is no question here, I have already stated this any number of times.


is that what you said here?:

please show me the derivation for a 2 pole Sallen Key using the
bi-linear transform, then I'll show you the difference between using
trapezoidal integration ... and the bi-linear transform used by engineers ...


you've consistently said that trapezoidal integration results in 
something different than bilinear, and i've been saying consistently 
that, *for* *LTI*, they result in the same transfer function and 
frequency response.  you limit or hedge your claim conceding they're the 
same for the 1-pole case:


In the one pole case they come down to the same thing (there is only
one capacitor), but the bi-linear transform is actually derived from
the time domain trapezoidal rule.

but the fact is they're the same for 2-pole or 3-pole or N-pole filters, 
if you consistently replace each continuous-time integrator with the 
trapezoidal rule, you will get precisely the same filter transfer 
function as what you would get with bilinear transform without 
prewarping any frequency specs.


I'M the one making the tougher, more restrictive claim.  it's a claim 
that can be refuted with a single counter-example.  you, Andy, made an 
offer of a counter example that i want to take up.  but i am not gonna 
do all the work regarding it.  *you* have to spell out the difference 
equations you get from the Sallen-Key circuit and applying the trapezoid 
rule to both integrators (and you get to assert that it will come out 
different than bilinear), and *i* get to show that it will come out to 
be the same transfer function that comes from the same H(s) using 
bilinear (without prewarping).



  If we want to now look at how things differ in the time varying case, which
is what I was talking about,


not all the time.

and i have been consistently hedging about time-varying.  i was clear 
that, without clipping or nonlinearities, and *in* the steady state, 
that, given the same analog filter, the frequency response (and 
transfer function) of the filter you derive replacing integrators with 
the trapezoid rule is precisely the same as the frequency response 
obtained using bilinear transform.


we are all clear that different topologies will have different behavior 
with clipping, quantization noise, and transients when the knob is 
twisted.  i am yet unconvinced that the these issues are not adequately 
disposed of using lattice or normalized-ladder filters.



  then we can continue. Otherwise I have a
feeling that people reading this thread will be getting bored with the
repetition of these posts (I know I am).


fine.  but you have come nowhere close to showing that, for linear 
filters, modeling a Sallen-Key with the trapezoidal rule for integration 
comes up with anything new or novel.


and Urs has come nowhere close to showing that the iterative recursive 
non-linear function evaluation comes out differently than the properly 
set up "linear process" (and, by "linear process", i do *not* mean 
anything LTI, i mean program flow or code that starts with y[n-1], 
y[n-2]... and x[n], x[n-1], x[n-2]... and defines explicitly y[n] from 
those given values).  we get that linear process by unrolling the loop 
a finite number of times, because it was claimed that a finite (and 
small) number of iterations was sufficient.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.




Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Andrew Simper
On 23 June 2014 19:43, Andrew Simper a...@cytomic.com wrote:
 On 23 June 2014 17:11, Ivan Cohen ivan.co...@orosys.fr wrote:
 Hello everybody !

 I may be able to clarify a little the confusion here...

 Thanks Ivan for your great email contribution. I will only reply to
 the one and only correction / clarification to what I have posted
 previously.


 The bilinear transform is a tool allowing people to get from a Laplace
 transfer function H(s) an equivalent Z transfer function H(z). It is based
 on the fact that z is precisely equivalent to exp(sT) with T the sampling
 period. So, s equals 1/T * ln(z), which may be approximated with its first
 Taylor series term, 2/T * (z - 1) / (z + 1).

 Thanks for pointing this out clearly! In derivations I've seen of the
 bi-linear transform they use trapezoidal integration to get there,
 taking the first taylor series expansion of 1/T ln(z) makes more sense
 to distinguish it from trapezoidal integration, even though it results
 in the same thing. I would love to see the original derivation by
 Tustin, in particular where the idea came from, I've searched but
 can't find it. In addition  Wikipedia /  Oppenheim doesn't help here
 either:
 ...where T is the numerical integration step size of the trapezoidal
 rule used in the bilinear transform derivation.[1]
 [1] Oppenheim, Alan (2010). Discrete Time Signal Processing Third
 Edition. Upper Saddle River, NJ: Pearson Higher Education, Inc. p.
 504. ISBN 978-0-13-198842-2.

 Again thanks for your post, I just hope people not only read it, but
 take the time to understand it.


Ok, I'm still stumped here. Can someone please show me a reference to
how the bi-linear transform is created without using trapezoidal
integration?

I found this: http://www.josiahland.com/archives/1178 , but
trapezoidal integration is used in the middle section: (b - a) * (f(a)
+ f(b))/2

Likewise this: http://www.d-filter.ece.uvic.ca/SupMaterials/Slides/DSP-Ch11-S6,7.pdf
also uses trapezoidal integration on frame 6 slide 7.

And again here: http://donalprice.com/dsp/bilinear-transform/
trapezoidal used in the derivation

and so on.

Can anyone help out here?


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Andrew Simper
Here is a quote from one of my first replies to you Robert:

--
 of course a VCF driven by a constantly changing LFO waveform (or its digital
 model) is a different thing.  i was responding to the case where there is an
 otherwise-stable filter connected to a knob.  sometimes the knob gets
 adjusted before the song or the set or gig starts and never gets moved after
 that for the evening.  for filter applications like that, i am not as
 worried myself about the "right" time-varying behavior (whatever "right"
 is).

I agree that if the knobs aren't touched and there is either double
precision or some error correction feedback to make a fixed point
implementation work properly, then a DF1 or other abstract filter
realisation works well and has a low CPU overhead.
--

I have time and time again stated that in the LTI case, with enough
precision, that _any_ implementation form of an abstract filter
derived from the bi-linear transform will be the same as direct
trapezoidal integration. The differences come with finite precision
and time varying behaviour.


 In the LTI case, with enough precision, then any implementation
 structure (if done properly) will result in the same output.


 no it won't!  digital filters implemented from analog prototypes using, say,
 impulse invariant will *not* result in the same output as those implemented
 from the same analog prototype using bilinear transform.

yes it will; you are taking this out of context, we are talking about
bi-linear transformed vs directly trapezoidal integrated. The LTI case
will match perfectly between the two (given infinite processing
precision).

I will reply to the rest once you can agree with this short snippet,
since you seem unable to read what I actually write, and instead make
things up.


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Andrew Simper
Ok, but where does
On 23 June 2014 22:59, robert bristow-johnson r...@audioimagination.com wrote:
 On 6/23/14 10:50 AM, Andrew Simper wrote:

 Ok, I'm still stumped here. Can someone please show me a reference to
 how the bi-linear transform is created without using trapezoidal
 integration?


 not that you wanna hear from me, but the usual derivation in textbooks
 goes something like this:


z  =  e^(sT)

   =  e^(sT/2) / e^(-sT/2)

   =approx   (1 + sT/2)/(1 - sT/2)   (hence the bilinear approximation)


 solving for s gets


 s  =approx  2/T * (z-1)/(z+1)


Ok, but what are the origins of that approximation? Where did it
actually come from. Looking at it in hindsight is fine, but doesn't
tell me anything.


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Ivan Cohen
Not sure about what you mean here, but to get these approximations, you 
use the Taylor series of exp(x) around x = 0 and the series of ln(x) 
around x = 1 :


exp(x) =  sum_(k=0 to N) x^k / k!
exp(x) = 1 + x + x^2/2! + x^3/3! + ...

ln(x) = 2 * sum_(k=0 to N) 1 / (2k+1) * ((x - 1) / (x + 1))^(2k+1)
ln(x) = 2 ( (x - 1)/(x+1) + 1/3 ((x - 1)/(x + 1))^3 + ... )

So, if we do the normal substitution, we can write this :

z = exp (sT)
z = exp (sT/2) * exp (sT/2) = exp (sT/2) / exp (-sT/2)
z = (1 + sT/2 + (sT/2)² / 2... ) / (1 - sT/2 + (-sT/2)² / 2 + ...)

With the inverse mapping :

s = 1/T * ln(z)
s = 2/T * ( (z - 1)/(z+1) + 1/3 (z-1)^3 /(z+1)^3 + ...)

At the first order you get the approximations z = (1 + sT/2) / (1 - 
sT/2) or s = 2/T (z -1)/(z+1)
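As a quick numeric sanity check (my addition; the values of x and w are arbitrary), the first-order approximation (1 + x/2)/(1 - x/2) tracks exp(x) with an error of roughly x^3/12, and on the imaginary axis it has magnitude exactly 1 - which is why the bilinear transform maps the jw axis onto the unit circle:

```python
import math

def bilin(x):
    """First-order (1,1) Pade approximant of exp(x)."""
    return (1 + x / 2) / (1 - x / 2)

# truncation error behaves like x^3/12 for small real x
x = 0.01
err = abs(math.exp(x) - bilin(x))
print(err)  # on the order of 1e-7

# on the imaginary axis the approximant has unit magnitude, so the
# bilinear map sends s = jw exactly onto the unit circle |z| = 1
for w in (0.1, 1.0, 10.0):
    assert abs(abs(bilin(1j * w)) - 1.0) < 1e-12
```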



Ivan COHEN
http://musicalentropy.wordpress.com


Le 23/06/2014 17:58, Andrew Simper a écrit :

Ok, but where does
On 23 June 2014 22:59, robert bristow-johnson r...@audioimagination.com wrote:

On 6/23/14 10:50 AM, Andrew Simper wrote:

Ok, I'm still stumped here. Can someone please show me a reference to
how the bi-linear transform is created without using trapezoidal
integration?


not that you wanna hear from me, but the usual derivation in textbooks
goes something like this:


z  =  e^(sT)

   =  e^(sT/2) / e^(-sT/2)

   =approx   (1 + sT/2)/(1 - sT/2)   (hence the bilinear approximation)


solving for s gets


 s  =approx  2/T * (z-1)/(z+1)


Ok, but what are the origins of that approximation? Where did it
actually come from. Looking at it in hindsight is fine, but doesn't
tell me anything.


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread robert bristow-johnson

On 6/23/14 11:58 AM, Andrew Simper wrote:

Ok, but where does
On 23 June 2014 22:59, robert bristow-johnsonr...@audioimagination.com  wrote:

On 6/23/14 10:50 AM, Andrew Simper wrote:

Ok, I'm still stumped here. Can someone please show me a reference to
how the bi-linear transform is created without using trapezoidal
integration?


not that you wanna hear from me, but the usual derivation in textbooks
goes something like this:


z  =  e^(sT)

   =  e^(sT/2) / e^(-sT/2)

   =approx   (1 + sT/2)/(1 - sT/2)   (hence the bilinear approximation)


solving for s gets


 s  =approx  2/T * (z-1)/(z+1)


Ok, but what are the origins of that approximation? Where did it
actually come from. Looking at it in hindsight is fine, but doesn't
tell me anything.


i dunno where the insight came from to split up e^(sT) into half of it 
(the geometric half) in the numerator and the reciprocal of the other half 
in the denominator.  i dunno who thought of that.  when x is small, it's 
commonly known that e^x =approx 1 + x .  (they're the first two terms of 
the Taylor series.)


a similar question could be asked about various implementation of 
different Riemann summations to approximate the Riemann integral.  who 
first decided to split the difference between


             b                        N-1
    integral{ f(x) dx}  =   D * SUM { f(a + n*D) }
             a                        n=0

and

             b                         N
    integral{ f(x) dx}  =   D * SUM { f(a + n*D) }
             a                        n=1


where N*D = b-a in both cases?


splitting the difference between the two is the trapezoid rule of 
integration.
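A small numerical illustration of splitting the difference (the choice f(x) = x^2 on [0,1] is mine, not from the post): the average of the two Riemann sums is exactly the trapezoid rule, and it is far more accurate than either sum alone.

```python
def f(x):
    return x * x  # assumed test integrand; exact integral on [0,1] is 1/3

a, b, N = 0.0, 1.0, 100
D = (b - a) / N

left  = D * sum(f(a + n * D) for n in range(0, N))      # n = 0 .. N-1
right = D * sum(f(a + n * D) for n in range(1, N + 1))  # n = 1 .. N
trap  = 0.5 * (left + right)                            # split the difference

# the trapezoid rule written directly, for comparison
trap2 = D * (0.5 * f(a) + sum(f(a + n * D) for n in range(1, N)) + 0.5 * f(b))

exact = 1.0 / 3.0
print(abs(left - exact), abs(right - exact), abs(trap - exact))
```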


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Theo Verelst
Let me express my agreement with the nice choice of subject: the 
simulation of tube amps. Of course during and before the advent of solid 
state systems, some people may have laughed about the idea alone 
(because tubes sound so annoying after a while), but in the context of 
guitars, it's usually nice.


Of course there's the question as to what is nice to simulate about 
them, such as it is often suggested that a tube stage will mostly add 
second harmonic distortion, but of course certain transients, multi-pole 
filter and multiplicative effects are there as well. Probably for a 
guitar, the added harmonics, which are a function of the tones and 
volume you play, can be best generated by a mild (non-resonant) band 
pass with at least a true high-pass (zero DC component remaining), and 
possibly you want to average the result, maybe with a potent, 
multi-coupled impulse cabinet simulation.


About delay: clearly, it's important to acknowledge the existence of 
delay in just about every digital system, while it is possible to get 
rid of it altogether (except for the speed of light) like I've worked on 
with digitally controlled analog parts, in a full digital system and 
current clock rates: it's always there. The character of the delay has 
been mentioned, sure that's important too, and it's not a bad idea to 
think that in most current practical cases, there's a certain form of 
limited reconstruction filter in the DA converter, that certainly has a 
character (try feeding the sound round and round a few times, you'll 
get a good idea of that character). To be sure to know the difference 
between a general delay and a digital one, use a speaker at big 
distance, or a tape echo. The constancy of the delay is another factor 
that is important to players and singers, with a possible exception of 
naturally sounding equalization (which also introduces some phase 
delay): the more constant the delay is, for my personal preference to 
the accuracy of a mid frequency wave length easily, the better. Finally, 
there's the digital hole in the sound you get when a (reconstruction 
error containing) digital system amplifies a microphone and is heard 
back live: probably it can be instantaneously spotted for its 
unnaturalness, compared with an analog system, which is something to 
think about.


Concerning the sources having created the advent of the H(z) things, 
from what I recall from my university text-books, the main idea was 
simply the congruence of the H(z) with the H(s) or H(1/(2 PI jW)). Of 
course the undergrad EE student will have the understanding that the 
digital signal will be a sequence of impulses, which in principle can be 
reconstructed into an analog signal, and of course we also know the 
idea of a z^-1 delay is in principle an infinite order system, so that 
it's a bit different a transform than a perfect digital form of the Linear 
System Theory and its very complicated complex plane analysis.


T.



Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Andrew Simper
-- cytomic -- sound music software --


On 23 June 2014 21:58, robert bristow-johnson r...@audioimagination.com wrote:
 On 6/23/14 12:43 AM, Andrew Simper wrote:

 On 23 June 2014 11:25, robert bristow-johnsonr...@audioimagination.com
 wrote:

 On 6/22/14 10:48 PM, Andrew Simper wrote:

 I think the important thing to note here as well is the phase.
 Trapezoidal keeps the phase and amplitude correct at dc, cutoff, and
 nyquist.


 Nyquist?  are you sure about that?

 Yes,

 PS: I was agreeing with you by saying Yes, and thanking you for
 spotting the mistake. It's called English,


 it *is* English.  i responded serially.  Yes to the question asked meant
 that you *were* sure about it.  later i see that we agreed that the
 discretized filter using trapezoidal integration behaves at Nyquist the same
 way the original continuous-time filter behaves at infinity.  funny thing
 is, so does the discretized filter using bilinear transform.  might one
 suspect they have something in common?


   and it is a bit vague at
 times, but please try and read what I'm saying not just take random
 word snippets and construct your own interpretation of them so you can
 vent.


 it wasn't random.  i asked a clear and challenging question and you answered
 it with the opposite binary word than the answer you evidently intended.


ok, let's try again:

 I think the important thing to note here as well is the phase.
 Trapezoidal keeps the phase and amplitude correct at dc, cutoff, and
 nyquist.

 Nyquist?  are you sure about that?

Ahh yes, thanks for spotting that, I am so used to having nyquist warped
to infinity that I use them interchangeably in my mind. What I meant
was: at dc, cutoff, and infinity (which is nyquist in the warped
digital case).
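The warping being described can be checked directly. A sketch (mine; T = 1 assumed): substituting z = e^{jw} into the bilinear substitution s = (2/T)(z - 1)/(z + 1) gives s = j(2/T)tan(w/2), which blows up as w approaches pi (Nyquist), i.e. Nyquist in the digital filter corresponds to infinity in the analog prototype.

```python
import cmath
import math

T = 1.0  # assumed sample period

def analog_s(w):
    """Image of digital frequency w (rad/sample) under the bilinear substitution."""
    z = cmath.exp(1j * w)
    return (2.0 / T) * (z - 1.0) / (z + 1.0)

# the image is purely imaginary: s = j*(2/T)*tan(w/2)
for w in (0.1, 1.0, 2.0, 3.0):
    s = analog_s(w)
    assert abs(s.real) < 1e-12
    assert abs(s.imag - (2.0 / T) * math.tan(w / 2.0)) < 1e-9

# approaching Nyquist (w -> pi) the analog frequency diverges
print(analog_s(3.14).imag)  # very large
```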

Do you get it now? This is a fairly standard turn of phrase that you
decided to jump on and interpret out of context.


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread robert bristow-johnson

On 6/23/14 3:39 PM, Bogac Topaktas wrote:

On Mon, June 23, 2014 7:37 am, robert bristow-johnson wrote:

the other thing Urs brought up for discussion is an iterative and
recursive process that converges on a result value, given an input.  i am
saying that this can be rolled out into a non-recursive equivalent, if the
number of iterations needed for convergence is finite.  this is a totally
different issue.

For most circuits involving non-linear feedback loops,
number of iterations needed for convergence depends on
signal level (while required accuracy kept constant).

For instance, in Tube Screamer Overdrive circuit, number
of iterations decreases when the signal gets closer to rails.



it gets better behaved as the signal swings close to the rails?  that's 
an odd non-linear behavior.  (perhaps not so odd; the zero-crossing 
distortion you get from Class B and AB amps has a similar issue.)



Of course, it's possible to derive alternative equations to
reach a fixed-step approximation (which -as you pointed out-
is much more suitable for real-time operation).

I totally agree with Urs. Today's market is all about authenticity,
efficiency is just a plus.


when have i advocated inauthenticity?

and how authentic might this sound when the signal swings to a high 
level and the thing clicks or blaps or hiccups on you?


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Ethan Duni
rbj
Urs
Regarding the iterative method, unrolling like you did

   y0 = y[n-1]
  y1 = g * ( x[n] - tanh( y0 ) ) + s
  y2 = g * ( x[n] - tanh( y1 ) ) + s
 y3 = g * ( x[n] - tanh( y2 ) ) + s
  y[n] = y3
 is *not* what I described in general.

it *is* precisely equivalent to the example you were describing
with one more iteration than you were saying was necessary.

No, the iterations you have written out there are fixed-point iterations
(simply applying the function repeatedly and hoping it converges). That's a
very simple, special case of the general approach the Urs suggested. It
will work under certain conditions, but even then isn't very efficient. Urs
was pretty clear about using Newton or bisection for the root finding, not
fixed-point iteration. For Newton, each iteration would look like:

y1 = y0 - f(y0)/f'(y0)

where f(y) = g*(x[n] - tanh(y)) + s - y, and f'(y) is its derivative wrt y.
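For concreteness, here is a sketch (the values of g, x and s are arbitrary choices of mine) that solves the implicit one-pole equation y = g*(x - tanh(y)) + s both by plain fixed-point iteration and by the Newton step above; Newton typically converges in a handful of iterations where fixed-point needs dozens.

```python
import math

g, x, s = 0.5, 0.8, 0.1  # arbitrary example values

def f(y):
    """Residual whose root is the implicit solution y = g*(x - tanh(y)) + s."""
    return g * (x - math.tanh(y)) + s - y

def fprime(y):
    return -g * (1.0 - math.tanh(y) ** 2) - 1.0

def fixed_point(y=0.0, tol=1e-12, max_iter=200):
    # apply the map repeatedly; converges here because |g * sech^2(y)| < 1
    for n in range(1, max_iter + 1):
        y_new = g * (x - math.tanh(y)) + s
        if abs(y_new - y) < tol:
            return y_new, n
        y = y_new
    raise RuntimeError("fixed-point iteration did not converge")

def newton(y=0.0, tol=1e-12, max_iter=50):
    for n in range(1, max_iter + 1):
        y_new = y - f(y) / fprime(y)
        if abs(y_new - y) < tol:
            return y_new, n
        y = y_new
    raise RuntimeError("Newton iteration did not converge")

y_fp, n_fp = fixed_point()
y_nt, n_nt = newton()
print(n_fp, n_nt)  # Newton needs far fewer iterations
```

Note the fixed-point version only converges because this particular map is a contraction; for stronger nonlinearity or feedback gain it can fail outright, which is why Newton or bisection is preferred.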

if there is a solid, fixed, and finite maximum number of iterations
needed,
that the iterated process can be rolled out into some linear code (code
with a beginning and an end).

I think the term you want here is non-iterative or some such, rather than
linear. I was getting confused by the usage of linear, thinking you
were arguing that the end result is a linear function or something like
that.

computationally more efficient, like table-lookup with a very big table.

Well, that's only more computationally efficient if the computational
resources you care about are processor cycles and not memory. In some
contexts, the priorities are the other way around.

or in a manner that allows some theoretical understanding of the nature
  of the function to allow one to determine what oversampling ratio is
  needed to keep aliasing at bay.

That's a good point. The trouble with these implicit functions is that it's
hard to see what you're dealing with directly. Probably we've all at some
point had the experience of trying to model some circuit in PSPICE and
needing to kind of arbitrarily crank up the step size resolution until we
get results that are stable and make sense. And maybe that's fine for doing
off-line circuit simulations as an undergrad student, but for applying this
approach to audio in a systematic way it does seem to leave a big question
mark over key issues such as aliasing/oversampling.

Moreover, this is kind of a neat way of addressing the issue. I.e., many
times the underlying implicit functions may not even admit any analytical
parameterization, which makes them a pain. But if we can identify iterative
approximations that we are convinced are good enough, we can then leverage
those approximations to parameterize the functions and get ahold of them,
via loop unrolling as rbj describes. The results are likely to be nasty,
but presumably one could always turn to Mathematica or the like to assist
there.

But I think the biggest practical issue there is what Andy (I think it
was?) described earlier, that once you start coupling together various
non-linearities in non-trivial circuits, the dimensionality of these
quantities blows up and you end up with a huge mess.

E


On Mon, Jun 23, 2014 at 10:18 AM, robert bristow-johnson 
r...@audioimagination.com wrote:


 hi Urs,

 On 6/23/14 11:36 AM, Urs Heckmann wrote:

 On 23.06.2014, at 16:37, robert bristow-johnsonr...@audioimagination.com
  wrote:

  because it was claimed that a finite (and small) number of iterations
 was sufficient.

 Well, to be precise, all I claimed was an *average* of 2 iterations for a
 given purpose, and with given means to optimise (e.g. vector registers). I
 did so to underline that an implementation for real time use is possible. I
 had no intention of saying that any finite (and small) number of iterations
 was sufficient in any arbitrary case and condition - I can only speak about
 the models that we have implemented and observed.


 okay, so the above consequentially brings us back to the original issue:

  On 6/22/14 1:20 PM, Urs Heckmann wrote:

 On 22.06.2014, at 19:04, robert bristow-johnsonr...@audioimagination.com
  wrote:

 On 6/22/14 7:11 AM, Urs Heckmann wrote:

  2. Get the computer to crunch numbers by iteratively predicting,
 evaluating and refining values using the actual non-linear equations until
 a solution is found.

 perhaps in analysis.  i would hate to see such iterative processing in
 sample processing code.  (it's also one reason i stay away from terminology
 like zero-delay feedback in a discrete-time system.)

 We're doing this a lot. It shifts the problem from implementation to
 optimisiation.


 now, i know that real-time algorithms that run native have to deal with
 the I/O latency issues of the Mac or PC, and i am not sure (nor am
 concerned) about how you guys deal with that and the operating system.  i
 know Apple deals with it with audio units, and i seem to remember that this
 was a fright with the PC and the Windoze OS.  but there is no physical
 reason it can't be dealt with in either platform, i 

Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Urs Heckmann

On 23.06.2014, at 19:18, robert bristow-johnson r...@audioimagination.com 
wrote:

 it *is* precisely equivalent to the example you were describing with one more 
 iteration than you were saying was necessary.

Now I'm really angry I wasted so much time. An example is just that, an 
example. I deliberately kept it very simple to show the principle. In Germany 
we have a saying that roughly translates to pass someone a finger and he'll 
grab the whole hand. That's what you did. Good luck storing a one pole filter 
in a 3 dimensional look up table. You think we haven't thought of that in five 
years of researching that field? Even if you succeed for a measly non-linear 
one pole filter, try look up tables for a two pole filter. Or one for a six 
pole filter, which is the minimum if you want to sound close to that 
transistor ladder filter and which makes, uhm, something like 18 dimensions? 
THAT is a scary concept.

Also, it's a good idea that you bring up hardware synths with their economic 
limits. Today's digital hardware synths are not even close to the sound quality 
of state of the art software plug-ins. I cringe when I have to listen to those 
levels of aliasing. Not all plug-ins are good, but some are extremely good. 
Most digital hardware synths are meh in comparison. I guess that's why Korg has 
even reissued their MS20 and have plans for more (Odyssee?) - fully analogue 
synths. That's currently the only way to get something decent in hardware. 
Proper digital models seem to only make it into software plug-ins.

Try our analogue modeling synth plug-in. It works. If your computer is too 
weak, we spread the load across multiple cores (did you know that parallelism 
and vectorization is happening in the CPU world? It's playing in our hands, 
really). If your CPU still too weak, use naive methods for realtime play 
(draft mode), then render to disc in best quality mode (divine). Or try 
Andy's filter plug-in. Or Native Instruments' Monark synth. This is reality. It 
isn't a made up illformed concept we're trying too seek anyone's academic 
blessing for. It's there, out for anyone to try and the sonic properties speak 
for themselves.

Then try to achieve that sound and behaviour with anything based on 2+ order DF 
filters. Why has no-one been able to do this before when all those biquads had 
been beeping around for decades? If someone can do it with DF filters, you 
might have your point back (I've tried. Alas, I wasted years trying hacks to 
make biquads behave well and sound good enough for my purpose). Otherwise I 
think you just don't want to see the woods for the trees, and there's no use 
for me in trying to prove anything. I'm not in it for academic fame (my 
education isn't sufficient anyway, I don't talk in the right codes and my 
vocabulary is limited). I'm in it because I seek challenge, and I would like to 
discuss novel ideas based on *our* status quo. People who keep holding the 
candle to DF filters however have nothing new to offer.

In fact we have moved further long time ago. Those models of analogue filters 
we're talking about are nearly 3 years old, that's a long time in plug-in 
speak, and since we know how to do it, we have lost interest in doing it 
ourselves. Our *interns* work on new analogue models, for sake of completing 
the collection. Maybe one day we'll have the sound of all our vintage analogue 
synthesizers preserved in code. For future generations and all that. That is 
that; the new challenge, however, is finding models that cannot be created with 
analogue parts at reasonable cost, or maybe not even at all. The new challenge 
is all about finding algorithms that exceeds the properties of analogue 
circuitry in ways we want. Try our Bazille plug-in which is currently in public 
beta testing. The filters are not based on causal models. Nothing sounds and 
behaves like this, and it's absolutely musical. People love it.

My one most important question was never answered though: What advantage does a 
DF-anything filter have over a direct implementation when modeling a circuit in 
software? My vague feeling: There is no answer.

I had to get this off my chest. I'm off for a drink.

- Urs


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Andrew Simper
On 24 June 2014 06:37, Urs Heckmann u...@u-he.com wrote:

 On 23.06.2014, at 19:18, robert bristow-johnson r...@audioimagination.com 
 wrote:

 it *is* precisely equivalent to the example you were describing with one 
 more iteration than you were saying was necessary.

 Now I'm really angry I wasted so much time. An example is just that, an 
 example. I deliberately kept it very simple to show the principle. In 
 Germany we have a saying that roughly translates to pass someone a finger 
 and he'll grab the whole hand. That's what you did. Good luck storing a one 
 pole filter in a 3 dimensional look up table. You think we haven't thought of 
 that in five years of researching that field? Even if you succeed for a 
 measly non-linear one pole filter, try look up tables for a two pole filter. 
 Or one for a six pole filter, which is the minimum if you want to sound close 
 to that transistor ladder filter and which makes, uhm, something like 18 
 dimensions? THAT is a scary concept.

Please keep in mind Urs that RBJ is being a grumpy old engineer that
is resistant to change just for the sake of it. He has a proven track
record in taking anything you say and do his grumpy best to
misinterpret what you have written just to be difficult, since
stalling means he doesn't have to learn anything new.


 My one most important question was never answered though: What advantage does 
 a DF-anything filter have over a direct implementation when modeling a 
 circuit in software? My vague feeling: There is no answer.

 I had to get this off my chest. I'm off for a drink.

 - Urs

ATTENTION RBJ: Before you reply to this I think what Urs left out is
HE IS NOT TALKING ABOUT THE LTI CASE. In the LTI case (ie static
filter that never changes from now to the end of time) and with
infinite precision then there are loads of equivalent methods that
will produce identical results, e.g. if you use H(s) and the bilinear
transform with an implementation structure of DF1/2/transposed/digital
wave/lattice/... or you use direct trapezoidal integration.


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
 um, it's a semantic thing that i just wrote about in response to Urs.  i
 don't use the term myself, but i am defining nodal analysis the way i see
 virtually all other lit doing it.  when spice is modeling non-linear
 circuits, it is using Kirchoff's current law on every node, Kirchoff's
 voltage law on every loop, and the explicit volt-amp characteristic on every
 device connected between those nodes.  those are the physical axioms.  if
 you wanna call that nodal analysis, fine by me (but i don't use that term,
 myself).

 but if you're using Spice to give you a quickie frequency response to an
 ideal op-amp circuit with resistors and capacitors, it's doing what we call
 the node-voltage method (in the s-domain) which **is** limited to
 independent and dependent sources and impedances.  and it solves a system of
 linear equations.  you need to count each of these parts between nodes as
 something that obeys the s-domain version of Ohm's law.  that's the only way
 you can write the linear equations and solve them to get a meaningful and
 consistent frequency response which is a function only of the circuit with
 respect to its input and output terminals.  that frequency response is not a
 function of the applied signal or anything outside of the circuit.


Phew! And I thought I would have to pull all my products from sale
because they didn't work! ;)

Another slight possible confusion is that most circuit simulation
isn't nodal analysis but modified nodal analysis which is just
done so they can stick everything into a matrix form.
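As a minimal illustration of what goes into such a matrix (my toy example, not from the post: a purely resistive ladder Vin - R1 - v1 - R2 - v2 - R3 - ground), writing KCL at each node gives a linear system G*v = i that a simulator would assemble and solve:

```python
# KCL at v1: (Vin - v1)/R1 = (v1 - v2)/R2
# KCL at v2: (v1 - v2)/R2 = v2/R3
# i.e.  [1/R1 + 1/R2     -1/R2      ] [v1]   [Vin/R1]
#       [   -1/R2      1/R2 + 1/R3  ] [v2] = [  0   ]
Vin, R1, R2, R3 = 3.0, 1e3, 1e3, 1e3  # assumed component values

a11 = 1.0 / R1 + 1.0 / R2
a12 = a21 = -1.0 / R2
a22 = 1.0 / R2 + 1.0 / R3
b1, b2 = Vin / R1, 0.0

# solve the 2x2 system by Cramer's rule
det = a11 * a22 - a12 * a21
v1 = (b1 * a22 - a12 * b2) / det
v2 = (a11 * b2 - a21 * b1) / det
print(v1, v2)  # ~2.0 and ~1.0 volts for three equal resistors
```

Modified nodal analysis extends the same idea with extra rows/columns for branch currents (voltage sources, inductors) so that everything still fits one matrix.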



 i'm no more an expert on DF-anything.  i am just saying that, when it's
 LTI (and i am referring to some hand-written papers i've seen of yours a
 couple years ago) it's gonna boil down to an H(z).  then there is, as far as
 the input-output behavior and frequency response goes, an equivalence of the
 various topologies that get the same H(z).  the different topologies have
 different behaviors regarding clipping and quantization noise and
 time-varying behavior (less worried about the occasional knob twist than for
 the filter modulated with a tremelo signal).  it is *for* *those* *reasons*
 i have an interest in the different topologies, but not for calculating
 coefficients to get a desired frequency response.

LTI = no changes in parameters, so sure, then it comes down to the
numerical properties of the realisation


 You just store a single state per capacitor actually, please read these
 for
 some examples: www.cytomic.com/technical-papers



 i've been there a while ago.  are there new papers now?

I have updated a few of them, and added a breakdown showing how a one
pole active / passive low pass is solved and leads to an identical
implementation when using nodal analysis which has been around for
ages and compare this to some newly invented methods.



 I would guess that for guitar circuits if the cutoff frequencies are
 around
 80 hz at the lowest then after two seconds of not moving any knobs the
 that
 any linear filter that hasn't blown up would settle to sound the same
 given
 infinite processing resolution, but I have not done research into this so
 I
 would not bank on it, especially when direct circuit simulation is so
 straight forward, and then the non-linear case can also be handled with
 identical machinery at the cost of more cpu.


 1/(2 seconds) is about 1/2 Hz.  two orders of magnitude between 0.5 Hz and
 80.  i would be surprised, outside of a chaotic system (which is
 non-linear on steroids), if there was any transient left after even a half
 second.

If time varying behaviour is important then even 1/2 a second is too long. It
depends on the specific project as to what is an appropriate tradeoff
of accuracy and cpu use. For a non-modulatable linear tone stack I
would say a double precision 3 pole DF1 is a good low cpu solution and
but there should be decent bandwidth limiting of automation to prevent
problems. The thing with music is that fast automation can sound
really cool too, so if it's only a slight cpu overhead then I think it
is worth supporting that.


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread socialmedia
Usually all analog can be generalized and approximated with the 
simplest means.


These designs are usually simple to begin with, as less components, 
meant more profit.


Sometimes far below audio-grade components were used, for instance in 
monophonic synths, or feedback paths or similar.


On vintage stuff, that typically was a wide knee, adding saturation to 
the monophonic voice. And probably a higher grade component if summed 
in a polyphonic synth. On synths a gradually lower clipping before each 
filter-stage will give a good approximation, while avoiding too much 
coloration.


Valve has been a great discussion over many years, but as a musician, 
the most complex path you will ever need for sufficiently saturated 
guitar sound is really EQ-compressor-softclip-EQ. Forget about the 
nonlinearities of the components. They are not necessary to think about. 
That is simply exaggeration, and seeing ghosts where there are none. And 
add delay and reverb on that as well, and play back on high-end speakers, 
to really outperform any old tube amp. :)


The softclip needs to sound good though. And here again, no exaggeration 
or ubertheory is necessary. The simplest 3rd order knee softclip will 
usually do, and then you can also add some bias to that, for guitar. And 
of course a threshold for knee depth. Usually deeper for more vintage sound.
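One common realization of such a knee (a standard cubic soft clipper, assumed here for illustration; not necessarily what Ove's plugin actually uses) is f(x) = x - x^3/3 for |x| <= 1, clamped to +/-2/3 outside, which is continuous in value and slope at the knee:

```python
def softclip(x):
    """Cubic soft clipper (assumed example): unity slope at 0, zero slope at +/-1."""
    if x <= -1.0:
        return -2.0 / 3.0
    if x >= 1.0:
        return 2.0 / 3.0
    return x - x ** 3 / 3.0

# small signals pass nearly unchanged, large ones saturate smoothly
print(softclip(0.1), softclip(0.9), softclip(5.0))
```

A threshold parameter can be added by scaling the input before the knee and the output after it.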


I already did this in my softclip plugin, which is a tweaked version 
of such an algorithm, for the most transparent softclip. If you go to the 
parameters page, you can tweak it as well, for deeper knee, etc.


Peace Be With You,
Ove Karlsen
Producer, researcher, engineer,
Monotheo
www.monotheo.biz


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Urs Heckmann
Dear Robert,

On 22.06.2014, at 04:19, robert bristow-johnson r...@audioimagination.com 
wrote:

 it's possible that this is only a semantic issue.

Thanks for clearing this up. It's indeed a semantic issue (use of the term 
nodal analysis), which then leads to further misunderstandings.

What we do is, for each node we write down equations that may indeed contain 
non-linear terms. We end up with equations that look like this:

Vout1 = g * ( tanh( Vin - feedback * Vout2 ) - tanh( Vout1 ) ) + iceq1
Vout2 = g * ( tanh( Vout1 ) - tanh( Vout2 ) ) + iceq2

Even in this simplified case it's clearly impossible to solve these linearly, 
be it in a matrix or brooding over one big sausage of an equation for each 
unknown value.

What guys like us have done for a decade or two is what Hal Chamberlin has 
written in his famous book and what Smith/Stilson described in their equally 
famous paper: We have simply added artificial delay elements. We did so in 
order to do two things:

1. Drag the computation of the non-linear term out of the equation so that both 
sides can be written as linearly dependent

2. Make the computation of each equation independent of each other, so they 
need not be solved simultaneously

Sometimes only one of those two methods is necessary, sometimes they are the 
same, and often adding a delay element is identical with Euler's method (i.e. 
Chamberlin's SVF implementation). In case of above equations, one would end up 
with following:

Vout1 = g * ( tanh( Vin - feedback * Vout2z1 ) - tanh( Vout1z1 ) ) + iceq1
Vout2 = g * ( tanh( Vout1 ) - tanh( Vout2z1 ) ) + iceq2

I've marked delayed elements with a z1 suffix. By doing so the two equations 
can be solved in sequence and no complex math or iterative computation is 
required. However, it doesn't really sound right. Those delay elements create a 
plethora of problems.
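A sketch of how the artificially added delays make the update explicit (my reading of the equations above; the iceq capacitor-state updates are omitted since they are not given): with Vout1z1 and Vout2z1 taken from the previous sample, each equation can be evaluated directly in sequence, with no simultaneous solve.

```python
import math

g, feedback = 0.1, 0.5  # arbitrary example values
iceq1 = iceq2 = 0.0     # capacitor equivalent sources, held fixed in this sketch
v1z1 = v2z1 = 0.0       # the artificially added unit-delay states

def step(vin, v1z1, v2z1):
    # the z1 (delayed) values break the implicit coupling, so each
    # equation is evaluated directly, left to right, in sequence
    v1 = g * (math.tanh(vin - feedback * v2z1) - math.tanh(v1z1)) + iceq1
    v2 = g * (math.tanh(v1) - math.tanh(v2z1)) + iceq2
    return v1, v2

v1z1, v2z1 = step(1.0, v1z1, v2z1)
print(v1z1, v2z1)
```

This is exactly the cheap version being criticized: no iteration needed, but the extra sample of delay in the feedback paths is what causes the problems described.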

When I talked about the other method, I was talking about solving the above 
set of equations simultaneously and without any added delay element. Before we 
run into another misunderstanding, like Andy pointed out, the iceq values that 
represent the current source equivalents of the capacitors *are* delay elements. 
They are necessary for integration of the time step, but no matter what method 
of integration we use, we always end up with the same set of equations to solve 
for the actual step.

Two methods appear to be common for solving these equations simultaneously in 
realtime applications, without artificially added delay elements:

1. Treat the non-linear elements as piecewise linear and correct the result in 
iterative steps.

2. Get the computer to crunch numbers by iteratively predicting, evaluating and 
refining values using the actual non-linear equations until a solution is found.

Both methods work well and make use of root finding algorithms such as Newton's 
method. Both methods buy numerical accuracy at the expense of computational 
effort over the methods using extra delay elements. Nevertheless, computers 
have recently become fast enough to make these methods viable for musical 
applications. Hence our enthusiasm!

The gist is, being able to eliminate those delays was such a nice step forward 
in making good sounding synthesizers, someone had to coin a term for it. That 
term, as insufficient as it is, became zero delay feedback filters. It means 
what I said before, no unit delays were artificially added on top of the delays 
required for integration of the time step. Nowadays we're certainly using more 
differentiated terms as well, but that's the one that's most widely used (and 
abused) among our crowd. Please accept my apologies for popularising this term 
all too light-heartedly - I guess it was never made clear enough that we meant 
to apply it *only* to filters based on nodal analysis as we define it, like I 
described above.

On 22.06.2014, at 04:19, robert bristow-johnson r...@audioimagination.com 
wrote:

 only if the thing goes chaotic (which is wildly non-linear with a helluva 
 lotta feedback). otherwise, there is no reason a non-chaotic non-linear 
 circuit can't get to a steady state if the input is stationary.  steady 
 state means the transients have died down to negligible levels.  like a 
 second or two after twisting the knob.

Yep, wildly non-linear with a helluva lotta feedback is exactly what we're 
dealing with - above equations expanded to 6 poles (4 LP + 2 HP in the feedback 
path) with roughly 11 tanh-ish terms and some extra dirt in the code sound 
marvellous when feedback is cranked up far beyond 4 and the self oscillation 
clashes with the input. It's ear popping.

Cheers,

- Urs

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Theo Verelst


And once more, still: taking a bunch of difference equations (and some 
of those were built up in a respectable way, not as a random opportunist 
algorithm), and taking their behavior to be exactly the same as a 
sampled analog system requires a little rethinking on behalf of a lot of 
people talking here.


First: is there such a thing as perfect equivalence between some analog 
system (being linear and time-invariant or not) and a digital system 
such that in some way, the output of both for the same input signals is 
actually the same, as in an algebraic manipulation system with infinite 
accuracy?


And is this even possible ? In a certain sense, with frequency limited 
signals, and using perfect reconstruction, or talking about the samples 
being taken perfectly equi-distant and perfectly instantaneous and being 
compared without vertical quantization. I could think of signals and 
systems that should work, but do they ? Is a slur of digital 
multiplications creating a power sequence actually the same as perfect 
samples of a decaying e-power ? It cannot be in a certain 
interpretation: the perfect e-powers which decay and multiply many 
functions in analog systems ARE NOT FREQUENCY LIMITED. That's a problem. 
Also, you'll want your digital system to be shift-invariant in every way 
to qualify as a perfect equivalent analog system, which isn't usually 
even discussed: the response of the digital system to a (possibly 
infinite, for sake of theoretical considerations) input signal shifted 
back or forth by half a sample for instance, should be the exact same 
response as the un-shifted response, also shifted by the same amount. Go 
figure...


Finally, in EE, as RBJ clearly stated, there are many commonalities that 
work for realistic electronics circuits, but there are many of those, 
and the numeric conditioning of many interesting circuits (I'm network 
theory educated as you can tell) is not great to say the least, and that 
can be so for equalizers, too, and the tools for more advanced 
electrical analysis aren't sticking with audio rate sampling and simple 
approximations, and include error analysis.


So please get off my lawn with too much sh*t.

T.



Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Urs Heckmann

On 22.06.2014, at 19:04, robert bristow-johnson r...@audioimagination.com 
wrote:

 i don't think i agree with the following claim, Urs,
 
 ... but no matter what method of integration we use, we always end up with 
 the same set of equations to solve for the actual step.
 
 different methods of performing numerical integration result in different 
 equations that define y[n] from y[n-1] and earlier values.  am i missing 
 something?  the statement appears, at face value, to be mistaken.

That may be. If we talk about Euler vs. trapezoidal integration, it's true 
though. Possibly for other methods too, should they prove viable.

 
 Two methods occur to be common to solve these equations simultaneously in 
 realtime applications, without artificially added delay elements:
 
 1. Treat the non-linear elements as piecewise linear and correct the result 
 in iterative steps.
 
 but, because of the breakpoints, you can't use LTI analysis.  only if you're 
 sure it never crosses a breakpoint can you linearize the analysis.

Well, I'm not an expert on this method myself, I just know that some such 
thing exists. Maybe Andy can chime in.

 
 2. Get the computer to crunch numbers by iteratively predicting, evaluating 
 and refining values using the actual non-linear equations until a solution 
 is found.
 
 perhaps in analysis.  i would hate to see such iterative processing in sample 
 processing code.  (it's also one reason i stay away from terminology like 
 zero-delay feedback in a discrete-time system.)

We're doing this a lot. It shifts the problem from implementation to 
optimisation. With proper analysis of the system behaviour, for any kind of 
circuit we've done (ladder, SVF, Sallen-Key) one can choose initial estimates 
that converge with results after an average of just over 2 iterations. 
Therefore it's not even all that expensive, and oversampling makes the initial 
guesses easier.

If you're interested, our Diva synthesizer plug-in lets one compare naive 
implementations vs. iterative solvers:

http://www.u-he.com/diva

- Urs


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread robert bristow-johnson

On 6/22/14 1:20 PM, Urs Heckmann wrote:

On 22.06.2014, at 19:04, robert bristow-johnsonr...@audioimagination.com  
wrote:


2. Get the computer to crunch numbers by iteratively predicting, evaluating and 
refining values using the actual non-linear equations until a solution is found.

perhaps in analysis.  i would hate to see such iterative processing in sample processing 
code.  (it's also one reason i stay away from terminology like zero-delay 
feedback in a discrete-time system.)

We're doing this a lot. It shifts the problem from implementation to 
optimisation. With proper analysis of the system behaviour, for any kind of 
circuit we've done (ladder, SVF, Sallen-Key) one can choose initial estimates 
that converge with results after an average of just over 2 iterations. 
Therefore it's not even all that expensive, and oversampling makes the initial 
guesses easier


so, if you can guarantee that you're limited to 2 iterations (or 3 or 4, 
but fix it), doesn't this mean you can roll out the looped iterations 
into a linear process?  then, at the bottom line, you still have y[n] 
as a function of y[n-1], y[n-2], ... and x[n], x[n-1], x[n-2]...  isn't 
that the case?  you *don't* have y[n] as a function of y[n] (because you 
didn't define y[n] yet).  no?



--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Urs Heckmann

On 22.06.2014, at 20:24, robert bristow-johnson r...@audioimagination.com 
wrote:

 On 6/22/14 1:20 PM, Urs Heckmann wrote:
 On 22.06.2014, at 19:04, robert bristow-johnsonr...@audioimagination.com  
 wrote:
 
 2. Get the computer to crunch numbers by iteratively predicting, 
 evaluating and refining values using the actual non-linear equations until 
 a solution is found.
 perhaps in analysis.  i would hate to see such iterative processing in 
 sample processing code.  (it's also one reason i stay away from terminology 
 like zero-delay feedback in a discrete-time system.)
 We're doing this a lot. It shifts the problem from implementation to 
 optimisation. With proper analysis of the system behaviour, for any kind of 
 circuit we've done (ladder, SVF, Sallen-Key) one can choose initial 
 estimates that converge with results after an average of just over 2 
 iterations. Therefore it's not even all that expensive, and oversampling 
 makes the initial guesses easier
 
 so, if you can guarantee that you're limited to 2 iterations (or 3 or 4, but 
 fix it), doesn't this mean you can roll out the looped iterations into a 
 linear process?  then, at the bottom line, you still have y[n] as a 
 function of y[n-1], y[n-2], ... and x[n], x[n-1], x[n-2]...  isn't that the 
 case?  you *don't* have y[n] as a function of y[n] (because you didn't define 
 y[n] yet).  no?

No, if we take a typical equation like

y = g * ( x - tanh( y ) ) + s

We form it into

Y_result = g * ( x - tanh( Y_estimate ) ) + s

as x, g and s are known, we simply use an iterative root finding algorithm 
(Newton's method, bisection method etc.) over Y_estimate until Y_estimate and 
Y_result become nearly equal. During that process g, x and s do not change. 
Hence, as a result we have found y as a function of itself.

An alternative notation would be

error = g * ( x - tanh( y ) ) + s - y

where the root finding algorithm optimises y to minimize the error, like this

while( abs( error ) > 0.0001 )
{
    y = findABetterGuessUsingMethodOfChoice( y, error );
    error = g * ( x - tanh( y ) ) + s - y;
}

This way we didn't have to find an actual closed-form solution; we let the 
computer crunch it. Therefore the implementation is very simple (even 
trivial...), and all it boils down to is finding a good method to come up with 
a Y_estimate that's as close as possible to Y_result.

Cheers,

- Urs


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread robert bristow-johnson

On 6/22/14 6:01 PM, Urs Heckmann wrote:

On 22.06.2014, at 20:24, robert bristow-johnsonr...@audioimagination.com  
wrote:


On 6/22/14 1:20 PM, Urs Heckmann wrote:

On 22.06.2014, at 19:04, robert bristow-johnsonr...@audioimagination.com   
wrote:


2. Get the computer to crunch numbers by iteratively predicting, evaluating and 
refining values using the actual non-linear equations until a solution is found.

perhaps in analysis.  i would hate to see such iterative processing in sample processing 
code.  (it's also one reason i stay away from terminology like zero-delay 
feedback in a discrete-time system.)

We're doing this a lot. It shifts the problem from implementation to 
optimisation. With proper analysis of the system behaviour, for any kind of 
circuit we've done (ladder, SVF, Sallen-Key) one can choose initial estimates 
that converge with results after an average of just over 2 iterations. 
Therefore it's not even all that expensive, and oversampling makes the initial 
guesses easier

so, if you can guarantee that you're limited to 2 iterations (or 3 or 4, but fix it), doesn't this 
mean you can roll out the looped iterations into a linear process?  then, 
at the bottom line, you still have y[n] as a function of y[n-1], y[n-2], ... and x[n], x[n-1], 
x[n-2]...  isn't that the case?  you *don't* have y[n] as a function of y[n] (because you didn't 
define y[n] yet).  no?

No, if we take a typical equation like

y = g * ( x - tanh( y ) ) + s

We form it into

Y_result = g * ( x - tanh( Y_estimate ) ) + s

as x, g and s are known, we simply use an iterative root finding algorithm 
(Newton's method, bisection method etc.) over Y_estimate until Y_estimate and 
Y_result become nearly equal. During that process g, x and s do not change.


i understand this.  so let's use either indices or different labels for 
the same quantities.  what are you gonna start with for Y_estimate?  
it's not explicit, but maybe y[n-1] is a good initial guess (if i am 
wrong, then you have to explicitly define the first Y_estimate).  let's 
say, for shits and grins, that 3 iterations is enough.



y0 = y[n-1]
y1 = g * ( x[n] - tanh( y0 ) ) + s
y2 = g * ( x[n] - tanh( y1 ) ) + s
y3 = g * ( x[n] - tanh( y2 ) ) + s
y[n] = y3

since we have already fixed the maximum number of iterations, then y2 
and y3 are sufficiently equal.  (at least, if we were to iterate again 
and get a y4, it wouldn't be enough different from y3 to bother with.)


now, by the mathimagical use of substitution, we have

   y[n] = g*(x[n]-tanh( g*(x[n]-tanh( g*(x[n]-tanh( y[n-1] ))+s ))+s ))+s

which is an explicit function of two variables x[n] and y[n-1] with some 
parameters derived from g and s.


but, by the nature of your expectation of convergence, this is really 
just a function of x[n].  y[n-1] was just an initial guess for y[n].  so 
you have a net function of a single variable, x[n].  if this is not the 
case and you have to stay with a function of two variables, then you 
cannot assert convergence (i.e. y[n-1] is a crappy guess, or you haven't 
iterated enough, or, perhaps, no amount of iteration will converge this 
thing to an unambiguous y[n], independent of how good (within reason) 
the initial guess is).


so whether it's a function of a single variable or a function of two 
variables with your previous output in recursion, why not just 
explicitly define that function and evaluate it?  if it's about tube 
curves being the nonlinearity inside, fine, use your initial tube curve 
data in a MATLAB (or C or whoever) analysis program and crank out a 
dense curve for y[n] = func(x[n]).  then fit a finite order polynomial 
to that to get some idea what your upsampling ratio has gotta be, if you 
wanna stay clean.  if you're not worried about staying perfectly clean, 
then continue to use the same func(..) derived from the tube curves (so 
it's not the finite order polynomial that would approximate it), but you 
still use the upsampling ratio derived as if it *were* that finite-order 
polynomial.



  Hence, as a result we have found y as a function of itself.


this iterative recursion does not create a new paradigm of mathematics.  
you can *still* evaluate the function the old-fashioned way, and you 
have an idea on how curvy or nasty it is.




An alternative notation would be

error = g * ( x - tanh( y ) ) + s - y

where the root finding algorithm optimises y to minimize the error, like this

while( abs( error ) > 0.0001 )
{
 y = findABetterGuessUsingMethodOfChoice( y, error );
 error = g * ( x - tanh( y ) ) + s - y;
}

This way we hadn't had to find an actual mathematical solution, we let the 
computer crunch it.


and all's i'm saying is to crunch it in advance.  then represent your 
data in such a way that minimizes the crunching during the real-time 
sample processing.


sorta the same philosophy as wavetable synthesis compared to adding up 
the harmonics in real time as you would with additive synthesis.  just 

Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
 It is different for a circuit that isn't a 1 pole RC.


 no, it's whenever an integrator (1/s in the s universe) is implemented
 numerically with the trapezoid rule.  doesn't matter whether it's a C or
 anything else.

RBJ: please show me the derivation for a 2 pole Sallen Key using the
bi-linear transform, then I'll show you the difference between using
trapezoidal integration (which preserves each capacitor's state
directly from the time domain structure of the circuit) and the
bi-linear transform used by engineers (which operates on the generic
2 pole biquad s-domain Laplace representation, so throws away the
original structure and state of the circuit)

In the one pole case they come down to the same thing (there is only
one capacitor), but the bi-linear transform is actually derived from
the time domain trapezoidal rule.


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
 I think the important thing to note here as well is the phase.
 Trapezoidal keeps the phase and amplitude correct at dc, cutoff, and
 nyquist.



 Nyquist?  are you sure about that?

Yes, thanks for spotting that, I am so used to having Nyquist warped
to infinity that I use them interchangeably in my mind. What I meant
was: at dc, cutoff, and infinity (which is Nyquist in the warped
digital case)


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Ethan Duni
rbj
another semantic to be careful about is transfer function.
we mean something different when it's applied to LTI systems
 (the H(z) or H(s)) than when applied to a diode.  the latter
 semantic i don't use.  i would say volt-amp characteristic
of the diode or vacuum tube.  or if it was a nonlinear system
with an input and output, i would say input-output mapping
function and leave transfer function to the linear and time-invariant.

Now that you mention it, I notice that I avoid transfer function
generally, since it is too generic a term. I tend to use frequency
response or impulse response when talking about LTI systems. They're
about as long as transfer function and are more specific anyhow, so...

Likewise with static nonlinearities, it's always soft-clipping curve or
center-clipper or whatever, never the general transfer function.

E




On Sat, Jun 21, 2014 at 7:19 PM, robert bristow-johnson 
r...@audioimagination.com wrote:

 On 6/21/14 7:21 AM, Urs Heckmann wrote:

 On 20.06.2014, at 17:37, robert bristow-johnsonr...@audioimagination.com
  wrote:

  On 6/20/14 10:57 AM, Andrew Simper wrote:

 On 20 June 2014 17:11, Tim Goetzet...@quitte.de   wrote:

  [Andrew Simper]

 On 18 June 2014 21:01, Tim Goetzet...@quitte.de   wrote:

 I absolutely agree that this looks to be the most promising approach
 in terms of realism.  However, the last time I looked into this, the
 computational cost seemed a good deal too high for a realtime
 implementation sharing a CPU with other tasks.  But perhaps I'll need
 to evaluate it again?

 The computational costs of processing the filters isn't high at all,
 just
 like with DF1 you can compute some simplified coefficients and then
 call
 process using those. Since everything is linear you end up with a
 bunch of
 additions and multiplies just like you do in a DF1, but the energy in
 your
 capacitors is preserved when you change coefficients just like it is
 when
 you change the knobs on a circuit.

 Yeh's work on the Fender tonestack is just that: symbolic nodal
 analysis leading to an equivalent linear digital filter.   I
 mistakenly thought you were proposing nodal analysis including also
 the nonlinear aspects of the circuit including valves and output
 transformer (which without being too familiar with the method I
 believe to lead to a system of equations that's a lot more complicated
 to solve).


  Nodal analysis can refer to linear or non-linear, so sorry for the
 confusion.

 well, Kirchhoff's laws apply to either linear or non-linear.  but the
 methods we know as node-voltage (what i prefer) or loop-current do
 *not* work with non-linear.  these circuits (that we apply the node-voltage
 method to) have dependent or independent voltage or current sources and
 impedances between the nodes.

 I don't quite understand. Are you saying that one can not write down an
 equation for each node once there's a diode or a transistor in the circuit?

 I was of the opinion that transfer functions of, say, diodes are well
 known, and even though the resulting equations need more than just adds and
 multiplies and delays, they are still correct equations albeit ones that
 require more complex measures than undergrad math to solve. I could be
 wrong though, and in this case I'd be happy to learn!


 it's possible that this is only a semantic issue.  Nodal analysis seems
 a little less specific, but when i google it, all of the primary hits go to
 the Node-voltage method, which together with the Loop-current method
 are the two primary ways electrical engineers are taught to analyze
 circuits containing independent or proportionally-dependent voltage or
 current sources and LTI impedances.  only that.

 the underlying physics: Kirchhoff's current law, Kirchhoff's voltage law,
 and the volt-amp characteristics of each element *is* applicable to any
 lumped element circuit, one with any combination of linear or nonlinear
 elements or memoryless or non-memoryless elements.  but the Loop-current
 (sometimes called mesh-current analysis) and Node-voltage methods allow
 one to write the matrix of linear equations governing the circuit directly
 from inspection of the circuit.  it's an analysis discipline we learn in EE
 class.  write the matrix equation down on a big piece of paper, then go
 about solving the system of linear equations.  but Loop-current and
 Node-voltage does not work with nonlinear volt-amp characteristics.

 another semantic to be careful about is transfer function.  we mean
 something different when it's applied to LTI systems (the H(z) or H(s))
 than when applied to a diode.  the latter semantic i don't use.  i would
 say volt-amp characteristic of the diode or vacuum tube.  or if it was a
 nonlinear system with an input and output, i would say input-output
 mapping function and leave transfer function to the linear and
 time-invariant.



I was trying to point out that the linear analysis done by Yeh
 starts the circuit but then throws it away and instead 

Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread robert bristow-johnson

On 6/22/14 10:41 PM, Andrew Simper wrote:

It is different for a circuit that isn't a 1 pole RC.


no, it's whenever an integrator (1/s in the s universe) is implemented
numerically with the trapezoid rule.  doesn't matter whether it's a C or
anything else.

RBJ: please show me the derivation for a 2 pole Sallen Key using the
bi-linear transform,


to have a common reference, let's use this circuit:

  
http://en.wikipedia.org/wiki/Sallen%E2%80%93Key_topology#Application:_Low-pass_filter


use the same Q as specified and define f0 = omega0/(2*pi)

then go to the cookbook, plug in f0, Q, and the sample rate Fs that 
you're using with your trapezoid rule.  don't confuse the Wikipedia 
alpha with the cookbook alpha.


then blast out the coefficient values.  be sure to normalize a0 and get 
5 unambiguous coefficients.


that's the derivation.  *but*, although Q is the same (if there is a 
little peak resonance, it will be the same height in dB), f0 is already 
prewarped.  so you have to prewarp your resistors so that your f0 will 
agree with mine.  you do that, and your filter and mine will be 
identical from an input/output point-of-view.


it doesn't mean the two internal states will be the same, but there will 
be an unambiguous 2-by-2 *linear* mapping of your capacitor states and 
the simple DF2 (not DF1) states.  or you can map it to a lattice or 
Hal's SVF.  just another 2-by-2 linear mapping again.  might get a 
topology where the linear mapping is the Identity matrix.


there are an infinite number of topologies for a 2nd-order biquad.  but 
each can be made to have an identical transfer function and the only 
difference of the states is a linear mapping.



  then I'll show you the difference between using
trapezoidal integration (which preserves each capacitors state
directly in from the time domain structure of the circuit) and the
bi-linear transform used by engineers (which operates on the s-domain
generic 2 pole biquad laplace representation so throws away the
original structure and state of the circuit)


simply using the trapezoid rule for the integration that the capacitors 
do is equivalent to the bilinear transform *without* pre-warping 
anything.  that's already proven.  if it comes up with a different set 
of 5 coefficients, then the way you're pre-warping the resistors is 
not equivalent to pre-warping f0 (the fudge factors for the two 
resistors are not independent of each other).



In the one pole case they come down to the same thing (there is only
one capacitor), but the bi-linear transform is actually derived from
the time domain trapezoidal rule.


it *can* be derived from the time-domain trapezoidal rule, but it's not 
the only way (and isn't the normal way it's derived in textbooks).  what 
can be said is that consistently realizing numerical time-domain 
integration with the trapezoidal rule (i.e. you can't model one 
capacitor with trapezoidal rule integration and use something else for 
the other cap) is equivalent to the bilinear transform *without* 
prewarping any frequency specifications.  that's already proven, Andy.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
RBJ: direct integration like I am proposing can be done in many ways;
what results is a set of linearised equations to be solved. These can
be for nodal voltages, or for differences in voltages - the latter is
called state space. Have a read of this:

DISCRETIZATION OF PARAMETRIC ANALOG CIRCUITS FOR REAL-TIME SIMULATIONS
Kristjan Dempwolf, Martin Holters and Udo Zölzer
http://dafx10.iem.at/papers/DempwolfHoltersZoelzer_DAFx10_P7.pdf

They cover it all much better than I can do via email, but everything
shown is a useful variation of MNA-type (possibly non-linear) nodal
analysis. What you end up with is a trapezoidally integrated model that
preserves the state of each capacitor.

Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread robert bristow-johnson

On 6/22/14 11:24 PM, Andrew Simper wrote:

so whether it's a function of a single variable or a function of two
variables with your previous output in recursion, why not just explicitly
define that function and evaluate it?  if it's about tube curves being the
nonlinearity inside, fine, use your initial tube curve data in a MATLAB (or
C or whoever) analysis program and crank out a dense curve for y[n] =
func(x[n]).  then fit a finite order polynomial to that to get some idea
what your upsampling ratio has gotta be, if you wanna stay clean.  if you're
not worried about staying perfectly clean, then continue to use the same
func(..) derived from the tube curves (so it's not the finite order
polynomial that would approximate it), but you still use the upsampling
ratio derived as if it *were* that finite-order polynomial.



   Hence, as a result we have found y as a function of itself.


this iterative recursion does not create a new paradigm of mathematics.  you
can *still* evaluate the function the old-fashioned way, and you have an
idea on how curvy or nasty it is.

Yes, you are right, in the simple one pole example that Urs posted


Andy, please note what Ethan Duni just said about a bit of confusion here.

the iterative mapping thing Urs posted is not a one pole anything.  
sometimes semantics are important.  we already have a use for the terms 
poles and zeros and this thing from Urs is not about poles.  
ultimately, it's a non-linear and (if 2 or 3 iterations are enough for 
convergence) a memoryless function.  no poles.  no frequency response.  
the recursion y[n-1] is just supposed to be an initial guess for y[n].  
it should converge to nearly the same y[n] with any other starting value 
of the same scale.



you
have a function of two variables that you can explicitly evaluate
using your favourite root finding mechanism, and then use an
approximation to avoid evaluating this at run time. This 2D
approximation is pretty efficient and will be enough to solve this
very basic case. But each non-linearity that is added increases the
space by at least one dimension, so your function gets big very
quickly and you have to start using a non-linear mapping into the
space to keep things under control.


i haven't been able to decode what you just wrote.


  It can be more efficient to only
use these approximations locally and then use a root finding method to
find the global solution


sigh  it's a function.  given his parameters, g and s, then x[n] goes 
in, iterate the thing 50 times, and an unambiguous y[n] comes out.  
doesn't matter what the initial guess is (start with 0, what the hell).  
i am saying that *this* net function is just as deserving a candidate 
for modeling as is the original tanh() or whatever.  just run an offline 
program using MATLAB or python or C or the language of your delight.  
get the points of that function defined with a dense look-up-table.  
then consider ways of modeling *that* directly.  maybe leave it as a 
table lookup.  whatever.  but at least you can see what you're dealing 
with and use that net function to help you decide how much you need to 
upsample.



--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
 you
 have a function of two variables that you can explicitly evaluate
 using your favourite root finding mechanism, and then use an
 approximation to avoid evaluating this at run time. This 2D
 approximation is pretty efficient and will be enough to solve this
 very basic case. But each non-linearity that is added increases the
 space by at least one dimension, so your function gets big very
 quickly and you have to start using a non-linear mapping into the
 space to keep things under control.


 i haven't been able to decode what you just wrote.

This is important to understand if you want to use massive tables to
model this stuff. Perhaps read Yeh's thesis paper? I have already
posted this to another thread, but here it is again:

... but missed David Yeh's dissertation
https://ccrma.stanford.edu/~dtyeh/papers/DavidYehThesissinglesided.pdf
which contains a great description of MNA and how it relates to the
DK-method. I highly recommend everyone read it, thanks David!!

I really hope that an improved DK-method can be found that handles
multiple nonlinearities more elegantly than the current one does. A couple of
things to note here, in general, this method uses multi-dimensional
tables to pre-calculate the difficult implicit equations to solve the
non-linearities, but as the number of non-linearities increases so
does the size of your table as noted in 6.2.2:

The dimension of the table lookup for the stored nonlinearity in
K-method grows with the number of nonlinear devices in the circuit. A
straightforward table lookup is thus impractical for circuits with
more than two transistors or vacuum tubes. However, function
approximation approaches such as neural networks or nonlinear
regression may hold promise for efficiently providing means to
implement these high-dimensional lookup functions.


As you add more non-linearities to the circuit it becomes less
practical to pre-calculate tables to handle the situation. Also
changes caused by things like potentiometers add extra dimensions, and
you will spend all your time doing very high dimensional table
interpolation and completely blow the cache.
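To put rough (purely illustrative) numbers on that growth:

```python
def table_bytes(points_per_axis, dims, bytes_per_entry=4):
    """A dense N-dimensional lookup table has points_per_axis ** dims
    entries, so memory grows exponentially with the number of
    nonlinearities (plus one extra axis per pot)."""
    return bytes_per_entry * points_per_axis ** dims

# 1 axis at 1024 float32 points: 4 KiB -- trivial.
# 4 axes (say input, two device states, one pot): 4 TiB -- hopeless.
for dims in range(1, 5):
    print(dims, table_bytes(1024, dims), "bytes")
```

The example sizes are mine, but the exponential shape is exactly the point Yeh makes in 6.2.2.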

 sigh  it's a function.  given his parameters, g and s, then x[n] goes in,
 iterate the thing 50 times, and an unambiguous y[n] comes out.  doesn't
 matter what the initial guess is (start with 0, what the hell).  i am saying
 that *this* net function is just as deserving a candidate for modeling as is
 the original tanh() or whatever.  just run an offline program using MATLAB
 or python or C or the language of your delight.  get the points of that
 function defined with a dense look-up-table.  then consider ways of modeling
 *that* directly.  maybe leave it as a table lookup.  whatever.  but at least
 you can see what you're dealing with and use that net function to help you
 decide how much you need to upsample.

sigh sigh sigh please at least try and understand what I wrote
before sighing at me! Yes, I agree that for low dimensional cases this
is a good approach, but for any realistic circuit things get
complicated and inefficient really quickly and you are better off with
other methods.


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
 sigh sigh sigh please at least try and understand what I wrote
 before sighing at me! Yes, I agree that for low dimensional cases this
 is a good approach, but for any realistic circuit things get
 complicated and inefficient really quickly and you are better off with
 other methods.

What I mean by other methods is that you can use low dimensional
tables/approximations for local non-linear chunks of circuit, but then
use a global root finder for the inter-connection of these chunks.
Here is an example of how to do this on a realistic circuit:

GUITAR PREAMP SIMULATION USING CONNECTION CURRENTS
Jaromir Macak
http://dafx13.nuim.ie/papers/27.dafx2013_submission_10.pdf

The drawback of this method is that it still requires the use of a
numerical algorithm to solve the equations, but if the inner blocks are
approximated, then the number of unknown variables to be solved
numerically is equal to the number of connection currents, which is much
lower than the original number of unknown variables.


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread robert bristow-johnson

On 6/23/14 12:16 AM, Andrew Simper wrote:

you
have a function of two variables that you can explicitly evaluate
using your favourite root finding mechanism, and then use an
approximation to avoid evaluating this at run time. This 2D
approximation is pretty efficient and will be enough to solve this
very basic case. But each non-linearity that is added increases the
space by at least one dimension, so your function gets big very
quickly and you have to start using a non-linear mapping into the
space to keep things under control.


i haven't been able to decode what you just wrote.

This is important to understand if you want to use massive tables to
model this stuff. Perhaps read Yeh's thesis paper? I have already
posted this to another thread, but here it is again:

... but missed David Yeh's dissertation
https://ccrma.stanford.edu/~dtyeh/papers/DavidYehThesissinglesided.pdf
which contains a great description of MNA and how it relates to the
DK-method. I highly recommend everyone read it, thanks David!!

I really hope that an improved DK-method can be found that handles
multiple nonlinearities more elegantly than the current one does. A couple of
things to note here, in general, this method uses multi-dimensional
tables to pre-calculate the difficult implicit equations to solve the
non-linearities, but as the number of non-linearities increases so
does the size of your table as noted in 6.2.2:

The dimension of the table lookup for the stored nonlinearity in
K-method grows with the number of nonlinear devices in the circuit. A
straightforward table lookup is thus impractical for circuits with
more than two transistors or vacuum tubes. However, function
approximation approaches such as neural networks or nonlinear
regression may hold promise for efficiently providing means to
implement these high-dimensional lookup functions.


As you add more non-linearities to the circuit it becomes less
practical to pre-calculate tables to handle the situation. Also
changes caused by things like potentiometers add extra dimensions, and
you will spend all your time doing very high dimensional table
interpolation and completely blow the cache.


sigh   it's a function.  given his parameters, g and s, then x[n] goes in,
iterate the thing 50 times, and an unambiguous y[n] comes out.  doesn't
matter what the initial guess is (start with 0, what the hell).  i am saying
that *this* net function is just as deserving a candidate for modeling as is
the original tanh() or whatever.  just run an offline program using MATLAB
or python or C or the language of your delight.  get the points of that
function defined with a dense look-up-table.  then consider ways of modeling
*that* directly.  maybe leave it as a table lookup.  whatever.  but at least
you can see what you're dealing with and use that net function to help you
decide how much you need to upsample.

sigh  sigh  sigh  please at least try and understand what I wrote
before sighing at me! Yes, I agree that for low dimensional cases this
is a good approach, but for any realistic circuit things get
complicated and inefficient really quickly and you are better off with
other methods.


Andy and Urs, i have been making consistent and clear points and 
challenges and the response is not addressing these squarely.


let's do the Sallen-Key challenge, Andy.  that's pretty concrete.

you pick the circuit (i suggested the one at wikipedia) so we have a 
common reference.  then you pick an R1, R2, C1, C2 (or an f0 and Q, i 
don't care).  let's leave the DC gain at 0 dB.  then we'll have a common 
and unambiguous H(s).


then let's agree on a common sampling rate, Fs.

then you do your trapezoid rule to both of the integrators (C1 and C2) 
in the circuit and solve for your discrete-time model.  then spell out 
to the rest of us the difference equations.


i will take those difference equations and turn them into an unambiguous 
H(z) in the z-plane.  then the question is if it will be the same H(z) 
we get from the original H(s) and applying the bilinear transform 
without any prewarping of anything.  i am saying they will be the same, 
you had been saying they will not be the same.  this is a falsifiable 
experiment, and we can see who's correct.


now that's one thing.  it's about applying the trapezoid-rule 
integration or the bilinear transform (without prewarping) to the same 
continuous-time LTI circuit.
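for the single-integrator case the equivalence can be checked by hand: the trapezoid rule applied to an integrator dy/dt = x(t), with step T = 1/Fs, gives

```latex
y[n] = y[n-1] + \frac{T}{2}\,\bigl(x[n] + x[n-1]\bigr)
\quad\Longrightarrow\quad
\frac{Y(z)}{X(z)} = \frac{T}{2}\cdot\frac{1+z^{-1}}{1-z^{-1}}
```

which is the analog integrator 1/s with s replaced by (2/T)(1-z^{-1})/(1+z^{-1}), i.e. the bilinear transform with no prewarping.  since an LTI circuit is built from integrators, gains, and sums, applying the trapezoid rule state-by-state gives the same H(z) as bilinear-transforming H(s).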


the other thing Urs brought up for discussion is an iterative and 
recursive process that converges on a result value, given an input.  i 
am saying that this can be rolled out into a non-recursive equivalent, 
if the number of iterations needed for convergence is finite.  this is a 
totally different issue.


i'm happy to address either (or both) issues directly without deferring 
to any other distraction.  i can state clearly what my claim is and can 
we can test that claim.  likewise, you can state clearly what your claim 
is, and we can test that.


--

r b-j  

Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
On 23 June 2014 11:25, robert bristow-johnson r...@audioimagination.com wrote:
 On 6/22/14 10:48 PM, Andrew Simper wrote:

 I think the important thing to note here as well is the phase.
 Trapezoidal keeps the phase and amplitude correct at dc, cutoff, and
 nyquist.


 Nyquist?  are you sure about that?

 Yes,

PS: I was agreeing with you by saying Yes, and thanking you for
spotting the mistake. It's called English, and it is a bit vague at
times, but please try and read what I'm saying, not just take random
word snippets and construct your own interpretation of them so you can
vent.


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread rohit
I think you should look at this like a tool set. Table look-up is one tool 
you can use, as is iterative function evaluation. What tools you use depends on 
circumstances. On the PC platform you have big caches, lots of memory and really 
fast CPU clocks. If you go FPGA, the clock rate goes down, as does the amount of 
RAM you can access. You have better control of process delay. FPGA favors more 
computation while the PC platform likes table look-ups.

Sent from my Samsung Corby


Re: [music-dsp] Simulating Valve Amps

2014-06-21 Thread Urs Heckmann

On 20.06.2014, at 17:37, robert bristow-johnson r...@audioimagination.com 
wrote:

 On 6/20/14 10:57 AM, Andrew Simper wrote:
 On 20 June 2014 17:11, Tim Goetzet...@quitte.de  wrote:
 
 [Andrew Simper]
 On 18 June 2014 21:01, Tim Goetzet...@quitte.de  wrote:
 I absolutely agree that this looks to be the most promising approach
 in terms of realism.  However, the last time I looked into this, the
 computational cost seemed a good deal too high for a realtime
 implementation sharing a CPU with other tasks.  But perhaps I'll need
 to evaluate it again?
 The computational costs of processing the filters isn't high at all, just
 like with DF1 you can compute some simplified coefficients and then call
 process using those. Since everything is linear you end up with a bunch of
 additions and multiplies just like you do in a DF1, but the energy in your
 capacitors is preserved when you change coefficients just like it is when
 you change the knobs on a circuit.
 Yeh's work on the Fender tonestack is just that: symbolic nodal
 analysis leading to an equivalent linear digital filter.   I
 mistakenly thought you were proposing nodal analysis including also
 the nonlinear aspects of the circuit including valves and output
 transformer (which without being too familiar with the method I
 believe to lead to a system of equations that's a lot more complicated
 to solve).
 
 
 Nodal analysis can refer to linear or non-linear, so sorry for the
 confusion.
 
 well, Kirchhoff's laws apply to either linear or non-linear.  but the methods 
 we know as node-voltage (what i prefer) or loop-current do *not* work 
 with non-linear.  these circuits (that we apply the node-voltage method to) 
 have dependent or independent voltage or current sources and impedances 
 between the nodes.

I don't quite understand. Are you saying that one can not write down an 
equation for each node once there's a diode or a transistor in the circuit?

I was of the opinion that transfer functions of, say, diodes are well known, 
and even though the resulting equations need more than just adds and multiplies 
and delays, they are still correct equations albeit ones that require more 
complex measures than undergraduate math to solve. I could be wrong though, and in 
this case I'd be happy to learn!

 
  I was trying to point out that the linear analysis done by Yeh
starts with the circuit but then throws it away and instead uses a DF1, and a
 DF1 does not store the state of each capacitor individually, so when you
 turn the knob you don't get the right time varying behaviour.
 
 in the steady-state (say, a second after the knob is turned) is there the 
 right behavior with the DF1?

But of course - in a linear filter, such as one composed out of RC networks. 
But what about voltage controlled filters?

 
  I am saying
 you don't have to throw the circuit away, you can still get an efficient
 implementation since in the linear case everything reduces to a bunch of
 adds and multiplies.
 
 and delay states.

But of course there has to be a state history. But Andy talks about 
computational efficency here, so he was referring to actual mathematical 
operations, not loads and stores which may be optimised away if the results can 
be kept in a CPU register within a tight loop.

 
 For non-linear modelling you need additional steps, and depending on the
 circuit there are many different methods that can be tried to find the best
 fit for the particular requirements
 
 if the sample rate is high enough (and it *should* be pretty fast because of 
 the aliasing issue) the deltaT used in forward differences or backward 
 differences (or predictor-corrector) or whatever should be pretty small.  in 
 my opinion, if you have a bunch of memoryless non-linear elements connected 
 in a circuit with linear elements (with or without memory), it seems to me 
 that the simple Euler's forward method (like we learned in undergraduate 
 school) suffices to model it.

That's where it's a bit tricky. In my naive way of thinking, I see non-linear 
elements, say a diode, or something tanh-ish, as voltage dependent resistors. 
Therefore the signal in a non-linear filter, naively spoken, influences the 
overall filter frequency. This explains why for instance in an analogue 
synthesizer the frequency of the self oscillating filter sounds like it's 
modulated by the filter input (in actual fact, run an LFO through a 
self-oscillating filter and you get a filter sweep. Alternatively, drive a 
square wave really hard in a lowpass filter and hear the upper harmonics 
attenuate)

Now, if we could agree that, naively spoken, the signal performs audio rate 
modulation of the cutoff frequency in a non-linear filter, then we would have 
to determine whether or not the effect of using simple Euler has any 
disadvantage over, say, trapezoidal integration and solving feedback loops 
delayless. That is, is there an audible difference in a practical case, even at 
really high samplerates?

We have 

Re: [music-dsp] Simulating Valve Amps

2014-06-21 Thread Andrew Simper
On 20 June 2014 23:37, robert bristow-johnson r...@audioimagination.com
wrote:

 well, Kirchhoff's laws apply to either linear or non-linear.  but the
 methods we know as node-voltage (what i prefer) or loop-current do
 *not* work with non-linear.  these circuits (that we apply the node-voltage
 method to) have dependent or independent voltage or current sources and
 impedances between the nodes.


If this is the case you should tell everyone using Spice to stick to
linear components only, since the non-linear ones won't work!



   I was trying to point out that the linear analysis done by Yeh
 starts with the circuit but then throws it away and instead uses a DF1, and a
 DF1 does not store the state of each capacitor individually, so when you
 turn the knob you don't get the right time varying behaviour.


 in the steady-state (say, a second after the knob is turned) is there the
 right behavior with the DF1?


If you start the filters with silence and don't change any settings, and you
have infinite processing resolution, they will be identical. I would guess
that the time for artificial energy injected into a DF1 due to parameter
changes to become unnoticeable would depend on the particular circuit. Guessing
again, I would say it is most likely that low frequencies with high
resonance would be most problematic, and from what I have noticed sweeping
downwards in frequency is the worst case. But you are the expert on DF1s:
what are the results of research you have conducted into these issues?



I am saying
 you don't have to throw the circuit away, you can still get an efficient
 implementation since in the linear case everything reduces to a bunch of
 adds and multiplies.


 and delay states.


You just store a single state per capacitor actually, please read these for
some examples: www.cytomic.com/technical-papers



 For non-linear modelling you need additional steps, and depending on the
 circuit there are many different methods that can be tried to find the
 best
 fit for the particular requirements


 if the sample rate is high enough (and it *should* be pretty fast because
 of the aliasing issue) the deltaT used in forward differences or backward
 differences (or predictor-corrector) or whatever should be pretty small.
  in my opinion, if you have a bunch of memoryless non-linear elements
 connected in a circuit with linear elements (with or without memory), it
 seems to me that the simple Euler's forward method (like we learned in
 undergraduate school) suffices to model it.


In tests I've done (where I've had to simplify some parts just to get the
explicit version to be stable) it takes around 8x oversampling of a base
rate of 44.1 kHz before the single-step explicit version sounds almost
identical to an implicit version, so



Andrew, i realize that you had been using something like that to emulate
 linear circuits with capacitors and resistors and op-amps.  it does make a
 difference in time-variant situations, but for the steady state (a second
 or two after the knob is twisted), i'm a little dubious of what difference
 it makes.


I would guess that for guitar circuits, if the cutoff frequencies are around
80 Hz at the lowest, then after two seconds of not moving any knobs any
linear filter that hasn't blown up would settle to sound the same given
infinite processing resolution, but I have not done research into this so I
would not bank on it, especially when direct circuit simulation is so
straightforward, and then the non-linear case can also be handled with
identical machinery at the cost of more cpu.


Re: [music-dsp] Simulating Valve Amps

2014-06-21 Thread Rich Breen
Just as a data point; Been measuring and dealing with converter and DSP 
throughput latency in the studio since the first digital machines in the early 
'80's; my own experience is that anything above 2 or 3 msec of throughput 
latency starts to become an issue for professional musicians; 5 msec becomes 
very noticeable on headphones, and above 6 msec is not usable.

best,
rich


On Jun 21, 2014, at 4:21 AM, music-dsp-requ...@music.columbia.edu wrote:

 Clearly a 25ms delay post-applied to the rhythm guitar is going to mess 
 with the groove in almost any musical setting. But the real question is 
 whether it's going to mess with the guitarist's performance of the 
 groove if they can hear the delay while they are playing (i.e. can they 
 compensate to recreate the same groove). And then, when comparing to 
 zero latency hardware, at what point does it become trivial to 
 compensate (i.e. doesn't introduce any additional cognitive burden).
 
 I believe that various studies have put the playable latency threshold 
 below 8ms. (Sorry, don't have time to look this up but I think that one 
 example is in one of Alex Carot's network music papers).

-
r...@richbreen.com
http://www.richbreen.com



Re: [music-dsp] Simulating Valve Amps

2014-06-21 Thread Ross Bencina

Hi Rich,

On 22/06/2014 1:09 AM, Rich Breen wrote:

Just as a data point; Been measuring and dealing with converter and
DSP throughput latency in the studio since the first digital machines
in the early '80's;


Out of interest, what is your latency measurement method of choice?



my own experience is that anything above 2 or 3
msec of throughput latency starts to become an issue for professional
musicians; 5 msec becomes very noticeable on headphones, and above
6 msec is not usable.


Do you think they notice below 2ms?

Ross.


Re: [music-dsp] Simulating Valve Amps

2014-06-21 Thread Phil Burk



On 6/21/14, 8:09 AM, Rich Breen wrote:

5 msec becomes very noticeable on headphones, and above
6 msec is not usable.


Note that the speed of sound in air is roughly 1125 feet/second. So if a 
guitar player is more than 7 feet from their amp then they will have 
more than 6 msec of latency.


For acoustic instrument players the speed of sound is not an issue. But 
if an electronic musician is looking for ultra-low latency then they 
must also consider their distance from the loudspeaker.
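For concreteness, a quick back-of-envelope helper (the constants are the approximate figures quoted above; the 64-sample case anticipates typical buffer sizes):

```python
SPEED_OF_SOUND_FT_PER_S = 1125.0  # approx, in air at room temperature

def acoustic_latency_ms(distance_ft):
    """One-way acoustic delay from loudspeaker to ear."""
    return 1000.0 * distance_ft / SPEED_OF_SOUND_FT_PER_S

def buffer_latency_ms(frames, sample_rate_hz):
    """Delay contributed by one audio buffer of the given length."""
    return 1000.0 * frames / sample_rate_hz

# 7 ft from the amp is already over the 6 msec threshold quoted above:
print(round(acoustic_latency_ms(7.0), 2))   # 6.22 ms
# a 64-sample buffer at 44.1 kHz adds about 1.45 ms:
print(round(buffer_latency_ms(64, 44100), 2))
```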


Phil Burk


Re: [music-dsp] Simulating Valve Amps

2014-06-21 Thread Nigel Redmon
I agree, Phil, that the “6 msec is not usable” is not a realistic statement. 
First, the brain anticipates. Humans are incredibly good at throwing things, 
for instance. (In a few minutes, I’m heading out to play basketball and drain 
some “threes”.) And the brain needs to tell the hand to release the baseball, 
or flick the wrist to launch the basketball, *way* before the hand is in 
position. Yet the accuracy with which we do it is amazing. We fall far short of 
the big cats in speed, and the strength of other primates even half our size, 
but boy can we bean something with a rock 100 feet away like no other critter 
on the planet. So the brain is pretty good about anticipatory timing.

If you’re in a living room or garage, x feet from the drummer, y feet from the 
bass cabinet, and z feet from your leslie, you can still cope.

Then it’s a little different thing to play solo. I already mentioned about the 
huge delay I had to get used to with the big pipe organ. It was bad, but it 
sure would have been horrible to play piano that way (easy enough simulated 
with a delay line and piano plug-in). When I’m recording via Ivory, say, I’ve 
got headphones, and I want a 64-sample buffer. 128 is reasonable. Beyond that, 
I quickly arrive at the point when I lose the sense of connection between my 
fingers and the instrument, and at minimum my comfort level suffers. If there 
were no alternative, I’d get used to it, but it’s not optimal. But “unusable” 
is a bit extreme.


On Jun 21, 2014, at 4:29 PM, Phil Burk philb...@mobileer.com wrote:
 
 On 6/21/14, 8:09 AM, Rich Breen wrote:
 5 msec becomes very noticeable on headphones, and above
 6 msec is not usable.
 
 Note that the speed of sound in air is roughly 1125 feet/second. So if a 
 guitar player is more than 7 feet from their amp then they will have more 
 than 6 msec of latency.
 
 For acoustic instrument players the speed of sound is not an issue. But if an 
 electronic musician is looking for ultra-low latency then they must also 
 consider their distance from the loudspeaker.
 
 Phil Burk



Re: [music-dsp] Simulating Valve Amps

2014-06-21 Thread Nigel Redmon
PS—I actually took Rich’s “6ms is unusable” to mean “unacceptable”, which I do 
agree with. For instance, when I had a Korg Trinity, I liked the keyboard 
action (61 key) very much in general, but I would *never* enable the combi 
mode—it was so slow that it was unacceptable to me, even for the far richer 
textures it gave in return.


On Jun 21, 2014, at 5:09 PM, Nigel Redmon earle...@earlevel.com wrote:

 I agree, Phil, that the “6 msec is not usable” is not a realistic statement. 
 First, the brain anticipates. Humans are incredibly good at throwing things, 
 for instance. (In a few minutes, I’m heading out to play basketball and drain 
 some “threes”.) And the brain needs to tell the hand to release the baseball, 
 or flick the wrist to launch the basketball, *way* before the hand is in 
 position. Yet the accuracy with which we do it is amazing. We fall far short 
 of the big cats in speed, and the strength of other primates even half our 
 size, but boy can we bean something with a rock 100 feet away like no other 
 critter on the planet. So the brain is pretty good about anticipatory timing.
 
 If you’re in a living room or garage, x feet from the drummer, y feet from 
 the bass cabinet, and z feet from your leslie, you can still cope.
 
 Then it’s a little different thing to play solo. I already mentioned about 
 the huge delay I had to get used to with the big pipe organ. It was bad, but 
 it sure would have been horrible to play piano that way (easy enough 
 simulated with a delay line and piano plug-in). When I’m recording via Ivory, 
 say, I’ve got headphones, and I want a 64-sample buffer. 128 is reasonable. 
 Beyond that, I quickly arrive at the point when I lose the sense of 
 connection between my fingers and the instrument, and at minimum my comfort 
 level suffers. If there were no alternative, I’d get used to it, but it’s not 
 optimal. But “unusable” is a bit extreme.
 
 
 On Jun 21, 2014, at 4:29 PM, Phil Burk philb...@mobileer.com wrote:
 
 On 6/21/14, 8:09 AM, Rich Breen wrote:
 5 msec becomes very noticeable on headphones, and above
 6 msec is not usable.
 
 Note that the speed of sound in air is roughly 1125 feet/second. So if a 
 guitar player is more than 7 feet from their amp then they will have more 
 than 6 msec of latency.
 
 For acoustic instrument players the speed of sound is not an issue. But if 
 an electronic musician is looking for ultra-low latency then they must also 
 consider their distance from the loudspeaker.
 
 Phil Burk



Re: [music-dsp] Simulating Valve Amps

2014-06-21 Thread robert bristow-johnson

On 6/21/14 7:21 AM, Urs Heckmann wrote:

On 20.06.2014, at 17:37, robert bristow-johnsonr...@audioimagination.com  
wrote:


On 6/20/14 10:57 AM, Andrew Simper wrote:

On 20 June 2014 17:11, Tim Goetzet...@quitte.de   wrote:


[Andrew Simper]

On 18 June 2014 21:01, Tim Goetzet...@quitte.de   wrote:

I absolutely agree that this looks to be the most promising approach
in terms of realism.  However, the last time I looked into this, the
computational cost seemed a good deal too high for a realtime
implementation sharing a CPU with other tasks.  But perhaps I'll need
to evaluate it again?

The computational costs of processing the filters isn't high at all, just
like with DF1 you can compute some simplified coefficients and then call
process using those. Since everything is linear you end up with a bunch of
additions and multiplies just like you do in a DF1, but the energy in your
capacitors is preserved when you change coefficients just like it is when
you change the knobs on a circuit.

Yeh's work on the Fender tonestack is just that: symbolic nodal
analysis leading to an equivalent linear digital filter.   I
mistakenly thought you were proposing nodal analysis including also
the nonlinear aspects of the circuit including valves and output
transformer (which without being too familiar with the method I
believe to lead to a system of equations that's a lot more complicated
to solve).



Nodal analysis can refer to linear or non-linear, so sorry for the
confusion.

well, Kirchhoff's laws apply to either linear or non-linear.  but the methods we know as 
node-voltage (what i prefer) or loop-current do *not* work with non-linear. 
 these circuits (that we apply the node-voltage method to) have dependent or independent voltage or 
current sources and impedances between the nodes.

I don't quite understand. Are you saying that one can not write down an 
equation for each node once there's a diode or a transistor in the circuit?

I was of the opinion that transfer functions of, say, diodes are well known, 
and even though the resulting equations need more than just adds and multiplies 
and delays, they are still correct equations albeit ones that require more 
complex measures than undergraduate math to solve. I could be wrong though, and in 
this case I'd be happy to learn!


it's possible that this is only a semantic issue.  Nodal analysis 
seems a little less specific, but when i google it, all of the primary 
hits go to the Node-voltage method, which together with the 
Loop-current method are the two primary ways electrical engineers are 
taught to analyze circuits containing independent or 
proportionally-dependent voltage or current sources and LTI impedances.  
only that.


the underlying physics: Kirchhoff's current law, Kirchhoff's voltage law, 
and the volt-amp characteristics of each element *is* applicable to any 
lumped element circuit, one with any combination of linear or 
nonlinear elements or memoryless or non-memoryless elements.  but the 
Loop-current (sometimes called mesh-current analysis) and Node-voltage 
methods allow one to write the matrix of linear equations governing the 
circuit directly from inspection of the circuit.  it's an analysis 
discipline we learn in EE class.  write the matrix equation down on a 
big piece of paper, then go about solving the system of linear 
equations.  but Loop-current and Node-voltage does not work with 
nonlinear volt-amp characteristics.


another semantic to be careful about is transfer function.  we mean 
something different when it's applied to LTI systems (the H(z) or 
H(s)) than when applied to a diode.  the latter semantic i don't use.  
i would say volt-amp characteristic of the diode or vacuum tube, or, 
if it was a nonlinear system with an input and output, i would say 
input-output mapping function, and leave transfer function to the 
linear and time-invariant.




  I was trying to point out that the linear analysis done by Yeh
starts with the circuit but then throws it away and instead uses a DF1, and a
DF1 does not store the state of each capacitor individually, so when you
turn the knob you don't get the right time varying behaviour.

in the steady-state (say, a second after the knob is turned) is there the right 
behavior with the DF1?

But of course - in a linear filter, such as one composed out of RC networks. 
But what about voltage controlled filters?



of course a VCF driven by a constantly changing LFO waveform (or its 
digital model) is a different thing.  i was responding to the case where 
there is an otherwise-stable filter connected to a knob.  sometimes the 
knob gets adjusted before the song or the set or gig starts and never 
gets moved after that for the evening.  for filter applications like 
that, i am not as worried myself about the right time-varying behavior 
(whatever right is).  for the case of modulated filters, we gotta 
worry about it, and a few different papers have made that case (i 
remember a good one from Laroche).

Re: [music-dsp] Simulating Valve Amps

2014-06-20 Thread Tim Goetze
[Andrew Simper]
On 18 June 2014 21:01, Tim Goetze t...@quitte.de wrote:
 I absolutely agree that this looks to be the most promising approach
 in terms of realism.  However, the last time I looked into this, the
 computational cost seemed a good deal too high for a realtime
 implementation sharing a CPU with other tasks.  But perhaps I'll need
 to evaluate it again?

The computational costs of processing the filters isn't high at all, just
like with DF1 you can compute some simplified coefficients and then call
process using those. Since everything is linear you end up with a bunch of
additions and multiplies just like you do in a DF1, but the energy in your
capacitors is preserved when you change coefficients just like it is when
you change the knobs on a circuit.

Yeh's work on the Fender tonestack is just that: symbolic nodal
analysis leading to an equivalent linear digital filter.   I
mistakenly thought you were proposing nodal analysis including also
the nonlinear aspects of the circuit including valves and output
transformer (which without being too familiar with the method I
believe to lead to a system of equations that's a lot more complicated
to solve).
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Simulating Valve Amps

2014-06-20 Thread Urs Heckmann

On 20.06.2014, at 11:11, Tim Goetze t...@quitte.de wrote:

  I
 mistakenly thought you were proposing nodal analysis including also
 the nonlinear aspects of the circuit including valves and output
 transformer (which without being too familiar with the method I
 believe to lead to a system of equations that's a lot more complicated
 to solve).

Well, it might not be as straight forward, but there are various methods to 
solve non-linear systems of equations.

The main problem arises when a non-linear element occurs on both sides of an 
equation, such as

Vout = g * ( tanh( Vin ) - tanh( Vout ) ) + iceq

As one can't solve this for Vout in closed form, there are a few different approaches 
to deal with it computationally.

The most common one is to introduce a unit delay (aka the naive method)

Vout[ n ] = g * ( tanh( Vin[ n ] ) - tanh( Vout[ n - 1 ] ) ) + iceq

This however leads to a mangled phase/frequency response and worse time-varying 
properties, e.g. when sweeping g

A quick and easy method to resolve this is to only apply the unit delay to the 
effect of the non-linearity itself, but keep the system delay-less for its 
linear components:

Vout = g * ( tanh( Vin ) - f[ n-1 ] * Vout  ) + iceq
f[ n ] = tanh( Vout )/Vout;

This way the equation can be solved for Vout and the non-linearity becomes a 
factor that's applied with one sample delay.
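As a rough sketch of this delayed-factor trick (Python; the function name, the gain g and the iceq default are purely illustrative, not from any particular circuit): because f lags one sample, each output sample has a closed-form solution.

```python
import math

def process(vin_samples, g=0.5, iceq=0.0):
    # Delayed-nonlinearity-factor method: f[n-1] = tanh(Vout)/Vout lags
    # one sample, so Vout = g*(tanh(Vin) - f[n-1]*Vout) + iceq can be
    # solved in closed form each step.  g and iceq are illustrative.
    f = 1.0                       # tanh(x)/x -> 1 as x -> 0
    out = []
    for vin in vin_samples:
        vout = (g * math.tanh(vin) + iceq) / (1.0 + g * f)
        f = math.tanh(vout) / vout if abs(vout) > 1e-12 else 1.0
        out.append(vout)
    return out
```

For small inputs this behaves like the linear gain g/(1+g), which is a quick sanity check on the closed-form step.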

Similar methods apply an offset and a tangent to linearize the non-linearity 
for a moment in time.

So far this is computationally easy going, but it's also a compromise in 
accuracy. To become numerically accurate one typically uses iterative methods 
to look into the future. A very simple approach is to estimate the value in 
question, calculate the equation and compare the estimate with the result. From 
there on one can refine the estimate until it converges with the result to any 
desired accuracy, such as:

Vout_result = g * ( tanh( Vin ) - tanh( Vout_estimate ) ) + iceq;
while ( abs( Vout_result - Vout_estimate ) > 0.0001 )
{
Vout_estimate = rootFindingAlgorithmOfChoice( Vout_result );
Vout_result = g * ( tanh( Vin ) - tanh( Vout_estimate ) ) + iceq;
}

There you go. It's very basic and it can be optimised to run on a modern CPU.
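A hedged Python sketch of that loop, with Newton-Raphson standing in as the rootFindingAlgorithmOfChoice (the 0.0001 tolerance is taken from the pseudocode above; everything else, including the initial estimate, is an illustrative assumption):

```python
import math

def solve_vout(vin, g=1.0, iceq=0.0, tol=1e-4, max_iter=50):
    # Solve Vout = g*(tanh(Vin) - tanh(Vout)) + iceq by Newton-Raphson.
    # The residual is monotonic in Vout, so the iteration is well behaved.
    y = g * math.tanh(vin) + iceq          # initial estimate: drop tanh(Vout)
    for _ in range(max_iter):
        residual = y - g * (math.tanh(vin) - math.tanh(y)) - iceq
        if abs(residual) < tol:
            break
        # d(residual)/dy = 1 + g * sech^2(y), always positive
        y -= residual / (1.0 + g * (1.0 - math.tanh(y) ** 2))
    return y
```

Because the implicit equation here is monotonic in Vout, Newton converges in a handful of iterations from this starting point; other circuits may need a more careful root finder.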

Cheers,

- Urs



Re: [music-dsp] Simulating Valve Amps

2014-06-20 Thread Andrew Simper
On 20 June 2014 17:11, Tim Goetze t...@quitte.de wrote:

 [Andrew Simper]
 On 18 June 2014 21:01, Tim Goetze t...@quitte.de wrote:
  I absolutely agree that this looks to be the most promising approach
  in terms of realism.  However, the last time I looked into this, the
  computational cost seemed a good deal too high for a realtime
  implementation sharing a CPU with other tasks.  But perhaps I'll need
  to evaluate it again?
 
 The computational costs of processing the filters isn't high at all, just
 like with DF1 you can compute some simplified coefficients and then call
 process using those. Since everything is linear you end up with a bunch of
 additions and multiplies just like you do in a DF1, but the energy in your
 capacitors is preserved when you change coefficients just like it is when
 you change the knobs on a circuit.

 Yeh's work on the Fender tonestack is just that: symbolic nodal
 analysis leading to an equivalent linear digital filter.   I
 mistakenly thought you were proposing nodal analysis including also
 the nonlinear aspects of the circuit including valves and output
 transformer (which without being too familiar with the method I
 believe to lead to a system of equations that's a lot more complicated
 to solve).


Nodal analysis can refer to linear or non-linear, so sorry for the
confusion. I was trying to point out that the linear analysis done by Yeh
starts with the circuit but then throws it away and instead uses a DF1, and a
DF1 does not store the state of each capacitor individually, so when you
turn the knob you don't get the right time varying behaviour. I am saying
you don't have to throw the circuit away; you can still get an efficient
implementation, since in the linear case everything reduces to a bunch of
adds and multiplies.

For non-linear modelling you need additional steps, and depending on the
circuit there are many different methods that can be tried to find the best
fit for the particular requirements.


Re: [music-dsp] Simulating Valve Amps

2014-06-20 Thread robert bristow-johnson

On 6/20/14 10:57 AM, Andrew Simper wrote:

On 20 June 2014 17:11, Tim Goetze t...@quitte.de wrote:


[Andrew Simper]

On 18 June 2014 21:01, Tim Goetze t...@quitte.de wrote:

I absolutely agree that this looks to be the most promising approach
in terms of realism.  However, the last time I looked into this, the
computational cost seemed a good deal too high for a realtime
implementation sharing a CPU with other tasks.  But perhaps I'll need
to evaluate it again?

The computational costs of processing the filters isn't high at all, just
like with DF1 you can compute some simplified coefficients and then call
process using those. Since everything is linear you end up with a bunch of
additions and multiplies just like you do in a DF1, but the energy in your
capacitors is preserved when you change coefficients just like it is when
you change the knobs on a circuit.

Yeh's work on the Fender tonestack is just that: symbolic nodal
analysis leading to an equivalent linear digital filter.   I
mistakenly thought you were proposing nodal analysis including also
the nonlinear aspects of the circuit including valves and output
transformer (which without being too familiar with the method I
believe to lead to a system of equations that's a lot more complicated
to solve).



Nodal analysis can refer to linear or non-linear, so sorry for the
confusion.


well, Kirchhoff's laws apply to either linear or non-linear.  but the 
methods we know as node-voltage (what i prefer) or loop-current do 
*not* work with non-linear elements.  these circuits (that we apply the 
node-voltage method to) have dependent or independent voltage or current 
sources and impedances between the nodes.
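As a toy illustration of writing the node equations by inspection (a hypothetical two-node resistive ladder, not any circuit from this thread): each diagonal entry of the conductance matrix is the sum of conductances at that node, and each off-diagonal entry is the negated conductance between the two nodes.

```python
def node_voltages(vs, r1, r2, r3, r4):
    # Node-voltage method for: Vs --R1-- node1 --R3-- node2,
    # with R2 from node1 to ground and R4 from node2 to ground.
    # By inspection: G*v = i, where i carries the source contribution.
    g1, g2, g3, g4 = 1/r1, 1/r2, 1/r3, 1/r4
    a11, a12 = g1 + g2 + g3, -g3
    a21, a22 = -g3, g3 + g4
    i1, i2 = vs * g1, 0.0
    det = a11 * a22 - a12 * a21          # solve the 2x2 system directly
    v1 = (i1 * a22 - a12 * i2) / det
    v2 = (a11 * i2 - a21 * i1) / det
    return v1, v2
```

With r3 made very large the second node decouples and v1 falls back to the plain R1/R2 voltage divider, which is an easy sanity check.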



  I was trying to point out that the linear analysis done by Yeh
starts with the circuit but then throws it away and instead uses a DF1, and a
DF1 does not store the state of each capacitor individually, so when you
turn the knob you don't get the right time varying behaviour.


in the steady-state (say, a second after the knob is turned) is there 
the right behavior with the DF1?



  I am saying
you don't have to throw the circuit away, you can still get an efficient
implementation since in the linear case everything reduces to a bunch of
adds and multiplies.


and delay states.


For non-linear modelling you need additional steps, and depending on the
circuit there are many different methods that can be tried to find the best
fit for the particular requirements.


if the sample rate is high enough (and it *should* be pretty fast 
because of the aliasing issue), the deltaT used in forward differences 
or backward differences (or predictor-corrector) or whatever should be 
pretty small.  in my opinion, if you have a bunch of memoryless 
non-linear elements connected in a circuit with linear elements (with or 
without memory), it seems to me that the simple Euler forward method 
(like we learned in undergraduate school) suffices to model it.
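As a sketch of that Euler-forward approach (the component values, the back-to-back Shockley diode model, and the oversampled rate are all illustrative assumptions, not taken from any amp in this thread):

```python
import math

def diode_clipper_euler(vin_samples, fs, r=2200.0, c=10e-9,
                        i_s=1e-12, vt=0.026):
    # Series R into C, with back-to-back diodes across the capacitor.
    # KCL at the capacitor node gives the capacitor current, and
    # forward Euler steps the capacitor voltage with deltaT = 1/fs.
    dt = 1.0 / fs
    vc = 0.0
    out = []
    for vin in vin_samples:
        i_cap = (vin - vc) / r - 2.0 * i_s * math.sinh(vc / vt)
        vc += dt * i_cap / c
        out.append(vc)
    return out
```

Note the caveat above about the rate really matters here: once the diodes conduct hard, their incremental conductance is large and the explicit Euler step goes unstable at ordinary audio sample rates, so this kind of sketch only behaves with substantial oversampling.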


Andrew, i realize that you had been using something like that to emulate 
linear circuits with capacitors and resistors and op-amps.  it does make 
a difference in time-variant situations, but for the steady state (a 
second or two after the knob is twisted), i'm a little dubious of what 
difference it makes.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Simulating Valve Amps

2014-06-19 Thread Sampo Syreeni

On 2014-06-19, Ross Bencina wrote:

There is a segment of the market that values accurate models--at any 
computational cost.


Then, can you do that at low latency, so that your model is also 
playable? That's of course the next frontier. And no, there's no 
shortcut there: those giga-computations not only take memory bandwidth 
but the kind of low latency which just isn't there.


That's why we're seeing this kind of technology in offline plugins, but 
the musicians still play analog instruments. Which of course have zero 
latency. ;)

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [music-dsp] Simulating Valve Amps

2014-06-19 Thread Rohit Agarwal



In terms of computational complexity, most of the complexity is in
modelling, tuning the parameters to fit data. However, once you're done
with this offline task, running the result should not be that heavy. That
process should be real-time on new CPUs. Your latency should then be just
the buffering which should get you down to 25 ms.

 


From: Ross Bencina rossb-li...@audiomulch.com
Sent: A discussion list for music-related DSP music-dsp@music.columbia.edu
Date: Thu, June 19, 2014 11:59 am
Subject: Re: [music-dsp] Simulating Valve Amps

 Hi Sampo,

 On 19/06/2014 4:06 PM, Sampo Syreeni wrote:
 On 2014-06-19, Ross Bencina wrote:

 There is a segment of the market that values accurate models--at any
 computational cost.

 Keep in mind that this was a response to your claim that nobody's going
 to pay you. I'm not taking a position on the value of that path.

 Then, can you do that at low latency, so that your model is also
 playable? That's of course the next frontier.

 It depends what you mean by playable. People have been living with the
 latency of digital gear for a long time.

 And no, there's no shortcut there: those giga-computations not only take
 memory bandwidth but the kind of low latency which just isn't there.

 Can you expand on just isn't there?

 I guess that there is some fundamental lower limit on achievable latency
 due to the band limiting filters in ADCs and DACs. What is that number?
 (I've heard on the order of 24 samples for one direction). Is it a hard
 limit?

 Let's assume that we move to sample by sample processing (or maybe
 pipelined processing) on FPGAs synchronously coupled to the ADC and DAC.
 What's the theoretical lower bound on latency that can be achieved here?
 What does it need to be to be equivalent to analog? I'm guessing that
 it needs to be in the low hundreds of microseconds range, but maybe even
 then people will notice?

 That's why we're seeing this kind of technology in offline plugins, but
 the musicians still play analog instruments. Which of course have zero
 latency. ;)

 I'm not sure your theory of causality is correct here. That's not to say
 latency isn't a problem.

 Ross.




Re: [music-dsp] Simulating Valve Amps

2014-06-19 Thread Ross Bencina

On 19/06/2014 4:52 PM, Rohit Agarwal wrote:

In terms of computational complexity, most of the complexity is in
modelling, tuning the parameters to fit data. However, once you're done
with this offline task, running the result should not be that heavy. That
process should be real-time on new CPUs. Your latency should then be just
the buffering which should get you down to 25 ms.


Sure, but the point Sampo was making is that 25ms is one or two orders 
of magnitude too large for musicians. Especially musicians who have the 
option of using analog gear with zero latency.


Ross.


Re: [music-dsp] Simulating Valve Amps

2014-06-19 Thread Nigel Redmon
On Jun 18, 2014, at 11:52 PM, Rohit Agarwal ro...@khitchdee.com wrote:

 In terms of computational complexity, most of the complexity is in
 modelling, tuning the parameters to fit data. However, once you're done
 with this offline task, running the result should not be that heavy. That
 process should be real-time on new CPUs. Your latency should then be just
 the buffering which should get you down to 25 ms.

Ouch, about four times what I consider “acceptable.” It’s all relative... I used 
to play a large pipe organ in high school; besides the mechanical delays, the 
pipes were pretty far out to each side, and up high. It took some getting used 
to, the fingers being three or four notes ahead of what I was hearing when 
playing Bach preludes and fugues.

But I can’t stand much delay at all playing a sampled grand, for instance. A 64 
sample buffer feels real solid, but I can tolerate 128. I don’t ever want to 
revisit the days of my D50, which I loved for rich textures, but which was useless 
for playing anything percussive (I had the sense that the latency wasn’t 
constant, but could be pretty bad on arbitrary notes—I never measured it in any 
manner; I just quickly learned to avoid using it as a digital piano).
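For a sense of scale (44.1 kHz assumed as an illustrative rate), one-way buffering latency is just buffer length over sample rate:

```python
def buffer_latency_ms(buffer_samples, sample_rate=44100.0):
    # One-way latency contributed by a buffer of the given length.
    return 1000.0 * buffer_samples / sample_rate
```

So a 64-sample buffer is about 1.45 ms and 128 samples about 2.9 ms, while the 25 ms figure quoted above corresponds to roughly 1100 samples at 44.1 kHz.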


 
 
 From: Ross Bencina rossb-li...@audiomulch.com
 Sent: A discussion list for music-related DSP music-dsp@music.columbia.edu
 Date: Thu, June 19, 2014 11:59 am
 Subject: Re: [music-dsp] Simulating Valve Amps
 
 Hi Sampo,
 
 On 19/06/2014 4:06 PM, Sampo Syreeni wrote:
 On 2014-06-19, Ross Bencina wrote:
 
 There is a segment of the market that values accurate models--at any
 computational cost.
 
 Keep in mind that this was a response to your claim that nobody's going
 to pay you. I'm not taking a position on the value of that path.
 
 Then, can you do that at low latency, so that your model is also
 playable? That's of course the next frontier.
 
 It depends what you mean by playable. People have been living with the
 latency of digital gear for a long time.
 
 And no, there's no shortcut there: those giga-computations not only take
 memory bandwidth but the kind of low latency which just isn't there.
 
 Can you expand on just isn't there?
 
 I guess that there is some fundamental lower limit on achievable latency
 due to the band limiting filters in ADCs and DACs. What is that number?
 (I've heard on the order of 24 samples for one direction). Is it a hard
 limit?
 
 Let's assume that we move to sample by sample processing (or maybe
 pipelined processing) on FPGAs synchronously coupled to the ADC and DAC.
 What's the theoretical lower bound on latency that can be achieved here?
 What does it need to be to be equivalent to analog? I'm guessing that
 it needs to be in the low hundreds of microseconds range, but maybe even
 then people will notice?
 
 That's why we're seeing this kind of technology in offline plugins, but
 the musicians still play analog instruments. Which of course have zero
 latency. ;)
 
 I'm not sure your theory of causality is correct here. That's not to say
 latency isn't a problem.
 
 Ross.



Re: [music-dsp] Simulating Valve Amps

2014-06-19 Thread Sampo Syreeni

On 2014-06-19, Rohit Agarwal wrote:

I'm surprised by that statement quite honestly. At a tempo of 200 bpm, 
this latency would be roughly 10% of the beat interval which seems to 
me quite small.


Then you obviously don't know techno. ;)
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [music-dsp] Simulating Valve Amps

2014-06-19 Thread Rohit Agarwal



Enlighten me, does that mean faster tempo or is 10% too much delay for
that?

 


From:Sampo Syreeni de...@iki.fi

Sent:A discussion list for music-related DSP
music-dsp@music.columbia.edu

Date:Thu, June 19, 2014 2:04 pm

Subject:Re: [music-dsp] Simulating Valve Amps

 On 2014-06-19, Rohit Agarwal wrote:
 
 I'm surprised by that statement quite honestly. At a tempo of 200 bpm,
 this latency would be roughly 10% of the beat interval which seems to
 me quite small.
 
 Then you obviously don't know techno. ;)
 
 --
 Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
 +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2




Re: [music-dsp] Simulating Valve Amps

2014-06-19 Thread Ross Bencina

On 19/06/2014 7:09 PM, Rohit Agarwal wrote:

Enlighten me, does that mean faster tempo or is 10% too much delay for
that?


I think that this conversation is at risk of going off the rails. Make 
sure that you're asking the right question.


There are a number of different ways that delays can impact a musical 
performance, including:


- perceived impact of time alignment on groove as heard by external 
listeners


- perceived impact of time jitter (Nigel's D50 example) for performers 
and listeners.


- effect of self-delay and inter-musician delay on the musical performers.

I think the last point is the one that's relevant here.

Talking about time alignment in techno is not really relevant if we're 
talking about valve amps for guitarists.


Clearly a 25ms delay post-applied to the rhythm guitar is going to mess 
with the groove in almost any musical setting. But the real question is 
whether it's going to mess with the guitarist's performance of the 
groove if they can hear the delay while they are playing (i.e. can they 
compensate to recreate the same groove). And then, when comparing to 
zero latency hardware, at what point does it become trivial to 
compensate (i.e. doesn't introduce any additional cognitive burden).


I believe that various studies have put the playable latency threshold 
below 8ms. (Sorry, don't have time to look this up but I think that one 
example is in one of Alex Carot's network music papers).


Ross.


Re: [music-dsp] Simulating Valve Amps

2014-06-18 Thread STEFFAN DIEDRICHSEN
That’s a wild theory. ;-)
E.g. A Leslie 122 amp has a rather small power supply transformer, which has to 
deliver B+ and heater voltage. The output transformer is about 1.5 times the 
size of the power tranny. 
When it comes to output transformers, the distortion caused by them is rather 
mild. It increases toward lower frequencies, but you never reach saturation in 
any guitar amp, because that sound truly sucks. 
Loudspeakers are indeed a good source of harmonic distortion; modelling those is 
a field of work in itself.

Steffan 


On 17 Jun 2014, at 04:51, robert bristow-johnson r...@audioimagination.com 
wrote:

 but it's usually not the tube stages, it's the output transformer (with its 
 nonlinearities and hysteresis) and the loudspeaker (terribly non-linear) and 
 the box.  m





Re: [music-dsp] Simulating Valve Amps

2014-06-18 Thread Tim Goetze
[Sampo Syreeni]
 From what (very little!) I know of hardcore analog simulations, I'd say 
 that is part of a more general and much nastier problem. That's the 
 interaction
 one: whereas digital signal graphs have a definite direction of signal flow,
 there's no such thing on the analog side no matter what you do. Sure, you 
 often
try to go from low to high impedance and make use of tons of other tricks in
 order to make your circuit approach a directional signal flow, but lo and
 behold, most of the nicest sounding circuits actually measurably feed back all
 the way from the speaker-air-interface to the input jack, affecting anything
 and everything on the way.

This, in my opinion, is possibly the most important difference between
good warm analog and cold lifeless harsh sounding gear.  I
think that the interplay between virtually all components touched by
the signal flow causes the timbre to constantly evolve, adding life.
In fact, without such enhancements the sound of a solidbody electric
guitar can appear rather plain, not to say dreadful.

For computational efficiency it appears we're still stuck emulating
amps and other gear with a succession of stages each lumping great
parts of the circuitry together.  But, imperfect though it may be, we
can modify the parameters of one boxed stage in relation to
measurements made anywhere else in the chain to create somewhat
similar timbral evolution.  (Such additions to the amp model have
certainly helped me come a lot closer to my personal idea of the
perfect guitar tone.)

Tim


Re: [music-dsp] Simulating Valve Amps

2014-06-18 Thread Andrew Simper
On 18 June 2014 16:15, STEFFAN DIEDRICHSEN sdiedrich...@me.com wrote:

 Actually, it’s not rocket science to model a Baxandall or those
 Treble/Mid/Bass networks. A straightforward approach is modified nodal
 analysis, which gives you a model that preserves the passivity of the
 filter network.

 Steffan


Indeed Steffan! I have a feeling some people are allergic to solving basic
sets of linear equations? The math behind it is about as basic as it gets,
it is literally a set of simultaneous linear equations, the kind of stuff
that is covered in high school! Things get more interesting when you need
to solve non-linear components.

As for tubes, I think more research is needed to accurately characterise
the time varying behaviour of tubes, but even existing empirical models
like the one proposed by Koren would probably be enough for audio work. The
more challenging part I feel would be in the electro mechanical power amp +
speaker stage, where you have more complicated interactions between
non-linear semi-rigid bodies coupled to the circuit via an electric field
with coils and magnets.

Re: [music-dsp] Simulating Valve Amps

2014-06-18 Thread Tim Goetze
[STEFFAN DIEDRICHSEN]
Actually, it's not rocket science to model a Baxandall or those
Treble/Mid/Bass networks. A straightforward approach is modified
nodal analysis, which gives you a model that preserves the passivity
of the filter network.

Perhaps I was being too vague; in any case, I wasn't talking about the
tone controls but about the whole amp and cabinet business.  Thanks to
the work of Yeh, I personally consider the tonestack a solved problem,
or at least one of least concern for the time being.

Cheers,
Tim

On 18 Jun 2014, at 10:01, Tim Goetze t...@quitte.de wrote:

 [Sampo Syreeni]
 From what (very little!) I know of hardcore analog simulations, I'd say 
 that is part of a more general and much nastier problem. That's the 
 interaction
 one: whereas digital signal graphs have a definite direction of signal flow,
 there's no such thing on the analog side no matter what you do. Sure, you 
 often
 try to go from low to high impedance and make use of tons of other tricks 
 in
 order to make your circuit approach a directional signal flow, but lo and
 behold, most of the nicest sounding circuits actually measurably feed back 
 all
 the way from the speaker-air-interface to the input jack, affecting anything
 and everything on the way.
 
 This, in my opinion, is possibly the most important difference between
 good warm analog and cold lifeless harsh sounding gear.  I
 think that the interplay between virtually all components touched by
 the signal flow causes the timbre to constantly evolve, adding life.
 In fact, without such enhancements the sound of a solidbody electric
 guitar can appear rather plain, not to say dreadful.
 
 For computational efficiency it appears we're still stuck emulating
 amps and other gear with a succession of stages each lumping great
 parts of the circuitry together.  But, imperfect though it may be, we
 can modify the parameters of one boxed stage in relation to
 measurements made anywhere else in the chain to create somewhat
 similar timbral evolution.  (Such additions to the amp model have
 certainly helped me come a lot closer to my personal idea of the
 perfect guitar tone.)



Re: [music-dsp] Simulating Valve Amps

2014-06-18 Thread Andrew Simper
On 18 June 2014 18:26, Tim Goetze t...@quitte.de wrote:

 ...  Thanks to
 the work of Yeh, I personally consider the tonestack a solved problem,
 or at least one of least concern for the time being.

 Cheers,
 Tim


A linear tonestack has been a solved problem since way before Yeh wrote any
papers. Also, I would not consider mapping component values to direct form
1 biquad coefficients a good way to simulate a tone stack when you can
easily preserve the time varying behaviour as well if you use standard
circuit simulation techniques like nodal analysis.
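A sketch of the state-preserving idea (a trapezoidal-rule one-pole lowpass in the topology-preserving style; this illustrates the principle only and is not Andrew's actual code): the integrator state stands in for the capacitor voltage, so it carries over meaningfully when the cutoff coefficient changes.

```python
import math

class OnePoleTPT:
    # Trapezoidal-rule one-pole lowpass; self.s plays the role of the
    # capacitor state and is preserved across coefficient changes.
    def __init__(self):
        self.s = 0.0
    def set_cutoff(self, fc, fs):
        self.g = math.tan(math.pi * fc / fs)   # prewarped integrator gain
    def process(self, x):
        v = (x - self.s) * self.g / (1.0 + self.g)
        y = v + self.s
        self.s = y + v                          # integrator state update
        return y
```

Because the state is the capacitor voltage rather than a DF1 delay-line value, retuning the filter mid-signal leaves the stored charge consistent, so a settled filter stays settled when the knob moves.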


Re: [music-dsp] Simulating Valve Amps

2014-06-18 Thread Nigel Redmon
Of course, high gain amps have from four to six gain stages (and the stages may 
have attenuation). I’m simplifying by putting all the gain in one stage, but 
the point is that when you’re cranked on one of these amps, you can count on 
being locked into the hard-clip region of the curve, and the soft clip region 
might as well not be there.
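The shape described above (a soft bend that flatlines into hard clip) can be sketched as a static waveshaper; this is my own illustrative curve with a hypothetical pre-gain, not any particular amp's measured transfer function:

```python
def waveshape(x, gain=100.0):
    """Hypothetical static curve: a cubic soft knee for |v| < 1 and a
    hard clip beyond it.  With a large pre-gain, almost every sample of
    a cranked input lands in the hard-clip region, so the soft-knee
    portion of the polynomial barely matters."""
    v = gain * x
    if v >= 1.0:
        return 1.0
    if v <= -1.0:
        return -1.0
    # cubic chosen so the curve and its slope are continuous at v = +/-1
    return 1.5 * v - 0.5 * v ** 3
```

In practice you would oversample around a nonlinearity like this; the point here is only the shape of the curve.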

On Jun 18, 2014, at 12:52 AM, STEFFAN DIEDRICHSEN sdiedrich...@me.com wrote:

 I had the pleasure to work on a Soldano designed amp (Yamaha T-100). Once I 
 got it fixed (2 resistors burnt off), it was a hell of an amp. What makes me 
 wonder is that, compared to a Marshall amp, you could get a super overdriven 
 sound with very low noise. The Marshall basically spat out noise on the same 
 level as the guitar signal. This made me think about these high-gain 
 theories. Actually, a 12AX7 allows for about 20 dB gain. I need to dig 
 further down, but simply applying high gain and a clip curve doesn’t do the 
 trick.
 
 Steffan 
 
 On 17 Jun 2014, at 21:30, Nigel Redmon earle...@earlevel.com wrote:
 
 Yes, Robert…but, with the kind of gain necessary…OK, so you have the y-axis 
 as your output level, x-axis as input. To view the entire curve for a Soldano 
 Super Lead Overdrive, for instance, you draw the curve of your choice to 
 rise from y=0 and give you a soft bend into y=1 (full output). The bend will 
 be somewhere around x=1, ballpark (maybe it’s x=2 or 3, to allow for lower 
 input levels, but the point is that it’s a small number compared to what’s 
 coming next)…then you allow for x=3 or so (a flatline from the x=1..3 
 area). Is that not a pretty high order polynomial?
 



Re: [music-dsp] Simulating Valve Amps

2014-06-18 Thread Nigel Redmon
BTW, I do know that it was developed with the Sonic Core SCOPE SDK, and I 
suspect it’s just using fairly routine DSP blocks, with a lot of care in 
tweaking the sound. (It runs in my mind that I might have seen some block 
diagrams on a forum back when he was developing it—the point is that I don’t 
think there is any cutting-edge tech involved.) But that’s all I know.

On Jun 18, 2014, at 10:40 AM, Nigel Redmon earle...@earlevel.com wrote:

 No, Marco, sorry. I wish I did. The result is very good, and a huge leap from 
 the sim in the CX-3. Very annoying that they don’t support MIDI switching of 
 the speed…
 
 http://www.earlevel.com/main/2013/02/16/ventilator-adapter-in-a-mint-tin/
 
 
 On Jun 18, 2014, at 10:11 AM, Marco Lo Monaco marco.lomon...@teletu.it 
 wrote:
 
 Ciao Nigel, talking about the VENTILATOR, do you know something more about
 its secrets/internals?
 :)
 
 M.
 
 -Messaggio originale-
 Da: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
 boun...@music.columbia.edu] Per conto di Nigel Redmon
 Inviato: mercoledì 18 giugno 2014 08:22
 A: A discussion list for music-related DSP
 Oggetto: Re: [music-dsp] Simulating Valve Amps
 
 Well, some people think it’s close enough for rock n roll (amp sims),
 others
 don’t. It’s the same with analog synths and virtual analog. But there’s
 also
 the comfort of tube amps, and there’s the comfort of the limited sound
 palette of using the amp that you know and love. Amp sims are really about
 variety (can’t afford a Plexi, a Twin Reverb, SLO, AC-30, and a few
 boutique
 amps? Now you can).
 
 I appreciate the old stuff, but I appreciate the convenience and
 flexibility of
 the new stuff—to me it *is* close enough for rock n roll (new stuff in
 general—I don’t play much guitar). I play a B3 clone because I hauled a
 Hammond decades ago, and I hauled and still have (needs work) a Leslie,
 but
 I’d just as soon use my Ventilator pedal on the CX-3 (yes, with
 programmable
 leakiness and aging of the tone wheels, etc.)—more convenient, and gets
 the sound I want. Others would shudder at the thought. Well, until their
 backs start giving out…I know a hardcore, old-time B3 blues player
 (“Mule”—
 a nickname he earned for hauling around his B3 and Leslies) who picked up
 a
 clone for his aging back after hearing my CX-3 though the Ventilator. Not
 for
 all gigs, mind you, but as an option to go with for some gigs. The point
 is that
 if the tradeoffs are attractive enough, it’s easier to let yourself try
 new things
 even if you feel that it falls ever so slightly short of what you’re used
 to, or
 strays from your comfort zone.
 
 So while some might feel that amp sims haven’t arrived yet, others might
 feel,
 “where the heck have you been the past decade?” ;-)
 
 
 On Jun 17, 2014, at 6:59 PM, robert bristow-johnson
 r...@audioimagination.com wrote:
 
 On 6/17/14 8:24 PM, Nigel Redmon wrote:
 (Thinking outside the nest…)
 
 (...maybe that means opening up the LPF as the gain knob setting is
 reduced)
 Yes
 
 And good discussion elsewhere in there, thanks Robert.
 
 yer welcome, i guess.
 
 you may be thinking outside the nest; i'm just thinking out loud.
 
 i think, like a multieffects box, we oughta be able to simulate all
 these amps
 (don't forget the Mesa Boogie) and their different settings in a single
 DSP
 box with enough MIPS and a lotta oversampling.  dunno if simulating the
 50/60 Hz hum and shot noise would be good or not (i know of a B3 emulation
 that simulates the din of all 60-whatever keys leaking into the mix even
 when they're all key-up).  but they oughta be able to model each
 deterministic thing: the power supply sag, changing bias points,
 hysteresis in
 transformers, capacitance in feedback around a non-linear element (might
 use Euler's forward differences in doing that), whatever.  whatever it is,
 if
 you take out the hum and shot noise, it's a deterministic function of
 solely
 the guitar input and the knob settings, and if we can land human beings on
 the moon, we oughta be able to figure out what that deterministic function
 is.  for each amp model.  it shouldn't be more mystical than that (but
 there
 *is* a sorta mysticism with musicians about this old analog gear that we
 just
 cannot adequately mimic).
 
 and thanks to you, Nigel.
 
 L8r,
 
 r b-j
 
 
 On Jun 17, 2014, at 4:07 PM, robert bristow-
 johnsonr...@audioimagination.com  wrote:
 On 6/17/14 3:30 PM, Nigel Redmon wrote:
 This is getting…nesty...
 yah 'vell, vot 'r ya gonna do?  :-)
 
 On Jun 17, 2014, at 10:42 AM, robert bristow-
 johnsonr...@audioimagination.com   wrote:
 
 On 6/17/14 12:57 PM, Nigel Redmon wrote:
 On Jun 17, 2014, at 9:09 AM, robert bristow-
 johnsonr...@audioimagination.comwrote:
 
 On 6/17/14 5:30 AM, Nigel Redmon wrote:
 
 ...
 Anyway, just keep in mind that the particular classic amps
 don’t sound better simply because they are analog. They sound
 better because over the decades they’ve been around, they
 survived—because they do

Re: [music-dsp] Simulating Valve Amps

2014-06-18 Thread Theo Verelst


Once more, you'd have to think about this problem so as to include a 
curtain test (within reason, I mean just to make the point) where you 
take the optimal example tube amp, place the mic in front of it, put 
that on the PA or monitoring in another room (or in the same room, to 
let the guitar feedback work properly, certainly for high-gain examples) 
in an analog fashion, and then put in an AD/DA insert somewhere, and see 
what the difference is.


If you put the digital insert before the guitar amp, things probably 
won't work all too well, unless it's a tuned and intended digital effect 
unit, and I'm sure that monitoring the mic-in-front-of-the-guitar-amp 
signal digitally instead of in analog makes a difference. And amping 
that mic with a digital conversion in between, unless the PA is tuned 
for it, is going to change the sound, and it should: at the least 
there's a few milliseconds of phase shift, and IMO, unless the sound is 
a very degenerate one, the aliasing/reconstruction will be audible, all 
the more so if the analog-digital-analog signal sub-path is part of the 
sound feeding back to the guitar body, and therefore to the sound.


Also, applying the proper theory is cool, and I find it great when people 
try to make nice plugins and so on, but be aware that even a simple 
filter will in most cases have significant distortion in the digital 
domain, even if it's a nice (natural) IIR, because the integration does 
not include a sinc-function partial, and the power-sequence curve 
(impulse response) is not necessarily a perfect match for a natural 
e-power curve, regardless of zeroth-order reconstruction, average DA 
converter reconstruction filtering, or (near-)perfect signal 
reconstruction. I mean, it sounds nice to presume perfection, but those 
integrals are going to be off. Circuits are more natural, especially 
relatively high-impedance circuits with non-electrolytic capacitors on 
tube connections: there will be some seriously accurate long-term 
integration in those natural poles and zeros that simply isn't as easy 
to approach in the digital domain as people might think, and it might 
take more than single precision to accumulate in some cases. And most 
human beings are used to judging the naturalness of low-order systems, 
and are very sensitive to small signal aberrations.


It might pay to convert some of the private and various state funded 
thinking to making sure a decent EE designs some well-parametrized 
solutions, and start from those, probably cheaper...


Also, about the seemingly trivial to defend approximations in the sense 
of large signal amplification and clipping: most amps will do a lot of 
things while a part of their analog state is in clipping. For instance 
the pre-amp and filters will still do accurate signal tracking and 
integration, and the clipping doesn't take place at a sampling grid, so 
the PWM behavior of the guitar-amp-speaker system can be accurately 
driven, which brings me to the last bit: the whole system works very 
accurately in the feedback loop.


It's like landing a 3 ton chopper with a stick: it's perfectly possible, 
but takes training, and involving any of the crude simplifications some 
people take for granted would make the thing un-flyable and dangerous, 
so please think about this a bit.


T.


Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread Nigel Redmon
On Jun 16, 2014, at 7:51 PM, robert bristow-johnson r...@audioimagination.com 
wrote:
 one thing that is hard to replicate is a sample rate that is infinity (which 
 is how i understand continuous-time signals to be).  but i don't think you 
 should need to have such a high sample rate.  one thing we know is that for 
 *polynomial curves* (which are mathematical abstractions and maybe have 
 nothing to do with tube curves), that for a bandwidth of B in the input and a 
 polynomial curve of order N, the highest generated frequency is N*B so the 
 sample rate should be at least (N+1)*B to prevent any of these generated 
 images from aliasing down to below the original B.  if you can prevent that, 
 you can filter out any of the aliased components and downsample to a sample 
 rate sufficient for B (which is at least 2*B).
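The quoted rate rule can be turned into a tiny helper (my sketch; the 48 kHz base rate and 20 kHz bandwidth defaults are assumptions, not from the thread):

```python
import math

def min_oversample_factor(order, fs_base=48000.0, bandwidth=20000.0):
    """An order-N polynomial applied to a signal of bandwidth B creates
    content up to N*B; a sample rate of at least (N+1)*B keeps the
    aliases of that content from folding back below B.  Returns the
    smallest integer oversampling ratio relative to fs_base."""
    fs_needed = (order + 1) * bandwidth
    return max(1, math.ceil(fs_needed / fs_base))
```

For example, a 5th-order waveshaper on full-bandwidth audio at a 48 kHz base rate needs 3x oversampling by this rule.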


This really goes out the window when you’re modeling amps, though. The order of 
the polynomial is too high to implement practically (that is, you won’t end up 
utilizing the oversampling rate necessary to follow it), so you’ll still be 
dealing with aliasing. Modern high gain amps have huge gain *after* saturation. 
In practical terms, you round into it (with a polynomial, for instance), then 
just hard clip from there on out, and there goes your polynomial (it can be 
replaced by an approximation that's very high order, but what’s the point).

Anyway, you pay your money, you make your choices. Obviously some really good 
musicians making really interesting music use modeling amps. They don’t have to 
be better than tubes, in order to be a win, just good enough to be worth all of 
the benefits. If you’re a session musician, you can bring in the truck with all 
of the kinds of amps that might be called on, or you can bring a modeling amp, 
for instance. And going direct into the PA or your recording equipment…etc. I’m 
not going to make judgments on what people should like, so I’ll leave it at that.

One happy thing about the aliasing is that, given a decent level of 
oversampling, it won’t be bad at lower overdrive levels. And at higher 
overdrive levels, it's harder to hear aliasing through all that harmonic 
distortion you’re generating. So it could be worse...

Stephen, I’m a former Line 6 DSP engineer, in the interests of full disclosure. 
I have no idea how your Fender modeler rates.


On Jun 16, 2014, at 7:51 PM, robert bristow-johnson r...@audioimagination.com 
wrote:

 On 6/16/14 9:08 PM, ChordWizard Software wrote:
 Hi all,
 
 As an electric jazz guitarist (strat copy) I have long appreciated the tone 
 color and subtle overdrive that valve amps offer compared to their solid 
 state cousins.
 
 me too (not that i am any jazz guitarist).  on my side of the pond, we call 
 them's tubes.
 
 But as a software engineer, I always believed it would only be a matter of 
 time until someone produced a pedal that would emulate this effect 
 faithfully enough to skip the amp and just DI the wet signal into a PA.
 
 
 that's the theory, at least.
 
 
 
 ...  So what's going on here?  Logic tells me that with the right DSP in 
 place, it should be possible to replicate exactly what is coming out of my 
 Vox, given the same input signal.
 
 logic tells me the same thing.  the guitar amp is doing a mathematical 
 operation on the input signal (that may be combined with two other input 
 signals: the 50 or 60 Hz hum from the power supply and the shot noise, which 
 we may or may not want to model in the tube emulation).
 
 
 Aesthetics aside, is it really so hard to accurately model the sound of 
 valve amps?  I would have thought the characteristics would be well 
 understood by now
 
 one thing that is hard to replicate is a sample rate that is infinity (which 
 is how i understand continuous-time signals to be).  but i don't think you 
 should need to have such a high sample rate.  one thing we know is that for 
 *polynomial curves* (which are mathematical abstractions and maybe have 
 nothing to do with tube curves), that for a bandwidth of B in the input and a 
 polynomial curve of order N, the highest generated frequency is N*B so the 
 sample rate should be at least (N+1)*B to prevent any of these generated 
 images from aliasing down to below the original B.  if you can prevent that, 
 you can filter out any of the aliased components and downsample to a sample 
 rate sufficient for B (which is at least 2*B).
 
 so, while we might be able to trace out the tube curves, to make sure we can 
 deal with the aliasing, we would want to as closely as possible, model those 
 tube curves (or the net curve, which is the mapping of the biased grid 
 voltage to plate voltage for each tube stage) with a finite-order polynomial, 
 so we have some idea how much we need to upsample to cleanly emulate the tube 
 curve.
 
 another thing to emulate is the feedback around any tubes, and when there is 
 a capacitor involved (and there may be just from inter-electrode 
 capacitance), you are mixing parts with 

Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread Aengus Martin
Hi,

I've only the vaguest idea of this area but I do find it interesting. From
what you said, Nigel, aliasing is the main issue. Is it the case then that
amp modeling would be more or less a solved problem if you could sample at
arbitrarily high rates?

Cheers,

Aengus.


On Tue, Jun 17, 2014 at 6:58 PM, Nigel Redmon earle...@earlevel.com wrote:

 On Jun 16, 2014, at 7:51 PM, robert bristow-johnson 
 r...@audioimagination.com wrote:
  one thing that is hard to replicate is a sample rate that is infinity
 (which is how i understand continuous-time signals to be).  but i don't
 think you should need to have such a high sample rate.  one thing we know
 is that for *polynomial curves* (which are mathematical abstractions and
 maybe have nothing to do with tube curves), that for a bandwidth of B in
 the input and a polynomial curve of order N, the highest generated
 frequency is N*B so the sample rate should be at least (N+1)*B to prevent
 any of these generated images from aliasing down to below the original B.
  if you can prevent that, you can filter out any of the aliased components
 and downsample to a sample rate sufficient for B (which is at least 2*B).


 This really goes out the window when you’re modeling amps, though. The
 order of the polynomial is too high to implement practically (that is, you
 won’t end up utilizing the oversampling rate necessary to follow it), so
  you’ll still be dealing with aliasing. Modern high gain amps have huge gain
 *after* saturation. In practical terms, you round into it (with a
 polynomial, for instance), then just hard clip from there on out, and there
 goes your polynomial (it can be replaced by an approximation that's very
 high order, but what’s the point).

 Anyway, you pay your money, you make your choices. Obviously some really
 good musicians making really interesting music use modeling amps. They
 don’t have to be better than tubes, in order to be a win, just good enough
  to be worth all the benefits. If you’re a session musician, you can bring in
 the truck with all of the kinds of amps that might be called on, or you can
 bring a modeling amp, for instance. And going direct into the PA or your
  recording equipment…etc. I’m not going to make judgments on what people
 should like, so I’ll leave it at that.

 One happy thing about the aliasing is that, given a decent level of
  oversampling, it won’t be bad at lower overdrive levels. At higher
  overdrive levels, it's harder to hear aliasing through all that
 harmonic distortion you’re generating. So it could be worse...

 Stephen, I’m a former Line 6 DSP engineer, in the interests of full
  disclosure. I have no idea how your Fender modeler rates.


 On Jun 16, 2014, at 7:51 PM, robert bristow-johnson 
 r...@audioimagination.com wrote:

  On 6/16/14 9:08 PM, ChordWizard Software wrote:
  Hi all,
 
  As an electric jazz guitarist (strat copy) I have long appreciated the
 tone color and subtle overdrive that valve amps offer compared to their
 solid state cousins.
 
  me too (not that i am any jazz guitarist).  on my side of the pond, we
 call them's tubes.
 
  But as a software engineer, I always believed it would only be a matter
 of time until someone produced a pedal that would emulate this effect
 faithfully enough to skip the amp and just DI the wet signal into a PA.
 
 
  that's the theory, at least.
 
 
 
  ...  So what's going on here?  Logic tells me that with the right DSP
 in place, it should be possible to replicate exactly what is coming out of
 my Vox, given the same input signal.
 
  logic tells me the same thing.  the guitar amp is doing a mathematical
 operation on the input signal (that may be combined with two other input
 signals: the 50 or 60 Hz hum from the power supply and the shot noise,
 which we may or may not want to model in the tube emulation).
 
 
  Aesthetics aside, is it really so hard to accurately model the sound of
 valve amps?  I would have thought the characteristics would be well
 understood by now
 
  one thing that is hard to replicate is a sample rate that is infinity
 (which is how i understand continuous-time signals to be).  but i don't
 think you should need to have such a high sample rate.  one thing we know
 is that for *polynomial curves* (which are mathematical abstractions and
 maybe have nothing to do with tube curves), that for a bandwidth of B in
 the input and a polynomial curve of order N, the highest generated
 frequency is N*B so the sample rate should be at least (N+1)*B to prevent
 any of these generated images from aliasing down to below the original B.
  if you can prevent that, you can filter out any of the aliased components
 and downsample to a sample rate sufficient for B (which is at least 2*B).
 
  so, while we might be able to trace out the tube curves, to make sure we
 can deal with the aliasing, we would want to as closely as possible, model
 those tube curves (or the net curve, which is the mapping of the biased
 grid voltage to plate 

Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread Thomaz Oliveira
I'm acquainted with DSP and analogue electronics and have played a lot of
guitar over the last 18 years. I still think there is nothing like the
sound of a good old valve amp.


On Tue, Jun 17, 2014 at 10:13 AM, Thomaz Oliveira thomazcha...@gmail.com
wrote:

 I have made some simulations of valve amps. I have written some articles
 and a PhD thesis on why these amps are so hard to model...

 I'm sending you the link of one article on this topic:

 if you have any questions about simulation of valve amps please contact me.

 here is the link:

 http://www.dcc.ufla.br/infocomp/images/artigos/v12.1/art02.pdf


 On Tue, Jun 17, 2014 at 6:30 AM, Nigel Redmon earle...@earlevel.com
 wrote:

 Well…yes, aliasing is the main issue that separates the digital world
 from analog when it comes to amp modeling, but no, I don’t think it’s the
 main issue in simulating a good amp :-)

 There are a lot of details in simulating classic amps—the controls of the
 passive filters interact, etc. And you have to recreate the filtering
 before and after the “tube”—guitar amps aren’t about flat frequency
 response. And the cabinets are a huge part of the sound.

 Then there’s the approach that’s more like sampling (Kemper Profiling
 Amp), a totally different direction—does it sound better? I don’t know,
 it’s very cool, but I haven’t listened to it enough to know (only at NAMM
 Shows).

 Anyway, just keep in mind that the particular classic amps don’t sound
 “better” simply because they are analog. They sound better because over the
 decades they’ve been around, they survived—because they do sound good.
 There are plenty of awful sounding analog guitar amps (and compressors, and
 preamps, and…) that didn’t last because they didn’t sound particularly
 good. Then, the modeling amp has the disadvantage that they are usually
 employed to recreate a classic amp exactly. So the best they can do is
 break even in sound, then win in versatility. And an AC-30 or Matchless
 preset on a modeler that doesn’t sound exactly like the amp it models loses
 automatically—even if it sounds better—because it failed to hit the
 target. (And it doesn’t help that amps of the same model don’t
 necessarily sound the same. At Line 6, we would borrow a coveted amp—one
 that belonged to a major artist and was highly regarded, for instance, or
 one that was rented out for sessions because it was known to sound awesome.)


 On Jun 17, 2014, at 2:09 AM, Aengus Martin aen...@am-process.org wrote:

  Hi,
 
  I've only the vaguest idea of this area but I do find it interesting.
 From
  what you said, Nigel, aliasing is the main issue. Is it the case then
 that
  amp modeling would be more or less a solved problem if you could sample
 at
  arbitrarily high rates?
 
  Cheers,
 
  Aengus.
 
 
  On Tue, Jun 17, 2014 at 6:58 PM, Nigel Redmon earle...@earlevel.com
 wrote:
 
  On Jun 16, 2014, at 7:51 PM, robert bristow-johnson 
  r...@audioimagination.com wrote:
  one thing that is hard to replicate is a sample rate that is infinity
  (which is how i understand continuous-time signals to be).  but i don't
  think you should need to have such a high sample rate.  one thing we
 know
  is that for *polynomial curves* (which are mathematical abstractions
 and
  maybe have nothing to do with tube curves), that for a bandwidth of B
 in
  the input and a polynomial curve of order N, the highest generated
  frequency is N*B so the sample rate should be at least (N+1)*B to
 prevent
  any of these generated images from aliasing down to below the original
 B.
  if you can prevent that, you can filter out any of the aliased
 components
  and downsample to a sample rate sufficient for B (which is at least
 2*B).
 
 
  This really goes out the window when you’re modeling amps, though. The
  order of the polynomial is too high to implement practically (that is,
 you
  won’t end up utilizing the oversampling rate necessary to follow it),
 so
  you’ll still be dealing with aliasing. Modern high gain amps have huge
 gain
  *after* saturation. In practical terms, you round into it (with a
  polynomial, for instance), then just hard clip from there on out, and
 there
  goes your polynomial (it can be replaced by an approximation that's
 very
  high order, but what’s the point).
 
  Anyway, you pay your money, you make your choices. Obviously some
 really
  good musicians making really interesting music use modeling amps. They
  don’t have to be better than tubes, in order to be a win, just good
 enough
  to be worth all the benefits. If you’re a session musician, you can bring
 in
  the truck with all of the kinds of amps that might be called on, or
 you can
  bring a modeling amp, for instance. And going direct into the PA or
 your
  recording equipment…etc. I’m not going to make judgments on what people
  should like, so I’ll leave it at that.
 
  One happy thing about the aliasing is that, given a decent level of
  oversampling, it won’t be bad at lower overdrive levels. At the higher
 

Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread Thomaz Oliveira
I have made some simulations of valve amps. I have written some articles
and a PhD thesis on why these amps are so hard to model...

I'm sending you the link of one article on this topic:

if you have any questions about simulation of valve amps please contact me.

here is the link:

http://www.dcc.ufla.br/infocomp/images/artigos/v12.1/art02.pdf


On Tue, Jun 17, 2014 at 6:30 AM, Nigel Redmon earle...@earlevel.com wrote:

 Well…yes, aliasing is the main issue that separates the digital world from
 analog when it comes to amp modeling, but no, I don’t think it’s the main
 issue in simulating a good amp :-)

 There are a lot of details in simulating classic amps—the controls of the
 passive filters interact, etc. And you have to recreate the filtering
 before and after the “tube”—guitar amps aren’t about flat frequency
 response. And the cabinets are a huge part of the sound.

 Then there’s the approach that’s more like sampling (Kemper Profiling
 Amp), a totally different direction—does it sound better? I don’t know,
 it’s very cool, but I haven’t listened to it enough to know (only at NAMM
 Shows).

 Anyway, just keep in mind that the particular classic amps don’t sound
 “better” simply because they are analog. They sound better because over the
 decades they’ve been around, they survived—because they do sound good.
 There are plenty of awful sounding analog guitar amps (and compressors, and
 preamps, and…) that didn’t last because they didn’t sound particularly
 good. Then, the modeling amp has the disadvantage that they are usually
 employed to recreate a classic amp exactly. So the best they can do is
 break even in sound, then win in versatility. And an AC-30 or Matchless
 preset on a modeler that doesn’t sound exactly like the amp it models loses
 automatically—even if it sounds better—because it failed to hit the
 target. (And it doesn’t help that amps of the same model don’t
 necessarily sound the same. At Line 6, we would borrow a coveted amp—one
 that belonged to a major artist and was highly regarded, for instance, or
 one that was rented out for sessions because it was known to sound awesome.)


 On Jun 17, 2014, at 2:09 AM, Aengus Martin aen...@am-process.org wrote:

  Hi,
 
  I've only the vaguest idea of this area but I do find it interesting.
 From
  what you said, Nigel, aliasing is the main issue. Is it the case then
 that
  amp modeling would be more or less a solved problem if you could sample
 at
  arbitrarily high rates?
 
  Cheers,
 
  Aengus.
 
 
  On Tue, Jun 17, 2014 at 6:58 PM, Nigel Redmon earle...@earlevel.com
 wrote:
 
  On Jun 16, 2014, at 7:51 PM, robert bristow-johnson 
  r...@audioimagination.com wrote:
  one thing that is hard to replicate is a sample rate that is infinity
  (which is how i understand continuous-time signals to be).  but i don't
  think you should need to have such a high sample rate.  one thing we
 know
  is that for *polynomial curves* (which are mathematical abstractions and
  maybe have nothing to do with tube curves), that for a bandwidth of B in
  the input and a polynomial curve of order N, the highest generated
  frequency is N*B so the sample rate should be at least (N+1)*B to
 prevent
  any of these generated images from aliasing down to below the original
 B.
  if you can prevent that, you can filter out any of the aliased
 components
  and downsample to a sample rate sufficient for B (which is at least
 2*B).
 
 
  This really goes out the window when you’re modeling amps, though. The
  order of the polynomial is too high to implement practically (that is,
 you
  won’t end up utilizing the oversampling rate necessary to follow it), so
  you’ll still be dealing with aliasing. Modern high gain amps have huge gain
  *after* saturation. In practical terms, you round into it (with a
  polynomial, for instance), then just hard clip from there on out, and
 there
  goes your polynomial (it can be replaced by an approximation that's very
  high order, but what’s the point).
 
  Anyway, you pay your money, you make your choices. Obviously some really
  good musicians making really interesting music use modeling amps. They
  don’t have to be better than tubes, in order to be a win, just good
 enough
  to be worth all the benefits. If you’re a session musician, you can bring
 in
  the truck with all of the kinds of amps that might be called on, or you
 can
  bring a modeling amp, for instance. And going direct into the PA or your
  recording equipment…etc. I’m not going to make judgments on what people
  should like, so I’ll leave it at that.
 
  One happy thing about the aliasing is that, given a decent level of
  oversampling, it won’t be bad at lower overdrive levels. At higher
  overdrive levels, it's harder to hear aliasing through all that
  harmonic distortion you’re generating. So it could be worse...
 
  Stephen, I’m a former Line 6 DSP engineer, in the interests of full
  disclosure. I have no idea how your Fender modeler rates.
 
 
  

Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread robert bristow-johnson

On 6/17/14 9:15 AM, Thomaz Oliveira wrote:

I'm acquainted with DSP and analogue electronics and have played a lot of
guitar over the last 18 years. I still think there is nothing like the
sound of a good old valve amp.


as are the analog Minimoogs and such from the day.  as are a bunch of 
excellent-sounding stomp boxes from the day.


still doesn't mean, with sufficient study of the components (as well as 
the circuit) and if we're both careful and not silly, that we cannot 
emulate the mathematical relationship of the physics with equivalent 
mathematics in a DSP.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread robert bristow-johnson

On 6/17/14 5:30 AM, Nigel Redmon wrote:

Well…yes, aliasing is the main issue that separates the digital world from 
analog when it comes to amp modeling, but no, I don’t think it’s the main issue 
in simulating a good amp :-)

There are a lot of details in simulating classic amps—the controls of the 
passive filters interact, etc.


and if they stay linear, you oughta be able to lump some of these 
passive filters, no?  if some component of the passive filter goes 
non-linear, you have to first decide if the nonlinearity is salient, and 
if it is, then model it.



  And you have to recreate the filtering before and after the “tube”—guitar 
amps aren’t about flat frequency response.


isn't that par-for-the-course, Nigel?


  And the cabinets are a huge part of the sound.



*that*, and the loudspeakers themselves, is the hardest part, no?  it's 
all three: 1. salient (so you can't ignore it), 2. non-linear, and 3. 
non-memoryless.



Then there’s the approach that’s more like sampling (Kemper Profiling Amp), a 
totally different direction—does it sound better?


i don't understand how they're a totally different direction.  i haven't 
seen nor heard these amps, i am merely reading the reference manual, but 
it appears they are doing the "Power Sagging", "Tube Shape", and "Tube 
Bias" emulation that, at least ostensibly, appears to be the same direction as 
other modeling efforts.



Anyway, just keep in mind that the particular classic amps don’t sound “better” 
simply because they are analog. They sound better because over the decades 
they’ve been around, they survived—because they do sound good. There are plenty 
of awful sounding analog guitar amps (and compressors, and preamps, and…) that 
didn’t last because they didn’t sound particularly good. Then, the modeling amp 
has the disadvantage that they are usually employed to recreate a classic amp 
exactly. So the best they can do is break even in sound, then win in 
versatility. And an AC-30 or Matchless preset on a modeler that doesn’t sound 
exactly like the amp it models loses automatically—even if it sounds better— 
because it failed to hit the target. (And it doesn’t help that amps of the 
same model don’t necessarily sound the same. At Line 6, we would borrow a 
coveted amp—one that belonged to a major artist and was highly regarded, for 
instance, or one that was rented out for sessions because it was known to sound 
awesome.)


what did you guys do with the amps when you borrowed/rented them?  was 
your analysis jig just input/output, or did you put a few high-impedance 
taps inside at strategic places and record those signals simultaneously?


On Tue, Jun 17, 2014 at 6:58 PM, Nigel Redmon earle...@earlevel.com 
wrote:

On Jun 16, 2014, at 7:51 PM, robert bristow-johnson
r...@audioimagination.com  wrote:

one thing that is hard to replicate is a sample rate that is infinity
(which is how i understand continuous-time signals to be).  but i don't
think you should need to have such a high sample rate.  one thing we know
is that for *polynomial curves* (which are mathematical abstractions and
maybe have nothing to do with tube curves), that for a bandwidth of B in
the input and a polynomial curve of order N, the highest generated
frequency is N*B so the sample rate should be at least (N+1)*B to prevent
any of these generated images from aliasing down to below the original B.
if you can prevent that, you can filter out any of the aliased components
and downsample to a sample rate sufficient for B (which is at least 2*B).




This really goes out the window when you’re modeling amps, though. The
order of the polynomial is too high to implement practically (that is, you
won’t end up utilizing the oversampling rate necessary to follow it),


this is a curious statement *outside* of the case of hard clipping.  
oversample by 4x and you can do a 7th-order polynomial curve and later 
eliminate all of the aliasing.  oversample by 8x and it's 15th-order.  
do *no* oversampling and you can still make use of the fact that there's 
not a lot above 5 kHz in a guitar and amp (so 48 kHz is sorta 
oversampled to begin with).  you can fit a quite curvy curve with a 
7th-order polynomial.



  so
you’ll still be dealing with aliasing. Modern high gain amps have huge gain
*after* saturation. In practical terms, you round into it (with a
polynomial, for instance), then just hard clip from there on out, and there
goes your polynomial (it can be replaced by an approximation that's very
high order, but what’s the point).


yes, we splice a constant function against a curve.  if at the splice as 
many possible derivatives are zero as possible, that splice appears 
pretty seamless.  this is why i had earlier (on this list) been plugging 
these curves:


f(x)  =  C * integral{0 to x} (1 - u^2)^M du

(C gets adjusted so that f(1) = 1 and f(-1) = -1.)

you can splice that to flat values at +/- 1 and the nature of the 
function will not change appreciably from the polynomial in the region 
of the splice.
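[Editorial aside: a minimal sketch of that spliced curve, with M = 3 chosen arbitrarily for illustration. The integral expands by the binomial theorem, C normalizes the endpoints, and since f'(x) = C*(1 - x^2)^M, the first M derivatives vanish at the splice points.]

```python
import math

M = 3  # smoothness parameter; the spliced polynomial has degree 2*M + 1

# binomial expansion of integral_0^x (1 - u^2)^M du:
#   sum over k of  comb(M, k) * (-1)^k * x^(2k+1) / (2k+1)
terms = [(math.comb(M, k) * (-1) ** k / (2 * k + 1), 2 * k + 1)
         for k in range(M + 1)]

def raw(x):
    return sum(c * x ** p for c, p in terms)

C = 1.0 / raw(1.0)   # normalize so f(1) = 1 and, by odd symmetry, f(-1) = -1

def f(x):
    """Soft clipper: polynomial inside [-1, 1], spliced flat outside.
    f'(x) = C * (1 - x^2)^M, so M derivatives vanish at the splice."""
    if x >= 1.0:
        return 1.0
    if x <= -1.0:
        return -1.0
    return C * raw(x)
```

For M = 3 the normalization works out to C = 35/16, and the whole waveshaper is a degree-7 polynomial inside the splice, which ties back to the 4x-oversampling arithmetic above.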

Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread robert bristow-johnson

On 6/17/14 12:41 PM, ro...@khitchdee.com wrote:

You should be able to take one that sounds sweet, model its parameters based on 
theory then scope it and measure stuff to calibrate.


exactly.  and the way i would scope it would be to dig out 8 or more 
channels of Pro Tools (or whatever multichannel DAW) fired up to 96 kHz, 
and put high-impedance taps at various important places inside the amp 
(like at the grids and plates of each tube, but at other places in 
addition), and record what those nodes are doing when the amp is set to 
sound good and some jock is playing licks into it.  take that 
multichannel recording and try to determine the math that maps, say, the 
grid of some tube to the plate of the same tube.  if there is a Miller 
capacitance (feedback from plate to grid), you will have to include that 
(i would do it simply, with Euler forward differences) in that single 
model (from grid to plate).
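[Editorial aside: a minimal sketch of the forward-Euler idea. All element values and the tanh plate curve below are invented for illustration, not measurements of any amp. The plate-to-grid feedback cap is folded into an effective grid capacitance via Miller's theorem, giving an RC node that is stepped explicitly.]

```python
import math

# hypothetical element values -- illustration only, not a real tube stage
Rs = 68e3        # source / grid-stopper resistance, ohms
Cm = 20e-12      # plate-to-grid (Miller) capacitance, farads
A  = 50.0        # stage gain magnitude around the bias point
fs = 48000 * 8   # oversampled rate; forward Euler wants a small step
T  = 1.0 / fs

# Miller's theorem: the feedback cap looks like Cm*(1 + A) at the grid,
# giving a one-pole lowpass, discretized here with forward Euler
Ceff = Cm * (1.0 + A)
a = T / (Rs * Ceff)   # forward-Euler step gain; must stay << 1 for stability

def stage(x, vg=0.0):
    """Run input samples through the lowpassed grid node and a
    memoryless tanh plate nonlinearity (inverting, gain A)."""
    out = []
    for vin in x:
        vg = vg + a * (vin - vg)        # forward Euler on the RC node
        out.append(-A * math.tanh(vg))  # plate curve, sign-inverting
    return out
```

Feeding a small DC step shows the expected behavior: the first output sample is much smaller than the settled value (the lowpass "lag"), and the settled value is just -A*tanh(vin).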


the output transformer might be more difficult.

and the loudspeaker and cabinet is the bitchiest bitch.



Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread rohit
If you had measurement mics with flat response and access to a local music 
studio, you should be able to scope the box also.

Sent from my Samsung Corby


Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread robert bristow-johnson

On 6/17/14 12:57 PM, Nigel Redmon wrote:

On Jun 17, 2014, at 9:09 AM, robert bristow-johnsonr...@audioimagination.com  
wrote:


On 6/17/14 5:30 AM, Nigel Redmon wrote:


...

[…] At Line 6, we would borrow a coveted amp—one that belonged to a major 
artist and was highly regarded, for instance, or one that was rented out for 
sessions because it was known to sound awesome.

what did you guys do with the amps when you borrowed/rented them?  was your 
analysis jig just input/output, or did you put a few high-impedance taps inside 
at strategic places and record those signals simultaneously?

Yes. For instance, sweeping the EQ with incremental settings changes.



yes, another issue (which i didn't really touch on) is mapping the 
settings of the knob to the internal (to the DSP) coefficients and 
threshold values and such.  that is "coefficient cooking" and is the 
same issue as defining "Q" in EQs so that the knob behaves like the ol' 
Pultec or whatever.  your digital implementation might work very well, 
but if the position of the knob in the emulation is not nearly the same 
as it was for the venerable old gear (to get the same sound), someone 
might complain.
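[Editorial aside: a minimal sketch of what that coefficient cooking can look like. The calibration table below is invented for illustration — in practice it would come from measuring the reference amp at incremental knob settings, as Nigel describes.]

```python
# hypothetical calibration: (knob position, measured gain in dB) pairs
# taken off the reference amp at incremental settings; values are made up
calib = [(0.0, -60.0), (0.25, -30.0), (0.5, -15.0), (0.75, -6.0), (1.0, 0.0)]

def cook_gain(pos):
    """Map a 0..1 knob position to a linear gain by piecewise-linear
    interpolation of the measured taper, then dB -> linear conversion."""
    pos = min(max(pos, 0.0), 1.0)
    for (p0, g0), (p1, g1) in zip(calib, calib[1:]):
        if pos <= p1:
            t = (pos - p0) / (p1 - p0)
            db = g0 + t * (g1 - g0)
            return 10.0 ** (db / 20.0)
    return 10.0 ** (calib[-1][1] / 20.0)
```

The point is that the DSP coefficient is driven through the *measured* taper, so the digital knob lands in the same place the analog one did, rather than through whatever curve happens to be convenient in code.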





Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread robert bristow-johnson

On 6/17/14 1:38 PM, ro...@khitchdee.com wrote:

If you had measurement mics with flat response and access to a local music 
studio, you should be able to scope the box also


sure, but with all those goofy non-linear and non-memoryless functions 
going on inside the box, it's really a bitch to model the box as if it 
were a black-box and all you have are a finite set of examples of the 
input-output characteristic.


and, since guitar amps are most often mic'd with an SM-57 or similar, 
why use an expensive B&K instrumentation mic to do it in the "scope the 
box" procedure?  why not model the imperfect SM-57 along with it?




Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread rohit
If you modelled the system for all cases, that would make your task much more 
complicated. It would be much simpler to model sweet spots. Perhaps fix the 
guitar that sends in inputs. Narrow the range of the amps and the EQs. Even 
tune to playing styles of guitarists. All these steps help simplify the 
modelling task.
While it's true guitar amps are mic'd quite cheaply, I'd use measurement mics 
at a few distances. This is low-hanging fruit that adds some options. 

Sent from my Samsung Corby
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread Jay
Looks interesting.  I wonder how symbolic regression is substantially
different from genetic programming.

And speaking of modeling by way of function approximation, I've often
wondered why I have such a difficult time finding anything on the topic of
oversampling in the context of neural networks for time series prediction (
assuming the use of a saturating nonlinearity as the neuron activation
function -- oftentimes, it's tanh, for instance).




On Tue, Jun 17, 2014 at 11:19 AM, Olli Niemitalo o...@iki.fi wrote:

 Talking of black box modeling, I wonder if

   http://www.nutonian.com/products/eureqa/

 would be useful for audio DSP? It's an equation discovery system. You
 give it data and it tries to find equations to describe it. They call
 it symbolic regression. Dunno how well it handles time series. It's
 expensive as hell but there's a 30-day free trial.

 -olli


Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread Sampo Syreeni

On 2014-06-17, robert bristow-johnson wrote:


  And the cabinets are a huge part of the sound.


*that*, and the loudspeakers themselves, is the hardest part, no? 
it's all three: 1. salient (so you can't ignore it), 2. non-linear, 
and 3. non-memoryless.


From what (very little!) I know of hardcore analog simulations, I'd say 
that is part of a more general and much nastier problem. That's the 
interaction one: whereas digital signal graphs have a definite direction 
of signal flow, there's no such thing on the analog side no matter what 
you do. Sure, you often try to go from low to high impedance and make 
use of tons of other tricks in order to make your circuit approach a 
directional signal flow, but lo and behold, most of the nicest sounding 
circuits actually measurably feed back all the way from the 
speaker-air-interface to the input jack, affecting anything and 
everything on the way.


The Moog ladder is perhaps the most notorious circuit here, because 
ladder filters are distributed, equilibrium systems from the start. No 
true buffering/isolation to be seen, there. But to a lesser degree, at 
least in order to achieve a proper tube simulacrum, you do always have 
to consider the full electromechanical equilibrium (and the 
not-so-equilibrium, e.g. with transients) even within the 
enclosure-cone-magnet-wires-transformer-terminal-powertubes-powersupply 
whole. That's then a total bitch to do effectively yet still accurately, 
in code where the simple operations just don't feed back to the 
operands, like they used to.
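[Editorial aside: one standard way to cope with that in code is to solve the implicit node equation at each sample by iteration, as in the fixed-point pseudocode posted elsewhere in this thread. A toy sketch — the equation y = tanh(x - k*y) is assumed purely for illustration, not taken from any circuit; it stands in for a nonlinearity with instantaneous (zero-delay) feedback around it.]

```python
import math

def solve_node(x, k=0.5, eps=1e-12, max_iter=100):
    """Solve the implicit node equation y = tanh(x - k*y) by fixed-point
    iteration. y appears on both sides -- the discrete analogue of an
    analog equilibrium with no definite signal-flow direction."""
    y = math.tanh(x)               # initial estimate: ignore the feedback
    for _ in range(max_iter):
        y0 = y
        y = math.tanh(x - k * y0)  # contraction mapping for |k| < 1
        if abs(y - y0) < eps:      # converged to the equilibrium
            break
    return y
```

Because tanh has slope at most 1, the update is a contraction whenever |k| < 1 and the per-sample loop converges quickly; stiffer loops (a real ladder filter, say) would want Newton's method or a proper implicit solver instead.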

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread Nigel Redmon
This is getting…nesty...

On Jun 17, 2014, at 10:42 AM, robert bristow-johnson 
r...@audioimagination.com wrote:

 On 6/17/14 12:57 PM, Nigel Redmon wrote:
 On Jun 17, 2014, at 9:09 AM, robert 
 bristow-johnsonr...@audioimagination.com  wrote:
 
 On 6/17/14 5:30 AM, Nigel Redmon wrote:
 
 ...
 […] At Line 6, we would borrow a coveted amp—one that belonged to a major 
 artist and was highly regarded, for instance, or one that was rented out for 
 sessions because it was known to sound awesome.
 what did you guys do with the amps when you borrowed/rented them?  was your 
 analysis jig just input/output, or did you put a few high-impedance taps 
 inside at strategic places and record those signals simultaneously?
 Yes. For instance, sweeping the EQ with incremental settings changes.
 
 
 yes, another issue (which i didn't really touch on) is mapping the settings 
 of the knob to the internal (to the DSP) coefficients and threshold values 
 and such.  that is coefficient cooking and is the same issue as defining Q 
 in EQs so that the knob behaves like the ol' Pultec or whatever.  your 
 digital implementation might work very well, but if the position of the knob 
 in the emulation is not nearly the same as it was for the venerable old gear 
 (to get the same sound), someone might complain.

Oh yes, they *will* complain ;-)


Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread robert bristow-johnson

On 6/17/14 3:30 PM, Nigel Redmon wrote:

This is getting…nesty...


yah 'vell, vot 'r ya gonna do?  :-)



Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread Nigel Redmon
(Thinking outside the nest…)

(...maybe that means opening up the LPF as the gain knob setting is reduced)

Yes

And good discussion elsewhere in there, thanks Robert.



Re: [music-dsp] Simulating Valve Amps

2014-06-17 Thread robert bristow-johnson

On 6/17/14 8:24 PM, Nigel Redmon wrote:

(Thinking outside the nest…)


(...maybe that means opening up the LPF as the gain knob setting is reduced)

Yes

And good discussion elsewhere in there, thanks Robert.


yer welcome, i guess.

you may be thinking outside the nest; i'm just thinking out loud.

i think, like a multieffects box, we oughta be able to simulate all 
these amps (don't forget the Mesa Boogie) and their different settings 
in a single DSP box with enough MIPS and a lotta oversampling.  dunno if 
simulating the 50/60 Hz hum and shot noise would be good or not (i know 
of a B3 emulation that simulates the din of all 60-whatever keys 
leaking into the mix even when they're all key-up).  but they oughta be 
able to model each deterministic thing: the power supply sag, changing 
bias points, hysteresis in transformers, capacitance in feedback around 
a non-linear element (might use Euler's forward differences in doing 
that), whatever.  whatever it is, if you take out the hum and shot 
noise, it's a deterministic function of solely the guitar input and the 
knob settings, and if we can land human beings on the moon, we oughta be 
able to figure out what that deterministic function is.  for each amp 
model.  it shouldn't be more mystical than that (but there *is* a sorta 
mysticism with musicians about this old analog gear that we just cannot 
adequately mimic).


and thanks to you, Nigel.

L8r,

r b-j



On Jun 17, 2014, at 4:07 PM, robert bristow-johnson <r...@audioimagination.com> wrote:

On 6/17/14 3:30 PM, Nigel Redmon wrote:

This is getting…nesty...

yah 'vell, vot 'r ya gonna do?  :-)


On Jun 17, 2014, at 10:42 AM, robert bristow-johnson <r...@audioimagination.com> wrote:


On 6/17/14 12:57 PM, Nigel Redmon wrote:

On Jun 17, 2014, at 9:09 AM, robert bristow-johnson <r...@audioimagination.com> wrote:


On 6/17/14 5:30 AM, Nigel Redmon wrote:


...

Anyway, just keep in mind that the particular classic amps don’t sound better 
simply because they are analog. They sound better because over the decades they’ve been 
around, they survived—because they do sound good. There are plenty of awful sounding 
analog guitar amps (and compressors, and preamps, and…) that didn’t last because they 
didn’t sound particularly good. Then, the modeling amp has the disadvantage that they are 
usually employed to recreate a classic amp exactly. So the best they can do is break even 
in sound, then win in versatility. And an AC-30 or Matchless preset on a modeler that 
doesn’t sound exactly like the amp it models loses automatically—even if it sounds 
better—because it failed to hit the target. (And it doesn’t help that amps of the same 
model don’t necessarily sound the same. At Line 6, we would borrow a coveted amp—one that 
belonged to a major artist and was highly regarded, for instance, or one that was rented 
out for sessions because it was known to sound awesome.)

what did you guys do with the amps when you borrowed/rented them?  was your 
analysis jig just input/output, or did you put a few high-impedance taps inside 
at strategic places and record those signals simultaneously?

Yes. For instance, sweeping the EQ with incremental settings changes.


yes, another issue (which i didn't really touch on) is mapping the settings of the knob 
to the internal (to the DSP) coefficients and threshold values and such.  that is 
coefficient cooking and is the same issue as defining Q in EQs so that the 
knob behaves like the ol' Pultec or whatever.  your digital implementation might work 
very well, but if the position of the knob in the emulation is not nearly the same as it 
was for the venerable old gear (to get the same sound), someone might complain.

Oh yes, they *will* complain ;-)


On Tue, Jun 17, 2014 at 6:58 PM, Nigel Redmon <earle...@earlevel.com> wrote:

On Jun 16, 2014, at 7:51 PM, robert bristow-johnson <r...@audioimagination.com> wrote:

one thing that is hard to replicate is a sample rate that is infinity
(which is how i understand continuous-time signals to be).  but i don't
think you should need to have such a high sample rate.  one thing we know
is that for *polynomial curves* (which are mathematical abstractions and
maybe have nothing to do with tube curves), that for a bandwidth of B in
the input and a polynomial curve of order N, the highest generated
frequency is N*B so the sample rate should be at least (N+1)*B to prevent
any of these generated images from aliasing down to below the original B.
if you can prevent that, you can filter out any of the aliased components
and downsample to a sample rate sufficient for B (which is at least 2*B).


This really goes out the window when you’re modeling amps, though. The
order of the polynomial is too high to implement practically (that is, you
won’t end up utilizing the oversampling rate necessary to follow it),

this is a curious statement *outside* of the case of hard clipping.  oversample 
by 4x and you can do a 7th-order polynomial