Re: [music-dsp] Simulating Valve Amps
On 20.06.2014, at 17:37, robert bristow-johnson <r...@audioimagination.com> wrote:

> On 6/20/14 10:57 AM, Andrew Simper wrote:
>> On 20 June 2014 17:11, Tim Goetze <t...@quitte.de> wrote:
>>> [Andrew Simper]
>>>> On 18 June 2014 21:01, Tim Goetze <t...@quitte.de> wrote:
>>>>> I absolutely agree that this looks to be the most promising approach in terms of realism. However, the last time I looked into this, the computational cost seemed a good deal too high for a realtime implementation sharing a CPU with other tasks. But perhaps I'll need to evaluate it again?
>>>> The computational costs of processing the filters isn't high at all; just like with a DF1 you can compute some simplified coefficients and then call process using those. Since everything is linear you end up with a bunch of additions and multiplies just like you do in a DF1, but the energy in your capacitors is preserved when you change coefficients, just like it is when you change the knobs on a circuit.
>>> Yeh's work on the Fender tonestack is just that: symbolic nodal analysis leading to an equivalent linear digital filter. I mistakenly thought you were proposing nodal analysis including also the nonlinear aspects of the circuit, including valves and output transformer (which, without being too familiar with the method, I believe to lead to a system of equations that's a lot more complicated to solve).
>> Nodal analysis can refer to linear or non-linear, so sorry for the confusion.
> well, Kirchhoff's laws apply to either linear or non-linear. but the methods we know as node-voltage (what i prefer) or loop-current do *not* work with non-linear. these circuits (that we apply the node-voltage method to) have dependent or independent voltage or current sources and impedances between the nodes.

I don't quite understand. Are you saying that one cannot write down an equation for each node once there's a diode or a transistor in the circuit?

I was of the opinion that the transfer functions of, say, diodes are well known, and even though the resulting equations need more than just adds, multiplies and delays, they are still correct equations, albeit ones that require more complex measures than undergraduate math to solve. I could be wrong though, and in this case I'd be happy to learn!

>> I was trying to point out that the linear analysis done by Yeh starts with the circuit but then throws it away and instead uses a DF1, and a DF1 does not store the state of each capacitor individually, so when you turn the knob you don't get the right time-varying behaviour.
> in the steady-state (say, a second after the knob is turned) is there the right behavior with the DF1?

But of course - in a linear filter, such as one composed out of RC networks. But what about voltage controlled filters?

>> I am saying you don't have to throw the circuit away, you can still get an efficient implementation since in the linear case everything reduces to a bunch of adds and multiplies.
> and delay states.

But of course there has to be a state history. But Andy talks about computational efficiency here, so he was referring to actual mathematical operations, not loads and stores, which may be optimised away if the results can be kept in a CPU register within a tight loop.

>> For non-linear modelling you need additional steps, and depending on the circuit there are many different methods that can be tried to find the best fit for the particular requirements
> if the sample rate is high enough (and it *should* be pretty fast because of the aliasing issue) the deltaT used in forward differences or backward differences (or predictor-corrector) or whatever should be pretty small. in my opinion, if you have a bunch of memoryless non-linear elements connected in a circuit with linear elements (with or without memory), it seems to me that the simple forward Euler method (like we learned in undergraduate school) suffices to model it.

That's where it's a bit tricky.

In my naive way of thinking, I see non-linear elements, say a diode, or something tanh-ish, as voltage-dependent resistors. Therefore the signal in a non-linear filter, naively spoken, influences the overall filter frequency. This explains why, for instance, in an analogue synthesizer the frequency of a self-oscillating filter sounds like it's modulated by the filter input (in actual fact, run an LFO through a self-oscillating filter and you get a filter sweep; alternatively, drive a square wave really hard into a lowpass filter and hear the upper harmonics attenuate).

Now, if we could agree that, naively spoken, the signal performs audio-rate modulation of the cutoff frequency in a non-linear filter, then we would have to determine whether or not the effect of using simple Euler has any disadvantage over, say, trapezoidal integration and solving feedback loops delaylessly. That is, is there an audible difference in a practical case, even at really high sample rates? We have
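[Editorial aside: the "voltage-dependent resistor" picture and the Euler-vs-trapezoidal question can be made concrete with a toy diode clipper. This is a sketch only - the component values, the back-to-back-diode model, and the fixed Newton iteration count are illustrative choices, not anyone's production code.]

```python
import math

# Toy diode clipper: vin drives a capacitor C through a series R; a pair of
# back-to-back diodes shunts the cap.  KCL at the output node gives:
#   C * dv/dt = (vin - v)/R - 2*Is*sinh(v/Vt)
# All component values below are made up for illustration.
R, C = 2200.0, 10e-9           # ohms, farads
Is, Vt = 1e-12, 0.02585        # diode saturation current, thermal voltage

def dv_dt(v, vin):
    return ((vin - v) / R - 2.0 * Is * math.sinh(v / Vt)) / C

def euler_step(v, vin, dt):
    # Explicit forward Euler: one function evaluation per sample, but the
    # diode term is stiff, so this can blow up at audio rates when driven hard.
    return v + dt * dv_dt(v, vin)

def trap_step(v, vin, dt, iters=50):
    # Implicit trapezoidal rule: solve  v1 = v + dt/2*(f(v) + f(v1))  for v1
    # by Newton iteration.  A fixed iteration count keeps the sketch short;
    # a real solver would iterate to a tolerance.  (The current input is used
    # for both endpoints, a common simplification.)
    fv = dv_dt(v, vin)
    v1 = v
    for _ in range(iters):
        g = v1 - v - 0.5 * dt * (fv + dv_dt(v1, vin))
        dg = 1.0 + 0.5 * dt * (1.0 / R + (2.0 * Is / Vt) * math.cosh(v1 / Vt)) / C
        v1 -= g / dg
    return v1

# Drive hard enough that the diodes conduct: the output clamps near +/-0.5 V
# even though the input swings +/-1 V -- the signal itself is "turning the
# knob" on the effective resistance, sample by sample.
dt, freq = 1.0 / 44100.0, 441.0
v, peak = 0.0, 0.0
for n in range(200):
    vin = math.sin(2.0 * math.pi * freq * n * dt)
    v = trap_step(v, vin, dt)
    peak = max(peak, abs(v))
```

With these values the implicit step stays put at 44.1 kHz, while the explicit Euler step needs substantial oversampling before it behaves, which is exactly the trade-off discussed above.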
Re: [music-dsp] Simulating Valve Amps
On 20 June 2014 23:37, robert bristow-johnson <r...@audioimagination.com> wrote:

> well, Kirchhoff's laws apply to either linear or non-linear. but the methods we know as node-voltage (what i prefer) or loop-current do *not* work with non-linear. these circuits (that we apply the node-voltage method to) have dependent or independent voltage or current sources and impedances between the nodes.

If this is the case you should tell everyone using Spice to stick to linear components only, since the non-linear ones won't work!

>> I was trying to point out that the linear analysis done by Yeh starts with the circuit but then throws it away and instead uses a DF1, and a DF1 does not store the state of each capacitor individually, so when you turn the knob you don't get the right time-varying behaviour.
> in the steady-state (say, a second after the knob is turned) is there the right behavior with the DF1?

If you start the filters with silence and don't change any settings, and you have infinite processing resolution, they will be identical. I would guess that the time for artificial energy injected into a DF1 by parameter changes to become unnoticeable depends on the particular circuit; guessing again, I would say it is most likely that low frequencies with high resonance would be most problematic, and from what I have noticed, sweeping downwards in frequency is the worst case. But you are the expert on DF1s - what are the results of research you have conducted into these issues?

>> I am saying you don't have to throw the circuit away, you can still get an efficient implementation since in the linear case everything reduces to a bunch of adds and multiplies.
> and delay states.

You just store a single state per capacitor actually; please read these for some examples: www.cytomic.com/technical-papers

>> For non-linear modelling you need additional steps, and depending on the circuit there are many different methods that can be tried to find the best fit for the particular requirements
> if the sample rate is high enough (and it *should* be pretty fast because of the aliasing issue) the deltaT used in forward differences or backward differences (or predictor-corrector) or whatever should be pretty small. in my opinion, if you have a bunch of memoryless non-linear elements connected in a circuit with linear elements (with or without memory), it seems to me that the simple forward Euler method (like we learned in undergraduate school) suffices to model it.

In tests I've done (where I've had to simplify some parts just to get the explicit version to be stable) it takes around 8x oversampling of a base rate of 44.1 kHz before the single-step explicit version sounds almost identical to an implicit version.

> Andrew, i realize that you had been using something like that to emulate linear circuits with capacitors and resistors and op-amps. it does make a difference in time-variant situations, but for the steady state (a second or two after the knob is twisted), i'm a little dubious of what difference it makes.

I would guess that for guitar circuits, if the cutoff frequencies are around 80 Hz at the lowest, then after two seconds of not moving any knobs any linear filter that hasn't blown up would settle to sound the same given infinite processing resolution. But I have not done research into this so I would not bank on it, especially when direct circuit simulation is so straightforward, and then the non-linear case can also be handled with identical machinery at the cost of more CPU.
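[Editorial aside: the "single state per capacitor" idea reads roughly like the following sketch of a trapezoidal-rule state variable filter in the general style of the cytomic papers. The coefficient names and update structure here are a paraphrase, not a verbatim copy - consult the papers for the actual derivation.]

```python
import math

class TrapSVF:
    """Linear SVF integrated with the trapezoidal rule (sketch).

    ic1eq and ic2eq play the role of the two capacitor states, so they
    survive coefficient changes the way real capacitor charge does --
    unlike the delay states of a DF1 biquad, which have no direct
    physical interpretation."""

    def __init__(self, fs):
        self.fs = fs
        self.ic1eq = 0.0   # capacitor/integrator state 1
        self.ic2eq = 0.0   # capacitor/integrator state 2
        self.set_params(1000.0, 1.4)

    def set_params(self, cutoff, k):
        # Turning the knob only recomputes coefficients; the capacitor
        # states are deliberately left untouched.
        g = math.tan(math.pi * cutoff / self.fs)
        self.k = k
        self.a1 = 1.0 / (1.0 + g * (g + k))
        self.a2 = g * self.a1
        self.a3 = g * self.a2

    def process(self, v0):
        v3 = v0 - self.ic2eq
        v1 = self.a1 * self.ic1eq + self.a2 * v3
        v2 = self.ic2eq + self.a2 * self.ic1eq + self.a3 * v3
        self.ic1eq = 2.0 * v1 - self.ic1eq
        self.ic2eq = 2.0 * v2 - self.ic2eq
        low = v2                     # band = v1, high = v0 - k*v1 - v2
        return low

svf = TrapSVF(44100.0)
for _ in range(2000):
    low = svf.process(1.0)          # settle at DC; lowpass gain -> 1
svf.set_params(200.0, 0.5)          # "turn the knob" mid-stream
low_after = svf.process(1.0)        # no artificial energy injected at DC
```

At DC the output stays glued to 1.0 across the parameter change, because the stored capacitor states remain a physically consistent description of the circuit under the new coefficients.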
-- dupswapdrop -- the music-dsp mailing list and website: subscription info, FAQ, source code archive, list archive, book reviews, dsp links http://music.columbia.edu/cmc/music-dsp http://music.columbia.edu/mailman/listinfo/music-dsp
Re: [music-dsp] Simulating Valve Amps
Just as a data point: I've been measuring and dealing with converter and DSP throughput latency in the studio since the first digital machines in the early '80s. My own experience is that anything above 2 or 3 msec of throughput latency starts to become an issue for professional musicians; 5 msec becomes very noticeable on headphones, and above 6 msec is not usable.

best,

rich

On Jun 21, 2014, at 4:21 AM, music-dsp-requ...@music.columbia.edu wrote:

> Clearly a 25 ms delay post-applied to the rhythm guitar is going to mess with the groove in almost any musical setting. But the real question is whether it's going to mess with the guitarist's performance of the groove if they can hear the delay while they are playing (i.e. can they compensate to recreate the same groove). And then, when comparing to zero-latency hardware, at what point does it become trivial to compensate (i.e. doesn't introduce any additional cognitive burden). I believe that various studies have put the playable latency threshold below 8 ms. (Sorry, don't have time to look this up, but I think that one example is in one of Alex Carot's network music papers.)

- r...@richbreen.com
http://www.richbreen.com
Re: [music-dsp] Simulating Valve Amps
Hi Rich,

On 22/06/2014 1:09 AM, Rich Breen wrote:
> Just as a data point: Been measuring and dealing with converter and DSP throughput latency in the studio since the first digital machines in the early '80s;

Out of interest, what is your latency measurement method of choice?

> my own experience is that anything above 2 or 3 msec of throughput latency starts to become an issue for professional musicians; 5 msec becomes very noticeable on headphones, and above 6 msec is not usable.

Do you think they notice below 2 ms?

Ross.
Re: [music-dsp] Simulating Valve Amps
On 6/21/14, 8:09 AM, Rich Breen wrote:
> 5 msec becomes very noticeable on headphones, and above 6 msec is not usable.

Note that the speed of sound in air is roughly 1125 feet/second. So if a guitar player is more than 7 feet from their amp then they will have more than 6 msec of latency. For acoustic instrument players the speed of sound is not an issue. But if an electronic musician is looking for ultra-low latency then they must also consider their distance from the loudspeaker.

Phil Burk
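[Editorial aside: Phil's arithmetic in a two-line helper. The 1125 ft/s figure is his round number; the actual speed of sound varies with temperature.]

```python
SPEED_OF_SOUND_FT_PER_S = 1125.0   # approximate, at room temperature

def acoustic_latency_ms(distance_ft):
    """Milliseconds for sound to travel distance_ft through air."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

seven_feet = acoustic_latency_ms(7.0)   # a bit over 6 ms, as Phil says
```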
Re: [music-dsp] Simulating Valve Amps
I agree, Phil, that the “6 msec is not usable” is not a realistic statement.

First, the brain anticipates. Humans are incredibly good at throwing things, for instance. (In a few minutes, I’m heading out to play basketball and drain some “threes”.) The brain needs to tell the hand to release the baseball, or flick the wrist to launch the basketball, *way* before the hand is in position. Yet the accuracy with which we do it is amazing. We fall far short of the big cats in speed, and the strength of other primates even half our size, but boy can we bean something with a rock 100 feet away like no other critter on the planet. So the brain is pretty good about anticipatory timing. If you’re in a living room or garage, x feet from the drummer, y feet from the bass cabinet, and z feet from your Leslie, you can still cope.

Then it’s a little different thing to play solo. I already mentioned the huge delay I had to get used to with the big pipe organ. It was bad, but it sure would have been horrible to play piano that way (easily enough simulated with a delay line and a piano plug-in). When I’m recording via Ivory, say, I’ve got headphones, and I want a 64-sample buffer. 128 is reasonable. Beyond that, I quickly arrive at the point where I lose the sense of connection between my fingers and the instrument, and at minimum my comfort level suffers. If there were no alternative, I’d get used to it, but it’s not optimal. But “unusable” is a bit extreme.

On Jun 21, 2014, at 4:29 PM, Phil Burk <philb...@mobileer.com> wrote:
> Note that the speed of sound in air is roughly 1125 feet/second. So if a guitar player is more than 7 feet from their amp then they will have more than 6 msec of latency.
Re: [music-dsp] Simulating Valve Amps
PS—I actually took Rich’s “6 ms is unusable” to mean “unacceptable”, which I do agree with. For instance, when I had a Korg Trinity, I liked the keyboard action (61-key) very much in general, but I would *never* enable the combi mode—it was so slow that it was unacceptable to me, even for the far richer textures it gave in return.
Re: [music-dsp] Simulating Valve Amps
On 6/21/14 7:21 AM, Urs Heckmann wrote:

> I don't quite understand. Are you saying that one cannot write down an equation for each node once there's a diode or a transistor in the circuit?
>
> I was of the opinion that the transfer functions of, say, diodes are well known, and even though the resulting equations need more than just adds, multiplies and delays, they are still correct equations, albeit ones that require more complex measures than undergraduate math to solve. I could be wrong though, and in this case I'd be happy to learn!

it's possible that this is only a semantic issue. "Nodal analysis" seems a little less specific, but when i google it, all of the primary hits go to the Node-voltage method, which together with the Loop-current method are the two primary ways electrical engineers are taught to analyze circuits containing independent or proportionally-dependent voltage or current sources and LTI impedances. only that.

the underlying physics - Kirchhoff's current law, Kirchhoff's voltage law, and the volt-amp characteristics of each element - *is* applicable to any lumped-element circuit: one with any combination of linear or nonlinear elements, memoryless or non-memoryless. but the Loop-current (sometimes called mesh-current analysis) and Node-voltage methods allow one to write the matrix of linear equations governing the circuit directly from inspection of the circuit. it's an analysis discipline we learn in EE class: write the matrix equation down on a big piece of paper, then go about solving the system of linear equations. but Loop-current and Node-voltage do not work with nonlinear volt-amp characteristics.

another semantic to be careful about is "transfer function". we mean something different when it's applied to LTI systems (the H(z) or H(s)) than when applied to a diode. the latter semantic i don't use. i would say "volt-amp characteristic" of the diode or vacuum tube. or if it was a nonlinear system with an input and output, i would say "input-output mapping function" and leave "transfer function" to the linear and time-invariant.
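[Editorial aside: the "write the matrix by inspection" discipline described here can be sketched for a toy resistive network. The topology and component values are made up purely for illustration.]

```python
# Node-voltage sketch: Vs drives node 1 through R1, R2 sits between nodes
# 1 and 2, and R3 goes from node 2 to ground.  KCL at each node gives a
# linear system  G * v = i  whose entries can be written down by inspection:
# diagonal terms sum the conductances at a node, off-diagonal terms are
# minus the conductance between the two nodes.
Vs, R1, R2, R3 = 9.0, 1000.0, 2000.0, 3000.0

g11 = 1/R1 + 1/R2      # conductances touching node 1
g12 = -1/R2            # conductance between nodes 1 and 2
g21 = -1/R2
g22 = 1/R2 + 1/R3      # conductances touching node 2
i1, i2 = Vs / R1, 0.0  # source current injected at node 1

# Solve the 2x2 system with Cramer's rule.
det = g11 * g22 - g12 * g21
v1 = (i1 * g22 - g12 * i2) / det
v2 = (g11 * i2 - g21 * i1) / det
```

With these values the network is just a 6 kΩ series divider, so v1 = 7.5 V and v2 = 4.5 V. The method works precisely because every element's volt-amp characteristic is linear; replace R3 with a diode and the system of equations is no longer linear, which is the point being made above.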
> But of course - in a linear filter, such as one composed out of RC networks. But what about voltage controlled filters?

of course a VCF driven by a constantly changing LFO waveform (or its digital model) is a different thing. i was responding to the case where there is an otherwise-stable filter connected to a knob. sometimes the knob gets adjusted before the song or the set or gig starts and never gets moved after that for the evening. for filter applications like that, i am not as worried myself about the "right" time-varying behavior (whatever "right" is). for the case of modulated filters, we gotta worry about it, and a few different papers have made that case (i remember a good one from Laroche).