Re: [music-dsp] What's the best way to render waveforms with accuracy

2010-12-21 Thread Ross Bencina

Thomas Young wrote:
When you have more samples (than pixels), each horizontal pixel column will 
represent multiple samples; you will simply need a resampling algorithm to 
determine the max and min for that column.


You can also compute min and max for a set of fixed zoom levels and choose 
the appropriate one. Basically a 1d version of mipmapping:

http://en.wikipedia.org/wiki/Mipmap
This ensures that no matter what your zoom level is, your computational load 
is more or less constant.
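For example, a rough sketch of building those min/max levels (untested, illustrative names only; one min/max pair per block of samples, with each further level halving the resolution of the level below):

#include <float.h>

typedef struct { float min, max; } MinMax;

#define BASE_BLOCK 256  /* samples summarised by each level-0 entry */

/* Level 0: one min/max pair per BASE_BLOCK samples. */
static void buildLevel0(const float *samples, long numSamples,
                        MinMax *out, long numBlocks)
{
    for (long b = 0; b < numBlocks; ++b) {
        long start = b * BASE_BLOCK;
        long end = start + BASE_BLOCK;
        if (end > numSamples) end = numSamples;
        float lo = FLT_MAX, hi = -FLT_MAX;
        for (long i = start; i < end; ++i) {
            if (samples[i] < lo) lo = samples[i];
            if (samples[i] > hi) hi = samples[i];
        }
        out[b].min = lo;
        out[b].max = hi;
    }
}

/* Each further level merges adjacent pairs from the level below. */
static void buildNextLevel(const MinMax *in, long inCount, MinMax *out)
{
    for (long b = 0; b < inCount / 2; ++b) {
        const MinMax *p = &in[2*b], *q = &in[2*b + 1];
        out[b].min = (p->min < q->min) ? p->min : q->min;
        out[b].max = (p->max > q->max) ? p->max : q->max;
    }
}

To draw a column you pick the finest level whose blocks are still no finer than one pixel column and take the min/max over the few entries that column spans, so the per-frame cost tracks the number of pixels rather than the number of samples.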



As a beginner in DSP I can't figure out how to preserve the accuracy
of the waveforms without having all the samples preserved.


The way I've implemented it in the past is that when you're zoomed right in, 
I just read the real samples off disk for that small visible portion. You 
can thread it so the rendering is only performed once the read from disk is 
completed if you want to mask the latency, but disks are fast enough that 
you might be able to get away with performing the read inside the paint 
routine. Otherwise you can use a prefetch routine to make sure you usually 
have data near the visible region in memory.


Ross.




Re: [music-dsp] Interpolation for SRC, applications and methods

2010-12-21 Thread Ross Bencina

robert bristow-johnson wrote:
one thing i might point out is that, when comparing apples-to-apples,  an 
optimal design program like Parks-McClellan (firpm() in MATLAB) or 
Least-Squares (firls()) might do better than a windowed (i presume  Kaiser 
window) sinc in most cases.  this is where you are trying to  impose the 
same stopband attenuation constraints and looking to  minimize the total 
number of FIR taps.


It's been posted here before, but there is an extensive introduction to the 
trade-offs of different interpolator design methods including those Robert 
mentions in:


Timo I. Laakso, Vesa Välimäki, Matti Karjalainen, and Unto K. Laine,
"Splitting the unit delay - tools for fractional delay filter design,"
IEEE Signal Processing Magazine, vol. 13, no. 1, January 1996.
http://signal.hut.fi/spit/publications/1996j5.pdf
http://www.acoustics.hut.fi/software/fdtools/
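If you just want something to experiment with, here is a small sketch (mine, not from the paper) of the windowed-sinc approach Robert mentions: it samples a sinc at the desired fractional offset and applies a window. A Hann window is used only to keep the example short; a Kaiser window, or an optimal design like firpm/firls, would normally do better for a given number of taps.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static double sinc(double x)
{
    return (x == 0.0) ? 1.0 : sin(M_PI * x) / (M_PI * x);
}

/* Fill h[0..numTaps-1] with a fractional-delay lowpass FIR delaying the
   signal by (numTaps-1)/2 + frac samples, where 0 <= frac < 1. */
static void makeFractionalDelayFir(double *h, int numTaps, double frac)
{
    double center = (numTaps - 1) / 2.0 + frac;
    double sum = 0.0;
    for (int n = 0; n < numTaps; ++n) {
        double w = 0.5 - 0.5 * cos(2.0 * M_PI * (n + 0.5) / numTaps); /* Hann window */
        h[n] = sinc(n - center) * w;
        sum += h[n];
    }
    for (int n = 0; n < numTaps; ++n)   /* normalise to unity DC gain */
        h[n] /= sum;
}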

Ross. 




Re: [music-dsp] resonance

2011-01-01 Thread Ross Bencina

Alan Wolfe wrote:

I have a future retro revolution (303 clone) and one of the knobs it
has is resonance.

Does anyone know what resonance is in that context or how it's 
implemented?


I was reading some online and it seems like it might be some kind of
feedback but I'm not really sure...


In general, analog synth filters have a Resonance control. It creates a 
pronounced resonant peak around the cutoff frequency. Some filters will 
self-oscillate at that frequency.


I'm not familiar with the exact filter circuit used by the 303 (links 
welcome :), but I do think of resonance as feedback; at least that's how it's 
implemented in the Stilson Moog ladder filter, though maybe not in all 
filters. Ignoring feedback for a moment, the filter will have a 180 or 360 
degree phase shift in the vicinity of the cutoff frequency. If you feed the 
output back to the input (in the 360 degree shift case, or the inverted 
output in the 180 degree shift case) then you'll end up with a resonant peak 
at the frequency where the phases match and reinforce each other -- and so 
the amount of feedback controls the amount of resonance at that frequency.
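As a rough illustration (a simplified sketch in the spirit of the Stilson/Smith digital Moog model, not the actual 303 circuit): four one-pole lowpass sections each contribute roughly 45 degrees of phase shift at the cutoff, and subtracting the fed-back output supplies the remaining inversion, so the loop reinforces around the cutoff. The feedback amount k then sets the resonance, with self-oscillation somewhere near k = 4. The coefficient formula is crude and the names are made up.

#include <math.h>

typedef struct {
    double s1, s2, s3, s4;  /* one-pole states */
    double g;               /* per-stage lowpass coefficient */
    double k;               /* feedback (resonance) amount, roughly 0..4 */
} LadderLpf;

static void ladderInit(LadderLpf *f, double cutoffHz, double sampleRate, double k)
{
    f->s1 = f->s2 = f->s3 = f->s4 = 0.0;
    f->g = 1.0 - exp(-2.0 * 3.141592653589793 * cutoffHz / sampleRate); /* crude tuning */
    f->k = k;
}

static double ladderTick(LadderLpf *f, double in)
{
    /* feedback: subtract the previous sample's output before the first stage */
    double u = in - f->k * f->s4;
    f->s1 += f->g * (u     - f->s1);
    f->s2 += f->g * (f->s1 - f->s2);
    f->s3 += f->g * (f->s2 - f->s3);
    f->s4 += f->g * (f->s3 - f->s4);
    return f->s4;
}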


Sometimes the control is labeled "Q" instead of Resonance; this is a 
reference to Q factor (see http://en.wikipedia.org/wiki/Q_factor), which I 
guess is a more general parameter for underdamped systems and may be 
relevant to topologies other than just a big feedback loop around the whole 
filter -- hopefully someone else can explain that, since I still don't fully 
understand the concept of Q factor as it relates to filters (and digital 
filters in particular).


Ross. 




[music-dsp] New Android audio developers mailing list

2011-01-13 Thread Ross Bencina

Hi All

Google recently launched a new C/C++ programming interface (OpenSL ES) for 
audio development on Android devices. I expect this will mark the beginning 
of a new world of audio applications for Android phones and tablets.


Following discussions on various forums and lists a few of us have created a 
mailing list specifically to discuss topics related to Android audio 
software development (whether for music, games, media or whatever else you 
can imagine). If you're a developer and you're interested in audio software 
on Android devices, please join us. You can find an overview of the list and 
information about subscribing here:

http://music.columbia.edu/mailman/listinfo/andraudio

Please pass this invite on to anyone who you think might be interested. Feel 
free to repost, blog, link and tweet.


Many thanks to Douglas Repetto and Brad Garton at Columbia University Music 
Department for hosting the list.


Ross Bencina
Andraudio list admin



Re: [music-dsp] New Android audio developers mailing list

2011-01-14 Thread Ross Bencina

Hi Dan

Dan Stowell wrote:
Interesting development... is that mailing list intended to have a 
different flavour to the google group 
 which has been 
going for a while?


Well, firstly, I have to admit that I had not heard of that group, and none 
of the other people I've been talking to so far have either. So just on that 
basis alone I think it will have a different flavour, though I'm sure there 
would be some overlap. Are you a member?


From the link you posted, it says "developing musical products for 
Android" -- Andraudio is much broader than "musical" or "products". Of the 
group of people that instigated andraudio I'm possibly the most "music 
oriented"; the others are middleware and game audio developers (some from the 
PortAudio list, some from elsewhere). But we all want the state of audio on 
Android to improve and we want to work together to see this happen.


I am hoping we will have some of the engineers who worked on the OpenSL ES 
spec on the mailing list so I expect we'll be able to host discussions about 
Android audio architecture at a lower level too.


In addition to mainstream Android audio using the public APIs, we're 
interested in platform internals -- things like producing an architecture 
document for AudioFlinger, developing low-latency patches for AudioFlinger, 
custom framework hacking, AOSP mods, etc.


Does that answer your question?

Ross. 




Re: [music-dsp] New Android audio developers mailing list

2011-01-14 Thread Ross Bencina

Dan Stowell wrote:
As Oskari noted, here's hoping the angle of this new list (the low-level 
aspects you mention) can help speed Android towards good low-latency i/o!


Yeah, well, that's the main thing that got us together, so I hope so.

Some have already started looking at bypassing AudioFlinger for lower-latency 
operation, for example...


You may have seen my rant on android-ndk
http://groups.google.com/group/android-ndk/browse_thread/thread/1744088af0924ed9/82ed0c6cca0cc36a?lnk=raot&pli=1

Ross. 




Re: [music-dsp] New patent application on uniformly partitioned convolution

2011-01-28 Thread Ross Bencina
I oppose patent trolls and trivial patents. Beyond that I think it's a bit 
more murky. My basic rule of thumb would be: if I can think of a 
mathematical or algorithmic solution to some random problem in my field in 
less than a month, I don't expect that solution to be patented or patentable. 
I have no PhD qualification in anything, let alone in software, engineering 
or mathematics. By these measures I'm not even "skilled in the art". I 
rarely apply myself to solving a single problem for more than a month or 
two.


On the other hand, if I'd spent 5 years banging my head up against a single 
problem in my field of knowledge/expertise, had developed unique theories 
and approaches, perhaps built an industrial research team around it, and no 
one else had thought of the same thing, I _might_ consider it reasonable to 
patent it, which is not to say I would patent it but it would at least make 
some sense to try to derive an income stream from my efforts.


The problem with that is that things that are non-obvious today might be 
completely obvious in 5 or 10 years. Interval Research is a good example of 
thinking ahead of the curve and patenting things that only become relevant 
later.



Andy Farnell wrote:

More: By publishing you make the advance unpatentable. Therefore
a patent can _only_ be interpreted (in a modern context) as a
desire to inhibit progress.


Sorry Andy, I can't agree with you on that. There are far too many 
subtleties to the question of patents to paint a black and white picture 
about the interpretation of the intent of filing a patent.


Patents may inhibit many things but I'm not sure "progress" is one of them. 
Whose progress are we talking about: a Corporation's? a State's? the 
individual's? "society's" progress? I'm not sure these are even concrete or 
separable concepts. Is the privileged progress of a Corporation (for 
example) necessarily detrimental to the progress of the broader group of 
humans the Corporation is embedded in? You'd have to adopt a particular 
economic-theoretic position to even argue this point, and then there would be 
arguments over the position you've taken. Replace Corporation with "State", 
"research group", "individual", etc. in the previous paragraph and repeat.


Patents provide limited monopoly rights to the holder, which arguably 
supports progress and innovation by the group/company/etc who have already 
proven themselves to be capable of undertaking the necessary research to 
create the idea.


In the sphere of corporate production, industrial and state research those 
that hold the patents _are_ progressing. Fraunhofer and CSIRO (for example) 
have derived income from their patent holdings which allows them to do more 
research. Without patents where would these organisations get their funding 
from? (a serious question)


Patents do inhibit direct competition in production to be sure, but this 
forces competitors to invent non-infringing technology which could be argued 
to actually increase diversity and therefore progress. I suspect that the 
tension between Patents vs Anti-patent producers is actually stimulating 
"progress".


I have never filed a patent and I have benefitted from (for example) the 
work of the Xiph and IETF folks on the CELT codec that Gregory Maxwell 
mentioned -- but if Xiph hadn't done that work, then perhaps my individual 
progress would have been limited. On the other hand, none of the existing 
patent-encumbered codecs have low enough latency for my application, so 
without the motivation that Xiph has (partially related to the creation of 
free IP), perhaps I wouldn't have a solution at all.




As a scientist, teacher and human being I find I'm morally
obliged to oppose such madness.


Fair enough. I'm a pragmatist; I don't see direct opposition to "patent 
madness" as a whole as a particularly useful stance. I don't see patents as 
directly in opposition to Science or teaching either. I actually think they 
do more harm to ISVs than they do to the march of scientific progress or the 
development of ideas.



What I don't like is having a random (and not particularly novel) idea, 
googling for a paper or wiki article about it, and finding that it's 
patented -- that's not cool. It's a bit like thinking "perhaps if I put salt 
on this stain on my carpet it will clean it" and then discovering that 
"Method for cleaning carpet stains using sodium chloride" is patented 
because someone already tried it and took out a patent on it. Of course, 
then I just go and use copper sulphate instead and I'm golden.. but I had 
the sodium chloride in my kitchen and now I've had to go and get some copper 
sulphate from somewhere.


A big part of the problem is that obviousness/non-obviousness is a difficult 
thing to assess. However, if I go to the MIT open courseware website and 
watch a first year lecture on stains and crystal absorption and the head of 
the American Society for Stains and Crystal Absorption (a lecturer there) 
sa

Re: [music-dsp] New patent application on uniformly partitioned convolution

2011-01-29 Thread Ross Bencina

Hi Andy

Andy Farnell wrote:

I don't want to open up a lengthy OT debate here. But
will reply privately to address some of your points in detail.


Fair enough. I guess the main reason I bought into this conversation is that 
I do feel like it's something that affects all of us here and I'm interested 
in whether anyone has any suggestions about pragmatic ways to deal with the 
issue since I feel a bit lost with it...




You can apply the above two points to the patent system as a whole, or
narrow them to apply only to software patents or some other subset.


This is the only issue for me. Patents for tangible products
make sense.


I don't understand why one makes sense and the other doesn't.

I don't really see a clear distinction between patenting (a) configurations 
of molecules bound by physical laws and (b) configurations of logical and/or 
electron routing patterns bound by logical/mathematical principles. Perhaps 
it relates to Nigel's earlier point about the examiners not having any idea 
of prior art nor sufficient background to assess things.




There is no case for software patents whatsoever.


You apparently don't agree with the commercial/economic/socionomic arguments 
Nigel put forward then.



To even
entertain the idea is to completely debase the principles of patents
and shows massive ignorance of what patents are and of human values and
knowledge in general.


Ok, I'm definitely missing something big here.



It is the extension of patents to include
non-patentable things that is nothing short of a disaster for humanity.


Not sure what your criteria for "non-patentable" is.


I don't know about patent reform as a whole, but I'd sure like those pesky
patents on obvious things to go away. Does anyone know the most effective
way to lobby for reform in this area?


Yes. Whenever, wherever you see the topic of software patents discussed
you must speak up to denounce them and correct the misunderstandings of
those who support them (often smart, well-meaning but deceived people).


Ok, waiting for correction of my misunderstandings...



If your conscience extends to it then opposing genetic, chemical
business process and other inappropriate abstract patents won't hurt,
but it makes sense that those who are educated defend their respective
areas of intellectual expertise where they can argue reasonably and
forcefully.


I'd be interested to hear an example of an "appropriate patent" by your 
standards. I'm not trying to take the piss here, just not at all clear on 
where the line is to be drawn or what is causing you to make such clear cut 
distinctions since I don't see any.




Is there are group that focuses on
calling for re-examination of obvious patents that I could support?


I don't know of any on that specific task of challenging existing bad
patents. I think that task is overwhelming now.


Setting up (or contributing to) an independently funded organisation that 
challenges obviously bad software patents might still be a lot easier than 
getting the system reformed.




If you look at the list on the right column of this page I think
you will realise we have no choice but to move for massive reform or
even abolition to remove whole swathes of the problem.

http://petition.eurolinux.org/


I find that page alarmist, with insufficient legal argument to convince me of 
anything. It just sounds like some kind of South Park "patents are bad, 
mmmkay" rant. The fact that it has a Linux affiliation and reads like Linux 
developers are annoyed that they can't just code anything they like without 
risking violating a patent doesn't help. The FAQs at the source link 
(http://www.ffii.org/Frequently%20Asked%20Questions%20about%20software%20patents) 
are simplistic and unconvincing (answer 2 ignores the risks of exposing IP 
via reverse engineering, for example). Note that I am sympathetic to the 
authors' concerns, but I need them to be backed up by much stronger arguments 
(and I suspect the lawmakers do too).


We live and work in a complex commercial and legal environment with many 
relationships, constraints, regulations etc. Clamouring for some kind of 
free-IP libre-utopia isn't realistic (just my point of view of course).


Perhaps you're right that massive reform is the path of least resistance but 
I'm not holding out much hope. And besides, there are more important things 
to worry about, like the fate of the Amazon rainforest.


Ross.







Re: [music-dsp] damn patents (was New patent application on uniformly partitioned convolution) [OT]

2011-01-29 Thread Ross Bencina

Hi Andy

I wish I were worthy of quoting Blaise Pascal here, but instead I will just 
apologise for the rant...



I think it has a bearing on all of us too. And thus you lure
me in. But if people complain that this is getting boring,
off-topic or ill-natured then let's quit it.


(Subject changed)



All I can offer you is my opinions, I can't help you with
your misunderstandings about the nature of reality. A study of
Shannon and Bohr might help you disentangle information
from atoms.


I doubt Shannon and Bohr will help me understand why different laws should 
apply to ideas about atoms and ideas about information structure, though. We 
are not arguing that atoms should be treated differently from information 
(as go the arguments regarding intellectual property and copyright). _All_ 
patents are intellectual property (designs, ideas, inventions, whatever you 
want to call them), whether they apply to software algorithms, mechanical 
mechanisms, chemical processes, etc.




Straight up, I'm confused as to whether you support software patents
(if you want me to correct your misunderstandings) or whether
you want to help reform them, which implies that you don't.


Reforming them does not necessarily imply dismantling the patent system 
completely. I could support Software patents (in some cases) and at the same 
time wish to reform the system.


I don't support software patents but I don't currently oppose them either. I 
definitely oppose some practices of those who use/abuse/exploit software 
patents.




At the risk of falling off your pragmatic fence perhaps you
could lean over enough for me to see which way you're pitching.


I stated my position in an earlier email: I have a clear problem with 
obvious and trivial patents. I stated a proposed rule of thumb about what I 
think is trivial; Nigel has given some other ideas. To me, if those issues 
were resolved that would go a long way to reforming the system. I want to 
continue developing software. I would prefer not to have to deal with 
working around trivial patents and the legal risks of potentially violating 
patents I've never even heard of. On the other hand, I respect that patents 
provide a means to protect significant investment in R&D, and unlike your 
friends at eurolinux I don't believe that Copyright or Trade Secrets are 
sufficient legal mechanisms to achieve this protection, since neither can 
fully protect an abstract algorithmic invention. (Perhaps we could have 
something like what the performing rights organisations do for songs -- if 
you use my registered algorithm you pay me an algorithm royalty... maybe not, 
but you get the idea.)


As for dismantling the system completely, well I am still sitting on the 
fence. Not because I am unconvinced by your and others' appeals to the right 
to intellectual freedom, but because Patents are an economic mechanism that 
functions in the global industrial/economic domain (as I have said, a 
complex system, a complex dynamic system if you will -- I don't use these 
terms to shroud things as "deep" but to suggest a particular "organic" 
organisational dynamic that is quite different from a top-down organised 
logical system).


Patents don't principally act to restrict intellectual freedom in academia, 
in research, or in my lounge room (although I expect you will come up with 
examples of the ways they do); they principally act to restrict commercial 
freedom -- and I think it is a complex economic question as to whether 
disallowing software patents (as an isolated act) would really lead to 
"progress." There are arguments both ways (you and Nigel have discussed 
some of them), but I don't think it is possible to know exactly what impact 
it would have on the market and the mechanisms of software production -- you 
can't just take one piece of a poorly understood complex ecosystem like the 
global economy, change or remove it, and expect to know all of the 
consequences. I'm not arguing for maintenance of the status quo; it's 
just that I'm not sure that disestablishmentarianism is the way forward.


So perhaps the conversation should end here, since I am looking for economic 
and/or eco/sociological arguments and you are uninterested in discussing 
anything but the moral dimension (which I can't disagree with). Of course, 
I will go on anyway... :-)...




Legal arguments do not interest me since that just begs the question.
The problem _is_ a legal interpretation, therefore the Law sets the
conditions for what counts as a fact. I am not interested in trying
to use either reasoned or moral arguments on that wonky playing field.
I know how that game goes, it's like arguing with creationists.


The thing is, that _is_ the playing field. All this stuff is being played 
out in the complex dynamic system called the global economy. Perhaps I am 
morally bankrupt, but I have little sympathy for purist philosophical and 
moral arguments that are disconnected from effecting positive cha

Re: [music-dsp] damn patents (was New patent application on uniformly partitioned convolution) [OT]

2011-01-31 Thread Ross Bencina

Hi Andy

Andy Farnell wrote:

AXIOM: Ideas should not be patentable. Period.

Do I need to explain this?


Sorry, you've lost me a bit here. Perhaps you do need to explain it... see if 
I'm twisting your words below or if you find that I'm addressing your 
position (of course I don't expect you to agree with my argument):


Are you suggesting by stating the above axiom that algorithms are _simply_ 
ideas and that for this reason alone they shouldn't be patentable? Is that 
the basis of your objection to software patents? That the patent system 
should only apply to mechanisms that operate solely in the world of atoms 
(like a design for a spiral ham slicer) and not to mechanisms that operate 
solely or partially in the information domain (like a design for a particular 
execution structure for partitioned convolution) -- in spite of the fact that 
the information domain is now intimately interfaced as an active participant 
in much human (economic/industrial) activity?


I can understand Knuth's criticism of the futility of trying to distinguish 
between numerical and abstract-structural patentable concepts. But I can't 
understand how you can equate _functional_ information structures (whether 
algorithm or mathematical theorem) performing in an active role as an 
executing computer program with all other "ideas" and say "sorry, that's 
off limits, not patentable." Given the intent of the patent system to grant 
monopoly rights over novel inventions, I fail to see how that's a valid 
distinction to draw, unless your real objection is to all patents and you're 
just trying to keep them out of the software domain (and that is another 
argument entirely).


Much human activity is now conducted in the world of bits and bytes. 
Algorithms are functional mechanisms that operate in the world of bits and 
bytes. Why shouldn't they be patentable? Simply saying "because they are 
ideas" isn't an argument on its own. Why should we distinguish between a 
mechanism that performs partitioned convolution by juggling coloured marbles 
and one that performs partitioned convolution by switching bits?


A patent doesn't prohibit you from having the idea, thinking about an 
algorithm, or using the patented thing in research (these are other common 
things you do with "ideas"). I'm pretty sure you can also write books about 
patented things, build new theories upon them, etc. A software patent does 
place restrictions on use of that idea in its role as a concrete functional 
information mechanism (e.g. in a computer system).


I'm beginning to think that your previously stated moral objections are more 
concerned with the whole notion and structure of intellectual property as a 
legal construct than they are with software patents in particular -- would 
that be a reasonable characterisation? In that light a lot of your previous 
statements make a lot more sense to me.


Ross.



Re: [music-dsp] damn patents (was New patent application on uniformly partitioned convolution) [OT]

2011-01-31 Thread Ross Bencina

Hi Andy

Are you suggesting by stating the above axiom that algorithms are _simply_
ideas and that for this reason alone they shouldn't be patentable?


Yes I am, you've got it.

An algorithm is insufficiently concrete to deserve a patent; it is an
abstraction, a generalisation.


Ok...


An algorithm is not "performing in an active role as an executing
computer program", not any more than an imaginary line like the
equator can be used to tie up a bundle of sticks.

It would have to become a computer program to do that.


Can you clarify what you mean by "a computer program"? Do you mean 
"compilable or interpretable source and/or object code" or would you accept 
"widely understood, translatable and human-readable pseudocode" in your 
definition of "a computer program"? If I translate from one programming 
language to another is that considered a derivative work for copyright 
purposes?




At that point it would meet the requirements for copyright
which would be sufficient for its commercial protection.


There seems to be some kind of judgement in that statement about what you 
consider "sufficient for commercial protection" but I can't quite put my 
finger on it. Would you be able to interpret the following hypothetical 
situation for me please? I'm not 100% sure whether I've captured the 
possibilities of copyright protection correctly so I may have made some wild 
mistake.. would be interesting to hear what you think...


Scenario: I invest 1000s of person-years devising a completely original 
ultra-fast zero-latency convolution algorithm. I take 2 days to code it in 
C.  I publish my new invention as copyrighted C code.


Two possible infringing situations arise:

A) You get a copy of my copyrighted publication, directly translate the C 
code to Fortran (thus creating a derivative work) and release a new 
commercial software package based on it. I sue you for copyright 
infringement and win.


B) You derive an abstract algorithmic definition from my C code and work out 
a way to translate it into Scheme in such a way that the structure of the 
source code is difficult to relate to the original but the structure of the 
algorithm remains unchanged. You commercialise. I have no practical 
protection because, on the basis of the evidence, the relationship of the two 
codes would be undecidable, or at least less decidable than under the current 
software patent regime.


Here are a couple of other possible scenarios that are currently protected 
by software patents that I don't _think_ would be protected under copyright 
law, but I guess you disapprove of these anyway:


C) You independently write/invent a C program that is substantially the same 
as my copyrighted C program (I think this is realistic for something like a 
convolution algorithm). You can prove you wrote it independently. I get no 
protection.


D) Person X writes a text book about convolution algorithms and produces a 
non-executable diagrammatic explanation of my copyrighted C-code invention 
(they read my code, they didn't independently invent the algorithm). You read 
the text book and write a new implementation of my algorithm. I get no 
protection.



Do the first two scenarios fit with your views of what should constitute 
"sufficient commercial protection"? Keep in mind that in the above I could 
replace "copyrighted C code" with "copyrighted machine code" and the same 
transformations and litigation would be possible in principle (D might be 
less likely if I only publish machine code, especially if you approve of 
DMCA-style anti-reverse engineering laws).


Ross.





Re: [music-dsp] looking for a flexible synthesis system technically and legally appropriate for iOS development

2011-02-07 Thread Ross Bencina

Morgan wrote:

simply plugging unit generators in
to one another, not having to stop and think about how to, for
example, go from a mono oscillator signal to a stereo reverb signal.
I'd like to be able to work more like I work in SuperCollider, writing
higher-level code to create a "signal path", trusting that the
connections will be efficiently managed for me.


[snip]


SuperCollider -- GPL licence, would require that I open-source my app


Hi Morgan

Are you sure this is the case? Even if you run scserver in a separate 
process (assuming you can do that on iOS) and call it from your own code via 
OSC, without using sclang? I don't know either... but it might be worth 
checking further.


I'm not really across all of the FOSS options out there, but I have a few 
ideas:




Am I missing something? Is there anything -- free, or not, which I
should look at for iOS development besides Pure Data?


LuaAV looks like it's BSD licenced:
http://lua-av.mat.ucsb.edu/blog/

CLAM offers commercial/dual licences according to their FAQ:
http://clam-project.org/

Phil Burk has synth engines that he has been known to licence. I'm thinking 
of the CSyn engine he wrote for JSyn.

http://www.softsynth.com

urMus sounds like it does what you want, I don't know what the licence is:
http://urmus.eecs.umich.edu/

Have you looked into the jMax sourcecode? Is that GPLed?

Ask all of the above people to recommend other options.

There are surely developers out there with closed-source code they might 
licence to you too, for a price.


I also wonder: doesn't Apple's AudioUnit/AudioGraph framework do most of what 
you need?




Are there not
hundreds of other people with the same needs that I have?


Hard to say. I don't know what your requirements are exactly. This is the 
tricky thing about the system component you are looking for: you might 
_think_ it can be a generic library, but once you start developing you find 
all these points of variation between systems designed with slightly 
different requirements or design assumptions and you will find what you want 
might actually be unique to your use-cases. That's been my experience 
anyway.


Sure, someone could build a big, flexible system (even more flexible than 
SuperCollider say) but would those "hundreds of other people" pay enough to 
justify the cost of development? not sure.


I'm going to go out on a limb here and say that a huge segment of the open 
source computer music world participates in the GPL-oriented FOSS culture, 
and that's why the bulk of what you're finding is GPLed. There are numerous 
reasons for this. But one point worth pondering is this: there is an argument 
that in the creative arts, intellectual and artistic freedoms are more 
important than the freedom to commercially exploit through proprietary 
extension (something that the BSD licence allows and the GPL doesn't). This 
is congruent with artists' lives because people developing GPL music 
software are not generally making and selling music software as their means 
of making a living. As far as I can see, those that are making a living from 
computer music software development rarely engage in open sourcing core 
components of their products (counter-examples welcome :-).


But there are BSD licence friendly open source music software developers out 
there. You've discovered Pd. PortAudio (although perhaps not useful to you) 
is another group of people who contribute to a BSD-style OSS project. LuaAV 
I mentioned above.




Are my
options really limited to: Pure Data or rolling my own, or
open-sourcing my app?


I'm not sure. Keep looking and asking around I'd say. Visit the #musicdsp 
IRC channel (on efnet) for a while and ask there.. there are people there 
who rarely respond here. You might try asking on the PortAudio list (it 
would be an OT question there, but I imagine the group would be sympathetic 
to the question). You might want to refine the details of exactly what 
you're looking for though.


Assuming it fits your business model you could consider expanding your 
search to include:


- Licencing 3rd party code from independent developers (not just from people 
advertising frameworks)


- Attempting to negotiate a revenue sharing arrangement in exchange for a 
GPL exemption on a GPLed library (if it does what you want).


- Paying someone to write the code you don't want or know how to write.


If you are going to roll your own, feel free to ask some questions here. I 
wrote a chapter about SuperCollider server's architecture for the 
forthcoming SuperCollider book -- I think a preprint of that chapter is 
going to be available soon, email me privately (without -lists in my 
address) and I can keep you informed about that.


An audio signal graph is quite a simple thing in theory: it's typically a 
"synchronous data flow graph" (Google that!), the kind of thing you can 
learn about by reading an introductory book on compiler design. You can 
evaluate the graph by traversing a linked graph
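In code, the core of such a traversal might look something like this (a hypothetical sketch, nothing to do with scsynth's actual implementation; all names here are made up): keep the unit generators in a list already sorted so that each node comes after the nodes it reads from, then walk the list once per block.

#define BLOCK_SIZE 64

typedef struct UGen UGen;
struct UGen {
    void (*process)(UGen *self, int numFrames); /* fills self->output */
    float *inputs[4];        /* pointers to upstream output buffers */
    int numInputs;
    float output[BLOCK_SIZE];
    UGen *next;              /* next node in execution order */
};

/* One block of audio: every node processes after its inputs are ready. */
static void evaluateGraph(UGen *executionList, int numFrames)
{
    for (UGen *u = executionList; u != NULL; u = u->next)
        u->process(u, numFrames);
}

/* Example node: sums its inputs into its output buffer. */
static void mixerProcess(UGen *self, int numFrames)
{
    for (int i = 0; i < numFrames; ++i) {
        float acc = 0.0f;
        for (int j = 0; j < self->numInputs; ++j)
            acc += self->inputs[j][i];
        self->output[i] = acc;
    }
}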

Re: [music-dsp] Fwd: digital EQ (passive) adding gain

2011-03-12 Thread Ross Bencina

Andy Farnell wrote:

How do you know these filters don't have a resonance?

That could explain your results.


I doubt those filters would have explicit resonance/peaking at the cutoff 
(it is a lowpass EQ after all).


But assuming they are using Butterworth filters there are a couple of other 
possibilities. I have no idea whether either of these could cause 2dB gain 
change though (Robert?):


1. Digitisation of the analog prototype could introduce some passband 
ripple.


2. The step response of a Butterworth does overshoot, so it's possible on 
transients you could get the filter overshooting.
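If you want to check the second possibility directly, here's a small self-contained test (my own sketch, using the RBJ cookbook biquad coefficients, not anything from Audition or Pro Tools): feed a unit step through a 2nd order Butterworth lowpass and look at the peak output. The classic analog result is roughly 4% overshoot, and the bilinear-transformed digital filter behaves essentially the same at this low cutoff.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pi = 3.14159265358979323846;
    const double fs = 44100.0, f0 = 120.0, Q = 1.0 / sqrt(2.0); /* Butterworth */
    const double w0 = 2.0 * pi * f0 / fs;
    const double alpha = sin(w0) / (2.0 * Q);
    const double cw = cos(w0);

    /* RBJ cookbook lowpass biquad coefficients */
    const double b0 = (1.0 - cw) / 2.0, b1 = 1.0 - cw, b2 = (1.0 - cw) / 2.0;
    const double a0 = 1.0 + alpha, a1 = -2.0 * cw, a2 = 1.0 - alpha;

    double xn1 = 0, xn2 = 0, yn1 = 0, yn2 = 0, peak = 0;
    for (int n = 0; n < 44100; ++n) {            /* one second of unit step */
        double x = 1.0;
        double y = (b0*x + b1*xn1 + b2*xn2 - a1*yn1 - a2*yn2) / a0; /* direct form 1 */
        xn2 = xn1; xn1 = x;
        yn2 = yn1; yn1 = y;
        if (y > peak) peak = y;
    }
    printf("step response peak: %f\n", peak);    /* roughly 1.04 */
    return 0;
}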




Chances are these are digital versions of classic analogue
responses with a peak at the cutoff. For your test to make
sense you need a perfectly flat passband response.


I don't agree with "chances are" but yes, you do need to use an analyser or 
something to double check that the passband response is flat.




Eldad Tsabary wrote:

I tried the scientific EQ in Adobe Audition, which is supposedly a well
designed low phase filter (same setting - 2nd order, 120 Hz), and it
resulted in only a 0.5 dB increase - but still an increase.


2nd order is always going to mean IIR I think. You could try using Bessel 
filters if Audition gives you the option, they won't overshoot.




Does anyone know of this? Anyone has knowledge or ideas about the
possible cause of this?

The several reasons that I have been thinking of are:
1. quantization error - though it seemed to me waaay too much of an
increase
2. some individual transients that were somehow corrupted in the process
3. dc offset
4. phase issue



My bet is on the final issue (phase). Try using a linear phase FIR filter and 
see what happens.


Ross.









Re: [music-dsp] digital EQ (passive) adding gain

2011-03-13 Thread Ross Bencina

Hi Eldad

A quick way to get linear phase would be to run the filter twice, once 
forwards, once backwards:


- apply filter
- reverse wave file
- apply filter
- reverse wave file back to normal

Try this in Audition and see what you get.
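In code the same trick looks like this (my own sketch, with a trivial one-pole lowpass standing in for the EQ; the point is the structure, not the filter): running the filter forwards and then backwards cancels its phase response and applies its magnitude response twice.

/* y[i] = y[i-1] + a * (x[i] - y[i-1]), processed in place */
static void onePoleLowpass(float *x, long n, float a)
{
    float y = 0.0f;
    for (long i = 0; i < n; ++i) {
        y += a * (x[i] - y);
        x[i] = y;
    }
}

static void reverseBuffer(float *x, long n)
{
    for (long i = 0, j = n - 1; i < j; ++i, --j) {
        float t = x[i]; x[i] = x[j]; x[j] = t;
    }
}

/* Zero-phase ("forward-backward") filtering. */
static void zeroPhaseLowpass(float *x, long n, float a)
{
    onePoleLowpass(x, n, a);   /* forward pass */
    reverseBuffer(x, n);
    onePoleLowpass(x, n, a);   /* backward pass */
    reverseBuffer(x, n);       /* restore original time order */
}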

Ross.




- Original Message - 
From: "Eldad Tsabary" 
To: ; "A discussion list for music-related DSP" 


Sent: Monday, March 14, 2011 4:40 AM
Subject: Re: [music-dsp] digital EQ (passive) adding gain



Hello again
I did another test with a Bessel filter in Adobe Audition and had the same 
results
I took a Glen Branca symphony (just an example) and added a high pass 3rd 
order (Bessel) filter at 100 Hz
Instead of losing peak level it produced a file of a slightly higher peak 
(from -7.73 to -6.95)

I included an image of the spectrogram and filter setting here:
http://yaeldad.com/eq/spectrogramsAudition.pdf
I did a similar thing in SoundForge but with a sharper slope and a filter 
that is probably not as good and the resulting file had a peak over 2 dB 
higher!
spectrograms and filter settings here: 
http://yaeldad.com/eq/spectrogramsSoundForge.pdf


Your thoughts?
Thanks
Eldad




On 3/13/2011 10:57 AM, peiman khosravi wrote:

I'm interested in this. Would it be possible to post a sonogram
analysis of the signal made before and after the processing?

This should show if the gain corresponds with resonances formed due to
abrupt slopes. Also a simple FFT subtraction of the second sound from
the first might be enlightening.

And have you tried putting white noise through the same filters to test?

Do you have any noticeable gain if you just insert the plug-in with a
flat EQ curve?

I have had very bad experiences with protools 7 band EQ. Often
applying a low-pass filter to a sound with mostly extreme low freq
content (with the aforementioned plug-in) adds some high freq bubbly
hiss (sounds like quantisation error but not sure if it is). So I just
avoid that particular plug-in. In the bundled protools EQs the 'air
kill' or whatever it's called seems to work better.

P

On 12 March 2011 19:56, Eldad Tsabary  wrote:

Hello all,
While looking with students at methods of increasing the dynamic range of
pieces, we EQed individual tracks that have no business in the lower range
with a high pass filter (2nd order) at 120 Hz. The idea was that getting rid
of rumble from all of the tracks (except bass-range tracks) can both clean
the overall mix and reduce the measured amplitude peaks of individual tracks
without losing actual loudness (thus allowing the entire mix to be brought to
a louder RMS).

This, to my surprise, didn't work at all. In all cases I tried so far,
instead of reducing the dB measurement, the signal after processing had a
higher dB peak measurement (I used non-realtime EQ in order to use higher
quality DSP but also to be able to measure the overall signal).

It doesn't make much sense to me because the HPF is supposedly just a
passive filter. Using the HPF in Pro Tools 8's EQ on a drum overhead track
reduced the overall audible loudness and got rid of the bassy sound of the
kick. It sounded softer, but strangely it measured as 2 dB higher than the
original signal.

I tried the scientific EQ in Adobe Audition, which is supposedly a well
designed low phase filter (same setting - 2nd order, 120 Hz), and it
resulted in only a 0.5 dB increase - but still an increase.

Does anyone know of this? Does anyone have knowledge or ideas about the
possible cause?

The several reasons that I have been thinking of are:
1. quantization error - though it seemed to me waaay too much of an increase
2. some individual transients that were somehow corrupted in the process
3. dc offset
4. phase issue

Any insights would be helpful
Thanks
Eldad






Re: [music-dsp] good introductory microcontroller platform for audio tasks?

2011-04-19 Thread Ross Bencina

Kevin Dixon wrote:

My EE friend is recommending I go PIC, but the Arduino looks
promising, especially for fast return on effort :) I guess startup
cost is an issue too, I'd like to be up and running for about 50 USD.

Any thoughts/recommendations? Thanks,


I wouldn't usually use "microcontroller" and "audio tasks" in the same 
sentence but if you're prepared to live with major limitations...


I'm not sure the AVR chip Arduino usually uses will do everything that you 
need but I highly recommend it as an easy to use starter platform. You'll 
also get the benefit of a big artistic user community.


My experience with PIC is 10 years old but at that time the instruction set 
was less expressive than AVR (I was programming in Assembler at the time) 
but low-cost PICs were more popular. These days my impression is that the 
balance has shifted to Arduino, at least in music/art circles.


For real audio DSP work you're going to be better off with a DSP platform 
(e.g. sharc) so you might want to consider your long term goals. Arduino is 
good at what it does but probably a dead end.


Ross.



Re: [music-dsp] good introductory microcontroller platform for audio tasks?

2011-04-20 Thread Ross Bencina

Andy Farnell wrote:

http://www.hans-w-koch.org/installations/thankyou.html

then a PIC/uC would be the way to go because you could
make them for pennies with a bit of clever sourcing.

But when you look around at the huge array of SOC
and SOM stuff at the "Little board" or "Stick" scale
it's hard not to be taken with a route where you can
have a full Linux system in the palm of your hand.


That's a very good point. Startup costs for an Arduino vs Linux SBC might be 
similar but you probably won't be able to stamp out Linux SBC's for 
$5/each... or run Pd on an Arduino.


I should note that when I mentioned Arduino, I was really talking about 
AVR chips my friend bought in bulk and flashed with the Arduino bootloader. 
I have a couple of Arduino boards, but I treat them like 
programmers/prototyping boards and end up putting the AVRs into other 
circuits. This would be a similar scenario with PICs or raw-programmed Atmel 
chips (although the Arduino bootloader doesn't boot instantly, which may be 
a factor in some systems).


Horses for courses...

Ross. 




[music-dsp] Fw: First SuperCollider Book sample chapter now available, 'Inside scsynth'

2011-04-22 Thread Ross Bencina

Hi All

Thought you might be interested in my book chapter "inside scsynth" from the 
new SuperCollider book. The chapter is now freely available in pdf form. It 
goes into some detail about the implementation of scsynth, which should be 
of interest to people implementing dynamic real-time audio systems. For a 
quick overview check my blog post: 
http://www.rossbencina.com/code/supercollider-internals-book-chapter


Full link to the pdf from Nick Collins' post below..

Ross.



From: "Click Nilson"
Dear all,

With the go ahead of MIT Press, we're releasing three sample chapters in the 
coming weeks, freely available. The first is actually a developer chapter, 
the last in the book, on the internals of scsynth, by Ross Bencina. It gives 
a good idea of the layout of the book for those who might not have seen 
inside yet, and will be particularly useful to devs exploring how 
SuperCollider is built and works, and to anyone curious to really go into 
the make-up of the synthesizer side of the program.


http://supercolliderbook.net/
http://supercolliderbook.net/rossbencinach26.pdf

best,
Nick




Re: [music-dsp] Sinewave generation - strange spectrum

2011-04-26 Thread Ross Bencina

eu...@lavabit.com wrote:

*out++ = data->amplitude[0] * sinf( (2.0f * M_PI) * data->phase[0] );
*out++ = data->amplitude[1] * sinf( (2.0f * M_PI) * data->phase[1] );

/* Update phase, rollover at 1.0 */
data->phase[0] += (data->frequency[0] / SAMPLE_RATE);
if(data->phase[0] > 1.0f) data->phase[0] -= 2.0f;
data->phase[1] += (data->frequency[1] / SAMPLE_RATE);
if(data->phase[1] > 1.0f) data->phase[1] -= 2.0f;


You haven't shown us the declarations/data types for data->phase and 
data->frequency. I hope they are doubles (or at least floats).
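For what it's worth, the usual pattern is a double precision phase accumulator kept in [0, 1) and wrapped by subtracting 1.0. A sketch of the general idea (not a drop-in fix for the code above; names are made up):

#include <math.h>

typedef struct {
    double phase;      /* in [0, 1) */
    double increment;  /* frequency / sampleRate */
} SineOsc;

static float sineOscTick(SineOsc *o)
{
    float out = (float)sin(2.0 * 3.141592653589793 * o->phase);
    o->phase += o->increment;
    if (o->phase >= 1.0)     /* wrap; increment is assumed to be < 1.0 */
        o->phase -= 1.0;
    return out;
}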


Ross. 




Re: [music-dsp] Trapezoidal and other integration methods applied to musical resonant filters

2011-05-16 Thread Ross Bencina

Vadim Zavalishin wrote:
Somewhere I recently found a reference to an article explicitly covering 
Moog filter, but I don't have it. 


You mean this one?

Analyzing the Moog VCF with Considerations for Digital Implementation
by Tim Stilson,  Julius Smith
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.3093


Ross.


Re: [music-dsp] Trapezoidal and other integration methods applied to musical resonant filters

2011-05-17 Thread Ross Bencina

robert bristow-johnson wrote:
even though the cookbook yields coefficients for  Direct 1 or Direct 2 
forms, it's pretty easy to translate that to the  state-variable design if 
that is the form you wanna use.


I've often wondered about the relationship between the z-plane transfer 
function and state variable form. Can anyone recommend a good reference 
(book?) that clarifies the relationship between direct form and state 
variable form in digital filters?


Thank you

Ross. 




Re: [music-dsp] Trapezoidal and other integration methods applied to musical resonant filters

2011-05-17 Thread Ross Bencina

Sorry to hear about your home, Robert.

and i would be happy if someone would quote Hal's coefficient formulae 
and tell me which integrator has the delayed output.


The Chamberlin SVF formulas are here:
http://www.musicdsp.org/archive.php?classid=3#142

And here's the topology:
http://imagepaste.nullnetwork.net/viewimage.php?id=2270
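For convenience, the per-sample update those formulas describe is only a few lines; a sketch of the standard textbook form (I believe this matches the archive entry above, with f = 2*sin(pi*fc/fs) and q = 1/Q):

typedef struct {
    float low, band;   /* the two integrator states */
    float f, q;        /* tuning and damping coefficients */
} ChamberlinSvf;

static void svfTick(ChamberlinSvf *s, float in, float *lowOut,
                    float *highOut, float *bandOut, float *notchOut)
{
    s->low  = s->low + s->f * s->band;   /* integrator fed by the delayed band output */
    float high = in - s->low - s->q * s->band;
    s->band = s->band + s->f * high;     /* second integrator, no extra delay */

    *lowOut   = s->low;
    *highOut  = high;
    *bandOut  = s->band;
    *notchOut = high + s->low;
}

Note that in this form the "low" integrator is the one driven by the unit-delayed band output, which I think is the delayed-output integrator you're asking about.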

Ross.




On May 17, 2011, at 6:27 PM, Ross Bencina wrote:


robert bristow-johnson wrote:
even though the cookbook yields coefficients for  Direct 1 or  Direct 2 
forms, it's pretty easy to translate that to the state-variable design 
if that is the form you wanna use.


I've often wondered about the relationship between the z-plane  transfer 
function and state variable form. Can anyone recommend a  good reference 
(book?) that clarifies the relationship between  direct form and state 
variable form in digital filters?


well, first, i should have explicitly added the qualifier "all-pole" to 
"direct form 1 or 2".  i meant to imply that there is a little bit of 
fudginess with comparing an LPF designed from an all-pole s-plane design 
using bilinear transform with a two-pole, no-zero z-plane.


the state-variable filter i'm thinking about is the one in Hal 
Chamberlin's "Musical Applications..." book.  it, essentially,  replaces 
one integrator, s^-1, with 1/(z-1) and the other with z/(z-1).


this is weird, but because of my current situation (comp.dsp people  know 
about this) where my home is literally flooded (Lake Champlain  has 
literally overtaken and annexed my house, there's a lotta flooding  going 
on in North America at the moment) i don't even know where (in  what box) 
my Chamberlin book is.  i just remember that the two  coefficients 
controlled resonant frequency and Q sorta independently.   the resonant 
frequency coefficient is nearly linear with f0 and, i  think depending on 
how Q or resonance is defined, the other  coefficient depends solely on 
that parameter (with no dependence on f0).


and, regarding the 4-pole Moog, the resonant frequency and the  resonant 
peak height (in dB over the gain at DC) would be the two  parameters to 
match.  besides being lazy (and working out another way  to do it) i liked 
the bilinear transform property that preserves that  resonant peak height. 
so however peaky the Moog filter gets, is exactly as peaky this cascaded 
DF1 design gets.  now, if bilinear xform of the analog Moog is not done, 
say we match the parameters to two state-variable filters a la 
Chamberlin, i am not sure how well this peakiness gets matched up as 
well.  but, with bilinear, those two peaks get lined up at the same 
frequencies in the analog or digital design (because of pre-warping in 
bilinear), and their heights are guaranteed to be the same (bilinear only 
messes up where a feature is in frequency, not the height of the 
feature).  so the peaks might be a little warped in shape, if the 
resonant frequency is close to Nyquist.


but Ross, i can't readily get the book and i don't want to take the  time 
to carefully rederive the coefficient mapping function.  what i  would 
show is, in both cases, what the relationship of the  coefficients is to 
the f0 and Q parameter.  maybe tonight, when i'm  bored i can rederive 
what i remember about Hal's state-variable filter  and then routinely 
convert it to direct form coefficients.


and i would be happy if someone would quote Hal's coefficient formulae 
and tell me which integrator has the delayed output.  if it has any 
zeros, q, in the z-plane, i think it's q=0, so it's virtually an all-pole 
filter.  i think that's the case.



Thank you


you're most welcome. :-)

--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."







Re: [music-dsp] Trapezoidal and other integration methods appliedtomusical resonant filters

2011-05-20 Thread Ross Bencina

robert bristow-johnson wrote:
i don't have time now to complete the analysis, but here is my first  pass 
at getting the z-plane transfer function (something to compare to  the DF1 
or DF2).


Thanks very much Robert,

I was able to follow your analysis below. Previously I didn't really 
understand how to formulate the transfer function for a digital circuit with 
outer feedback loops. I have really learnt a lot here so far. For anyone 
interested I've made a version of the diagram annotated with the various 
transfer function equations based on Robert's analysis below:

http://imagepaste.nullnetwork.net/viewimage.php?id=2275

Robert, were you applying a standard heuristic to know to break the graph at 
W? This business of solving around feedback loops is bending my brain a 
little.




i would like it if someone checks this math


I checked the integrator z-transforms against those here:
http://en.wikipedia.org/wiki/Z-transform

And the rest of the algebra worked out the same for me, although the overall 
method remains a little obscure to me.


May I ask: did you do all the algebra by hand or do you use a software 
package? I must admit I wimped out near the end of working through the 
expansion of W(z)/X(z) and used Maxima. I need to go back and study my 
fractions I'm afraid.


Robert you said:

there are two state variables, but i won't give them extra symbols


Exactly what points in the graph would you call the "state variables" and 
why?


I will be interested to read the remainder of your analysis regarding the 
coefficient calculations.


And to Andrew: hopefully I now have enough knowledge to attempt getting the 
z-plane transfer function for your new linear SVF (which presumably has a 
zero at nyquist, given the frequency response plots you've posted).


Thanks

Ross


there are two integrators and i will include the gain block, Fc with  each 
integrator.  the left integrator has an extra sample delay at the  output. 
so the left integrator is


H1(z) = Fc * (z^-1)/(1 - z^-1) = Fc/(z-1)

the right integrator is nearly the same (but without the extra unit 
delay):


H2(z) = Fc * 1/(1 - z^-1) = (Fc*z)/(z-1)

there are two state variables, but i won't give them extra symbols (to 
solve this more quickly).  the output of the big adder on the left i 
*will* give a symbol (in the z domain), "W".  i'm calling the input  "X" 
and the LPF output, "Y".  so the adder does this:


W(z)  =  X(z)  -  Y(z)  -  Qc*H1(z)*W(z)

and the output Y is related to W as what happens when you pass W  through 
the two integrators:


Y(z)  =  H1(z)*H2(z) * W(z)


when you plug in Y(z), you can solve for the transfer function from X  to 
W as


W(z)/X(z)  =  (z-1)^2 / ( z^2 + (Fc^2 + Fc*Qc - 2)*z + (1 -  Fc*Qc) )

and the transfer function from W to Y is

Y(z)/W(z) = (Fc^2 * z) / (z-1)^2   .


then the overall transfer function is

H(z)  =  Y(z)/X(z)  =  Y(z)/W(z)  *  W(z)/X(z)

  =  (Fc^2 * z) / ( z^2 + (Fc^2 + Fc*Qc - 2)*z + (1 - Fc*Qc) )

so, it appears as i have remembered: other than a zero at the origin 
(nothing but a delay involved with that), this is an all-pole filter  in 
the z-plane.  so this cannot be an LPF designed with the bilinear 
transform, because bilinear will put two zeros at Nyquist (z=-1) for a 
biquad LPF.  so, although the poles can be directly compared to the  poles 
of the Direct Form (1 or 2), it can't be directly compared to  the 
bilinear mapped LPF (which is what i did splitting the Moog  transfer 
function into a cascade of two biquad LPFs).


but some comparison *can* be made and the poles can go where the Moog 
poles go.  but, because of the frequency warping property of the  bilinear 
transform where i could guarantee that the bumps in the  frequency 
response got mapped from the s-plane over to the z-plane  with their 
heights unchanged (and i can pre-warp the location of the  peaks and map 
them over exactly), i cannot guarantee that if the bumps  will have their 
heights and locations unchanged when using this state- variable filter.


i would like it if someone checks this math, either with the topology 
Ross posted or with the code that has been pointed to in the archive.   i 
tried to be careful, but may have screwed up.


later tonight, i will try to relate the resonant frequency f0 and the  Q 
to the coefficients Fc and Qc and see if i get the same thing that  Hal 
does.  i believe that Fc is a function solely of f0/Fs (Fs being  the 
sampling frequency) and Qc *may* be a function solely of Q.   perhaps 
someone else can figure that out in the meantime.


--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, 
dsp links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 



Re: [music-dsp] Trapezoidal and other integration methods applied to musical resonant filters

2011-05-20 Thread Ross Bencina

On Thursday, May 19, 2011 12:20 PM Andrew Simper wrote:

I think there has been some confusion with the different responses to
different questions in this thread, so sorry if I have got this wrong,
but I believe Ross posted the block diagram of Hal Chamberlin's
filter, which, as you have shown, is a slightly modified forward Euler
approach.


Andy, it is correct that previously I posted the Chamberlin version.
Apologies for the diversion, I didn't mean to change the subject. I take 
Vadim's earlier point about s-plane/z-plane equivalence for filters mapped 
via BLT however I am (at my own peril) more familiar with the digital domain 
and prefer to pursue the z-plane analysis since it does provide at least an 
alternative perspective. So, returning to your filter...




I have not posted or even bothered to draw a block diagram
for the trapezoidal svf filter, or any other


I think there is some utility in looking at the block diagram...

Here is the block diagram of the "trapezoidal integration" digital linear SVF
algorithm you posted on May 17:


http://imagepaste.nullnetwork.net/viewimage.php?id=2276

(from your post is here: 
http://music.columbia.edu/pipermail/music-dsp/2011-May/069898.html)


Note that I moved the delay state update to after the computation of the 
outputs, otherwise v0z=v0=input which I'm pretty sure is not what you meant.



Here again, from my previous post, is the Chamberlin SVF topology for 
comparison:

http://imagepaste.nullnetwork.net/viewimage.php?id=2275


Assuming I got the block diagrams correct, I have a few observations about 
your filter, Andy:


As Vadim noted earlier, compared to the Chamberlin SVF you have cascaded a 
nyquist zero (FIR section), which I suspected, since the lowpass frequency 
responses you posted earlier (here 
http://cytomic.com/files/dsp/trapezoidal-svf-responses.pdf) were warped to 
gain=0 at the nyquist frequency.
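As a quick numerical check of that (a sketch only; I'm reading the polynomial variable in the low(z) you quote below as the unit delay, and the magnitude is the same under either reading because the coefficients are real), the coefficients should give unity gain at DC and a hard zero at Nyquist:

#include <stdio.h>
#include <complex.h>
#include <math.h>

/* low(z) coefficients in ascending powers, as posted below:
   num = { g^2, 2 g^2, g^2 },  den = { 1 + g(g+k), 2(g^2 - 1), 1 + g^2 - g k } */
static double lowpass_mag(double g, double k, double w)
{
    double n[3] = { g * g, 2.0 * g * g, g * g };
    double d[3] = { 1.0 + g * (g + k), 2.0 * (g * g - 1.0), 1.0 + g * g - g * k };
    double complex z = cexp(I * w), zk = 1.0, num = 0.0, den = 0.0;
    for (int i = 0; i < 3; ++i) {
        num += n[i] * zk;
        den += d[i] * zk;
        zk *= z;
    }
    return cabs(num / den);
}

int main(void)
{
    const double pi = acos(-1.0);
    double g = 0.4, k = 0.8;                                  /* arbitrary example values */
    printf("DC gain      = %g\n", lowpass_mag(g, k, 0.0));    /* expect 1 */
    printf("Nyquist gain = %g\n", lowpass_mag(g, k, pi));     /* expect 0 */
    return 0;
}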


The other structural difference, which I think is interesting, is that the 
large feedback path around the two integrators tap off _after_ the second 
unit delay rather than before it as Chamberlin does. Thinking about it, this 
extra unit-sample of delay in the feedback path is going to flip the gain at 
nyquist (from reinforcing to cancelling) which I imagine is why your filter 
is stable for high-cutoff frequencies unlike Chamberlin's -- it does raise the
question: why didn't we think of this earlier? And why did Chamberlin do it
the way he did?


There is something about the gain structure of your filter that I'm not 
quite comfortable with: the two outer feedback loops include a common factor 
of 2. Also, the use of two adders in the v1z integrator could be interpreted 
as an integrator loop gain of 2 (assuming for a moment that we only care 
about the low-pass output). It all makes me wonder whether the outer loop 
multiplications by 2 couldn't be eliminated to yield an equivalent
structure -- I'm suggesting this mainly as an analysis technique to compare 
the filter to others.


Thoughts?

Ross



I always work directly
from the continuous case as shown here:

http://cytomic.com/files/dsp/sem-1a-linear-svf.jpg

you can integrate it with whichever method you want. The result of
integrating it with the trapezoidal method is above, and I have done
the math for you to cancel out all terms v0, v1, v2 to leave the
function v2/v0 below (and collected in powers of z):

low (z) = (g^2 + 2*g^2 * z + g^2 * z^2) / (1 + g*(g+k) + 2*(-1+g^2) *
z + (1+ g^2 - g*k) * z^2)

so in slightly shorter form the scalers for the powers of z are in the
numerator are:

low numerator = g^2, 2*g^2, g^2
band numerator = g, 0, -g
high numerator = 1, -2, 1

all other responses follow naturally from there and the plots of these
functions are here:

http://cytomic.com/files/dsp/trapezoidal-svf-responses.pdf


The actual forward Euler code, from the linear SVF diagram I posted,
should be:

v1z = v1;
v2z = v2;
v1 = v1z + g * (v0 - (k)*v1z - v2z);
v2 = v2z + g * (v1z);

The one Hal Chamberlin posted integrates v2 using v1 not the v1z,
which corrects the phase at cutoff so you get correct resonance
behaviour:

v1z = v1;
v2z = v2;
v1 = v1z + g * (v0 - (k)*v1z - v2z);
v2 = v2z + g * (v1);

which corresponds to a change of damping factor in the normal forward
euler to k' = k+g:

v1z = v1;
v2z = v2;
v1 = v1z + g * (v0 - (g+k)*v1z - v2z);
v2 = v2z + g * (v1z);

Which, as you quite rightly point out (in a slightly different form)
has a transfer function of:

low (z) = (0 + g^2 * z) / (1 + (g*(g+k)-2) * z + (1 - g*k) * z^2)

and the coefficient lists for powers of z in the numerator for all the
responses are:

low = 0, g^2, 0
band = 0, g, -g
high = -1, 2, 1

The backward euler is different again, and like the forward euler
doesn't preserve the phase at cutoff, but it is stable at all
frequencies. For audio use there isn't much point in a backward euler
version since it requires a division so you may as well just use
trapezoidal, but I'll include it just for completeness.

Re: [music-dsp] Trapezoidal and other integration methods applied to musical resonant filters

2011-05-23 Thread Ross Bencina

On Sunday, May 22, 2011 1:27 PM robert bristow-johnson wrote:

what i believe i missed is an  assumption
that the Q is pretty high.  then all the formulae from  Hal's book fall
into place.  does Hal explicitly make that assumption  in the book?


Not that I could find. I just re-read the section on the digital SVF and
there is no mention of that. He does indeed get the same result you 
mentioned though:



 Fc   =  2*sin(w0/2)


One point you make below, that Hal does acknowledge, is that obviously you
need at least one unit-delay in the feedback loop to make direct digital
implementation feasible. I don't believe he considers how this might affect
the behavior of the digital filter.

Your explanation of the Z-transform stuff, was really helpful. Trying to 
keep things back to the original topic now...


The corrected integral you posted:

  y(t)  =  integral from -inf to t-1 of x(u) du   +   integral from t-1 to t of x(u) du

       ~=  y(t-1)  +  x(t)



this is where you might apply something like trapezoidal integration  and
then you would put in (x(t)+x(t-1))/2 instead of x(t).  but i'm  not.


But if you had of, that's where the zero at nyquist comes from in the 
trapezoidal version...


y[n] = y[n-1] + (x[n] + x[n-1]) / 2

giving

Y(z)/X(z) = (1/2) * (z + 1) / (z - 1)

and perhaps that divisor of 2 explains why Andy's version has the multiples 
of two in the feedback loop...
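A tiny C sketch makes the cancellation visible (nothing more than the recurrence above): drive the trapezoidal integrator with the alternating Nyquist-frequency sequence +1, -1, +1, ... and after the first sample the (x[n] + x[n-1])/2 term is zero every time, so the output never moves again.

#include <stdio.h>

int main(void)
{
    double y = 0.0, x1 = 0.0;                      /* y[n-1] and x[n-1] */
    for (int n = 0; n < 8; ++n) {
        double x = (n % 2 == 0) ? 1.0 : -1.0;      /* Nyquist-frequency input */
        y = y + 0.5 * (x + x1);                    /* trapezoidal integrator */
        x1 = x;
        printf("n=%d  y=% g\n", n, y);
    }
    return 0;
}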



one thing i'll point out is that the first integrator *had* to take  its
output from the delay and not before it.  that is because any  signal that
is fed back in a digital filter *must* be delayed by at  least one sample.
otherwise, how are you gonna write the code for it?


Indeed. Hal notes that he had to make that change to make it implementable.



i don't have one such (like Mathematica or Derive or whatever they got
now).  this is just 2 eqs and 2 unknowns.


Yeah. I'm getting the feeling that my mental algebra skills are letting me 
down here.




I will be interested to read the remainder of your analysis  regarding
the coefficient calculations.


be careful what you ask for.  this was a mess and i could simplify it
only with an assumption and approximation.


Thanks! I haven't followed it all the way through yet... this may take longer
for me to digest than it took you to write. I've made a start and will 
continue tomorrow.


One thing: you suggested this correction:


Q   <-   Q * ( sin(w0/2)/(w0/2) )

before you do the reciprocal, 1/Q.  i dunno if Hal does this or not.
someone wanna check for me?


Hal doesn't make the  Q   <-   Q * ( sin(w0/2)/(w0/2) ) correction.
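For reference, here are the two coefficient mappings under discussion as a small sketch (Hal's Fc mapping as quoted, plus your suggested Q correction; the f0, Fs and Q values are just examples):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double Fs = 44100.0;
    const double f0 = 5000.0;                      /* resonant frequency in Hz */
    const double Q  = 5.0;

    double w0 = 2.0 * acos(-1.0) * f0 / Fs;        /* radians per sample */
    double Fc = 2.0 * sin(w0 / 2.0);               /* Hal's mapping */

    double Qc_uncorrected = 1.0 / Q;
    double Qc_corrected   = 1.0 / (Q * (sin(w0 / 2.0) / (w0 / 2.0)));  /* your suggested correction */

    printf("Fc = %f\n", Fc);
    printf("Qc uncorrected = %f, corrected = %f\n", Qc_uncorrected, Qc_corrected);
    return 0;
}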

Thanks

Ross.




okay, so we got this from before:



   H(z)  =  Y(z)/X(z)

 =  (Fc^2 * z) / ( z^2 + (Fc^2 + Fc*Qc - 2)*z + (1 - Fc*Qc) )





first, here's a little history for context.  this is what an analog  2nd
order resonant filter is and how things get defined (with the  resonant
frequency normalized):

 H(s) =  1 / ( s^2  +  1/Q * s  +  1 )

when you un-normalize the resonant frequency, it's

 H(s) =  1 / ( (s/w0)^2  +  1/Q * (s/w0)  +  1 )

  =  w0^2 / ( s^2   +  w0/Q * s  +  w0^2 )

if you do the same analysis to Hal's analog prototype (on the top of  that
page you posted), you get precisely that transfer function where
coefficients

 A = w0
 B = 1/Q

the (analog) poles are what you make s take to make the denominator 0.

 H(s)  =  w0^2 / ( (s - p1)*(s - p2) )

from the quadratic formula:

 p1, p2  =  w0 * ( 1/(2Q) +/- j*sqrt( 1 - 1/(4Q^2) ) )

Then, it turns out, these poles determine the degree of resonance and
resonant frequency of the impulse response;

 h(t) = w0/sqrt(1-1/(4Q^2)) * exp(-w0/(2Q)*t) * sin( sqrt(1-1/
(4Q^2))*w0*t )

(use a table of Laplace transforms. i'm leaving out the unit step
function that turns this off for t < 0.)

There's a constant factor and an exponential decay factor scaling this
resonant sinusoid.  the *true* resonant frequency is

 sqrt(1-1/(4Q^2)) * w0

but if Q is large enough, we can approximate the resonant frequency as
simply w0.  The decay rate, w0/(2Q), is also proportional to w0  (filters
at higher frequencies die out faster than those at lower  resonant
frequencies, given the same Q).  you can relate Q directly to  a parameter
like RT60, how long (in resonant cycles) it takes for the  filter ringing
to get down 60 dB from the onset of a spike or step or  some edgy
transition.

okay, so that's an analog resonant LPF filter.  there is a method,  called
"impulse invariant" (an alternative to BLT) to transform this  analog
filter to a digital filter.  as it's name suggests, we have a  digital
impulse response that looks the same (again, leaving out the  unit step
gating function):

h[n] = w0/sqrt(1-1/(4Q^2)) * exp(-w0/(2Q)*n) * sin( sqrt(1-1/
(4Q^2))*w0*n )

here, since the resonant frequency is not normalized, i am free to
normalize the sampling frequency, Fs.  

Re: [music-dsp] Trapezoidal and other integration methods applied to musical resonant filters

2011-05-23 Thread Ross Bencina

Vadim Zavalishin wrote:
What I meant is not the time-varying implementation, which of course is 
not difficult in the cases you describe. I was referring to the question 
of analysing how close is the digital model to the analog one in the 
time-varying case. For the time-invariant case we have the transfer 
function as our analysis tool, giving us full information about both 
versions of the system (so that we can e.g. say that bilinear-transformed 
implementations have amplitude/phase responses pretty close to their 
continuous time prototypes). However, the same thing doesn't work in the 
time-variant case. Thus, we typically just perform experimental analysis 
of the time-variant behavior (which, for music DSP purposes, includes the 
stability and the effect of parameter modulation on the output).



With regard to stability under time varying modulation, Jean Laroche has 
published some criteria:


On the Stability of Time-Varying Recursive Filters
http://www.aes.org/e-lib/browse.cfm?elib=14168

Using Resonant Filters for the Synthesis of Time-Varying Sinusoids
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.146.1378


As to analysing the effect of parameter modulation on the output I guess 
this is a job for state-space models?



Ross. 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] (very) old limiter thread

2011-06-27 Thread Ross Bencina

Hey Bram

If I remember correctly there was a multi-envelope-follower algorithm 
proposed by James Chandler Jr a while back.


The idea was to have slow-attack slow-release envelope followers to deal 
with long-range high level signals and fast attack/release to deal with 
transients. I'm not sure if it was cascaded the way you describe below 
though. I thought it was more like:


input -> limiter1 -> limiter2 -> ... -> limiterN -> output

Where each limiter had increasingly fast attack/release, so the final 
limiter would limit only transients that got through the earlier (slower) 
stages.
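To make that concrete, here's a toy C sketch of the cascade idea (mine, certainly not James' original code): each stage is a plain attack/release peak follower driving a gain reduction, and each successive stage is faster, so it only has to catch what the slower stages let through. Lookahead, smoothing of the gain signal, etc. are all omitted.

#include <math.h>
#include <stdio.h>

typedef struct { double env, att, rel; } Stage;

static double coeff(double time_s, double fs)      /* one-pole smoothing coefficient */
{
    return exp(-1.0 / (time_s * fs));
}

static double limit(Stage *s, double x, double threshold)
{
    double a = fabs(x);
    double c = (a > s->env) ? s->att : s->rel;     /* attack when rising, release when falling */
    s->env = c * s->env + (1.0 - c) * a;
    double gain = (s->env > threshold) ? threshold / s->env : 1.0;
    return x * gain;
}

int main(void)
{
    const double fs = 44100.0, threshold = 0.5;
    Stage stages[3] = {
        { 0.0, coeff(0.050,  fs), coeff(0.500, fs) },   /* slow outer stage */
        { 0.0, coeff(0.005,  fs), coeff(0.050, fs) },   /* medium */
        { 0.0, coeff(0.0005, fs), coeff(0.005, fs) }    /* fast, for transients */
    };

    for (int n = 0; n < 300; ++n) {
        /* quiet tone with a short loud burst in the middle */
        double x = 0.2 * sin(2.0 * acos(-1.0) * 220.0 * n / fs);
        if (n >= 150 && n < 160) x = 0.9;

        double y = x;
        for (int i = 0; i < 3; ++i)
            y = limit(&stages[i], y, threshold);

        if (n >= 148 && n < 164)
            printf("n=%3d  in=% .3f  out=% .3f\n", n, x, y);
    }
    return 0;
}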


Obviously you can optimise this in different ways.

And of course I could be remembering things wrong.

You can do similar things with multiple peak-hold running max filters using 
the O(1) algorithm I posted a while back.
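For anyone who wants the gist of that: a standard way to get an amortized O(1)-per-sample sliding-window maximum is a monotonic wedge (a deque of candidate maxima). The sketch below is just that textbook technique, not necessarily the same code as the earlier post; WIN is the hold window in samples.

#include <stdio.h>

#define WIN 8                 /* window length in samples */
#define CAP (WIN + 1)         /* ring buffer capacity for the deque */

static int    q_idx[CAP];
static double q_val[CAP];
static int    q_head = 0, q_tail = 0;   /* candidates stored newest-last, values decreasing */

static double running_max(int n, double x)
{
    /* drop candidates from the back that can never be the maximum again */
    while (q_head != q_tail && q_val[(q_tail + CAP - 1) % CAP] <= x)
        q_tail = (q_tail + CAP - 1) % CAP;
    q_idx[q_tail] = n;
    q_val[q_tail] = x;
    q_tail = (q_tail + 1) % CAP;

    /* drop the front candidate once it has slid out of the window */
    if (q_idx[q_head] <= n - WIN)
        q_head = (q_head + 1) % CAP;

    return q_val[q_head];
}

int main(void)
{
    double in[16] = { 0.1, 0.5, 0.3, 0.9, 0.2, 0.1, 0.0, 0.4,
                      0.4, 0.3, 0.2, 0.1, 0.8, 0.1, 0.1, 0.1 };
    for (int n = 0; n < 16; ++n)
        printf("n=%2d  x=%.1f  max=%.1f\n", n, in[n], running_max(n, in[n]));
    return 0;
}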


Ross.

- Original Message - 
From: "Bram de Jong" 

To: "A discussion list for music-related DSP" 
Sent: Monday, June 27, 2011 6:16 PM
Subject: [music-dsp] (very) old limiter thread



Hello all,


quite a while (years rather than months!) there was this long
discussion about limiters at musicdsp and someone mentioned a design
of cascading envelope followers. I'm thinking it was RBJ, but not too
sure... The design was something like:

e1 = env_follower1(abs(signal))
e2 = env_follower2(e1)
...
eX = ...

limiter_signal = max(e1, e2, ...)

if (limiter_signal > thresh) ...

Can anyone recall this conversation? I'm looking for the original!


- Bram

--
http://www.samplesumo.com
http://www.freesound.org
http://www.smartelectronix.com
http://www.musicdsp.org

office: +32 (0) 9 335 59 25
mobile: +32 (0) 484 154 730
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, 
dsp links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Orfanidis-style filter design

2011-12-08 Thread Ross Bencina



Is that not like saying "It is ok to use an illegal copy of software [x]
because it is so expensive they cannot expect a lot of people to buy it"?

Or do you find this situation to be different?


It might be like saying "State funded research should be available free 
of charge to the scientific community, not held behind for-profit 
paywalls." Except that in this particular case the paper is published by 
TC Electronic so the state funded argument fails.


In any case I think the free and open dissemination of scientific
knowledge can easily be argued to be different from the copying of
proprietary software that is expressly produced for profit.


Whether or not distribution of copyright works is an appropriate 
response to this issue is a matter for the individual. I'm not sure AES 
makes members agree to an EULA prohibiting further copying -- but I 
would think that copying for individual research as Robert is proposing 
falls within the fair use guidelines of many if not all nations.


Best wishes

Ross.
(An AES member and also someone who paid for this paper a while back)


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] anyone care to take a look at the Additive synthesis article at Wikipedia?

2012-01-11 Thread Ross Bencina



On 12/01/2012 4:01 AM, robert bristow-johnson wrote:

well, i cannot tell that the WP admins are going to do anything about
this other than wait for the page protection to expire (about 26 hours)
and then see what happens.  if enough of us converge upon the article,
then the "tendentious" editor will be outnumbered, and if they enforce
the 3 revert rule, it needs at least two of us to stand up to him.  more
would be better.  but if they *don't* enforce it evenhandedly, this
tendentious editor will simply call any revert against him as
"vandalism" and will not limit his own editing.


Hi Robert

Is there a specific issue that should be focused on at the moment?

I find the talk page format very difficult to parse to know what's most 
important.


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Signal processing and dbFS

2012-01-17 Thread Ross Bencina

Hi Linda

Some (possibly spurious) thoughts...

I'm confused about what you're actually trying to achieve by referencing 
things relative to 0dBFS (which is a measure of signal level relative to
digital full scale). You talk about frequency responses below, which are 
expressed in terms of gain/attenuation multipliers, not signal levels.


The main ways I can see signal level concerns entering into a frequency 
response is if you are concerned with:

(1) precision effects
(2) frequency response under saturation (a.k.a. certain signals in your 
filter exceeding 0dbFS)


Neither of these would result in a frequency response expressed in dbFS 
but they could result in 0dbFS being relevant, given a specific 
fixed-point / integer implementation of your algorithm.



On 18/01/2012 5:34 PM, Linda Seltzer wrote:

Those who know me personally know that my background in music DSP is via
the AT&T Labs / Bell Labs signal processing culture.  So for the first
time I have encountered the way recording studio engineers think about
measurements.  They plot frequency response graphs using dbSF on the Y
axis.


I presume that you mean dbFS here as in the subject line.

http://en.wikipedia.org/wiki/DBFS
(when the SOPA and PIPA protest ends)   

For an integer signal, full scale is the maximum representable (peak)
signal value.
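To be very concrete about the reference point, a minimal sketch of the conversion (the choice of 32768 as the int16 full-scale value, and 1.0 for floats, are conventions rather than standards):

#include <stdio.h>
#include <math.h>

/* level of a (non-zero) sample relative to digital full scale */
static double dbfs_int16(short x)  { return 20.0 * log10(fabs((double)x) / 32768.0); }
static double dbfs_float(double x) { return 20.0 * log10(fabs(x)); }   /* full scale = 1.0 */

int main(void)
{
    printf("int16 16384 -> %7.2f dBFS\n", dbfs_int16(16384));   /* about -6 dBFS */
    printf("int16    10 -> %7.2f dBFS\n", dbfs_int16(10));
    printf("float   0.5 -> %7.2f dBFS\n", dbfs_float(0.5));
    return 0;
}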


In my experience I do *not* believe that this is typically the way 
recording studio engineers work (e.g. with a ProTools system).


Usually levels are calibrated such that 0dBFS maps to +24db (dbV? dbU?) 
on the mixing console meters (or some other relatively high level that 
allows for plenty of headroom when recording).


If anything, based on my discussions with audio engineers, dBFS is a 
concept that comes out of computer digital audio/computer music and is 
actually quite alien to studio engineers.




Recording engineers are accustomed to the idea of dbSF because of the
analog mixes where the needle looked like it was pegging at 0dB.


You'll need to explain that.

Analog engineers *don't* expect 0dB to be a clipping level. You can 
drive an analog desk much hotter than what meters as 0dB.




The problem is what this means in signal processing.  My understanding is that
different manufacturers have different voltage levels that are assigned
as 0dB.  In signal processing we are not normally keeping track of what
the real world voltage level is out there.


I agree with this. There is indeed confusion in this area. In the analog 
domain I think you might find that there are standards about what 
voltage 0dB maps to (not my area either) (dbV?). On the other hand I'm 
not aware of any standards regarding the analog voltage level of 0dbFS 
-- perhaps there are in certain domains such as broadcast TV or film 
etc). On the contrary, in a music studio, my understanding is that the 
dbFS <-> voltage level mapping is something that is a matter of taste 
depending on how hard you want to drive your analog console (assuming 
say a ProTools desk connected to an analog console). But my 
understanding is that an offset of 0dbFS = between +16 and +24db is 
common -- perhaps someone else can cite some standards or conventions.




When I am looking at a signal in Matlab it is a number in linear sampling
from the lowest negative number to the highest negative number.  Or it is
normalized to 1.


I don't understand what you're asking here. Are you asking about 
floating point numbers? I don't think 0dbFS has a strict meaning in 
floating point. [-1.0, 1.0] by convention could be considered 0dbFS in 
floating point terms but you need to be careful how this maps to 
integers (if you care about the LSB, different systems use different 
float-int scalars).




I don't know how to translate from my plot of a frequency response in
Matlab to a graph using dbFS.


A frequency response is a representation of *gain/attenuation*. I
don't see how this can be usefully expressed in dBFS (a measure of 
absolute signal level relative to the digital representation's limits).


I would think that simply expressing gain in dB would be correct.

On the other hand if you're considering precision and range within an 
integer DSP path perhaps you need to look at time domain response. 0dBFS 
could come into it with regard to response to a particular signal (eg 
perhaps a step response).



Perhaps you don't mean frequency response above, but rather *amplitude 
spectrum* in which case you could reference your levels off 0dbFS.




Visually a signal processing engineer
wouldn't think about the peaks as much as a recording engineer sitting at
a mixing console does.  I can see that from their point of view they are
worrying about the peaks because of worries about distortion.  A signal
processing engineer assumes that was already taken care of by the times
the numbers are in a Matlab data set.  The idea of plotting things based
on full scale is a bit out of the ordinary in signal processing because
many of our signals never approach full scale.

Re: [music-dsp] Signal processing and dbFS

2012-01-18 Thread Ross Bencina



On 18/01/2012 7:20 PM, Uli Brueggemann wrote:

The result 32768/32768 = 1.0 (or 8388608/8388608) explains, why the
DSP maths often use values in the range -1.0 .. +1.0.


I agree with "often use." But if it really matters for your application, 
just keep in mind that this assumption of 32768 being the full-scale value
when converting between int and float is not universal. Sometimes 32767,
or some other scheme entirely, is used:


http://blog.bjornroche.com/2009/12/int-float-int-its-jungle-out-there.html
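A small sketch of the kind of mismatch that post is about (the conversion functions are hypothetical, not any particular API, and they truncate rather than round or dither): converting in by dividing by 32768 and back out by multiplying by 32767 silently loses the LSB, and the smallest values vanish entirely.

#include <stdio.h>

static double int_to_float(short s)    { return s / 32768.0; }   /* one common convention */

static short float_to_int(double f, double scale)                /* 32768.0 or 32767.0 */
{
    double v = f * scale;
    if (v >  32767.0) v =  32767.0;
    if (v < -32768.0) v = -32768.0;
    return (short)v;                      /* truncation, purely illustrative */
}

int main(void)
{
    short tests[4] = { 32767, -32768, 1, -1 };
    for (int i = 0; i < 4; ++i) {
        short s = tests[i];
        printf("%6d  matched: %6d   mismatched: %6d\n",
               s,
               float_to_int(int_to_float(s), 32768.0),   /* same scale both ways */
               float_to_int(int_to_float(s), 32767.0));  /* mixed conventions */
    }
    return 0;
}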

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] music-dsp Digest, Vol 97, Issue 21

2012-01-18 Thread Ross Bencina



On 19/01/2012 9:03 AM, Linda Seltzer wrote:

Why and under what circumstances is it advantageous to set up the Y axis
as dbFS rather than dbV, dbSPL,


out of the ones you mention, dBFS is the only one that has any meaning 
in the digital domain -- since as we've established, the others don't 
have a well defined mapping from sample values to volts or sound pressure.




or a linear scale?


That would be the same as asking why use a logarithmic scale at all. (1) 
because it has some perceptual relevance, (2) because it's a convention 
that is commonly followed so people are used to reading dB graphs.




If you use a dB scale
that is referenced to the lowest volume rather than to the loudest level,
then the issues of a clipping level all go away.


Well you can't reference the dB scale to silence since that is -inf dB.

What would be "the lowest volume"? an LSB? Then you'd get vastly 
different peak values for a 24 bit and 16 bit spectrum. A 24 bit signal 
is not 48dB louder than a 16 bit signal so it doesn't really make sense 
to reference off the LSB.




What do you learn from a
graph where the Y axis is in dbFS that is different from what you would
learn if you use dbV or dbSPL?


I can't think of much.

But it makes no sense to use dbV or dbSPL for a digital system unless 
you have a well defined mapping from 0dBFS to a reference voltage and/or 
sound pressure level.




Is there anything perceptually that would cause one to plot the data one
way or the other?

Is there anything you would learn about, in terms of problems,
distortions, etc., when using one type of dB over another?


Obviously the frequency weighted measures are weighted for a reason (you 
can look it up).


If you were coupled to a physical/analog system you might be more 
interested in the relation to analog than if everything is numerical.


For example if you were driving an analog circuit and you knew how it 
behaved you might be interested in dBV because you don't want to drive 
the circuit to clipping (or you do). Similarly for acoustically coupled 
systems.




There are other people doing all of their graphs this way and I haven't
been able to discern from them the reason.


I would expect it to be standard practice for DSP algorithms that have 
no well defined voltage/SPL relation to the analog domain.


You haven't put forward an alternative that makes sense in the digital
domain.



Ross.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] choice of Q for graphic equalizers

2012-02-06 Thread Ross Bencina



On 7/02/2012 5:31 PM, Shashank Kumar (shanxS) wrote:

But for audio applications, like EQ design, isn't the phase response
supposed to be linear ?


I would say not.

Most audio EQ filters don't have linear phase.

Additionally, the ear is quite sensitive to the pre-ringing that acausal
filters introduce, so, especially for radical filtering, using a symmetric
(linear-phase) FIR filter for EQ does unnatural things to transients.


There are of course counter examples.

Ross.




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] stereo-wide pan law?

2012-02-07 Thread Ross Bencina

Hi Everyone,

Does anyone know if there's a "standard" way to calculate pan laws for 
stereo-wide panning ?


By "stereo-wide" I mean panning something beyond the speakers by using 
180-degree shifted signal in the opposite speaker. For example, for 
"beyond hard left" you would output full gain signal to the left 
speaker, and some inverted phase signal to the right speaker.


I know this is a somewhat dubious method but I'm wondering if there are 
known pan laws that handle this case.


Thank you,

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] stereo-wide pan law?

2012-02-07 Thread Ross Bencina

Thanks for the responses,

Seems like I may have asked the wrong question.

Ralph Glasgal wrote:
> There is no valid psychoacoustic method to accomplish this and so
> there can be no valid pan laws to accomplish this.

In this instance I'm not really concerned with psychoacoustics. What I 
need is something that gives a sensible result under the assumption that 
I want to send some anti-phase in the opposite speaker. "Sensible" could 
be defined as "perceptually smooth", or "energy smooth".


Ambisonics uses anti-phase panning. What if we assume that the speakers 
are on either side of the head? Does that give a valid physical basis
for stereo anti-phase panning?


As I already said, I realise that it's a somewhat dubious idea -- I'm 
not looking for criticisms of that. I'm working on a simple extension of 
an existing effect algorithm (a somewhat well known stereo chorus) that 
uses this inverse phase business to "pan" the chorused voices -- and I 
want to limit my algorithm to that.


What I'm aiming to achieve is one slider that can pan each voice
from left to right, and also smoothly cross into dubious "beyond the 
speakerness" by sending inverse phase to the opposite speaker. It could 
be as simple as ramping up the inverse phase signal but I thought it 
might be possible to formulate something that has some kind of basis in 
stereo panning law theory -- not necessarily concerning spatial 
perception but at least concerning perceived signal energy.



To get really concrete: at the moment I have two sliders (one for left 
level and one for right level) and two checkboxes for inverting the 
phase of each side. This is a most unsatisfactory user interface. I need 
to get to the point of having a single "pan" or "width" slider.



Tom's response comes closest:

On 8/02/2012 12:17 AM, Tom O'Hara wrote:
> L = ((1+w)L + (1-w)R)/2
> R = ((1+w)R + (1-w)L)/2
>
> 0<=w<=2
>
> 0 = mono
> 1 = normal
> 2 = full wide


But it is a stereo image processing formula not a panning formula, and 
it uses linear ramps (amplitude sums to unity for left and right) so 
doesn't appear to follow the usual power laws associated with stereo 
panning.


I could use a regular panning law for between the speakers and use Tom's 
linear constant-amplitude extension for beyond the speakers, but somehow 
increasing the amplitude of the in-phase speaker to give a unity
amplitude sum with the anti-phase speaker doesn't seem right. If 
anything I would think that the in-phase speaker amplitude should be 
reduced -- perhaps in a mirror image of the between-the speakers fade 
curves with the polarity of one side reversed?



I hope that makes my question clearer.

Thanks!

Ross


On 7/02/2012 9:20 PM, Ross Bencina wrote:

Hi Everyone,

Does anyone know if there's a "standard" way to calculate pan laws for
stereo-wide panning ?

By "stereo-wide" I mean panning something beyond the speakers by using
180-degree shifted signal in the opposite speaker. For example, for
"beyond hard left" you would output full gain signal to the left
speaker, and some inverted phase signal to the right speaker.

I know this is a somewhat dubious method but I'm wondering if there are
known pan laws that handle this case.

Thank you,

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews,
dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] stereo-wide pan law?

2012-02-08 Thread Ross Bencina



On 9/02/2012 1:06 AM, Olli Niemitalo wrote:

Now, it would be unreasonable if, compared to input, the
output would have an opposite polarity in L or R.


I'm not sure what you're getting at here; for example, the following is
reasonable:


Considering the left channel only (right is opposite and is also added 
into the two outputs ):



leftVoice = modulated_delay( leftInput );

leftOutput = leftInput + -.2 * leftVoice;
rightOutput = leftVoice * .55;


In this case the modulated delay of the left input is panned "extreme 
right".


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] stereo-wide pan law?

2012-02-08 Thread Ross Bencina



On 9/02/2012 1:17 AM, Olli Niemitalo wrote:

1 + g(p) = 2*f(p)
==>  g(p) = 2*f(p) - 1

For a chorus voice, as channel gains, use g(p) = 2f(p) - 1 and g(-p)
= 2f(-p) - 1, where p = -1..1 is the panning and f(p) is a vanilla
panning law of your choice. This means that with g(p), you will have
to re-label "full left" and "full right" to mean the values of p for
which f(p) = 0.5 or f(-p) = 0.5. Consequently, f(p) can't be a linear
panning law, but must satisfy f(0)>  0.5. A constant-power panning law
can be used as f(p).


That works quite nicely, thanks!
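In case it helps anyone else reading the archive, here is how I read that as code (a sketch: the g(p) = 2*f(p) - 1 part is Olli's formula as quoted above, while the constant-power choice of f(p) and the left/right assignment are my own assumptions):

#include <stdio.h>
#include <math.h>

/* vanilla constant-power pan law: f(p) in [0, 1], with f(0) = 0.707 > 0.5 as required */
static double f_pan(double p)                      /* p in [-1, 1], +1 taken as right */
{
    return sin((p + 1.0) * (acos(-1.0) / 4.0));
}

/* extended gains per Olli's mapping: g(p) = 2*f(p) - 1 and g(-p) = 2*f(-p) - 1 */
static void wide_pan(double p, double *gainL, double *gainR)
{
    *gainL = 2.0 * f_pan(-p) - 1.0;
    *gainR = 2.0 * f_pan(p) - 1.0;
}

int main(void)
{
    for (int i = 0; i <= 8; ++i) {
        double p = -1.0 + 0.25 * i;
        double gl, gr;
        wide_pan(p, &gl, &gr);
        printf("p=% .2f   L=% .3f   R=% .3f\n", p, gl, gr);
    }
    return 0;
}

At the extremes one channel reaches full gain while the other goes fully anti-phase; "full left/right" in the conventional sense now sits part-way along the control range, as Olli notes.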

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] stereo-wide pan law?

2012-02-08 Thread Ross Bencina

Hi Jerry,

On 9/02/2012 11:02 AM, Jerry wrote:

(Good grief, people.) You want the *very famous* Bauer's Law of Sines:

Benjamin B. Bauer, Phasor Analysis of Some Stereophonic Phenomena, IRE 
Transactions on Audio, January-February, 1962.


If anyone knows where this can be read without forking over $30 bucks to 
an institutional paywall I'd love to hear about it.




This panning law is mentioned in many introductory books on stereo theory.


Would you care to cite any "introductory books on stereo theory" -- I 
never read one, could be interesting.


And yes, after 20 years out of university (studying music), going and 
getting an EE degree is looking like a serious option.


Thanks

Ross.




Here it is, quoting from the paper:

sin(theta_I) / sin(theta_A)  =  (S_l - S_r) / (S_l + S_r)

where

theta_I is the azimuth angle of the virtual image, and
theta_A is the azimuth angle of the real sources.
S_l and S_r, are the strengths of the signals applied
to the left and right loudspeakers, respectively.

This we call the “stereophonic law of sines,” and it
shows that through appropriate distribution of in-phase
signals to the loudspeakers, the position of the virtual
image for the centrally placed observer may be adjusted
anywhere relative to the loudspeaker.

End of quote.

The angles are "half-angles" relative to the listener's nose, i.e., for
loudspeakers at +/- 30 degrees, theta_A = 30 degrees.

This four-page paper is recommended reading for everyone. 8^)

This panning law agrees exactly with the panning described by HRTF methods at 
the low frequency limit (and only there).

Jerry


On Feb 7, 2012, at 11:10 PM, Ross Bencina wrote:


Thanks for the responses,

Seems like I may have asked the wrong question.

Ralph Glasgal wrote:

There is no valid psychoacoustic method to accomplish this and so
there can be no valid pan laws to accomplish this.


In this instance I'm not really concerned with psychoacoustics. What I need is something that gives a 
sensible result under the assumption that I want to send some anti-phase in the opposite speaker. 
"Sensible" could be defined as "perceptually smooth", or "energy smooth".

Ambisonics uses anti-phase panning. What if we assume that the speakers are on 
either side of the head. Does that give a valid physical basis for stereo 
anti-phase panning?

As I already said, I realise that it's a somewhat dubious idea -- I'm not looking for 
criticisms of that. I'm working on a simple extension of an existing effect algorithm (a 
somewhat well known stereo chorus) that uses this inverse phase business to 
"pan" the chorused voices -- and I want to limit my algorithm to that.

What I'm aiming to achieve is one slider that can pan each voice from left to
right, and also smoothly cross into dubious "beyond the speakerness" by sending 
inverse phase to the opposite speaker. It could be as simple as ramping up the inverse 
phase signal but I thought it might be possible to formulate something that has some kind 
of basis in stereo panning law theory -- not necessarily concerning spatial perception 
but at least concerning perceived signal energy.


To get really concrete: at the moment I have two sliders (one for left level and one for right 
level) and two checkboxes for inverting the phase of each side. This is a most unsatisfactory user 
interface. I need to get to the point of having a single "pan" or "width" 
slider.


Tom's response comes closest:

On 8/02/2012 12:17 AM, Tom O'Hara wrote:

L = ((1+w)L + (1-w)R)/2
R = ((1+w)R + (1-w)L)/2

0<=w<=2

0 = mono
1 = normal
2 = full wide



But it is a stereo image processing formula not a panning formula, and it uses 
linear ramps (amplitude sums to unity for left and right) so doesn't appear to 
follow the usual power laws associated with stereo panning.

I could use a regular panning law for between the speakers and use Tom's linear 
constant-amplitude extension for beyond the speakers, but somehow increasing 
the amplitude of the in-phase speaker to give a unity amplitude sum with
the anti-phase speaker doesn't seem right. If anything I would think that the 
in-phase speaker amplitude should be reduced -- perhaps in a mirror image of 
the between-the speakers fade curves with the polarity of one side reversed?


I hope that makes my question clearer.

Thanks!

Ross


On 7/02/2012 9:20 PM, Ross Bencina wrote:

Hi Everyone,

Does anyone know if there's a "standard" way to calculate pan laws for
stereo-wide panning ?

By "stereo-wide" I mean panning something beyond the speakers by using
180-degree shifted signal in the opposite speaker. For example, for
"beyond hard left" you would output full gain signal to the left
speaker, and some inverted phase signal to the right speaker.

I know this is a somewhat dubious method but I'm wondering if there are
known pan laws that handle this case.

Re: [music-dsp] stereo-wide pan law?

2012-02-09 Thread Ross Bencina



On 9/02/2012 11:02 AM, Jerry wrote:

(Good grief, people.) You want the *very famous* Bauer's Law of Sines:

Benjamin B. Bauer, Phasor Analysis of Some Stereophonic Phenomena, IRE 
Transactions on Audio, January-February, 1962.

This panning law is mentioned in many introductory books on stereo theory.

Here it is, quoting from the paper:

sin(theta_I) / sin(theta_A)  =  (S_l - S_r) / (S_l + S_r)

where

theta_I is the azimuth angle of the virtual image, and
theta_A is the azimuth angle of the real sources.
S_l and S_r, are the strengths of the signals applied
to the left and right loudspeakers, respectively.

This we call the “stereophonic law of sines,” and it
shows that through appropriate distribution of in-phase
signals to the loudspeakers, the position of the virtual
image for the centrally placed observer may be adjusted
anywhere relative to the loudspeaker.

End of quote.

The angles are "half-angles" relative to the listener's nose, i.e., for
loudspeakers at +/- 30 degrees, theta_A = 30 degrees.

This four-page paper is recommended reading for everyone. 8^)

This panning law agrees exactly with the panning described by HRTF methods at 
the low frequency limit (and only there).



Just wanted to write again and say thanks for this Jerry. It is indeed 
what I was looking for.


Solving for S_l^2 + S_r^2 = 1 it seems to work very well for my needs. 
It doesn't suffer from the dip in level in the middle which Olli's 
previous solution did.


For my case I set the speaker angle very narrow (7.5 degrees) so that I 
can get extreme-antiphase gain out of the equations. Then I warped 
theta_I by ^4 so that 50% of my panning range pans between the speakers 
and the outer 25% on each end of the panning range moves into the 
with-antiphase region.
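In code form the gain computation is only a few lines. Given r = sin(theta_I)/sin(theta_A), imposing S_l^2 + S_r^2 = 1 gives S_l = (1 + r)/sqrt(2(1 + r^2)) and S_r = (1 - r)/sqrt(2(1 + r^2)), with the opposite channel going anti-phase once |r| > 1. A sketch (the 7.5 degree speaker half-angle is the value mentioned above; the exact warp of the pan control onto theta_I is my own guess at the kind of thing described, so treat the numbers as taste):

#include <stdio.h>
#include <math.h>

/* Bauer: sin(theta_I)/sin(theta_A) = (S_l - S_r)/(S_l + S_r),
   solved together with the constant-power constraint S_l^2 + S_r^2 = 1 */
static void bauer_pan(double theta_I, double theta_A, double *S_l, double *S_r)
{
    double r = sin(theta_I) / sin(theta_A);
    double s = sqrt(2.0 * (1.0 + r * r));
    *S_l = (1.0 + r) / s;
    *S_r = (1.0 - r) / s;
}

int main(void)
{
    const double pi = acos(-1.0);
    const double theta_A = 7.5 * pi / 180.0;       /* narrow nominal speaker half-angle */

    /* pan control p in [-1, 1]; warping by |p|^4 keeps roughly half of the control
       range between the speakers and pushes the rest beyond them (positive p
       pans toward the left under this sign convention) */
    for (int i = 0; i <= 8; ++i) {
        double p = -1.0 + 0.25 * i;
        double theta_I = (p < 0.0 ? -1.0 : 1.0) * pow(fabs(p), 4.0) * (pi / 2.0);
        double L, R;
        bauer_pan(theta_I, theta_A, &L, &R);
        printf("p=% .2f  theta_I=%7.2f deg   L=% .3f   R=% .3f\n",
               p, theta_I * 180.0 / pi, L, R);
    }
    return 0;
}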


Thanks for everyone else's comments. Clearly there's plenty of scope to
go beyond this.


Best wishes,

Ross.





--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] stereo-wide pan law?

2012-02-11 Thread Ross Bencina



On 11/02/2012 2:27 PM, Jerry wrote:

Glad to help. With your set-up, if you try to put a loud low frequency signal 
well outside the loudspeaker array, you will notice that your speakers and/or 
amplifiers will have melted. To the extent that sin(theta_A) = theta_A 
(small-angle approximation), for every halving that you reduce the loudspeaker 
spacing, there is a doubling of low frequency amplitude requirements for images 
at 90 degrees. There is a cure for this--let me know if this situation bites 
you. (Also, for every halving of loudspeaker angle, there is an octave added to 
the frequency range over which the widening works.)


Hi Jerry,

I'm not sure I follow you here.

I'm not using ITD, only amplitude (a sum of in-phase and 180-degree 
phase signals). I don't see how this can impact the frequency response. 
Can you give me a clue?


Thanks

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] google's non-sine

2012-02-23 Thread Ross Bencina



On 23/02/2012 6:22 PM, Oskari Tammelin wrote:

Come on, it's a perfect visualization of their understanding of audio.


+1
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-24 Thread Ross Bencina

Hi Brad,

On 24/02/2012 3:01 PM, Brad Garton wrote:

Joining this conversation a little late, but what the heck...


Me too...


On Feb 22, 2012, at 9:18 AM, Michael Gogins wrote:


I got my start in computer music in 1986 or 1987 at the woof group at
Columbia University using cmix on a Sun workstation.


Michael was a stalwart back in those wild Ancient Days!


cmix has never
had a runtime synthesis language; even now instrument code has to be
written in C++.


One possible misconception -- by "runtime synthesis language" I'm sure Michael
means a design language for instantiating synthesis/DSP algorithms *in real 
time*
as the language/synth-engine is running.  I tend to think of languages like 
ChucK
or Supercollider more in that sense than Csound, and even SC differentiates 
between
the language and then sending the synth-code to the server.


My reading would be that Michael may be implying that there is a 
difference between interpretation and compilation.


CSound does not have a runtime synthesis language either. It's a 
compiler with a VM. There is no way to re-write the code while it's running.


SC3 is very limited in this regard too (you can restructure the synth 
graph but there's no way to edit a synthdef except by replacing it, and 
there's no language code running sample synchronously in the server). So 
you have a kind of runtime compilation model.


I didn't get much of a chance to play with SC1 but my understanding is 
that you could actually process samples in the synthesis loop (like you 
can with cmix). To me this is real runtime synthesis. You get this in 
C/C++ too -- your program can make signal dependent runtime decisions 
about what synthesis code to execute.


Anything else is just plugging unit generators together, which is 
limiting in many situations (one reason I abandoned these kind of 
environments and started writing my algorithms in C++).




RTcmix (http;//rtcmix.org) works quite well in real time, in fact it has now 
for almost
two decades.  The trade-off in writing C/C++ code is that it is one of the most
efficient languages currently in use.  We've also taken a route which allows it
to be 'imbedded' in other environments.  rtcmix~ was the first of the 'language
objects' I did for max/msp.  iRTcmix (RTcmix in iOS) even passes muster at the
clamped App Store, check out iLooch for fun:  
http://music.columbia.edu/~brad/ilooch/
(almost 2 years old now).

For me the deeper issue is how these various languages/environments shape
creative thinking.  I tend to like the way I think about music, especially 
algorithmic composition, using the RTcmix parse language better than I do in, say, SC.  Each system
has things 'it likes to do', and i think it important to be aware of these.


Indeed.

The problem with "plug unit generators languages" for me is that they 
privilege the process (network of unit generators) over the content (the 
signal). Programming in C++ makes the signal efficiently accessible. 
Nothing wrong with patchable environments of course :) just that they're
not the whole story.


Ross.




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-24 Thread Ross Bencina

Hi Charles,

On 24/02/2012 10:45 PM, Charles Turner wrote:

>  Anything else is just plugging unit generators together, which is limiting 
in many situations


Has it escaped me that Audio Mulch supports this kind of interpretation?


Hi Charles,

I'm not exactly sure what you think has escaped you. But I wasn't really 
bringing AudioMulch into this :)


AudioMulch is a high level environment for patching pre-built modules 
together. The modules are high level (at the level of musical effects, 
drum machines etc) and patching at this level makes sense (to me at 
least). It's a kind of component assembly model and isn't really related 
to programming per-se.


There are a couple of levels of abstraction below this (which aren't 
directly available to the user of AudioMulch unless they start writing 
plugins):


1. The "plugging unit generators together" which you get with Reaktor, 
Csound, Pd, etc.


2. The sample-manipulation level that you get with C/C++/assembly
language.


My point above being that level (2) is distinct from level (1),
and in my view, too much emphasis is placed on level (1). Cmix is one of 
the few systems I've used that provides access to level (1) and (2) in a 
relatively seamless way.


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-24 Thread Ross Bencina



On 25/02/2012 4:50 AM, Adam Puckett wrote:

Is there a minimal example of a complete working program that renders
a sine wave in realtime using Kernel Streaming that will compile with
just a bare MinGW install? (I have the latest GCC 4.5 on Windows XP
Service Pack 3). Which DirectX SDK do I need, the one from Microsoft,
or the one fromhttp://libsdl.org/extras/win32?


On WDM-KS note that portaudio (portaudio.com) recently merged WDK-KS 
with WaveRT to trunk. It might be an easier option than doing it from 
scratch with Microsoft SDKs.


Not sure whether it builds with MinGW though.

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-25 Thread Ross Bencina

Hi Andy,

On 25/02/2012 5:05 AM, Andy Farnell wrote:



The problem with "plug unit generators languages" for me is that they
privilege the process (network of unit generators) over the content


Some really interesting thoughts here Ross. At what level of
granularity does the trade-off of control, flexibility and
efficiency reach its sweet spot?


Part of my argument is that there's a problem if your synthesis language
is not Turing complete. C/C++ and MP4-SAOL are two options here. I'm not
sure Faust gives you that.




In some ways the unit generator or patchable code-block
model is to be considered a compromise between the overhead
of calling functions on single samples and being able
to process chunks. It comes bottom up, out of implementation needs
rather than being a top down shorthand. On the other hand,
because familiar structures like the filter, oscillator and so forth
make sense as basic units of design the VM + Ugen model makes
a lot of sense to practitioners coming from the studio.


Agreed.

As an aside: studio equipment (typically, not always) has this property 
that all couplings are impedence matched, or buffered. What something is 
plugged into doesn't affect what it does (to a first order 
approximation). This is not how acoustic systems nor electronic circuits 
work. So that's one problem with the abstraction. Indeed increasingly 
audio software is incorporating iterative solvers for this kind of 
thing, although we don't yet see computer music languages with full 
solving capabilities like QUCS or SPICE.





Plenty of analogous structures in general computer science
have similar rationales, like pipelines, SIMD, with the
question being at what level of granularity can you lump a
bunch of stuff together and process it all without sacrificing
flexibility? Even apparently atomic instructions are, from the
microprocessors point of view, collections of more atomic
register operations that we never consider unless programming
in machine code.


Right. But unless you're programming in assembly language these
constraints (and benefits) don't usually impact the outcome. The 
compiler provides a generalised model of computation and optimises to 
these low level structures.


The problem in something like Music-N-with-control-rate derived 
languages is that the optimisation surfaces as a strict constraint in 
the structure of the high level language.


Faust is one example that gets away from this, although I'm pretty sure
it is not usefully Turing complete.


MP4-SAOL is the best thing I've seen so far that allows control-rate 
semantics/optimisations but doesn't force them (you can write a single 
sample delay and the compiler will recognise it as such).




Anything else is just plugging unit generators together, which is
limiting in many situations (one reason I abandoned these kind of
environments and started writing my algorithms in C++).


As linguists and writers note (Wittgenstein, Orwell, Ayer, Chomsky etc)
language defines the modes of thought and facilitates or limits what
we can do more or less easily. I guess plenty of studies have been
done of the "expressibility" of computer languages, since they are
strictly formal and amenable to analysis. Though we tend to invoke
"Turing completeness" and assume all roads lead to Rome, clearly some
languages are much better for certain things than others.


I don't think we have to resort to the Sapir-Whorf hypothesis here.
(There was a great thread on that on the POTAC list last year, btw.)


I think you're right about Turing completeness though: in my view, you
need something that is expressively Turing complete and can operate on
individual samples within that language framework. C/C++ and MP4-SAOL
give you this. CSound doesn't, at least not easily (perhaps with ksmps=1
you can get close), and neither do any of the split synthesis graph/control
language systems (SC3, LuaAV, pd, maxmsp).





Grist for the mill in computing philosophy, but as musicians or
sound designers it takes on a freshness. For example, the ease with
which polyphony can be conceived in Supercollider and Chuck is
amazing compared to Pure Data/Max, which makes it an awkward hack
at the best of times. Csound is somewhere between. And of course,
though Csound is clearly conceived as a _composers_ language where
large scale structures are easy to build, abstraction is very obtuse.


Well I would question whether CSound is a composer's language. To me it's
an audio rendering language. Whenever I used CSound in my composition 
workflow I would write Python scripts or C programs to generate scores, 
so the composition didn't happen in CSound per-se.




I remember Gunter Geiger's thesis being a good comparative
treatment of different computer music languages, but that was
mainly from a computational rather than expressibility angle.
Maybe there's a good doctoral project for someone lurking in this
question.


Ah didn't realise he ever finished, I should look it up.



P

Re: [music-dsp] a little about myself

2012-02-25 Thread Ross Bencina

Hi Michael,

I agree with you on the below within the "dominant paradigm", but there 
are a few problematic assumptions that are central to this discussion.
What follows is a somewhat controversial rant, apologies in advance...



On 25/02/2012 6:21 AM, Michael Gogins wrote:

The two "languages" hidden under the single name "unit generators" are:

(1) An efficient way to implement a runtime compiler for block-based
audio DSP chains (these are what are in software synthesizers)...
which usually are based on


As if block-based DSP chains are the only way to make sound with a computer.



(2) A metaphor for signal processing operators as being linear time
invariant operations inside "little black boxes" (these are what are
inside analog synthesizers such as the original modular Moog or the
Buchla).


The whole LTI thing is incredibly problematic. We've known for a long 
time that time invariance was just a convenient lie. Things like BIBO
stability have been relevant to audio-rate modulation for at least 10 
years. Then there is the issue with linearity -- almost nothing 
"musical" is linear. As I think I said in my previous post, non-linear 
solvers are now becoming common in analog modelling.




Both (1) and (2) assume linear time invariance which guarantees that
you can wire up blocks any old way you like and still understand what
the result will be, because it will all be linear addition and
multiplication. But, in practice, when you open up a block from level
(1) you don't usually find smaller more primitive blocks from level
(2), with the possible partial exception of Pure Data or Reaktor. What
you usually find is gnarly, hard to read procedural C code with loops,
arithmetic, and some functions operating in a linear time invariant
way on arrays of samples.


I don't really agree with you on this "gnarly, hard to read procedural C 
code" business, except in the case of CSound which is truly gnarly. I 
always found Cmix ugens to be very clear for example. SuperCollider 
ugens are pretty good too.


In any case, part of the point is, if you want to control the individual 
samples, and you know C++ then it's not that big a deal.




What stands in the way of adopting plain old C++ for software synthesizers is:

(a) The metaphor behind (2) is very useful for helping the
non-technical musician get a handle on what is going on, especially if
they have spent hours fooling around with a Buchla or something like
that.


The assumption here is that the target audience is the "non technical 
musician". I think this is completely spurious. Talking about 
limitations or capabilities of computer programming languages is kind of 
pointless if you allow for the novice user who doesn't even know what a 
turing complete language can do.


In my view, anyone who cares about being a composer of *computer music* 
(in the context of this discussion) needs at least the following (or
equivalent) undergraduate knowledge:


1) music composition degree
2) computer science and/or software engineering degree
3) electrical engineering degree

Maybe even this is required:
4) Tonmeister/ sound engineering degree

I only have (1) and maybe I could pass some of (2) based on industry
experience, and I'm starting to study (3).. so I don't yet consider 
myself qualified. But I know enough about composing and writing software 
to know that if you want algorithms to be part of your compositional 
palette (and why else use a computer?) then LTI signal flow is not enough.


Of course there are composers who use computers in different ways, say 
more along the lines of a traditional musical instruments, or as a kind 
of musical word processor, in this case the pre-requisites are of course 
different.




(c) It is easy to write and, more importantly, to read C++ signal
processing code that processes one sample at a time, but only code
that processes blocks of at least 20 to 200 samples at a time is
really efficient enough to use; unfortunately this kind of code is
significantly harder to write, and, more importantly, to understand -
the metaphor of (2), and other metaphors, inevitably becomes obscured.


I think this can be mostly resolved with a good vectorising compiler 
(which may or may not exist).


That said, I think the runtime environment is also important. A 
livecoding runtime like extempore is preferable to the traditional C++ 
toolchain.




That said, I still believe that properly written C++ unit generators
could be easy to implement, easy to write with, easy to understand,
and efficient. But this would be a great deal of work and the very
first steps have to be just right or the rest all goes astray. I think
it would be important to use blocks that would "look like" individual
samples, it would be important to use smart pointers or garbage
collection so the composer never has to worry about memory management,
and the overall naming conventions would have to be at once concise,
elegant, easy to read, and remind one of those 

Re: [music-dsp] a little about myself

2012-02-25 Thread Ross Bencina

On 25/02/2012 2:38 PM, Adam Puckett wrote:

What is WaveRT? I don't see it in the tarball.


WaveRT is a recent WDM-KS driver sub-model that was introduced in 
Windows Vista. It is the version of WDM-KS that people seem to get 
excited about as offering the lowest latency and efficiency. I can't 
remember the details but at its core I think it's similar to the ASIO 
driver model.


In any case, in PortAudio it's part of the wdmks host API.

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-25 Thread Ross Bencina

On 26/02/2012 6:23 AM, Bill Schottstaedt wrote:

In linguistics, it's known as
the Great Eskimo Vocabulary Hoax.


It's also known as the Sapir-Whorf hypothesis. There are strong and 
weak versions of the hypothesis. The whole thing isn't necessarily 
completely a hoax.


http://en.wikipedia.org/wiki/Linguistic_relativity

(see Present status section).


Here's a short excerpt from a discussion on the POTAC list last year. I 
really liked Dan Stowell's introduction of the idea of long-term bias 
effects [1]:


Ross Bencina wrote:
>> In any case I am not reverting to strong-SW here.. just suggesting
>> that the notation system biases what we think of as music. It
>> doesn't mean we can't think of other things as music, or even work
>> out ways of notating them with CMN or programming languages.

In reply, Dan S wrote:
> Nicely put. I agree with all this. You don't need strong-SW, since
> mild biasing factors can have strong effects over an extended period
> of time (that's one of the mainstays of evolution by natural
> selection, of course, but in other types of system too). I've always
> thought that western music notation colours a lot of people's musical
> expression in ways I don't like. Perhaps because I'm into timbral
> gestures; or perhaps I'd think that anyway, whatever was the dominant
> representation.



I wrote the same basic kinds
of music whether by hand ("Daily Life"), using SCORE ("Sandcastle"),
using Pla + Samson box (many tunes), etc.


I'm not sure whether you want to unpack that, but a couple of obvious 
issues are that (1) I would think it's difficult for a composer to judge 
the effects of the tools they use on their compositional output, (2) you 
wrote Pla so it is really part of the composition, (3) just because you 
wrote the same "basic kind of music" independent of tool used I'm not 
sure that says much about the broad impact of tool structure on practice.


On the other hand, I'd agree that most of what actually constitutes music 
is not directly captured by the tools. By which I mean, what musical 
expression really is lies mostly outside the domain of what is directly 
encoded in programs, languages and algorithms, in much the same way as most 
human communication is non-verbal (and hence not encoded in words).



Ross.

[1] http://lists.lurk.org/pipermail/potac/2011-July/000107.html
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-26 Thread Ross Bencina

On 26/02/2012 9:13 PM, Richard Dobson wrote:

On 26/02/2012 05:48, Ross Bencina wrote:
...

make this significantly easier to do. If someone wants to spend years
rewriting something that already works quite well, like Csound.


Depends if you think CSound works quite well. I think it is deeply
flawed. I propose a simple test:

If you can't synchronously execute a procedural algorithm at the exact
sample location of every zero crossing in an arbitrary input signal then
there is a problem with your synthesis language.

It's not the only thing that i'd find difficult in CSound but is one. I
also miss proper control flow, typed variables, and data structures.


Well, no system or language is "perfect", but anyway: Csound has had the
possibility to do per-sample processing for years, using ksmps = 1, and
in addition it now supports user-defined opcodes which can run at
ksmps=1 internally, so I see no reason at all why you could not do your
zero-crossing thing if you want to.


Perhaps I'm not being clear. My point is about being able to execute 
arbitrary code at an arbitrary time based on the value of some 
signal(s). The zero crossing thing was a simple example. You could also 
think of algorithms that are not strictly signal based such as 
multi-rate counters etc etc. Without a full-blown programming language 
the compromises become unacceptable once you know how to program in a 
"normal" language, whether that be C, C++, Java, Haskel, Scheme or Lisp.


Of course you can counter with "use the best tool for the job" -- which 
I agree with.




Whether in practice you really want
to do something every zero-crossing when you could get those every three
or four samples (or less, in principle) when the audio stream consists
of dither, is another question.


Sure, it was a simple example of something that's hard to do when you're 
dealing with a block-based system.




How would you do it in AM?


I wouldn't. I'd write some C++ code. As I said earlier I'm leaving 
AudioMulch out of it because I don't think it has relevance here. I will 
happily admit that AM is much less capable than CSound of being a 
general purpose programming language.




I am sure you
know that there is usually no such thing as "the exact sample location
of every zero crossing" as most of the time the actual crossing point is
between samples, so the best you can do is watch for a sign change. If
that is the only thing Csound can't do, it is a limitation I can easily
live with!


Are you being a pedant on purpose or did you miss my point? It was just 
an example. Anything that needs to efficiently execute signal-aware 
logic triggered at arbitrary locations in the sample stream. 
Implementing transient masked WSOLA might be another example.




We have rather a lot of control flow opcodes these days (remembering
that all control flow idioms after the first are in principle syntactic
sugar), including "else" and "until".


Can you provide a concrete example/link please? My knowledge is clearly 
out of date here. Last time I knew there was just the convoluted re-init 
mechanism to do control flow.


Are CSound control flow opcodes sufficient for me to implement a 
quicksort? That's the kind of control flow I'm thinking of -- something 
that can be used with data structures to implement algorithms.




Bearing in mind that conceptually, Csound is and always has been closer
to assembly code than to a "normal" HLL,


I think it's much closer to SPICE or synchronous dataflow languages than 
it is to assembly.




and that attempts to turn it
into C or C++ (or Supercollider) are doomed to failure


Sure. To be clear: I'm not trying to change CSound. I'm not suggesting 
it should be fixed to address the issues I'm raising. I'm happy enough 
using C++. I would love to use something runtime interpreted, but so far 
nothing has emerged that is both efficient and usable (extempore may be 
a possible exception but I haven't had a chance to explore it yet).




as they
completely miss the point (as are attempts to criticise it for not being
a HLL), the lack of data structures is somewhat moot;


It might be moot for CSound but it is not moot for actually getting 
anything useful done. Which is my point. The whole paradigm is limited. 
If it works for you, great. But you seem to be arguing that CSound is 
great for all the reasons I'm arguing that it's lacking (lack of ability 
to implement arbitrary algorithms and data structures, at the control 
and signal level, basically).




and with the development of the new parser such things are actively
and regularly discussed.
Csound will always be somewhat apart from a conventional
procedural language because its working semantic unit is not a single
operation but the audio-rate vector running at krate.


Which is exactly my point.

Re: [music-dsp] a little about myself

2012-02-26 Thread Ross Bencina

On 27/02/2012 1:11 AM, Brad Garton wrote:

We're fooling around with the new Max/MSP gen~ stuff in class, it
seems an interesting alternative model for low-level DSP coding.
Once they figure out how to do proper conditionals it will be really
powerful.


Why anyone would want to use a visual patcher to write low level code is 
beyond me though. Managing complexity in Max is already a nightmare 
compared to using text-based development methods and tools. Not to 
mention how quickly you can type code compared to patching it.
They should have just added an object for live coding assembly language 
(only half kidding).


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-26 Thread Ross Bencina

On 27/02/2012 1:22 AM, Brad Garton wrote:

I would like to agree with you, because I also value all these things
(and am pretty much a dilettante in all four).  But I see an analog
with the "is a DJ*really*  a [computer music]  composer?" question
that floats around (or in an earlier generation, "is a collage
artist...").


Other analogous questions include "Should the artist be a programmer?", 
"Must creative engagement with computers involve programming?"


Then there is the whole schtick of composer as Auteur directing the 
technical minions to do the programming for him/her. A variant might be 
Composer buying pre-made tools to use to make their music.


I liked Andy's reference to "second order culture" earlier. This is one 
way of looking at software reuse by composers. Another is the 
instrument-builder <-> instrument <-> composer-performer relation.




Most DJ things I find just annoying, but lately I've
heard a few that are quite interesting, and also operating
independent of the categories 1-4 above that I know and love.


Of course you don't need all the aforementioned computer skills to use a 
computer to make *music*, and I wouldn't dare to ascribe relative value 
to the different areas of musical engagement with computers. Great music 
is being made in all sorts of different ways -- and a computer is 
sometimes part of that. But I think it's when the composer engages with 
the computer *as a computer* (whatever that means, but I think it 
involves programming, or algorithmic processes, or some uniquely digital 
manipulation methods, not as a virtual "real" thing) that it becomes 
computer music composition in the specialised sense meant here.




I guess it's the "in the context of this discussion" qualifier that
makes the difference here.


Yeah. There is such a thing as specialised "computer music" composition 
that takes in all those disciplines -- and in my view, the limits of the 
tools we are discussing are especially relevant within that context.


If you come at it from a "found object" perspective, the tools have 
certain affordances -- they're good at certain things. My impression is 
that Richard has suggested that "what they are good at" and "what they 
are intended for" is the same thing.. but I'm not convinced.


Ross.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-27 Thread Ross Bencina

Hi Andy,

Some comments, and questions for clarification...

> IIRC, most Music-N line of systems are multi-rate. That means we
> have a fast computation rate, on which audio signals are calculated,
> and a slower rate (obviously some integer factor of the audio rate),
> usually called the control rate, at which things like slow moving
> envelopes, MIDI inputs and suchlike are calculated.

That's my understanding too. I'm not sure when in the lineage the 
"control rate" was introduced.


Note that there is another wrinkle: some variants support 
sample-accurate event scheduling by slicing the audio period into 
multiple sub-blocks at event boundaries. I believe this was around from 
the early days.
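
Something like the following is what I understand by that (a rough C++ 
sketch; Event, applyEvent and renderAudio are invented names, and it 
assumes pending events are sorted and fall within the current block):

// Sketch: sample-accurate events by splitting the audio period into
// sub-blocks at event boundaries. All names are hypothetical.
#include <cstddef>
#include <deque>

struct Event { std::size_t frame; };                 // absolute frame index of the event

static void applyEvent( const Event& ) {}            // e.g. note-on, parameter change
static void renderAudio( float*, std::size_t ) {}    // ordinary block-based DSP

void renderBlock( float *out, std::size_t blockFrames,
                  std::size_t blockStart, std::deque<Event>& pending )
{
    std::size_t done = 0;                            // frames already rendered this block
    while( done < blockFrames ){
        std::size_t next = blockFrames;              // default: run to the end of the block
        if( !pending.empty() && pending.front().frame < blockStart + blockFrames )
            next = pending.front().frame - blockStart;
        if( next > done )
            renderAudio( out + done, next - done );  // render up to the next event
        done = next;
        while( !pending.empty() && pending.front().frame == blockStart + done ){
            applyEvent( pending.front() );           // takes effect on an exact sample
            pending.pop_front();
        }
    }
}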



Then, considering (Pd) message dataflow:

On 27/02/2012 2:20 AM, Andy Farnell wrote:
> The essential facts of this dataflow might be
>

There is _always_ an audio signal but there are sometimes no control messages

Control messages are computed on a block boundary


Given this formulation, that the control message scheduler is only 
pumped on the block boundary, isn't this equivalent to having a control 
rate available? Presumably there is a way to get a bang (evaluation) 
every block. From a signal processing angle this still allows the same 
semantics as Music-N multirate? (but see below).


Also note that at least in Max, the scheduler doesn't always run 
block-synchronously with the audio thread. I think there are different 
scheduling modes. See "Scheduler in Audio Interrupt (SAI)" here for example:


http://cycling74.com/2004/09/09/event-priority-in-max-scheduler-vs-queue/

This actually makes your point clearer -- if the scheduler is *not* in 
the audio interrupt then clearly it is asynchronous and is quite 
different from the Music-N multirate approach.




The audio signal is hard real time, so failure of the control to complete
before the next interrupt will result in a drop out.


Assuming SAI, which is maybe the only option in Pd, but not in Max.



Okay, given these two approaches we can see that setting kr = ar
alternatively expressed as (ksamps = 1) in the former system seems to
be the same as setting the blocksize to 1 in the latter.

BUT:

In Csound we will still get an interleaved series

[As_0, Ks_0, As_1, Ks_1, As_2, Ks_2 ... As_n, Ks_n]

whereas in Miller's dataflow we have As, the audio stream running
constantly, with the possibility for Ks to intervene _anywhere_
within the stream so long as they can complete before the next
As is demanded

[As_0, As_1, As_2, As_3 ...As_20, As_21, Ks_0, As_22 ...As_100, Ks_1, Ks_2, 
As_101]


This description doesn't strike me as completely consistent with what 
you said above. Are you saying that scheduler dispatch is triggered by 
the audio block rate but doesn't happen synchronously with it (ie it is 
still in a separate thread?) I'm not sure I understand what you mean by 
"Ks to intervene_anywhere_ within the stream" -- presumably (in the 
model you described) they only intervene at block boundaries? If the 
scheduler could be invoked at any sample location (ie sample accurate 
callbacks) then things could get interesting.




Obviously this requires one to think about difficult problems where
control and signals are closely coupled in subtly different ways for
each language.


Right, so for example, you don't implicitly have a "control sample rate" 
for doing control rate filtering etc.


Ross

P.S. Your post was in response to Brad's comment about gen~, which I 
think is more like implementing audio-rate code in a Max object than 
any of the above methods.



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-27 Thread Ross Bencina

Hi Richard,

On 27/02/2012 3:01 AM, Richard Dobson wrote:

On 26/02/2012 11:33, Ross Bencina wrote:
..


Perhaps I'm not being clear. My point is about being able to execute
arbitrary code at an arbitrary time based on the value of some
signal(s). The zero crossing thing was a simple example.


In which case it is a trivial test. Way back when I first used Csound on
the Atari ST, the documentation cited the single-sample mode for
implementing audio feedback and recursion. Not as efficient as using
dedicated opcodes, but far from impossible. Conditional tests at krate
effectively become conditional tests per sample.


If running at ksmps=1 is acceptable, then I agree. But in my view this 
is an inefficient hack. Some people are happy with that.




A language is defined not simply by what it enables, but by what it
~supports~. Csound supports and is optimised for fast audio vector
processing, but it can do scalar processing too if you really need it.
And on modern machines it is not even all that slow. Some recently added
processes such as the Sliding DFT are defined as single-sample processing.

You could also

think of algorithms that are not strictly signal based such as
multi-rate counters etc etc. Without a full-blown programming language
the compromises become unacceptable once you know how to program in a
"normal" language, whether that be C, C++, Java, Haskel, Scheme or Lisp.




All your points centre on the last - that Csound is not a "full-blown
programming language". This is only stating the obvious, and is not a
Csound "flaw".


My basic contention is that it is an insurmountable shortcoming and 
obstacle to future human creative progress for any environment that 
employs this restrictive DSL-like paradigm for creative expression in 
the domain of computer music (as previously described). This is what I 
mean by "flaw".


This is not stating the obvious, it is a position on the design of 
software tools for "computer music", as previously defined. [and yes, 
AudioMulch fails the test far more than CSound].




~Of course~ it isn't, never was and has never claimed to
be - it is a scripted interpreter specialised for audio processing. It
is a "domain-specific" language, and as such is one among many - Matlab,
SQL, HTML and so on.


Right. Although Matlab, SQL and HTML are all very different in their 
relation to a "domain" so I'm not exactly sure what your point is here.


The bulk of this argument is about where the boundaries of the domain of a 
DSL for musical audio processing lie. You would like to define that domain 
as "what Csound does" whereas I conjecture that it is essentially no 
smaller than the domain of generalised computation. Which, lest I be 
misinterpreted, does *not* require C++, templates or exceptions, but it 
*does* require efficient sample-by-sample computation, support for data 
structures, etc.




In short, this is a completely straw-man argument,


If you think that, you are missing my point. The point being that 
musical DSLs that do not provide full blown programming features present 
undesirable limitations to the current state of practice in computer 
music. This may be a marginal view, but it is not a straw man.


Further, the fact that you cite the use of Python and Lua embedded in 
CSound suggests that there already exists support for this requirement.




as you are comparing apples to oranges.


Well yes, I am saying that apples are not a nutritional option.



Indeed, the single biggest issue I have with some on
the Csound list is precisely that they want to add more general-purpose
procedural idioms to Csound, in a misguided attempt to turn into it an
audio version of C++.


No need to group my opinions with your Csound list buddies. I'm quite 
happy for CSound to continue in its limited scope. I just think that 
scope is too limited and I for one would prefer a tool that 
*efficiently* combines all levels (from sample computation to 
algorithmic composition). To some degree, ironically, Max with gen~ 
approaches this, although it is still a multi-language approach which 
to me has its problems (the problems being with where and how the 
boundaries get drawn at the interfaces between languages).




Even the addition of opcodes to embed Python or
Lua code within the orchestra was something I had great misgivings about
(i.e. a whole opcode and even effectively a whole instrument can be
written in Python or Lua), as to me it is the wrong way around - Csound
is a low-level language most suited to being embedded within
higher-level ones. Still, they work (as far as I know), and I suppose if
I try them one day I might feel differently! So we can probably have the
slightly surreal spectacle of a Python script running Csound which in
turn runs a Python-defined opcode.

..


Are you being a pedant on purpose or did you miss my point? It was just
an example. Anything th

Re: [music-dsp] a little about myself

2012-02-27 Thread Ross Bencina



On 27/02/2012 3:58 AM, Andy Farnell wrote:

> For my sonification work for LHCsound, I used Perl to parse data
> files and generate Csound scores, simply because it is a task Perl
> is canonically optimised to do and scripts can be run up very
> quickly.

Just a quick +1 for Perl as an event generator in cooperation
with Csound, especially if the project involves any kind of
network processing.


Just a quick -1 for the whole idea of orchestra vs. score and using text 
as an intermediate encoding for high-bandwidth control data.


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-27 Thread Ross Bencina

On 27/02/2012 4:11 AM, Richard Dobson wrote:

On 26/02/2012 05:48, Ross Bencina wrote:
..

(1) An efficient way to implement a runtime compiler for block-based
audio DSP chains (these are what are in software synthesizers)...
which usually are based on


As if block-based DSP chains are the only way to make sound with a
computer.



Indeed they are not; but they are often the most efficient way to do it
while retaining flexibility of configuration. For much the same reason
that block-based access to a disk file is so much more efficient than
single-word access that the latter never happens.


Thing is, most of the time you can do word-level i/o, even if this is 
not how it's implemented. Same thing with processor caches: they 
(mostly) transparently support word-level access.


This is not the same as being (effectively) forced to do page or block 
level access.




Most audio devices for
general-purpose computers have no choice but to deliver audio to the
host in bursts of, say, 32 samples, so the audio is already being
supplied as a block rather than single samples.


I don't really see how the functioning of the hardware sample delivery 
mechanism has any bearing on the required functional capabilities of a 
computer music programming system that (fundamentally) generates sound 
sample by sample.




Given the choice of one
function call on a vector, or 32 function calls on a sample, there can
still be many reasons to choose the former.


Agreed. And one would like to do so whenever possible, and not do so 
whenever required. Setting ksmps=1 may be sufficient in some 
circumstances, whereas I would say the option needs to be available at 
any moment (for each subexpression or statement say). This is more or 
less what Faust or MP4-SAOL give you for example.




The assumption here is that the target audience is the "non technical
musician". I think this is completely spurious. Talking about
limitations or capabilities of computer programming languages is kind of
pointless if you allow for the novice user who doesn't even know what a
Turing-complete language can do.



Exactly true. It is only relevant to programmers.


And as I have already said: computer musicians must be programmers *by 
definition*. Otherwise they are musicians, using computers. That's where 
I stand. If you want to take a different point of view that's fine, but 
know that this is the position from which all of my arguments in this 
thread stem.




We can drive a car
perfectly well, and even do or direct basic maintenance on them, without
knowing the minutiae of metallurgy or chemistry, and we can light and
enjoy a firework without knowing anything about Newton's laws (or
chemistry, again). Maybe just a bit about health and safety. And we can
enjoy a film and use a GPS system without having to know anything about
the interactions of photons and electrons or the relativistic
compensation of satellite signals. All these things are interesting, and
for those whose business is building them, essential. But carbon-based
life forms have managed to jump, swim, fly, throw and catch for
millennia without having any idea how gravity really works, or what
momentum really is.


Agreed. Indeed the skills required for driving may even be orthogonal to 
those required for metallurgy.




People (UK-based) concerned about the public's level of education with
respect to computing may be interested to join the Computing At School
community, dealing with the newly announced UK initiative to all but
replace the existing ICT curriculum with a computer science one
involving real programming. One tool widely used at primary level is
called "Scratch", developed at MIT:

http://scratch.mit.edu/

It supports some music programming, but arguably could do very much more
in that area.




In my view, anyone who cares about being a composer of *computer music*
(in the context of this dicussion) needs at least the following (or
equivalent) undergraduate knowledge:



Although I go along with the collective and use the term, I do not
believe there is actually such a thing as "computer music", outside the
usual limitations of the language myth. At the very least, I would
question if everyone using that term means it in the same way. There is
music (or "sound art") that has been composed or performed using
computers, and in practice there are sounds which we either know or
assume depended on computers (or tape, or strange electronics) to make
them. There is also music composed using algorithmic techniques, but
these do not all need computers, unless in the old meaning of "one who
computes". The vast majority of what is popularly called "computer
music" is well within the classical paradigm of instruments and notes,
rhythms and harmonies, drones and moments, active performers and passive
listeners, and can be listened to and enjoyed with no more than the
corrsp

Re: [music-dsp] high-level vs. low-level coding of algs

2012-02-27 Thread Ross Bencina


On 27/02/2012 10:58 AM, robert bristow-johnson wrote:

in my opinion, the optimal division of labor becomes obvious if your
system modularizes specific low-level algs.


I'm not sure it's what you meant, but I would say: if your system 
modularizes specific low-level algs there is perhaps an optimal division 
of labour.


If you're writing in a single-language system on a single instruction 
architecture the options for modularisation will be different than on a 
MPU+DSP architecture.


If you look at the modularisation in something like Apple's vDSP, the 
operations are low level, and make total sense as basic building blocks, 
but they (mostly) don't look anything like what go by the name of unit 
generators in computer music DSLs.




when i worked at Eventide 2
decades ago, i thought that division of high level vs. low level was
pretty much natural and optimal.

on the low level we programmed blocks where the sample processing was in
the 56K assembly and the "coefficient cooking" was in C. they wrote some
pretty good tools that fished the symbols outa the linkable object code
that came outa the assembler and the coefficient cooking code could
write to specific DSP variables as symbols. you didn't need to note
where the DSP address was, it would find it for you. the coefficient
cooking code was executed only when a knob was twisted, but the sample
processing code was running all the time after it was loaded. a typical
example would be an EQ filter where user parameters (like resonant
frequency, Q, boost/cut gain) would go into the block from the outside
and the coefficient cooking code would cook those parameters and send
the coefficients to the DSP where the DSP was expecting them to go.


A couple of interesting things here:

- The coefficients are recomputed using a push architecture ("when a 
knob was twisted"). This can be correlated to Pd's dataflow message passing.


- The structure of the "dsp building blocks" is actually a bit different 
from the Csound "unit generator" model where all the processing 
(typically, although not always, including the coefficient cooking) is 
bundled up into the unit generator. Notably, in the Eventide, 
coefficients become a kind of first-class data type (see the sketch below).
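
For anyone who hasn't seen that kind of split, a rough C++ sketch (the 
names are mine, not Eventide's):

// Sketch: "coefficient cooking" separated from sample processing.
// cookOnePoleLowpass() runs only when a knob moves; processOnePole()
// runs continuously and just consumes the cooked data.
#include <cmath>

struct OnePoleCoeffs { float a; };    // coefficients as first-class data

OnePoleCoeffs cookOnePoleLowpass( float cutoffHz, float sampleRate )
{
    OnePoleCoeffs c;
    c.a = std::exp( -2.f * 3.14159265f * cutoffHz / sampleRate );
    return c;                         // "pushed" to the DSP side on a knob change
}

void processOnePole( const float *in, float *out, int n,
                     const OnePoleCoeffs& c, float& z1 )
{
    for( int i = 0; i < n; ++i ){
        z1 = (1.f - c.a) * in[i] + c.a * z1;   // y[n] = (1-a)x[n] + a y[n-1]
        out[i] = z1;
    }
}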




but patching these blocks together was (eventually) done with a visual
patch editor. if, to create an overall "effect", you're laying down a
modulatable delay line here, a modulatable filter there, and some other
algorithms that have already been written and tested, with definable
inputs and outputs, why not use a visual editor for that?

but, if you need a block that does not yet exist, you need to be able to
write hard-core sample processing code in a general purpose language
like C (or the natural asm for a particular chip). then turn that into a
block, then hook it up with a visual editor like all of the other blocks.


That works up to the point that you encounter algorithms that can't be 
easily implemented by combining blocks, and are at the same time too 
complex to be "simple building blocks" at the hard-core sample 
processing level (by which I mean that they end up requiring additional 
modularisation internally within a block). Then you end up with a bunch 
of "monster blocks" that kind of break the elegance and obviousness of 
the presumed optimal division of labour. I'm not sure about the 
Eventide, but many unit generator based systems seem to have acquired 
monster blocks -- Reaktor has grain objects with a bazillion parameters, 
CSound has ugens for entire physical models, and even though the bulk of 
the language is low-level unit generators, MP4-SAOL defines a few effects 
opcodes (chorus, dynamics processors).


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-27 Thread Ross Bencina

On 28/02/2012 8:55 AM, Richard Dobson wrote:

On 27/02/2012 21:00, Ross Bencina wrote:
..


And as I have already said: computer musicians must be programmers *by
definition*. Otherwise they are musicians, using computers.


But there is no single universally agreed definition of "Computer
musician".


As previously posted this is my definition, like it or lump it:

On Feb 26, 2012, at 12:48 AM, Ross Bencina wrote:
> In my view, anyone who cares about being a composer of
> *computer music*
> (in the context of this discussion) needs at least the following (or
> equivalent) undergraduate knowledge:
>
> 1) music composition degree
> 2) computer science and/or software engineering degree
> 3) electrical engineering degree
>
> Maybe even this is required:
> 4) Tonmeister/ sound engineering degree

If you feel that my use of terminology is distracting from the central 
purpose of the discussion I would be quite happy to call them "User X". 
They are, I believe, the primary audience for "use-case Y", that is, "DSLs 
for User X to create music."




It is very likely a super-category. You have your definition,
but it is not the only one out there.


Sure, but we don't need to further argue semantics, I've specifically 
qualified my usage as relevant to this discussion about limitations of 
some existing tools / paradigms.




In English we tend to aggregate
nouns, so we say "computer musician". What you really mean is "computing
musician". How about "music informaticist"? It may catch on. In the
meantime, people working intensely with DAW such as Pro Tools or Cubase
call themselves "programmers". And there is of course, I hope you will
agree, nothing wrong or inferior about being a musician who uses
computers.


Somewhere else in the thread I already explicitly said that I wouldn't 
dream of casting any aspersions or associate any relative merit or value 
to different approaches. However, just as there are better and worse 
violins for violinists, there are better and worse programming systems 
for User X.




And there are many many levels of "computing". What
equivalent two-word name would you give them?
There are musicians who do
not use a computer but who think and compose algorithmically. What do we
call them? For nobody really exists until we can call them something.


I am afraid you have lost me here. I have tried to provide you with 
sufficient description to contextualise my argument. If you don't like 
my use of two-word terms that is fine. I'm not attached to them, 
especially since it appears that all they have done is distract you. 
On the other hand, I am attached to the underlying notions that I used 
those terms to denote (perhaps modulo my omission of mathematics 
from the mix).


Ross.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-28 Thread Ross Bencina

On 29/02/2012 11:41 AM, Richard Dobson wrote:

On 28/02/2012 16:03, Bill Schottstaedt wrote:

I don't think this conversation is useful. The only question I'd
ask is "did this person make good music?", and I don't care at all about
his degrees or grants. One of the best mathematicians I've known
does not even have a high-school diploma. If I find such a person,
then it's interesting to ask how she did it. But there are very few,
and no generalizations seem to come to mind.



I agree,


I'm not sure whether (or which) conversation is useful, but I totally 
agree with "good music" being the important thing in the end. That said, 
"result orientation" has been the subject of criticism in certain 
circles, and there exists a counter view that it's the process, not the 
result that matters.




but I think such conversations can be useful. "Computer Music"
does possibly suffer more than most from what I might mischievously call
the "expertise problem" - how does the "typical" listener recognise all
the skills that have been brought to bear in a piece?


(In my view) the listener need not recognise "all those skills." The 
audience is presented with acoustic stimulus and the first order 
experience is usually auditory. What happens beyond that depends, among 
other things, on the mode of listening, but in general it's safe to say 
it won't typically be an experience of thinking about expert-level 
technical production details.


The typical listener may not know much about late 19th century tonal 
harmony but can appreciate music that employs it. One knows when a 
seamstress/tailor/brain surgeon/street cleaner/cabinet maker/film 
production team/ has done something "good" without needing to know about 
the specialised skills and tools involved.



> They presumably have to do that at least to some extent, to decide if
> the piece is "good".

They may recognise some of the skills, to some extent, perhaps -- but I 
think more often the "product" is experienced on its own terms, as 
sound. There's a genotype/phenotype relationship between skills and the 
musical result: maybe the skills and means are apparent, maybe they 
aren't -- maybe the composer has worked hard to conceal the techniques 
of production -- the appreciable skill might be the invisibility of the 
process (like the absence of visible brush strokes in the paintings of 
the Dutch masters.) Thus the criteria for "good" music should not 
require recognition of "all the skills that have been brought to bear in 
a piece."


But such recognition can certainly be a component of music appreciation.
Art speaks into a context which comprises some subset of knowledge of 
everything that has gone before, and this could, under some 
circumstances, include knowledge of production processes, organising 
principles and structuring techniques -- whether in general, or with 
respect to a particular composer or work.


There are forms of "computational art" (e.g. "net art", "software art", 
 perhaps also "live coding", and I mean these with reference to 
specific art movements) where the computational process has been 
aestheticised. (Perhaps analogous to the aestheticisation of "concept" 
in modern art.) One could also argue that in modern classical music, 
performance technique has become aestheticised. But this reification of 
"technitution" as Harry Partch called it, is by no means a universally 
accepted definition of what is or should be considered "good."




The temptation I see is to focus more on the process than the
product  - understandable enough for a composer, but not necessarily
practical for the listener.


Agreed. It can sometimes be a problem when the aestheticisation of the 
internal (technical) structure of an artwork is considered key to the 
appreciation of the artwork. Perhaps this is what you're getting at?


These days I see computer practice with a strong focus on "process over 
product" as a branch of the "software art" movement. Personally I think 
it is critical that music must be "good" on merits related to the domain 
of musical/auditory perception and cultural experience. But allowing for 
appreciation of computational aesthetics as an integral part of the work 
may also be valid -- some might even argue that computational aesthetics 
are more important than the musical result.. but at that point I think 
it ceases being music, which is why categories like "software art" are 
useful.




Ross, for example, stipulated that a
"computer musician" not only needs to be a programmer, but also have
undergraduate level EE expertise. The process is clearly paramount.


The driving concern here is to be sufficiently prepared to engage with 
the computer in an act of composer/performer programmed computation that 
results in musical sound -- i.e. computation as a medium of musical 
expression. This isn't any more about "the process" as it is about 
having the requisite skills to create "the product."



> I do wonder how that would be unambi

Re: [music-dsp] very cheap synthesis techniques

2012-02-28 Thread Ross Bencina

On 29/02/2012 8:00 AM, douglas repetto wrote:

Oh, come on, transistors are for babies. Real composers roll their own
diodes!

http://hackaday.com/2010/03/05/diy-diodes



Etching your own transistors is still pretty cool:

http://www.youtube.com/watch?v=w_znRopGtbE

Might take a while to make enough to bootstrap a C++ compiler though...

R.

P.S. Jeri Ellsworth is a total legend.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] maintaining musicdsp.org

2012-04-05 Thread Ross Bencina

Hey Bjorn,

On 5/04/2012 1:52 AM, Bjorn Roche wrote:

Any thoughts about modernizing the whole thing with a fresh CMS? I
think it would be easier to maintain, have built-in spam filters, and
it would be easier to have multiple people do the work. Plus it would
look more attractive. I don't think it would take much effort to redo
the whole thing in, say, drupal.


Have you ever set up a Drupal site? I have. It is not for small-time, 
non-commercial, low-maintenance overhead projects imho.


Imho it would be a huge job to port the current site to Drupal and there 
is a lot of ongoing maintenance required to keep security patches up to 
date etc etc. Doing the theme port alone would be a lot of work.


Unless I'm completely out of touch it is really non-trivial to set up 
something like musicdsp.org in Drupal with adequate spam filtering. The 
standard Drupal captcha solution (Mollom) is not great -- in my 
experience it flags a lot of false positives (legitimate posts flagged as spam).


Anyway, this is really just a vote against Drupal for musicdsp.org, not 
against using a CMS.


I actually think the current ad-hoc php solution is not so bad -- but 
Bram knows more about these things than me.


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] OSC problem on STM32F4Discovery

2012-04-09 Thread Ross Bencina

Hello Julian,

Assuming all variables are integers, if phaseInc doesn't evenly divide 
TABLESIZE then you will get a different waveform amplitude for adjacent 
cycles. Perhaps that's what you're seeing?


At each cycle start the value of phase will be different. The pattern 
of different waveforms will repeat with a period given by the LCM of 
phaseInc and TABLESIZE.


Usually you would use a fixed-point phase accumulator and phase 
increment so you can have a subsample-accurate increment. That won't 
change the fact that frequencies that don't have an integer-sample 
period will look different each cycle.
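
For example, something along these lines (just a sketch, with an arbitrary 
table size and bit split):

// Sketch: 32-bit fixed-point phase accumulator with linear interpolation.
#include <cstdint>

const int TABLE_BITS = 10;                      // 1024-entry wavetable
const int FRAC_BITS  = 32 - TABLE_BITS;         // sub-sample fraction

float wavetable[1 << TABLE_BITS];               // filled elsewhere

uint32_t phaseIncrementFor( float freqHz, float sampleRate )
{
    // the full 32-bit range is one cycle, so the increment carries a
    // fractional part and isn't rounded to an integer number of samples
    return (uint32_t)( freqHz / sampleRate * 4294967296.0 );
}

float tick( uint32_t& phase, uint32_t increment )
{
    uint32_t index = phase >> FRAC_BITS;
    float frac = (phase & ((1u << FRAC_BITS) - 1u)) / (float)(1u << FRAC_BITS);
    float a = wavetable[index];
    float b = wavetable[(index + 1u) & ((1u << TABLE_BITS) - 1u)];
    phase += increment;                         // wraps for free at 2^32
    return a + (b - a) * frac;                  // linear interpolation
}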


Ross.



On 9/04/2012 11:28 PM, Julian Schmidt wrote:

Hello all,

I'm trying to get an oscillator running on the STM32F4 discovery board.
I'm sending my audio with 44100Hz via I2S to the onboard codec.
However I'm experiencing strange problems.

my code is quite straightforward:

phase += phaseInc(440); // calculate the phase increment for 440 hz and
add it to phase
if(phase >= TABLESIZE) phase -= TABLESIZE;
return phase/TABLESIZE;

this should give me a non-bandlimited saw from 0 to 1.f
(I also tried it with an actual wavetable in use. a sine is played fine.
but as soon as overtones are added i get the fluttering)

My problem is that i get periodic amplitude modulations on the overtones
about 2-3 times a second.
the root frequency amplitude is fixed, but the higher the overtone
frequency, the more its amplitude drops every now and then.
this screenshot illustrates the waveform. the upper waveform has some
ringing on the discontinuity and the lower has not.
http://i.imgur.com/JyuEr.png (this image is taken from an inverted saw
wavetable. but the flutter is the same, no matter if i use a bandlimited
WT or output the phase counter directly)
the waveform oscillates between the upper and the lower image.

the amplitude flutter from the overtones is also frequency dependent.
at 439 and 441 Hz its not really audible.

so my thought is that it has to do with the wrap of the phase counter.
but i can't get my head around whats wrong.
any ideas where this change in overtone amplitude could come from?

cheers,
julian






--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews,
dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] To EE or not to EE (Was: Job at Waldorf and Possible Job Opportunity)

2012-05-02 Thread Ross Bencina

Hi All, (but especially Stefan and Al)

I'm wondering if I can draw you on what is it about Electrical 
Engineering qualifications that is important to these kind of jobs (I 
have some ideas, but not the full picture, since I'm not an EE).


I was interested to see in Stefan's recently posted job:

..."Advantageous:
- Some insight into electrical engineering
[...]
Given identical qualification, we prefer candidates without a formal 
degree."...

-- http://www.waldorfmusic.de/en/jobs.html

What is problematic about formal degrees in this context?


Then Al posted a job:
..."We are considering a broad range of candidates, from recent 
graduates (electrical engineering or convince us otherwise)"



I'm someone with a computer music and software development background 
who's just started taking some math subjects in my "spare time" to fill 
in some gaps -- so I'm guessing that mathematic modelling of electronic 
systems and digital signal processing mathematics are a big part of what 
you're after.


Can you clarify what skills you anticipate from EE graduates or people 
with "insight into EE"?


Thanks!

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Wavetable interpolation

2012-05-07 Thread Ross Bencina

On 8/05/2012 7:45 AM, ChordWizard Software wrote:

Simple linear interpolation looks easy enough to achieve, but does it
introduce an unacceptable level of distortion?


The distortion is a function of (at least) the spectrum of the audio 
stored in the wavetable, the interpolation method, and the change in 
playback rate.


Note also that interpolation is a kind of filter. Consider linear 
interpolation of a point half way between two samples:


y[1] = (x[0] + x[1]) / 2.

That's a moving average with a window of size two. It's going to damp the 
high frequencies (and cancel at nyquist). Although in general you're not 
always interpolating half way between samples, this filtering effect is 
sometimes detectable (if you can hear above 10k!), but it's not exactly 
distortion.




Is there a rule of
thumb for minimum numer of samples per waveform to keep artifacts of
this type undetectable?


"Undetectable" by whom?


I remember back in the day reading about some later Emulator samplers 
using optimised 8-point sinc interpolation with pre-filtered source 
samples (I think there's a patent if you want to look it up). Even that 
would not be classed as undetectable under some criteria.


On the other hand there is plenty of sound making it onto recordings 
that has been linear interpolated.


Potentially you can oversample the audio using a high quality off-line 
interpolator to limit the bandwidth and then apply a low-order 
interpolator (some discussion here, although it's not the best reference):


http://www.student.oulu.fi/~oniemita/dsp/deip.pdf

The standard introduction to interpolation is the "Splitting the Unit 
Delay" article:


http://signal.hut.fi/spit/publications/1996j5.pdf

I realise that's possibly more science than you're looking for.


As for a rule of thumb, aside from what Robert already said, I would say 
the rule of thumb is "use linear or cubic Hermite interpolation" unless 
it is important to have better (and slower) interpolation, in which case 
the sky's the limit -- you'll need to define proper error margins and 
evaluate your approach mathematically etc.
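
For reference, this is all I mean by those two (a quick sketch, where x is 
the fractional position between y1 and y2):

// Sketch: linear vs. 4-point cubic Hermite (Catmull-Rom) interpolation.
inline float interpLinear( float y1, float y2, float x )
{
    return y1 + (y2 - y1) * x;
}

inline float interpCubicHermite( float y0, float y1, float y2, float y3, float x )
{
    // Catmull-Rom coefficients for the 4-point, 3rd-order Hermite interpolator
    float c0 = y1;
    float c1 = 0.5f * (y2 - y0);
    float c2 = y0 - 2.5f*y1 + 2.f*y2 - 0.5f*y3;
    float c3 = 0.5f*(y3 - y0) + 1.5f*(y1 - y2);
    return ((c3*x + c2)*x + c1)*x + c0;
}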


As usual there's a trade-off between space, time and quality.



Also, as an aside, is it general practice to calculate sample values
with doubles (8 bytes) or are floats (4 bytes) generally adequate?
I'm targeting CD-quality audio (shorts @ 2 bytes) so I'm not sure if
the extra precision during calculations helps much.


General practice is to do what is both efficient and accurate enough on 
your given architecture. If doubles are not slower, then you'd use them.


On a PC I don't think it would be uncommon to use a double for the phase 
index and phase increment. You don't necessarily need to use the full 
precision to perform the actual interpolation but it helps for frequency 
precision.


For a variable frequency design the precision of the phase index can 
also impact signal-to-noise.


Ross.









--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Pointers for auto-classification of sounds?

2012-06-08 Thread Ross Bencina

Hi Charles,

On 9/06/2012 3:36 AM, Charles Turner wrote:

Here's my project: say I have a collective of sound files, all short
and the same length, say 1 second in length. I want to classify them
according to timbre via a single characteristic that I can then
organize along one axis of a visual graph.


The research paradigm over the last decade in Music Information 
Retrieval (MIR) is to talk about "descriptors". These are often single 
values somehow derived from a segment of audio. For examples of simple 
descriptors you could look at the MPEG-7 low-level audio descriptor set:


They're listed under "AudioLLDScalarType" on the diagram here:
http://mpeg.chiariglione.org/technologies/mpeg-7/mp07-aud(ll)/index.htm

Googling "mpeg-7 low level audio descriptors" will get you some things 
to read.


Spectral centroid comes to mind as a simple descriptor you might want to 
investigate.
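
For instance, a minimal sketch, assuming you already have FFT bin 
magnitudes for a frame:

// Sketch: spectral centroid of one analysis frame, from FFT bin magnitudes.
#include <cstddef>

double spectralCentroidHz( const float *magnitude, std::size_t numBins,
                           double sampleRate, std::size_t fftSize )
{
    double weightedSum = 0.0, total = 0.0;
    for( std::size_t k = 0; k < numBins; ++k ){
        double freq = k * sampleRate / fftSize;  // centre frequency of bin k
        weightedSum += freq * magnitude[k];
        total += magnitude[k];
    }
    return (total > 0.0) ? weightedSum / total : 0.0;
}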


But there are literally 100s of descriptors that have been published in 
the literature.


Marsyas is a commonly used MIR analysis tool. You can probably get it to 
spit out a bunch of descriptors for your audio files:


http://marsyas.info/

Douglas also mentioned Echo Nest which might get you somewhere.


MIR is a pretty big research field in its own right these days.

Mailing list:
http://listes.ircam.fr/wws/info/music-ir

Conference (with proceedings published on line):
http://www.ismir.net/


In terms of "single characteristic" maybe you want to think about 
whether that single value you use to position sounds on your line is 
actually a single descriptor, or some function of other descriptors. The 
key phrase here is "dimensionality reduction". With this you can 
potentially take audio tagged with many descriptors and use this 
information via dimensionality reduction to place the sounds on a 2d Map.



HTH

Ross.







The files have these other properties:

   . Amplitude envelope. I don't need to classify by time
characteristic, but samples could have different characteristics,
 ranging from complete silence, to a classical ADSR shape, to
values pegged at either +-100% or 0% amplitude.

   . Timbre. Samples could range in timbre from noise to
(hypothetically) a pure sine wave.

Any ideas on how to approach this? I've looked at a few papers on the
subject, and their aims seem somewhat different and more elaborate
than mine (instrument classification, etc). Also, I've started to play
around with Emmanuel Jourdain's zsa.descriptors for Max/MSP, mostly
because of the eazy-peazy environment. But what other technology
(irrespective of language) might I profit from looking at?

Thanks so much for any interest!

Best,

Charles
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Pointers for auto-classification of sounds?

2012-06-09 Thread Ross Bencina

On 9/06/2012 3:36 AM, Charles Turner wrote:

Here's my project: say I have a collective of sound files, all short
and the same length, say 1 second in length. I want to classify them
according to timbre via a single characteristic that I can then
organize along one axis of a visual graph.


It occurs to me that there is some uncertainty in other people's 
responses about whether you really want to "classify" the sounds, or 
whether you want to project a parametric timbre space onto a line.


Models of timbre space generally consider multiple dimensions.

What does the line represent? Is it a continuum or is it broken into 
(disjoint) segments? (eg left half for harmonic sounds ordered according 
to one function and the right half for inharmonic sounds ordered to a 
different function).


I don't believe that the timbre space of real sounds is continuous, 
smooth, connected or evenly populated. These issues will need to be 
addressed in the mapping to a line. For example, you may not want a 
linear scale, but rather one that "zooms in" on the clusters 
of sounds.


Then there is the issue of whether parametric timbre space is somehow 
equivalent to descriptor space. Of course you could sidestep this issue 
by dealing only with descriptors (as Robert's solution does). However 
you need to consider whether by "timbre" you principally mean a 
mathematical description (following, say, Helmholtz) or a description of 
human/cognitive/perceptual experience (qualia), perhaps following the 
programme of Pierre Schaeffer. For an English language summary of 
Schaeffer's typomorphology see Palombini's PhD thesis "Pierre Schaeffer's 
typo-morphology of sonic objects." http://etheses.dur.ac.uk/1191/ See 
for example the various typological dimensions in the tables at the end 
of the thesis.


I'm not sure how well this latter qualia-centric interpretation of 
timbre is represented by current MIR research, it's a few years since I 
checked.


Cheers,

Ross.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] _ Pointers for auto-classification of sounds?

2012-06-14 Thread Ross Bencina

On 14/06/2012 5:29 PM, Andy Farnell wrote:

Maybe this isn't the same hazard of dimensionality that Dan
warns us of... I'm saying the space is definitely _warped_
with big areas of nothingness between apparently close
and similar points. Does that make sense?


Seems to me there are at least three different spaces:

A- "environmental" timbre space (perhaps the space of physically 
realisable sounds? or the space of previously experienced stimuli)


B- "perceptual/cognitive" timbre space

C- synthesis parameter space(s)


Maybe B is a function of neural structures which are either genetically 
or experientially developed over past exposure to A.


C is independent of A and B, thus it is not really surprising that the 
C->B mapping is warped. You would expect B to be more reflective of A (B 
being a kind of self-organising map of the parts of A that humans "care" 
about).


http://xkcd.com/915/

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] recommendation for VST host for dev. modifications

2012-06-29 Thread Ross Bencina

On 28/06/2012 7:05 AM, Michael Gogins wrote:

Sorry, I confused this discussion with a similar one I was trying to
get going on the Csound list, where I do want to get rid of the VST
SDK while still being able to redistribute Csound source code from
SourceForge.


I have no idea how you propose to use the VST interfaces without being 
subject to the VST licence agreement.


More than source code you need a sound legal theory. Do you have one?

Ross.


Re: [music-dsp] compensating for LPF phase delay in damped karplus strong/comb filter

2012-07-24 Thread Ross Bencina



On 23/07/2012 6:52 PM, Oli Larkin wrote:

At the moment I use basic linear interpolation for the comb filter with a 1P 
LPF in the feedback loop.


Hi Oli,

You might want to consider that the linear interpolation is also a 
lowpass filter. Most prominently when out = x[i] * 0.5 + x[i + 1] * 0.5


As Robert suggested, for static delay times it is common to use an 
integer delay line length (hence no interpolation filtering) and then 
cascade an allpass filter for the fractional delay. This doesn't work so 
well for modulated delays however.
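
For reference, the usual first-order allpass element for the fractional 
part of the delay looks something like this (a from-memory sketch, 
untested; the class and names are mine, just for illustration):

    // First-order allpass fractional delay (sketch).
    // 'frac' is the fractional delay in samples; it behaves best when
    // kept roughly in the 0.1 .. 1.1 sample range.
    struct AllpassFractionalDelay {
        float a = 0.f;   // allpass coefficient
        float x1 = 0.f;  // previous input
        float y1 = 0.f;  // previous output

        void setFraction( float frac ) { a = (1.f - frac) / (1.f + frac); }

        float process( float x )
        {
            float y = a * x + x1 - a * y1;  // y[n] = a*x[n] + x[n-1] - a*y[n-1]
            x1 = x;
            y1 = y;
            return y;
        }
    };

You cascade that after the integer-length delay line read.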


Ross.


Re: [music-dsp] stuck with filter design

2012-11-17 Thread Ross Bencina

Hello Shashank,

I'm interested in this stuff too, but I'm no expert. I've tried to give 
some pointers below. Hopefully someone else will correct me if I've made 
an error:


On 17/11/2012 8:24 PM, Shashank Kumar (shanxS) wrote:

I am a self taught Linux fanatic who is trying to teach himself Sound
Processing.

I have basic idea of signal processing.
My aim is to develop an intuition by which I can design a 2nd order
IIR audio filter given a 3dB bandwidth and a center frequency.

I am not following any specific book/text.


You might find these resources helpful:

A. RBJ's EQ cookbook has equations for the filter you want:

http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt

I believe that Robert derives the digital filter from the analog 
Butterworth transfer function. This is the method where you start with 
an s-plane prototype and then apply the bilinear transform to get a 
z-plane form. You pre-warp the important frequencies (cutoff, bandwidth) 
in the s-plane so they end up in the right place in the z-plane after 
the transform.
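
From memory, the cookbook band-pass case (constant 0 dB peak gain, 
bandwidth in octaves between the -3 dB points) works out to roughly the 
sketch below, with the pre-warping folded into the sin/cos of w0 -- 
treat the cookbook text itself as authoritative:

    // Sketch: RBJ cookbook band-pass (constant 0 dB peak gain), from
    // centre frequency f0 (Hz) and bandwidth BW (octaves). Coefficients
    // are returned normalised so that a0 == 1.
    #include <cmath>

    struct BiquadCoeffs { double b0, b1, b2, a1, a2; };

    BiquadCoeffs bandpassCoeffs( double f0, double BW, double Fs )
    {
        const double pi    = 3.14159265358979323846;
        const double w0    = 2.0 * pi * f0 / Fs;
        const double alpha = std::sin(w0)
                * std::sinh( std::log(2.0) / 2.0 * BW * w0 / std::sin(w0) );

        const double a0 = 1.0 + alpha;
        return { alpha / a0, 0.0, -alpha / a0,
                 -2.0 * std::cos(w0) / a0, (1.0 - alpha) / a0 };
    }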


I too have struggled with the mathematics behind the coefficient 
calculations. For me it comes down to understanding the derivation of 
the s-plane prototypes. I never studied the Laplace transform or much other 
continuous-domain maths. I think that would help and I should look at 
this again.


My impression is that the usual intuition develops out of the analog 
methods of filter bandwidth calculation, thus you need to understand 
these things:

- the Laplace transform
- the mathematical foundations of the Butterworth filter in the analog domain
- the bilinear transform + pre-warping method to the z-plane I mentioned above.


B. Robert also wrote a paper that surveys the different coefficient 
calculation methods; you might find it helpful in understanding how the 
various computation methods relate. I haven't looked at it for a while; 
it probably addresses some of the issues I mention above:


"The Equivalence of Various Methods of Computing
Biquad Coefficients for Audio Parametric Equalizers":
http://thesounddesign.com/MIO/EQ-Coefficients.pdf


C. Jon Dattorro's "Implementation of Recursive Digital Filters
for High-Fidelity Audio" examines implementation structures suitable for 
digital audio. I think that will help you with your second question 
about the implementation of the filter itself:


https://ccrma.stanford.edu/~dattorro/HiFi.pdf
+ Errata: https://ccrma.stanford.edu/~dattorro/CorrectionsHiFi.pdf

Hopefully someone can correct me if I'm leading you astray...

Ross.



I understand as to where should I keep my zeros and poles in Z plane
so as to boost/attenuate a particular frequency.
But I am having 2 problems:
1. I can't control the bandwidth
2. There is some kind of aliasing in the output sound. My
results for a basic LPF is on my blog (with sound o/p and freq
analysis): http://trystwithdsp.wordpress.com/2012/11/07/basic-lpf/

I know my filter sucks and I am stuck.

Can someone please give me a pointer regarding how should I improve my filters ?

Thanks a ton.
shashank

skype: shanx.shashank


Re: [music-dsp] stuck with filter design

2012-11-18 Thread Ross Bencina

On 19/11/2012 9:24 AM, Bjorn Roche wrote:

(Shashank wrote:)

I have one more question:

Why so many people use analog prototypes to get a digital filter
? Why not just put a few constraints on location of poles/zeros
on Z plane and get done with it ?


This is a really great question.


Indeed. I hope someone tries to answer the mathematical aspect of the 
question...


Perhaps student engineers (and their instructors) prefer to do a little 
algebra rather than calculus on complex-domain rational functions?




One answer is that analog filter design is a highly developed art,
and therefore serves as an excellent starting point.


I'm not sure how excellent it is given the problem of trying to map an 
infinite frequency space on to a periodic space.


It may well be an excellent starting point if you've undertaken a 
traditional education where you get one or two years of calculus and 
differential equations before you even look at a discrete system or a 
difference equation. But if you bypassed that and started programming 
computers before you got told the analog world was more important than 
the digital one...



All that said, some folks have started to think along the same lines
as you. After all 1. there may be unique digital solutions (and I'm
not just talking about FIR filters), and 2. you should be able to
learn digital filter design without also having to learn analog
filter design. To that end, here is one interesting
paper:

>http://www.elec.qmul.ac.uk/people/josh/documents/Reiss-2011-TASLP-ParametricEqualisers.pdf

Haven't seen that one... but a good paper in this direction is "A 
Generalization of the Biquadratic Parametric Equalizer" by Knud Bank 
Christensen

http://www.aes.org/e-lib/browse.cfm?elib=12429



As a side note, there are, indeed, problems associated with filters
designed with an analog prototype. For example, let's say you design
a bell filter in the analog domain, and map it into the digital
domain with a sample rate of, say 44100 Hz. Let's further assume that
in the analog domain, the gain at the niquist frequency of this
filter is 1 dB. That means the filer will boost 20 kHz by 1 dB. When
you use the bilinear transform, though, the resulting filter will
have a gain a the niquist frequency of zero. This is usually not a
serious problem, and often not a problem at all. However, if you are
designing a parametric EQ, the difference will be noticeable at
extreme settings.


This problem is addressed directly by the Christensen paper I just linked.



You could say this is one reason to do audio
production at sample rates > 44100 Hz but there are arguments both
ways there as well.



Another thing to keep in mind is that even if you get the magnitude 
response correct without oversampling, the phase response will not be 
the same as the analog prototype. And high frequency phase coherence can 
be important in audio applications.



Ross.

P.S. Shashank: my blog is at rossbencina.com , not much DSP there 
though, mainly programming and random soapboxing.





Re: [music-dsp] stuck with filter design

2012-11-18 Thread Ross Bencina

On 19/11/2012 3:29 PM, robert bristow-johnson wrote:

  Why
not just put a few constraints on location of poles/zeros on Z plane
and get done with it ?

what is "it" that you're getting done with?  sure we can see that
placing poles close to the unit circle causes a boost in dB at the
frequencies these poles are close to, and zeros cause a drop in dB.  but
if your goal was a "flat passband" and you wanted to accomplish that
well, you could either use an optimal design program that gets you an
answer without much understanding, or you can use a closed-form design
based on the analog legacy and use bilinear transform to map it to
digital implementation.  which one is better for a filter that changes
predictably when a knob is twisted?


Robert,

You appear to be reframing the question as <optimal design program> vs <closed-form design based on the analog legacy> -- that's not how I understood the question.


From a mathematical standpoint, is an opaque optimal design program 
always necessary to avoid starting from analog theory?


Can't closed-form parametric equations be derived starting from H(z) of 
a given digital structure? Isn't that exactly what you did in the "The 
Equivalence of Various Methods..." paper?


Ross.


Re: [music-dsp] stuck with filter design

2012-11-20 Thread Ross Bencina

Thanks Robert,

On 20/11/2012 3:21 PM, robert bristow-johnson wrote:

Isn't that exactly what you did in the "The Equivalence of Various
Methods..." paper?


well, no, i didn't think so.


Quite right. I should have spent more time looking at the paper. You 
state the 4 "standard" constraints in terms of H(z) but then you go back 
to the s-plane...


> i designed the EQ as an analog design and bilinear-transformed
> it. same as with the other filters in the cookbook.

Thanks very much for clarifying. I see what you did now.



i said that since there were 5 coefficients to mess around with, there
are 5 degrees of freedom,  whether it's digital or analog.  4 of those
degrees of freedom are well-nailed down.  the 5th knob to turn, the
bandwidth knob, can have its tick-marks laid out differently with
different design methods, but *any* method of designing a peaking EQ is
gonna have the same bandwidth knob, just a different mapping of the
bandwidth user parameter to the bell shape.  but, other than different
bandwidths, it's the same family of bell shapes for everybody that agree
on the 4 other constraints.  in other words, if you design a biquad
peaking EQ with your method and i design one with the same human-set
parameters (f0, gain, BW) and that has the same four constraints (gain
at DC and Nyquist is 0 dB, derivative of gain at peak is 0, and peak is
at f0 with certain amount of gain), your filter and mine can only differ
with regard to bandwidth.  all i have to do is twist my BW knob a little
and my filter will be exactly like yours (but we'll call the BW two
different numbers), even if your design philosophy is completely different.

so, for that particular filter design problem, i basically said we can
be lazy and not consider a more difficult design concept or method (like
Andy Moorer's "Manifold Joys..." paper).  just do it the simplest way
and nail down the 5 parameters in such a way that it doesn't matter how
it was done.  (and then i *still* didn't accomplish that, there is this
BW approximation when going from analog to digital or back.)

but, i designed the EQ as an analog design and bilinear-transformed it.
same as with the other filters in the cookbook.

and i better note that Orfanidis made a *different* assumption about the
gain at Nyquist than me.  but he also designed it with bilinear
transform and not directly z-plane.  (so that different assumption about
Nyquist translated to a different gain at infinity, which is how the BLT
warps the frequency response.)

and i seem to remember Knud Christensen also did it in the s-plane, but
i dunno.  i would have to look again.


That's correct. The bulk of that paper works in the s-plane, or with 
back-and-forth mappings between s-plane and z-plane to deal with warping 
of the target response.


There is one small section that deals with "pseudo-least-squares" 
calculation of biquad coefficients from a target frequency response... 
that doesn't necessarily deal in the s-plane.


They have an algorithm for mapping 5 magnitude response constraints to 5 
coefficients. I'm kinda surprised that they didn't do this directly 
against the z-plane magnitude response.




they had some kinda "Symmetry"
parameter that could turn a 2nd-order bell into a shelving EQ.  it's
like having a separate knob for the gain difference between DC and
Nyquist and an overall gain knob.  but i think that the cookbook
"significant frequency" would map differently than Knud's definition
(which i think is the geometric mean of the resonant frequency of the
zeros and the resonant frequency of the poles).  i gotta review the
paper again.  but somehow, that paper had 5 different "user" parameters
and there are only 5 coefficients and thus 5 degrees of freedom, so
those 5 different user parameters should be able to cover everything in
the cookbook by twisting them one way or another.


They claim that it can represent all possible coefficient sets, yes.

Thanks,

Ross.


Re: [music-dsp] stuck with filter design

2012-11-21 Thread Ross Bencina

On 19/11/2012 6:33 AM, Shashank Kumar (shanxS) wrote:

Why so many people use analog prototypes to get a digital filter ?


Further to this question, I just came across this brief but 
enlightening piece by Ken Steiglitz; it discusses the dawn of the use of 
the BLT in music-dsp:


http://www.cs.princeton.edu/~ken/nov05_final.pdf

"Isomorphism as Technology Transfer", IEEE SIGNAL PROCESSING MAGAZINE 
IEEE SIGNAL PROCESSING MAGAZINE, NOVEMBER 2005


R.


Re: [music-dsp] Precision issues when mixing a large number of signals

2012-12-09 Thread Ross Bencina

Hi Alessandro,

A lot has been written about this. Google "precision of summing floating 
point values" and read the .pdfs on the first page for some analysis. 
Follow the citations for more.


Somewhere there is a paper that analyses the performance of different 
methods and suggests the optimal approach. I think it is signal dependent.



On 10/12/2012 11:32 AM, Alessandro Saccoia wrote:

It's going to be a sort of comulative process that goes on in time,
so I won't necessarily know N in advance. If had a strong evidence
that I should prefer one method over the other, I could decide to
keep all the temporary X1, X2,… and recompute everything each time.
Performance and storage are not a strong concern in this case, but
the quality of the final and intermediate results is.


Dividing by N only when you need to compute the sum sounds like a good 
idea; that way you won't be degrading the precision of each value 
prior to summing it.


To avoid throwing out any information you could use high-precision 
arithmetic. Did you say whether the signals are originally integers or 
floating point? If they're integers, can you keep the sum in a 64 bit 
int? Otherwise maybe a double. You can easily compute when you will 
start to lose precision in floating point based on the range of the 
input and the number of elements in the sum (summing 2 values requires 1 
extra bit of headroom, 4 values requires 2 bits, 8 values 3 bits etc). 
So for 1024 32 bit values you'll need 10 more bits to avoid any loss of 
precision due to truncation... etc. There is also arbitrary precision 
arithmetic if you don't want to throw any bits away. There is something 
called "double double" which is a software 128 bit floating point type 
that maybe isn't too expensive.


One problem with floating point is that adding a large value and a small 
value will truncate the small value (first it needs to be shifted so it 
has the same exponent as the large value). You didn't say much about 
your values, but assuming that you're adding values distributed around a 
non-zero mean, the accumulator will increase in value as you add more 
values. Thus later summands will be truncated more than earlier ones. 
One way to minimise this is to maintain multiple accumulators and only 
sum a certain number of values into each one (or sum into successive 
accumulators kind of like a ring buffer of accumulators). Then sum the 
accumulators together at the end. This reduces truncation effects since 
each accumulator has a limited range (hence higher precision of 
summation), and when you sum the final accumulators together (hopefully) 
they will all have a similar range.
A variation on this, if you know your signals have different magnitudes 
(eg you are summing both X and X^2), is to use separate accumulators for 
each magnitude class - since these are obviously going to have vastly 
different domains.
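
For illustration, the multiple-accumulator idea might look like this 
(a sketch, round-robin version; the names are mine):

    // Sketch: round-robin the summands into a small bank of accumulators
    // so each accumulator stays comparatively small, then combine the
    // similarly-sized partial sums at the end.
    #include <cstddef>

    float blockedSum( const float *x, std::size_t count )
    {
        const std::size_t kNumAccumulators = 8;
        float acc[kNumAccumulators] = { 0.f };

        for( std::size_t i = 0; i < count; ++i )
            acc[i % kNumAccumulators] += x[i];

        float sum = 0.f;
        for( std::size_t k = 0; k < kNumAccumulators; ++k )
            sum += acc[k];
        return sum;
    }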


You also need to consider what form the final output will take. If the 
output is low precision then the best you can hope for is that each 
input makes an equal contribution to the output -- you need enough 
precision in your accumulator to ensure this.


For some uses you could consider dithering the output to improve the result.

Ross.



Re: [music-dsp] Precision issues when mixing a large number of signals

2012-12-09 Thread Ross Bencina

Hello Alessandro,

On 10/12/2012 12:18 PM, Alessandro Saccoia wrote:
> In my original question, I was thinkin of mixing signals of arbitrary
> sizes.

I don't think you have been clear about what you are trying to achieve.

Are you trying to compute the sum of many signals for each time point? 
Or are you trying to compute the running sum of a single signal over 
many time points?


What are the signals? are they of nominally equal amplitude?

Your original formula looks like you are looking for a recursive 
solution to a normalized running sum of a single signal over many time 
points.



I could relax this requirement, and forcing all the signals to
be of a given size, but I can't see how a sample by sample summation,
where there are M sums (M the forced length of the signals) could
profit from a running compensation.


It doesn't really matter whether the sum is across samples of a single 
signal or across signals, you can always use error compensation when 
computing the sum. It's just a way of increasing the precision of an 
accumulator.
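
Kahan (compensated) summation is the textbook version of this -- it 
carries a small correction term alongside the accumulator. A minimal 
sketch:

    // Sketch: Kahan compensated summation. 'c' carries the low-order
    // error that would otherwise be lost as 'sum' grows. Note that
    // aggressive compiler optimisations (fast-math) can defeat it.
    #include <cstddef>

    float kahanSum( const float *x, std::size_t count )
    {
        float sum = 0.f;
        float c = 0.f;                    // running compensation
        for( std::size_t i = 0; i < count; ++i ){
            float y = x[i] - c;           // correct the incoming term
            float t = sum + y;            // low-order bits of y get lost here...
            c = (t - sum) - y;            // ...and recovered here
            sum = t;
        }
        return sum;
    }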




Also, with a non linear
operation, I fear of introducing discontinuities that could sound
even worse than the white noise I expect using the simple approach.


Using floating point is a non-linear operation. Your simple approach 
also has quite some nonlinearity (accumulated error due to recursive 
division and re-rounding at each step).


Ross


Re: [music-dsp] Precision issues when mixing a large number of signals

2012-12-09 Thread Ross Bencina

On 10/12/2012 1:47 PM, Bjorn Roche wrote:

There is something called "double double" which is a software 128
bit floating point type that maybe isn't too expensive.


"long double", I believe


No. "long double" afaik usually means extended precision, as supported 
in hardware by the x86 FPU, and is 80 bits wide. It is not supported by 
some compilers at all (eg MSVC) since they tend to target SSE rather 
than x86 FPU.


double double is something else:

http://en.wikipedia.org/wiki/Double-double_arithmetic#Double-double_arithmetic

Ross.


[music-dsp] mechanisms that transfer of energy between modes in acoustic systems?

2012-12-16 Thread Ross Bencina

Hi Everyone,

I have a question which in a broad sense relates to physical modelling 
and acoustics:


Under what circumstances does a resonating (acoustic) system move energy 
from one frequency to another?


One gross example I can think of would be snares on a snare drum.

But aside from "rattling couplings" I'm wondering whether resonating 
objects exhibit energy migration between modes? if so, why/how? if not, 
why not? Objects I have in mind include solids, plates, bells, cymbals, 
strings, bridges and rooms.


In a synthetic system (say a feedback network of waveguides) a 
non-linear element (say a static waveshaper) can be used to disperse 
energy accross the spectrum. I'm wondering whether anything similar 
happens in physical media?


Am I correct to assume that in most cases resonating acoustic objects 
will only resonate at a mode if you put energy into the system at that 
mode? This seems to be the basis of commuted waveguide techniques. Are 
there exceptions/counterexamples?


Thanks!

Ross.


Re: [music-dsp] mechanisms that transfer of energy between modes in acoustic systems?

2012-12-16 Thread Ross Bencina

Thanks Risto,

On 17/12/2012 5:05 AM, Risto Holopainen wrote:

(*) Legge and Fletcher: Nonlinearity, chaos, and the sound of shallow
gongs. JASA 86(6), 1989.


is here:

http://www.ausgo.unsw.edu.au/music/people/publications/Leggeetal1989.pdf

Also at the same site:

Fletcher, N. H. 1994. ‘‘Nonlinear Dynamics and Chaos in Musical 
Instruments.’’ Complexity International 1.


http://www.ausgo.unsw.edu.au/music/people/publications/Fletcher1993c.pdf

Ross.


Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-03 Thread Ross Bencina

On 4/01/2013 4:05 AM, Thomas Young wrote:

Is there a way to modify the bandpass coefficient equations in the
cookbook (the one from the analogue prototype H(s) = s / (s^2 + s/Q +
1)) such that the gain of the stopband may be specified? I want to be
able


I'm pretty sure that the BLT bandpass ends up with zeros at DC and 
nyquist so I'm not sure how you're going to define stopband gain in this 
case :)


Maybe start with the peaking filter and scale the output according to 
your desired stopband gain and then set the peak gain to give 0dB at the 
peak.


peakGain_dB = -stopbandGain_dB

(assuming -ve stopbandGain_dB).
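
In other words, something like this, using the cookbook peaking EQ 
formulas from memory (please verify against the cookbook text) and 
scaling the numerator by the stopband gain so the peak lands at 0 dB:

    // Sketch: "band pass with specified stopband gain" via a scaled
    // cookbook peaking EQ. stopband_dB is negative (e.g. -24). The peak
    // ends up at 0 dB, DC and Nyquist end up at stopband_dB.
    #include <cmath>

    struct BiquadCoeffs { double b0, b1, b2, a1, a2; };

    BiquadCoeffs scaledPeak( double f0, double Q, double stopband_dB, double Fs )
    {
        const double pi    = 3.14159265358979323846;
        const double A     = std::pow( 10.0, -stopband_dB / 40.0 ); // peak gain = -stopband gain
        const double w0    = 2.0 * pi * f0 / Fs;
        const double alpha = std::sin(w0) / (2.0 * Q);
        const double g     = std::pow( 10.0, stopband_dB / 20.0 );  // overall output scale

        const double a0 = 1.0 + alpha / A;
        return { g * (1.0 + alpha * A) / a0,
                 g * (-2.0 * std::cos(w0)) / a0,
                 g * (1.0 - alpha * A) / a0,
                 -2.0 * std::cos(w0) / a0,
                 (1.0 - alpha / A) / a0 };
    }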

Does that help?

Ross.


Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-03 Thread Ross Bencina

Hi Thomas,

Replying to both of your messages at once...

On 4/01/2013 4:34 AM, Thomas Young wrote:

However I was hoping to avoid scaling the output since if I have to
do that then I might as well just change the wet/dry mix with the
original signal for essentially the same effect and less messing
about.


Someone else might correct me on this, but I'm not sure that will get 
you the same effect. Your proposal seems to be based on the assumption 
that the filter is phase linear and 0 delay (ie that the phases all line 
up between input and filtered version). That's not the case.


In reality you'd be mixing the phase-warped and delayed (filtered) 
signal with the original-phase signal. I couldn't tell you what the 
frequency response would look like, but probably not as good as just 
scaling the peaking filter output.




On 4/01/2013 6:03 AM, Thomas Young wrote:
> Additional optional mumblings:
>
> I think really there are two 'correct' solutions to manipulating only
> the coefficients to my ends (that is, generation of coefficients
> which produce filters interpolating from bandpass to flat):
>
> The first is to go from pole/zero to transfer function, basically as
> you (Nigel) described in your first message - stick the zeros in the
> centre, poles near the edge of the unit circle and reduce their radii
> - doing the maths to convert these into the appropriate biquad
> coefficients. This isn't really feasible for me to do in realtime
> though. I was trying to do a sort of tricksy workaround by lerping
> from one set of coefficients to another but on reflection I don't
> think there is any mathematical correctness there.
>
> The second is to have an analogue prototype which somehow includes
> skirt gain and take the bilinear transform to get the equations for
> the coefficients. I'm not really very good with the s domain either
> so I actually wouldn't know how to go about this, but it's what I was
> originally thinking of.

In the end you're going to have a set of constraints on the frequency 
response and you need to solve for the coefficients. You can do that in 
the s domain and BLT or do it directly in the z domain. See the "stuck 
with filter design" thread from November 17, 2012 for a nice discussion 
and links to background reading.


There is only a difference of scale factors between your constraints and 
the RBJ peaking filter constraints so you should be able to use them 
with minor modifications (as Nigel suggests, although I didn't take the 
time to review his result).


Assuming that you want the gain at DC and nyquist to be equal to your 
stopband gain then this is pretty much equivalent to the RBJ coefficient 
formulas except that Robert computed them under the requirement of unity 
gain at DC and Nyquist, and some specified gain at cf. You want unity 
gain at cf and specified gain at DC and Nyquist. This seems to me to 
just be a direct reinterpretation of the gain values. You should be able 
to propagate the needed gain values through Robert's formulas.


Cheers,

Ross.







Re: [music-dsp] Calculating the gains for an XY-pad mixer

2013-01-17 Thread Ross Bencina

On 18/01/2013 4:06 PM, Alan Wolfe wrote:

What you are trying to calculate is called barycentric coordinates,


Actually I don't think so.

Barycentric coordinates apply to triangles (or simplices), not squares (XY).

http://en.wikipedia.org/wiki/Barycentric_coordinate_system_(mathematics)
http://en.wikipedia.org/wiki/Simplex

Aengus wrote:
>> Do these seem like reasonable ways to get the gains for the two cases?

They seem reasonable to me. Do you have a reason for doubting?

Ross


Re: [music-dsp] Starting From The Ground Up

2013-01-21 Thread Ross Bencina

Hello Jeff,

Before I attempt an answer, can I ask: what programming languages do you 
know (if any) and how proficient are you at programming?


Ross.


On 21/01/2013 9:49 PM, Jeffrey Small wrote:

Hello,

I'm a recently new computer programmer that is interested in getting into the 
world of Audio Plug Ins. I have a degree in Recording/Music, as well as a 
degree in Applied Mathematics. How would you recommend that I start learning 
how to program for audio from the ground up? I bought a handful of textbooks 
that all have to do with audio programming, but I was wondering what your 
recommendations are?

Thanks,
Jeff


Re: [music-dsp] Starting From The Ground Up

2013-01-21 Thread Ross Bencina

Hi Jeff,

At your stage of learning with C the advice to "just write some code" 
seems most pertinent, but I guess it depends on your learning style. 
Coming up with an achievable project and seeing it through to completion 
is a good way to learn programming.



"Read lots of code" applies, and is important. What you will find is 
that there are many different coding styles. Many of the open source 
music-dsp codebases arose in different eras -- so you will be navigating 
a varied stylistic terrain at a time where you are just getting to grips 
with programming in C. That might be confusing, but it's probably 
unavoidable.


For source code I would recommend looking at (at least) the following 
open source projects:


Pd, CSound, SuperCollider, STK, CMix and/or RTCmix

Perhaps you're best off choosing one that you like best and learning how 
to use the system as a user, and also studying how it works from the 
inside. They are all quite different.


Back in the day I found CMix the most approachable since you actually 
write the whole DSP "instrument" routine in C by calling CMix DSP 
functions (also written in C). Most of the other environments I 
mentioned are virtual machines where the DSP code is buried a few layers 
deep. STK is maybe an exception.



You might want to check out Adrian Freed's
"Guidelines for signal processing applications in C" article -- at the 
very least to give you things to think about:


http://cnmat.berkeley.edu/publication/guidelines_signal_processing_applications_c


I'm not sure whether anyone mentioned it already, but there is the 
musicdsp.org source code snippet archive:


http://musicdsp.org/


Two chapters on SuperCollider internals (from the SuperCollider Book) 
are available for free download here: http://supercolliderbook.net/



Keep in mind that music dsp is, in some ways, just another form of 
numerical programming and you can learn a lot by reading more broadly in 
that area (eg. get a copy of Numerical Recipes in C). Similarly, a lot 
of modern analog modelling techniques come from the SPICE domain rather 
than music-dsp.



---

I don't know how much discrete-time signal processing theory you studied 
in your math degree but you should at least read one or two solid DSP 
texts (ask on comp.dsp or read reviews on Amazon). There are also a few 
books available on line.


DSP and music-dsp are not exactly the same thing. There are a lot of 
music-dsp books aimed more at programming musicians and people with much 
less mathematical training than you. You will find these useful to 
bridge into the realm of music, but you can probably handle the hardcore 
math.


The JoS online books that were linked earlier are probably 
mathematically appropriate. They are written for readers with a solid 
engineering maths background.

https://ccrma.stanford.edu/~jos/

Further book links and suggestions are available at: "What is the best 
way to learn DSP?"

http://www.redcedar.com/learndsp.htm


I'm going to mention this one just because I found it on line recently:
"Signal Processing First," by James H. McClellan, Ronald W. Schafer, 
Mark A. Yoder is introductory, but since it is now available for free on 
archive.org it might be a good way to refresh on DSP basics:


http://archive.org/details/SignalProcessingFirst


In a different direction, I'm not sure whether you've seen the recently 
released Will Pirkle Plugin Programming book. I haven't read it but my 
impression is that it's at the introductory level:


http://www.amazon.com/Designing-Audio-Effect-Plug-Ins-Processing/dp/0240825152

---

The DAFX Digital Audio Effects conference has all of its proceedings on 
line. There is a bunch of interesting algorithm knowledge there:


http://www.dafx.de/

The DAFX book isn't a bad introduction to some topics either but it 
won't help you with C coding.


Other conferences that have online materials you can search:

International Computer Music Conference proceedings archive:
http://quod.lib.umich.edu/i/icmc/

Linux Audio conference: http://lac.linuxaudio.org/

All the major research groups and many researchers have publication 
archives that you can find on line if you're looking for information 
about specific techniques.


At some stage you may want to browse the AES digital library: 
http://www.aes.org/e-lib/


---

Here are a few papers that I think everyone starting out needs to know 
about (maybe not the first step on the path, but an early step):


John Dattorro Digital Signal Processing papers, including "Effect 
Design" parts 1, 2, and 3:

https://ccrma.stanford.edu/~dattorro/research.html

"Splitting the Unit Delay"
http://signal.hut.fi/spit/publications/1996j5.pdf

---

If you're writing plugins then the host and plugin framework will take 
care of a lot of the non-dsp type stuff (scheduling, parameter handling 
etc etc) but be aware that for more complex projects you may need to 
move into realms of real-time programming that go beyond music-dsp.


---

If you're ju

Re: [music-dsp] Starting From The Ground Up

2013-01-21 Thread Ross Bencina

On 21/01/2013 9:49 PM, Jeffrey Small wrote:

I'm a recently new computer programmer that is interested in getting
into the world of Audio Plug Ins. I have a degree in Recording/Music,
as well as a degree in Applied Mathematics. How would you recommend
that I start learning how to program for audio from the ground up? I
bought a handful of textbooks that all have to do with audio
programming, but I was wondering what your recommendations are?


Another angle that I didn't cover is that of learning to program. You 
should get hold of at least one good C programming book. I program in 
C++ so I don't have any straight C examples to recommend but even 
something like Kernighan and Ritchie might be OK.


---

Reading a "style and practice" book might not be a bad idea, I'm 
thinking of books like "Code Complete" and "The Pragmatic Programmer".


When I was starting out I read a bunch of coding style guidelines.
If I remember correctly I started out with the Indian Hill one:

http://www.cs.arizona.edu/~mccann/cstyle.html

But you will find others if you search for C programming style guides. 
Things like this:

"Best practices for programming in C"
http://www.ibm.com/developerworks/aix/library/au-hook_duttaC.html

---

Reading an introductory algorithms and data structures textbook would be 
a good idea.



To give an idea: I have over 75 "general" programming and software 
engineering books on my bookshelf and only about 50 (if that) 
DSP/music-dsp/computer-music books. I don't think a 2:1 split between 
general programming study and music-dsp study is unreasonable -- a lot 
of the programming you do will be more general than simply dsp.


Ross.


Re: [music-dsp] 24dB/oct splitter

2013-02-07 Thread Ross Bencina

Hi Russell,

So to be clear, you're creating a Linkwitz-Riley crossover?

http://en.wikipedia.org/wiki/Linkwitz%E2%80%93Riley_filter

On 8/02/2013 6:05 PM, Russell Borogove wrote:
> I have two digital 12dB/octave state-variable filters, each with
> lowpass/highpass/bandpass/notch outputs; I'd like to use them as a
> 24db/octave low/high band splitter.

You didn't specify which state-variable filter you're using. There are 
at least two linear SVFs floating around now (the Hal Chamberlin one and 
Andy Simper's [1] )




Will I be happy if I use the lowpass of the first filter as input to
the second, then take the lowpass and highpass outputs of the second
as my bands


The lowpass output of the first filter presumably has a zero at nyquist, 
so I don't think this is going to work out well if you highpass it... you 
could try though.




or do I need to put the low and high outputs of the
first filter into two different second stage filters?


That's my impression.
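
i.e. something like this per sample (a sketch; the Filter interface is 
just a placeholder for whatever 12dB/oct sections you use -- for 
Linkwitz-Riley each should be a Butterworth section, Q = 0.7071, at the 
crossover frequency):

    // Sketch: 24 dB/oct Linkwitz-Riley split built from four 12 dB/oct
    // sections. 'Filter' is a placeholder for your SVF/biquad type.
    struct Filter { virtual float process( float x ) = 0; };

    void splitSample( float x, float& lowOut, float& highOut,
                      Filter& lp1, Filter& lp2, Filter& hp1, Filter& hp2 )
    {
        lowOut  = lp2.process( lp1.process( x ) );  // input -> LP -> LP
        highOut = hp2.process( hp1.process( x ) );  // input -> HP -> HP
    }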

Ross


[1] http://www.cytomic.com/files/dsp/SvfLinearTrapOptimised.pdf


Re: [music-dsp] filter smoothly changeable from LP<->BP<->HP?

2013-02-10 Thread Ross Bencina

Hi Bram,

"A Generalization of the Biquadratic Parametric Equalizer"
Christensen, Knud Bank
AES 115 (October 2003)
https://secure.aes.org/forum/pubs/conventions/?elib=12429

Defines equations with a "symmetry" parameter for smoothly moving 
between the states you mention. There are graphs so you can check it out.


The paper is excellent.

There is a related patent. I haven't looked at the patent so I can't 
comment on that.


Another approach might be to crossfade between the taps of an SVF. I'm 
not sure if that would work.
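
Something like the following, say, crossfading the taps of a plain 
Chamberlin SVF (a sketch only -- I haven't verified how it behaves, and 
the usual caveat about that SVF misbehaving for cutoffs above roughly 
fs/6 applies):

    // Sketch: Chamberlin SVF with a morph control.
    // morph = 0 -> LP, 0.5 -> BP, 1 -> HP.
    #include <cmath>

    struct MorphSVF {
        float low = 0.f, band = 0.f;
        float f = 0.1f;  // 2*sin(pi*fc/fs)
        float q = 1.f;   // damping, roughly 1/Q

        void setCutoff( float fc, float fs ){ f = 2.f * std::sin( 3.14159265f * fc / fs ); }

        float process( float in, float morph )
        {
            low += f * band;
            float high = in - low - q * band;
            band += f * high;

            if( morph < 0.5f )  // crossfade LP -> BP
                return (1.f - 2.f*morph) * low + (2.f*morph) * band;
            else                // crossfade BP -> HP
                return (2.f - 2.f*morph) * band + (2.f*morph - 1.f) * high;
        }
    };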


Ross



On 10/02/2013 10:23 PM, Bram de Jong wrote:

Hello everyone,

does anyone know of a filter design that can smoothly be changed from
LP to BP to HP with a parameter? IIRC LP/AP/HP could be done simply by
perfect reconstruction LP/HP filter pairs, but never seen something
similar for BP in the middle...

The filter doesn't need to be "perfect", it's for something
musical/creative rather than a purely scientific goal...

Any help very welcome! :-)

  - Bram




Re: [music-dsp] filter smoothly changeable from LP<->BP<->HP?

2013-02-10 Thread Ross Bencina

On 11/02/2013 1:37 AM, robert bristow-johnson wrote:

maybe i shouldn't say this, but someone here likely has a pdf copy of
the paper in case it breaks your bank to buy it from AES.


Unfortunately not me. I lost the pdf in a data loss incident and only 
have a printout and don't have an AES digital library sub at the moment.


But this does raise an issue that I've been thinking about for a while:

Does anyone know whether the AES has any intentions of moving to open 
access for their publication archive? It seems overdue.


Ross.


Re: [music-dsp] TR : Production Music Mood Annotation Survey 2013

2013-02-19 Thread Ross Bencina

Can someone please explain the scientific basis for this kind of study?

Surely by now it is widely accepted that correlations between music and 
"mood" and "emotion" are culturally biased and socially acquired?


Does the study below control for cultural bias?

Please explain why an otherwise reputable institution (QMUL) is wasting 
resources by engaging in this kind of pseudoscience.


Thanks,

Ross


On 19/02/2013 10:43 PM, mathieu barthet wrote:

--
2nd Call for Participation
Apologies for potential cross-postings
--

Dear all,

We are conducting an experiment to determine the moods or emotions expressed by 
music. The tracks used in the experiment come from various production music 
catalogues and are typically used in film, television, radio and other media.

This work is done in collaboration with the BBC and I Like Music as part of the TSB 
project "Making Musical Mood Metadata" (TS/J002283/1).

The experiment consists of rating music track excerpts along six scales 
characterising the emotions they express or suggest. These scales are Arousal 
(or Activity), Valence (or Pleasantness), Tension, Dominance (or Power), Love 
(or Romance), and Fun (or Humor).

The link to the survey can be found below:

http://musiclab.cc.jyu.fi/login.html

The experiment will run until Saturday 23rd February 2013.

Please note that you will have to rate the emotions *expressed or suggested* 
by the music (perceived emotions) and not the emotions elicited or induced by 
the music (felt emotions).

The annotations can be done at your own pace, on any computer with a web 
access, using headphones or good quality speakers. Please rate at least 50 
excerpts, if possible. If you have the time, thanks for completing the 
experiment with all 205 excerpts. (The experiment requires approximately one 
hour for ~60-100 excerpts.)

In order to participate, you will first have to register and fill in a brief 
form. Your participation doesn't tie you to anything else. It is fine if you 
register but choose not to participate. You can do the experiment in several 
steps by logging in at different times. Don't hesitate to take breaks after a 
certain amount of ratings.

If you have questions, comments or other enquiries, please contact Pasi Saari 
(email: pasi.sa...@jyu.fi).

Please forward this call to interested parties.

Many thanks for your participation,
Pasi Saari, Mathieu Barthet, George Fazekas.


Mathieu Barthet
Postdoctoral Research Assistant
Centre for Digital Music (Room 109)
School of Electronic Engineering and Computer Science
Queen Mary University of London
Mile End Road, London E1 4NS
Tel: +44 (0)20 7882 7986 - Fax: +44 (0)20 7882 7997

E-mail: mathieu.bart...@eecs.qmul.ac.uk
http://www.elec.qmul.ac.uk/digitalmusic/


Re: [music-dsp] Sound effects and Auditory illusions

2013-02-20 Thread Ross Bencina

Hi Marcelo,

Just came across this, maybe it is helpful:

Rorschach Audio – Art & Illusion for Sound On The Art
http://rorschachaudio.wordpress.com/about/

Ross.


On 19/02/2013 9:26 PM, Marcelo Caetano wrote:

Dear list,

I'll teach a couple of introductory lectures on audio and music processing, and 
I'm looking for some interesting examples of cool stuff to motivate the 
students, like sound transformations, auditory illusions, etc. I'd really 
appreciate suggestions, preferably with sound files.

Thanks in advance,
Marcelo


Re: [music-dsp] RE : TR : Production Music Mood Annotation Survey 2013

2013-02-21 Thread Ross Bencina

Hello Mathieu,

Thanks for responding. You answered my questions and your examples were 
interesting. I have made a few brief comments:


The description of the survey begins:

>> We are conducting an experiment to determine the moods or emotions
>> *expressed* by music.
(my emphasis)

As Richard has noted there is some question as to whether music 
expresses anything at all. But far more importantly, given that the 
survey uses an existing corpus of recorded music, and engages only with 
listeners (not creators, composers, improvisers or performers) it is 
difficult to understand how musical *expression* can be considered the 
subject of the experiment.


Your examples below (babies reacting, music lovers brought to tears), 
suggest that the research is concerned with evoked or induced emotional 
response.


Elsewhere [1] you cite Sloboda and Juslin [2], as making reference to 
expressed emotions "(perceived emotions)." I only have access to the 
abstract of [2] right now, it uses the phrase "how and why we experience 
music as expressive of emotion." Experiencing music as expressive of 
emotion is quite different from music expressing emotion. I would be 
interested to know where the transposition in your survey title 
originated since it does not appear in the introduction to [1]:


"""music can either (i) elicit/induce/evoke emotions in listeners (felt
emotions), or (ii) express/suggest emotions to listeners (perceived 
emotions)"""


Perhaps a more accurate title for your survey is: "an experiment to 
determine the moods or emotions perceived to be expressed by music." ?




Does the study below control for cultural bias?

--> Yes, up to a certain extent. During registration

> (http://musiclab.cc.jyu.fi/register.html), we collect demographic
> information about participants such as background (listener, musician,
> trained professional), country, gender, age (voluntary).

However the survey appears to require internet access and is presented 
only in English.



> I fully agree that the way we perceive music is influenced e.g. by
> our culture, past experiences, tastes and the listening context (e.g.
> at work, at home).
> I do also believe that there are some strong invariants across human
> listeners, given a specific culture.

So the research is focused on discovering "strong invariants" within a 
specific culture? In that case perhaps we are in agreement and the 
experiment is intended purely to document culturally conditioned 
emotional responses to music -- an anthropological study if you will.


My concern is that by framing the experiment as determining "moods or 
emotions expressed by music" without drawing attention to the culturally 
relativistic nature of the investigation (your: "strong invariants 
across human listeners, given a specific culture"), the work 
perpetuates a myth and obscures far deeper issues. Juslin and Västfjäll, 
for example, propose 7 potential underlying mechanisms.[3] (For anyone 
reading, the paper provides a nice overview of the current controversial 
state of play.)




Please explain why an otherwise reputable institution (QMUL) is
wasting resources by engaging in this kind of pseudoscience.

--> I don't think that the funding body for this research survey is

> wasting money, nor that QMUL is wasting resources; commenting further
> on that one would certainly waste my time ;-)

My concern is with intellectual rather than financial waste.

Ross.

[1] Multidisciplinary Perspectives on Music Emotion Recognition: 
Implications for Content and Context-Based Models

http://cmmr2012.eecs.qmul.ac.uk/sites/cmmr2012.eecs.qmul.ac.uk/files/pdf/papers/cmmr2012_submission_101.pdf

[2] Psychological perspectives on music and emotion. Sloboda, John A.; 
Juslin, Patrik N. Juslin, Patrik N. (Ed); Sloboda, John A. (Ed), (2001). 
Music and emotion: Theory and research. Series in affective science., 
(pp. 71-104). New York, NY, US: Oxford University Press, viii, 487 pp.

http://psycnet.apa.org/psycinfo/2001-05534-001

[3] Emotional responses to music: the need to consider underlying 
mechanisms. Juslin PN, Västfjäll D. Behav Brain Sci. 2008 Oct;31(5):559-75;

http://www.psyk.uu.se/digitalAssets/31/31194_BBS_article.pdf


On 20/02/2013 2:20 PM, mathieu barthet wrote:

Dear Ross,

Please see comments below.

Best wishes,

Mathieu Barthet
Postdoctoral Research Assistant
Centre for Digital Music (Room 109)
School of Electronic Engineering and Computer Science
Queen Mary University of London
Mile End Road, London E1 4NS
Tel: +44 (0)20 7882 7986 - Fax: +44 (0)20 7882 7997

E-mail: mathieu.bart...@eecs.qmul.ac.uk
http://www.elec.qmul.ac.uk/digitalmusic/
____
De : music-dsp-boun...@music.columbia.edu 
[music-dsp-boun...@music.columbia.edu] de la part de Ross Bencina 
[rossb-li...@audiomulch.com]
Date 

Re: [music-dsp] M4 Music Mood Recommendation Survey

2013-02-21 Thread Ross Bencina



On 22/02/2013 9:54 AM, Richard Dobson wrote:

"Listen to each track at least once and then select which track is the
best match with the seed. If you think that none of them match, just
select an answer at random.
"

Now I am no statistician, but with only four possible answers offered
per test, and with "none of the above" excluded as an answer (which
rather begs the question...),


You mean the one about adding to the large number of studies offering 
empirical evidence in support of the assumption?



"""However, despite a recent upswing of research on musical emotions 
(for an extensive review, see Juslin &Sloboda 2001), the literature 
presents a confusing picture with conficting views on almost every topic 
in the field.1 A few examples may suffice to illustrate this point: 
Becker (2001, p. 137) notes that “emotional responses to music do not 
occur spontaneously, nor ‘naturally’,” yet Peretz (2001, p. 126) claims 
that “this is what emotions are: spontaneous responses that are dif?cult 
to disguise.” Noy (1993, p. 137) concludes that “the emotions evokedby 
music are not identical with the emotions aroused by everyday, 
interpersonal activity,” but Peretz (2001, p. 122) argues that “there is 
as yet no theoretical or empiricalr eason for assuming such specifcity.” 
Koelsch (2005,p. 412) observes that emotions to music may be induced 
“quite consistently across subjects,” yet Sloboda (1996,p. 387) regards 
individual differences as an “acute problem.” Scherer (2003, p. 25) 
claims that “music does not induce basic emotions,” but Panksepp and 
Bernatzky(2002, p. 134) consider it “remarkable that any medium could so 
readily evoke all the basic emotions.” Researchers do not even agree 
about whether music induces emotions: Sloboda (1992, p. 33) claims that 
“there is a general consensus that music is capable of arousing deep and 
signifcant emotions,” yet Konec?ni (2003, p. 332) writes that 
“instrumental music cannot directly induce genuine emotions in 
listeners.” """


http://www.psyk.uu.se/digitalAssets/31/31194_BBS_article.pdf


Ross


Re: [music-dsp] M4 Music Mood Recommendation Survey

2013-02-24 Thread Ross Bencina

Hi Andy,

On 22/02/2013 10:54 AM, Andy Farnell wrote:

I have noticed Ross, that I tend to seek out music that reflects
an already emerging emotion, such that the music then precipitates
a physiological emotion. If I am in the mood to be excited by
Bizet or the Furious Five MCs, then Radiohead or Gorecki
cannot sadden me. And conversely.


I often find myself looking for music that is congruent with my mood -- 
meaning that it somehow resonates with or supports my mental state.

The wrong music doesn't change my mood/emotions, it just doesn't "work."

But I'm not sure whether this applies if I'm put in a neutral situation 
and asked to interpret the mood of a piece of music.




The longer I have
studied music the more it seems plausible; music does not
drive emotion, emotion drives music. At such times as we
encounter cultural supposition, as in a film score, the music
may resonate more strongly with expectation and "work its
magic on us", but the emotion is not extempore in the music,
it lives in the listener.  Production music, as a choice of score

> to complement activity is therefore a question of good fit.
> Such a gifted composer or music supervisor chooses carefully,
> informed by understanding of narrative context.

Film is interesting because there is an intentional suspension of 
disbelief, a constant process of "reading" or interpreting the stimulus. 
Film composers make use of whatever tricks they can to elicit a response 
(use of the alien/familiar/culturally-charged etc).


I don't think there's any question that if you feed someone a stimulus 
and require them to interpret it that they will select the 'most 
appropriate' response. The question is what factors lead to the choice.




The disagreement
with some MIR projects, if indeed it is their mistake, is that
they presume music to be the driver and suppose a strict causality.
This makes many industry investors, mainly advertisers, very excited
where they assume a manipulative (a la Bernays/Lippmann) application.


I think there is confusion here on what music is and who the subjects 
are. Is it any organised sound played to any human? or a very restricted 
set of "popular" music played to modern westeners inculcated in consumer 
and media culture. Perhaps as an "effectiveness metric" to measure how 
well people have been trained into certain emotional responses (by 
media, film, etc) this kind of study is valid. The problem is when such 
a relativistic context-dependent evaluation is put forward as 
"empirical" science. It reeks of cultural imperialism masquerading as 
empirical science.




I find many sensitive artists are repelled by this idea, not
from a sense of instrumental reason displacing the artist, but from
an understanding that the listener is not a passive subject
amenable to a behaviourist interpretation.


There is that. Personally I'm repelled by the reductionism, lack of 
nuance, misuse of the English language, and gross experimental bias.


Somehow this whole discussion reminds me of that Laurie Anderson song 
"Smoke Rings":


""""Bienvenidos. La primera pregunta es: Que es mas macho, pineapple o 
knife? Well, let's see. My guess is that a pineapple is more macho than 
a knife. Si! Correcto! Pineapple es mas macho que knife. La segunda 
pregunta es: Que es mas macho, lightbulb o schoolbus? Uh, lightbulb? No! 
Lo siento, Schoolbus es mas macho que lightbulb.""""



Best to you,

Ross.



best @ all
Andy



On Fri, Feb 22, 2013 at 10:19:02AM +1100, Ross Bencina wrote:



On 22/02/2013 9:54 AM, Richard Dobson wrote:

"Listen to each track at least once and then select which track is the
best match with the seed. If you think that none of them match, just
select an answer at random.
"

Now I am no statistician, but with only four possible answers offered
per test, and with "none of the above" excluded as an answer (which
rather begs the question...),


You mean the one about adding to the large number of studies
offering empirical evidence in support of the assumption?


"""However, despite a recent upswing of research on musical emotions
(for an extensive review, see Juslin &Sloboda 2001), the literature
presents a confusing picture with conficting views on almost every
topic in the field.1 A few examples may suffice to illustrate this
point: Becker (2001, p. 137) notes that “emotional responses to
music do not occur spontaneously, nor ‘naturally’,” yet Peretz
(2001, p. 126) claims that “this is what emotions are: spontaneous
responses that are dif?cult to disguise.” Noy (1993, p. 137)
concludes that “the emotions evokedby music are not identical with
the emotions aroused by everyday, interpersonal activity,” but
Peretz (2001, p. 122) argues that “there is as yet no theoretical or
empiricalr eason for assuming such

Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-07 Thread Ross Bencina

Stephen,

On 8/03/2013 9:29 AM, ChordWizard Software wrote:

a) additive mixing of audio buffers b) clearing to zero before
additive processing


You could also consider writing (rather than adding) the first signal to 
the buffer. That way you don't have to zero it first. It requires having 
a "write" and an "add" version of your generators. Depending on your 
code this may or may not be worth the trouble vs zeroing first.


In the past I've sometimes used C++ templates to parameterise by the 
output operation (write/add), so you only have to write the code that 
generates the signals once.
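
For illustration, a minimal sketch of that idea (not the exact code referred 
to above; WriteOp, AddOp and generateSine are hypothetical names):

#include <cmath>

struct WriteOp { void operator()( float *dest, float x ) const { *dest  = x; } }; // overwrite
struct AddOp   { void operator()( float *dest, float x ) const { *dest += x; } }; // mix in

template< typename OutputOp >
void generateSine( float *out, int n, float phase, float increment, OutputOp op )
{
    for( int i = 0; i < n; ++i ){
        op( &out[i], std::sin( phase ) ); // same generator body for both output modes
        phase += increment;
    }
}

// The first generator writes, so the buffer never needs zeroing:
//   generateSine( buffer, n, 0.f, w, WriteOp() );
// later generators mix in:
//   generateSine( buffer, n, 0.f, w, AddOp() );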


c) copying from one buffer to another

Of course you should avoid this wherever possible. Consider using 
shared buffer objects so you can pass them around instead of copying 
data. You could manage their lifetime with reference counting, or just 
reclaim everything at the end of every cycle.
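
A minimal sketch of the shared-buffer idea, assuming std::shared_ptr is 
acceptable for your use (a real-time system would more likely take buffers 
from a pre-allocated pool, but the sharing principle is the same); makeBuffer 
is a hypothetical helper:

#include <cstddef>
#include <memory>
#include <vector>

typedef std::vector<float> Buffer;
typedef std::shared_ptr<Buffer> BufferRef; // copying a BufferRef shares the samples instead of copying them

BufferRef makeBuffer( std::size_t frames )
{
    return std::make_shared<Buffer>( frames, 0.f ); // allocate (and zero) up front, not per block
}

// Passing a BufferRef by value hands the same sample data to several consumers
// without a memcpy; the storage is released when the last reference goes away.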



d) converting between short and float formats


No surprises to any of you there I'm sure.  My question is, can you
give me a few pointers about making them as efficient as possible
within that critical realtime loop?

For example, how does the efficiency of memset, or ZeroMemory,
compare to a simple for loop?


Usually memset has a special case for writing zeros, so you shouldn't 
see too much difference between memset and ZeroMemory.


memset vs simple loop will depend on your compiler.

The usual wisdom is:

1) use memset vs writing your own; the library implementation will use 
SSE/whatever and will be fast. Of course this depends on the runtime.


2) always profile and compare if you care.



Or using HeapAlloc with the
HEAP_ZERO_MEMORY flag when the buffer is created (I know buffers
shouldn’t be allocated in a realtime callback, but just out of
interest, I assume an initial zeroing must come at a cost compared to
not using that flag)?


It could happen in a few ways, but I'm not sure how it *does* happen on 
Windows and OS X.


For example the MMU could map all the pages to a single zero page and 
then allocate+zero only when there is a write to the page.




I'm using Win32 but intend to port to OSX as well, so comments on the
merits of cross-platform options like the C RTL would be particularly
helpful.  I realise some of those I mention above are Win-specific.

Also for converting sample formats, are there more efficient options
than simply using

nFloat = (float)nShort / 32768.0


Unless you have a good reason not to, you should prefer multiplication by 
the reciprocal for the first one:


const float scale = (float)(1. / 32768.0);
nFloat = (float)nShort * scale;

You can do 4 at once if you use SSE/intrinsics.
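
For example, a minimal SSE2 sketch (my own illustration, not the PortAudio 
converters); shortToFloat is a hypothetical helper, count is assumed to be a 
multiple of 8, and the pointers need not be aligned:

#include <emmintrin.h> // SSE2 intrinsics

void shortToFloat( const short *in, float *out, int count )
{
    const __m128 scale = _mm_set1_ps( 1.0f / 32768.0f );
    for( int i = 0; i < count; i += 8 ){
        __m128i s  = _mm_loadu_si128( (const __m128i*)(in + i) );      // load 8 x int16
        __m128i lo = _mm_srai_epi32( _mm_unpacklo_epi16( s, s ), 16 ); // sign-extend low 4 to int32
        __m128i hi = _mm_srai_epi32( _mm_unpackhi_epi16( s, s ), 16 ); // sign-extend high 4 to int32
        _mm_storeu_ps( out + i,     _mm_mul_ps( _mm_cvtepi32_ps( lo ), scale ) );
        _mm_storeu_ps( out + i + 4, _mm_mul_ps( _mm_cvtepi32_ps( hi ), scale ) );
    }
}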

> nShort = (short)(nFloat * 32768.0)

Float => int conversion can be expensive depending on your compiler 
settings and supported processor architectures. There are various ways 
around this.


Take a look at pa_converters.c and the pa_x86_plain_converters.c in 
PortAudio. But you can do better with SSE.
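
As one example of "ways around this" (a sketch, not the PortAudio converter 
itself; floatToShort is a hypothetical helper): lrintf() rounds using the 
current FPU rounding mode and usually compiles to a single conversion 
instruction, avoiding the slow truncating cast some compilers generate for 
(int) on x86:

#include <math.h>

short floatToShort( float x )
{
    float scaled = x * 32767.0f;
    if( scaled >  32767.0f ) scaled =  32767.0f; // clip to the int16 range
    if( scaled < -32768.0f ) scaled = -32768.0f;
    return (short)lrintf( scaled );              // round to nearest, typically one instruction
}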




for every sample?

Are there any articles on this type of optimisation that can give me
some insight into what is happening behind the various memory
management calls?


Probably. I would make sure you allocate aligned memory, maybe lock it 
in physical memory, and then use it -- and generally avoid OS-level 
memory calls from then on.
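
For example, a minimal POSIX sketch (so it covers OS X); on Win32 the rough 
equivalents would be _aligned_malloc() plus VirtualLock(). allocAudioBuffer is 
a hypothetical helper and the 16-byte alignment is an assumption for SSE:

#include <stdlib.h>
#include <sys/mman.h>

float *allocAudioBuffer( size_t frames )
{
    void *p = 0;
    if( posix_memalign( &p, 16, frames * sizeof(float) ) != 0 ) // 16-byte aligned allocation
        return 0;
    mlock( p, frames * sizeof(float) ); // best effort: ask the OS to keep the pages resident
    return (float*)p;
}

Do this once at setup time, then never touch the allocator from the audio callback.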


I would use memset() / memcpy(). These are optimised, and the compiler may 
even inline a more optimal version.


The alternative is to go low-level and benchmark everything and write 
your own code in SSE (and learn how to optimise it).


If you really care you need a good profiler.

That's my 2c.

HTH

Ross.






Regards,

Stephen Clarke Managing Director ChordWizard Software Pty Ltd
corpor...@chordwizard.com http://www.chordwizard.com ph: (+61) 2 4960
9520 fax: (+61) 2 4960 9580





--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-08 Thread Ross Bencina

On 9/03/2013 9:53 AM, ChordWizard Software wrote:

Maybe you can advise me on a related question - what's the best
approach to implementing attenuation?   I'm guessing it is not
linear, since perceived sound loudness has a logarithmic profile - or
am I confusing amplifier wattage with signal amplitude?


What I do is use a linear scaling value internally -- that's the number 
that multiplies the signal. Let's call it linearGain. linearGain has the 
value 1.0 for unity gain and 0.0 for infinite attenuation.


There is usually some mapping from "userGain":

linearGain = f( userGain );

If userGain  is expressed in decibels you can use the standard decibel 
to amplitude mapping:


linearGain = 10 ^ (gainDb / 20.)


If your input is MIDI master volume you have to map from the MIDI value 
range to linear gain (perhaps via decibels). Maybe there is a standard 
curve for this?
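
In code, the decibel mapping above is just (a sketch; dbToLinearGain is a 
hypothetical helper name):

#include <cmath>

float dbToLinearGain( float gainDb )
{
    return std::pow( 10.0f, gainDb / 20.0f ); // 0 dB -> 1.0, -20 dB -> 0.1, -inf dB -> 0.0
}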



Note that audio faders are not linear in decibels either, e.g.:
http://iub.edu/~emusic/etext/studio/studio_images/mixer9.jpg

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-08 Thread Ross Bencina



On 9/03/2013 2:55 PM, Ross Bencina wrote:

Note that audio faders are not linear in decibels either, e.g.:
http://iub.edu/~emusic/etext/studio/studio_images/mixer9.jpg


There is some discussion here:

http://www.kvraudio.com/forum/viewtopic.php?t=348751


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-09 Thread Ross Bencina

On 10/03/2013 7:01 AM, Tim Goetze wrote:

[robert bristow-johnson]

>On 3/9/13 1:31 PM, Wen Xue wrote:

>>I think one can trust the compiler to handle a/3.14 as a multiplication. If it
>>doesn't it'd probably be worse to write a*(1/3.14), for this would be a
>>division AND a multiplication.

>
>there are some awful crappy compilers out there.  even ones that start from gnu
>and somehow become a product sold for use with some DSP.

Though recent gcc versions will replace the above "a/3.14" with a
multiplication, I remember a case where the denominator was constant
as well but not quite as explicitly stated, where gcc 4.x produced a
division instruction.


I don't think this has anything to do with "crappy compilers"

Unless multiplication by the reciprocal gives exactly the same result -- 
with the same precision, the same rounding behavior, the same 
denormal behavior, etc. -- it would be *incorrect* to automatically 
replace division with multiplication by the reciprocal.


So I think it's more a case of conformant compilers, not crappy compilers.

I have always assumed that it is not (in general) valid for the compiler 
to automatically perform the replacement; the only reason we can get 
away with it is because we make certain simplifying assumptions.


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-09 Thread Ross Bencina


On 10/03/2013 3:51 PM, Alan Wolfe wrote:
> index = index + 1;
> if (index >= count)
>   index = 0;



Another, more compact way could be to do it this way:
index = (index + 1) % count;



I am suspicious about whether the mask is faster than the conditional for 
a couple of reasons:


- branch prediction works well if the branch usually falls one way

- cmove (conditional move instructions) can avoid an explicit branch

Once again, you would want to benchmark.



There's a neat technique to do this faster that I have to admit I got
from Ross's code a few years ago in his audio library PortAudio.


Probably I learnt that from Phil Burk. He is the master of bitmasks and 
circular buffers. There are also some other tricks in the PA code beyond 
what you mention here too...




That
technique requires that your circular buffer is a power of 2, but so
long as that is true, you can do an AND to get the remainder of the
division.  AND is super fast (even faster than the if / set) so it's a
great improvement.

How you do that looks like the below, assuming that your circular
buffer is 1024 samples large:
index = ((index + 1) & 1023);   // 1023 is just 1024-1

if your buffer was 256 samples large it would look like this:
index = ((index + 1) & 255); // 255 is just 256 - 1

Super useful trick so wanted to share it with ya (:



My preferred technique is to avoid tests and masks in the inner loop by 
precomputing the loop length and hoisting the tests:


int samplesToProcess = ?; // however many output samples are requested this time

int i = 0;
while( samplesToProcess > 0 ){
  int samplesToEndOfBuffer = bufferSize - index;
  int n = min( samplesToEndOfBuffer, samplesToProcess ); // std::min
  for( int j = 0; j < n; ++j ){
    output[i++] = buffer[ index++ ];
  }

  if( index == bufferSize )
    index = 0; // wrap index

  samplesToProcess -= n;
}

This way the index wrap is only ever tested and applied outside the inner 
loop (no masking and no conditionals in the inner loop).


So long as the increment is significantly smaller than the buffer length, 
you can make it work for non-integer increments too.


You can do this with more than one test (hoisting multiple unrelated 
conditionals from the inner loop).


Ross.



On Sat, Mar 9, 2013 at 12:14 PM, Tim Goetze  wrote:

[Tim Blechmann]

Though recent gcc versions will replace the above "a/3.14" with a
multiplication, I remember a case where the denominator was constant
as well but not quite as explicitly stated, where gcc 4.x produced a
division instruction.


not necessarily: in floating point math a/b and a * (1/b) do not yield
the same result. therefore the compiler should not optimize this, unless
explicitly asked to do so (-freciprocal-math)


I should have added, "when employing the usual suspects, -ffast-math
-O6 etc, as you usually would when compiling DSP code".  Sorry!

Tim
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-14 Thread Ross Bencina

On 15/03/2013 6:02 AM, jpff wrote:

"Ross" == Ross Bencina  writes:

  Ross> I am suspicious about whether the mask is faster than the conditional for
  Ross> a couple of reasons:
  Ross> - branch prediction works well if the branch usually falls one way
  Ross> - cmove (conditional move instructions) can avoid an explicit branch
  Ross> Once again, you would want to benchmark.

I did the comparison for Csound a few months ago. The loss in using
modulus over mask was more than I could contemplate my users
accepting.  We provide both versions for those who want non-power-of-2
tables and can take the considerable hit (gcc 4, x86_64)


Hi John,

I just want to clarify whether we're talking about the same thing:

You wrote:

John> The loss in using modulus over mask

Do you mean :

x = x % 256 // modulus
x = x & 0xFF // mask

?

Because I wrote:

Ross> whether the mask is faster than the conditional

Ie:

x = x & 0xFF // mask
if( x == 256 ) x = 0; // conditional


Note that I am referring to the case where the instruction set has CMOVE 
(On IA32 it was added with Pentium Pro I think).
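
That is, the wrap can be written so the compiler is free to emit a conditional 
move instead of a branch (a sketch, not benchmarked here):

index = ( index + 1 == bufferSize ) ? 0 : index + 1; // candidate for CMOVE rather than a jump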


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-14 Thread Ross Bencina

On 15/03/2013 7:27 AM, Sampo Syreeni wrote:

Quite a number of processors have/used to have explicit support for
counted for loops. Has anybody tried masking against doing the inner
loop as a buffer-sized counted for and only worrying about the
wrap-around in an outer, second loop, the way we do it with unaligned
copies, SIMD and other forms of unrolling?


Yes. I usually do that when I can. I posted code earlier in the thread.

Doesn't work so well if your phase increment varies in non-simple ways 
(ie FM).


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-14 Thread Ross Bencina

On 12/03/2013 5:58 AM, Nigel Redmon wrote:

 // round up to nearest power of two
 unsigned int v = theSize;
 v--;// so we don't go up if already a power of 2
 v |= v >> 1;// roll the highest bit into all lower bits...
 v |= v >> 2;
 v |= v >> 4;
 v |= v >> 8;
 v |= v >> 16;
 v++;// and increment to power of 2


The "Hacker's Delight" book is a good source for this type of thing:

http://www.hackersdelight.org/

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Strange problem in fixed point overlap-add processing

2013-03-22 Thread Ross Bencina

Hello Elias,

On 22/03/2013 7:39 PM, Elias Kokkinis wrote:

The processing I am applying is in the form of a real-valued gain on
each complex-valued frequency bin. When this gain is the same for every
bin -- any value, but the same for every bin -- the output is once again
normal. When this gain is anything else, I get this "discontinuities"
effect.


What about if your gain changes smoothly, such as a cosine bell from 0 
at DC to 0 at Nyquist? Does that still have the problem?
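
For concreteness, such a test curve could be generated like this (a sketch; 
cosineBellGains is a hypothetical helper, assuming numBins >= 2 and that bin 
numBins-1 is Nyquist):

#include <cmath>
#include <vector>

std::vector<float> cosineBellGains( int numBins )
{
    std::vector<float> g( numBins );
    const float pi = 3.14159265358979f;
    for( int k = 0; k < numBins; ++k )
        g[k] = 0.5f * ( 1.0f - std::cos( 2.0f * pi * k / (numBins - 1) ) ); // 0 at DC and Nyquist
    return g;
}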


My understanding is that your gain value cannot change arbitrarily 
between bins, or you will get the kind of effects you're hearing. It's 
related to the fact that the finite window smears the time domain signal 
across multiple frequency bins.


So you need to make sure your gain curve is smooth enough. My knowledge 
gets hazy at this point but I would guess that means frequency-domain 
convolving each gain factor with the spectrum of your window.
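
As a crude stand-in for that convolution (my own sketch, not something from 
this thread; smoothBinGains is a hypothetical helper), even a short moving 
average across neighbouring bins will stop the gain curve from jumping 
arbitrarily from one bin to the next:

#include <cstddef>
#include <vector>

void smoothBinGains( const std::vector<float> &rawGain, std::vector<float> &smoothed )
{
    const std::size_t n = rawGain.size();
    smoothed.resize( n );
    for( std::size_t k = 0; k < n; ++k ){
        float sum = rawGain[k];
        int terms = 1;
        if( k > 0 )     { sum += rawGain[k - 1]; ++terms; }
        if( k + 1 < n ) { sum += rawGain[k + 1]; ++terms; }
        smoothed[k] = sum / terms; // 3-point average; a wider, window-shaped kernel smooths more
    }
}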


> I should note that when I record the signal I cannot see such
> discontinuities.

If you record your gain processing on a sine wave you may find 
modulation at the window rate.


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

