Re: [music-dsp] list postings

2012-02-25 Thread Tom Wiltshire
Here's an example of a basic RTF document:

{\rtf1\ansi\ansicpg1252\cocoartf1038\cocoasubrtf360
{\fonttbl\f0\fswiss\fcharset0 Helvetica;}
{\colortbl;\red255\green255\blue255;}

\paperw11900\paperh16840\margl1440\margr1440\vieww9000\viewh8400\viewkind0

\pard\tx566\tx1133\tx1700\tx2267\tx2834\tx3401\tx3968\tx4535\tx5102\tx5669\tx6236\tx6803\ql\qnatural\pardirnatural

\f0\fs24 \cf0 This is a basic rtf document\
}

I don't know what headers Apple's Mail.app sends out with this, but I can see 
why you wouldn't want to read that as plain text. Even if it got through (e.g. 
if the headers were OK), we'd still struggle to read it.

(The file was saved from Apple's TextEdit.app, in case anyone wants to know 
where the example came from.)

T.


On 25 Feb 2012, at 19:07, douglas repetto wrote:

 
 It may be that Apple is adding something to the header indicating rich 
 text/html even though you don't end up with offending characters in the 
 email. The list software rejects email based on the headers, not on the 
 actual content.
 
 There's no fundamental reason why the list can't accept html mail, btw. So if 
 people really want it we can make a change. In the past it's been about spam 
 control and saving bandwidth, but those issues aren't such big concerns 
 anymore, I think. Although I personally find that reading a list like this in 
 different fonts/colors/styles can be unpleasant.
 
 douglas
 
 On 2/25/12 2:02 PM, Nigel Redmon wrote:
 I've had problems in the past when html-style font tags make their
 way into the email. For instance, this happens in Apple's Mail.app.
 Even though it's not an html email, per se, they sometimes get
 rejected (but not always). If I do Make Plain Text from the Format
 menu before sending, then they always get through—there is no visible
 change to the email either (because they are just some default font
 tags—I'm not really formatting the text).
 
 
 On Feb 25, 2012, at 10:38 AM, Brad Garton wrote:
 Hey music-dsp-ers --
 
 Has anyone else experienced troubles getting posts to show up on
 our list?  I've sent (and re-sent) several this morning and they
 just vanished.  I've checked with douglas about it, but was
 wondering if anyone else has had problems.
 
 brad http://music.columbia.edu/~brad
 
 
 
 -- 
 ... http://artbots.org
 .douglas.irving http://dorkbot.org
 .. http://music.columbia.edu/cmc/music-dsp
 ...repetto. http://music.columbia.edu/organism
 ... http://music.columbia.edu/~douglas
 

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] google's non-sine

2012-02-24 Thread Tom Wiltshire
I agree as well. Why should it have to be a sine wave? Hertz didn't invent the 
sine wave! A square wave has 'frequency' just as much as a sine does, and 
presumably 'frequency' was the point of the Google doodle. Put the odd harmonics 
in and get a circular waveform; it's fine by me.
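
For anyone who wants to play with that idea, here's a rough C sketch: summing 
the odd harmonics of f0 at 1/n amplitude converges on a square wave (the 
function name and the Nyquist cutoff are just my illustration, not anything 
Google actually did):

#include <math.h>

/* Additive 'square-ish' wave: sum the odd harmonics of f0 at amplitude 1/n.
   phase is normalised to [0,1); harmonics above Nyquist are simply skipped. */
double square_from_odd_harmonics(double phase, double f0, double fs)
{
    const double two_pi = 6.283185307179586;
    double out = 0.0;
    for (int n = 1; n * f0 < fs / 2.0; n += 2)   /* odd harmonics only */
        out += sin(two_pi * n * phase) / n;
    return (4.0 / 3.141592653589793) * out;      /* Fourier-series scaling */
}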

The amplitude and frequency modulation is a bit weird though!

T.

On 24 Feb 2012, at 07:56, Nigel Redmon wrote:

 Eh, I still say they weren't going for a sine wave at all. Look at their 
 other doodles. I'm sure that their designers would have felt that a sine wave 
 would have missed the point for them.
 
 http://www.zazzle.com/robert_schumanns_200th_birthday_tshirt-235517387819488097

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] FM Synthesis

2011-09-14 Thread Tom Wiltshire

On 14 Sep 2011, at 10:29, Emanuel Landeholm wrote:

 The sad news is that FM with feedback cannot be done the naïve way.
 You need to account for aliasing. Someone upthread suggested adding
 noise instead of feedback; this is probably a good idea. But it will
 not make your FM synthesis engine sound like the real thing.


How did Yamaha deal with this in 1983?
Given the resources they had at the time, it must have been fairly basic.
I know the DX7 used a reasonably high sample rate, but it wouldn't have been 
enough on its own.
Anyway, doesn't the DX7 have quite a bit of aliasing if pushed? (I haven't got 
one, so I can't try it).
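
For concreteness, here's what I take the 'naïve way' to mean - a single 
operator feeding its own previous output back into its own phase, per sample. 
(This is just my sketch; the names are mine, and real hardware reportedly 
smooths the feedback path, e.g. by averaging the last couple of outputs, which 
this doesn't do.)

#include <math.h>

/* One-operator feedback FM, naive version: the previous output, scaled by
   beta, is added to the phase. At large beta the spectrum gets very wide and
   the upper partials fold back (alias) unless you oversample or band-limit. */
double fb_fm_sample(double *phase, double *prev, double f0, double fs, double beta)
{
    const double two_pi = 6.283185307179586;
    double out = sin(two_pi * (*phase) + beta * (*prev));
    *prev = out;                       /* feedback term for the next sample */
    *phase += f0 / fs;                 /* advance normalised phase */
    if (*phase >= 1.0)
        *phase -= 1.0;
    return out;
}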

T.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Java for audio processing

2011-09-13 Thread Tom Wiltshire

On 13 Sep 2011, at 19:01, Phil Burk wrote:

 
 In a thread about FM synthesis, Tom Wiltshire t...@electricdruid.net wrote:
 
 If there's any heresy, it's probably using Java for audio processing! ;)
 
 There are plenty of reasons *not* to use Java for audio. But in some 
 circumstances it can be quite delightful.  I recently converted the synthesis 
 engine in JSyn from native 'C' to pure Java and I'm glad I did.
 
 I'm not interested in a flame war. Every language is great and has pluses and 
 minuses. I program mostly in 'C' to pay the rent. But I love Java and thought 
 I would share some of my experience with audio processing in Java.
 
 First, why *not* use Java:
 
 1) Garbage collection: When the JVM does garbage collection it can cause 
 threads to pause. This means you have to use bigger output buffers and suffer 
 higher latency as a result.  Luckily one can tune the garbage collector so 
 this is not too bad. Also there are real-time JVMs that effectively eliminate 
 this problem but they are very expensive and some run only on Linux.
 
 2) Performance: Java code generally runs slower than equivalent 'C' code. It 
 used to be much slower. But the HotSpot just-in-time compiler has improved 
 performance significantly. I have seen reports that HotSpot code can be 
 faster in some cases than code compiled for generic x86 because HotSpot can 
 optimize the code for the actual processor model it is running on. I found 
 that my Java code runs about 70-80% as fast as my old 'C' code.  I often use 
 no more than 10-20% of the CPU anyway so it really makes no difference to me.
 
 3) JavaSound: JavaSound was optimized for streaming so it tends to have high 
 latency.  Also JavaSound on Mac is a bit broken and pops every few seconds. 
 JavaSound also does not support multi-channel (N > 2) devices very well.  I am 
 working on a Java wrapper for PortAudio that will hopefully address these 
 issues.
 
 So given these problems, why use Java:
 
 A) Tools: I love the Eclipse IDE for Java. It has very powerful refactoring 
 tools. The code practically writes itself.
 
 B) Safety: If I over-index an array or miscast an object then Java tells me 
 immediately and gives me a stack trace. I don't crash five minutes later 
 wondering how I scribbled memory. So I don't waste a lot of time debugging 
 obscure pointer bugs. I am more confident that the code I ship is stable.
 
 C) Cross-platform: I used to spend most of my development time for JSyn 
 trying to maintain Mac, Windows and Linux versions of the native code. I had 
 to deal with 32 vs 64-bit OSes, browser plugins, installers, etc. Yuck. Now I 
 just build a JSyn JAR that is pure Java and it works everywhere. I can even 
 write large GUI apps in Swing using Threads and networking and then drag them 
 from Mac to PC or Linux and they just work. I can even write Applets that run 
 in a browser. There are a few gotchas related to file paths etcetera but they 
 are minor and easy to avoid once you learn how.  Now I can concentrate on 
 writing synthesis and music code.
 
 I am happy to trade off latency and performance issues for the luxury of 
 writing in pure Java.
 
 Phil Burk
 http://www.softsynth.com/jsyn/

A very interesting post, Phil, thanks.

My remark about Java was just me being facetious. It's interesting to have 
someone who knows it in more detail explain the pluses and minuses. I wrote 
Java applets some (many?) years ago, but Java has moved on a long way since then.

It's also interesting to see the different focus here from some of the other 
lists I follow. People here seem to mostly use (correct me if I'm wrong) 
desktop platforms and processors, so you've got bags of power and memory, and 
you're writing at a fairly high level, with several layers of drivers/OS and 
such like between you and the hardware. Hence your remark that you typically 
only use 10-20% of the CPU. My own current work is writing for lightweight 
embedded processors, so both memory and MIPS are severely limited, and you 
usually run out of one or the other. Hopefully you can get all/most/a useful 
subset of your desired features in before you do. Secondly, you rarely have 
anything else in the chip to worry about - no 'sound drivers' or OS unless you 
write them. This is both a blessing and a curse. A curse 'cos you have to do 
everything yourself, but a blessing because you have complete control over what 
is going on.

T
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a multiband compression experiment

2011-02-08 Thread Tom Wiltshire
Very nice.

Can we hear a 'before' and 'after' for the compression, please?

Thanks,
Tom

On 8 Feb 2011, at 19:51, Theo Verelst wrote:

 Hi all
 
 Using my new I7 motherboard's 192kS/s converters I thought I'd record a short 
 jazz piece to test a multiband compression scheme at that sample rate.
 
 So I recorded 3 pieces with a Kurzweil PC3 into Rosegarden, using a Lexicon 
 compression/reverb, mixed them together and fed them through the 15-band 
 filter/compression bank, and converted the result to a 44.1 kHz MP3:
 
   http://www.theover.org/Kurz/ehwyg.mp3 (3.9 MB)
 
 The song is modelled after the first part of 'Everything Happens When You're 
 Gone' from Michael Brecker's famous 'Don't Try This at Home', which I 
 studied long ago.
 
 It works well, and I needed no additional production means or tricks, so the 
 whole path appears to be neutral.
 
 Theo Verelst
 
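(For anyone who fancies trying something similar in code, here's a very rough 
sketch of one band of a filter/compressor bank like the one Theo describes - 
band-pass the input, follow the envelope, reduce gain above a threshold, then 
sum the processed bands. The struct, the names and the simple one-pole 
follower are my own illustration, not Theo's actual setup.)

#include <math.h>

typedef struct {
    double env;        /* envelope follower state */
    double attack_c;   /* one-pole attack coefficient (0..1) */
    double release_c;  /* one-pole release coefficient (0..1) */
    double threshold;  /* linear threshold */
    double ratio;      /* compression ratio, e.g. 4.0 */
} band_comp_t;

/* Compress one already band-passed sample; run one instance per band and sum. */
double band_compress(band_comp_t *c, double band_in)
{
    double level = fabs(band_in);
    double coef  = (level > c->env) ? c->attack_c : c->release_c;
    c->env += coef * (level - c->env);

    double gain = 1.0;
    if (c->env > c->threshold)
        gain = pow(c->env / c->threshold, 1.0 / c->ratio - 1.0);

    return band_in * gain;
}
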

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Approaches to multiple band EQ

2011-01-11 Thread Tom Wiltshire
I'd approach this from an analogue-thinking angle and design a tunable 
parametric EQ stage and then parallel a load of them up, like Robert suggested.

For the EQ, I'd start by looking at a digital 12dB/oct SVF design, like the 
Chamberlin filter. This allows you to tweak resonance independently of 
frequency, so you could adjust it depending on how wide the bands are. The 
per-band calculation wouldn't be too bad, so you could easily do multiple bands.
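
Something like this, per band, per sample (a minimal sketch of the Chamberlin 
SVF - the struct and function names are mine; for a peaking EQ stage you'd mix 
some of the band-pass output back in with the dry signal):

#include <math.h>

/* Chamberlin state-variable filter: one 12 dB/oct band, per-sample update.
   fc = centre frequency, fs = sample rate, Q = resonance.
   The simple tuning below only behaves well for fc well under fs/6 or so;
   oversample or use another topology for higher tunings. */
typedef struct { double low, band; } svf_t;

double svf_bandpass(svf_t *s, double in, double fc, double fs, double Q)
{
    double f = 2.0 * sin(3.141592653589793 * fc / fs);  /* frequency coefficient */
    double q = 1.0 / Q;                                  /* damping */

    s->low  += f * s->band;
    double high = in - s->low - q * s->band;
    s->band += f * high;

    return s->band;   /* band-pass out; s->low and high are the LP/HP outputs */
}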

And don't worry about the nonlinear phase responses - have you thought about 
what an analogue EQ does to the phase? Any digital implementation will be no 
worse, and will have less noise to boot. 'Does it sound good?' is the question 
to ask, unless you've got some particular reason to need to protect the phase.

HTH,
Tom


On 11 Jan 2011, at 18:23, Thomas Young wrote:

 Hi all
 
 I need to develop a real-time multiple band EQ DSP effect, but I am unsure 
 about how to approach it. 
 
 My preferred approach would be to FFT -> modify spectrum -> IFFT; however, I 
 think that will end up being too slow (or at least using up far more 
 processing power than I would like). The only other approach I can think of 
 is a number of IIR band-stop filters in series; would this be practical? I am 
 concerned that there would be some negative interaction between the filters, 
 or some unpredictable results due to different (non linear) phase responses 
 of the filters. It's important that the DSP introduces minimal distortion and 
 is acoustically transparent when 'flat'.
 
 Information about any other common approaches to multiple-band EQs would be 
 helpful too.
 
 Thanks
 
 Thomas Young
 
 Core Technology Programmer
 Rebellion Developments LTD
 

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp