Re: [music-dsp] FFTW Help in C

2015-06-11 Thread Danny van Swieten
When setting up the audio callback for PortAudio you can give it a void* to 
some data. Set up the FFT plan beforehand and pass the FFT object as that 
void*. In the callback, cast the void* back to get the FFT object.
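
For example, a minimal sketch (FftState and the other names here are 
illustrative, not from any posted code):

#include <fftw3.h>
#include <portaudio.h>

/* State created once in main(), before the stream starts. */
typedef struct {
    double       *in;   /* fftw_alloc_real(N)        */
    fftw_complex *out;  /* fftw_alloc_complex(N/2+1) */
    fftw_plan     plan; /* fftw_plan_dft_r2c_1d(...) */
} FftState;

static int callback(const void *input, void *output,
                    unsigned long frameCount,
                    const PaStreamCallbackTimeInfo *timeInfo,
                    PaStreamCallbackFlags statusFlags,
                    void *userData)
{
    FftState *fft = (FftState *)userData;  /* cast the void* back */
    /* accumulate input samples into fft->in; once N have arrived: */
    fftw_execute(fft->plan);
    return paContinue;
}

/* When opening the stream, pass the state as the last argument:
   Pa_OpenStream(&stream, &inParams, NULL, sampleRate,
                 framesPerBuffer, paNoFlag, callback, &state); */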

Good luck

Sent from my iPhone

 On 11 Jun 2015, at 16:20, Connor Gettel connorget...@me.com wrote:
 
 Hello Everyone,
 
 My name’s Connor and I’m new to this mailing list. I was hoping somebody 
 might be able to help me out with some FFT code. 
 
 I want to do a spectral analysis of the mic input of my sound card. So far in 
 my program I’ve got my main function initialising PortAudio, inputParameters, 
 outputParameters, etc., and a callback function above it passing audio 
 through. It all runs smoothly. 
 
 What I don’t understand at all is how to structure the FFT code in and around 
 the callback, as I’m fairly new to C. I mostly understand the steps of the FFT 
 in terms of memory allocation, setting up a plan, and executing the plan, but 
 I’m still really unclear as to how to structure these pieces of code into the 
 program. What exactly can and can’t go inside the callback? I know it’s a 
 tricky place because of timing etc… 
 
 Could anybody please explain to me how I could achieve a real-to-complex 
 one-dimensional DFT on my audio input using a callback? 
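 
 For reference, the usual split is something like this (a sketch, not taken 
 from the attached file; N is the FFT size):
 
 /* in main(), before starting the stream: planning is slow and
    allocates, so it must never run inside the callback */
 double       *in   = fftw_alloc_real(N);
 fftw_complex *out  = fftw_alloc_complex(N/2 + 1);  /* r2c output length */
 fftw_plan     plan = fftw_plan_dft_r2c_1d(N, in, out, FFTW_MEASURE);
 
 /* inside the callback: once N input samples have been copied into in[] */
 fftw_execute(plan);  /* fine in the callback: no allocation here */
 
 /* in main(), after Pa_StopStream() and Pa_CloseStream() */
 fftw_destroy_plan(plan);
 fftw_free(in);
 fftw_free(out);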
 
 I cannot even begin to explain how grateful I would be if somebody could walk 
 me through this process. 
 
 I have attached my callback function code so far, with the FFT code (as yet 
 unincorporated) at the very bottom below the main function, should anyone 
 wish to have a look. 
 
 I hope this is all clear enough; if more information is required, please let 
 me know. 
 
 Thanks very much in advance!
 
 All the best,
 
 Connor.
 
 
 Callback_FFT.c
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] MOOC on Audio Signal Processing for Music Applications

2014-09-06 Thread Danny
This is so cool. Thank you very much!

Sent from my iPhone

 On 6 Sep 2014, at 14:18, Ming Li liming@gmail.com wrote:
 
 Sorry for cross posting.
 
 -- Forwarded message --
 From: Serra Xavier xavier.se...@upf.edu
 Date: Sat, Sep 6, 2014 at 12:50 AM
 Subject: [ISMIR-Community] MOOC on Audio Signal Processing for Music
 Applications
 To: commun...@ismir.net
 
 
 In collaboration with Prof. Julius Smith from Stanford University, I have
 put together a 10 week long course on Audio Signal Processing for Music
 Applications in the Coursera online platform. The course will start on
 October 1st and the landing page is https://www.coursera.org/course/audio
 but you can already go to the class page and check the content of the first
 week.
 
 We have designed a course that should be of interest and accessible to
 people coming from diverse backgrounds while going deep into several signal
 processing topics. We focus on the spectral processing techniques of
 relevance for the description and transformation of sounds, developing the
 basic theoretical and practical knowledge with which to analyze,
 synthesize, transform and describe audio signals in the context of music
 applications.
 
 The course is based on open software and content. The demonstrations and
 programming exercises are done using Python under Ubuntu, and the
 references and materials for the course come from open online repositories.
 The software and materials developed for the course are also distributed
 with open licenses.
 
 Each week of the course is structured around six types of activities:
 - Theory: video lectures covering the core signal processing concepts.
 - Demos: video lectures presenting tools and examples that complement the
 theory.
 - Programming: video lectures presenting the needed programming skills to
 implement the techniques described in the theory.
 - Quiz: questionnaire to review the concepts covered.
 - Assignment: programming exercises to implement and use the methodologies
 presented.
 - Advanced topics: videos and written documents that extend the topics
 covered.
 
 Preparing this has been a big challenge and we are eager to keep improving 
 it with feedback from people, so let me know any comments you might 
 have.
 
 Please share this information with students or with anyone who might be 
 interested in this topic.
 
 
 Xavier Serra
 Music Technology Group
 Universitat Pompeu Fabra, Barcelona
 http://www.dtic.upf.edu/~xserra/
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Hosting playback module for samples

2014-02-27 Thread Danny
If I understand correctly, JUCE would be the solution.
You say you already have the working C++ code, so you could use that and add an 
AudioProcessor from JUCE to do your playback.

 On 27 Feb 2014, at 06:36, Ross Bencina rossb-li...@audiomulch.com wrote:
 
 Hello Mark,
 
 On 27/02/2014 3:52 PM, Mark Garvin wrote:
 Most sample banks these days seem to be in NKI format (Native
 Instruments). They have the ability to map ranges of a keyboard into
 different samples so the timbres don't become munchkin-ized or
 Vader-ized. IOW, natural sound within each register.
 
 A playback engine is typically something like Native Instruments'
 Kontakt, which is 'hosted' by the main program (my composition
 software, for example). Then NI Kontakt can load up NKI files and
 deliver sound when it receives events.
 
 The whole process of linking events, etc., is usually what
 stymies programmers who are new to VST-based programming. And
 even many who are familiar.
 
 Yes, the VST SDK is not the best documented in the world.
 
 
 Personally I would avoid managed code for anything real-time (ducks).
 
 Actually, C# can be faster than pre-compiled code!
 
 Speed has nothing to do with real-timeness.
 
 Real-time is all about deterministic timing. Runtime JIT and garbage 
 collection both mess with timing. It may be that the CLR always JITs at load 
 time; that doesn't save you from GC (of course there are ways to avoid GC 
 stalls in C#, but if you just used a deterministic language this wouldn't be 
 necessary).
 
 
 You'd need to build a simple audio engine (consider PortAudio or the
 ASIO SDK). And write some VSTi hosting code using the VST SDK. It's this
 last bit that will require some work. But if you limit yourself to a
 small number of supported plugins to begin with it should not be too
 hard. MIDI scheduling in a VSTi is not particularly challenging -- the
 plugins do the sub-buffer scheduling, you just need to put together a
 frame of MIDI events for each audio frame.
 
 That's inspiring. I'm not sure that this is done in the same way as a
 regular plugin though.
 
 I'm not sure what you mean by a regular plugin.
 
 I have a commercial VST host on the market so I do know what I'm talking 
 about.
 
 
 And I believe it's pretty difficult to host a
 VSTi in managed code. That is pretty much the crux of the problem right
 there. I've heard of a lot of people who started the project but were
 never able to get it off the ground.
 
 So you're insisting on using C# for real-time audio? As noted above I think 
 this is a bad idea. There is no rational reason to use C# in this situation.
 
 Just use unmanaged C++ for this part of your program. Things will go much 
 better for you, not least because both real-time audio APIs and the VST 
 SDK are unmanaged components.
 
 
 If there's any kind of synchronisation with the outside world things
 will get trickier, but if you can clock the MIDI time off the
 accumulated sample position it's not hard.
 
 I could do without sync to external for now.
 
 ... I guess the main
 approaches would be to either (A) schedule MIDI events ahead of time
 from your C# code and use a priority queue (Knuth Heap is easy and
 relatively safe for real-time) in the audio thread to work out when to
 schedule them; or (B) maintain the whole MIDI sequence in a vector and
 just play through it from the audio thread. Then you need a mechanism to
 update the sequence when it changes (just swap in a new one?).
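 
 A minimal sketch of approach (A) in C, assuming a fixed-capacity heap owned 
 by the audio thread (the cross-thread handoff that fills it, e.g. a 
 lock-free FIFO, is omitted, and all names are illustrative):
 
 #include <stdint.h>
 
 typedef struct { uint64_t time; uint8_t data[3]; } MidiEvent;
 
 #define CAP 1024
 static MidiEvent heap[CAP];  /* binary min-heap keyed on .time */
 static int count = 0;
 
 static void push(MidiEvent e) {
     if (count == CAP) return;          /* full: drop (grow off-thread) */
     int i = count++;
     while (i > 0 && heap[(i - 1) / 2].time > e.time) {
         heap[i] = heap[(i - 1) / 2];   /* sift parent down */
         i = (i - 1) / 2;
     }
     heap[i] = e;
 }
 
 /* Pop the earliest event strictly before sample time 'end'; returns 0
    when none is due. No locks, no allocation: safe in the callback. */
 static int pop_due(uint64_t end, MidiEvent *e) {
     if (count == 0 || heap[0].time >= end) return 0;
     *e = heap[0];
     MidiEvent last = heap[--count];
     int i = 0;
     for (;;) {                         /* sift the last element down */
         int c = 2 * i + 1;
         if (c >= count) break;
         if (c + 1 < count && heap[c + 1].time < heap[c].time) c++;
         if (last.time <= heap[c].time) break;
         heap[i] = heap[c];
         i = c;
     }
     heap[i] = last;
     return 1;
 }
 
 /* In the audio callback, with 'pos' the accumulated sample position:
    while (pop_due(pos + framesPerBuffer, &ev))
        add ev to this frame's MIDI block at offset ev.time - pos;
    then pos += framesPerBuffer. */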
 
 The internals of a VSTi host are beyond me at present. I was hoping
 for some simple thing that could be accessed by sending MIDI-like events
 to a single queue.
 
 I'm sure there are people who will license you something, but I don't know of 
 an open-source solution. JUCE might have something, maybe?
 
 Ross.
 
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp