Knut Petersen <knut_peter...@t-online.de> writes:
> Hi everybody!
> Given there is a) a lilypond score and b) a recording of the music
> described in the score.
> It's easy to export the music events from the score, and it's also easy
> to feed the music to an FFT.
That's sort of backwards. You know the pitches you expect: you can just
use one-tap delay lines (possibly with linear interpolation) to subtract
the signal with the expected periodicity and get a residue signal, then
use a Markov chain on the strength of the residue to follow the music.
You really don't want to match an independent analysis of the music to
the score. Where's the point in that? That's not what a human listener
does, either.
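A minimal sketch of the delay-line idea, assuming NumPy and a mono signal: for each expected pitch, subtract a copy of the signal delayed by one period (with linear interpolation for the fractional part of the delay) and measure the energy of what remains. A low residue means the expected pitch is actually sounding. The function name and the Markov-chain stage that would consume these residue strengths are not from any existing tool; this only illustrates the residue computation itself.

```python
import numpy as np

def residue_energy(x, sr, freq):
    """One-tap delay-line subtraction: remove the component of x with
    period sr/freq (seconds -> samples) and return the residue energy
    relative to the input energy.  A small value means the frame
    contains the expected pitch; a large value means it does not."""
    period = sr / freq          # expected period in (fractional) samples
    d = int(period)
    frac = period - d           # fractional part -> linear interpolation
    n = len(x)
    # x interpolated at position n - period, i.e. between x[n-d-1] and x[n-d]
    delayed = (1.0 - frac) * x[1:n - d] + frac * x[:n - d - 1]
    residue = x[d + 1:] - delayed
    return float(np.sum(residue ** 2) / np.sum(x[d + 1:] ** 2))
```

Run per analysis frame against each pitch the score says should currently be sounding; the sequence of residue strengths is what a tracking stage (Markov chain or otherwise) would then follow.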
> Does anyone know of any existing software to correlate these two
> datasets?
> LilyPond music events could be translated to a set of expected FFT
> coefficients, and a set of transformations could be searched that
> gives the best correlation of the two datasets.
> One possible use would be to export the actual tempo found in the
> recording, transform it to a sequence of \tempo commands, add it to a
> nullvoice in a score, run lilypond again, produce a score video
> without audio and combine that with the recording.
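The last step of the quoted proposal, turning an alignment into \tempo commands, is simple once you have matching score positions and recording times. The function below is purely illustrative (not existing software); `beats` are score positions in quarter notes and `times` are the corresponding seconds in the recording, which would have to come from whatever score-to-audio alignment is used:

```python
def tempo_marks(beats, times):
    """Given score positions (in quarter notes) and the matching times
    (in seconds) found in the recording, return one LilyPond \\tempo
    command per segment, giving that segment's local tempo."""
    marks = []
    for i in range(len(beats) - 1):
        # local tempo of this segment in quarter notes per minute
        bpm = 60.0 * (beats[i + 1] - beats[i]) / (times[i + 1] - times[i])
        marks.append(r"\tempo 4 = %d" % round(bpm))
    return marks
```

The resulting strings could then be interleaved with skips in a dedicated voice so the re-run of LilyPond produces a video synchronized to the recording.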
Frankly, I think highlighting single notes instead of having a smoothly
moving time bar (namely letting LilyPond typeset stuff only once and
doing the graphical processing without LilyPond) is going to be visually
inferior.
lilypond-devel mailing list