In the old days the text of a song had to be verse, as opposed to prose:
it had to comply with constraints such as meter and rhythm, which made it
automatically fit music built on the same metrical patterns. This still
governs many songs today, although lyrics tend to become more and more
prosaic.
Aligning verse to music is straightforward: verse comes in alternating
stressed and unstressed syllables, and music in alternating down- and
upbeats. One aligns a stressed syllable to a downbeat and an unstressed
one to an upbeat. On a wider scale, verse also comes in couplets, as
music comes in pairs of phrases. By stretching some stressed syllables
over multiple beats, one can align couplets to phrase pairs seamlessly.
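The alignment rule above can be sketched as a toy program. Everything here is my own illustration, not anything a real app necessarily does: syllables are marked 'S'/'u', beats 'D'/'U', and the stretching rule absorbs extra beats in down-up pairs so that later syllables keep their beat parity.

```python
def align(syllables, beats):
    """Map each syllable to one or more beats: stressed syllables ('S')
    land on downbeats ('D'), unstressed ones ('u') on upbeats ('U').
    When the phrase has more beats than the line has syllables, stressed
    syllables absorb the surplus a full down-up pair at a time, so the
    remaining syllables still fall on beats of the right kind."""
    extra = len(beats) - len(syllables)
    assert extra >= 0 and extra % 2 == 0, "stretching must keep beat parity"
    spans = [1] * len(syllables)          # beats covered by each syllable
    stressed = [i for i, s in enumerate(syllables) if s == 'S']
    j = len(stressed) - 1                 # stretch from the end of the line
    while extra > 0:
        spans[stressed[j]] += 2           # one extra down-up pair
        extra -= 2
        j = (j - 1) % len(stressed)
    out, b = [], 0
    for s, n in zip(syllables, spans):
        out.append((s, beats[b:b + n]))
        b += n
    return out

# A three-foot line against an eight-beat phrase: the last stressed
# syllable is stretched over three beats, and every 'S' still opens
# on a downbeat.
mapping = align(list("SuSuSu"), list("DUDUDUDU"))
```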
To achieve a similar effect by signal processing, you'd need to segment
the speech into (the equivalent of) couplets and stressed/unstressed
syllables, and the music into phrase pairs and down/up beats. The closer
the match between the two structures, the better sense it presumably
makes.
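As a toy illustration of one piece of that segmentation, here is an energy-only heuristic for the stressed/unstressed labelling. It assumes the speech has already been cut into syllable-sized chunks (e.g. by onset detection, which is not shown); real stress detection would also weigh duration and pitch, and the above-the-mean threshold is an arbitrary choice of mine.

```python
import math

def rms(frame):
    """Root-mean-square energy of one syllable-sized chunk of samples."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def label_syllables(frames):
    """frames: one list of samples per already-segmented syllable.
    Labels a syllable 'S' (stressed) when its energy is above the
    utterance mean, 'u' (unstressed) otherwise."""
    energies = [rms(f) for f in frames]
    mean = sum(energies) / len(energies)
    return ['S' if e > mean else 'u' for e in energies]
```

The resulting 'S'/'u' string is exactly the kind of structure you would then match against the down/up-beat grid of the music.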
xue
On 19/03/13 10:42, Danijel Domazet wrote:
Hi music dsp,
Does anyone know how the Songify mobile app works? The one that "turns
speech into music automatically". The app takes two inputs: user speech,
and predefined underlying music (probably pre-analyzed too). The speech
is processed and mixed into the music. It is obvious that a pitch-shifter
with heavy auto-tuning does the job, but where and what to pitch-shift in
order for all this to make sense? What would be the steps to achieve
something similar?
Thanks,
Danijel Domazet
LittleEndian.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp