Huh, interesting video, and for more than just the visualizations. I've always been intrigued by how people use their computers, and this is superb. I rarely see people use their computers as literal instruments. Fun.
Thanks for the share, Heidi. And I dug your sound-as-type-alignment analogy, Dana. Quite effective. (A quick sketch of the onset arithmetic behind those three stackings is below, after the quoted thread.)

/johnny
https://abledaccess.com
https://abledaccess.com/twitter
access must be abled

> On Aug 31, 2015, at 3:12 PM, Valles, Heidi <hval...@ocadu.ca> wrote:
>
> Don't judge me! But I recently watched this video of Diplo/Skrillex/Biebs talking about a song they made, and there are some neat visuals overtop of the video that relay info about the tune using dots/waves… the beat, the size of the sounds, their colour/pitch, etc.
>
> https://www.youtube.com/watch?v=1mY5FNRh0h4
>
> :)
>
> ---
> Heidi Valles
>
> WebSavvy Accessibility Services
> http://websavvy.idrc.ocad.ca/
>
> Inclusive Design Research Centre
> OCAD University
>
>> On Aug 31, 2015, at 2:06 PM, Ayotte, Dana <dayo...@ocadu.ca> wrote:
>>
>> Hi All,
>>
>> I spent some time on a simple sonification experiment and wanted to share my thoughts with you. I was looking at how to arrange the stacking of sounds. I've tried to visually represent it here (and have also provided examples). In this case there are 5 individual tracks/values/sounds. The dashed lines represent the duration of the sound (playing from left to right and all sounds playing at once).
>>
>> what I was thinking of as "left justified":
>>
>> —————————————
>> ———————————
>> ————————
>> —————
>> ——
>>
>> Example:
>>
>> and "right justified":
>>
>> —————————————
>>   ———————————
>>      ————————
>>         —————
>>            ——
>>
>> Example:
>>
>> and "centered":
>>
>> —————————————
>>  ———————————
>>   ————————
>>     —————
>>      ——
>>
>> Example:
>>
>> In the first case ("left justified") my thought was that it's difficult to separate out all the sounds when you hear them all play together right away. Is it better to "right justify" them, so that you hear each sound join individually? Or perhaps this can also be achieved by "centering" the sounds?
>>
>> The tricky part of course is that there are so many parameters - for example, one instrument each playing a different pitch (as heard here), vs a variety of instruments, and ascending/descending scales vs random pitch, etc. Also thinking about how to combine this with a sequential representation, i.e.:
>>
>> ——————————— | ———————— | ————— | ——— | ——
>>
>> Lots to experiment with. Your thoughts are welcome!
>>
>> Dana
>>
>> <sonification stacked strings Left - 2015-08-31, 10.37 AM.mp3>
>> <sonification stacked strings Right - 2015-08-31, 10.40 AM.mp3>
>> <sonification stacked strings Center - 2015-08-31, 10.41 AM.mp3>
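One way to think about the three stackings Dana describes (and the sequential layout) is as onset scheduling: given each track's duration, work out when it should start. Here is a minimal sketch in Python, purely illustrative; the function names and the five example durations are my own assumptions, not anything taken from Dana's experiment or the attached mp3s.

# Illustrative sketch (not from the thread): onset times for the
# "left justified", "right justified", and "centered" stackings,
# given only each track's duration in seconds.

def start_times(durations, alignment="left"):
    """Return the onset time for each track under the chosen alignment.

    "left"   : every track starts at t=0 (all sounds enter together).
    "right"  : every track ends together, so shorter tracks join later.
    "center" : every track is centered on the midpoint of the longest one.
    """
    longest = max(durations)
    if alignment == "left":
        return [0.0 for _ in durations]
    if alignment == "right":
        return [longest - d for d in durations]
    if alignment == "center":
        return [(longest - d) / 2 for d in durations]
    raise ValueError(f"unknown alignment: {alignment}")


def sequential_start_times(durations, gap=0.0):
    """Onsets for the sequential layout (——— | —— | —): one track after another."""
    onsets, t = [], 0.0
    for d in durations:
        onsets.append(t)
        t += d + gap
    return onsets


# Five tracks of decreasing length, as in the diagrams (hypothetical durations).
durations = [13.0, 11.0, 8.0, 5.0, 2.0]
for mode in ("left", "right", "center"):
    print(mode, start_times(durations, mode))
print("sequential", sequential_start_times(durations))

With those example durations, "right" gives onsets of 0, 2, 5, 8 and 11 seconds, so each shorter track audibly joins partway through, which is the staggered-entry effect Dana is asking about; "center" staggers both the entries and the exits.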
_______________________________________________________
fluid-work mailing list - fluid-work@lists.idrc.ocadu.ca
To unsubscribe, change settings or access archives,
see http://lists.idrc.ocad.ca/mailman/listinfo/fluid-work