On 2012-10-05, Eric Carmichel wrote:

(Ville, once again the reason why I linked you in is to be found lower down the post.)

Surround controllers, on the other hand, are generally limited in their number of channels or become expensive. One solution to my 'dilemma' was to use a DAW surface controller. The simplest implementation of this idea was an attempt to use a MIDI volume controller to remotely control the Master fader.

Early vector synths possess joysticks well integrated with MIDI. What you'd need then is a counterpart which can parse the serial MIDI stream and turn it into a) a continuous stream of panning points and b) interpolation between them, despite c) the two separate controller value updates arriving at different times, with no other timing information to connect them together.
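The merging part could be sketched roughly like this -- the CC numbering and the piecewise-linear interpolation are my illustrative assumptions, not any standard:

```python
# Sketch: merge two independent MIDI CC streams (say, one CC for pan X,
# another for pan Y) into a single interpolated pan trajectory, even
# though the two controllers update at unrelated times.

from bisect import bisect_right

def interpolate(events, t):
    """Piecewise-linear value of one CC stream at time t.
    events: time-sorted list of (time, value) pairs."""
    times = [e[0] for e in events]
    i = bisect_right(times, t)
    if i == 0:
        return events[0][1]          # before first update: hold first value
    if i == len(events):
        return events[-1][1]         # after last update: hold last value
    (t0, v0), (t1, v1) = events[i - 1], events[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def pan_at(x_events, y_events, t):
    """Joint (x, y) pan position at time t."""
    return interpolate(x_events, t), interpolate(y_events, t)

x = [(0.0, 0), (1.0, 127)]            # X sweeps left-to-right over 1 s
y = [(0.0, 64), (0.5, 64), (1.5, 0)]  # Y only starts moving at 0.5 s
print(pan_at(x, y, 0.5))  # -> (63.5, 64.0)
```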

I've never seen a controllee which could do this sort of stuff. If you want to get somebody to implement it in a production-level thingie, I'd seriously consider contacting Charlie Richmond.

A kit available from midikits.net23.net provided an easy-to-build and flexible solution. This is a hardware device with a USB interface that serves to control the (software) Master fader.

Over MIDI the best solution would prolly be the standard pan controller used for the mid/Y channel, combined with a proprietary controller for the X/side channel. Two more if you're dealing with full W-format: W' == W - X - Y, and W - Z. Roughly speaking.
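For illustration, a sketch of mapping two 7-bit controller values onto first-order panning gains -- the CC-to-angle mapping here is purely an assumption of mine, not any standard:

```python
# Sketch: map two 7-bit MIDI controller values (0..127, assumed to carry
# azimuth and elevation) to first-order B-format panning gains.
import math

def cc_to_bformat_gains(cc_az, cc_el=64):
    az = (cc_az / 127.0) * 2 * math.pi           # 0..2*pi
    el = ((cc_el - 64) / 63.0) * (math.pi / 2)   # -pi/2..pi/2
    w = 1 / math.sqrt(2)                         # conventional W weighting
    x = math.cos(az) * math.cos(el)
    y = math.sin(az) * math.cos(el)
    z = math.sin(el)
    return w, x, y, z

# Front and center: W = 0.707..., X = 1, Y = 0, Z = 0
print(cc_to_bformat_gains(0))
```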

But by building my own preamp, I achieved a large channel count by using serially-connected Burr Brown PGA2311 ICs.

Why not go all-digital, with something like the (Cirrus derived, I believe) Crystal CS4234 or the like? Those puppies can be coaxed to work in full tandem as well, or at least synched to analog-kind perfection.
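On the serial chaining: the PGA2311 takes one 16-bit word per chip (right-channel gain byte, then left), and extra bits shift out of SDO into the next device, so one long SPI write addresses the whole chain with the last chip's word sent first. A sketch of building that byte stream -- byte order and the gain formula are from my reading of the datasheet, so verify against your actual wiring:

```python
# Sketch: SPI byte stream for daisy-chained PGA2311 volume controls.

def gain_to_code(db):
    """PGA2311 gain byte: gain(dB) = 31.5 - 0.5 * (255 - N); N = 0 mutes."""
    if db is None:                # mute
        return 0
    n = round(192 + 2 * db)       # valid audio range: -95.5 dB .. +31.5 dB
    if not 1 <= n <= 255:
        raise ValueError("gain out of range")
    return n

def chain_bytes(gains_db):
    """gains_db: [(right_db, left_db), ...], ordered first-to-last in chain."""
    out = []
    for right, left in reversed(gains_db):  # last chip's word goes out first
        out += [gain_to_code(right), gain_to_code(left)]
    return bytes(out)

# Three chained chips, all channels at unity (0 dB = code 0xC0):
print(chain_bytes([(0, 0)] * 3).hex())  # -> 'c0c0c0c0c0c0'
```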

A single rotary pulse encoder controls all channels, but now I have the added benefit of software control.

A Gray-coded knob works pretty well with digital electronics.
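It does: the quadrature output of such an encoder is itself a 2-bit Gray code, so only one bit changes per detent and decoding reduces to a small transition table. A sketch:

```python
# Sketch: decoding a quadrature (2-bit Gray code) rotary encoder.
# Consecutive valid states differ by exactly one bit, so any two-bit
# jump is a detectable glitch. Index the table by (prev << 2) | cur.

# Valid clockwise sequence of (A, B): 00 -> 01 -> 11 -> 10 -> 00
TRANSITION = {
    0b0001: +1, 0b0111: +1, 0b1110: +1, 0b1000: +1,  # clockwise steps
    0b0010: -1, 0b1011: -1, 0b1101: -1, 0b0100: -1,  # counter-clockwise
}

def decode(states):
    """states: iterable of 2-bit encoder readings; returns net step count.
    Unknown transitions (glitches, no movement) contribute 0."""
    pos, prev = 0, None
    for cur in states:
        if prev is not None:
            pos += TRANSITION.get((prev << 2) | cur, 0)
        prev = cur
    return pos

print(decode([0b00, 0b01, 0b11, 0b10, 0b00]))  # -> 4 (one full CW cycle)
```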

Thanks to all who wrote. The info on Richard Furse's site helped immensely.

They always do. I'd very much like to have a zipped or otherwise compressed representation of every site out there, for preservation, as well. Not always because I want to redistribute the stuff as part of the Motherlode, but simply because I believe in offline preservation of the Good Stuff as well.

Regarding my 6th (or roaming speaker): This channel stands alone for a few reasons that I didn't explain but will comment on here: First, my current study involves SNRs in reverberant environments. The primary noise source is talkers and room reflections... specifically, talkers at a distance. The signal is speech from a nearby talker. This represents a scenario found in restaurants, and a listening condition that is difficult for cochlear implant users. [...]

That's a research application. Do you already have a format in which to present/preserve both your source data and your conclusions? I could help select a few, or then participate in developing yet another one. (I'm a data representation freak even above my capacity as an ambisonic and relational database one.)

This way, I use a handheld response box containing, say, 8 words written on push-buttons, and the subject simply pushes the buttons in the order the words are heard.

As a hearing-impaired person who knows a bit about auditory tests and statistics, I believe that design is a bit dangerous on multiple fronts...

(Keyboards or word recognition software to collect responses become unwieldy and unreliable). When the listener makes x consecutive mistakes, the SNR is automatically improved to make listening easier (or decreased to make it more difficult in the case of consecutive correct responses).

Optimally you'd do an interpolation search over the whole SNR range, for speed, and with stochastic backtrack, in order to get tighter error bars.
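The consecutive-correct/consecutive-error rule quoted above is essentially a transformed up-down staircase (cf. Levitt's procedure). A minimal sketch, with illustrative starting point, step size and run lengths:

```python
# Sketch: transformed up-down SNR staircase. Lower the SNR (harder) after
# n_down consecutive correct responses; raise it (easier) after n_up
# consecutive mistakes. All parameter values are illustrative.

class Staircase:
    def __init__(self, snr_db=10.0, step_db=2.0, n_down=2, n_up=2):
        self.snr = snr_db
        self.step = step_db
        self.n_down, self.n_up = n_down, n_up
        self.correct_run = self.wrong_run = 0

    def respond(self, correct):
        if correct:
            self.correct_run += 1
            self.wrong_run = 0
            if self.correct_run == self.n_down:
                self.snr -= self.step        # make it harder
                self.correct_run = 0
        else:
            self.wrong_run += 1
            self.correct_run = 0
            if self.wrong_run == self.n_up:
                self.snr += self.step        # make it easier
                self.wrong_run = 0
        return self.snr

s = Staircase()
for r in [True, True, True, True, False, False]:
    s.respond(r)
print(s.snr)  # -> 8.0 (two 2 dB drops, one 2 dB rise)
```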

The noise is surround noise via an Ambisonic set-up and auralization, and/or live recordings of restaurant noise.

In here, I'd seriously suggest you compare your notes with what Ville Pulkki and his research team did with DirAC. It is *highly* doubtful whether background noise played over a low order ambisonic system actually masks direct sounds the way real life noise does. In fact it's almost certain it doesn't -- once you rerandomize the nondirectional, "noise" component, even via computational means starting from a soundfield recording, suddenly the soundfield takes on a much more natural and extended quality. Over which I at least, as a hearing impaired person, compensate for my problem much better than over a low order ambisonic noise field. I've never seen the end result measured in a proper fashion, but if it were, I'd guess the difference between a fully randomized diffuse background and one played back via a first order ambisonic system could be as much as 10-15dB, at least for people like me.
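By "computational rerandomization" I mean something along these lines: keep the magnitude spectrum of the diffuse component, but draw fresh phases independently per playback channel, which decorrelates the channels while preserving the spectrum. A toy sketch with a plain DFT -- this is one textbook decorrelation method, not DirAC's actual processing:

```python
# Sketch: phase randomization of a (diffuse) signal. Magnitudes are kept,
# phases are redrawn with conjugate symmetry so the output stays real.
import cmath
import random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def phase_randomize(x, rng=random.random):
    n = len(x)
    X = dft(x)
    Y = [0j] * n
    Y[0] = abs(X[0]) + 0j                    # DC stays real
    for k in range(1, n // 2 + 1):
        Y[k] = abs(X[k]) * cmath.exp(2j * cmath.pi * rng())
        Y[n - k] = Y[k].conjugate()          # conjugate symmetry
    if n % 2 == 0:
        Y[n // 2] = abs(X[n // 2]) + 0j      # Nyquist bin must be real
    return idft(Y)

noise = [random.gauss(0, 1) for _ in range(64)]
decor = phase_randomize(noise)
# Energy, and hence the magnitude spectrum, is preserved (Parseval):
print(abs(sum(v * v for v in decor) - sum(v * v for v in noise)) < 1e-6)
```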

Although reverberant noise is generally diffuse, localization cues and "glimpsing" aid the listener in segregating and understanding the signal. At least that's the idea.

Prolly true. Just mind the fact that correlated and uncorrelated background "hum" are rather different as masking signals.

I have an array of speakers at home, but data collected from a living room hardly qualifies as "controlled" or scientific.

If it's well-designed, it actually does count. Because we can't always fully know what happens in even the best (or the worst) of living spaces, unless we actually try them out. So, once again, one of the things Ville Pulkki showed me at one time was something you might call "a research quality, optimal, living room". Acoustically well damped and dispersed, but by no means dead as the room upstairs; that's where he made me a believer in computational randomization of the soundfield. Upstairs, in the anechoic one, was where he convinced me about the fact that more speakers in ambisonic playback isn't always better. ;)

This was why I was wondering whether an Ambiophonic-Ambisonic hybrid system might be possible.

It is, and it has been done already. Just browse over the Ambiophonics site, and look for Ralph's and Robin's publications. E.g. something they call "panorambiophonics" http://www.ambiophonics.org/files/AES24Banff_1.html .

Ideally, I'd like to construct a system that is portable.

There, go with digital, always, n'est-ce pas?

Gobos and flats may work, particularly if they are constructed of materials that provide absorption across the speech spectrum of frequencies. Low-frequency absorption via a gobo would be a more daunting task, though the right combo of mass and compliance could yield a low Q absorber. Just ideas...

The physical acoustics of such complicated objects isn't the forte of most people around here, I think. Not mine at least -- yet there are some like Filippo (Fazi), Robert (Greene) and Angelo (Farina) who've been there and done that already, or at least eat the math for lunch.
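That said, the mass-and-compliance combination quoted above has a standard first estimate: a panel over an air gap resonates near f0 ~ 60/sqrt(m*d), with m the surface mass in kg/m^2 and d the gap depth in metres. A quick sketch -- rule of thumb only, no substitute for measurement:

```python
# Sketch: resonant frequency of a mass-compliance (panel/membrane)
# absorber, using the common rule of thumb f0 = 60 / sqrt(m * d).
import math

def panel_absorber_f0(mass_kg_m2, depth_m):
    """m in kg/m^2 (panel surface mass), d in metres (air-gap depth)."""
    return 60.0 / math.sqrt(mass_kg_m2 * depth_m)

# A 5 kg/m^2 plywood flat over a 0.1 m air gap resonates around 85 Hz:
print(round(panel_absorber_f0(5.0, 0.1)))
```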

In the meantime, I continue to enjoy making Ambisonic field recordings solely for the fun of it. My last recording was an attempt to record a bald eagle at a nearby lake. Most of what I captured was an agitated squirrel and a flying insect that found the mic's windscreen to be a good landing pad. Fun effects!

For the longest time I've wanted to bring a full 3D mic into a BDSM party. I'm used to flying with that crowd, after all. There, you don't have to wait for the squirrel to squeak. It's just that there -- for some odd reason -- people are somewhat wary of letting any kind of recording device near...their arse. ;)
--
Sampo Syreeni, aka decoy - [email protected], http://decoy.iki.fi/front
+358-50-5756111, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
_______________________________________________
Sursound mailing list
[email protected]
https://mail.music.vt.edu/mailman/listinfo/sursound