Re: [music-dsp] An example video: C to FPGA programming

2020-01-12 Thread raito
Andrew,

I think that the information at http://www.clifford.at/icestorm/ would be
useful to you. It describes the icestorm toolchain and uses Lattice's
development boards. I've done some musical things using both the smaller
and larger boards.

It's a pretty inexpensive way to get started.

Neil Gilmore
ra...@raito.com

On Fri, January 10, 2020 5:48 am, Andrew Luke Nesbit wrote:
> On 10/01/2020 10:18, Theo Verelst wrote:
>
>> Hi all
>>
>
> Hi Theo,
>
>
>> Maybe it's not everybody's cup of tea, but I recall some here are
>> (like me) interested in music applications of FPGA based signal
>> processing.
> Lately I have been researching exactly this topic.  It's one of the
> primary areas of DSP research that I am considering directing my career
> towards and making a significant investment of resources in learning.
>
>
> There is a lot of meaningful context in all of this.  I'm looking
> forward to deploying my new website that should explain it.
>
> I have a strong background in music and audio signal processing.  Not
> with FPGA however.
>
>> I made a video showing a real time "Silicon Compile" and test program
>> run on a Zynq board using Xilinx's Vivado HLS to create an FPGA bit file
> I am overwhelmed by where to start in FPGA.  This includes finding a
> hardware recommendation for a beginner development kit.
>
> Nevertheless I have yet to look up a vendor of this FPGA development kit
> and toolchain, and to find out the prices.
>
>> that initializes a 64k short integer fixed point sine lookup table
>> (-pi/2 .. pi/2) which can be used like a C function with argument
>> passing by a simple test program running on ARM processors.
> This is great!  It's simple, useful, and can be visualized with known
> expected results.  It seems like a perfect starting project.
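>
> Just to check my own understanding, here's roughly how I picture that table
> in plain C -- my guess only, not Theo's actual HLS code, and ignoring
> whatever pragmas the real design needs:
>
> #include <math.h>
> #include <stdint.h>
>
> #define LUT_SIZE 65536                    /* 64k entries */
>
> static int16_t sine_lut[LUT_SIZE];
>
> /* Fill the table with Q15 fixed-point sine over -pi/2 .. pi/2. */
> void init_sine_lut(void)
> {
>     const double pi = 3.14159265358979323846;
>     for (int i = 0; i < LUT_SIZE; i++) {
>         double phase = -pi / 2.0 + pi * (double)i / (double)(LUT_SIZE - 1);
>         sine_lut[i] = (int16_t)lrint(sin(phase) * 32767.0);
>     }
> }
>
> /* Usable like an ordinary C function from the ARM-side test program. */
> int16_t sine_lookup(uint16_t index)
> {
>     return sine_lut[index];
> }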
>
>> The main point is the power the C compilation can provide the FPGA
>> with, and to see the use of the latest 2019.2 tools at work with the
>> board,
> Might I rephrase this as the following?
>
>
> -   It's an exercise in selecting an appropriate FPGA development kit.
> This kit would be a good investment and sufficiently repurposable for
> future DSP projects.
>
> -   Setting up the toolchain; learning a workflow; and acquainting
> oneself with the ecosystems of:
>
> -   FPGA-based DSP;
>
>
> -   the Xilinx and FPGA support communities;
>
>
> -   edge computing; and...
>
>
> circling back to the beginning...
>
> perhaps even providing a basic introduction to FPGA for somebody (like
> me?).
>
> In this last case what would be an appropriate "Step 1. Introduction to
> FPGA"?
>
>
> I guess that Xilinx's own documentation for new users of FPGA technology
> would be a good place to start.
>
> If anybody has recommendations for additional books, blogs, forums, etc,
> please let me know.  Thank you!!
>
> In summary: Is Xilinx a good company to invest time into learning its
> ecosystem?  This obviously includes spending money on dev kits with the aim
> of FPGA-based DSP.  For example, is Xilinx's support good?  Is the
> community ecosystem healthy?
>
> Kind regards,
>
>
> Andrew
> --
> OpenPGP key: EB28 0338 28B7 19DA DAB0  B193 D21D 996E 883B E5B9
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>
>

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Auto-tune sounds like vocoder

2019-01-15 Thread raito
I have an unproven theory about that.

The vocal tract has different filtering characteristics at different
pitches. If you take a vocal sound and just pitch-shift it, you're also
shifting the filter characteristics, and that doesn't sound right.

I imagine that some clever person could do something akin to (but definitely
not the same as) an IR for vocals. Then there might be a shot at
correcting pitch properly.
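
Very roughly, the kind of thing I'm picturing, as a toy sketch only: estimate
a short all-pole "vocal tract" filter per frame with plain LPC, inverse-filter
to get the excitation, pitch-shift just the excitation, then re-filter with
the original coefficients so the formants stay put. The frame handling and the
shifting itself are left out here, and as far as I know this is not how any
actual product does it:

#include <stddef.h>

/* Toy sketch of the "IR for vocals" idea: model one frame's vocal-tract
 * coloration as a short all-pole filter via LPC.  The (hypothetical) plan:
 * inverse-filter the frame with these coefficients to get the excitation,
 * pitch-shift only that, then re-filter with the ORIGINAL coefficients. */

/* Autocorrelation of one frame, lags 0..order. */
static void autocorr(const float *x, size_t n, double *r, int order)
{
    for (int lag = 0; lag <= order; lag++) {
        double acc = 0.0;
        for (size_t i = (size_t)lag; i < n; i++)
            acc += (double)x[i] * (double)x[i - lag];
        r[lag] = acc;
    }
}

/* Levinson-Durbin: turns r[0..order] into prediction coefficients
 * a[1..order], with a[0] = 1.  Returns -1 on a silent frame. */
int lpc_from_frame(const float *x, size_t n, int order, double *a)
{
    double r[64];
    if (order < 1 || order >= 64)
        return -1;
    autocorr(x, n, r, order);
    double err = r[0];
    if (err <= 0.0)
        return -1;

    for (int i = 0; i <= order; i++)
        a[i] = 0.0;
    a[0] = 1.0;

    for (int i = 1; i <= order; i++) {
        double acc = r[i];
        for (int j = 1; j < i; j++)
            acc += a[j] * r[i - j];
        double k = -acc / err;
        for (int j = 1; j <= i / 2; j++) {      /* symmetric update */
            double tmp = a[j] + k * a[i - j];
            a[i - j] += k * a[j];
            a[j] = tmp;
        }
        a[i] = k;
        err *= (1.0 - k * k);
    }
    return 0;
}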

Neil Gilmore
ra...@raito.com

On Tue, January 15, 2019 1:05 pm, David Reaves wrote:
> I'm wondering about why the ever-prevalent auto-tune effect in much of
> today's (cough!) music (cough!) seems, to my ears, to have such a
> vocoder-y sound to it. Are the two effects related?
>
>
> Just curious.
> David Reaves
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Sound Analysis

2019-01-02 Thread raito
Robert,

The tonewheels are intended to be perfectly sinusoidal, though their mass
stamping does introduce various differences. The exception is the lowest
octave of certain models, B3 included, made at certain times, as described
here:

http://www.dairiki.org/HammondWiki/ToneWheel

Given the intent of the tonewheel, I wouldn't bother with this particular
experiment at all if I were trying to recreate a B3. But then, I wouldn't
do any foldback either. But it's also well-known that every tonewheel
organ has its own character due to being a very complex electro-mechanical
device, so I suppose I might try this analysis if I were attempting to
create a framework for creating such individually differing organs.

The difference in frequency could also be explained by the state of
maintenance of the tone generator. If it has more than nominal friction,
that could slow down the mechanism.
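
If I did try that analysis, I'd probably start with something as dumb as a
zero-crossing frequency estimate per recorded tone, then compare it against
the wheel's nominal pitch. A throwaway sketch along these lines, assuming a
clean mono float recording of a single tone (the function name is mine):

#include <stddef.h>

/* Rough frequency estimate of a near-sinusoidal tone by counting
 * positive-going zero crossings with linear interpolation.
 * x: mono samples, n: sample count, fs: sample rate in Hz.
 * Returns 0 if fewer than two crossings were found. */
double estimate_freq(const float *x, size_t n, double fs)
{
    double first = -1.0, last = -1.0;
    size_t crossings = 0;

    for (size_t i = 1; i < n; i++) {
        if (x[i - 1] < 0.0f && x[i] >= 0.0f) {
            /* fractional position of the crossing between i-1 and i */
            double frac = x[i - 1] / (double)(x[i - 1] - x[i]);
            double t = (double)(i - 1) + frac;
            if (first < 0.0)
                first = t;
            else {
                last = t;
                crossings++;
            }
        }
    }
    if (crossings == 0 || last <= first)
        return 0.0;
    return crossings * fs / (last - first);   /* cycles per second */
}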

Neil Gilmore
ra...@raito.com

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Playing a Square Wave

2018-06-13 Thread raito
Theo,

My tl;dr answer to your question is that it's difficult because even if it's
digital, it's not digital. Ever. It's always analog.

Like you, I'm a university EE (and Comp. Sci.) because I wanted to go into
chip design. This was back in the early 80's. So maybe my classwork was
different than yours. It was always amusing to deal with my computer
science classmates when something that dealt with the inherent analogness
of even digital logic came up.

In the late 70's, when I was in high school, I also tried to build an
organ. But back then, I had few resources, monetary or otherwise. I didn't
know about top octave chips or anything. So what I used were TTL inverter
circuits to make square wave oscillators, and fed those into J/K
flip-flops for dividers (a bit overkill, that). I also used some other TTL
logic to take those square waves and make some very steppy sawtooth waves
(which I then filtered to make them smoother).

Currently, I emulate one of those old MOSTEK chips + dividers on an FPGA.
I'm not using much of the FPGA, but since I need oodles of output pins, I
have to use something fairly stout. No DACs necessary. I may try a version
sometime that mixes the signals internally to a single bus, then use a DAC
on that, but I fear I won't get it to sound as nice.
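
For the curious, the core of it is about as simple as it sounds. Here's a
little C model of one note's divide-down chain -- a software sketch of the
idea only, not my actual Verilog, and the divisor is whatever your master
clock and note table demand:

#include <stdint.h>
#include <stdbool.h>

#define NUM_OCTAVES 8

/* One note's divide-down chain: a top-octave square wave derived from a
 * master clock, then successive divide-by-two stages, the way the old
 * top-octave chip plus flip-flop dividers worked.
 * Initialize with count = 0 and all levels false. */
typedef struct {
    uint32_t master_div;            /* master-clock ticks per half period   */
    uint32_t count;                 /* ticks since last top-octave toggle   */
    bool     level[NUM_OCTAVES];    /* level[0] = top octave, [1] = /2, ... */
} note_chain;

/* Advance the chain by one master-clock tick. */
void note_chain_tick(note_chain *nc)
{
    if (++nc->count >= nc->master_div) {
        nc->count = 0;
        nc->level[0] = !nc->level[0];            /* top-octave square wave  */
        /* each falling edge toggles the next stage, like a ripple counter */
        for (int i = 0; i + 1 < NUM_OCTAVES; i++) {
            if (nc->level[i])       /* rising edge: stop propagating */
                break;
            nc->level[i + 1] = !nc->level[i + 1];
        }
    }
}

(The old steppy-sawtooth trick is then roughly: sum a few of those octave
outputs with binary weights before the filter.)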

Neil Gilmore
ra...@raito.com

>
> There's this preoccupation I've had since the advent of going "digital",
> let's say since I
> heard music being played on CD in the early 80s. I grew up with access to
> electronics
> equipment that would generate "square waves" in some sort of analogue
> fashion, including
> originally "digital" chips, even driven from frequency stable crystals and
> so on. In fact
> I built my own organ/synthesizer based on a top octave synthesizer chip
> around 1980 which
> I gave CMOS divider chips to get well symmetrical, pure and pretty
> undistorted square
> waves to an analog mixing rail construction, and I must say (I was a
> teenager) I recall the
> different sounds, the feel if you like, of all those different square
> waves by themselves
> and some of the filter and modulation constructs I made, quite well.
>
> Now, like everybody else, I'm used to listening to a lot of audio in some
> form of digital
> source format, ending up at one of the varying types of Digital to Analog
> Converters, to
> enjoy digital music on for instance a smart phone, an HDMI-based digital
> stream converted
> by a TV/Monitor, a very high quality DIY kit based converter setup,
> standard computer and
> bluray player outputs (both not bad) and known brand studio quality USB
> ADC/DAC units
> (Lexicon, Yamaha, and a Burr Brown/TI chip based DIY kit) and finally from
> some variety of
> digital music synthesizers (a.o. a Kurzweil and a Yamaha).
>
> The simple question that forced itself on me often, as I'm sure some can
> relate, after
> having been used to all those early signal sources including a host of
> analog synthesizers
> I had in the past, and a lot of music in various analog forms from
> standard pop to G. Duke
> and Rose Royce to mention a few of my favorites from an earlier era, is
> how it can be that
> such a simple wave like the square wave, just two signal levels with a
> near instantaneous
> jump between them, can be so hard to make digital, if you listen with a
> HiFi system and
> some normal musical signal discernment ?
>
> The answer is relatively simple: a digital square wave for musical
> application comes out
> of all current standard DACs with imperfections that I recognize and have
> an immediate
> form of musical dislike about. Not that a software synth can't be put on,
> played and
> create some fun with square waves, I'm sure it can to some degree be fun
> and played with
> in some music, but for sound enthusiasts, all that digital signal
> processing does come
> across as often the same sounding and not as musical as I remember it can
> be by far.
>
> Is it possible to do something about that? I'm a univ. EE, so in my
> official background
> knowledge there's enough to understand some of the reasons for these
> sound limitations
> easily. Solving all of them will prove to be very hard, given standard DSP
> and normal
> current DACs, so there is that. To begin with, understanding *why* such
> a simple
> "digital" square wave doesn't sound warm and nicely flutey from a digital
> system in many
> cases: the wave as to be "rounded" to fit in the sample timing, and the
> DAC essentially
> doesn't necessarily "know" how to create those up and down signal edges
> with accurate
> timing. So for instance a standard 1 kHz square wave coming out of a
> CD-rate (44.1e3
> samples per second) DAC will have maximum up and down square wave edge
> timing errors in
> the order of 1000/44100 * 100% ~= a few percent timing errors. Doesn't
> sound like much,
> but all the harmonics might be involved, and for a High Fidelity system,
> an error of 1/10
> of a percent nowadays just like in the early days of tube HiFi is
> considered 

Re: [music-dsp] What happened to that FPGA for DSP idea

2017-09-27 Thread raito
Hi everyone,

Some of this stuff looks an awful lot like the development of computer
graphics technology.

First, there were fixed pipeline software renderers.

Then, as the software renderers were starting to have programmable
pipelines, dedicated hardware came into use.
But the hardware used the old fixed pipelines.

Then the hardware matured, and we're back to programmable pipelines.

Seems to me that DSP has had a lot of the same trajectory. Might be
something to be learned there.

Neil Gilmore
ri...@raito.com

> Glad to see there's good interest in the subject, I'm sure it will become
> more important
> with time, because of the traditional computer architectures being
> relatively ineffective
> for a lot of stuff FPGAs can do.
>
> Now, I've put Vivado high level design (free) on the fastest computer I
> use, even on SSD
> for testing it, and looked at the examples some more to "get it". There's
> a lot to get
> still, even simple things like making a C function with an array lookup,
> and it's clear the
> sw is still being built up, for instance such a C function will work, and
> can be pipelined
> to a 2-delay, 1 repeat cycle lookup simply by using traditional C, but as
> soon as I
> changed the example's source code from "256" short array to any other
> memory size, the
> implementation would take like 256 or even 1000 clock cycles :) .
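>
> The kind of function I mean is nothing fancier than this (written from
> memory, so take the exact pragma with a grain of salt):
>
> // toy Vivado HLS lookup that pipelines nicely at the "256" size
> short lut_lookup(const short table[256], unsigned char idx)
> {
> #pragma HLS PIPELINE II=1
>     return table[idx];
> }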
>
> Also, I'm feeling like looking a bit at stuff that (out of necessity)
> interested me as a
> student, like does it make a difference to write a standard formula, like
> exp(x)sin(x)
> with separate function blocks for the two library computations. In single
> precision
> floating point a design comes through the C-to-verilog+dedicated-blocks
> easily enough
> (though it takes a bit longer than simpler examples), and fits my small
> Zynq 7010, but now
> it would require a little lower clock than 100MHz (which never happened
> with the other,
> even more complicated examples, which usually could run higher).
>
> I had hoped to put my Bwise->Maxima->Fortran->C wave form formulas
> straight through the
> compiler, which is like really big to try out, and that's hopeless unless
> I make
> intelligent coprocessors for the basic function elements and somehow would
> make my own
> connections and schedules. It's still cool though that on more or less these
> chips (mine was
> just a bit too small for this software version's output) I could run
> double precision
> trigonometric functions faster than a 3 GHz i7 with 4 cores could
> keep up with,
> using the same C code (if I didn't err in the data somewhere, but I don't
> think so).
>
> T.
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] What happened to that FPGA for DSP idea

2017-09-15 Thread raito
> Novation's Peak synth has its raw voice waveforms generated by an FPGA; it
> just spews out the final part through a DAC that then goes through analog
> filters. Many other soundcard/FX companies are now implementing DSP stuff
> through FPGA. I have not found a wealth of information regarding how these
> are all implemented when compared to traditional DSP but you can find some
> academic papers even. There are more barriers when it comes to FPGA since
> you are confined to a small subset of tools and then need to shift your
> paradigm to FPGA thinking. Would love to know what people are using as
> test boards and if they're willing to share their experiences.

I've been using Lattice's iCE40 series, both with the iCEStick board and
ICE40HX8K-B-EVN dev boards (and using the open-source IceStorm toolchain
and Verilog as the HDL). Pretty low-cost stuff.

It was pretty easy to get it all set up, but there was something that I
had to twiddle to get the FTDI USB stuff to work right. I have the notes
somewhere. Basically, my Windows 7 machine wouldn't recognize the boards
as being on the other end of a USB serial connection.

I'm not doing anything complex with them at the moment, though I'm
contemplating it.

The first thing I wanted to do was a sort of modern take on the old
MOSTEK top octave chips used for divide-down tone generators in the 70's.
Easy enough to code a bunch of square waves. But the next stage of those
sorts of systems used flip-flops to generate other octaves from the top
octave's wave. So I got the other board, which I use to generate all the
square waves I need. I think I'm using something like 6% of the available
logic to do that, but I'm using all the available pins. Now to figure out
how to emulate the functionality of a TDA-1008...

I could internally mix those waves easily enough and put them on pins into
a DAC, but one of the charms of those instruments isn't just the sync of
the square waves themselves, but the relative levels. And I don't know
enough about the characteristics of DACs to know that I could get that
correct to within what it would need to be to sound the way it should.
I've been away from hardware for a while, and FPGAs and DACs weren't
really in use then (and, of course, HDLs are fairly new to me, too).
Still, I'll try it and see how it goes.
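
If I do get around to the internal mix, the arithmetic I'm picturing is no
fancier than this per output sample -- a guess, with made-up level constants;
whether this plus a real DAC lands close enough to the original
resistor-mixer levels is exactly the part I don't know:

#include <stdint.h>
#include <stdbool.h>

#define NUM_RANKS 8

/* Hypothetical per-rank mix levels in Q8 fixed point (256 == unity).
 * On the real instruments these came from the resistor mixing network;
 * the values here are invented, roughly -2 dB per step. */
static const uint16_t rank_level_q8[NUM_RANKS] = {
    256, 203, 161, 128, 102, 81, 64, 51
};

/* Sum the current square-wave states into one signed DAC word.
 * state[i] is the logic level of rank i's square wave this sample. */
int16_t mix_ranks(const bool state[NUM_RANKS])
{
    int32_t acc = 0;
    for (int i = 0; i < NUM_RANKS; i++) {
        int32_t s = state[i] ? 1 : -1;          /* square wave as +/-1 */
        acc += s * (int32_t)rank_level_q8[i];
    }
    /* scale so the worst case (all ranks at full level) still fits in 16 bits */
    return (int16_t)(acc * 32767 / (256 * NUM_RANKS));
}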

If I were doing something a bit more complex, I might use the iCEStick and
set it up so that I could feed a DAC. But the version on the stick is much
smaller than the version on the bigger board, so I might need the bigger
chip just for the logic resources.

Neil Gilmore
ri...@raito.com

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Using an actual database for storage of DSP settings

2016-12-28 Thread raito
Maybe, maybe not. One problem with sqlite is multiple writers. One tool I
use at work needed to change from the default sqlite to PostgreSQL because
we used a new feature that had many processes attempting to access the
database simultaneously.

Not a problem if you're just using it to organize data. Might be one if
you tried to write a plugin to slurp all the settings and regurgitate
them. Hmm, interesting idea. A plugin that talks to the other plugins to
keep their settings separate from the DAW project...
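
If anyone wants to play with the idea, the sqlite side is about this much C.
A minimal sketch with a made-up table layout; the multiple-writer caveat
above still applies:

#include <sqlite3.h>

/* Stash one plugin parameter in a local sqlite database. */
int save_setting(const char *db_path, const char *plugin,
                 const char *param, double value)
{
    sqlite3 *db = NULL;
    if (sqlite3_open(db_path, &db) != SQLITE_OK)
        return -1;

    /* wait up to 2 s instead of failing if another writer holds the lock */
    sqlite3_busy_timeout(db, 2000);

    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS settings ("
        " plugin TEXT, param TEXT, value REAL,"
        " PRIMARY KEY (plugin, param));",
        NULL, NULL, NULL);

    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db,
        "INSERT OR REPLACE INTO settings (plugin, param, value)"
        " VALUES (?1, ?2, ?3);", -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, plugin, -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 2, param, -1, SQLITE_STATIC);
        sqlite3_bind_double(stmt, 3, value);
        rc = (sqlite3_step(stmt) == SQLITE_DONE) ? SQLITE_OK : SQLITE_ERROR;
        sqlite3_finalize(stmt);
    }
    sqlite3_close(db);
    return rc == SQLITE_OK ? 0 : -1;
}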

Neil Gilmore
ra...@raito.com

also a guy who has abused the relational model in the past

> This is an interesting idea.  I wonder if sqlite would be a good fit for
> that problem scale?
>
> – Evan Balster
> creator of imitone 
>
> On Wed, Dec 28, 2016 at 7:52 AM, Sampo Syreeni  wrote:
>
>> On 2016-12-28, Theo Verelst wrote:
>>
>> Did anyone here do any previous work on such subject ? I mean I don't
>>> expect some to come up and say : "sure ! here's a tool that
>>> automatically
>>> does that for all Ladspa's" or something, but maybe people have
>>> ideas...
>>>
>>
>> I can't say I've done anything specialized towards sound, but in what
>> career I did once have, I've modelled my fair share of data within the
>> relational framework. If you need help in that regard, I'm available.
>> Enthusiastic, even. :)
>> --
>> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
>> +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>
>>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Can anyone figure out this simple, but apparently wrong, mixing technique?

2016-12-21 Thread raito
> The crossover between stats and signal processing can show up in
> surprising
> places.
>
> – Evan Balster
> creator of imitone 

Which is why there's a chapter on the necessary statistics in Hamming's
book on digital filters.

Neil Gilmore
ra...@raito.com

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] automation of parametric EQ .

2015-12-21 Thread raito
Robert,

> i just would have thought that by now, 30+ years later,
> that a common practice would have evolved and something would have been
> published (and i could not find anything).

It's the same situation with, for example, DMX512. 30 years later, and
there's NO consensus on these things. Manufacturers do not agree on the
simplest things. And even when they agree, the results are not correct. My
former employer spent upwards of a year trying to get every fixture they
could get their hands on to do something simple -- fade from one color to
another. And they only got close. There's a hundred different ways to set
up the controls, and even when they're set up the same, control values do
not translate. I see the same in audio automation.

> so then, in your session, you mix some kinda nice sound, save all of the
> sliders in PT automation and then ask "What would this sound like if I
> used iZ instead of McDSP?", can you or can you not apply that automation
> to corresponding parameters of the other plugin?  i thought that you
> could.

No. What you can do is put the new plugin in, and mess with its controls
until you get something you like better, or give up.

> if that is the case, then, IMO, someone in some standards committee at
> NAMM or AES or something should be pushing for standardization of some
> *known* common parameters.

So start one.

> i am thinking of putting together a discussion panel...

Then do so. I wish you luck. I avoid standards committees these days.

> does this sound reasonable?

Unfortunately, yes. And equally unfortunately, no.

Nobody wants their apples compared to someone else's apples. If they can
say they have an orange, no one can say definitively that the other guy is
better.

Neil Gilmore
ra...@raito.com

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Microphones for measuring stuff, your opinions

2014-08-26 Thread raito
This page may be of some help:

http://theremin.music.uiowa.edu/MIS.html

Neil Gilmore
ra...@raito.com




 Would like some opinions on measurement mics, as well as best practices in
 using them. We're trying to model the characteristics of some Indian
 instruments in some soundscapes.
 Rohit Agarwal, Khitchdee
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp