In message [EMAIL PROTECTED] you write:
Does anyone know whether Gunther Geiger (the author of the Hammerfall driver),
or anyone else for that matter, is working on the RME Hammerfall DSP (pcmcia
Gunter is not the author of the hammerfall driver.
Winfried Rietsch wrote the original native driver; I
can't see that in the terms. (and a personal request to make it truly viable
for both audio and video put SMPTE/MMC sync functionality on to it... so it
can lock to external video and audio devices correctly - I know Paul -
wordclock would be paramount too)
FYI: wordclock is a h/w issue.
external add-on for this weighs in against it.) The biggest problems I
see with the Hammerfall are the lack of onboard midi (could I use the
midi port from my SB or even my crummy motherboard sound card and have
audio go through the Hammerfall?) and the fact that I hear it's somewhat
hard to use.
how irritating! a package has just shown up on debian by the name of jack, a
cd-ripper.
can you send me a URL or email address? we'll see if we can nip this
in the bud ...
one, however, would like a new release with a new, final name (jack, laaga,
foolib, whatever, I have my preferences but
I've been checking out OpenML[1], and I was wondering what it could mean
for linux audio development?
it looks largely irrelevant to me. or to put it another way, it's
relevant in the same way that ASIO is relevant: it might shape the h/w
designs that some manufacturers use, but nothing more.
i
whats the legal status of surround encoding? are we free to write
GPL'ed software that encodes 6 streams of audio data into DTS or Dolby
digital?
--p
Anyone know anything about kLADSPA? It's linked off SuSE's music apps page
and there's a screenshot
(http://www.suse.de/de/products/suse_linux/i386/images/kladspa.png), but a
search on google doesn't turn up anything else.
Looks really useful.
it sure does, despite having a name that starts with
As we all know, the unwritten rule of the linux-audio-dev world is that if
your feature request is not implemented in 60 minutes, you will get one
extra feature for free!
LOL! pin this on my wall!
--p
place. I downloaded laaga-0.2.3.tar.gz and built it. First, the
configure script didn't check for fltk; I found out during the build
that I needed it.
good point. the fltk client is just an optional test client. building
it should be optional.
Thoughts? Can we get out another release, pretty
i wrote:
no pre-existing problems with jackd have NOT been addressed. feel
an inadvertent double negative. the point was: all pre-existing
code-based and design-based problems are still there.
--p
* 'engine' should be called jack_engine, or jack-engine, or just jack even.
or jack-server. or jackd.
* flclient should either be given a new name or not installed. also, it should
be optional to build it, and perhaps examples merit their own subdir in the
source tree.
* all references to
Don't laugh. This is from a review of the Creamware Luna II card, from
Remix magazine:
--
With some direction from CreamWare's technical support staff, I found
that Fruityloops works best with DirectSound drivers, whereas Acid
Perhaps if you actually try to send 64 channels, and if they're actually
using TCP/IP networking. But it was my understanding that they're not,
and that they're just sending a raw PCM signal over an ethernet cable.
(At least, I hope that's what they're doing.) Assuming that is the scenario,
I
which shares the central ice1712 chip with the delta series. there were
spurious strange discontinuities in the recorded signal then, but i
haven't found out what caused them - might be they were unnoticed xruns.
i think these might have been fixed by some recent changes to the
audioengine code
2 proposals for changes to LADSPA, with the intent of moving LADSPA
v1.0 to LADSPA v2.0.
over on ardour-dev, we've been discussing a new protocol (the LADSPA
Control Protocol; implemented) to allow standalone GUIs to control
LADSPA plugins via a host. the GUI has no direct knowledge of the
Gack!
What I described was not the best idea. Rather than:
and then in the LADSPA plugin descriptor add:
char *gui;
LADSPA_GUI_TYPE gui_type;
we should do the same as for ladspa_version():
char *ladspa_gui ();
LADSPA_GUI_TYPE ladspa_gui_type();
all the same
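The `ladspa_gui()` / `ladspa_gui_type()` idea above could look roughly like the sketch below, written as a plugin library might implement it. Only the two function names come from the post; the `LADSPA_GUI_TYPE` values and the path returned are invented for illustration.

```c
/* Hypothetical sketch of the proposed per-library GUI entry points,
 * resolved by a host with dlsym() just as it resolves
 * ladspa_descriptor() today. The enum values and path are invented. */
#include <string.h>

typedef enum {
        LADSPA_GUI_NONE = 0,
        LADSPA_GUI_EXTERNAL,    /* standalone executable, e.g. driven via LCP */
        LADSPA_GUI_XML          /* declarative description file */
} LADSPA_GUI_TYPE;

/* what a plugin library would export */
const char *ladspa_gui (void)
{
        return "/usr/lib/ladspa/gui/myplugin-gui";  /* hypothetical path */
}

LADSPA_GUI_TYPE ladspa_gui_type (void)
{
        return LADSPA_GUI_EXTERNAL;
}

/* host-side helper: does this library advertise a usable GUI? */
int gui_available (void)
{
        return ladspa_gui_type () != LADSPA_GUI_NONE
                && strlen (ladspa_gui ()) > 0;
}
```

A host would look these symbols up with `dlsym()` and simply skip GUI handling when they are absent, which keeps old plugins working unchanged.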
GUIS AS PART OF THE PLUGIN
As I've said before, I'd prefer an approach where an appropriate delivery
mechanism can be chosen depending on the toolkit. Flat files may work for
XML, or perhaps calls into the library containing the plugin itself. For
instance, a host written in GTK might want to
to do a specialized GUI for a single LADSPA plugin, he can just have
that communicate through the CORBA interface (which you may want to
hide in a little wrapper library to avoid too much confusion amongst
the plugin developers).
precisely. that's what the lcp wrapper is for. the plugin developer
Out of interest, how does VST deal with getting large lumps of data to
the DSP part? A lot of VST plugins do that, but I don't remember seeing
anything in the API that supports it.
they have the notion of a program. the setProgram() method is used
to do this, and can involve arbitrary chunks of
Umm, it will certainly work, though syntactically I'd prefer
a URL-style notation [maybe do communication through the http
protocol and use a browser for showing the GUI!? I.e. just
have each LADSPA plugin run its own webserver.].
[ ... laugh? ... cry? ... ??? ... ]
since the type of GUIs we're
using tcpip will simplify implementation. Also we could make use
of existing technology and simply present a http server to the GUI
(which in turn can be any browser) either from the LADSPA host
or directly from each plugin.
the point of LCP is to hide the nature of the communication between
the
URL syntax seems cleaner:
unix://tmp/tcp/pid
and
inet://host:port/
true, but it's redundant. inet hostnames cannot contain '/' so the
inet: prefix is completely unnecessary. look at the places where this
would actually be used:
1) host fork/execs GUI
you won't even see the
http://www.op.net/~pbd/lcp-0.0.1.tar.gz
The API:
==
1) client side:
---
lcp_client_t *lcp_open (const char *name);
int lcp_close (lcp_client_t *client);
int lcp_write (lcp_client_t
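The client-side calls quoted above suggest a usage pattern like the one below. The real implementation lives in the lcp tarball linked in the thread; everything here other than the three prototypes is a minimal stand-in, mocked just enough to show the intended call sequence.

```c
/* Mock of the quoted LCP client API (lcp_open/lcp_write/lcp_close),
 * with stand-in bodies -- the real versions talk to the host over a
 * unix-domain or inet socket. */
#include <stdlib.h>

typedef float LADSPA_Data;
typedef struct { int connected; } lcp_client_t;

lcp_client_t *lcp_open (const char *name)
{
        (void) name;  /* real version: connect to the named host socket */
        lcp_client_t *c = malloc (sizeof (*c));
        if (c) c->connected = 1;
        return c;
}

int lcp_write (lcp_client_t *client, unsigned long plugin,
               unsigned long port, LADSPA_Data value)
{
        (void) plugin; (void) port; (void) value;
        /* real version: send a port-value message to the host */
        return (client && client->connected) ? 0 : -1;
}

int lcp_close (lcp_client_t *client)
{
        free (client);
        return 0;
}

/* what a standalone GUI would do when the user moves a control */
int set_port_value (const char *host, unsigned long plugin,
                    unsigned long port, LADSPA_Data value)
{
        lcp_client_t *c = lcp_open (host);
        if (!c)
                return -1;
        int err = lcp_write (c, plugin, port, value);
        lcp_close (c);
        return err;
}
```

The point of the wrapper is visible here: the GUI never learns whether the transport underneath is a unix-domain socket, TCP, or something else entirely.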
Direct from the man himself (karl steinberg):
--
1) does VST envision that the actions of setParameter and/or
setParameterAutomated are carried out synchronously or asynchronously
with respect to
more seriously, do you think there should be support for the case where
the GUI is not running on the same system as the LADSPA host?
I think there should at least be the potential for future support even if
it doesn't exist right now.
LADSPA 3.0?
no, support for this is already present in LCP
host1 ./s my-ip-address:port # the host
host2 ./c my-ip-address:port # the GUI
Fantastic. How do you get unix domain to work?
[swh@inanna lcp-0.0.1]$ ./s /tmp/lcp0
cannot bind server to socket (No such file or directory)
% ./s ladspa-host-name
Server is at
int lcp_write (lcp_client_t *client,
unsigned long plugin,
unsigned long port,
LADSPA_Data value);
Is this unsigned long plugin the ladspa_id? If so, how does the server know
nope.
which instance its meant for
LCP.Version: 0.0.1
seems useful, though i hope it never is :)
LCP.Toolkit: GTK
we don't care about the toolkit in any way. we really don't.
and besides, GTK wouldn't be enough even if we did. we'd
need version info for the toolkit. ick.
DC.Coverage: 1220
Where
ok, there's a new version of LCP at
http://www.op.net/~pbd/lcp-0.1.0.tar.gz
it includes:
* steve's observations+suggestions about the API
* implementation of both lcp_read() and select/poll-based operation
* more documentation in lcp.h
lcp_read() is used when the client is NOT
i just want to be able to say: we need a GUI for plugin ID 1220,
can we find a suitably-named file in LADSPA_GUI_PATH?
Are you concerned about complexity or cpu/disk use?
complexity.
finding and using LADSPA plugins is very easy right now. the kinds of
schemes you're suggesting
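The simple lookup Paul sketches — a suitably-named file per plugin ID somewhere in a LADSPA_GUI_PATH — could be as small as the function below. The `<id>.gui` naming convention and the search-path argument are invented for illustration; nothing here is part of any released LADSPA spec.

```c
/* Sketch: scan a colon-separated search path for an executable named
 * after the plugin ID (e.g. "1220.gui"). Naming scheme is hypothetical. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* fills 'out' with the first matching GUI path; returns 0 on success */
int find_gui (const char *search_path, unsigned long plugin_id,
              char *out, size_t outlen)
{
        if (!search_path)
                return -1;

        char dirs[1024];
        strncpy (dirs, search_path, sizeof dirs - 1);
        dirs[sizeof dirs - 1] = '\0';

        for (char *dir = strtok (dirs, ":"); dir; dir = strtok (NULL, ":")) {
                snprintf (out, outlen, "%s/%lu.gui", dir, plugin_id);
                if (access (out, X_OK) == 0)
                        return 0;   /* found an executable GUI candidate */
        }
        return -1;
}
```

This is about as much machinery as the "we need a GUI for plugin ID 1220" use case requires, which is presumably the point about complexity.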
in doing lots of testing on JACK, i've noticed that although the
trident driver now works (there were some patches from jaroslav and
myself), in general i still get xruns with the lowest possible latency
setting for that card (1.3msec per interrupt, 2.6msec buffer). with
the same settings on my
If I remember correctly http goes both ways. Also it's not really a lot
of baggage - you can, for instance, structure your plugin stuff in the
filesystem tree, so a GET of parameters/factor[.html] would just get you
a 1.233, for instance, and a PUT (or was it PUSH?) of the same sort sets the value.
Note again, using an
http://aes.harmony-central.com/111AES/Content/Wave_Digital/PR/Microwave.html
no linux in sight, obviously :(
--p
complexity.
OK, fair call. I think we at least need a directory for guis that can provide
interfaces to all plugins though. That doesn't add much, does it?
you mean a special directory in which every GUI was 100% generic? who
is going to write such a thing except for host authors, and why? but
In message [EMAIL PROTECTED] you write:
Could someone explain in simple english how these apps are intended to
work together please?
Do I just run one version of jack and then lcp, ardour, alsaplayer,
spiral loops, timidity, glame, pd and anything I do will be at low
latency and ardour can record
In message [EMAIL PROTECTED] you write:
Because if someone comes up with a cool XML-GUI reader, and I'm too lazy
to provide an ad hoc GUI, it would be interesting to write a simple XML
GUI description and specify that the GUI program should be
/usr/bin/xmlread with option plugin.xml
how to identify a gui is another question. we don't dlopen() them -
just fork/exec, so there needs to be a filesystem-based method of
identifying them as candidates.
--metadata? Or do you want something that doesn't involve execing them?
i'd prefer it. execing and reading and parsing stdout
Are there any JACK-aware apps at this point, even experimentally?
1) alsaplayer has JACK support
2) the jack code comes with jackrec, which records N-bit, N-channel
WAV files from any set of JACK ports
3) i have ported rythmnlab to use JACK.
I am still waiting for the right mood to strike
In the beginning there was esd, and unfortunately, it still is... It is
a latency hogging daemon which requires application compatibility to
work and is yet to be replaced by something better...
Then, we had (and still have) somewhat bloated Arts which was
originally designed to do something
I'm thinking of the benefits that low-latency audio applications would
have from preemption of the running task in favour of the waiting task
just at the end of interrupt handler execution (instead of at the end of
the time slice). I don't believe that such a mechanism is already in
place in the linux kernel but I may
I understand what you are saying, but that still seems to me a roundabout
way of solving the core of the problem. My thought is, if we already have
a capability of low-latency interaction between apps via JACK, we should
rework it so that JACK becomes a powerful kernel daemon which would be
Audio apps, such as XMMS are nothing more than entertainment stuff, and
for all that I care, they mean very little to me outside that realm. So
when I mention audio apps, I mean serious
audio-capture/processing/music-making/reproduction stuff.
can you name one usable program in this category for
[ ah, here's that message i missed ... ]
So what is preventing us from taking CoreAudio-like path and reworking
the way audio is handled OS-wide?
CoreAudio has explicit kernel-side support in MacOSX. I find it
unlikely that this would ever be accepted by Linus. We have plenty of
mechanisms to
kind of crazy combination of that sort). If you also check the pd and
jmax lists, you'll get to hear every so often a great success story
using linux in live performance. That's why I believe that you are
seriously undermining the linux art scene.
i didn't want to undermine the arts scene.
i
things, but all the same, to say that it's impossible to create
good/interesting work with the currently available crop of Linux audio
software somewhat misses the mark, IMHO.
i agree. that's not what i said. i think you can definitely produce
good/interesting work with the currently available
P.S. Any thoughts as to what distribution will use the linux machines
you'll be selling? Also, any plans to soup 'em up with a powerful video
editing capabilities?
if i use any distribution at all, it will probably be demudi, which in
turn is based on debian.
but i really don't care about
Ok. Could we gather what we already have and what we need to get? In
which directions should we push which people? For example, we're getting
the glue (jackit) but what do we need to glue together? Is there
something nobody has started to make, even if it's clearly needed (well,
clearly for
[ Re: ALSA ]
The hard core developers on this list are already there so
the best bang for bucks in my view is getting the next round
of newbies up and coding asap.
So, you think that Apple wants to get the next round of Mac OS X audio
programming newbies writing stuff for the CoreAudio HAL
=
Why don't we strive to use or create inter-program protocols!
==
what do you think JACK is? what do you think the point of a protocol
that doesn't run in sample sync is?
* no real-time CD recorder (i have a somewhat working prototype)
this is important to people? why?
to me, anyway. so that i can burn CD's direct from audiostreams coming
from external devices without having to first record them to disk. if
a client comes to the studio and they have a DAT they
ALSA was not ready at that time, still isn't and maybe never will.
Could you be more specific about why alsa still isn't ready?
API is still constantly changing.
The APIs haven't changed substantially in months. That's after a period
of intense development activity to refine a better,
Where can I download hardware specs for the RME Hammerfall series so that I
can write some code of my own for it? It's not available at the ALSA site. Why?
the specs were provided to me with a request that i keep them private.
the source code that i wrote along with input from Winfried Rietsch
and help
The only good thing, to me, about other unices is that
apple had a choice and chose BSD.
Compatibility/portability (read: standards) is the main reason why Linux
became what it is today. _Do_not_ break this or Linux will lose...
Linux's POSIX compatibility is weak in all kinds of areas. There
btw, what is the reason for alsa constantly changing its API? I mean
it's been going on for years.
because we didn't get it right the first time around, surprise,
surprise. nor on the second. unlike OSS, ALSA has been able to grow and
improve as we've gained experience with new audio interfaces
i'm sorry, but i don't understand. the rate at which data comes in
from the s/pdif interface is fixed. if the cd recording device is not
able to adjust its 'pitch', i.e. record at a rate slightly higher/lower
than its nominal recording rate, it will eventually xrun, if that is
the correct term
because it can't fly. It's stable, well documented and works well within
the limited set of audio applications it was designed for (attributes not
notably part of the ALSA feature set at present).
ALSA 0.5: stable, somewhat documented, works better than OSS
ALSA 0.9: now more or less stable,
the rate at which data arrives over s/pdif *is* the exact sample rate
that the CD is burning at (well, unless you're insane). there is no
need for throttling, pitch control or anything like that. you just
buffer the data on startup to protect against latency glitches,
convert the samples to
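The startup buffering described above can be sketched as a prefilled FIFO: the s/pdif reader fills it, and the burner side is held off until a threshold is reached, so later scheduling glitches on either side eat into the cushion instead of underrunning. The sizes below are invented for illustration; a real implementation would want a lock-free ring buffer between the two threads.

```c
/* Sketch of a prefill FIFO between an s/pdif capture thread and a CD
 * burner thread. Single-threaded illustration; sizes are arbitrary. */
#include <stddef.h>

#define RB_FRAMES 65536   /* cushion, in stereo s16 frames */
#define PREFILL   32768   /* hold off the reader until half full */

typedef struct {
        short frames[RB_FRAMES][2];
        size_t head, tail, fill;
        int started;      /* set once the prefill threshold is reached */
} prefill_rb_t;

prefill_rb_t spdif_rb;    /* zero-initialized in BSS */

int rb_write (prefill_rb_t *rb, const short frame[2])
{
        if (rb->fill == RB_FRAMES)
                return -1;                /* overrun: burner side too slow */
        rb->frames[rb->head][0] = frame[0];
        rb->frames[rb->head][1] = frame[1];
        rb->head = (rb->head + 1) % RB_FRAMES;
        rb->fill++;
        if (rb->fill >= PREFILL)
                rb->started = 1;          /* cushion built; burning may begin */
        return 0;
}

int rb_read (prefill_rb_t *rb, short frame[2])
{
        if (!rb->started || rb->fill == 0)
                return -1;                /* still prefilling, or underrun */
        frame[0] = rb->frames[rb->tail][0];
        frame[1] = rb->frames[rb->tail][1];
        rb->tail = (rb->tail + 1) % RB_FRAMES;
        rb->fill--;
        return 0;
}
```

No rate matching appears anywhere, which is the point of the post: the incoming s/pdif rate *is* the burn rate, so a one-time cushion is all that's needed.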
The only latency-sensitive program I've made is rtEq, where the biggest
latency source is the FFT size (usually 75% overlap and an 8192-point FFT).
i don't believe it's common practice to use FFT for real time EQ. it's
perfectly possible to use delay lines to accomplish high quality EQ
with much lower latency than
Just curious, but could somebody explain *how* delay lines can be used
to implement EQ? I have a strong maths background, but no DSP experience if
that helps.
i'm not a dsp programmer, but it's really quite simple. if you
feed back with a delay of just 1 sample, and attenuate both the
current and
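The one-sample-feedback idea being described is the first-order recursive filter, the simplest building block of a delay-line EQ. A minimal sketch, with illustrative (untuned) coefficients:

```c
/* y[n] = a * x[n] + b * y[n-1]
 * Feeding the output back through a one-sample delay, attenuated by b,
 * gives a one-pole filter (a low-pass for 0 < b < 1). Chaining and
 * mixing sections like this builds up an EQ with only a few samples of
 * inherent latency -- no FFT block size involved. */
#include <stddef.h>

void one_pole (const float *in, float *out, size_t n, float a, float b)
{
        float prev = 0.0f;      /* the one-sample delay line */
        for (size_t i = 0; i < n; i++) {
                prev = a * in[i] + b * prev;
                out[i] = prev;
        }
}
```

Running an impulse through it shows the exponentially decaying response characteristic of recursive filters, in contrast to the whole-block delay an 8192-point FFT imposes.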
let's swap back to interesting stuff. This Still... thread sucks.
So I just added some documentation to glame about How to make funny
realtime networks. We recently had lots of fun at a party doing
some Mickey Mouse Effect on the voice or creating a network that
loops a sample input. So you can do
It is anticipated that most of the time, the GUI will be started
by the LADSPA host in response to some user action.
I'd like to be able to use a text mode interface, controlling
two or more ports (maybe on different plugins) from one program
using the keyboard.
It's not directly on target,
a device is being shipped to me. once it arrives, work on the driver
will commence. i announced this on alsa-devel, where other subsequent
announcements will follow. the pcmcia version is slated for at least
the end of january, since there is at least one more h/w rev needed
before RME
My solution to this problem is to use SysV IPC message queues. Each read/
write to/from a message queue is atomic. That means as long as each access
of the message queue transfers a complete midi message (3 bytes for note
on/off or N bytes for sysex) then the message queue keeps each complete
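The scheme being described — one `msgsnd()` per complete MIDI message, relying on message-queue atomicity so a reader never sees a partial message — can be sketched like this. The queue key and message type are arbitrary choices for the example.

```c
/* One SysV message per complete MIDI message: a 3-byte note-on here.
 * msgrcv() delivers the whole message or nothing, which is the
 * atomicity the scheme relies on. */
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/types.h>

struct midi_msg {
        long mtype;              /* required by SysV; must be > 0 */
        unsigned char bytes[3];  /* e.g. note-on: status, note, velocity */
};

/* send one note-on through a private queue and read it back;
 * returns 0 if the message arrived intact */
int midi_roundtrip (void)
{
        int q = msgget (IPC_PRIVATE, IPC_CREAT | 0600);
        if (q < 0)
                return -1;

        struct midi_msg out = { 1, { 0x90, 60, 100 } };  /* note-on, middle C */
        if (msgsnd (q, &out, sizeof out.bytes, 0) != 0)
                return -1;

        struct midi_msg in;
        ssize_t n = msgrcv (q, &in, sizeof in.bytes, 0, 0);

        msgctl (q, IPC_RMID, NULL);  /* remove the queue */

        if (n != (ssize_t) sizeof in.bytes)
                return -1;
        return memcmp (in.bytes, out.bytes, 3);
}
```

Sysex messages would use the same pattern with a variable-length `bytes` payload, since the queue preserves message boundaries regardless of size.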
umm. I am not sure where you are getting your information from but
/dev/midi works just great with USB MIDI devices under linux. I use an
sblive and a Roland UA-100 USB Box (2 midi in/outs + internal midi
synth + raw audio)
but did you get this functionality from the standard kernel drivers,
or did you bother to ask on alsa-users?
It's an odd project where you have to ask for installation information on a
mailing list while the web page has documentation which is 3 years old.
yes, i agree that this is odd, and also wrong. however, i would point
out that very, very few people have been
My idea was that the Midi Mapper was actually a userspace daemon. User-space
processes connect to the daemon with a library.
Takashi has made some promising rumbles about moving most of the ALSA
sequencer into user space at some point.
Don't underestimate the difficulties of writing real-time
I have found this site
www.bluelifeaudio.com/~pcconfig
take a look at the poll: which OS do you use for audio?
Maybe it would be cool to post some articles on tweaking linux...
way to go gang! it's a bit of a fake, given that we've probably spammed
it in some fairly honest sense, but linux is at
has anyone got this card to work with ardour?
I've seen a few related postings, but it's still
not clear to me if there are settings that will work
or if the card is just not supported ...
yes I know where to change the fragment count, but
the values I tried didn't change much of the situation.
hello, list,
i'm new here, blablabla
i have found in one of the archives some encouraging info about
linux pulsar, so just to make sure i've mailed info_AT_creamware_DOT_com
the answer was clean and simple:
pulsar and linux will not happen
where was the previous positive info coming from ?
The creamware software system is very complex and they don't
have that many developers. So they focus on developing new
Actually, it's not very complex. the SHARC architecture makes many
things that would be quite hard with Motorola or Intel chips rather
easy. from what guys at creamware told me,
Hi,
using a
mandrake 8.1 kernel-2.4.8
audio card midiman delta1010
and running the ossmixer utility with the oss drivers,
I watched the I/O latency on a scope.
It was about 1.4 msecs.
is there something about ossmixer that bypasses the latency
deal by putting the codecs
compiler. I don't think anyone is willing to write a complete audio
firmware in DSP assembly. So we need a free C compiler, and I only
know of one: for the TMS320C3X series.
the SHARC and Motorola DSP series are supported by available versions
of gcc.
anyway, i personally think you're mad.
Perhaps an important thing about designing such a soundcard is to show the
world that open source is more than just a few hackers doing some funny
coding. I mean open source is real and many commercial companies just
don't see that. Designing hardware with people all over the world as
I think there's no problem in selling open-source h/w...
if its specs are open, and it can be manufactured without special
facilities, why wouldn't anyone simply contract with
board-maker-name and sell them for the lowest price? the lowest
price might be good for consumers, but it's bad for
Well being one of those people I guess I should speak up and debunk
the nasty rumor that I have vanished off the face of the earth
before it spreads. I just happen to be in lurker mode now.
sorry for seeding misinformation.
I have also been contacted by Nemesis and they have indicated
Paul Davis wrote:
we didn't have to beg for specs on the trident cards, most
crystal-based cards, the RME cards, the ice1712 cards and on and on.
Some hw manufacturers could call ice1712 open source, as the specs are
open, it's reasonably priced (cheap) and performs very well.
It created
out of curiosity... could you tell me what cards use ice1712, and which
is the best (for music -- not games... no offense gamers).
best to see the soundcard matrix at alsa-project.org, which will, as
they say, reveal all.
many people seem to like the Midiman Delta series.
--p
* Paul Davis ([EMAIL PROTECTED]) wrote:
I would like to code up a proto of the interface though. I was
planning on trying to do it with python and something like wxPython.
Mostly because Python is my new favorite language and I need a good
graphics project to work on.
any chance you'd
That's actually a good point... I guess the answer is whether 2 GB is
enough RAM to have enough channels to overload the CPU and IO
bandwidth of the host. Things like GigaSampler allow you to layer
a bunch of instruments into one channel.
It would still limit you when using things like GigaPiano
This raises a question. Now that desktop computers can contain 2 GB of
ram, has the window of opportunity closed for this technology? It might
be easiest to flow around this rock.
this is a very important point.
however, people with only 128 or 256 or 512MB of RAM won't find much
comfort from
as i said, unix-like operating systems have done disk readahead for almost
as long as unix-like operating systems have existed (and multics
before them, i believe). we cannot allow nemesys/conexant to steal
this technology by pretending it was invented explicitly for audio. if
the USPTO
Or pyqt? (I know Paul loves KDE so much 8-)
heh. i love it as much as i love GNOME :)
--p
What's going to happen with usb audio/midi support ?
(Paul: I know what you think about usb in a professional environment but ...)
:)
Since alsa should become the kernel standard driver for audio/midi/seq
devices and all the work the usb people have done to support the audio
class is based on
What if a caching client for JACK was written?
Basically you would tell it what file(s) you wanted to be cached and
how much cacheing you wanted. Then the EVO JACK client would do all
the layering and mixing, etc. and route it back to the JACK system
for whatever other processing you wanted.
At least the driver handling the standard USB Audio Device Class is
located in the USB kernel directory, ie. linux/drivers/usb. All code is in
the big (~4000loc) audio.c file. It implements the OSS ioctls, plus
OSS-style mmap() (and of course read()/write()).
But it does register itself to the
the only problem i see is that the port can only serve one view of
the file at a time. if you wanted to play two copies of the same
sound, but offset from one another, you'd need different copies of the
same client. JACK doesn't scale well for this kind of thing, so
That's probably
But it does register itself to the OSS subsystem (to
drivers/sound/sound_core.c) like all other sound drivers, so it _is_
part of OSS.
no. sound_core is NOT part of OSS. ALSA attaches to it as well. Alan
Cox wrote that so that OSS and ALSA could (theoretically) co-exist.
Now I'm
juhana writes:
In any case, preload in multitrack editor is totally different
from a disk sampler.
why do you think that?
--p
I think I could write a reasonably working example in 2 days if I had
a day to think about it first. Ardour has sample code, as does
ecasound, alsaplayer and a few other programs that do threaded-disk
i/o. The tricky part is to use that stuff in a thread that services
every instance of the
On Thu, Jan 17, 2002 at 03:06:32 -0500, Paul Davis wrote:
there is, however, a really tricky problem with doing this in LADSPA:
there is no support for string variables, so there is no way to set
the filename to be used.
You could set it in the environment.
not very easily. you'd have
I came across this article (hopefully this isn't old news for most of you)
about the new Steinberg VST System Link and found myself quite excited about
the possibilities (assuming it will work well).
i had mailed something to LAD about this last week, but it never
showed up.
I know this is
Hi,
Shaketracker does not use the /dev/midi devices directly, so it seems,
but /dev/sequencer instead. After starting shaketracker, I have
my recollection is that the ALSA emulation of the OSS sequencer
doesn't work 100%, and i'd be surprised if routing it via virmidi
worked at all! still, i've
i used an even worse hack in Quasimodo to allow passing strings via
floats. you don't want to know.
I guess you could treat 4 characters as a single 32-bit int and cast
it to a float? and do the reverse on the other side? but then how to
terminate the string?
never mind, you were right.
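The confirmed guess — packing 4 characters into the bits of a float and unpacking them on the other side — looks roughly like this. The union-punning approach and the null-padding convention are illustrative; the post doesn't say exactly how Quasimodo did it.

```c
/* Sketch of the strings-via-floats hack: smuggle 4 characters of a
 * string through a float-only port, '\0'-padded. Fragile in general
 * (endianness; some bit patterns are NaNs and may not survive every
 * float path intact) -- which is presumably why "you don't want to know". */
#include <string.h>

union pun { float f; char c[4]; };

float chars_to_float (const char *s)
{
        union pun p;
        memset (p.c, 0, 4);          /* '\0'-pad short chunks */
        strncpy (p.c, s, 4);
        return p.f;
}

void float_to_chars (float f, char out[5])
{
        union pun p;
        p.f = f;
        memcpy (out, p.c, 4);
        out[4] = '\0';               /* chunks under 4 chars already end in '\0' */
}
```

Longer strings would need to be sent as a sequence of such floats, with a chunk shorter than 4 characters (or an all-zero float) marking the end — one answer to the "how to terminate the string?" question.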
2. MOTU Firewire thing: Got fed up and eventually foolish enough to go for
this. Did some research and it seemed that although this card is firewire
compatible, it only really works with Macs. Possibly with some firewire
cards on some PCs/laptops, but not all and getting hold of this
i asked steinberg about the openness of VST Link:
--
is steinberg planning on providing specs for VST Link on a
royalty+license free basis like VST itself, or will this be
proprietary technology?
at first it will not be open.
suck. I can't say enough good things about the Delta series, but of
course if you want a pro-audio solution for a laptop under Linux you're
still SOL at this point.
for those who haven't seen my announcement on alsa-devel: work will
begin on the ALSA driver for the Hammerfall DSP around the end
From the NAMM announcement of Stanton Magnetics Final Scratch:
Initially, Final Scratch will only be available for
Linux and BeOS operating systems running on an
Intel compatible CPU. A Mac version is planned.
Did you *ever* imagine you'd see such an announcement in your lifetime?
Write
And, obviously, 100,000 is not what we're looking at, but does anyone
have any idea what the numbers could be? Is it 3 figures? Less than 2
*shudder* ? Any way to find out?
someone at a rather well-known digital h/w company has told me that their
market research pegged Linux at 2% of their
But Paul, now *you* (and Jaroslav and Abramo and...) should take the next
step and go to RME and request funding (you get specs, don't you?) for
development of drivers for their laptop audio solutions ;)
it's not much, but a Hammerfall DSP is worth about US$700. given that
they also sent me 1