Every SF2-compliant synthesizer has a low-pass filter in place. To use
it, open an SF2 editor, select an instrument, and specify the filter
cutoff frequency and filter Q.
To automate this via MIDI events, you should learn about SoundFont
modulators and how they influence the values that you have
Fluidsynth is intended to be run as a user service. It should be as simple as:
systemctl start --user fluidsynth
Both drivers, alsa and pulseaudio, work fine on my side.
It's not possible to say what's going wrong as you're neither providing
/lib/systemd/system/fluidsynth.service
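For illustration, a minimal user unit might look like the following. The binary path, command-line options, and soundfont location are assumptions for the sketch, not the unit shipped by any distribution:

```ini
# ~/.config/systemd/user/fluidsynth.service
[Unit]
Description=FluidSynth software synthesizer

[Service]
ExecStart=/usr/bin/fluidsynth --server --audio-driver=pulseaudio /usr/share/sounds/sf2/default.sf2

[Install]
WantedBy=default.target
```

After placing the file, `systemctl --user daemon-reload` followed by `systemctl --user start fluidsynth` should bring it up.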
1. No. Every synth must be driven by exactly one synth thread. Whether this
synth thread is an audio driver or your app calling one of the rendering
functions doesn't matter.
2. As per the MIDI standard, the reverb send level is initialized to zero. So by default, there
is no reverb until
No, it's not possible for multiple synths to share a single sequencer.
Each synth needs its own sequencer instance due to the internal
sample timer making the sequencer advance (*).
It's not exactly clear to me what you mean by "shared MIDI-Channels".
Yet I think it should be possible to do it
This is already possible:
https://www.fluidsynth.org/api/fluidsynth_sfload_mem_8c-example.html
Tom
___
fluid-dev mailing list
fluid-dev@nongnu.org
https://lists.nongnu.org/mailman/listinfo/fluid-dev
I think I've seen this question already, 4 years ago. I didn't know back then
why it was done that way, nor do I know now. I assumed that this function
wasn't made public because fluidsynth is not supposed to be a MIDI parser.
Particularly, there is an internal MIDI parser, but this one has
Sorry Lorenzo, you seem to have misunderstood me completely.
> I feel pitch bend sensitivity should not be defined by the synth
I wasn't suggesting that the synth defines it. However, I was
suggesting to allow the soundfont designer to set a reasonable
"initial" bend range for a particular
One thing that keeps annoying me about Soundfonts is that there is no
way to specify the initial pitch of an instrument in the instrument
itself. The spec defines generators for an "initial attenuation",
"initial filter Q and FC", and many more. But there is no such thing
when it comes to the
Sorry, I cannot reproduce this. I only tried it on Linux though. Maybe it would
help to set audio.jack.multi to 1 and investigate the fx buffers individually.
Tom
Would it make sense for fluidsynth to implement different reverb engines?
I recently came across a Lexicon 224 reverberator, which I quite like. To my ears
it sounds better than the FDN reverberator which fluidsynth currently implements.
Now, I don't want to go through the discussion of changing
Fluidsynth's player API offers several hooks that could affect the "total playback
duration" while playback is ongoing. Also, there is no such thing as "the MIDI file":
there are potentially many of them.
Anybody wanting to hook up Fluidsynth for some serious MIDI playback is
encouraged to use a MIDI
Parameters that do not require an argument can be concatenated without
a space. Extra arguments to those parameters, however, must be
delimited by spaces. When you write
"-R1nivF/home/parallels/my_music.wav" how should getopt know where the
argument for the -R parameter ends? Just like you cannot
You're building and executing the unit tests as root. That doesn't
make much sense. You should only use sudo when really needed, cf. the
docs:
https://github.com/FluidSynth/fluidsynth/wiki/BuildingWithCMake#common-tips-for-compiling-from-source
> And when the music ends, it can not exit
Ok, I got it compiled for arm64 and x86_64. You can grab the binaries from here:
https://dev.azure.com/tommbrt/tommbrt/_build/results?buildId=7586=artifacts=false=publishedArtifacts
For the moment, I only got it working on MacOS 11 and 12, but not on
MacOS 10.15. I'm not even sure if there's any
> would also like to ask why there's no mac builds in the release page on
> github just for windows and android.
So far, nobody has asked for precompiled binaries for Mac. I guess
that's because there are a couple of package managers for Mac out
there. (People have asked for iOS, but this seems
der a version of the release binaries that includes the needed files
> like it was in the past?
>
> Thanks, Leon
>
> On 12/18/2021 12:08 AM, Tom M. wrote:
>
> I assume you're at least using Android API Level 24.
>
> In this case, yes, it could be due to missing .so libs. The
> precom
From what I understand, you're trying to perform preset changes via
CC72. I don't understand why you're coming up with CC72 though.
Every GM MIDI compliant device performs preset selections via "program
change" events. Prog change events are completely different from CC
events. Fluidsynth cannot
I assume you're at least using Android API Level 24.
In this case, yes, it could be due to missing .so libs. The
precompiled binaries from 2.2.0 basically included many libraries from
the NDK. I considered this bad practice, as I assumed that the NDK
will be available anyway in the target
Hi Pascal,
1) is not really a problem and can be ignored for now.
2) make test only runs the tests (but doesn't compile them). This target is
provided by cmake implicitly. You must use make check. (P.S.: You always
had to:
Dave, from the libraries you've listed I see only one confusion, which is
that libfluidsynth still reports as 2.1.7. You probably expect it to report
as 2.2.0. Well, I haven't done this for the beta. Maybe I should have done
it for the Release candidate. But I'll definitely do it for the final
2.1.8 is the version of the project, that any maintainer is free to choose
as he pleases.
2.3.8 is the version of the library interface. It tells you about API/ABI
stability because it follows the strict semantic versioning rules originally
implemented by libtool. See the comment here:
Ruben is probably referring to fluidmax, a binding for Max/MSP that we used
to ship with the source code up until 1.1.11. I removed that binding from
the codebase in June 2018 because
* it hasn't received any code changes since 2009,
* it used functions from private fluidsynth headers, and
* I
Seems like portaudio is also using CoreAudio under the hood, getting the
same error -66748. Looks like the problem lies somewhere between MacOS,
your hardware, your user account, or any system-specific setting.
Tom
> fluidsynth: error: Error setting the audio callback. Status=-66748
This error is printed when the call to AudioUnitSetProperty failed.
According to www.osstatus.com the returned error is
kAudioComponentErr_NotPermitted. I neither know what this error means, nor
how to fix it.
On the other
A PR is now ready for this: https://github.com/FluidSynth/fluidsynth/pull/746
Feedback is welcome!
Tom
I’ve rebuilt fluidsynth from the latest source, and now it works as one would
expect.
Great, glad to hear that!
I've just finished testing the FLUID_SEQ_SCALE event and merged it to master
now.
Tom
nFsResult = fluid_synth_system_reset( synth );
You should use program_reset. system_reset will reset any previous changes made
to the synth.
[But interestingly setting channel 64 to percussion *does* work.]
Because future prog changes will implicitly select a preset from the drum
Please make sure to include the mailing list when replying.
On 10.12.20 15:10, d...@mozart.co.uk wrote:
Hi Tom,
fluid_synth_set_channel_type() changes the type only. It does not assign new
presets. To do this, either add a call to fluid_synth_program_reset() or send
some
fluid_synth_set_channel_type() changes the type only. It does not assign
new presets. To do this, either add a call to fluid_synth_program_reset()
or send some fluid_synth_program_change() messages afterwards.
And make sure to implement proper error handling by checking the return
values of those
At the moment, 189 people have subscribed to the mailing list. For that
number, it is really pretty quiet here.
The discussion feature looks good. I like the free categorization feature.
Let's give it a try, I've enabled it.
Tom
Pls. make sure to include the mailing list when replying.
On 30.11.20 17:54, Jeroen Neve wrote:
Thanks for the reply, Tom.
Let's say I have a music notation program, and I want to use FluidSynth to
render an audio-file of the music as it is notated.
There are more than 16 voices in parallel,
Sorry, I haven't understood the use-case you described. I could answer
your question with "no", but this wouldn't help you, so let's try to
shed some light:
> I want to feed the sequencer Mozart’s Clarinet Concerto to render to file.
Are you coming from a live concerto performance or are you
For the record, let me just point out that fluid_synth_process()
currently supports the following two use-cases:
* Separate Dry and Wet Audio
* MIDI channels 5 and 6 on a separate output, including effects
Use-case "Duplicate output for MIDI channel 1" requires a simple
post-processing step by the
> how could it work on Samsung A50 phone with Api-28 (Android 9) ?
Apparently, this particular function in6addr_any is already available
in API 28. Which is why it works in this particular case.
> And what should I set as minimum target version for the app ?
The minimum target version should be
The Asus TF300 has an NVIDIA Tegra 3 CPU. This is an ARM Cortex-A9
based processor. And this in turn implements the ARMv7A architecture.
So, the architectures are not the problem.
The real problem seems to be
dlopen failed: cannot locate symbol in6addr_any
The fluidsynth 2.1.5 Android binaries
> So in my opinion, doing the work to make the sample cache and maybe
even the whole soundfont loading mechanism work in parallel seems like
a good idea and worth the effort.
We need to be careful, as the soundfont loading is exposed via the API.
Breaking changes should be avoided. Also, we could
> 1) Not enough loud:
It's not clear what you mean by "artifacts". Is it clipping?
Interruptions? Distortions?
Ideally provide the broken audio rendering. But without the soundfont
and a test MIDI it's very hard to tell what's wrong. Try a different
audio driver. Or use the file renderer to
Exiting the fluidsynth shell with CTRL+D never worked reliably. This is
what happens to me when doing so:
> fluidsynth: panic: An error occurred while reading from stdin.
This error happens for 2.1.1 as well as 2.1.5. Also, I see no change
that could have caused a change in behaviour.
Perhaps
> I would need to compile with Visual Studio
Pls, why do you *need* to compile with that ancient Microsoft product?
Couldn't you use MinGW, CygWin, Clang or ICC?
Tom
An idea to parallelize loading would be to use openMP Tasks. I've made a
quick'n dirty implementation of that:
https://github.com/FluidSynth/fluidsynth/tree/parallelize-sf3-loading
Currently, there are two problems:
1. The samplecache is guarded by a mutex that prevents naïve
parallelization.
JJC is right, we need to profile it. I can test this with the Intel
VTune Amplifier. Which SF3 to use? Is the MuseScore_General.sf3 (38MB) a
realistic test-case? Or do we need another (bigger) one?
Tom
> there is no thread-safe way of retrieving and using those pointers for
external applications (outside of the synthesis thread)
Valid point, indeed.
> What do you think about:
> fluid_synth_pin_preset(synth, sfont_id, bank, program)
Ok for me. Looking forward to it :)
Tom
Ok, I see.
So, my preference of implementing this is to use the pinning approach.
I'm thinking of a function like
fluid_synth_pin_preset(synth, fluid_preset_t*)
The function will attempt to pin all samples of the given preset and
load them into memory, if they are currently unloaded. "to pin"
Understood. Yes, it would be nice to hear at least one other person who
has a similar use case.
I also have a preference of implementing this. But before, allow me to
ask the following questions:
> You could say that I shouldn't use dynamic sample loading in that
case
Why do you *need* to use
Confirmed.
Marcus, wait - remember:
>> [Tom] However, I vote against introducing new shell commands. Instead, the
>> existing commands, should become smart enough to detect whether they
>> have been called with one or two parameters.
> [Marcus] Good idea! [...]
Pls. what has changed your mind to "introduce
> The reason is that those actual shell fx commands are redundant with
> respective settings (synth.reverb., synth.chorus.).> Instead, the
> user can use the commands: set synth.reverb., set
> synth.chorus. which set the respective setting for all fx unit. Those
JJC is correct.
> why there are so many ".so" files.
Because fluidsynth itself depends on 3rd party software, most notably
glib. And this 3rd party software depends on other software. And that
software depends on software which... ultimately... depends on the
Android NDK.
VolcanoMobile has customized
> In particular we have not found a way to create and register midi out ports
> for Jack and use those in a similar way in Jack as for midi in.
If you want to create additional jack ports and manage connections
between them, you must use Jack's API. Fluidsynth doesn't expose any
functionality of
> My guess is that most users simply use a single stereo output from
FluidSynth.
Probably. And I must admit that I'm still not convinced that it makes
sense from a musical perspective to control all effects groups
independently. However, I understand the technical need for it, so I'll
buy it.
Sorry for the delay.
Marcus said:
> FluidSynth does not seem to mix the internal group channels back into
> the main output if you are using audio-groups=2 and audio-channels=1.
> Not sure if this is a feature or a bug... Tom, do you know anything
> about the design decisions here?
I'd say it's
> experimenting with various combinations of values synth.audio-channels and
> synth.audio-groups either gets me effects on all channels or none
Seems like you created the Ladspa effects on the Main:L and Main:R
ports. Have you tried explicitly specifying the subgroups as described
here?
Copying only libfluidsynth-2.dll to Unity's plugin folder is not enough. All of
fluidsynth's dependencies must be present in this directory as well (or
alternatively in any of the directories specified by %PATH%). You can use the
dependency walker to find out which dependency dlls of
> cmake .. -D-DCMAKE_INSTALL_PREFIX
Find the typo :)
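(For reference, the corrected invocation has a single -D; the install prefix here is only an example:)

```shell
cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local
```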
Tom
> my old, working environment contains a file libfluidsynth.so.1 which appears
> to be a link to libfluidsynth.so.1.5.2
So, you need to make sure that your new working environment contains
the same symlink and lib. Typically, this is done by installing
libfluidsynth1
My clear recommendation
> > Why don't you all relent, admit you are wrong, and change the synth
> > documentation to indicate that the synth inputs are SYNTH channels i.e.
> > FSchans as I have called them.
> Quite frankly: that is a weird (and a little rude) question to ask.
I don't think David is interested in a
> four instances are around four times faster than one instance would be
Four instances are pretty much four times more wasteful on resources
than a single instance. The synth has a built-in parallel renderer,
you just need to use it:
fluid_settings_setint(settings, "synth.cpu-cores", 4);
>
> Thus there is no way that midi channels are equivalent to FSchannels.
I said **semantically** equivalent.
From my understanding, it was actually your design decision to give it that
dedicated "midi channel" and "FSchannel" meaning. I understand why you do that
(i.e. playing the same notes
There is no such thing as "FSchannels". The documentation of the synth always
talks about "midi channels". The only way you communicate with the synth is
via "midi channels". Just because the number of midi channels is limited to 16
in standard midi files does not mean it's limited in the same
I've looked into it. The good news is that your "Harp LP" instrument is the
only one that disables the vibrato modulator on CC1. Your organ instruments
have proper vibrato. You can easily check this by executing
fluidsynth.exe 201606esteyDB-0.sf2
and then typing in the shell:
prog 0 13
noteon
David Back, 2 July 2020, 21:35:34 CEST wrote:
> Thanks Tom
> Its a lot more complicated than I expected. I assumed that FS would use its
> own implementation of CC 1 rather than using that of the sound font.
> It's encouraging to know that FS actually implements the function I am using
> - I
The soundfont spec defines a default modulator for CC1. That means a value of
127 will result in a vibrato effect of +-50cents (not to be confused with
tremolo).
This default modulator is implemented in fluidsynth, see this chart:
https://github.com/FluidSynth/fluidsynth/wiki/FluidFeatures
> I got a response from the Viena author; the mistake I was making was that I
> was doing this:
>
> CalcToTimeVal(PresetZoneVal) + CalcToTimeVal(InstrumentZoneVal) = FinalResult
Oh, Déjà-vu! Same topic, yesterday, two years ago:
FYI: A new implementation is now ready and proposed here:
https://github.com/FluidSynth/fluidsynth/pull/604#issuecomment-616091967
It's the C++ implementation we've talked about earlier, because it received
quite positive feedback from the mailing list compared to the glib approach.
An open
It seems that neither /usr/local/lib nor /usr/local/lib64 are part of your
linker search path. This is very unusual. You can either adjust LD_LIBRARY_PATH
as described here:
https://github.com/FluidSynth/fluidsynth/wiki/BuildingWithCMake#note
Or install fluidsynth's libraries to a different
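An LD_LIBRARY_PATH adjustment along those lines might look like this (the directories are examples; use wherever your libfluidsynth actually landed):

```shell
# Make the dynamic linker search /usr/local/lib and /usr/local/lib64 first
export LD_LIBRARY_PATH=/usr/local/lib:/usr/local/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
```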
Seems like you need to empty the build folder before calling cmake again for
the change to become effective.
Tom
Since you are on Debian, can you pls. enter the build folder again and try:
sudo make uninstall
cmake -DLIB_SUFFIX="" ..
sudo make install
Tom
I cannot reproduce the issue either. Tested locally on Linux as well as the
2.1.0 release windows binaries. Everything plays at the correct pitch.
The only related change between 2.0.5 and 2.1.0 I see is that our lookup tables
have changed. If your build of 2.1.0 is "polluted" it may use the
Thanks to all of you for your input. FYI, a PR for this issue is now available:
https://github.com/FluidSynth/fluidsynth/pull/629
Tom
There is no guarantee that sf->sffd is a FILE*. Again, this is exposed via
our API. sf->sffd is a void*, which could be some user-defined handle which,
when passed to fseek(), would result in an access violation and in a
horrible, possibly hard-to-reproduce crash of the entire application.
> If you decided to require C99, I would think you would want to use int64_t
> and its relatives for portability.
I actually had "long long" in mind, because int64_t would require
fluidsynth's header to include stdint.h, and whenever possible I would
like to avoid polluting our header with any C
We have a bug report on GitHub [1]: A user reports that loading Soundfonts
>2GiB fails on Win10 64bit. There are two factors making up the root cause:
1. The ANSI C file API, namely ftell() and fseek(), use "long" as data type
for specifying offsets in the file.
2. Even on 64bit Windows, "long"
Marc, please make sure to include the mailing list when replying.
On 3/13/2020 7:43 PM - Marc Evanstein wrote:
>
> Hi Tom -- thanks for your response.
>
> I'm a little unclear: how would I produce something like the prebuilt
> windows binaries that you linked to when building from source? (The
>
At least for Windows and Mac, bundling prebuilt binaries would be a possible way
to go. On Linux, however, you should advise people to install the required
packages via the package manager of their respective distribution.
To get an idea of how and which libs you need to bundle, see the
Regarding A), it would be worthwhile to run fluidsynth as the current user, rather
than as an extra user.
> why doesn't it connect to the user's pulsaudio?
You can tell fluidsynth via the setting audio.pulseaudio.server which server to
connect to. This setting receives a string which will then be
fluidsynth is a synthesizer, not a MIDI keyboard. The commands that you type
in fluidsynth's shell serve as replacement when you don't have a MIDI keyboard
connected. That is, a 'noteon' triggers a note in fluidsynth's synth engine
only. It does not send MIDI events. That's because all of
Before you try JJC's advice: Could it be that you installed fluidsynth
via Homebrew? If so, you probably installed version 2.0.8 which
unfortunately is broken on MacOS. And for more than 3 months Homebrew
has been unable to update it, although they already have the second request
for it [1]. Your best
Awesome, thanks Orcan!
A maintenance release for fluidsynth has been released.
Details can be found in the release notes:
Download: https://github.com/FluidSynth/fluidsynth/releases/tag/v2.1.1
API: http://www.fluidsynth.org/api/
Website: http://www.fluidsynth.org
FluidSynth is a real-time software synthesizer based
Ok, thank you, now it's getting clearer. Your "small change" is
absolutely correct. ATM, only the jack driver supports real
multichannel playback and thus also provides you with dedicated
buffers for reverb and chorus. I've updated the fluidsynth_fx.c
example program accordingly, as well as the
> I use an audio driver based on the fluidsynth_fx sample program
What exactly does your callback function look like? The fluidsynth_fx
example is slightly outdated. Since 2.0.0 it won't work for effects.
I'll update it. But unless you've adjusted the callback, it does not
really explain why 2.0.2
As JJC already indicated, your environment is "polluted". The
fluidsynth 2.1.0 binary tries to use libfluidsynth 2.0.2. This
cannot work. Execute ldd ./fluidsynth to see which libfluidsynth is
being loaded. Then remove it as well as other fluidsynth leftover
installation files (e.g. headers).
The best starting point for you would be to use the fast file renderer example:
http://www.fluidsynth.org/api/index.html#FileRenderer
However, instead of using the file renderer's
fluid_file_renderer_process_block(), you would directly call a
rendering function of the synth, as mentioned in this
> Let's keep the two use cases separate.
No, sorry. We cannot keep them separate. Soundfont2 is a real-time synth
model based on MIDI, a real-time protocol. That's what fluidsynth has
been designed for. That's what it works for.
> Are we going to be frozen into SoundFont 2.04 forever?
"FluidSynth
Here are some thoughts of a software engineer. Not sure what a musician
would say.
First of all, you are right. A "meaningfully long" attack phase will
"delay" the note onset and thus shorten the note. My question: what would be
the use-case of such a "meaningfully long" attack? The only use-case I
can think
It seems that you are using an old version of ubuntu or debian, for
which the latest version of fluidsynth hasn't been packaged yet (and
probably never will be). You need to either update your distro or
compile fluidsynth from source. Here's an overview of the versions
shipped by various distros:
A few thoughts from my side...
> I have the suspicion that your MIDI keyboard is not yet fully registered as a
> MIDI device when fluidsynth starts.
This shouldn't matter. As soon as a new MIDI port is made available in
alsa_seq, fluidsynth receives an event about a new port becoming
available
Carlo, the reason I would like to use C++ is that I want to maximize
the performance of fluidsynth. Particularly, the sequencer's event
queue, which currently blocks rendering for several seconds when
processing a few tens of thousands of events (taken from highly polyphonic,
automated MIDI files). See
> But I do wonder: sfconvert seems to have support for FLAC compression
> for at least three years now. Why doesn't MuseScore support SF4/FLAC
> yet?
SF3 was single-handedly driven by Werner Schweer from MuseScore, when
they were in a need to reduce the filesize of soundfonts. It seems
that
> "Which Microsoft Visual Studio version is mandatory for C++98?"
It's hard to find official documentation for those old products, but
VS2005 (8.0) should be sufficient.
Tom
The most recent revision of fluidsynth's sequencer has raised the
question of whether C++98 can become a mandatory dependency for
fluidsynth. Most importantly:
* Can we put C++ code into fluidsynth?
* Is there anybody out there who has any concerns or objections for a
C++ dependency?
A few more
I am sorry for all the inconvenience caused for our Mac users.
However, Homebrew already has a very well-organized and responsive
community that took care of proposing an update [1], only 1 day after
we've released a fix, 3 days after they reported the bug.
The way the maintainers of Homebrew
Patches for minuet and prboom submitted to upstream:
https://phabricator.kde.org/D26558
https://sourceforge.net/p/prboom-plus/patches/11/
Looking forward to seeing Rawhide switching to fluidsynth 2.1. In case you are
experiencing problems, just let me know.
Tom
After a quick source code review, the following programs have either already
adopted the fluidsynth2 API upstream, or should work out-of-the-box:
ardour5
audacious-plugins-amidi (couldn't find source code, but successfully built by
Debian Unstable)
calf
Carla (assuming Carla-vst and lv2-carla
> This begs the question, is having the two parallelly installable possible or
> not? [...] one of them is to change the installation targets, e.g. the binary
> name, the include directory locations of fluidsynth1 to avoid conflict with
> fluidsynth2.
Only the include files would conflict.
We have a feature request for supporting SF4:
https://github.com/FluidSynth/fluidsynth/issues/605
Anybody ever heard of it? Anybody using it, or interested in using it?
Tom
FYI: A draft C++ implementation is now ready. Detailed information can be found
on Github:
https://github.com/FluidSynth/fluidsynth/pull/604
I've already written a few unit tests for it, to make sure it stays backward
compatible with the current implementation. I'll test it more intensively in
> Is there an example (similar to the metronome example) that shows scheduling
> noteon events to a synth with multiple loaded sound fonts?
There is nothing special to take care about. Just like program and bank changes
affect a MIDI channel, you need to use FLUID_SEQ_PROGRAMSELECT events, that
Thank you Marcus and Antoine for your opinions.
I've filed a PR for deprecating the system timer:
https://github.com/FluidSynth/fluidsynth/pull/599
Also, I had a look into glib. Looks like it doesn't provide a heap sort. I
guess we would need a third party lib anyway, which is why I will
> one drawback that I see with the ordering you propose: you currently have the
> option to choose a different order in your client application because we
> currently process events in the order they were added.
Primarily, we process events according to timestamps. Here, we are talking
about a