Hi Christian and all,
> > > > It needs to be debugged, for example by commenting out various synthesis
> > > > components to narrow down the source of this issue.
> > >
> > > You can use me as your guinea pig - just give me a hint how and where I
> > > can comment/disable them.
> >
> > First of all, I would force a constant velocity value being used by the
> > sampler, as provided by the attached hack. That way you can forget about all
> > MIDI input issues for now and just concentrate on the actual bug.
> >
> > Then your playground will be src/engines/common/AbstractVoice.cpp. You find
> > a method there called AbstractVoice::Synthesize(). That's actually the
> > heart of it all. Disable synthesis components there (i.e. EQ, EGs, filter,
> > etc.) until the issue disappears.
>
> Any finding in the meantime Frank?
Hm...not a lot so far. Let me list:
- I hard-coded the velocity as per your suggestion, but that didn't change
anything (a conceptual sketch of what that test boils down to is below this
list). I believe I can trust my sequencer software to produce stable velocity
output :-).
- I disabled a couple of things in AbstractVoice.cpp as you suggested (I have
attached a diff against the current version of that file). Again, no change
so far - the issue still appears.
- I noticed, when comparing playback of the original sample against
LinuxSampler's output, that LinuxSampler "gets it wrong" most of the time -
only a minority of the events play the sample back correctly.
The original sample has a pretty hard "click" at the beginning (I have again
attached a very close-up view of the first 50 or so frames of that sample as
seen in Audacity, file "original_sample.png"), and it is this click I want to
hear reproduced by LinuxSampler. Most of the sounds LinuxSampler produces are
a lot softer, but I believe I stated that before.
- The only other interesting observation from today is what happens when I
change the jackd buffer size (a note on which value I mean exactly follows
further down). My standard buffer size here is 128 frames, and with that
roughly one sample in about 10 events is played back correctly. If I increase
the buffer size to 1024 frames, a correctly played sample becomes much rarer -
I have to wait up to a minute to hear one. On the other hand, reducing the
buffer size to 64 frames yields more correct playbacks - roughly 1 out of 4.
Not sure whether this brings us closer to the root cause, but it is still
interesting.
- To make sure I am not being fooled by some issue with my sound card
(M-Audio Audiophile 2496), I set up playback of the same file with qtractor
and Rui's "drumkv1" LV2 plugin. That one delivers stable, "correct" audio
data, so the problem really appears to stem from LinuxSampler itself.
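
For completeness, "hard-coding the velocity" in the first point above boils
down to overwriting the incoming note-on velocity with a constant before the
voice is triggered, so velocity can be ruled out as a variable. The snippet
below is purely illustrative - the names are made up, it is neither
LinuxSampler's actual API nor the patch you attached:

    #include <cstdio>

    // Illustrative only: hypothetical event/handler names, not the real code.
    struct NoteOnEvent { int key; int velocity; };

    static void triggerVoice(const NoteOnEvent& ev) {
        // stand-in for handing the event on to voice allocation/synthesis
        std::printf("note %d triggered with velocity %d\n", ev.key, ev.velocity);
    }

    static void handleNoteOn(NoteOnEvent ev) {
        ev.velocity = 100;   // force every note to the same velocity
        triggerVoice(ev);    // everything downstream now sees a constant value
    }

    int main() {
        handleNoteOn({60, 35});   // a soft note from the sequencer...
        handleNoteOn({60, 127});  // ...and a loud one - both end up identical
        return 0;
    }
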
I'll continue fooling around with AbstractVoice.cpp :-).
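
And one practical note regarding the buffer sizes mentioned above, so we are
talking about the same number: by "buffer size" I mean the JACK period size,
i.e. the frames-per-process-cycle value a client sees via
jack_get_buffer_size(). A trivial way to double-check what the running server
uses (the client name is arbitrary; build with something like
g++ bufsize.cpp $(pkg-config --cflags --libs jack)):

    #include <cstdio>
    #include <jack/jack.h>

    int main() {
        // connect to the running JACK server as a throwaway client
        jack_client_t* client = jack_client_open("bufsize-check", JackNullOption, nullptr);
        if (!client) {
            std::fprintf(stderr, "could not connect to JACK\n");
            return 1;
        }
        // period size (frames per process cycle) and sample rate of the server
        std::printf("buffer size: %u frames @ %u Hz\n",
                    jack_get_buffer_size(client), jack_get_sample_rate(client));
        jack_client_close(client);
        return 0;
    }
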
Greetings,
Frank
--- src/engines/common/AbstractVoice.cpp.orig 2017-01-15 13:08:46.174155350 +0100
+++ src/engines/common/AbstractVoice.cpp 2017-01-19 00:05:44.389549546 +0100
@@ -379,8 +379,8 @@
pEngineChannel->GetFxSendCount() > 0 &&
(pMidiKeyInfo->ReverbSend || pMidiKeyInfo->ChorusSend);
- const bool bEq =
- pSignalUnitRack != NULL && pSignalUnitRack->HasEq() && pEq->HasSupport();
+ const bool bEq = false;
+// pSignalUnitRack != NULL && pSignalUnitRack->HasEq() && pEq->HasSupport();
if (bEq) {
pEq->GetInChannelLeft()->Clear();
@@ -457,7 +457,8 @@
finalSynthesisParameters.fFinalPitch = Pitch.PitchBase * Pitch.PitchBend * NotePitch;
- float fFinalVolume = VolumeSmoother.render() * CrossfadeSmoother.render() * NoteVolumeSmoother.render();
+ float fFinalVolume = NoteVolumeSmoother.render();
+// float fFinalVolume = VolumeSmoother.render() * CrossfadeSmoother.render() * NoteVolumeSmoother.render();
#ifdef CONFIG_PROCESS_MUTED_CHANNELS
if (pChannel->GetMute()) fFinalVolume = 0;
#endif
@@ -477,6 +478,7 @@
}
// process envelope generators
+#if 0
switch (pEG1->getSegmentType()) {
case EG::segment_lin:
fFinalVolume *= pEG1->processLin();
@@ -491,6 +493,7 @@
fFinalVolume *= pEG1->processPow();
break;
}
+
switch (pEG2->getSegmentType()) {
case EG::segment_lin:
fFinalCutoff *= pEG2->processLin();
@@ -505,12 +508,15 @@
fFinalCutoff *= pEG2->processPow();
break;
}
+
if (EG3.active()) finalSynthesisParameters.fFinalPitch *= EG3.render();
+#endif
+
// process low frequency oscillators
- if (bLFO1Enabled) fFinalVolume *= (1.0f - pLFO1->render());
- if (bLFO2Enabled) fFinalCutoff *= (1.0f - pLFO2->render());
- if (bLFO3Enabled) finalSynthesisParameters.fFinalPitch *= RTMath::CentsToFreqRatio(pLFO3->render());
+// if (bLFO1Enabled) fFinalVolume *= (1.0f - pLFO1->render());
+// if (bLFO2Enabled) fFinalCutoff *= (1.0f - pLFO2->render());
+// if (bLFO3Enabled) finalSynthesisParameters.fFinalPitch *= RTMath::CentsToFreqRatio(pLFO3->render());
} else {
// if the voice was killed in this subfragment, enter fade out stage
if (itKillEvent && killPos <= iSubFragmentEnd) {
@@ -526,26 +532,26 @@
}*/
// TODO: ^^^
- fFinalVolume *= pSignalUnitRack->GetEndpointUnit()->GetVolume();
- fFinalCutoff = pSignalUnitRack->GetEndpointUnit()->CalculateFilterCutoff(fFinalCutoff);
- fFinalResonance = pSignalUnitRack->GetEndpointUnit()->CalculateResonance(fFinalResonance);
+// fFinalVolume *= pSignalUnitRack->GetEndpointUnit()->GetVolume();
+// fFinalCutoff = pSignalUnitRack->GetEndpointUnit()->CalculateFilterCutoff(fFinalCutoff);
+// fFinalResonance = pSignalUnitRack->GetEndpointUnit()->CalculateResonance(fFinalResonance);
finalSynthesisParameters.fFinalPitch =
pSignalUnitRack->GetEndpointUnit()->CalculatePitch(finalSynthesisParameters.fFinalPitch);
}
- fFinalCutoff *= NoteCutoff;
- fFinalResonance *= NoteResonance;
+// fFinalCutoff *= NoteCutoff;
+// fFinalResonance *= NoteResonance;
// limit the pitch so we don't read outside the buffer
finalSynthesisParameters.fFinalPitch = RTMath::Min(finalSynthesisParameters.fFinalPitch, float(1 << CONFIG_MAX_PITCH));
// if filter enabled then update filter coefficients
- if (SYNTHESIS_MODE_GET_FILTER(SynthesisMode)) {
- finalSynthesisParameters.filterLeft.SetParameters(fFinalCutoff, fFinalResonance, GetEngine()->SampleRate);
- finalSynthesisParameters.filterRight.SetParameters(fFinalCutoff, fFinalResonance, GetEngine()->SampleRate);
- }
+// if (SYNTHESIS_MODE_GET_FILTER(SynthesisMode)) {
+// finalSynthesisParameters.filterLeft.SetParameters(fFinalCutoff, fFinalResonance, GetEngine()->SampleRate);
+// finalSynthesisParameters.filterRight.SetParameters(fFinalCutoff, fFinalResonance, GetEngine()->SampleRate);
+// }
// do we need resampling?
const float __PLUS_ONE_CENT = 1.000577789506554859250142541782224725466f;