Andrew Kohlsmith [EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] wrote:
> > My qualification is having worked on the IAX2 jitter buffer,
> > consequently having studied how audio flows from the received frames
> > through the jitter buffer and then via ast_translate() into the codec.
> >
> Hmm...  having worked on the IAX2 jitter buffer, can you tell us why
> trunking and jitter buffers don't get along?  When trunking with nufone I
> get ... interesting... audio if I have a jitter buffer enabled.  :-)
> 
Getting back to loss concealment for a moment, it seems to me that we
could do something like the following (rough code sketch after the list):

    * Every 20ms, call a scheduled function that inserts a "silent"
      voice frame into the stream.  The frame would be marked as
      "bogus" in some way and would be timestamped appropriately.

    * The jitter buffer should then drop the injected placeholder
      whenever a real frame has arrived for the same 20ms slot,
      leaving a constant 20ms stream of either voice data or silence.

    * The individual codecs should then either spot the frame's
      "bogus" marker and treat it as a lost frame, running whatever
      reconstruction they support, or, if the codec can't do
      reconstruction, process the frame as silent audio.  I expect
      that a silent frame would sound much the same as a dropped
      frame (with no reconstruction) anyway.
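To make the first two points concrete, here's a toy model of what I have
in mind.  None of this is real Asterisk code: the struct, the "bogus"
flag and the fixed-size buffer are invented stand-ins; in Asterisk the
frame would be a struct ast_frame and the injection would hang off the
channel's scheduler.

/*
 * Toy model of the placeholder-injection scheme -- not Asterisk code.
 */
#include <stdio.h>
#include <string.h>

#define FRAME_MS 20          /* one slot every 20ms                 */
#define SAMPLES  160         /* 20ms of 8kHz signed linear audio    */
#define SLOTS    8           /* tiny fixed-size jitter buffer       */

struct frame {
    unsigned int ts;         /* timestamp in ms, multiple of 20     */
    int bogus;               /* 1 = injected placeholder            */
    short data[SAMPLES];     /* all zeroes when bogus               */
};

static struct frame slots[SLOTS];
static int used[SLOTS];

/* Step 1: called every 20ms by a scheduler; queue a marked, silent frame. */
static void inject_placeholder(unsigned int now_ms)
{
    int i = (now_ms / FRAME_MS) % SLOTS;

    if (used[i] && !slots[i].bogus)
        return;              /* real audio already here; the placeholder is the duplicate */
    memset(&slots[i], 0, sizeof(slots[i]));
    slots[i].ts = now_ms;
    slots[i].bogus = 1;
    used[i] = 1;
}

/* Called whenever a real voice frame arrives from the network. */
static void put_voice(unsigned int ts, const short *samples)
{
    int i = (ts / FRAME_MS) % SLOTS;

    slots[i].ts = ts;
    slots[i].bogus = 0;      /* real frame replaces any placeholder */
    memcpy(slots[i].data, samples, sizeof(slots[i].data));
    used[i] = 1;
}

/* Step 2: the reader always gets exactly one frame per 20ms slot. */
static const struct frame *get_frame(unsigned int now_ms)
{
    int i = (now_ms / FRAME_MS) % SLOTS;
    return used[i] ? &slots[i] : NULL;
}

int main(void)
{
    short voice[SAMPLES] = { 1 };        /* pretend this came off the wire */

    inject_placeholder(0);               /* scheduler fires first...       */
    put_voice(0, voice);                 /* ...then the real frame arrives */
    inject_placeholder(20);              /* nothing arrives for slot 20ms  */

    printf("slot 0ms:  bogus=%d\n", get_frame(0)->bogus);   /* 0: real audio kept  */
    printf("slot 20ms: bogus=%d\n", get_frame(20)->bogus);  /* 1: silence/PLC slot */
    return 0;
}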

Does that sound feasible with the current framework?  My initial
inspection of the SIP/IAX2 code says that it should be, although
it'd introduce a fair amount of overhead.
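For the third point, the codec-side branch might look roughly like this,
continuing the toy sketch above.  plc_reconstruct() and normal_decode()
are invented stubs standing in for a codec's concealment and decode
routines, not anything a real codec module exports.

/* Codec-side branch: if the decoder can conceal a lost frame, let it;
 * otherwise just hand back silence. */
static void plc_reconstruct(short *out, int n)
{
    memset(out, 0, n * sizeof(short));   /* a real PLC would synthesise audio here */
}

static void normal_decode(const short *in, short *out, int n)
{
    memcpy(out, in, n * sizeof(short));  /* ordinary decode path (here: passthrough) */
}

static void decode_slot(const struct frame *f, short *out, int codec_has_plc)
{
    if (f->bogus) {
        if (codec_has_plc)
            plc_reconstruct(out, SAMPLES);            /* let the codec fill the gap */
        else
            memset(out, 0, SAMPLES * sizeof(short));  /* fall back to plain silence */
    } else {
        normal_decode(f->data, out, SAMPLES);
    }
}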

-- 
   _/   _/  _/_/_/_/  _/    _/  _/_/_/  _/    _/
  _/_/_/   _/_/      _/    _/    _/    _/_/  _/   K e v i n   W a l s h
 _/ _/    _/          _/ _/     _/    _/  _/_/    [EMAIL PROTECTED]
_/   _/  _/_/_/_/      _/    _/_/_/  _/    _/
