> On March 2, 2015, 3:54 p.m., Matt Jordan wrote:
> > Review of https://wiki.asterisk.org/wiki/display/AST/RTP+engine+replacement
> > 
> > -- Section: A glossary of terms
> > 
> > {quote}
> > * RTP stream: RTP instances created by Steel Zebra will be referred to as 
> > RTP streams.
> > * RTP session: A structure created by Steel Zebra that contains related RTP 
> > streams and coordinates activities between streams where necessary.
> > {quote}
> > 
> > Since RTP instances are created separately by a channel driver, how will 
> > the RTP engine be notified that multiple RTP instances (that is, streams) 
> > are related? This may be answered in another section, but it might be good 
> > to know that this will require changes in the RTP engine here.
> > 
> > -- Section: Media Flow: Incoming Media
> > 
> > {quote}
> > Buffering/reordering
> > 
> > RTP may be received in bursts, out of order, or in other less-than-ideal 
> > ways. Asterisk will implement reception buffers to place incoming RTP 
> > traffic into, potentially reordering packets as necessary if they arrive 
> > out of order.
> > {quote}
> > 
> > While I'm not against buffering in the RTP stack, have you given any 
> > thought to how that would be set up? As it adds delay, I would expect that 
> > every RTP stream should be buffered; consequently, this would need to be 
> > driven by configuration or by some dialplan construct. Configuration may 
> > work in some cases (for example, when you know that some endpoint is always 
> > jittery); in other cases, dialplan is probably a better approach. In both 
> > cases however, these would require manipulation at a layer higher than the 
> > RTP stack itself, which would mean drilling down through the RTP engine 
> > into the RTP instance - which ends up sounding something like our current 
> > jitter buffer approaches. What advantages are there to buffering in the 
> > stack itself, versus simply expanding the jitter buffers to handle more 
> > than just VOICE frames? Would we want to provide buffering in native RTP 
> > bridges, or let the far endpoints handle the re-ordering?
> > 
> > -- Section: Other Stuff
> > 
> > {quote}
> > Native local RTP bridges
> > 
> > Native local RTP bridges have a few considerations when implementing a new 
> > RTP engine.
> > 
> > First, bridge_native_rtp requires that the RTP engine's local_bridge method 
> > be the same for each of the bridged RTP instances. If we create a 
> > new RTP engine, it will not have the same local_bridge method as 
> > res_rtp_asterisk. This means that calls that use res_rtp_asterisk will not 
> > be able to be locally bridged with calls that use the new RTP engine. I 
> > think it is possible to rework the inner workings of native local bridges 
> > such that they can be between different RTP engines. However, if the goal 
> > is the total removal of res_rtp_asterisk from the codebase, then such 
> > considerations are not as necessary.
> > 
> > Second, native local RTP bridging is performed at the main RTP API layer by 
> > having the bridged RTP instances point at each other. It is up to the 
> > individual RTP instances to detect that this has occurred and act 
> > accordingly. It might work better if the job of setting bridges on RTP 
> > instances were passed down to the engines themselves in case they want to 
> > perform other side effects besides changing a pointer.
> > {quote}
> > 
> > I think it is arguable whether or not the local_bridge code should be in 
> > res_rtp_asterisk still. Ideally, an RTP implementation would simply have 
> > "direct write/direct read" callbacks that the bridge itself would call 
> > into, rather than letting the RTP implementation do the actual bridging. 
> > This has a few advantages:
> > (1) We could implement other, more interesting RTP bridges (such as a 
> > multi-party RTP forwarding bridge or an RTP bridge with RTP recording)
> > (2) It simplifies the thread boundaries. Right now, it's a little tricky 
> > managing the safety of calling into an RTP implementation from 
> > bridge_native_rtp. Having a more concrete boundary between the bridge and 
> > the RTP implementations would be advantageous.
> > 
> > Of course, if res_rtp_asterisk is refactored as opposed to replaced, that 
> > makes altering and/or supporting the native bridging in any fashion easier.
> > 
> > I think you might be referring to this in your second point, but I'm not 
> > entirely sure that's what you meant.

{quote}
Since RTP instances are created separately by a channel driver, how will the 
RTP engine be notified that multiple RTP instances (that is, streams) are 
related? This may be answered in another section, but it might be good to know 
that this will require changes in the RTP engine here.
{quote}

This is addressed in the "Media Setup" section, subsection "The actual Setup". 
{{ast_rtp_instance_new()}} takes an engine-specific void pointer as its final 
parameter. The new RTP engine will be passed a structure that contains the 
channel that the RTP instance is being created for. This channel is used as a 
key to determine whether a session has been set up yet.

{quote}
Stuff about buffering
{quote}

Between your comments here and OEJ's comments on the -dev list, I think the 
buffering should just be removed from the RTP layer completely. Jitter buffers 
can be expanded to work on other media streams, and something at either the 
channel or bridge layer can be used to synchronize content from multiple 
streams if desired.
This also means that the need for an RTP session structure is lessened, since 
its main duty was going to be to synchronize media from multiple streams. The 
RTP session structure will probably still be needed eventually for BUNDLE, and 
it may be useful to have for statistics-gathering purposes.
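
For what "expanding the jitter buffers" would involve at minimum, here is a 
rough sketch of sequence-number reordering for arbitrary frame types (this is 
illustrative, not the actual abstract_jb code). The one subtlety is that RTP 
sequence numbers are 16 bits and wrap, so ordering has to use serial-number 
arithmetic rather than a plain comparison:

```c
#include <stdint.h>

/* Nonzero if sequence number a is "before" b, accounting for 16-bit
 * wraparound (RFC 1982-style serial number arithmetic). */
static int seq_before(uint16_t a, uint16_t b)
{
	return (int16_t) (a - b) < 0;
}

#define JB_SLOTS 32

struct jb_entry {
	uint16_t seq;
	/* payload omitted in this sketch */
};

struct jb {
	struct jb_entry slots[JB_SLOTS];
	int count;
};

/* Insert a packet keeping slots in ascending sequence order; reject
 * duplicates. Returns 0 on success, -1 on duplicate or full buffer. */
static int jb_put(struct jb *jb, uint16_t seq)
{
	int i;

	if (jb->count == JB_SLOTS) {
		return -1;	/* full; real code would evict the oldest */
	}
	for (i = 0; i < jb->count; i++) {
		if (jb->slots[i].seq == seq) {
			return -1;	/* duplicate packet */
		}
	}
	/* Shift later entries right to make room at the sorted position. */
	for (i = jb->count; i > 0 && seq_before(seq, jb->slots[i - 1].seq); i--) {
		jb->slots[i] = jb->slots[i - 1];
	}
	jb->slots[i].seq = seq;
	jb->count++;
	return 0;
}
```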

{quote}
Stuff about local native bridges
{quote}

I like the ideas you have here, and I'd be all for implementing them. The only 
qualm I have is that it expands the scope of the project to also include a 
refactor of native RTP bridging rather than just refactoring or rewriting an 
RTP engine.
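
To make sure we're picturing the same thing, here is roughly the shape I 
understood your direct read/write idea to take (all names below are invented; 
none of this is the current rtp_engine or bridge_native_rtp API). The bridge 
owns the forwarding loop and calls into both legs through a fixed callback 
boundary, so the legs no longer need to share a local_bridge implementation, 
and variants like multi-party forwarding or a recording tap become bridge-level 
features:

```c
#include <stddef.h>

struct rtp_packet {
	const unsigned char *data;
	size_t len;
};

struct rtp_instance;

struct rtp_direct_ops {
	/* Pull the next received packet; return -1 if none is ready. */
	int (*direct_read)(struct rtp_instance *inst, struct rtp_packet *out);
	/* Push a packet straight out on this instance. */
	int (*direct_write)(struct rtp_instance *inst,
		const struct rtp_packet *pkt);
};

/* The bridge drives the exchange: read from one leg, write to the other.
 * The engines never see each other, which also gives the cleaner thread
 * boundary mentioned in point (2). */
static int bridge_forward_one(struct rtp_instance *from,
	const struct rtp_direct_ops *from_ops,
	struct rtp_instance *to,
	const struct rtp_direct_ops *to_ops)
{
	struct rtp_packet pkt;

	if (from_ops->direct_read(from, &pkt)) {
		return -1;	/* nothing to forward */
	}
	return to_ops->direct_write(to, &pkt);
}
```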


- Mark


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviewboard.asterisk.org/r/4453/#review14566
-----------------------------------------------------------


On Feb. 27, 2015, 6:47 p.m., Mark Michelson wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviewboard.asterisk.org/r/4453/
> -----------------------------------------------------------
> 
> (Updated Feb. 27, 2015, 6:47 p.m.)
> 
> 
> Review request for Asterisk Developers.
> 
> 
> Description
> -------
> 
> I've created a series of wiki pages that discuss the idea of writing an 
> improved RTP architecture in Asterisk 14.
> 
> To regurgitate some details from the linked page, the current RTP engine in 
> Asterisk (res_rtp_asterisk) gets the job done but has some issues. It is not 
> architected in a way that allows for easy insertion of new features. It has 
> dead code (or code that might as well be dead). And it has some general flaws 
> in it with regards to following rules defined by fundamental RFCs.
> 
> I have approached these wiki pages with the idea of writing a replacement for 
> res_rtp_asterisk.c. The reason for this is that there are interesting 
> media-related IETF drafts (trickle ICE and BUNDLE, to name two) that would be 
> difficult to implement in the current res_rtp_asterisk.c code correctly. 
> Taking the opportunity to re-engineer the underlying architecture into 
> something more layered and extendable would help in this regard. The goal 
> also is to not disturb the high-level RTP engine API wherever possible, 
> meaning that channel drivers will not be touched at all by this set of 
> changes.
> 
> The main page where this is discussed is here: 
> https://wiki.asterisk.org/wiki/display/AST/RTP+engine+replacement . This page 
> has a subpage that has my informal rambling notes regarding a sampling of RTP 
> and media-related RFCs and drafts I read. It also has a subpage with more 
> informal and rambling notes about the current state of RTP in Asterisk. While 
> these pages are not really part of the review, you may want to read them 
> anyway just so you might have some idea of where I'm coming from when drawing 
> up the ideas behind a new architecture.
> 
> I also have a task list page that details a list of high-level tasks that 
> would need to be performed if a new RTP engine were to be written: 
> https://wiki.asterisk.org/wiki/display/AST/RTP+task+list . This should give 
> some idea of the amount of work required to make a new RTP engine a reality. 
> The tasks with (?) around them are tasks that add new features to Asterisk's 
> RTP support, and it is therefore questionable whether they fit in the scope 
> of this work at this time.
> 
> Some things to consider when reading through this:
> * Refactor or rewrite? When considering current issues with RTP/RTCP in 
> Asterisk, and considering the types of features that are coming down the 
> pipe, which of these options seems more prudent?
> * Does the proposed architecture make sense from a high level? Is there 
> confusion about how certain areas are intended to work?
> * Are there any glaring details you can think of that have been left out?
> * Are there any questions about how specific features would fit into the 
> described architecture?
> 
> 
> Diffs
> -----
> 
> 
> Diff: https://reviewboard.asterisk.org/r/4453/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Mark Michelson
> 
>

-- 
asterisk-dev mailing list