Hello Jérôme,

(see inline)
--- Jérôme WAGNER <[EMAIL PROTECTED]> wrote:

> Hello Matthias,
> 
> This is going to be hard for me next week because I will be out of the office.
> 
> Your approach for encap/deencap looks good and at least corresponds to what
> we do today with ffmpeg.
> 
> Some more thoughts about your API:
> 
> - there is a debate at the codec init stage as to who allocates the
> processing buffers. Sometimes it is better when done by the codec
> implementation itself; sometimes the application does it itself.
> 
> In gstreamer, they use the term "in-place" for a filter that does all its
> work without allocating any extra memory.

What exactly do you mean by processing buffers? Everything codec-specific
will be in the plugin.
On the interface we have the RTP payloads and the decoded frames, which in my
opinion should be freed by the application, since depending on its
architecture they might still be in use (e.g. by a different thread) when
control returns to the codec plugin. About the unencoded frame and the RTP
payload passed from the application to the plugin, I am still thinking...
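The ownership model argued for here (the application, not the plugin, frees the decoded frame once it is done with it) could look roughly like this in C. This is only a sketch: the layout of frame_t and the helper names deliver_frame/on_frame are illustrative assumptions, not the actual plugin API.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical decoded-frame type; the real frame_t lives in the plugin. */
typedef struct {
    uint8_t *data;
    size_t   len;
} frame_t;

typedef void (frameCallback)(frame_t *decoded_frame, void *priv);

/* Application side: consumes and frees the frame, possibly later and
 * from a different thread than the one that decoded it. */
static size_t g_consumed;

static void on_frame(frame_t *f, void *priv)
{
    (void)priv;
    g_consumed = f->len;   /* ...hand off to the renderer here... */
    free(f->data);         /* the application frees when it is done */
    free(f);
}

/* Plugin side: allocates the decoded frame and hands it to the
 * application through the callback.  Ownership transfers with the
 * call; the plugin never touches the frame again. */
static void deliver_frame(const uint8_t *pixels, size_t len,
                          frameCallback *cb, void *priv)
{
    frame_t *f = malloc(sizeof *f);
    f->data = malloc(len);
    memcpy(f->data, pixels, len);
    f->len = len;
    cb(f, priv);           /* the application now owns f */
}
```
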


> - on another point: I don't remember if it was you who sent it, but it's
> interesting to compare your API with:
> http://www.voxgratia.org/docs/openh323/openh323-v1_18_0/classH323Codec.html
> there could be interesting ideas to get from there.

Yes, I am looking at
http://www.voxgratia.org/docs/codec_plugins.html
However, the issues are similar to the plugin system that has been
implemented by Vadim, which does not offer the:
- configuration aspects
- SDP aspects
- fragmentation aspects (encap/deencap)
So the interface should be extended/changed in my opinion.
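As a rough sketch of what such an extended interface could look like, grouping the missing aspects next to plain encode/decode: all names here are illustrative, not an existing phapi or WengoPhone API.

```c
#include <stddef.h>

/* Hypothetical extended video-plugin interface; every member name is
 * an assumption for illustration, not an existing phapi API. */
typedef struct video_codec_plugin {
    /* encoding/decoding + fragmentation aspects (encap/deencap) */
    void *(*create_encoder)(void);
    int   (*encode_and_encap)(void *ctx, const void *frame);
    /* configuration aspects */
    int   (*config_set_value)(void *ctx, const char *key, void *value);
    /* SDP aspects */
    int   (*sdp_describe)(void *ctx, char *buf, size_t buflen);
} video_codec_plugin;

/* Stub implementations so the function table can be exercised. */
static void *stub_create(void) { return NULL; }
static int stub_encode(void *c, const void *f)
{
    (void)c; (void)f; return 0;
}
static int stub_config(void *c, const char *k, void *v)
{
    (void)c; (void)k; (void)v; return 0;
}
static int stub_sdp(void *c, char *b, size_t n)
{
    (void)c; (void)b; (void)n; return 0;
}

static const video_codec_plugin demo_plugin = {
    stub_create, stub_encode, stub_config, stub_sdp
};
```

A plugin loader would then only need to dlopen the plugin and fetch one such table, instead of looking up encoder/decoder entry points one by one.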

> 
> -----
> On the DirectX subject, mstute tried to integrate it but currently has a
> crash on his machine inside DirectX. We need to look more into this.
> 
> -----
That was a crash I unfortunately couldn't reproduce in my environment;
further input is welcome.

> On the encap/decap, I'll probably try to commit it before tomorrow on the
> trunk; it's only a few lines and does not seem to create any backward
> compatibility issues. This way you will be able to see it, and it will not
> get in your way, I'm sure (because it's very short). I'll keep you posted
> with the revision number.
> 
> Jérôme
> 
Wouldn't that break compatibility with the existing WengoPhone
implementation, adding confusion? Of course I would welcome a
standards-compliant implementation to kick out the ffmpeg-proprietary stuff.


> 
> -----Original Message-----
> From: Matthias Schneider [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, August 10, 2006, 13:54
> To: Jérôme WAGNER
> Cc: wengophone-devel
> Subject: RE: [Wengophone-devel] Update on work on H.264 codec integration
> 
> Hello Jerome, 
> let me explain how my encap and deencap work:
> The plugins I have written so far include a max_payload_size configuration
> setting, which is the MTU minus the IP/UDP/RTP header size (MTU - 40
> bytes). All generated RTP payload packets are fragmented (in case NAL units
> are bigger) to fit into that size. After a frame is handed to the encoder,
> the rtpCallback callback is invoked one or more times, once for each RTP
> packet to be generated. On the receiving side, multiple RTP packet payloads
> with the same timestamp are passed to the deencap_and_decode routine, where
> they are reassembled. The last packet of a frame has the marker bit set,
> which is signaled via the mbit variable and triggers the decoding process.
> The decoded frame is then passed to the application via the frameCallback.
> As I have seen in the phapi implementation, RTP packets with the same
> timestamp are first collected in a list and sent to the deencap process
> after receipt of the marker bit. This mechanism could be kept in order to
> decide whether to decode a frame or not.
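The fragmentation pattern described in the quoted paragraph could be sketched roughly like this; encap_frame and count_cb are illustrative helpers (the real plugin splits on NAL-unit boundaries, which this sketch omits), and only the mark-bit/per-packet-callback pattern is taken from the description above.

```c
#include <stdint.h>
#include <stddef.h>

typedef void (rtpCallback)(uint8_t *payload, uint16_t payload_len,
                           uint64_t rtpTimestamp, int mbit, void *priv);

/* Splits one encoded frame into chunks of at most max_payload_size
 * bytes (the MTU minus ~40 bytes of IP/UDP/RTP headers) and fires the
 * callback once per chunk; only the last chunk of the frame carries
 * the mark bit. */
static void encap_frame(uint8_t *buf, size_t len, size_t max_payload_size,
                        uint64_t ts, rtpCallback *cb, void *priv)
{
    size_t off = 0;
    while (off < len) {
        size_t chunk = len - off;
        if (chunk > max_payload_size)
            chunk = max_payload_size;
        cb(buf + off, (uint16_t)chunk, ts, off + chunk == len, priv);
        off += chunk;
    }
}

/* Counting callback used only to demonstrate the pattern. */
static int g_packets, g_last_mbit;
static void count_cb(uint8_t *p, uint16_t n, uint64_t ts, int mbit,
                     void *priv)
{
    (void)p; (void)n; (void)ts; (void)priv;
    g_packets++;
    g_last_mbit = mbit;
}
```

For example, a 3000-byte frame with max_payload_size = 1400 yields three callbacks (1400 + 1400 + 200 bytes), with the mark bit set only on the third.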
> 
> About the alternative H.263 encapsulation, I would suggest writing a plugin
> for each of the two encaps: one for WengoPhone compatibility (to be
> deprecated), which I hope to have running this week, and a second one that
> complies with the standard, featuring your implementation.
> 
> I think it could be possible to start integrating the two then-existing
> plugins next week. I hope I can get some support from you, especially for
> integrating things into the GUI as well (and perhaps you could have a look
> at that DirectX stuff I sent a few months ago); I would prefer to focus on
> the plugins and interoperability.
> 
> That's it for now, 
> Matthias
> 
> 
> --- Jérôme WAGNER <[EMAIL PROTECTED]> wrote:
> 
> > Hello Matthias,
> > 
> > Great work so far!
> > 
> > I agree on the fact that encap and deencap are necessary. Nevertheless,
> > from what I remember in ffmpeg, you need to decode a whole frame.
> > 
> > If the encoded frame is split into N chunks, you get N deencaps for 1
> > decode. On the encoding side, when you encode 1 frame, you get N encaps.
> > 
> > That may lead to a modification of your API, where I don't see the
> > slicing aspect that we have now in phapi and that is necessary to fall
> > below the 1000-1400-byte MTU limit on most backbone routers in UDP.
> > 
> > Currently we're missing the whole encap/deencap in phmedia-video, and as
> > you have seen in the committed code we have a mixed H263+ encoder versus
> > an H263 decoder; I have some uncommitted code that hacks everything into
> > H263 and adds a fixed RFC2190 encap; I could commit that on the trunk
> > since it is better than what we have now (and should fix the eyebeam
> > interop problem)
> > 
> > Keep us posted !
> > Jérôme
> > 
> > -----Original Message-----
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On behalf of Matthias Schneider
> > Sent: Wednesday, August 9, 2006, 15:16
> > To: Vadim Lebedev
> > Cc: wengophone-devel
> > Subject: Re: [Wengophone-devel] Update on work on H.264 codec integration
> > 
> > Hello Vadim,
> > 
> > ffmpeg features only an H.264 decoder, although it includes a wrapper to
> > use x264 for encoding. However, I did not see the use of adding another
> > wrapper, so I decided to use x264 directly.
> > It was actually planned to feature a decoder in x264 as well, but (up to
> > now) that task was never completed, so the combination of x264 for
> > encoding and ffmpeg for decoding is the only workable one right now.
> > 
> > About the plugin system: of course I do not intend to reinvent the
> > wheel. However, I think that video codec plugins have to contain more
> > functionality (and thus a bigger interface) than the audio codec plugins
> > used up to now.
> > Right now a phapi audio codec plugin offers the following functions:
> > 
> > encoder_init
> > encoder
> > encoder_cleanup
> > 
> > The same is to be said about the decoder.
> > 
> > I think the functionality for a "complete" video plugin may be divided
> > into the following:
> > - encoding/decoding
> > - encapsulation and deencapsulation for RTP payload
> > - logging 
> > - configuration management
> > - codec specific SDP negotiation 
> > 
> > The following is the functionality that I have partly implemented in my
> > proof-of-concept code, ordered by functional group:
> > 
> > - encoding/decoding + encapsulation and deencapsulation for RTP payload
> >     typedef void (frameCallback) (frame_t* decoded_frame, void* priv);
> >     bool deencap_and_decode (void* context, uint8_t* payload, uint32_t payload_len, uint64_t timestamp, bool mbit);
> >     void* create_decoder ();
> >     int init_decoder (void* context, const char *media_fmt);
> >     void reg_decoder_frame_callback(void* context, frameCallback* frameCbk, void* priv);
> > 
> >     typedef void (rtpCallback) (uint8_t* payload, uint16_t payload_len, u_int64_t rtpTimestamp, int mbit, void* priv);
> >     bool encode_and_encap (void* context, frame_t* frame);
> >     void* create_encoder();
> >     int init_encoder(void* context);
> >     void reg_encoder_rtp_callback(void* context, rtpCallback* rtpCbk, void* priv);
> > 
> > - logging 
> >     typedef void (logCallback) (char* message, int severity, void* priv);
> >     void reg_decoder_log_callback(void* context, int level, logCallback* logCbk, void* priv);
> >     void reg_encoder_log_callback(void* context, int level, logCallback* logCbk, void* priv);
> > 
> > - configuration management
> >     void config_dec_get_first(void* context, char** key, void** value, int* type, char** name, char** description);
> >     void config_dec_get_next(void* context, char** key, void** value, int* type, char** name, char** description);
> >     int config_dec_set_value(void* context, char* key, void* value);
> 
=== message truncated ===
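A sketch of how the cursor-style config_dec_get_first/config_dec_get_next pair quoted above might be used. Since the message is truncated before the details, the context layout and the end-of-list convention (*key set to NULL) are assumptions made for illustration.

```c
#include <stddef.h>

/* Assumed setting descriptor and decoder context; the real plugin
 * defines these internally. */
typedef struct {
    const char *key;
    const char *name;
} cfg_entry;

typedef struct {
    const cfg_entry *entries;
    size_t count, cursor;
} dec_context;

static void config_dec_get_next(void *context, char **key, void **value,
                                int *type, char **name, char **description)
{
    dec_context *ctx = context;
    if (ctx->cursor >= ctx->count) {
        *key = NULL;                      /* assumed end-of-list marker */
        return;
    }
    const cfg_entry *e = &ctx->entries[ctx->cursor++];
    *key = (char *)e->key;
    *value = NULL;                        /* current value omitted here */
    *type = 0;
    *name = (char *)e->name;
    *description = NULL;
}

static void config_dec_get_first(void *context, char **key, void **value,
                                 int *type, char **name, char **description)
{
    ((dec_context *)context)->cursor = 0; /* rewind the cursor */
    config_dec_get_next(context, key, value, type, name, description);
}

/* Typical caller loop: enumerate all settings of a decoder context,
 * e.g. to build a configuration dialog in the GUI. */
static size_t count_settings(dec_context *ctx)
{
    char *key, *name, *desc;
    void *value;
    int type;
    size_t n = 0;
    for (config_dec_get_first(ctx, &key, &value, &type, &name, &desc);
         key != NULL;
         config_dec_get_next(ctx, &key, &value, &type, &name, &desc))
        n++;
    return n;
}
```
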


                
_______________________________________________
Wengophone-devel mailing list
[email protected]
http://dev.openwengo.com/mailman/listinfo/wengophone-devel
