On Tue, 20 Jan 2015, Julius Friedman wrote:

> Given a response from the server like the one I indicated previously, the
> client receives the message and begins to parse it, reading one byte at a
> time; based on each byte read from the socket, it determines what action
> to take.

> It seems there are quite a few points of failure, but I will outline my
> 'issue' below:

> There are two interesting and reliable cases in which this logic can be
> observed:

> 1) When the server sends back an encapsulated RTP or RTCP packet before
> the 'PLAY' response is received and the 'return_on_interleaved_data'
> check is false, an attempt to skip the RTSP packet is made,

This part seems quite correct (it sounds pretty much like it is by design - also, why is the server sending an inline packet before the reply to PLAY?)

> since this is not a framed packet, the length is incorrectly read by
> ffmpeg

Umm, what? "not framed"? What kind of packet is this? If it is an inline packet escaped with $, it should have such framing and have the length prepended, right? Is this not the case?

Here you lost me. If the server sends an inline packet before the response, this packet should be sent completely before the response comes, and it should be framed correctly as any other interleaved packet.

Can you provide a packet trace of this part or other exact description of what the server sends? Currently, to me, this sounds like you're saying the server is sending incorrect/nonframed/random data - and I don't believe that's what you're trying to say.
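For reference, a correctly framed interleaved packet (RFC 2326, section 10.12) starts with '$', a one-byte channel id, and a 16-bit big-endian length. A minimal sketch of that expected header layout - purely illustrative, not the libavformat code:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch of RFC 2326 section 10.12 interleaved framing;
 * this is NOT the libavformat implementation. An interleaved packet
 * is: '$' (0x24), a one-byte channel id, a 16-bit big-endian length,
 * then exactly that many bytes of RTP/RTCP payload. */
typedef struct {
    uint8_t  channel;
    uint16_t length;
} InterleavedHeader;

/* Returns 1 and fills *hdr if buf begins with a valid interleaved
 * frame header, 0 otherwise (e.g. a plain-text RTSP reply line). */
static int parse_interleaved_header(const uint8_t *buf, size_t len,
                                    InterleavedHeader *hdr)
{
    if (len < 4 || buf[0] != '$')
        return 0;
    hdr->channel = buf[1];
    hdr->length  = (uint16_t)((buf[2] << 8) | buf[3]);
    return 1;
}
```

So if the server really is sending '$'-escaped data without this 4-byte prefix, it is out of spec, which is why a trace would help.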

> 2) At any point during interleaved communication when a 'tcp
> re-transmission occurs'

Umm, what? A tcp retransmission shouldn't be noticeable on the app layer level - whether a retransmission occurred or not shouldn't matter for the libavformat code.

The tcp layer makes sure that the data received on the socket level is identical, in the same order, as the peer sent it. So why does the retransmission matter?

> What I also observe during this time is that the source sending to
> ffmpeg happens to re-transmit data more frequently than to my client,
> which I can only assume is due to the way the sends and receives are
> happening.

> I notice this seems to occur a lot with interleaved 'RTSP', and one
> possible reason for this seems to be as follows:

If the receiver (avconv/ffmpeg/avplay/ffplay/whatever) blocks while reading from the source when it has buffered enough, it can indeed cause the peer to block, trying to retransmit the same packet a few times until the receiver calls av_read_frame again. (With avplay, you can avoid this by passing the -infbuf option, and ffplay has got a tweak/hack for enabling this automatically.)

> time as reading with ff_rtsp_read_reply

> Hence why I was talking about the polling which comes from 'rtsp.c' at
> udp_read_packet, among other places (where it probably should have been
> defined in a way which allowed its logic to be re-used without being
> re-defined, but nonetheless). What I meant is that functions like

> ff_rtp_send_rtcp_feedback

> don't poll for write. The underlying result is that whatever is using
> ffmpeg, e.g. VLC, starts to think it can't write to the rtsp socket
> because the socket is latent or failed, when in fact the socket is just
> being used by ffmpeg already for another send operation; the 'pts_delay'
> becomes increased, is never reset lower, and additionally may take a
> long time to time out.

Umm, what is 'pts_delay'? I can't find such a word anywhere in the source.
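For what it's worth, the writability check being asked for - polling the descriptor for POLLOUT with a timeout instead of blocking in a send while a read holds the socket - would look roughly like this (a hypothetical helper, not anything that exists in libavformat):

```c
#include <poll.h>

/* Hypothetical helper, NOT part of libavformat: check whether fd can
 * accept a write within timeout_ms milliseconds. Returns 1 if the
 * descriptor is writable, 0 on timeout or error. */
static int fd_writable(int fd, int timeout_ms)
{
    struct pollfd p = { .fd = fd, .events = POLLOUT, .revents = 0 };
    if (poll(&p, 1, timeout_ms) <= 0)
        return 0;               /* timed out or poll failed */
    return (p.revents & POLLOUT) != 0;
}
```

But note that a check like this only tells you the kernel send buffer has room; it says nothing about whether another thread is concurrently using the descriptor.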

> This can resolve itself with time, as with any connection issue;
> however, the problem is that the timeout is never adjusted again when
> writing to decrease the back-off which occurred when reading timed out.
> This subsequently causes the library to react to situations where there
> is no legitimate data from a sender for a small period of time with a
> large poll delay, which will cause latency if no data is received
> during the adjusted timeout and could cause the connection to time out
> because:

> 1) RTSP data can't be sent (GET_PARAMETER) because a `read` is
> already occurring
>
> 2) RTCP data can't be sent because a `read` is already occurring

> This `read` is from either an outbound RTSP request, an incoming RTSP
> message, or interleaved RTP data on the same file descriptor, and is
> not connection related, but there is no way to tell without first
> determining if there is an outbound connection or if the socket can be
> written to with 0 bytes of data.

> The bottom line is that a server can reliably cause the rtsp client to
> enter a state where it consumes more data than it should and never
> returns control until the underlying connection is aborted, and if a
> server uses the code to process messages then it can also be exploited
> via the message problems cited above.

I don't really follow your explanation here, but I do agree that such a deadlock could theoretically happen if the server is blocked trying to send data to the client (and not reading on its incoming connection), while the client is blocked trying to send rtcp feedback to the server. However, since the rtcp feedback data usually is quite small and infrequent, it should in most cases fit into the tcp send buffers and not actually block the client sending it.
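If that theoretical case were a concern, one mitigation would be to make the feedback send non-blocking, so a full send buffer drops the RTCP feedback packet instead of deadlocking. A rough POSIX sketch under that assumption - again, not the existing code:

```c
#include <sys/socket.h>
#include <errno.h>

/* Hypothetical sketch, NOT libavformat code: try to send RTCP
 * feedback without blocking. If the TCP send buffer is full (the peer
 * is not draining the connection), skip this feedback packet rather
 * than risk both ends blocking in send(). Returns bytes sent, 0 if
 * the packet was skipped, -1 on a real error. */
static long send_feedback_nonblocking(int fd, const void *buf, size_t len)
{
    ssize_t n = send(fd, buf, len, MSG_DONTWAIT);
    if (n < 0) {
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return 0;   /* buffer full: drop feedback, no deadlock */
        return -1;
    }
    return (long)n;
}
```

Dropping the feedback is acceptable here because RTCP receiver reports are periodic; the next report carries the same state.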

> The re-transmission issue is more or less a by-product of the above,
> IMHO, but I would be glad to hear what you think anyway.

> So in short, I guess the problem can be simplified to:
>
> 1) "RTSP parsing logic is incorrect when '$' appears"
> and
> 2) "RtspClient does not properly share resources concurrently"
>
> but I'm not sure that states the seriousness of the issue in toto;
> hopefully I have provided enough information.

I really don't understand the issues you are trying to explain. To make things simpler, I think it would be best if you'd try to explain them separately one at a time - starting with the allegedly incorrect handling of '$' including a packet dump or some trace of what is happening.

// Martin
_______________________________________________
libav-devel mailing list
[email protected]
https://lists.libav.org/mailman/listinfo/libav-devel
