Re: [Live-devel] how to make latency as low as possible

2017-01-17 Thread Ross Finlayson
> Continuing question 1, I see that deliverFrame() has two callers: 
> doGetNextFrame(), which is called by the sink object, and deliverFrame0(), which 
> is called by the event loop when signalNewFrameData() signals an event. In my 
> case, I never call signalNewFrameData(), so deliverFrame0() is never called.

Yes, that was the wrong thing for you to do - because it means that if 
“doGetNextFrame()” happens to get called when no frame is immediately available to 
be delivered, then “doGetNextFrame()” will return immediately, but no delivery will 
ever get done.  (See “DeviceSource.cpp”, line 92.)

You are doing event-driven programming; you need to understand this.  If no 
frame is immediately available to be delivered, then you should not ‘block’, waiting 
for a frame to become available.  Instead, you must arrange for an event to get 
handled when a frame later becomes available.  One common way to do this is to 
have a separate thread for your encoder; this thread would call no LIVE555 
library function *except* “triggerEvent()”.
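
A rough sketch of that pattern, modelled on the comments in “DeviceSource.cpp” 
(“Gm813xSource” is the poster’s class; “frameIsAvailable()”, 
“waitForNextEncodedNALUnit()”, and “encoderThreadLoop()” are hypothetical names; 
“eventTriggerId” is the static trigger id created once with “createEventTrigger()”):

// Registered once, in the constructor (as in "DeviceSource.cpp"):
//   if (eventTriggerId == 0)
//     eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);

void Gm813xSource::doGetNextFrame() {
    // If a frame happens to be ready, deliver it now; otherwise just return,
    // WITHOUT blocking.  Delivery is completed later, when the encoder thread
    // triggers the event and the event loop calls deliverFrame0().
    if (frameIsAvailable()) deliverFrame();   // frameIsAvailable(): hypothetical, non-blocking check
}

void Gm813xSource::deliverFrame0(void* clientData) {
    // Static callback, run by the event loop when the trigger fires:
    ((Gm813xSource*)clientData)->deliverFrame();
}

// Separate encoder/capture thread: it may block on the hardware, but the ONLY
// LIVE555 function it ever calls is triggerEvent():
void encoderThreadLoop(Gm813xSource* source) {
    for (;;) {
        waitForNextEncodedNALUnit();          // hypothetical blocking call into the encoder SDK
        source->envir().taskScheduler().triggerEvent(Gm813xSource::eventTriggerId, source);
    }
}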

This is all explained very clearly in our FAQ, and in the comments in 
“DeviceSource.cpp”.  Therefore - to avoid bothering the hundreds of mailing 
list subscribers any more - this will be my (and your) last posting on this 
topic.


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/


___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel


Re: [Live-devel] how to make latency as low as possible

2017-01-17 Thread x...@vscenevideo.com
Continuing question 1, I see that deliverFrame() has two callers: doGetNextFrame(), 
which is called by the sink object, and deliverFrame0(), which is called by the 
event loop when signalNewFrameData() signals an event. In my case, I never call 
signalNewFrameData(), so deliverFrame0() is never called. This means deliverFrame() 
is only ever called via doGetNextFrame(), from the sink-side object. Please correct 
me if I've read the code wrong. Thanks.



Xin
Mobile: +86 186-1245-1524
Email: x...@vscenevideo.com
QQ: 156678745
 
From: Ross Finlayson
Date: 2017-01-17 16:31
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] how to make latency as low as possible
> My question is this. 
> 1. How does the sink object decide when to fetch data from the source? 
 
It doesn’t.  Instead, your video source object (if it’s programmed correctly) 
decides when to deliver a new frame of data (by arranging for “deliverFrame()” 
to get called - i.e., in handling an event).
 
> 2. What is the purpose of fPresentationTime?
 
To tell the receiver when (relatively) each frame should be rendered to the 
viewer.
 
> 3. Should I use my camera's capture timestamp for fPresentationTime?
 
Yes, this would be best (if this timestamp is accurate).
 
 
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
 
 
___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel


Re: [Live-devel] how to make latency as low as possible

2017-01-17 Thread Ross Finlayson
> My question is this. 
> 1. How does the sink object decide when to fetch data from the source? 

It doesn’t.  Instead, your video source object (if it’s programmed correctly) 
decides when to deliver a new frame of data (by arranging for “deliverFrame()” 
to get called - i.e., in handling an event).

> 2. What is the purpose of fPresentationTime?

To tell the receiver when (relatively) each frame should be rendered to the 
viewer.

> 3. Should I use my camera's capture timestamp for fPresentationTime?

Yes, this would be best (if this timestamp is accurate).
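
A hedged sketch of one way to do that, assuming the camera reports its capture 
timestamp in microseconds and that “setPresentationTime()” is a hypothetical helper 
on the poster’s source class: the camera clock is anchored to wall-clock time once, 
so successive frames keep the camera’s relative spacing.

#include <stdint.h>
#include <sys/time.h>

void Gm813xSource::setPresentationTime(uint64_t captureUsec) {
    // Compute the camera-clock -> wall-clock offset once, so that successive
    // frames keep the camera's (accurate) relative spacing:
    static bool haveOffset = false;
    static int64_t offsetUsec = 0;
    if (!haveOffset) {
        struct timeval now;
        gettimeofday(&now, NULL);
        offsetUsec = (int64_t)now.tv_sec * 1000000 + now.tv_usec - (int64_t)captureUsec;
        haveOffset = true;
    }
    int64_t t = (int64_t)captureUsec + offsetUsec;
    fPresentationTime.tv_sec  = t / 1000000;
    fPresentationTime.tv_usec = t % 1000000;
}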


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/


___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel


Re: [Live-devel] how to make latency as low as possible

2017-01-16 Thread x...@vscenevideo.com
hi Ross,

I changed the code according to your advice, delivering only one NAL unit per 
doGetNextFrame() call, and now the app runs. Thank you!

However, there's something I don't understand. In our camera, the SPS, PPS, and 
I-frame are generated at the same time. If the sink object copied one NAL unit at a 
time, at the pace of the frame interval (40 ms for 25 fps), then data would obviously 
back up. In fact, the sink object copies NAL units at intervals of less than 40 ms. 
My log below shows this interval fluctuates considerably (the printed timestamp 
is taken with gettimeofday() right before each memmove()).

size:10   ts:2798817
size:4   ts:2798831
size:126288   ts:2798859
size:32513   ts:2798890
size:42666   ts:2798905
size:31548   ts:2798928
size:44705   ts:2798977
size:34214   ts:2799016
size:46538   ts:2799057
size:35807   ts:2799096
size:49680   ts:2799138
size:36974   ts:2799176

My question is this. 
1. How does the sink object decide when to fetch data from the source? 
2. What is the purpose of fPresentationTime?
3. Should I use my camera's capture timestamp for fPresentationTime? (I tried 
this; then the video sometimes jitters when played with VLC, but it's OK when 
played with MPlayer.)

Thank you!


Xin
Mobile: +86 186-1245-1524
Email: x...@vscenevideo.com
QQ: 156678745
 
From: Ross Finlayson
Date: 2017-01-16 20:12
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] how to make latency as low as possible
You need to copy data to “fTo” and call “FramedSource::afterGetting(this)” only 
*once*, for each NAL unit that you deliver.  (Your code seems to be doing this 
multiple times for each delivery; this is wrong.)
 
In other words, each call to “doGetNextFrame()” must (eventually) lead to the 
delivery of exactly one H.264 NAL unit, followed by exactly one call to 
“FramedSource::afterGetting(this)”.
 
 
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
 
 
___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel


Re: [Live-devel] how to make latency as low as possible

2017-01-16 Thread Ross Finlayson
You need to copy data to “fTo” and call “FramedSource::afterGetting(this)” only 
*once*, for each NAL unit that you deliver.  (Your code seems to be doing this 
multiple times for each delivery; this is wrong.)

In other words, each call to “doGetNextFrame()” must (eventually) lead to the 
delivery of exactly one H.264 NAL unit, followed by exactly one call to 
“FramedSource::afterGetting(this)”.
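
As an illustration (not code from the library), a delivery routine that obeys this 
rule, assuming a hypothetical “pendingNALUnits” queue (a std::deque<std::vector<uint8_t>>) 
that the capture side fills with individual NAL units, start codes already stripped:

#include <cstring>
#include <deque>
#include <stdint.h>
#include <sys/time.h>
#include <vector>

void Gm813xSource::deliverFrame() {
    if (!isCurrentlyAwaitingData() || pendingNALUnits.empty()) return;

    std::vector<uint8_t>& nal = pendingNALUnits.front();
    if (nal.size() > fMaxSize) {
        fFrameSize = fMaxSize;
        fNumTruncatedBytes = nal.size() - fMaxSize;
    } else {
        fFrameSize = nal.size();
        fNumTruncatedBytes = 0;
    }
    memmove(fTo, nal.data(), fFrameSize);
    gettimeofday(&fPresentationTime, NULL);
    pendingNALUnits.pop_front();

    // Exactly one NAL unit has been copied to fTo, so exactly one call here:
    FramedSource::afterGetting(this);
}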


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/


___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel


Re: [Live-devel] how to make latency as low as possible

2017-01-16 Thread x...@vscenevideo.com
nt.so.1
#5  0x76dfeefc in BasicTaskScheduler::SingleStep(unsigned int) ()
   from /mnt/nfs/target/usr/lib/libBasicUsageEnvironment.so.1
#6  0x76dfe224 in BasicTaskScheduler0::doEventLoop(char volatile*) ()
   from /mnt/nfs/target/usr/lib/libBasicUsageEnvironment.so.1
#7  0xbc44 in main (argc=1, argv=0x7efcfdc4) at ../rtspd_main.cpp:74

Again, thanks for your patience.



Xin
Mobile: +86 186-1245-1524
Email: x...@vscenevideo.com
QQ: 156678745
 
From: Ross Finlayson
Date: 2017-01-13 23:52
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] how to make latency as low as possible
> As for the start codes, I have eliminated them all. My camera's working pattern 
> is like this: when encoding a key frame, it outputs SC+SPS+SC+PPS+SC+Frame; 
> when encoding a non-key frame, it outputs SC+Frame. So, after I eliminated the 
> SC, what I send to the sink object is SPS+PPS+FRAME for a key frame, and FRAME 
> alone for a non-key frame.
> 
> Did I miss something here?
 
Yes, you missed the part where I said that:
each ‘frame’ that comes from your input source must be a ***single*** H.264 NAL 
unit, and MUST NOT be prepended by a 0x00 0x00 0x00 0x01 ‘start code’.
 
So, you must deliver (without a prepended ‘start code’ in each case):
SPS, then
PPS, then
FRAME, etc.
 
***provided that*** FRAME is a single NAL unit.  If, instead, FRAME consists of 
multiple ‘slice’ NAL units (this is actually preferred for streaming, because 
it’s much more tolerant of network packet loss), then you must deliver each 
‘slice’ NAL unit individually (again, without a ‘start code’).
 
Also, when testing your server, I suggest first using “openRTSP” 
<http://www.live555.com/openRTSP/> as a client, before using VLC.
 
 
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
 
 
___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel


Re: [Live-devel] how to make latency as low as possible

2017-01-13 Thread Ross Finlayson
> As for the start codes, I have eliminated them all. My camera's working pattern 
> is like this: when encoding a key frame, it outputs SC+SPS+SC+PPS+SC+Frame; 
> when encoding a non-key frame, it outputs SC+Frame. So, after I eliminated the 
> SC, what I send to the sink object is SPS+PPS+FRAME for a key frame, and FRAME 
> alone for a non-key frame.
> 
> Did I miss something here?

Yes, you missed the part where I said that:
each ‘frame’ that comes from your input source must be a ***single*** 
H.264 NAL unit, and MUST NOT be prepended by a 0x00 0x00 0x00 0x01 ‘start code’.

So, you must deliver (without a prepended ‘start code’ in each case):
SPS, then
PPS, then
FRAME, etc.

***provided that*** FRAME is a single NAL unit.  If, instead, FRAME consists of 
multiple ‘slice’ NAL units (this is actually preferred for streaming, because 
it’s much more tolerant of network packet loss), then you must deliver each 
‘slice’ NAL unit individually (again, without a ‘start code’).
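
A sketch of that splitting step (not LIVE555 code): it scans a camera output buffer 
such as SC+SPS+SC+PPS+SC+slice for 3- or 4-byte start codes and returns a 
pointer/length pair for each NAL unit found between them.

#include <cstddef>
#include <stdint.h>
#include <utility>
#include <vector>

std::vector<std::pair<const uint8_t*, size_t> >
splitNALUnits(const uint8_t* buf, size_t len) {
    std::vector<std::pair<const uint8_t*, size_t> > nals;
    size_t i = 0, start = 0;
    bool inNAL = false;
    while (i + 2 < len) {
        // A start code is 00 00 01 or 00 00 00 01:
        if (buf[i] == 0 && buf[i+1] == 0 &&
            (buf[i+2] == 1 || (i + 3 < len && buf[i+2] == 0 && buf[i+3] == 1))) {
            size_t scLen = (buf[i+2] == 1) ? 3 : 4;
            if (inNAL) nals.push_back(std::make_pair(buf + start, i - start));
            i += scLen;
            start = i;
            inNAL = true;
        } else {
            ++i;
        }
    }
    if (inNAL && start < len) nals.push_back(std::make_pair(buf + start, len - start));
    return nals;
}

Each entry of the result (SPS, PPS, then each slice) can then be handed to the 
device source one at a time, without its start code.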

Also, when testing your server, I suggest first using “openRTSP” 
<http://www.live555.com/openRTSP/> as a client, before using VLC.


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/


___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel


Re: [Live-devel] how to make latency as low as possible

2017-01-13 Thread x...@vscenevideo.com
hi Ross,

I followed your suggestion to use “H264VideoStreamDiscreteFramer” instead of 
“H264VideoStreamFramer”. Now the data seems to be consumed right away 
after the "afterGetting" callback, but the VLC player can't set up a session. 

As for the start codes, I have eliminated them all. My camera's working pattern is 
like this: when encoding a key frame, it outputs SC+SPS+SC+PPS+SC+Frame; when 
encoding a non-key frame, it outputs SC+Frame. So, after I eliminated the SC, 
what I send to the sink object is SPS+PPS+FRAME for a key frame, and FRAME alone 
for a non-key frame.

Did I miss something here?



Xin
Mobile: +86 186-1245-1524
Email: x...@vscenevideo.com
QQ: 156678745
 
From: Ross Finlayson
Date: 2017-01-13 01:30
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] how to make latency as low as possible
Your data source class (“FramedSource” subclass) looks mostly OK; however, I 
suspect that your problem is that you are not using the correct ‘framer’ class 
for your H.264 video ‘frames’ (in reality, H.264 NAL units).
 
Look at your implementation of the “createNewStreamSource()” virtual function 
(in your “OnDemandServerMediaSubsession” subclass).  Because your data source - 
from your encoder - is discrete H.264 NAL units (i.e., one at a time), rather 
than a byte stream, you must feed your input source to a 
“H264VideoStreamDiscreteFramer”, *not* a “H264VideoStreamFramer”.
 
Note also that each ‘frame’ that comes from your input source must be a single 
H.264 NAL unit, and MUST NOT be prepended by a 0x00 0x00 0x00 0x01 ‘start code’.
 
 
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
 
 
___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel


Re: [Live-devel] how to make latency as low as possible

2017-01-12 Thread Ross Finlayson
Your data source class (“FramedSource” subclass) looks mostly OK; however, I 
suspect that your problem is that you are not using the correct ‘framer’ class 
for your H.264 video ‘frames’ (in reality, H.264 NAL units).

Look at your implementation of the “createNewStreamSource()” virtual function 
(in your “OnDemandServerMediaSubsession” subclass).  Because your data source - 
from your encoder - is discrete H.264 NAL units (i.e., one at a time), rather 
than a byte stream, you must feed your input source to a 
“H264VideoStreamDiscreteFramer”, *not* a “H264VideoStreamFramer”.
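
A sketch of the two relevant overrides in the poster’s “GM813XServerMediaSubsession” 
(the class and source names follow the earlier posts; “Gm813xSource::createNew()” is 
an assumed factory, and the 8000 kbps estimate simply mirrors the camera’s stated 
bit rate):

#include "liveMedia.hh"

FramedSource* GM813XServerMediaSubsession::createNewStreamSource(
        unsigned /*clientSessionId*/, unsigned& estBitrate) {
    estBitrate = 8000; // kbps; matches the camera's 8 Mbps encode rate
    Gm813xSource* source = Gm813xSource::createNew(envir()); // hypothetical factory
    // Use the *discrete* framer, because the source delivers one NAL unit
    // at a time, with no start codes:
    return H264VideoStreamDiscreteFramer::createNew(envir(), source);
}

RTPSink* GM813XServerMediaSubsession::createNewRTPSink(
        Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic,
        FramedSource* /*inputSource*/) {
    return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}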

Note also that each ‘frame’ that comes from your input source must be a single 
H.264 NAL unit, and MUST NOT be prepended by a 0x00 0x00 0x00 0x01 ‘start code’.


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/


___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel


Re: [Live-devel] how to make latency as low as possible

2017-01-11 Thread x...@vscenevideo.com
hi Ross,

Let me explain how I transfer data from our camera to the live555 library.

I followed the second approach from the instructions on the FAQ page, 
http://www.live555.com/liveMedia/faq.html#liveInput 
namely, writing my own "Gm813xSource" as a subclass of "FramedSource", and my 
own "GM813XServerMediaSubsession" as a subclass of 
"OnDemandServerMediaSubsession".

GM813XSource.cpp is modified from DeviceSource.cpp. Here is the key part of the 
code:

void Gm813xSource::doGetNextFrame()
{
    if (gmPollFrame()) {
        deliverFrame();
    }
}

Boolean Gm813xSource::gmPollFrame(void)
{
    int ret = gm_poll(&poll_fds, 1, 2000);
    if (GM_TIMEOUT == ret) {
        envir() << "gm_poll timeout\n";
        return false;
    }
    if (GM_SUCCESS == ret) {
        return true;
    }
    envir() << "gm_poll error, ret=" << ret << "\n";
    return false;
}

void Gm813xSource::deliverFrame(void)
{
    int ret;
    gm_enc_multi_bitstream_t bs;

    if (!isCurrentlyAwaitingData())
        return; // we're not ready for the data yet

    memset(&bs, 0, sizeof(bs));

    bs.bindfd = main_bindfd;            // poll_fds[i].bindfd;
    bs.bs.bs_buf = frameBuf;            // set buffer pointer
    bs.bs.bs_buf_len = FRAME_BUF_SIZE;  // set buffer length
    bs.bs.mv_buf = 0;                   // not receiving MV data
    bs.bs.mv_buf_len = 0;               // not receiving MV data

    if (bytesInBuf > 0) { // send leftover data
        if (bytesInBuf > fMaxSize) {
            fFrameSize = fMaxSize;
            fNumTruncatedBytes = bytesInBuf - fMaxSize;
            bytesInBuf = fNumTruncatedBytes;
            dataPtr += fFrameSize;
        } else {
            fFrameSize = bytesInBuf;
            bytesInBuf = 0;
        }
        memmove(fTo, dataPtr, fFrameSize);
        FramedSource::afterGetting(this);
    } else { // get a new frame and send it
        if ((ret = gm_recv_multi_bitstreams(&bs, 1)) < 0) {
            printf("Error, gm_recv_multi_bitstreams return value %d\n", ret);
        } else {
            if ((bs.retval < 0) && bs.bindfd) {
                printf("Error receiving bitstream. ret=%d\n", bs.retval);
            } else if (bs.retval == GM_SUCCESS) {
                u_int8_t* newFrameDataStart = (u_int8_t*)bs.bs.bs_buf; //%%% TO BE WRITTEN %%%
                unsigned newFrameSize = bs.bs.bs_len; //%%% TO BE WRITTEN %%%
                bytesInBuf = newFrameSize;
                dataPtr = bs.bs.bs_buf;

                // Deliver the data here:
                if (newFrameSize > fMaxSize) {
                    fFrameSize = fMaxSize;
                    fNumTruncatedBytes = newFrameSize - fMaxSize;
                } else {
                    fFrameSize = newFrameSize;
                }

                bytesInBuf -= fFrameSize;
                dataPtr += fFrameSize;

                gettimeofday(&fPresentationTime, NULL);
                memmove(fTo, newFrameDataStart, fFrameSize);

                // After delivering the data, inform the reader that it is now available:
                FramedSource::afterGetting(this);
            }
        }
    }
}

As you can see, I use "frameBuf" to hold encoded bytes from the encoder and 
then move the data to fTo as soon as possible; finally, after each copy I call 
the afterGetting callback. For "frameBuf", I allocated 512 KB, which is big 
enough to hold the largest I-frame. If the sink-side object's afterGetting 
callback then delivers the frame data to the network immediately, the latency 
introduced by frameBuf should be only one frame, namely 33 ms (of course the 
capture buffer and the buffers inside the hardware encoder aren't counted 
here, but they shouldn't add too much; I'm currently checking this with help 
from the chip FAE). 

Another point of confusion arose when I observed fMaxSize. The value of this 
variable decreases by the encoded frame size each time I copy a frame and call 
afterGetting, but it doesn't return to the maximum value (which seems to be 
150 KB, no matter how I try to override it in createNewRTPSink) on the next call. 
It only goes back up after a new frame's size exceeds fMaxSize. My question is: 
does this mean that the data copied to fTo is not consumed right away, thus 
introducing an extra frame of buffering here?

It's a long post; thanks for your patience in reading this far.



Xin Liu
VsceneVideo Co. Ltd.
Mobile:+86 186 1245 1524
Email:x...@vscenevideo.com
From: Ross Finlayson
Date: 2017-01-11 20:04
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] how to make latency as low as possible
Our server software - by itself - contributes no significant latency.  I 
suspect that most of your latency comes from the interface between your camera 
and our server.  (You didn’t say how you are feeding your camera’s output to 
our server; but that's where I would look first.)

Re: [Live-devel] how to make latency as low as possible

2017-01-11 Thread Gordon Apple
I’m currently a lurker on this list, but hope to eventually include RTSP
streaming in our educational presentation Mac app, iQPresenter. Latency is a
major factor. We currently produce movies and create HLS variants and segments
using Compressor. Talk about latency: "HLS" is a misnomer. Round-trip
latency could be as much as a minute with HLS. One second might be tolerable,
but I would hope for less. So we are definitely interested in what latency
can reasonably be achieved using RTSP.


On 1/11/17 5:53 AM, "x...@vscenevideo.com"  wrote:

> hi there,
> 
> I'm using live555 as our RTSP server on a live camera, which captures and
> encodes video into an H.264 stream. Everything works fine, except that the latency
> is a bit high. I tried different players, including VLC, ffplay, and mplayer. The
> lowest latency is when I pass -nocache to mplayer, but it's still over 1
> second. 
> 
> How should I lower the latency? Any suggestion would be appreciated.
> 
> Btw, my camera's resolution is 2048x1520, the frame rate is 25 fps, the bit rate
> is 8 Mbps, and the encoder is H.264 without B-frames.
> 
> Thanks,
> Xin
> 
> Xin Liu
> VsceneVideo Co. Ltd.
> 
> Mobile:+86 186 1245 1524
> 
> Email:x...@vscenevideo.com

-- 
G. Gordon Apple, PhD
Ed4U
118 Chelle Ln
Little Rock AR 72223

501-868-7637
310-766-3900 (cell)
800-579-Ed4U (3348)
425-732-0300 (eFax)
ggap...@ed4u.tv
www.ed4u.tv


___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel


Re: [Live-devel] how to make latency as low as possible

2017-01-11 Thread Ross Finlayson
Our server software - by itself - contributes no significant latency.  I 
suspect that most of your latency comes from the interface between your camera 
and our server.  (You didn’t say how you are feeding your camera’s output to 
our server; but that’s where I would look first.)

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/


___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel