Hi

I tried revision 1580 and it doesn't crash like the later ones. It does keep consuming more memory until it hits the limit, but it doesn't stop working all of a sudden the way the later revisions do. I think this issue started in revision 1582.
Thanks


Daniel Daley wrote:
Hi,

Just wanted to write in to let you know that none of my testing with the timeouts worked. I still have to leave the keep-alive code disabled, or my load jumps from 1 to over 4 in a short period of time until eventually the server deadlocks without an exception or anything. It just quits logging, but the load stays at or above 4 until I issue a kill -9 to the process. I do think it has to do with it trying to disconnect hung connections that just won't go away with a normal disconnect() call, but as to why, I'm unsure.
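
For illustration only (this is not Red5's code; the executor, the timeout value and the Conn interface below are made-up placeholders), a disconnect that may never return can at least be bounded with a timeout so a hung connection cannot wedge the calling thread forever:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    // Sketch only: run the possibly-hanging close() on a worker thread
    // and wait for it with a deadline instead of waiting forever.
    public class BoundedClose {

        // Stand-in for whatever object exposes the blocking close()/disconnect().
        interface Conn { void close() throws Exception; }

        private static final ExecutorService closer = Executors.newCachedThreadPool();

        static void closeWithTimeout(final Conn conn, long timeoutMs) {
            Future<?> f = closer.submit(new Runnable() {
                public void run() {
                    try { conn.close(); } catch (Exception ignored) { /* best effort */ }
                }
            });
            try {
                f.get(timeoutMs, TimeUnit.MILLISECONDS);  // wait, but not forever
            } catch (TimeoutException e) {
                f.cancel(true);                           // give up; interrupt if possible
            } catch (Exception ignored) {
                // close() itself failed; the connection is unusable either way
            }
        }
    }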

Thanks,

--Dan--

On Dec 8, 2006, at 8:48 PM, Daniel Daley wrote:

Sorry, no, not really 2 times higher because of the job; rather, I think something is happening over time that causes the keep-alive job to start having difficulty disconnecting ghost connections. It's only after some time, once several hundred people have connected and disconnected, that the problem shows up. I have lowered the timeout to the value that seemed to help Adam's situation and am monitoring to see what happens. As far as the rest of the issue goes, I may end up having to get ambitious and write a shared memory adapter using JNI and C. From what I can tell, the MappedByteBuffer class in Java is broken to the point that once you map data there is no way to unmap it, as the JVM garbage collector does not free it.
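
For reference, a minimal sketch of that limitation (the file path argument is just a placeholder): FileChannel.map() hands back a MappedByteBuffer, but the public API offers no matching unmap, so the mapping is only released when the buffer object itself is eventually garbage collected.

    import java.io.File;
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    // Sketch of the limitation described above: there is no public unmap().
    public class MapDemo {
        public static void main(String[] args) throws Exception {
            File f = new File(args[0]);                  // placeholder: any existing file
            RandomAccessFile raf = new RandomAccessFile(f, "r");
            FileChannel ch = raf.getChannel();
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());

            // ... read keyframe data out of 'buf' ...

            ch.close();    // closing the channel does NOT release the mapping
            raf.close();
            buf = null;    // the most you can do portably: drop the reference and
                           // wait for the GC to finalize the buffer (it often doesn't, soon)
        }
    }
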
--Dan--

On Dec 8, 2006, at 8:21 PM, Steven Gong wrote:

Hi Dan,
Let me restate the problem in case I misunderstood it.

The problem is that the server load gets heavier (about 2 times higher) because of the keep-alive job in RTMPConnection. Is that what you mean?

On 12/9/06, *Daniel Daley* <[EMAIL PROTECTED]> wrote:

Hi Steven,
    Sorry to just jump in on this thread, but I thought it was
    probably too similar to start a new one. I've been working all
    day on tracking down a performance problem on my server. We have
    recently started receiving more traffic to our Red5 box and it
    has caused the load to skyrocket. After some investigation I
    think there are multiple problems.
    As we approach around 70 concurrent VOD streams (full audio and
    video) the load on the server spikes from around 2.00 to 4-5. I
    disabled the addScheduledJob and removeScheduledJob calls inside
    RTMPConnection that were just added, and the load stays down, at
    least until the connections build up. I saw a condition like
    this once when I was trying to create my own ghost connection
    remover. After a while of running, when the close() method was
    called on certain ghost connections, the call would never return.
    As the job was a repeating one, it would hang again each time it
    ran, until the job scheduler was no longer executing jobs. I'm
    not sure that's what's going on in this case, but the symptoms
    seem very similar.
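
    A minimal illustration of that failure mode (this is not Red5's
    actual scheduler, just a stand-in single-threaded one): once the
    repeating job blocks inside a close() that never returns, nothing
    else gets scheduled again.

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        // Sketch only: a repeating "ghost remover" whose first run blocks
        // forever, starving every later run of the scheduler.
        public class StuckScheduler {
            public static void main(String[] args) throws Exception {
                final ScheduledExecutorService scheduler =
                        Executors.newSingleThreadScheduledExecutor();
                // Stands in for a close() that never returns.
                final CountDownLatch never = new CountDownLatch(1);

                scheduler.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        System.out.println("ghost remover running");
                        try {
                            never.await();   // simulate close() hanging on a dead connection
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                }, 0, 1, TimeUnit.SECONDS);

                Thread.sleep(5000);          // prints once, then the job never fires again
                scheduler.shutdownNow();     // only an interrupt gets the worker back
            }
        }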

    Even with the ghost remover disabled, a load of 2.00 is still
    pretty high. I'm having some luck implementing caching of the
    keyframe data, but the results are marginal. It's times like this
    that I wish the memory mapping system in Java wasn't messed up.

    Thanks,

    --Dan--


    On Dec 8, 2006, at 7:16 PM, Steven Gong wrote:

    Adam,
    Thanks for your figures. I'll set these as the default values if we
    don't find any problems with them.

    On 12/8/06, *Adam* <[EMAIL PROTECTED]> wrote:

        Hi Julian,
        I've played around with the values rtmptConnection and
        rtmpMinaConnection in conf/red5-core.xml.
        When I used 1000 for both, red5 kept running the whole time,
        but at some point the streaming stopped (no moving picture).
        The last test I did was with rtmpMinaConnection = 10000 and
        rtmptConnection = 5000 with more than 80 clients.
        It worked fine for more than 12h until my wife closed the
        notebook cover this morning, arghh!
        I'll have to rerun this test to be sure, but it looks like
        adjusting these values can fix the problems for this
        particular test.
        I guess it's this (ghost) connection problem. With
        rtmptConnection and rtmpMinaConnection you can set the
        keep-alive time: the server pings the clients, and they answer
        with a pong including a timestamp. If that timestamp is older
        than the value in rtmpMinaConnection/rtmptConnection, the
        connection gets closed.
        If this keep-alive value is too big, you probably end up with
        too many ghost connections (before they get closed), which is
        what causes the halt.
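
        A rough sketch of that check (this is not Red5's actual code;
        the Connection class and its fields here are invented just to
        illustrate the ping/pong timeout idea):

            import java.util.ArrayList;
            import java.util.List;

            // Sketch only: close any connection whose last pong is older
            // than the configured keep-alive limit; ping the rest again.
            public class KeepAliveSketch {
                static class Connection {
                    long lastPongReceived = System.currentTimeMillis();
                    void ping() { /* send a ping to the client */ }
                    void close() { System.out.println("closing ghost connection"); }
                }

                // e.g. the rtmpMinaConnection / rtmptConnection value, in ms
                static final long MAX_INACTIVITY_MS = 10000;

                static void checkConnections(List<Connection> connections) {
                    long now = System.currentTimeMillis();
                    for (Connection c : connections) {
                        if (now - c.lastPongReceived > MAX_INACTIVITY_MS) {
                            c.close();   // no pong within the window: ghost
                        } else {
                            c.ping();    // still alive, ping it again
                        }
                    }
                }

                public static void main(String[] args) {
                    List<Connection> conns = new ArrayList<Connection>();
                    conns.add(new Connection());
                    checkConnections(conns);   // the pong is fresh, nothing is closed
                }
            }
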
        I'll keep on updating you with the results.
        greets
        -Adam-
>I have the same problem.
        >Steven Gong wrote:
        >>>
        >>>
        >>> On 12/7/06, *Adam* <[EMAIL PROTECTED]> wrote:
        >>>
        >>> Hi Chris,
        >>
        >> I've rerun the test with revision 1583.
        >> With fewer than 10 test clients it worked fine for more
        >> than 12 hours.
        >> Then I increased to 20 test clients and after a few
        >> minutes red5 froze as before!!
        >>
        >> I've checked the connections with netstat -an while the
        >> server was running and while it was frozen. The connection
        >> count was fine! Even when red5 was frozen, connections were
        >> still being established and closed.
        >>
        >> Also new: the broadcaster receives an unpublish.success.
        >> At least I hadn't noticed it before.
        >>
        >> That's a vital issue and a show stopper. What can I do to
        >> help find the problem?
        >>
        >>
        >> Please report it to JIRA first and describe the steps to
        >> reproduce it in as much detail as possible. Thanks.
        >>
        >> greets
        >> -Adam-
        >>
        >> >Thanks for the quick response.
        >> >I will do so and reply with the results to this list!!
        >> >>
        >> >greets
        >> >-Adam-
        >> >>
        >> >>On 12/6/06, Chris <[EMAIL PROTECTED]> wrote:
        >> >>>
        >> >>Hi Adam,
        >> >>Steven recently fixed a problem with ghost connections
        not being
        >> >removed automatically. Perhaps this problem that you are
        >> experiencing is related. See this thread for more info on it:
        >> http://osflash.org/pipermail/red5devs_osflash.org/2006-November/002566.html
        >> >>Could you try your test with r1582 or later? You will
        >> >>more than likely have to build it from the source code
        >> >>in the trunk.
        >> >>Thanks very much for reporting the bug.
        >> >>-Chris
        >> >>On 12/6/06, Adam <[EMAIL PROTECTED]> wrote:
        >> >>>
        >> >>>
        >> >>> Oops,
        >> >>> forgot again!
        >> >>> I am using 0.6rc1 Revision 1579 on Windows XP
        >> >>>
        >> >>>
        >> >>>
        >> >>> >On 12/6/06, Steven <[EMAIL PROTECTED]> wrote:
        >> >>> >Which version?
        >> >>> >On 12/6/06, Adam <[EMAIL PROTECTED]> wrote:
        >> >>> >>>
        >> >>> >>> Hi list,
        >> >>> >>>
        >> >>> >>> I tried to check my application with respect to its
        >> >>> >>> stability. So I ran the following code a few times
        >> >>> >>> (connecting or disconnecting every 10 sec) on each of
        >> >>> >>> 3 machines in my LAN.
        >> >>> >>> setInterval(reconnect, 10000);
        >> >>> >>> function reconnect() {
        >> >>> >>>     connToggle = !connToggle;
        >> >>> >>>     if (connToggle) {
        >> >>> >>>         trace("disconnect");
        >> >>> >>>         my_ns.close();
        >> >>> >>>         my_nc.close();
        >> >>> >>>     } else {
        >> >>> >>>         trace("connect");
        >> >>> >>>         my_nc.connect("rtmp://localhost/fitcDemo",
        >> >>> >>>                       roomName,
        >> >>> >>>                       checkId,
        >> >>> >>>                       loginName,
        >> >>> >>>                       broadcaster);
        >> >>> >>>         remote_so.connect(my_nc);
        >> >>> >>>         my_ns = new NetStream(my_nc);
        >> >>> >>>         vid_stream.attachVideo(my_ns);
        >> >>> >>>         my_ns.play(roomName);
        >> >>> >>>     }
        >> >>> >>> }
        >> >>> >>> After a while (sometimes after tens of minutes,
        >> >>> >>> other times after a few hours) the server halts.
        >> >>> >>> Even my timer that's frequently executed to kill
        >> >>> >>> these ghost connections isn't executed anymore.
        >> >>> >>> It's a complete halt. No logs, nothing.
        >> >>> >>> The only thing that still works is that the client
        >> >>> >>> receives the NetConnection close event on
        >> >>> >>> disconnecting. Apart from this, nothing!!
        >> >>> >>> I noticed this with my own and the sample applications.
        >> >>> >>> After restarting the server it works again. But after
        >> >>> >>> every halt and restart of the system this halt happens
        >> >>> >>> earlier. At the end of my tests (2 days) the halt
        >> >>> >>> happened just a few minutes after restarting and
        >> >>> >>> connecting to red5.
        >> >>> >>>
        >> >>> >>> Is this known? Is there a solution or workaround?
        >> >>> >>>
        >> >>> >>> please reply
        >> >>> >>>
        >> >>> >>> greets
        >> >>> >>> -Adam-






-- I cannot tell why this heart languishes in silence. It is for
    small needs it never asks, or knows or remembers.  -- Tagore

    Best Regards
    Steven Gong







--
I cannot tell why this heart languishes in silence. It is for small needs it never asks, or knows or remembers. -- Tagore

Best Regards
Steven Gong









_______________________________________________
Red5 mailing list
[email protected]
http://osflash.org/mailman/listinfo/red5_osflash.org
