Matthew,

Firstly, thanks for the help :-)

Are you watching for an EV_WRITE event while you have nothing to write?

Nope -- my static Write() method is only called by libEvent, and it (the Write() method) doesn't loop. In fact, here it is in its entirety:

void CSocketTestModule::Write( int iDescriptor, short sEvent, void *pArg )
{
    // pArg is the struct event ...

    struct event *pEvent = reinterpret_cast< struct event * > ( pArg );

    // write as much data as we can to this client ...

    CClient *pClient = mapOClients[ iDescriptor ];

    if ( pClient )
    {
        iWrites ++;

        int iSendLimit;

        if ( iSize - pClient->GetBufferPosition() > pClient->GetLowWatermark() )
        {
            iSendLimit = pClient->GetLowWatermark();
        }
        else
        {
            iSendLimit = iSize - pClient->GetBufferPosition();
        }

        int iBytesSent = send( pClient->GetDescriptor(),
                               pClient->GetBufferPosition() + pBuffer,
                               iSendLimit,
                               0 );

        if ( -1 != iBytesSent )
        {
            iTotalBytesWritten += iBytesSent;
            pClient->IncBufferPosition( iBytesSent );
        }

        // if the client is still around and we've more data to send, add the event again

        if ( iSize > pClient->GetBufferPosition() && iBytesSent != 0 )
        {
            event_add( pEvent, NULL );
        }
    }
    else
    {
        std::cout << "no client instance for this FD!?!?\n";
    }
}
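
For context, the code that registers this event isn't shown above, but it's essentially the usual libevent 1.x pattern -- a sketch with illustrative names, not my exact code:

// Illustrative sketch only -- the real registration lives elsewhere in
// CSocketTestModule; assumes libevent 1.x and a connected client socket.
#include <event.h>

static struct event oWriteEvent;    // one per client in practice

void SetupWriteEvent( int iDescriptor )
{
    // one-shot EV_WRITE event; Write() re-adds it while data remains to send
    event_set( &oWriteEvent, iDescriptor, EV_WRITE,
               CSocketTestModule::Write, &oWriteEvent );
    event_add( &oWriteEvent, NULL );
}

// in main(): event_init(); SetupWriteEvent( iClientDescriptor ); event_dispatch();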



Are you attempting to send more data in a single send() than the socket's
SO_SNDBUF and SO_SNDLOWAT allow?


Nope -- I only send SO_SNDLOWAT bytes (or less, if I'm at the end of the clip, for instance) in each send(). I also tried sending SO_SNDBUF bytes in each send(), but neither helped -- CPU usage is way high (90%+) while streaming only a 6 Mb/s MPEG2 clip to one VLC client on another machine.
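
For reference, those limits can be read with getsockopt() -- a minimal sketch, not my exact code, with illustrative names:

// Sketch of querying the send limits mentioned above (error handling omitted).
#include <sys/types.h>
#include <sys/socket.h>

int iSendBuf = 0, iSendLowWat = 0;
socklen_t tLen = sizeof( int );

getsockopt( iDescriptor, SOL_SOCKET, SO_SNDBUF, &iSendBuf, &tLen );
tLen = sizeof( int );
getsockopt( iDescriptor, SOL_SOCKET, SO_SNDLOWAT, &iSendLowWat, &tLen );

// pClient->GetLowWatermark() then corresponds to iSendLowWat (or iSendBuf,
// when I tried SO_SNDBUF-sized sends instead).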

I don't understand why the CPU usage is so high ... I'm really anxious to find out :-)


I'm still learning libEvent, so maybe there is something I can do to
fix this.  I've a very basic C++ server based on libEvent, and I'm
having the server stream one MPEG2 PS (6-8 Mbps) file to one client over
HTTP.  The client is VLC.  While this works, the CPU usage is very
high, e.g. 90+%, on both my PowerBook (running OSX 10.4.6) and a (much
faster) Debian Sarge box running on a 2.4 GHz P4.  This CPU usage is
on the server only -- the VLC client is on another machine.

The program sits inside event_dispatch() and doesn't come out until
the client disconnects.  Furthermore, if I use VLC to stream the same
file over RTP, it only uses 2-4% of the CPU, not 90+%.

I'm very new to libEvent, but is there something I can do to make the
CPU usage of my libEvent-based server more in line with what VLC
uses?  If I do an strace (on Linux) or a ktrace (on OSX), I see the
same calls -- epoll_ctl(), a send() which returns EAGAIN (since I
made the socket non-blocking), and that's about it.  Is there a call
I can make into libEvent before I call event_dispatch() that will
make libEvent not spin so heavily on select() / kqueue / epoll()?
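
For reference, the non-blocking setup and the EAGAIN result I'm describing look roughly like this -- a sketch, not my exact code:

// Sketch of the non-blocking send path described above (illustrative only).
#include <fcntl.h>
#include <cerrno>
#include <sys/socket.h>

// make the client socket non-blocking
int iFlags = fcntl( iDescriptor, F_GETFL, 0 );
fcntl( iDescriptor, F_SETFL, iFlags | O_NONBLOCK );

// on a full socket buffer, send() now fails immediately instead of blocking
int iBytesSent = send( iDescriptor, pBuffer, iSendLimit, 0 );
if ( -1 == iBytesSent && ( EAGAIN == errno || EWOULDBLOCK == errno ) )
{
    // kernel send buffer is full -- nothing to do until the next EV_WRITE
}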


Regards,

John

Falling You - exploring the beauty of voice and sound
http://www.fallingyou.com