Hi, I'm checking out your tweaks because Red5 is filling up the JVM
memory quite quickly, leaving only about 200 MB free for anything else.

The PermSize line gave me the output below; any ideas? I've added the
hyperthreading options as well.

ERROR  | wrapper  | 2007/05/09 05:14:49 | JVM exited while loading the application.
STATUS | wrapper  | 2007/05/09 05:14:54 | Launching a JVM...
INFO   | jvm 5    | 2007/05/09 05:14:54 | Error occurred during initialization of VM
INFO   | jvm 5    | 2007/05/09 05:14:54 | Could not reserve enough space for object heap
INFO   | jvm 5    | 2007/05/09 05:14:54 | Could not create the Java virtual machine.
ERROR  | wrapper  | 2007/05/09 05:14:54 | JVM exited while loading the application.
FATAL  | wrapper  | 2007/05/09 05:14:54 | There were 5 failed launches in a row, each lasting less than 300 seconds.
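
A note on that failure: "Could not reserve enough space for object heap"
usually means the JVM could not find a contiguous block of address space
for the requested -Xms/-Xmx at startup. On a 32-bit JVM the heap, permgen,
code cache, and thread stacks all have to fit in roughly 2 GB of process
address space, so -Xmx1400M plus -XX:MaxPermSize=512M is already at the
edge. A rough budget check (a sketch; the 32-bit limit is an assumption
about this particular box):

```shell
# Rough address-space budget for the suggested flags. Assumption: a 32-bit
# JVM, which can usually map only ~2 GB of contiguous space per process.
HEAP_MAX=1400   # -Xmx1400M
PERM_MAX=512    # -XX:MaxPermSize=512M
TOTAL=$((HEAP_MAX + PERM_MAX))
echo "heap + permgen = ${TOTAL} MB"   # prints "heap + permgen = 1912 MB"
# That leaves little room for the code cache and thread stacks; if the JVM
# refuses to start, try smaller values, e.g. -Xmx1024M -XX:MaxPermSize=256M.
```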

Mondain wrote:
> You might want to try some JVM tweaking to get better performance
> (mileage will vary).
>
> Use these if you have a lot of RAM
>
> -Xms768M -Xmx1400M -Xss128K -Xrs
>
> Try these to help tune a bit more.
>
> -XX:PermSize=256M -XX:MaxPermSize=512M -XX:NewRatio=2 
> -XX:MinHeapFreeRatio=20
>
> If you have hyperthreading or multiple CPUs, add these
>
> -XX:+AggressiveHeap -XX:+DisableExplicitGC -XX:ParallelGCThreads=2 
> -XX:+UseParallelOldGC
>
> This one may not be available (increases file handles to max)
>
> -XX:+MaxFDLimit
>
> I would suggest that this set always be included
>
> -Dsun.rmi.dgc.client.gcInterval=990000 
> -Dsun.rmi.dgc.server.gcInterval=990000 -Djava.net.preferIPv4Stack=true 
> -Xverify:none
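
The gcInterval properties above are given in milliseconds; a quick sanity
check of what 990000 ms means in practice (RMI forces a full collection
once per interval):

```shell
# -Dsun.rmi.dgc.client.gcInterval / -Dsun.rmi.dgc.server.gcInterval are in
# milliseconds; 990000 ms spaces RMI-forced full GCs about 16.5 minutes apart.
INTERVAL_MS=990000
echo "$(( INTERVAL_MS / 60000 )) min $(( (INTERVAL_MS % 60000) / 1000 )) s"   # prints "16 min 30 s"
```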
>
>
> Altogether now (starting red5 from command line)
>
> java -Djava.security.manager -Djava.security.policy=conf/red5.policy 
> -Xms768M -Xmx1400M -Xss128K -Xrs -XX:PermSize=256M 
> -XX:MaxPermSize=512M -XX:NewRatio=2 -XX:MinHeapFreeRatio=20 
> -XX:+AggressiveHeap -XX:+DisableExplicitGC -XX:ParallelGCThreads=2 
> -XX:+UseParallelOldGC -XX:+MaxFDLimit 
> -Dsun.rmi.dgc.client.gcInterval=990000 
> -Dsun.rmi.dgc.server.gcInterval=990000 -Djava.net.preferIPv4Stack=true 
> -Xverify:none -cp red5.jar;conf;%CLASSPATH% org.red5.server.Standalone
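
The classpath in that command uses Windows separators (`;` and
%CLASSPATH%). For a Linux box like the one in Bill's test, the equivalent
launch would look like this (a sketch; same flags assumed, run from the
Red5 install directory):

```shell
# Linux variant of the launch above: ':' as the classpath separator and
# $CLASSPATH instead of %CLASSPATH%. Assumes red5.jar and conf/ are in
# the current directory.
java -Djava.security.manager -Djava.security.policy=conf/red5.policy \
  -Xms768M -Xmx1400M -Xss128K -Xrs \
  -XX:PermSize=256M -XX:MaxPermSize=512M -XX:NewRatio=2 -XX:MinHeapFreeRatio=20 \
  -XX:+AggressiveHeap -XX:+DisableExplicitGC -XX:ParallelGCThreads=2 \
  -XX:+UseParallelOldGC -XX:+MaxFDLimit \
  -Dsun.rmi.dgc.client.gcInterval=990000 -Dsun.rmi.dgc.server.gcInterval=990000 \
  -Djava.net.preferIPv4Stack=true -Xverify:none \
  -cp red5.jar:conf:$CLASSPATH org.red5.server.Standalone
```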
>
>
> Additional info links:
>
> http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp
> http://blogs.sun.com/watt/resource/jvm-options-list.html
> http://performance.netbeans.org/howto/jvmswitches/index.html
> http://developer.apple.com/documentation/Java/Conceptual/JavaPropVMInfoRef/Articles/JavaVirtualMachineOptions.html
>
>
>
>
>
> On 5/8/07, Interalab <[EMAIL PROTECTED]> wrote:
>
>     Interesting thing happened.  I turned off the memory hole setting so
>     the full 4 GB of RAM are not accessible, and now the eth errors have
>     disappeared.
>
>     Odd, but it certainly looks like a hardware problem.
>
>     Luke Hubbard wrote:
>     > Hi Bill,
>     >
>     > Thanks for running this test. The CPU numbers are promising if we
>     > can fix this other issue. Can you provide details of how much
>     > memory the Red5 process was using?
>     >
>     > To be clear: every time the server died, it didn't hang; its
>     > process died. That is very odd; if there was an exception it should
>     > have been logged. I suspect something happened in native networking
>     > code that killed the Java process. I googled the errors from your
>     > system logs and found these:
>     >
>     > http://osdir.com/ml/linux.drivers.e1000.devel/2007-01/msg00133.html
>     > http://www.kaltenbrunner.cc/blog/index.php?/archives/8-fixing-e1000-TX-transmit-timeouts-at-least-some-of-them.html
>     >
>     > Sounds like it might be possible to fix the error by adjusting
>     > the NIC settings.
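
For reference, the workaround those e1000 threads converge on is turning
off TCP segmentation offload on the NIC. A sketch, assuming the affected
interface is eth1 (as in Bill's dmesg output) and that ethtool is
installed:

```shell
# Disable TCP segmentation offload (TSO) on the e1000 interface; the linked
# threads report this avoids the "Detected Tx Unit Hang" resets.
# Assumption: the affected interface is eth1. Requires root.
ethtool -K eth1 tso off
# Confirm the change took effect:
ethtool -k eth1 | grep tcp-segmentation-offload
```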
>     >
>     > Is anyone else experiencing the same symptoms? Processes dying
>     > without hanging or throwing any errors? If so, please speak up.
>     >
>     > Luke
>     >
>     > On 5/8/07, Interalab <[EMAIL PROTECTED]> wrote:
>     >
>     >> Rob Schoenaker and I ran a little stress test this morning and
>     >> wanted to share our results.  Rob, feel free to add to or correct
>     >> me if you want.
>     >>
>     >> This was a test of one publishing live stream client and many
>     >> subscribing clients.
>     >>
>     >> Here's the server config:
>     >>
>     >> Xubuntu Linux
>     >> AMD 64 3500+ processor
>     >> 4 GB RAM
>     >> Red 5 trunk ver 1961
>     >> Gbit Internet connection
>     >>
>     >> Client side:
>     >>
>     >>  From the other side of the world . . .
>     >> Lots of available bandwidth
>     >>
>     >> The first run choked the server at 256 simultaneous connections.
>     >> They were 250k - 450k live streams.
>     >>
>     >> After a reboot, we got up to 300+ connections.  This time the
>     >> resolution was lower, so the average bandwidth per stream was
>     >> about 150k.
>     >>
>     >> Server looked like this:
>     >> Cpu(s): 12.0%us,  2.0%sy,  0.0%ni, 84.0%id,  0.0%wa,  0.3%hi,  1.7%si,  0.0%st
>     >> Mem:   3976784k total,  1085004k used,  2891780k free,     7896k buffers
>     >> Swap:  2819368k total,        0k used,  2819368k free,   193740k cached
>     >>
>     >> After about 15 minutes, and over 400 connections, Red5 quit
>     >> without any log errors.  The Java PID just went away.  Had a bunch
>     >> of these in dmesg:  e1000: eth1: e1000_clean_tx_irq: Detected Tx Unit Hang
>     >>
>     >> Started Red5 by running red5.sh without rebooting the server.  It
>     >> came right back up and started streaming again.
>     >>
>     >> This time, we set the resolution to 80x60, or about 60-80 kbps
>     >> per stream.
>     >>
>     >> Rob tried to crash it by launching about 200 connections in about
>     >> 10 seconds, but it kept running.  It didn't die again.
>     >>
>     >> Final outcome of the last test:
>     >>
>     >> 627 concurrent connections peak
>     >> approx. 1100 connections total (some dropped when browsers
>     >> crashed under the load, etc.)
>     >>
>     >> At the peak, player buffers started to get big.  Some as high as
>     >> 70; most of mine were in the 30s.
>     >>
>     >> So my observation is that even though the server and available
>     >> bandwidth didn't seem to be stressed much (plenty of free memory,
>     >> CPU in the teens), the larger the individual streams, the fewer
>     >> total connections we could make.
>     >>
>     >> Not very scientific, but we thought it was worth sharing with the
>     >> list.
>     >>
>     >> Regards,
>     >> Bill
>     >>
>     >> _______________________________________________
>     >> Red5 mailing list
>     >> [email protected] <mailto:[email protected]>
>     >> http://osflash.org/mailman/listinfo/red5_osflash.org
>     >>
>     >>
>     >
>     >
>     >
>
>
>
>
>
> -- 
> It is difficult to free fools from the chains they revere. - Voltaire
> ------------------------------------------------------------------------
>

