Thank you very much for your reply! See my comments below. I also want to ask another question about Cloudstone: I am running into a retry-connection problem. Faban automatically retries the connection 10 times and is then killed. Passwordless ssh is set up and the firewall is not the problem, so I have no idea what is causing this. Does anybody know a solution? I posted this question along with the logs before, but I have not received any reply yet.
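In case it helps anyone reproduce this, these are the checks I run before kicking off a run (a minimal sketch; the hostname and the Faban log path are assumptions, adjust them to your install):

    # Passwordless ssh must also work non-interactively for the user that runs Faban;
    # BatchMode makes ssh fail immediately instead of falling back to a password prompt.
    ssh -o BatchMode=yes -o ConnectTimeout=5 user@client-host 'echo ssh-ok'

    # The real error behind the connection retries usually shows up in the Faban logs
    # (log location assumed; adjust to wherever your Faban master/agent writes its logs).
    tail -n 100 /opt/faban/logs/faban.log.xml

If the BatchMode test prompts for a password or times out, the driver host cannot reach the client non-interactively, which would explain the 10 retries.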
At 2013-09-30 06:41:24, "Cansu Kaynak" <[email protected]> wrote:

Hi.

1) Client-side Java process crash: Since it seems you have enough memory (8GB) for the client machine, you should be able to launch more than 60 clients. Please check the error messages printed out by the client when it crashes. It might have to do with the ulimits (#processes or #open files).

Where can I check the logs? When it crashes, the ssh connection is forced to exit, and I cannot find any error messages printed out by the client. (The checks I plan to run after the next crash are sketched at the end of this mail.)

2) Server still serving requests after the client crashes: The server does not check whether the corresponding Java client process is up and running while it is streaming a video to that client. The server receives a request for a video at the beginning and does not receive any feedback from the corresponding client while streaming the video. The server just streams the video (sends packets) until it reaches the end of the video. That's why you see videos being streamed by the server even after the client crashed. So, you need to restart the server before you start a new run, in case the client crashes.

3) AvgDelay: Delay (in ms) is the difference between the time a video packet is scheduled to be sent and the time it is actually sent.

Is my understanding right? The time a video packet is scheduled is the time it finishes being read from disk, and the time a video packet is actually sent is the time the NIC sends the packet out to the client. How do I get these two times? Obviously the Darwin server provides AvgDelay, and only the scheduler can know when a video packet is scheduled; I am interested in how Darwin knows this information.

If the server is overloaded and doesn't have enough resources (e.g., processing cycles), the delay of each packet will increase, which will eventually cause each video being streamed to lag behind on the client side. AvgDelay (in ms) is the average delay of the packets that were sent during the last statistics collection period (we recommend 1 sec.). Lower AvgDelay means better QoS. In our experience, as long as AvgDelay is below 100 ms, the client is able to watch the video without any major interruptions. To see a difference in AvgDelay, you need to stress the server by increasing the number of clients. While doing so, you should observe the server utilization (e.g., check the output of top). The AvgDelay will increase after a certain server utilization point.

In my tests, the server side has very low CPU utilization even when I use 60 clients. So is it correct that the CPU is less likely to become the bottleneck than the network? Which resource do you mean by "server utilization" here, the CPU? Should the CPU utilization usually be low or high when using the media streaming benchmark against the Darwin server? (The server-side monitoring I plan to run while scaling the client count is sketched right below this reply.)

Hope this helps.

Regards,
-- Cansu
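Below is the minimal server-side monitoring I plan to run while increasing the number of clients, to see which resource saturates first. top, sar, and iostat are standard Linux tools, but whether the sysstat package is installed on the server image is an assumption, and the stats file name is just a placeholder:

    # On the Darwin server, for the duration of a run (1-second samples):
    top -b -d 1 -n 600  > cpu.log  &     # CPU and memory
    sar -n DEV 1 600    > net.log  &     # per-NIC throughput (needs sysstat)
    iostat -x 1 600     > disk.log &     # disk utilization

    # Pull out the time and AvgDelay columns (13th and 7th fields) from the
    # captured server statistics, like the output shown in the quoted mail below:
    awk '{print $13, $7}' darwin_stats.log

That should make it visible whether AvgDelay starts climbing when the network, the disk, or the CPU saturates.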
On Sep 27, 2013, at 5:38 AM, 张伟 <[email protected]> wrote:

Hi all,

This is my first time using the video server, and I have run into a problem I cannot understand. When I increase the number of clients above 60, the client side crashes. I installed the client side in a virtual machine with 8GB of memory and 2 vCPUs, and the Java maximum heap size is 7GB. When it crashes, the ssh connection exits, and when I log back in I find that all the java processes (the clients) have exited:

ps aux|grep java

However, the server side still shows active requests:

RTP-Conns RTSP-Conns HTTP-Conns kBits/Sec Pkts/Sec RTP-Playing AvgDelay CurMaxDelay MaxDelay AvgQuality NumThinned Time
4 4 0 1333 197 4 -4 7 10 0 0 2013-09-27 03:09:21
4 4 0 1197 177 4 -4 10 10 0 0 2013-09-27 03:09:22
4 4 0 1263 191 4 -4 7 10 0 0 2013-09-27 03:09:23
4 4 0 1196 184 4 -4 9 10 0 0 2013-09-27 03:09:24
4 4 0 1112 174 4 -4 7 10 0 0 2013-09-27 03:09:25
4 4 0 1024 157 4 -5 10 10 0 0 2013-09-27 03:09:26
4 4 0 1201 183 4 -4 10 10 0 0 2013-09-27 03:09:27
4 4 0 1662 252 4 -3 6 10 0 0 2013-09-27 03:09:28
4 4 0 1681 250 4 -3 9 10 0 0 2013-09-27 03:09:29
4 4 0 1525 222 4 -3 7 10 0 0 2013-09-27 03:09:30

These values are not 0. Long after the clients crashed, the server side still shows these requests. Can anybody give some reasons?

The other question is: which column is usually used as the performance metric? I want to know the meaning of AvgDelay, but I have not yet found anything about it. If I use different numbers of clients, this value shows no big difference.
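As mentioned above, here is what I intend to check on the client VM after the next crash, since nothing is left on the console (a sketch of standard Linux/JVM checks; the location of the hs_err files depends on the working directory the client JVMs were launched from):

    # Per-user limits that can kill Java clients once many of them are running:
    ulimit -u      # max user processes/threads
    ulimit -n      # max open files (each client socket counts against this)

    # Did the kernel OOM killer terminate the JVMs? This could also explain the
    # dropped ssh session if sshd was affected.
    dmesg | grep -i -E 'killed process|out of memory'

    # JVM fatal-error logs, written to the working directory of the crashed process:
    ls -l hs_err_pid*.log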
