Hi Ket,

No, I didn't mean that. I just wanted to point out that different 
applications have different infrastructure requirements and different load 
patterns. You need to do performance testing to figure out how your 
application performs on the infrastructure you have, and do capacity 
planning accordingly.

Thanks,
Unmesh

On Monday, 24 February 2014 07:14:34 UTC+5:30, Ket wrote:
>
> Hi Unmesh,
>
> Are you suggesting that Node.js on a quad-core server machine with, say, 
> 4 GB RAM can handle only small tasks like a chat app and is not suitable 
> for heavy tasks like streaming live video, as ustream.tv does?
>
> No argument intended, I'm just deeply interested in gathering as much info 
> about this as possible.
>
> Ket
>
>
> On Sunday, February 23, 2014 11:48:34 AM UTC+7, Unmesh Joshi wrote:
>>
>> Hi Nitin,
>>
>> The buzz around handling millions of connections with a single Node 
>> server should be interpreted as: 'it can handle a lot more connections 
>> than traditional synchronous thread-per-connection approaches, provided 
>> it has enough CPU and memory'.
>> Traditionally, in the JVM or Ruby world, multiple concurrent connections 
>> are handled by spawning a thread per connection and using blocking IO 
>> calls on each connection. Even while threads are waiting on IO calls, you 
>> can handle more concurrent connections. You can see JVMs with a thread 
>> pool configured for, say, 2000 threads. On typical Linux machines, given 
>> enough memory (say 16 GB), it's OK to spawn even a few thousand threads.
>> To handle millions of connections (that's a lot in my experience; you 
>> should probably revisit your calculations using something like Little's 
>> Law: 
>> http://www1.practicalperformanceanalyst.com/resources/important-formulae/what-is-littles-law/), 
>> you will typically need a cluster of hundreds of servers. 
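>> (A quick sketch of that Little's Law calculation in JavaScript; the 
>> arrival rate and response time below are made-up numbers, purely for 
>> illustration.)

```javascript
// Little's Law: average concurrent connections L = arrival rate (lambda)
// multiplied by the average time each request spends in the system (W).
function littlesLaw(arrivalRatePerSec, avgResponseTimeSec) {
  return arrivalRatePerSec * avgResponseTimeSec;
}

// e.g. 10,000 requests/sec with a 200 ms average response time:
console.log(littlesLaw(10000, 0.2)); // 2000 concurrent connections on average
```

>> To actually sustain a million concurrent connections, arrival rate times 
>> response time has to reach 1,000,000, which is why a sizable cluster is 
>> usually needed.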
>> For more connections, spawning more and more threads is a problem: there 
>> is a memory overhead (each thread typically needs around 1 MB of stack 
>> space, so 10,000 threads use roughly 10 GB just for thread stacks), and 
>> there is a scheduler overhead, as the typical Linux thread scheduler runs 
>> in O(log N). So with more and more threads, more and more pressure is put 
>> on the system (OS, JVM, etc.) and fewer resources are available for the 
>> user program that is actually handling the request.
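>> (Back-of-the-envelope version of that stack arithmetic; the per-thread 
>> stack size here is a typical JVM default, not a measured number.)

```javascript
// Rough thread-per-connection memory cost (illustrative numbers).
var stackPerThreadKB = 1024; // ~1 MB of stack per thread, a common JVM default
var threads = 10000;
var totalGB = (threads * stackPerThreadKB) / (1024 * 1024);
console.log(totalGB.toFixed(2) + ' GB just for thread stacks'); // 9.77 GB
```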
>> With event-driven IO models, the idea is to use fewer threads and fewer 
>> system resources to handle more and more connections (hundreds of 
>> thousands), and to let user programs use more resources.
>> So the 'millions of connections' part of the promise is really about 
>> 'the framework and system not needing too many resources for book-keeping 
>> across lots of connections'.
>>
>> The other side is how user programs, which are now allowed to use more 
>> system resources, actually utilize them. Typically a single Node process 
>> can use a maximum of about 1.7 GB of RAM (as far as I know, that's the 
>> max you can get with V8).
>> This space is further divided into generations, as V8 uses generational 
>> garbage collection. 
>>
>> When you build HTTP responses, you are really doing string 
>> concatenation, so you are building lots of small string objects. Add to 
>> that the other JavaScript objects created to handle the requests. 
>> Handling all these objects takes memory and CPU, and garbage collection 
>> takes some CPU too. 
>> If you have lots of connections with little memory, it will put a lot of 
>> pressure on the garbage collector, and your CPU will be consumed by GC 
>> activity.
>>
>> So the way to handle this is first to capacity-plan your system. Figure 
>> out how many connections can be handled with your default configuration 
>> (say, a single Node process with 512 MB of memory). If you can handle 100 
>> connections with this configuration and you have a quad-core machine with 
>> 4 GB of RAM, you can spawn 4 Node processes to handle 400 concurrent 
>> connections on a single box.
>> If you need more concurrent connections, you can add more such boxes: you 
>> would need 10 boxes, say, to handle 4000 concurrent connections.
>> Again, with horizontal scaling, the trick is not to use sticky sessions, 
>> and you need a separate session store (say, a separate Redis server).
>>
>> Thanks,
>> Unmesh
>>
>>
>>
>>
>> On Tuesday, 18 February 2014 15:57:20 UTC+5:30, Nitin Gupta wrote:
>>>
>>> Hi, this is regarding scaling a Node.js application. I have heard a lot 
>>> of buzz around handling millions of connections with a single Node.js 
>>> server.
>>>
>>> To check that, I wrote an HTTP server which returned nothing but 200 
>>> OK, and the CPU started blowing up with 10000 concurrent requests.
>>>
>>> CODE:
>>> var my_http = require('http');
>>> my_http.createServer(function(request, response){
>>>   response.writeHead(200);
>>>   response.end();
>>> }).listen(8080);
>>>
>>> I have an application which has to handle 1M concurrent connections, but 
>>> what I found with testing doesn't seem like a solution to this. Please 
>>> guide me through this. Have I missed something, or is it not possible 
>>> with a barebones Node.js server?
>>>
>>> My application looks like:
>>> a.) MongoDB with Node.js
>>> b.) A POST request with some params.
>>> c.) Saving the params' values to the DB and returning success or 
>>> failure to the client
>>>
>>>

-- 
Job Board: http://jobs.nodejs.org/
Posting guidelines: 
https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en

