scratch the comment about memory usage being comparable to nginx. optimised 
nginx uses less than 1MB RSS; there's no way node.js will ever come near that. 
however, i am getting better performance (requests/sec) than nginx for static 
files on my setup at the moment, though i imagine nginx is doing more work 
than my simple static file server, which has been added to the repo.

On Tuesday, April 2, 2013 12:57:51 PM UTC+1, billywhizz wrote:
>
> just wanted to get some feedback on some experiments i have been doing in 
> getting maximum http performance in node.js. the code is here:
>
> https://github.com/billywhizz/minhttp
>
> this is a c++ addon that does the absolute minimum of http processing and 
> leaves all decisions to the end user about how much http support they 
> want to provide. it uses ryah's http_parser to take care of http parsing 
> and does some hacky stuff with buffers to ensure no new objects are created 
> between requests. if you run the http-min example you can hammer it with 
> apachebench, and if you run with --trace-gc you will see that there are NO 
> pauses for garbage collection, which makes a huge performance difference. i 
> played around with a lot of different techniques for achieving maximum 
> performance and settled on the current one, which works basically 
> as follows:
>
> - the client provides an input buffer and an output buffer to the library 
> when it is initialised; no other buffers need to be created when receiving 
> requests or writing responses. for writes, the memcpy is taken care of in 
> c++ land. for reads, this means that once the onResponse callback has 
> finished, the input buffer can be overwritten, so it is up to the user to 
> save the request state according to their needs.
> - callbacks must be explicitly set using a setCallbacks method, which binds 
> the library to the relevant callbacks at that point. this means we don't 
> check on every event whether the callbacks exist, which saves quite a bit 
> of cpu time.
> - parsing the request requires some nasty binary parsing, but this pays off 
> big time performance-wise compared to creating lots of js objects which 
> then have to be GC'd
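
to make that last point concrete, here is a minimal sketch (plain js, not 
minhttp's actual code — the names are illustrative) of what offset-based 
parsing looks like: the reused input buffer is scanned for byte positions, 
and one preallocated record is overwritten per request, so nothing is handed 
to the GC between requests.

```javascript
const SPACE = 32; // ' '

// one preallocated record, overwritten on every request — nothing
// new is allocated per request, so there is nothing to collect
const reqLine = { methodEnd: 0, urlStart: 0, urlEnd: 0 };

function parseRequestLine(buf, len) {
  let i = 0;
  while (i < len && buf[i] !== SPACE) i++; // end of method
  reqLine.methodEnd = i;
  reqLine.urlStart = ++i;
  while (i < len && buf[i] !== SPACE) i++; // end of url
  reqLine.urlEnd = i;
  return reqLine;
}

// strings are only materialised on demand, when the user asks for them
const input = Buffer.from('GET /index.html HTTP/1.1\r\n\r\n');
const r = parseRequestLine(input, input.length);
// input.toString('ascii', r.urlStart, r.urlEnd) === '/index.html'
```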
>
> on my hardware i can get 67k responses a second for the absolute minimal 
> case (just return a 200 OK keepalive response for every request). this 
> compares to 12k rps for a minimal server using the node.js http library, 
> which is quite a boost. i have also tested it as a very basic static file 
> server and can get the same performance as optimised nginx on my hardware. 
> memory overhead is only 2-3 MB more than nginx too, due to the fact that no 
> objects are being created on the fly.
>
> not sure if this approach is viable in the real world but would be 
> interested in any feedback/ideas/gotchas that people might come up with. i 
> would like to turn it into a low level http/tcp binding that could be 
> useful for people who need really low level access to the protocols or 
> might be running on a device with limited memory/cpu.
>
> bear in mind this is very much a first pass, so expect segfaults and all 
> sorts of bad things if anything unexpected happens. i will be looking at 
> making it more robust next.
>

-- 
-- 
Job Board: http://jobs.nodejs.org/
Posting guidelines: 
https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en
