You have worker threads that you can use to offload heavy CPU work, but 
you sacrifice some of your ability to use them for I/O: there are only 
4 by default, so if you fill them all with your own work then file I/O 
is going to suffer. That said, you can now set the UV_THREADPOOL_SIZE 
environment variable to increase the number of worker threads, up to a 
maximum of 128 (!!).

If your work is chunkable then my advice is to avoid the overhead of 
managing an additional event loop and just push the work onto the 
thread pool. Since you're already using NAN, have a look at the 
NanAsyncWorker class, which abstracts away a lot of the hassle of this. 
There's even an example in NAN that does exactly this: 
https://github.com/rvagg/nan/tree/master/examples/async_pi_estimate 
calculates Pi using an average-of-averages Monte Carlo method, with the 
work spawned onto parallel worker threads.

 -- Rod


On Sunday, 17 November 2013 07:23:36 UTC+11, João Henriques wrote:
>
> But I read this in the libuv documentation:
>  it is *imperative to make sure that no function which runs periodically 
> in the loop thread blocks when performing I/O or is a serious CPU hog*, 
> because this means the loop slows down and events are not being dealt with 
> at full capacity
>
> The only solution seems to be launching another thread, as the event 
> loop would be blocked while the task is running: receiving new 
> connections from other clients would be blocked until the task has 
> finished.
>
> I learned about node cluster and about using a supervisor such as pm2, 
> but I would rather make this as efficient as possible first and then 
> think about clustering.
>
> On Saturday, 16 November 2013 20:01:47 UTC, Ben Noordhuis wrote:
>>
>> On Sat, Nov 16, 2013 at 8:14 PM, João Henriques <[email protected]> 
>> wrote: 
>> > 
>> > I'm building a module to be used in a server application that does 
>> > some heavy work in the background. The best design I could think of 
>> > for not blocking node's main event loop was creating a new thread and 
>> > a new event loop with libuv, keeping it referenced in the background 
>> > until someone needs it: 
>> > 
>> > 
>> > /************************************************** 
>> >  * Module initialization 
>> >  **************************************************/ 
>> > 
>> > uv_loop_t *second_threaded_loop; 
>> > 
>> > void create_loop(void *) 
>> > { 
>> >   uv_run(second_threaded_loop, UV_RUN_DEFAULT); 
>> > } 
>> > 
>> > void init (Handle<Object> target) 
>> > { 
>> >   NanScope(); 
>> > 
>> >   uv_thread_t thread_loop; 
>> >   uv_async_t async; 
>> > 
>> >   second_threaded_loop = uv_loop_new(); 
>> > 
>> >   // Reference the second loop so it stays alive while idle 
>> >   uv_async_init(second_threaded_loop, &async, NULL); 
>> > 
>> >   uv_thread_create(&thread_loop, create_loop, NULL); 
>> > 
>> >   // .... 
>> > } 
>> > 
>> > /************************************************** 
>> >  * Then, a random function running on 
>> >  * the main loop does 
>> >  **************************************************/ 
>> > 
>> > NAN_METHOD(Foo::doHeavyOperation) 
>> > { 
>> >   // ... 
>> > 
>> >   uv_work_t* req = new uv_work_t; 
>> > 
>> >   random_context_struct *ctx = new random_context_struct; 
>> >   // ctx->handle is a Handle<Object> 
>> >   ctx->handle = Persistent<Object>::New(args.This()); 
>> > 
>> >   req->data = /* random context */; 
>> > 
>> >   uv_queue_work(second_threaded_loop, req, startHeavyOperation, 
>> >                 EmitMessage); 
>> > 
>> >   // .... 
>> > 
>> >   NanReturnUndefined(); 
>> > } 
>> > 
>> > /************************************************** 
>> >  * After startHeavyOperation ends, EmitMessage is called, 
>> >  * where it supposedly should emit an event 
>> >  * informing that the task finished 
>> >  **************************************************/ 
>> > 
>> > void EmitMessage(uv_work_t *req, int status) 
>> > { 
>> >   random_context_struct *ctx = (random_context_struct*)req->data; 
>> > 
>> >   Local<Value> argv[1] = { NanSymbol("parsed") }; 
>> > 
>> >   TryCatch tc; 
>> > 
>> >   MakeCallback(ctx->handle, "done", 1, argv); 
>> > 
>> >   if (tc.HasCaught()) 
>> >     printf("Error occurred"); 
>> > } 
>> > 
>> > 
>> > 
>> > Now, if I run this task on the main event loop the operation succeeds 
>> > and the event is triggered, but in the second loop the application 
>> > ends prematurely, without any explicit error being thrown or caught. 
>> > 
>> > A second loop seems the best idea to me, because blocking the main 
>> > event loop is not acceptable: the server will be receiving new 
>> > connections from other users. Creating a thread for each request 
>> > would not be acceptable either, as the overhead would be 
>> > unmanageable. A second event loop on a child thread seems a 
>> > reasonable approach, but emitting events from it does not seem 
>> > possible. 
>>
>> V8 as it's used in node.js is not thread safe.  You cannot call V8 
>> functions from another thread.* 
>>
>> Do you need a separate event loop?  The uv_queue_work() function runs 
>> the work callback in a separate thread and invokes the done callback 
>> on the original thread (i.e., the main thread). 
>>
>> * V8 can be entered from different threads only if you use appropriate 
>> mutual exclusion with v8::Locker and v8::Unlocker objects.  That won't 
>> help in your case because your worker thread would block the main 
>> thread if it somehow acquired the Locker. 
>>
>

-- 
-- 
Job Board: http://jobs.nodejs.org/
Posting guidelines: 
https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en
