> Another approach is to have an initial query create a list of
> documents to process and cut it into chunks (say 1000
> documents each) that are each handed off to a spawned task.
> With this, the configured number of threads in the task queue
> will run in parallel giving you higher overall throughput.

That is more or less what I did. However, I decided to pass in a start and end 
count, so the function works out for itself which documents those should be.
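The original is MarkLogic XQuery, but the idea of handing a worker only a start/end position (rather than a document list) can be sketched in Python. `fetch_batch` and the in-memory `STORE` are illustrative assumptions standing in for a real paged query:

```python
# Simulated document store and paged lookup (assumptions for this sketch;
# in MarkLogic this would be a positional range over a query's results).
STORE = [f"doc-{i}" for i in range(10)]

def fetch_batch(start, end):
    """Resolve 1-based inclusive start/end positions to documents."""
    return STORE[start - 1:end]

def process_range(start, end):
    """Worker that receives only positions and looks up its own
    documents, instead of being handed the document list itself."""
    docs = fetch_batch(start, end)
    return [d.upper() for d in docs]  # placeholder "processing"

print(process_range(1, 3))  # ['DOC-0', 'DOC-1', 'DOC-2']
```

Passing positions keeps the spawned task's payload small; only the boundaries cross the task boundary, not the data.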

One other thing I did was divide the total range into a number of segments 
equal to the number of parallel threads I wanted to run. That number can be 
smaller than the total number of threads allowed on the task server, leaving 
more room for normal traffic. So instead of spawning all tasks at once (which 
would also clutter the queue), each task creates the next one. A simple 
try/catch makes sure the recursive spawning isn't terminated before the end.
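The chaining pattern above can be sketched outside MarkLogic as well. This is a rough Python stand-in for the `xdmp:spawn` chain: `CHUNK`, `SEGMENTS`, and `TOTAL` are made-up parameters, and the `join()` calls only exist to make the sketch deterministic (a real task-server spawn is fire-and-forget):

```python
import threading

CHUNK = 2       # documents per task (the original suggested e.g. 1000)
SEGMENTS = 3    # parallel chains; kept below the task server's thread limit
TOTAL = 13      # total number of documents to process

processed = []
lock = threading.Lock()

def task(start, end, segment_end):
    """Process one chunk, then spawn the next chunk in this segment.
    The try/except guards the processing step so an error there
    doesn't terminate the recursive chain."""
    try:
        with lock:
            processed.extend(range(start, end + 1))
    except Exception:
        pass  # log and carry on; the chain must reach segment_end
    next_start = end + 1
    if next_start <= segment_end:
        nxt = threading.Thread(
            target=task,
            args=(next_start,
                  min(next_start + CHUNK - 1, segment_end),
                  segment_end),
        )
        nxt.start()
        nxt.join()  # for determinism only; a spawned task would not wait

def run():
    per_segment = -(-TOTAL // SEGMENTS)  # ceiling division
    chains = []
    for s in range(SEGMENTS):
        seg_start = s * per_segment + 1
        if seg_start > TOTAL:
            break
        seg_end = min(seg_start + per_segment - 1, TOTAL)
        t = threading.Thread(
            target=task,
            args=(seg_start, min(seg_start + CHUNK - 1, seg_end), seg_end),
        )
        chains.append(t)
        t.start()
    for t in chains:
        t.join()

run()
```

Only `SEGMENTS` chains ever run at once, and at any moment at most `SEGMENTS` tasks sit in the queue, regardless of how large `TOTAL` is.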

Kind regards,
Geert


drs. G.P.H. (Geert) Josten
Consultant


Daidalos BV
Hoekeindsehof 1-4
2665 JZ Bleiswijk

T +31 (0)10 850 1200
F +31 (0)10 850 1199

mailto:[email protected]
http://www.daidalos.nl/

KvK 27164984

Please consider the environment before printing this mail.
The information sent in or with this e-mail message originates from Daidalos 
BV and is intended solely for the addressee. If you have received this message 
unintentionally, please delete it. No rights may be derived from this message.

_______________________________________________
General mailing list
[email protected]
http://xqzone.com/mailman/listinfo/general
