> I used sync because I need to wait response from server, otherwise
> "for" will run all iteration without waiting request to finish...

That is incorrect, or at least poorly stated. I feel you're standing
firm on certain architectural decisions without really understanding
the technologies.

By default, independent async XHRs will always start and finish,
calling your callbacks for important events -- most commonly on overall
success/failure, but also firing as data is received. Yes, they will
not "wait to finish" in the sense that they will not make *other*
unrelated tasks wait for them to finish -- and that is A Good Thing.
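The point that async calls cede control immediately is easy to sketch in
plain JavaScript. Here fakeRequest is a hypothetical stand-in for an async
XHR, with setTimeout simulating the network round-trip:

```javascript
// Sketch: async "requests" start immediately and cede control back to
// the loop; the loop finishes before any response arrives.
const order = [];

function fakeRequest(url, onSuccess) {
  order.push('start ' + url);
  setTimeout(function () {        // the "response" arrives later, async
    order.push('done ' + url);
    onSuccess(url);
  }, 0);
}

for (const url of ['a', 'b', 'c']) {
  fakeRequest(url, function () { /* handle response here */ });
}
order.push('loop finished');
// Synchronously, order is: start a, start b, start c, loop finished.
// The 'done' entries only appear once the event loop turns over.
```

So the loop "runs all iterations without waiting" -- and that's exactly
what you want; the callbacks do the waiting for you.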

A few things can delay async XHRs from finishing, but they will
still appear to start immediately (ceding control back to the calling
code block, again the Good Thing).

One limiting factor is the availability of client outbound connections:
even fully independent XHRs are capped by the browser's maximum number
of simultaneous connections to the same domain, so connections over that
maximum have to wait to open.
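That cap behaves like a small queue, which you can imitate client-side.
makeLimiter below is a hypothetical helper, not a browser or MooTools API:

```javascript
// Sketch of a concurrency cap: at most `max` tasks run at once; the
// rest queue until a slot frees up -- roughly what the browser does
// with simultaneous connections to one domain.
function makeLimiter(max) {
  let active = 0;
  const queue = [];

  function next() {
    if (active >= max || queue.length === 0) return;
    active++;
    const job = queue.shift();
    job.task().then(
      function (v) { active--; job.resolve(v); next(); },
      function (e) { active--; job.reject(e); next(); }
    );
  }

  return function (task) {
    return new Promise(function (resolve, reject) {
      queue.push({ task: task, resolve: resolve, reject: reject });
      next();
    });
  };
}

// Queue six slow tasks with a cap of two: only two start right away.
const limit = makeLimiter(2);
let started = 0;
const slowTask = function () { started++; return new Promise(function () {}); };
for (let i = 0; i < 6; i++) limit(slowTask);
```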

Another factor is whether you reuse the same XHR object or create and
open a new, independent one; a single XHR can only make one outbound
connection at a time. By default, MooTools will ignore attempts to
reuse an XHR while it is currently running, the assumption being that
you are requesting a specific result-set definition per object: while a
later connection would return fresher data, you usually want calls to
complete rather than constantly preempt each other, possibly never
giving you complete results. Optionally, you can switch to link:chain
to queue one XHR's connections one after another, so what you get is a
synchronous chain of asynchronous calls -- very handy for "pulsed"
handling, with the result-set definition changing periodically.
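In plain-JavaScript terms, that chaining behaviour amounts to opening the
next call only after the previous one closes. A sketch, with fakeRequest
again a hypothetical stand-in for one async XHR:

```javascript
// Sketch of chained calls: each fake async request opens only after
// the previous one closes -- a synchronous chain of asynchronous calls.
const log = [];

function fakeRequest(url) {
  return new Promise(function (resolve) {
    log.push('open ' + url);
    setTimeout(function () {
      log.push('close ' + url);
      resolve(url);
    }, 0);
  });
}

async function runChain(urls) {
  for (const url of urls) {
    await fakeRequest(url); // the next 'open' waits for this 'close'
  }
}

runChain(['a', 'b', 'c']);
// Synchronously, only 'open a' has happened; 'b' opens after 'a' closes.
```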

Getting to the meat of what you're describing -- and suspending for
now my and others' critique of your decision to mandate two remote
connections (client-to-server and server-to-server, both ill-advised)
for every single client request for an RSS2JSON feed -- you want to
make 600 client connections and display data as it comes in from each
URL. It's absolutely clear that you need async calls. The simplest
right way is probably a mixture of new XHRs every <n> iterations and
link:chain in between. A more advanced possibility is Comet-style
connections (beyond the scope of this e-mail), if you revisit your
assumption that the server should dumbly feed one URL per request.
Either way, you certainly haven't justified sync XHR, and I doubt you
can. :P
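To make the "new XHRs every <n> iterations, chained in between" idea
concrete: split the list into k independent chains, each working through
its share sequentially. All names here (runInChains, fetchOne, onResult)
are illustrative, not MooTools APIs:

```javascript
// Sketch: k independent chains, each chaining its share of the URLs.
// fetchOne stands in for one async request; onResult displays data as
// it arrives, per URL, without waiting for the whole batch.
function runInChains(urls, k, fetchOne, onResult) {
  const chains = [];
  for (let i = 0; i < k; i++) {
    const share = urls.filter(function (_, idx) { return idx % k === i; });
    chains.push((async function () {
      for (const url of share) {
        onResult(url, await fetchOne(url)); // one call at a time per chain
      }
    })());
  }
  return Promise.all(chains); // resolves once every chain is done
}

// Three chains over six URLs: exactly three calls start immediately.
const calls = [];
const fetchOne = function (url) {
  calls.push(url);
  return Promise.resolve('data for ' + url);
};
runInChains(['u1', 'u2', 'u3', 'u4', 'u5', 'u6'], 3, fetchOne, function () {});
```

With 600 URLs and, say, k = 6, you'd hold six connections open at any
moment -- inside the per-domain cap -- while results stream in steadily.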

-- S.
