But I prefer something very simple that solves the problem without touching too much. My patch stays small, which is important for it to be accepted. I also prefer that the behavior visible to user code does not change, so accumulating the EntityProxyChange events until all operations are processed is required. Breaking up success/failure looks like something that would cause unexpected problems in apps. I sometimes write code that expects all Receivers of one request to be called in the same browser event loop, so splitting the response of a request across multiple browser event loops looks like a bad idea to me.
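To make the idea concrete, here is a rough sketch of the approach (not the actual patch code; IncrementalResponseProcessor, processOperation, dispatchAccumulatedChanges and callReceivers are hypothetical names, not real RequestFactory internals): process the returned operations one per pass of a Scheduler.RepeatingCommand, buffer the EntityProxyChange notifications, and only when the last operation is done fire the events and call the Receivers, all in the same browser event loop.

import com.google.gwt.core.client.Scheduler;
import com.google.gwt.core.client.Scheduler.RepeatingCommand;

import java.util.LinkedList;
import java.util.Queue;

class IncrementalResponseProcessor {

  private final Queue<Object> pendingOperations = new LinkedList<Object>();

  void process(Iterable<Object> returnOperations) {
    for (Object op : returnOperations) {
      pendingOperations.add(op);
    }
    Scheduler.get().scheduleIncremental(new RepeatingCommand() {
      @Override
      public boolean execute() {
        // Handle one operation per pass so the browser event loop can breathe.
        Object op = pendingOperations.poll();
        if (op != null) {
          processOperation(op);
        }
        if (pendingOperations.isEmpty()) {
          // All operations processed: fire the buffered EntityProxyChange events
          // and invoke every Receiver in this same event loop, so user code sees
          // one consistent update, just like the synchronous implementation.
          dispatchAccumulatedChanges();
          callReceivers();
          return false; // stop rescheduling
        }
        return true; // more work left, run again in the next slice
      }
    });
  }

  void processOperation(Object op) { /* placeholder: apply one operation, buffer its EntityProxyChange */ }
  void dispatchAccumulatedChanges() { /* placeholder: fire the buffered EntityProxyChange events */ }
  void callReceivers() { /* placeholder: call onSuccess/onFailure on all Receivers */ }
}

The only visible difference is that other browser work can run between the passes; the Receivers still get called together at the end.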
On Thursday, July 10, 2014 7:53:14 PM UTC+2, Colin Alworth wrote:
>
> Jens, I think you may be mistaken on how far this patch moves the problem
> to the future - rather than break it into just another chunk, this patch
> appears to break into as many chunks as are required - one per message
> coming back from the server. The next bottleneck, if it can exist, appears
> to be now moved to processReturnOperation, which iterates over the current
> proxy and essentially hits each setter. In order for a single step to block
> long enough for an error, the proxy would need to have so many properties
> (at least on the order of thousands, if not hundreds of thousands) that it
> hangs.
>
> If it were me writing the patch, I'd go another step, and break up the
> calls to onSuccess/onFail too ;). However, *that* might end up having some
> ramifications from users expecting that all onSuccess calls run
> synchronously with each other, and users can fix long-running errors by
> doing the scheduler work in their own code.
>
> Truly massive object graphs might end up balking on the JSON.parse that
> happens in AutoBeanCodex.decode. That will be difficult to break up without
> rewriting decode to take a callback though.
>
> On Tue, Jul 8, 2014 at 11:36 AM, Jens <[email protected]> wrote:
>
>> Well in general I think its not a big issue to process the response in an
>> async way, however it just moves your problem into the future. Your patch
>> allows you to load more data from the server without blocking the browser.
>> However sooner or later the browser will block again because you probably
>> start loading even more data in the future and the chunks of work will
>> become too large again. But a maintainer of RequestFactory will decide if
>> its worth it.
>>
>> IMHO your real solution would be to rethink your UI / workflow so you
>> don't need load such a large amount of data at once. Out of curiosity: How
>> much data are you actually trying to transfer and which causes the browser
>> to block?
>>
>> As a side note: GWT does not accept pull requests on GitHub. You must
>> sign up on Gerrit and sign a CLA:
>> http://www.gwtproject.org/makinggwtbetter.html#submittingpatches
>>
>> -- J.
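Regarding the point above that users can do the scheduler work in their own code: that is possible today, without any change to RequestFactory, by chunking the heavy work inside the Receiver itself. A rough sketch, assuming a hypothetical MyProxy type and a renderRow() method standing in for whatever per-item work the application does:

import com.google.gwt.core.client.Scheduler;
import com.google.gwt.core.client.Scheduler.RepeatingCommand;
import com.google.web.bindery.requestfactory.shared.EntityProxy;
import com.google.web.bindery.requestfactory.shared.Receiver;

import java.util.Iterator;
import java.util.List;

/** Hypothetical proxy type; a real one would carry @ProxyFor(SomeDomainClass.class). */
interface MyProxy extends EntityProxy {
}

class ChunkedReceiver extends Receiver<List<MyProxy>> {
  @Override
  public void onSuccess(List<MyProxy> result) {
    final Iterator<MyProxy> it = result.iterator();
    Scheduler.get().scheduleIncremental(new RepeatingCommand() {
      @Override
      public boolean execute() {
        // Handle a small batch per pass, then yield back to the browser.
        for (int i = 0; i < 50 && it.hasNext(); i++) {
          renderRow(it.next());
        }
        return it.hasNext(); // keep running until the whole list is handled
      }
    });
  }

  private void renderRow(MyProxy proxy) { /* placeholder: heavy per-item application work */ }
}

That helps when the application's own work dominates, but it does not help when the time is spent inside RequestFactory decoding the payload, which is what the patch targets.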
