On 2015-09-18 20:44, Richard Gaskin wrote:
Peter Haworth wrote:
The dictionary says all actions that refer to a URL are blocking but if I
execute:

put URL myURL into tResults

... my handler immediately goes to the next statement and tResults contains
"error URL is currently loading".

The url in question points to an api, so I guess the error could be coming from there, but all the error messages I've received have been wrapped in XML.

No - that error is definitely coming from libURL.

We really need some clarity on this.  I've been using LC for years,
but whatever rules might allow me to anticipate whether network I/O is
blocking or non-blocking have eluded me in practice.

It is somewhat subtle - there are actually three different potential behaviors here:

1) Blocking without message dispatch - a command will not return until the operation is complete and will ensure that no messages are sent whilst the operation completes (equiv. to doing a wait without messages)

2) Blocking with message dispatch - a command will not return until the operation is complete but will allow messages to be sent (equiv. to doing a wait with messages)

3) Non-blocking - a command will return immediately and not cause any sort of wait.
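The three behaviors can be sketched with a toy message queue. This is an illustrative Python sketch, not the engine's actual implementation; the names (`dispatch_pending`, `blocking_with_dispatch`, etc.) are made up for the example. The key point is that in case 2 other handlers run *before* the blocking call returns:

```python
from collections import deque

message_queue = deque()   # pending messages, like the engine's event queue
log = []

def dispatch_pending():
    # deliver every message currently queued
    while message_queue:
        message_queue.popleft()()

def blocking_no_dispatch(steps):
    # 1) blocking without message dispatch: queued messages stay queued
    #    until the operation is finished (wait without messages)
    result = None
    for step in steps:
        result = step()
    log.append(("returned", result))
    return result

def blocking_with_dispatch(steps):
    # 2) blocking with message dispatch: between steps of the operation,
    #    pending messages are delivered, so other handlers run while
    #    this call has not yet returned (wait with messages)
    result = None
    for step in steps:
        result = step()
        dispatch_pending()
    log.append(("returned", result))
    return result

def non_blocking(step, callback):
    # 3) non-blocking: return immediately; the callback fires on a
    #    later trip through the message loop
    message_queue.append(lambda: callback(step()))

# demo: a message queued before a blocking-with-dispatch call is
# delivered mid-call, before the call returns
message_queue.append(lambda: log.append("message delivered mid-call"))
blocking_with_dispatch([lambda: 1, lambda: 2])
```

Running the demo leaves `log` holding the mid-call message first and the call's own completion second, which is exactly why case 2 can surprise you.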

All of libURL's functions which block do so with message dispatch - they have to, as libURL is written in LiveCode Script and needs messages from sockets to function.

Herein lies the problem. If you evaluate a remote URL chunk (e.g. url "http://livecode.com"), the evaluation will not return until it has the data, but it is still possible for your scripts to receive messages whilst it is happening. Now, as it stands, libURL will only allow a single connection to a given server at once (IIRC). So Peter's problem is that, whilst the URL operation is in flight, his scripts are receiving a message which causes the same or a similar URL to be fetched - the calls become nested (this is what you would call a re-entrancy problem) and thus libURL says 'no' to the subsequent request.

This is the only way things can really work with a feature such as blocking URL evaluation written in script, given the way the engine currently works - 'waits with messages' nest rather than running side-by-side.
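Peter's symptom can be reproduced with a small sketch. This is a hypothetical Python stand-in for libURL's single-connection-per-server bookkeeping (the `loading` set and function names are invented for illustration): a handler fired during the outer blocking fetch re-enters, asks for the same URL, and is refused:

```python
loading = set()   # hypothetical stand-in for libURL's per-server state

def get_url(url, pump_messages):
    # a blocking fetch that pumps messages while "waiting" on the socket
    if url in loading:
        return "error URL is currently loading"
    loading.add(url)
    try:
        pump_messages()          # message handlers fire inside this call
        return "payload for " + url
    finally:
        loading.discard(url)

nested_result = []

def handler_fired_mid_fetch():
    # a message handler that tries to fetch the same URL again --
    # this call nests inside the still-running outer call
    nested_result.append(get_url("http://example.com", lambda: None))

outer = get_url("http://example.com", handler_fired_mid_fetch)
# the outer call succeeds; the nested one gets the "currently loading" error
```

The outer call returns its payload normally, which is why the error seems to come from nowhere: it surfaces in the *nested* handler, not the statement you wrote.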

These days I almost always rely on callbacks, since I know those are
always non-blocking, though even then I'm not sure of the implications
in terms of overall performance, given that it provides us the
illusion of concurrency but without true parallelism.

I'm not sure that is a concern here. The kernel is definitely concurrent in its handling of sockets and the flow of data; all the app is doing is responding to notifications from the kernel when sockets have changed state in some way. For the sanity of app writing (and many other reasons!), this is much better done serially than 'truly concurrently' - particularly as most modern OSes do not allow truly concurrent UI manipulation.

(This is not to say that being able to offload long running computations / processes onto background threads would not be useful - just limited in utility particularly if you want to not go slightly mad trying to debug things).

Could someone on the core team draft a page that would help us
understand network I/O in terms of blocking and non-blocking, and how
non-blocking code is handled in a single-threaded engine?

As I said above, the fact the engine is 'single-threaded' (from the point of view of script, at least) isn't really important. The engine is almost a node.js model, but not quite. In this model you can fire off background processes which may be truly parallel, but the management of them (in terms of state changes, dispatch, closure) always happens on the main thread. If you look at the development of node.js, it achieves exceptionally high throughput with a great deal more ease than previous attempts to leverage true multi-threading.
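The shape of that model can be sketched in a few lines of Python (again, an illustration of the general pattern, not the engine's code): work may run truly in parallel on a thread pool, but every completion is funnelled through one queue and dispatched one at a time on the main thread:

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

completions = Queue()        # the single "main thread" mailbox

def start_task(pool, work, on_done):
    # the work itself may run truly in parallel on the pool...
    pool.submit(lambda: completions.put((on_done, work())))

handled = []
with ThreadPoolExecutor(max_workers=4) as pool:
    for n in (1, 2, 3):
        start_task(pool, lambda n=n: n * n, handled.append)
# (leaving the `with` block waits for all submitted work to finish)

# ...but completion handling is serialized: one at a time, one thread
while not completions.empty():
    on_done, result = completions.get()
    on_done(result)
```

No handler ever runs concurrently with another, so handler code needs no locks - that serialization is the point of the design.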

Whilst it is easy to think that 'going multi-threaded' would always be of benefit, in reality it rarely is. As soon as you have multiple things running simultaneously and wanting to communicate in some fashion, or access the same resources, you need locks. As soon as you need locks you start to, very quickly, lose the benefit of parallelism in the first place. We had just this experience with our attempts to parallelize rendering - initially it looked like a clear win, but it ended up being pointless, as the number of points at which the threads had to wait on locks to ensure consistency meant that any performance benefit was lost.

Using multiple threads is only *really* beneficial when you can divide your app's actions up into (long-running) bite-sized pieces of computation which need not interact with anything else at all whilst running.
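A minimal Python sketch of the lock problem described above: four threads sharing one counter, where every single step needs the same lock. The result is consistent, but the threads spend their time queueing for the lock rather than running in parallel - the degenerate case of what happened with the parallel renderer:

```python
import threading

lock = threading.Lock()
counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:           # every single step serializes here
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 40_000: consistency is preserved, but the four
# threads effectively ran one at a time
```

When the pieces of work are large and genuinely independent (no shared state, so no lock inside the loop), the same structure does pay off - which is the "bite-sized, non-interacting" condition above.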

Mark.

--
Mark Waddingham ~ m...@livecode.com ~ http://www.livecode.com/
LiveCode: Everyone can create apps

_______________________________________________
use-livecode mailing list
use-livecode@lists.runrev.com
Please visit this url to subscribe, unsubscribe and manage your subscription 
preferences:
http://lists.runrev.com/mailman/listinfo/use-livecode