Apologies for the top post, but it seemed easier than responding to the rather long single-track thread this has generated. :)

On 2017-12-04 12:32, Jonathan Lynch via use-livecode wrote:
> In looking at node.js it seems that two things stand out - LC server
> waits for the database to send a reply, rather than setting an event
> listener, and perhaps node.js launches faster when a request comes in?

Yes.

> Is this accurate? Could LC server be modified to run as fast as node?
> I would love to feel comfortable using LC server for millions of users
> simultaneously - which I realize would take more than just speeding up
> the server software.

Yes, it is accurate - and as for 'millions of users', that is more down to IT infrastructure than anything else.

In terms of the other things raised in this thread - here is my take...

Node.js consists of three things: libuv, Google's V8 JavaScript engine, and a mature package management system with a large set of prebuilt libraries.

libuv (https://nikhilm.github.io/uvbook/introduction.html) is a highly performant, low-level library (written in C) for handling IO on Windows and UNIX systems. It wraps the best approaches available in those two worlds (epoll/kqueue on UNIX-like systems, IOCP on Windows) in a non-blocking, event-driven API covering all forms of IO - pipes, sockets, files and inter-process communication. It aims to virtually eliminate the overhead involved in handling data flowing in and out of the process using it.

Node.js puts the V8 engine on top of libuv and wraps its API at a reasonably high level. The API allows JS to request IO operations, for which you provide a callback (whether or not it is wrapped in the syntactic sugar of futures/promises/continuations). The callbacks then call back into the V8 engine, again on the main (event-loop) thread.
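
To make that concrete, here is a minimal TypeScript sketch of the two faces of that API - the plain callback form and the promise sugar layered on top of it. Both queue the work with libuv and the continuation runs later on the event-loop thread (the file path is just an example):

    import { readFile } from "node:fs";
    import { readFile as readFilePromise } from "node:fs/promises";

    // Callback style: request the IO, supply a continuation; the callback
    // is invoked later on the main (event-loop) thread.
    readFile("/etc/hosts", "utf8", (err, data) => {
      if (err) {
        console.error("read failed:", err);
        return;
      }
      console.log("read", data.length, "characters (callback style)");
    });

    // Promise style: the same underlying mechanism, different syntactic sugar.
    readFilePromise("/etc/hosts", "utf8")
      .then((data) => console.log("read", data.length, "characters (promise style)"))
      .catch((err) => console.error("read failed:", err));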

Indeed, there really isn't 'that much' difference between LiveCode and Node.js from this point of view. There are two main differences, though:

1) Node.js has a much more efficient underlying IO handling system (libuv), in contrast to LiveCode's, which has mainly evolved around UI event loops.

2) Node.js offers a completely non-blocking API for all kinds of IO, in contrast to LiveCode, which only really offers non-blocking IO directly for sockets (a short sketch of that style follows below), although process IO can be handled with a simple polling loop.
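
By way of illustration (in Node terms, since that is the comparison), a tiny echo server which never waits for data - it registers listeners and lets the event loop call them as bytes arrive. LiveCode's message-based socket commands give a broadly similar callback-driven model:

    import { createServer } from "node:net";

    const server = createServer((socket) => {
      // Nothing here blocks: these handlers are called by the event loop
      // as data arrives on each connection.
      socket.on("data", (chunk) => socket.write(chunk)); // echo it back
      socket.on("error", (err) => console.error("socket error:", err));
    });

    server.listen(7000, () => console.log("echo server listening on port 7000"));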

In particular, Node.js doesn't natively use multiple threads (just like LiveCode), but there are packages which offer 'worker'-style threading as you see in the browser. These basically allow you to eliminate (in a restricted way) the overhead of spinning up a separate process and using IPC to share state - although the pros and cons of that largely depend on the application.
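
A minimal sketch of what that 'worker'-style threading looks like, shown here with the worker_threads module that later became a built-in part of Node (at the time of writing a package was needed, but the shape is the same): the main thread stays on the event loop while the worker does the heavy lifting and posts its result back as a message.

    import { Worker, isMainThread, parentPort, workerData } from "node:worker_threads";

    if (isMainThread) {
      // Spin up a worker running this same file (in practice, the compiled
      // .js file), handing it some input state to work on.
      const worker = new Worker(__filename, { workerData: { upTo: 10_000_000 } });
      worker.on("message", (sum) => console.log("worker finished, sum =", sum));
      worker.on("error", (err) => console.error("worker failed:", err));
    } else {
      // Worker side: do the CPU-heavy bit off the main thread, then post
      // the result back as a message.
      let sum = 0;
      for (let i = 0; i < workerData.upTo; i++) sum += i;
      parentPort?.postMessage(sum);
    }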

Of course, some Node.js packages *will* spin up separate threads to perform very specific things - such as using the libmysql client library, which uses blocking sockets to communicate with the DB, or resolving DNS names, which typically involves a blocking call (although on UNIX glibc wraps this in an async version). There are actually a few things in LiveCode which do something similar: DNS resolution spawns a temporary thread, tsNet runs a separate thread to handle libcurl's event loop, and the revandroid external spins up a separate thread to communicate with adb as a separate process.
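
The DNS case is easy to see from Node's own API: dns.lookup() wraps the OS's blocking getaddrinfo() call by running it on libuv's thread pool, whereas dns.resolve4() talks to DNS servers directly using a non-blocking resolver (c-ares) - yet both present the same callback-driven face to the script:

    import { lookup, resolve4 } from "node:dns";

    // lookup() runs the blocking getaddrinfo() call on libuv's thread pool...
    lookup("livecode.com", (err, address) => {
      if (err) return console.error("lookup failed:", err);
      console.log("lookup (blocking call on a pool thread):", address);
    });

    // ...while resolve4() uses a genuinely non-blocking resolver.
    resolve4("livecode.com", (err, addresses) => {
      if (err) return console.error("resolve4 failed:", err);
      console.log("resolve4 (non-blocking resolver):", addresses);
    });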

In the future it would be nice to evolve the LiveCode engine to be more Node.js-like - i.e. use libuv as the underlying IO library and ensure there is easy-to-use syntax for every IO operation you might want to perform. Indeed, it should really go further: in recent JS versions you can now write blocking-like code which runs as if it is non-blocking - essentially, JS has gained a form of 'wait' (something LiveCode has had for a long time). However, LiveCode's current implementation of wait is recursive rather than round-robin - something which can (and will!) be changed when we solve the wait problem for HTML5 (to which I've finally found a workable and incremental approach).
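
A minimal sketch of that 'blocking-like code which runs as if it is non-blocking': each await suspends just this one function and hands control back to the event loop, much like a well-behaved 'wait':

    import { readFile } from "node:fs/promises";

    async function report(path: string): Promise<void> {
      // Reads like blocking code, but the await merely suspends this
      // function until the IO completes; the event loop keeps running.
      const data = await readFile(path, "utf8");
      console.log(`${path}: ${data.length} characters`);
    }

    // Nothing else is held up while report() is suspended at its await:
    report("/etc/hosts").catch((err) => console.error("report failed:", err));
    console.log("this line prints before the report does");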

Of course that is the future - so what about right now?

Well, the first part is to allow certain operations which currently block to be non-blocking (i.e. callback driven) - database access is one of these. There are two options here: (1) do some low-level C fettling to run revdb calls on separate threads and use a callback, or (2) write appropriate DB protocol libraries in LiveCode Script using its non-blocking sockets. The latter might sound like 'why bother?', but DB protocols change rarely, and there is a huge overhead in maintaining and shipping any large blob of compiled C++ when you target umpteen different OS/architecture combinations. Of course, some DB libraries are naturally non-blocking, in which case wrapping them in LCB would be a suitable option.
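
As a very rough sketch of the shape of option (2) - a callback-driven query over a non-blocking socket - expressed here in Node/TypeScript terms and against a purely hypothetical line-oriented protocol (real protocols such as MySQL's are binary and considerably more involved); the host, port and reply framing below are made up for illustration:

    import { connect } from "node:net";

    type ReplyCallback = (err: Error | null, reply?: string) => void;

    // Send one query and invoke the callback once a complete reply has
    // arrived; nothing blocks while waiting for the server.
    function query(host: string, port: number, sql: string, cb: ReplyCallback): void {
      const socket = connect(port, host, () => socket.write(sql + "\n"));
      let buffered = "";
      socket.on("data", (chunk) => {
        buffered += chunk.toString("utf8");
        // Hypothetical framing: assume the server ends each reply with a newline.
        if (buffered.includes("\n")) {
          cb(null, buffered.trimEnd());
          socket.end();
        }
      });
      socket.on("error", (err) => cb(err));
    }

    // Usage: the callback fires later, on the event loop, when the reply arrives.
    query("db.example.com", 5555, "SELECT 1", (err, reply) => {
      if (err) console.error("query failed:", err);
      else console.log("reply:", reply);
    });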

Beyond that, I really do think it mostly comes down to infrastructure. You need a gateway process which sits and handles the front-end connections really efficiently (e.g. like a webserver), and then passes them off to continually running processes which each run a single 'session' at a time. (Of course, you can multiplex multiple user sessions into a single process once you have non-blocking versions of things like DB access - which saves the time of loading shared state again and again.) Node.js is particularly good at both the front-end hand-off and the session processes - which is why you can probably say that 'Node.js is faster than LiveCode Server'.
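
As an illustration of that gateway/session split (in Node terms, using its cluster module as a stand-in - not how LiveCode Server is structured today): the primary process accepts the front-end connections and hands them off to a pool of continually running worker processes, each of which keeps its loaded state between requests:

    import cluster from "node:cluster";
    import { createServer } from "node:http";
    import { cpus } from "node:os";

    if (cluster.isPrimary) {
      // Gateway side: fork one long-lived session process per CPU and
      // replace any that dies.
      for (let i = 0; i < cpus().length; i++) cluster.fork();
      cluster.on("exit", () => cluster.fork());
    } else {
      // Session side: state is loaded once and reused across many requests,
      // rather than being loaded again and again per request.
      const sharedState = { loadedAt: new Date().toISOString() };
      createServer((_req, res) => {
        res.end(`handled by pid ${process.pid}, state loaded at ${sharedState.loadedAt}\n`);
      }).listen(8080);
    }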

Like most cases of optimization, you have to be careful to determine where the true bottlenecks are. In a system relying on a DB to provide the 'shared state' / datastore, the DB queries will almost always completely dominate *everything else*, so the overhead of the language being used to access them is pretty much immaterial.

Warmest Regards,

Mark.

--
Mark Waddingham ~ m...@livecode.com ~ http://www.livecode.com/
LiveCode: Everyone can create apps
