Re: LiveNode Server

2015-04-14 Thread Mark Wilcox
This is an interesting thread. Let me add a few comments:

The CGI / FastCGI thing is a red herring. CGI is slow and so is FastCGI.
:)
Slightly faster options (e.g. for PHP and Python) embed the language
runtime in the web server via a plugin module.
However, things like Node.js and Go are fast because they don't have
Apache or similar in the way. I once saw a presentation by a guy who
had built the most carefully optimised (and very popular) HTTP proxy in
Python and then discovered that a very simplistic implementation with
Node.js was more than an order of magnitude faster. This was because
Apache was out of the loop, not because JavaScript is so much faster
than Python (it is faster but nowhere near that much).

Now Apache and other web servers are really good at serving up files but
these days we often want to build APIs. If you've got static files
you're better off putting them on Amazon S3 or similar. For taking in a
chunk of JSON, querying a database, and spitting some more JSON back the
other way, you're better off with something architecturally like Node.js.
The fact that the whole ecosystem has been forced to do everything
asynchronously is a big help. JavaScript and its single threaded nature
are not virtues here but not a fatal handicap either. Someone mentioned
Meteor.js - that's built on Node.js but they actually use Node Fibers,
basically lightweight threads (that have to yield explicitly - no
pre-emption) - it makes the code a whole lot more readable.

Not having real threads is bad when it comes to processor intensive
operations but Node.js got a very long way without them and I think a
Node-like LiveCode server could too. It would need to have proper
asynchronous I/O everywhere though and that's not trivial from where we
are now. It is definitely a project worth pursuing if anyone has the
time. The quickest way to get there might be to integrate libevent (as
used in Chromium) rather than libuv (as used in Node.js) because it
comes with DNS, HTTP and SSL support - you really want all of those in
the C/C++ layer, not implemented in LiveCode on top of a raw socket. It
might not be the best way overall but alternatives probably involve
quite a lot more work.

Adding something like blocks in Objective-C (very closure-like), or
Node's TAGG, to enable real multi-threaded programming but limited to
avoid all the usual thread synchronisation headaches would also be
fantastic for LiveCode but another big chunk of work. There didn't seem
much point in even looking at any of these things while the engine was
undergoing a major architectural overhaul but maybe things are starting
to stabilise enough now that it'll make sense to think about them again
soon.

Mark

-- 
  Mark Wilcox
  m...@sorcery-ltd.co.uk

On Wed, Apr 8, 2015, at 02:47 PM, Andrew Kluthe wrote:
 To clarify just a little bit further. The code and objects weren't
 holding onto memory; the variables used in that code were, due to weird
 scoping. Big chunks of db results, etc. that persist after I've already
 done my business with them and tried to move on.
 
 If I can recommend a book on JavaScript, I can't speak highly enough of
 the insights given by JavaScript: The Good Parts from O'Reilly. Crockford
 provides some history behind some of the design choices in JavaScript
 and some of the problems still being worked around today in regards to
 the bad parts.
 

___
use-livecode mailing list
use-livecode@lists.runrev.com
Please visit this url to subscribe, unsubscribe and manage your subscription 
preferences:
http://lists.runrev.com/mailman/listinfo/use-livecode


Re: LiveNode Server

2015-04-14 Thread David Bovill
Yes - thanks for the input Mark.

How about having LiveCode as a Node extension that we could install with
NPM? Is that not a much easier first step? I'd still like to get to the
bottom of what not being forkable means.

It would be fantastic to be able to add LiveCode to a Node server with a
couple of lines, particularly for serving dynamically created images. I
guess this route will also be possible when we have JavaScript export?

On Tuesday, April 14, 2015, Jim Lambert j...@netrin.com wrote:


  Mark Wilcox wrote:
 
  This is an interesting thread.


 Indeed it is. Thanks for your informative comments.

 Jim Lambert



Re: LiveNode Server

2015-04-14 Thread Jim Lambert

 Mark Wilcox wrote:
 
 This is an interesting thread.


Indeed it is. Thanks for your informative comments.

Jim Lambert


Re: LiveNode Server

2015-04-14 Thread Mark Wilcox
 On 14 Apr 2015, at 18:17, David Bovill david@viral.academy wrote:
 
 Yes - thanks for the input Mark.
 
 How about having LiveCode as a Node extension that we could install with
 NPM? Is that not a much easier first step? I'd still like to get to the
 bottom of what not being forkable means.
Unless you want to ban most of the language, you'd still have to replace or
integrate the event loop with libuv, which is probably not a small or easy job.
If a future LiveCode has a more modular core you might be able to take just the 
bits you need.

 
 It would be fantastic to be able to add LiveCode to a Node server with a
 couple of lines, particularly for serving dynamically created images. I
 guess this route will also be possible when we have JavaScript export?
 
I think it would be possible with JavaScript export but that's a very 
heavyweight solution. Dynamically creating images is the sort of processor 
intensive task where Node performs badly. It would probably make more sense to 
have a separate LiveCode executable that can run in another process on another 
CPU core on the same server. You could signal it via sockets from the Node 
server, or even just communicate via the file system.

 On Tuesday, April 14, 2015, Jim Lambert j...@netrin.com wrote:
 
 
 Mark Wilcox wrote:
 
 This is an interesting thread.
 
 
 Indeed it is. Thanks for your informative comments.
 
 Jim Lambert



Re: LiveNode Server

2015-04-08 Thread David Bovill
Thanks for this Andrew - I learned a lot. I spend a lot of time passing
messages around Livecode objects with a view to making them standalone code
chunks. Debugging works pretty well - but there was a need for a library
and a graphing mechanism - design pattern style. This all adds overhead I
guess when it comes to just reading code, and the design patterns for
callbacks that closures seem to encourage makes this easier by itself?

One thing that would be good to know more about with regard to LiveCode is
the memory handling - so when does a piece of code get released from
memory? My understanding is that it does not - except possibly when the
stack it resides in is deleted from memory?

Otherwise my understanding is that like with Hypercard the script is
compiled to bytecode the first time it is executed (which is a relatively
slow step), but thereafter resides in memory to be executed when needed. Is
that about right?

It makes me think of an architecture in LiveCode using [[dispatch]] where a
stack is loaded that contains the needed code should the dispatch call not
be handled by the message hierarchy - by default this stack could be
deleted after it is called, releasing it from memory. Commonly called
handlers could be loaded beforehand by a different command and therefore
stay in memory.


On 7 April 2015 at 21:21, Andrew Kluthe and...@ctech.me wrote:

 1. Livecode messaging is fully asynchronous. Not semi-async.

 Right, when I said semi-async I was referring to the single threadedness of
 livecode (which node shares) along with all the baked into livecode stuff
 that blocks up messages currently: accessing a large file on disk, posting
 some information to a web service with a large json payload being returned.
 It's async, with some pretty hard to work around exceptions (url library
 specifically has been the primary source of past frustration in this way).

 3. Livecode does not have closures = passing anonymous callbacks as
 params to functions so they can be executed later

 As for anonymous callbacks, I totally agree. Most early Node development
 had to overcome the callback hell that these patterns introduce. However,
 almost all of the nodejs projects and libraries I've worked with leveraged
 them heavily or exclusively. Promises seem to have become the standard way
 of dealing with the callback hell that node was so famous for for a long
 time. Why does node use anonymous functions over the method you linked to
 in the article? Anonymous functions are marked for garbage collection
 immediately after being returned. All other functions at the global scope
 run the risk of needlessly using memory after they run. I've gotten into
 some hairy situations with memory management with these kinds of named
 callbacks (specifically for database access and return of lots of results
 when not scoped correctly).

 Passing a function (not just a name of a function to be used with a send or
 a dispatch later on) as a parameter even in your article still demonstrates
 something LC just can't do currently. In the article he's still using
 closures, it's just got a name instead of being anonymous. It's still a
 closure. LC has ways to accomplish similar things by passing names of
 functions and using dispatch, but I think it's not exactly the same.
 Closures are part of the reason node.js works the way it does and closures
 are one of the primary reasons javascript was chosen for node. It's
 certainly possible to do async without them, but closures are what makes it
 easy and kind of a fundamental principle to working in node.js.

 4. But we can easily call / dispatch calls to functions by passing names
 around and we can limit scope by using private handlers or libraries.

 Sure, there is nothing STOPPING us from implementing named callbacks in the
 current fashion or passing the named callback references dynamically as you
 and I mentioned, but from experience trying it this way I feel like it
 makes maintaining large projects built this way a lot more difficult. To
 the point where I ended up completely redoing most of the livecode stuff
 I've written in this way early on because it was getting to be a nightmare
 to maintain completely separate callback functions rather than the sort
 of nested structure you get in node with callbacks. It takes a lot of
 discipline in placement and grouping of the code that is related in this
 way to come back later and make sense of it. In summary: it can be done,
 but that doesn't mean that it SHOULD be done.

 Kind of a weird long post there. Sorry for the length and probable
 repetition of my points.


 Also, this was something really neat I've used recently to make node work
 in-process with some .NET applications we have. Something that does this
 with node and LC would indeed be the bees knees.


 http://www.hanselman.com/blog/ItsJustASoftwareIssueEdgejsBringsNodeAndNETTogetherOnThreePlatforms.aspx

 Specifically the part about it allowing us to write node 

Re: LiveNode Server

2015-04-08 Thread David Bovill
Yes I second that - async file and network I/O. And full REST support -
PATCH I think is not supported - or is that documented?

On 7 April 2015 at 23:44, Andrew Kluthe and...@ctech.me wrote:

 I'm not using LC server side much so I can't say for sure there in
 reference to this thread and the things we've been discussing. I think the
 direction livecode is going and the state that it is/was (I still use 5.5
 for a lot of things) in to be great.

 If we can get as many of the blocking bits down to a minimum as possible
 (specifically the url libraries), I think it would be perfect. The thing
 that peeved me most is that most of my DB work is not done by directly
 connecting to the database but some sort of api layer. Usually my LC apps
 are just clients for these apis (often built in Node or python if they were
 made in-house). I like the flexibility this gives me. They post some JSON
 and get a JSON payload back. If the payload is large, I've had to do things
 like use curl and some other things to make up for the built-in super
 convenient internet library just sitting locking the application while it
 waits to return. I've converted entire applications out of LC into other
 technology stacks just because of the kludge needed for this one thing. I'd
 love to be able to stream this stuff in a little bit at a time as well. I
 can get some desired results with regular GET request using load url with a
 callback but it doesn't help when I have to post a more complex query. This
 happens in my .NET apps as well, but I use the parallel task libraries .NET
 has to get around the UI lockups. I've been spoiled on some of visual
 studio's tooling features in the meantime too :P (intellisense, jump to
 definitions, some other things that i think will come to LC in time).

 I also have a node-webkit (now nw.js) application that I think would be
 perfectly suited to be done in livecode once things stabilize a bit (this
 has already started to happen) with the newer builds using Chrome Embedded
 Framework. I needed something with all the fine tuned styling I could get
 from web app we already have but running as a standalone against SQLite DB.
 We did this to reuse the same visual cues and javascript libraries that we
 use on the web version. We wanted a copy of the web application that could
 run completely without the internet. I think with just a bit of time, I
 could have used LC to do this comfortably.

 The short answer? A URL library that can read a file off disk
 asynchronously (I think this can be done now using some of the other ways
 of doing disk access in LC, but it would be nice if the url(binfile:) bit
 did the same thing) and a URL library that can return the response of a
 POST asynchronously (preferably returning chunks as they come in).

 The widgets architecture sets itself up to solve all of my other potential
 wants/needs, maybe even this one.

 On Tue, Apr 7, 2015 at 4:19 PM Richard Gaskin ambassa...@fourthworld.com
 wrote:

  Andrew Kluthe wrote:
 
  1. Livecode messaging is fully asynchronous. Not semi-async.
  
   Right, when I said semi-async I was referring to the single
 threadedness
  of
   livecode (which node shares) along with all the baked into livecode
 stuff
   that blocks up messages currently: accessing a large file on disk,
  posting
   some information to a web service with a large json payload being
  returned.
   It's async, with some pretty hard to work around exceptions (url
 library
   specifically has been the primary source of past frustration in this
  way).
  
  3. Livecode does not have closures = passing anonymous callbacks as
   params to functions so they can be executed later
  
   As for anonymous callbacks, I totally agree. Most early Node
 development
   had to overcome the callback hell that these patterns introduce.
 However,
   almost all of the nodejs projects and libraries I've worked with
  leveraged
   them heavily or exclusively. Promises seem to have become the standard
  way
   of dealing with the callback hell that node was so famous for for a
 long
   time. Why does node use anonymous functions over the method you linked
 to
   in the article? Anonymous functions are marked for garbage collection
   immediately after being returned. All other functions at the global
 scope
   run the risk of needlessly using memory after they run. I've gotten
 into
   some hairy situations with memory management with these kinds of named
   callbacks (specifically for database access and return of lots of
 results
   when not scoped correctly).
  
   Passing a function (not just a name of a function to be used with a
 send
  or
   a dispatch later on) as a parameter even in your article still
  demonstrates
   something LC just can't do currently. In the article he's still using
   closures, it's just got a name instead of being anonymous. It's still a
   closure. LC has ways to accomplish similar things by passing names of
   functions and using dispatch, but I 

Re: LiveNode Server

2015-04-08 Thread Andrew Kluthe
To clarify just a little bit further. The code and objects weren't holding
onto memory; the variables used in that code were, due to weird scoping. Big
chunks of db results, etc. that persist after I've already done my business
with them and tried to move on.

If I can recommend a book on JavaScript, I can't speak highly enough of the
insights given by JavaScript: The Good Parts from O'Reilly. Crockford provides
some history behind some of the design choices in JavaScript and some of
the problems still being worked around today in regards to the bad parts.
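The scoping hazard described above can be sketched in a few lines: a callback that closes over a full result set keeps it reachable for as long as the callback lives, whereas extracting only what you need lets the big data be collected.

```javascript
// Sketch of the scoping hazard: a callback that closes over the full
// result set keeps it reachable for as long as the callback is alive.
function makeHandler(rows) {
  // Captures ALL of `rows`, even though only the count is needed.
  return function onDone() { return rows.length; };
}

// Safer: pull out what you need first, so `rows` can be collected.
function makeLeanHandler(rows) {
  var count = rows.length;
  return function onDone() { return count; };
}

var bigResult = new Array(1000).fill({ id: 1 });
var handler = makeLeanHandler(bigResult);
bigResult = null; // the big array is now collectible
console.log(handler()); // 1000
```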

On Wed, Apr 8, 2015 at 8:41 AM Andrew Kluthe and...@ctech.me wrote:

 I haven't had many problems with livecode chewing up memory and not
 letting it go (unless I've done something obvious like stash it someplace
 where I would expect it to persist). I think JS in general is prone to
 memory leaks just because of how much of it was designed around the use of
 global variables. All the scoping improvements we've had over the years
 were kind of grafted on top of this design to try and address this.

 In Livecode, memory leaks happen if you are really reckless.

 In most of the JS environments (node, browsers), they happen when you
 aren't careful.

 On Wed, Apr 8, 2015 at 4:54 AM David Bovill david@viral.academy wrote:

 Thanks for this Andrew - I learned a lot. I spend a lot of time passing
 messages around Livecode objects with a view to making them standalone
 code
 chunks. Debugging works pretty well - but there was a need for a library
 and a graphing mechanism - design pattern style. This all adds overhead I
 guess when it comes to just reading code, and the design patterns for
 callbacks that closures seem to encourage makes this easier by itself?

 One thing that would be good to know more about with regard to Livecode is
 the memory handling - so when does a piece of code get released from
 memory? My understanding is that it does not - except possibly when the
 stack it resides in is deleted from memory?

 Otherwise my understanding is that like with Hypercard the script is
 compiled to bytecode the first time it is executed (which is a relatively
 slow step), but thereafter resides in memory to be executed when needed.
 Is
 that about right?

 It makes me think of an architecture in Livecode using [[dispatch]] where
 a
 stack is loaded that contains the needed code should the dispatch call not
 be handled by the message hierarchy - by default this stack could be
 deleted after it is called - so releasing it from memory. Commonly called
 handlers could be loaded beforehand by a different command and therefore
 stay in memory.


 On 7 April 2015 at 21:21, Andrew Kluthe and...@ctech.me wrote:

  1. Livecode messaging is fully asynchronous. Not semi-async.
 
  Right, when I said semi-async I was referring to the single
 threadedness of
  livecode (which node shares) along with all the baked into livecode
 stuff
  that blocks up messages currently: accessing a large file on disk,
 posting
  some information to a web service with a large json payload being
 returned.
  It's async, with some pretty hard to work around exceptions (url library
  specifically has been the primary source of past frustration in this
 way).
 
  3. Livecode does not have closures = passing anonymous callbacks as
  params to functions so they can be executed later
 
  As for anonymous callbacks, I totally agree. Most early Node development
  had to overcome the callback hell that these patterns introduce.
 However,
  almost all of the nodejs projects and libraries I've worked with
 leveraged
  them heavily or exclusively. Promises seem to have become the standard
 way
  of dealing with the callback hell that node was so famous for for a long
  time. Why does node use anonymous functions over the method you linked
 to
  in the article? Anonymous functions are marked for garbage collection
  immediately after being returned. All other functions at the global
 scope
  run the risk of needlessly using memory after they run. I've gotten into
  some hairy situations with memory management with these kinds of named
  callbacks (specifically for database access and return of lots of
 results
  when not scoped correctly).
 
  Passing a function (not just a name of a function to be used with a
 send or
  a dispatch later on) as a parameter even in your article still
 demonstrates
  something LC just can't do currently. In the article he's still using
  closures, it's just got a name instead of being anonymous. It's still a
  closure. LC has ways to accomplish similar things by passing names of
  functions and using dispatch, but I think it's not exactly the same.
  Closures are part of the reason node.js works the way it does and
 closures
  are one of the primary reasons javascript was chosen for node. It's
  certainly possible to do async without them, but closures are what
 makes it
  easy and kind of a fundamental principle to working in node.js.
 
  4. But we can easily call / dispatch 

Re: LiveNode Server

2015-04-08 Thread Andrew Kluthe
I haven't had many problems with livecode chewing up memory and not letting
it go (unless I've done something obvious like stash it someplace where I
would expect it to persist). I think JS in general is prone to memory leaks
just because of how much of it was designed around the use of global
variables. All the scoping improvements we've had over the years were kind
of grafted on top of this design to try and address this.

In Livecode, memory leaks happen if you are really reckless.

In most of the JS environments (node, browsers), they happen when you
aren't careful.

On Wed, Apr 8, 2015 at 4:54 AM David Bovill david@viral.academy wrote:

 Thanks for this Andrew - I learned a lot. I spend a lot of time passing
 messages around Livecode objects with a view to making them standalone code
 chunks. Debugging works pretty well - but there was a need for a library
 and a graphing mechanism - design pattern style. This all adds overhead I
 guess when it comes to just reading code, and the design patterns for
 callbacks that closures seem to encourage makes this easier by itself?

 One thing that would be good to know more about with regard to Livecode is
 the memory handling - so when does a piece of code get released from
 memory? My understanding is that it does not - except possibly when the
 stack it resides in is deleted from memory?

 Otherwise my understanding is that like with Hypercard the script is
 compiled to bytecode the first time it is executed (which is a relatively
 slow step), but thereafter resides in memory to be executed when needed. Is
 that about right?

 It makes me think of an architecture in Livecode using [[dispatch]] where a
 stack is loaded that contains the needed code should the dispatch call not
 be handled by the message hierarchy - by default this stack could be
 deleted after it is called - so releasing it from memory. Commonly called
 handlers could be loaded beforehand by a different command and therefore
 stay in memory.


 On 7 April 2015 at 21:21, Andrew Kluthe and...@ctech.me wrote:

  1. Livecode messaging is fully asynchronous. Not semi-async.
 
  Right, when I said semi-async I was referring to the single threadedness
 of
  livecode (which node shares) along with all the baked into livecode stuff
  that blocks up messages currently: accessing a large file on disk,
 posting
  some information to a web service with a large json payload being
 returned.
  It's async, with some pretty hard to work around exceptions (url library
  specifically has been the primary source of past frustration in this
 way).
 
  3. Livecode does not have closures = passing anonymous callbacks as
  params to functions so they can be executed later
 
  As for anonymous callbacks, I totally agree. Most early Node development
  had to overcome the callback hell that these patterns introduce. However,
  almost all of the nodejs projects and libraries I've worked with
 leveraged
  them heavily or exclusively. Promises seem to have become the standard
 way
  of dealing with the callback hell that node was so famous for for a long
  time. Why does node use anonymous functions over the method you linked to
  in the article? Anonymous functions are marked for garbage collection
  immediately after being returned. All other functions at the global scope
  run the risk of needlessly using memory after they run. I've gotten into
  some hairy situations with memory management with these kinds of named
  callbacks (specifically for database access and return of lots of results
  when not scoped correctly).
 
  Passing a function (not just a name of a function to be used with a send
 or
  a dispatch later on) as a parameter even in your article still
 demonstrates
  something LC just can't do currently. In the article he's still using
  closures, it's just got a name instead of being anonymous. It's still a
  closure. LC has ways to accomplish similar things by passing names of
  functions and using dispatch, but I think it's not exactly the same.
  Closures are part of the reason node.js works the way it does and
 closures
  are one of the primary reasons javascript was chosen for node. It's
  certainly possible to do async without them, but closures are what makes
 it
  easy and kind of a fundamental principle to working in node.js.
 
  4. But we can easily call / dispatch calls to functions by passing
 names
  around and we can limit scope by using private handlers or libraries.
 
  Sure, there is nothing STOPPING us from implementing named callbacks in
 the
  current fashion or passing the named callback references dynamically as
 you
  and I mentioned, but from experience trying it this way I feel like it
  makes maintaining large projects built this way a lot more difficult. To
  the point where I ended up completely redoing most of the livecode stuff
  I've written in this way early on because it was getting to be a
 nightmare
  to maintain completely separate callback functions rather than the 

Re: LiveNode Server

2015-04-07 Thread David Bovill
OK. A few questions... I'll post them as assertions to aid clarity. There
may well be mistakes - so please let me and others know if there is anything
wrong below:

  1. Livecode messaging is fully asynchronous. Not semi-async.
  2. There are a bunch of functions that are currently synchronous in
LiveCode that make it difficult to create asynchronous code - such as
certain network call like POST.
  3. Livecode does not have closures = passing anonymous callbacks as
params to functions so they can be executed later
  4. But we can easily call / dispatch calls to functions by passing names
around and we can limit scope by using private handlers or libraries.
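Point 3 above is the crux. In JavaScript an anonymous function travels as an ordinary value; the `query` function below is a hypothetical stand-in for a database call, invoking its callback directly so the pattern is visible:

```javascript
// An anonymous function passed as a parameter -- the pattern LiveCode
// can only approximate by passing a handler *name* to send/dispatch.
function query(sql, callback) {
  // Hypothetical stand-in for a database call; hands the callback a
  // fake result row.
  callback(null, { sql: sql, rows: 1 });
}

var seen;
query("SELECT 1", function (err, result) {
  // This anonymous closure captures `seen` from the enclosing scope.
  seen = result.rows;
});
console.log(seen); // 1
```

In real Node the callback would fire later, after the I/O completes, but the closure would still see the same enclosing variables — that capture is what makes the async style workable.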

Here is an article about why you should not use anonymous callbacks that
seems interesting in the context of readability and literate programming
languages:

  * Avoiding anonymous JavaScript functions
http://toddmotto.com/avoiding-anonymous-javascript-functions/

@Andrew thanks for your feedback, but as I've not used closures in real
work I can't see how the lack of them stops us creating an async
event-driven callback model for a server?

On Monday, April 6, 2015, Andrew Kluthe and...@ctech.me wrote:

 I think the real missing piece in making LC work like node's event loop
 would be anonymous callback functions that can be treated like other
 variables. We can do semi- async stuff using messages in LC but you'd have
 to either name separate callback functions or dynamically pass the names of
 separately defined callback functions. We've got no way to pass an
 anonymous function as a param to something like you can with js.

 On Sun, Apr 5, 2015, 10:21 AM Richard Gaskin ambassa...@fourthworld.com
 javascript:;
 wrote:

  David Bovill wrote:
 
On 5 April 2015 at 05:01, Richard Gaskin wrote:
   
David Bovill wrote:
 I am not quite sure what not being forkable is here - can you
 explain.
   
Not as well as Andre:
   
  http://lists.runrev.com/pipermail/use-livecode/2009-January/119437.html
 
 
 
    Ok - so the key sentence there is - "We can't fork in revolution."
So what does that mean? What is so special about Livecode that
it can't do this?
It's not multi-threading - it's something ?
   
My thinking is that what we need is to be able to have some existing
monitoring service keep a pool of LiveNode servers up and running -
in a way in which you can configure the number of servers you need.
Then you need a Node load balancing server / broker thing passing off
messages asynchronously to a LiveNode server and immediately
    returning control to the user. Only when all the LiveNode servers
    were used up would a queue kick into action?
   
This is all standard server / inter-application messaging stuff no?
What prevents us doing that in Livecode?
 
  As you read in Andre's post I linked to, that's more or less what he
  proposes as an alternative to FastCGI.
 
  If one is willing to put the time into assembling such a
  multi-processing pool, the downsides relative to having forking appear
  to be somewhat minor, not likely the sort of thing we'd run into south
  of the C10k problem.
 
  What have you run into in trying this that isn't working?
 
  --
Richard Gaskin
Fourth World Systems
Software Design and Development for Desktop, Mobile, and Web

ambassa...@fourthworld.comhttp://www.FourthWorld.com
 



Re: LiveNode Server

2015-04-07 Thread Richard Gaskin

David Bovill wrote:

 OK. A few questions... I'll post them as assertions to aid clarity.

Personally I find it clearer to read questions as questions, but with 
that explanation I can work with this:


   1. Livecode messaging is fully asynchronous. Not semi-async.

What is semi-asynchronous in the context of LC?

There is a distinction between "asynchronous" and "non-blocking" that
I'm not entirely clear on, so I'll use "non-blocking" for now:


Socket I/O messaging can be non-blocking when the "with message"
option is used, e.g.:


   accept connections on port  with message "GotConnection"

But this is of limited value within a single LC instance, since doing 
anything with that message is likely going to involve blocking code 
(e.g., reading a file or accessing a database, doing something with that 
data, and then returning it).


So messages will keep coming in, but they'll queue up.  This may be fine 
for light loads, but when expecting multiple simultaneous connections it 
would be ideal to handle the tasks more independently.
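
The single-queue behaviour described above can be sketched in miniature. The following is plain JavaScript with purely illustrative names (not a LiveCode or Node API): each incoming message joins one queue, and every handler must wait for the ones queued ahead of it.

```javascript
// Minimal sketch of a single-threaded message queue: handlers run
// strictly one at a time, so request 2 waits for request 1 to finish.
function makeQueue() {
  const tasks = [];
  return {
    post(fn) { tasks.push(fn); },   // a message arriving (e.g. a connection)
    drain() {                       // the event loop emptying the queue
      const results = [];
      while (tasks.length) results.push(tasks.shift()());
      return results;
    }
  };
}

const q = makeQueue();
q.post(() => 'request-1 handled');
q.post(() => 'request-2 handled'); // queued behind request-1
const order = q.drain();
console.log(order); // -> [ 'request-1 handled', 'request-2 handled' ]
```

If the first task blocks (a slow file read, a long DB query), everything behind it in `tasks` simply waits, which is the backlog problem under discussion.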


Many programs do this with threading, but we don't have threading in LC.

Instead of multithreading we can use multiprocessing, having multiple 
instances of LC apps each working on a separate task.


The challenge is that to hand off a socket request to a child process 
entirely would require us to have some way of handing over the socket 
connection itself.  I believe fork allows this, but I know of no way to 
launch new instances of an LC app in a way that will hand over the 
socket connection to them.


In lieu of being able to fork, two options I know of are the ones I 
noted earlier; others may be possible as well, but these seem common 
enough to be reasonable starting points:

http://lists.runrev.com/pipermail/use-livecode/2015-April/213208.html



   2. There are a bunch of functions that are currently synchronous in
 LiveCode that make it difficult to create asynchronous code - such as
 certain network calls like POST.

Yes, as above.


   3. Livecode does not have closures = passing anonymous callbacks as
 params to functions so they can be executed later

Not per se, but as you note:

   4. But we can easily call / dispatch calls to functions by passing
 names around and we can limit scope by using private handlers or
 libraries.

 Here is an article about why you should not use anonymous callbacks
 that seems interesting in the context of readability and literate
 programming languages:

   * Avoiding anonymous JavaScript functions
 http://toddmotto.com/avoiding-anonymous-javascript-functions/

Good find.  While most of that is very JS-specific, the 
readability/complexity argument applies in LC as well.  As a general 
rule, I feel that "do" is a last resort when all more direct means of 
accomplishing something have been exhausted, and even "dispatch", 
"send", and "value" can have similar impact in terms of 
debugging/maintenance.


I almost never use "do" anywhere, and on servers I limit use of the 
others for another reason not mentioned in the article, security: since 
they execute arbitrary code by design, "do", "dispatch", "send" and 
"value" can potentially be sources of injection when used on any part of 
incoming data.


I use "dispatch" in one place in my server framework, but only after 
checking an incoming command argument against a list of known one-word 
commands; any request missing a command parameter, or having one not 
found on that list, returns an "invalid command" error message.
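
That whitelist pattern is easy to sketch. JavaScript is used here for consistency with the rest of the thread, and the names are invented for illustration, not taken from the framework described above:

```javascript
// Only dispatch commands that appear verbatim in a known list, so
// request data can never inject arbitrary handler names.
const KNOWN_COMMANDS = new Set(['search', 'fetch', 'status']);

function routeRequest(command, handlers) {
  if (!KNOWN_COMMANDS.has(command)) {
    return { error: 'invalid command' };   // unknown or missing command
  }
  return handlers[command]();              // safe: command is whitelisted
}

const handlers = { status: () => ({ ok: true }) };
console.log(routeRequest('status', handlers)); // -> { ok: true }
console.log(routeRequest('eval', handlers));   // -> { error: 'invalid command' }
```

The point is that the attacker-controlled string is only ever compared against the list, never evaluated.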


--
 Richard Gaskin
 Fourth World Systems
 Software Design and Development for the Desktop, Mobile, and the Web
 
 ambassa...@fourthworld.comhttp://www.FourthWorld.com

___
use-livecode mailing list
use-livecode@lists.runrev.com
Please visit this url to subscribe, unsubscribe and manage your subscription 
preferences:
http://lists.runrev.com/mailman/listinfo/use-livecode


Re: LiveNode Server

2015-04-07 Thread Andrew Kluthe
1. Livecode messaging is fully asynchronous. Not semi-async.

Right, when I said semi-async I was referring to the single-threadedness of
LiveCode (which Node shares) along with all the baked-in LiveCode stuff
that blocks up messages currently: accessing a large file on disk, posting
some information to a web service with a large JSON payload being returned.
It's async, with some pretty hard-to-work-around exceptions (the URL library
specifically has been the primary source of past frustration in this way).

3. Livecode does not have closures = passing anonymous callbacks as
params to functions so they can be executed later

As for anonymous callbacks, I totally agree. Most early Node development
had to overcome the callback hell that these patterns introduce. However,
almost all of the Node.js projects and libraries I've worked with leveraged
them heavily or exclusively. Promises seem to have become the standard way
of dealing with the callback hell that Node was so famous for for a long
time. Why does Node use anonymous functions over the method you linked to
in the article? Anonymous functions are marked for garbage collection
immediately after being returned. All other functions at the global scope
run the risk of needlessly using memory after they run. I've gotten into
some hairy situations with memory management with these kinds of named
callbacks (specifically for database access and return of lots of results
when not scoped correctly).
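
For readers who haven't met them, the promise chains mentioned above flatten nested callbacks like this (a plain JavaScript sketch; `step` is a stand-in for any async operation):

```javascript
// A promise-returning stand-in for one asynchronous step.
function step(n) {
  return Promise.resolve(n + 1);
}

// Callback style would nest: step(1, a => step(a, b => ...)).
// The promise chain stays flat no matter how many steps are added:
step(1)
  .then((a) => step(a))
  .then((b) => console.log('result:', b)); // prints "result: 3"
```

Each `.then` receives the previous step's value, so deeply nested "callback hell" becomes a linear pipeline.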

Passing a function (not just the name of a function to be used with a send or
a dispatch later on) as a parameter even in your article still demonstrates
something LC just can't do currently. In the article he's still using
closures, it's just got a name instead of being anonymous. It's still a
closure. LC has ways to accomplish similar things by passing names of
functions and using dispatch, but I think it's not exactly the same.
Closures are part of the reason Node.js works the way it does, and closures
are one of the primary reasons JavaScript was chosen for Node. It's
certainly possible to do async without them, but closures are what makes it
easy and kind of a fundamental principle to working in Node.js.
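
A small JavaScript sketch of that distinction: the callback is a value that captures (closes over) variables from its defining scope, which passing a handler *name* in LC cannot reproduce. The database call here is a synchronous stand-in, purely illustrative:

```javascript
// Stand-in for async DB access; a real driver would invoke the
// callback later, asynchronously.
function queryDatabase(sql, callback) {
  callback([{ id: 1 }, { id: 2 }]);
}

function handleRequest(requestId) {
  let response = null;
  // The anonymous function closes over requestId and response:
  // no globals and no separately named handler are needed.
  queryDatabase('SELECT ...', function (rows) {
    response = { requestId: requestId, count: rows.length };
  });
  return response;
}

console.log(handleRequest(42)); // -> { requestId: 42, count: 2 }
```

Two concurrent calls to `handleRequest` each get their own `requestId` and `response`; with a single named global handler and state, keeping those requests separate is exactly the bookkeeping burden described above.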

4. But we can easily call / dispatch calls to functions by passing names
around and we can limit scope by using private handlers or libraries.

Sure, there is nothing STOPPING us from implementing named callbacks in the
current fashion or passing the named callback references dynamically as you
and I mentioned, but from experience trying it this way I feel like it
makes maintaining large projects built this way a lot more difficult. To
the point where I ended up completely redoing most of the LiveCode stuff
I'd written this way early on, because it was getting to be a nightmare
to maintain completely separate callback functions rather than the sort
of nested structure you get in Node with callbacks. It takes a lot of
discipline in placement and grouping of the code that is related in this
way to come back later and make sense of it. In summary: it can be done,
but that doesn't mean that it SHOULD be done.

Kind of a weird long post there. Sorry for the length and probable
repetition of my points.


Also, this was something really neat I've used recently to make node work
in-process with some .NET applications we have. Something that does this
with Node and LC would indeed be the bee's knees.

http://www.hanselman.com/blog/ItsJustASoftwareIssueEdgejsBringsNodeAndNETTogetherOnThreePlatforms.aspx

Specifically the part about it allowing us to write node extensions in C#
in addition to the standard C and C++ way of doing it. I'd love to be able
to hook node into extensions written in livecode.



Re: LiveNode Server

2015-04-07 Thread Richard Gaskin

Andrew Kluthe wrote:



Kind of a weird long post there. Sorry for the length and probable
repetition of my points.


Not at all - good stuff.

What would you say would be the minimum we'd need to add to the LC 
engine to make it suitable for the sort of work you do?


--
 Richard Gaskin
 Fourth World Systems
 Software Design and Development for the Desktop, Mobile, and the Web
 
 ambassa...@fourthworld.com
 http://www.FourthWorld.com



Re: LiveNode Server

2015-04-07 Thread Andrew Kluthe
I'm not using LC server-side much, so I can't say for sure there in
reference to this thread and the things we've been discussing. I think the
direction LiveCode is going, and the state that it is/was in (I still use
5.5 for a lot of things), is great.

If we can get as many of the blocking bits down to a minimum as possible
(specifically the URL libraries), I think it would be perfect. The thing
that peeved me most is that most of my DB work is not done by directly
connecting to the database but through some sort of API layer. Usually my
LC apps are just clients for these APIs (often built in Node or Python if
they were made in-house). I like the flexibility this gives me. They post
some JSON and get a JSON payload back. If the payload is large, I've had
to do things like use curl and some other tools to make up for the
super-convenient built-in Internet library just sitting there locking the
application while it waits for the response. I've converted entire
applications out of LC into other technology stacks just because of the
kludge needed for this one thing. I'd love to be able to stream this
stuff in a little bit at a time as well. I can get some desired results
with a regular GET request using load url with a callback, but it doesn't
help when I have to POST a more complex query. This happens in my .NET
apps as well, but I use the parallel task libraries .NET has to get
around the UI lockups. I've been spoiled on some of Visual Studio's
tooling features in the meantime too :P (IntelliSense, jump to
definitions, some other things that I think will come to LC in time).

I also have a node-webkit (now NW.js) application that I think would be
perfectly suited to being done in LiveCode once things stabilize a bit
(this has already started to happen) with the newer builds using the
Chromium Embedded Framework. I needed something with all the fine-tuned
styling I could get from the web app we already have, but running as a
standalone against a SQLite DB. We did this to reuse the same visual cues
and JavaScript libraries that we use on the web version. We wanted a copy
of the web application that could run completely without the internet. I
think with just a bit of time, I could have used LC to do this comfortably.

The short answer? A URL library that can read a file off disk
asynchronously (I think this can be done now using some of the other ways
of doing disk access in LC, but it would be nice if the url(binfile:) bit
did the same thing) and a URL library that can return the response of a
POST asynchronously (preferably returning chunks as they come in).

The widgets architecture sets itself up to solve all of my other potential
wants/needs, maybe even this one.


Re: LiveNode Server

2015-04-06 Thread Andrew Kluthe
I think the real missing piece in making LC work like Node's event loop
would be anonymous callback functions that can be treated like other
variables. We can do semi-async stuff using messages in LC, but you'd have
to either name separate callback functions or dynamically pass the names of
separately defined callback functions. We've got no way to pass an
anonymous function as a param to something like you can with JS.




Re: LiveNode Server

2015-04-05 Thread David Bovill
Ok - so the key sentence there is - We can't fork in Revolution. So what
does that mean? What is so special about LiveCode that it can't do this?
It's not multi-threading - it's something ?

My thinking is that what we need is to be able to have some existing
monitoring service keep a pool of LiveNode servers up and running - in a
way in which you can configure the number of servers you need. Then you
need a Node load balancing server / broker thing passing off messages
asynchronously to a LiveNode server and immediately returning control to
the user. Only when all the LiveNode servers were used up would a queue
kick into action?

This is all standard server / inter-application messaging stuff no? What
prevents us doing that in Livecode?





Re: LiveNode Server

2015-04-05 Thread Richard Gaskin

David Bovill wrote:

 On 5 April 2015 at 05:01, Richard Gaskin wrote:

 David Bovill wrote:
  I am not quite sure what not being forkable is here - can you
  explain.

 Not as well as Andre:
 
http://lists.runrev.com/pipermail/use-livecode/2009-January/119437.html



 Ok - so the key sentence there is - We can't fork in Revolution.
 So what does that mean? What is so special about LiveCode that
 it can't do this?
 It's not multi-threading - it's something ?

 My thinking is that what we need is to be able to have some existing
 monitoring service keep a pool of LiveNode servers up and running -
 in a way in which you can configure the number of servers you need.
 Then you need a Node load balancing server / broker thing passing off
 messages asynchronously to a LiveNode server and immediately
 returning control to the user. Only when all the LiveNode servers
 were used up would a queue kick into action?

 This is all standard server / inter-application messaging stuff no?
 What prevents us doing that in Livecode?

As you read in Andre's post I linked to, that's more or less what he 
proposes as an alternative to FastCGI.


If one is willing to put the time into assembling such a 
multi-processing pool, the downsides relative to having fork available 
appear to be somewhat minor, not likely the sort of thing we'd run into 
to be somewhat minor, not likely the sort of thing we'd run into south 
of the C10k problem.


What have you run into in trying this that isn't working?

--
 Richard Gaskin
 Fourth World Systems
 Software Design and Development for Desktop, Mobile, and Web
 
 ambassa...@fourthworld.com
 http://www.FourthWorld.com



Re: LiveNode Server

2015-04-04 Thread Richard Gaskin

David Bovill wrote:

 I am not quite sure what not being forkable is here - can you
 explain.

Not as well as Andre:
http://lists.runrev.com/pipermail/use-livecode/2009-January/119437.html

--
 Richard Gaskin
 Fourth World Systems



Re: LiveNode Server

2015-04-01 Thread Richard Gaskin
GMTA: the folder where I keep my experimental server stacks is named 
LiveNode. :)


Good stuff here, a very useful and practical pursuit, IMO, in a world 
where one of the largest MMOs is also written in a high-level scripting 
language (EVE Online, in Python) so we know it's more than possible to 
consider a full-stack server solution entirely written in LiveCode:


David Bovill wrote:

 The question is can you create in LiveCode an asynchronous event-driven
 architecture? LiveCode is built after all around an event loop, and
 through commands like dispatch, send in time, and wait with messages,
 it is possible to create asynchronous call back mechanisms - so why
 can we not create a node-like server in Livecode?

 Perhaps the answer lies in the nature of the asynchronous commands
 that are available? Still I don't see why this is an issue. From
 my experience of coding an HTTP server in LiveCode - I cannot
 understand why it should not be possible to accept a socket
 connection, dispatch a command, and immediately return a result on
 the connected socket. The event loop should take over and allow
 new connections / data on the socket, and when the dispatched
 command completes it will return a result that can then be sent
 back down the open socket.

I've been pondering similar questions myself:
http://lists.runrev.com/pipermail/use-livecode/2015-February/211536.html
http://lists.runrev.com/pipermail/use-livecode/2015-March/212281.html

Pierre's been exploring this even longer:
http://lists.runrev.com/pipermail/metacard/2002-September/002462.html

With socket I/O apparently handled asynchronously when the "with 
message" option is used, this is a very tempting pursuit.


The challenge arises from the recipient of the message: it will be 
running in the same thread as the socket broker, causing a backlog of 
message queueing; requests are received well enough, but responding 
requires them to be processed one at a time.


Down the road we may have some form of threading, though that's not 
without programming complication and threads are somewhat expensive in 
terms of system resources (though there are options like green threads 
at least one Python build uses).


Working with what we have, Mark Talluto, Todd Geist, and I (and probably 
others) have been independently exploring concurrency options using 
multiprocessing in lieu of multithreading, using a single connection 
broker feeding processing to any number of worker instances.


The challenge there is that the LC VM is not currently forkable, so we 
can't pass a socket connection from the broker to a child worker process.


Instead, we have to look at more primitive means, which tend toward two 
camps (though I'm sure many others are possible):


a) Consistent Socket Broker
   The socket broker handles all network I/O with all clients, and
   feeds instructions for tasks to workers via sockets, stdIn, or
   even files (/dev/shm is pretty fast even though it uses simple
   file routines).

   The upside here is that any heavy processing is distributed among
   multiple workers, but the downside is that all network I/O still
   goes through one broker process.


b) Redirects to Multiple Workers
   Here the main socket broker listening on the standard port only
   does one thing: it looks at a list of available workers (whether
   through simple round-robin, or something smarter like load
   reporting), each of which is listening on a non-standard port,
   and sends the client a 302 redirect to the server with that
   non-standard port so each worker is handling the socket comms
   directly and only a subset of them.   If each worker also has
   its own collection of sub-workers as in option a) above, this
   could greatly multiply the number of clients served concurrently.

   The upside is that all aspects of load are distributed among
   multiple processes, even socket I/O, but the downside is the
   somewhat modest but annoying requirement that each request
   be submitted twice, once to the main broker and again to the
   redirected instance assigned to handle it.
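
The front half of option b) is mostly bookkeeping. A sketch of the round-robin part in JavaScript (host and ports are made-up examples; the 302 itself is shown only as the Location header value):

```javascript
// Round-robin worker picker for a redirect broker: each call returns
// the next worker port in the pool, wrapping around at the end.
function makeRoundRobin(workerPorts) {
  let next = 0;
  return function pickWorker() {
    const port = workerPorts[next];
    next = (next + 1) % workerPorts.length;   // rotate through the pool
    return port;
  };
}

// The Location header the broker would send with its 302 response.
function redirectLocation(host, pick) {
  return 'http://' + host + ':' + pick() + '/';
}

const pick = makeRoundRobin([8081, 8082, 8083]);
console.log(redirectLocation('example.com', pick)); // -> http://example.com:8081/
console.log(redirectLocation('example.com', pick)); // -> http://example.com:8082/
```

Swapping `makeRoundRobin` for a load-reporting picker changes nothing else in the broker, which is the "or something smarter" hook mentioned above.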


Purpose-built application servers can indeed be made with the LiveCode 
we have today and can handle reasonable amounts of traffic, more than 
one might think for a single-threaded VM.


But all systems have scaling limits, and the limits with LC would be 
encountered sooner than with some other systems built from the ground up 
as high-load servers.


IMO such explorations can be valuable for specific kinds of server apps, 
but as tempting as it is I wouldn't want to build a general purpose Web 
server with LiveCode.  In addition to the scope of the HTTP 1.1 spec 
itself, Web stuff consists of many small transactions, in which a single 
page may require a dozen or more requests for static media like CSS, JS, 
images, etc., and Apache and Nginx are really good solutions that 
handle those needs well.


I think the sweet spot for an entirely LiveCode application server would 
be those 

Re: LiveNode Server

2015-04-01 Thread David Bovill
On 1 April 2015 at 16:55, Richard Gaskin ambassa...@fourthworld.com wrote:


 David Bovill wrote:

  The question is can you create in LiveCode an asynchronous event-driven
  architecture?
..

 With socket I/O apparently handled asynchronously when the with
message option is used, this is a very tempting pursuit.

 The challenge arises from the recipient of the message: it will be
running in the same thread as the socket broker, causing a backlog of
message queueing; requests are received well enough, but responding
requires them to be processed one at a time.

Ah - OK
So the first response would be fine - but not the second.


 The challenge there is that the LC VM is not currently forkable, so we
can't pass a socket connection from the broker to a child worker process.


 I am not quite sure what not being forkable is here - can you explain.
What is special about LC here compared with other VM's



 Instead, we have to look at more primitive means, which tend toward two
camps (though I'm sure many others are possible):

 a) Consistent Socket Broker
The socket broker handles all network I/O with all clients, and
feeds instructions for tasks to workers via sockets, stdIn, or
even files (/dev/shm is pretty fast even though it uses simple
file routines).

The upside here is that any heavy processing is distributed among
multiple workers, but the downside is that all network I/O still
goes through one broker process.


 b) Redirects to Multiple Workers
Here the main socket broker listening on the standard port only
does one thing: it looks at a list of available workers (whether
through simple round-robin, or something smarter like load
reporting), each of which is listening on a non-standard port,
and sends the client a 302 redirect to the server with that
non-standard port so each worker is handling the socket comms
directly and only a subset of them.   If each worker also has
its own collection of sub-workers as in option a) above, this
could greatly multiply the number of clients served concurrently.

The upside is that all aspects of load are distributed among
multiple processes, even socket I/O, but the downside is the
somewhat modest but annoying requirement that each request
be submitted twice, once to the main broker and again to the
redirected instance assigned to handle it.

OK - so a graph of servers communicating over sockets is better than one
central hub-and-spoke scenario.

From the FastCGI docs: http://www.fastcgi.com/drupal/node/6?q=node/16

With session affinity you run a pool of application processes and the Web
server routes requests to individual processes based on any information
contained in the request. For instance, the server can route according to
the area of content that's been requested, or according to the user. The
user might be identified by an application-specific session identifier, by
the user ID contained in an Open Market Secure Link ticket, by the Basic
Authentication user name, or whatever. Each process maintains its own
cache, and session affinity ensures that each incoming request has access
to the cache that will speed up processing the most.
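
The affinity rule quoted above is just a stable mapping from session identifier to worker. A sketch (the hash function is illustrative, not from the FastCGI docs):

```javascript
// Session affinity: hash a session identifier so the same user always
// lands on the same worker process, and hence the same warm cache.
function workerForSession(sessionId, poolSize) {
  let h = 0;
  for (const ch of String(sessionId)) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;  // simple string hash
  }
  return h % poolSize;                      // stable worker index
}

const w = workerForSession('user-1234', 4);
console.log('route user-1234 to worker', w);
// The same session always maps to the same worker:
console.log(w === workerForSession('user-1234', 4)); // -> true
```

Real deployments typically key on a session cookie or auth token, exactly as the quoted passage suggests.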


 I think the sweet spot for an entirely LiveCode application server would
be those apps where backend processing load exceeds network I/O.

Yes - I see no real use for using LiveCode as a server. I'd use Node. I
want to be able to use LiveCode within a mixed coding environment and get
LiveCode to do stuff there - for instance image processing. I want to be
able to deploy it using NPM - so it's easy to set up.


 As interesting as these things are, I have to admit I currently have no
practical need for such a creature, so my brief experiments have been few
and limited to an occasional Sunday with a free hour on my hands. :)

Hell - I do. I'd be able to write all sorts of stuff for real world
applications if I could choose to write a routine in LiveCode and switch to
something else down the line if needed. The main use case is to work in
teams with other mainstream devs, and to choose the language that suits
the problem - so polyglot server programming.


 If you have such a need it would be interesting to see how these things
flesh out under real-world load.


  Assuming there is an issue with the above, the next question is
  that given that Node already can be extended with C / C++
  extensions api - so why not treat Livecode as simply a Node
  extension and let Node do the async event driven I/O that it is
  so good at?

 I have no direct experience with either Node.js or Nginx, so I'm out of
my depth here - but that won't stop me from conjecturing <g>:

Addons are dynamically linked shared objects. They can provide glue to C
and C++ libraries - [https://nodejs.org/api/addons.html nodejs.org]



 My understanding is that LiveCode, being single-threaded today, is
limited to CGI, while Node.js and Nginx expect FastCGI (forkable) support.

This