Basically this is a JSON-like parser that produces values of various types
on the JS side... (not just a specific neural network interface)

I set up a JS->native code object that has the access methods to the
native module, plus some data-tracking stuff...

This is 'objects', an array of values that live in JS but are referenced
from the C++ code via an index number into the array...

https://github.com/d3x0r/jsox-wasm/blob/master/simple_js.c#L168
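
The shape of that, very roughly (a minimal sketch with made-up names, not
the actual code behind the link): the array lives on the JS side, and the
C++ code only ever holds integer indexes into it.

    #include <emscripten.h>

    // JS-side table of values; the C++ code refers to entries only by an
    // integer index into this array.
    EM_JS(void, init_object_table, (), {
      Module.objects = [];
    });

    // Drop a value when the native side is done with it (illustrative of
    // the "data tracking" part).
    EM_JS(void, release_value, (int index), {
      Module.objects[index] = null;
    });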

These are examples that create a new array or object... further down,
more elemental values are created and pushed into the value array...
https://github.com/d3x0r/jsox-wasm/blob/master/simple_js.c#L65
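
Something along these lines (again just a sketch with hypothetical names,
building on the Module.objects table above):

    #include <emscripten.h>

    // Create an empty array / object and hand its index back to C++.
    EM_JS(int, make_array, (), {
      Module.objects.push([]);
      return Module.objects.length - 1;
    });

    EM_JS(int, make_object, (), {
      Module.objects.push({});
      return Module.objects.length - 1;
    });

    // Elemental values work the same way; strings come across as pointers
    // into the WASM heap and get decoded with UTF8ToString.
    EM_JS(int, make_number, (double d), {
      Module.objects.push(d);
      return Module.objects.length - 1;
    });

    EM_JS(int, make_string, (const char* s), {
      Module.objects.push(UTF8ToString(s));
      return Module.objects.length - 1;
    });

    // Push one referenced value into a referenced array, index to index.
    EM_JS(void, array_push_value, (int arrayIndex, int valueIndex), {
      Module.objects[arrayIndex].push(Module.objects[valueIndex]);
    });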

(Set object by index) - assign one referenced object into another...
https://github.com/d3x0r/jsox-wasm/blob/master/simple_js.c#L282
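
i.e. roughly (hypothetical name, same handle-table assumption as above):

    #include <emscripten.h>

    // Assign the value behind one index into the object behind another,
    // under the given member name.
    EM_JS(void, set_object_member,
          (int objectIndex, const char* key, int valueIndex), {
      Module.objects[objectIndex][UTF8ToString(key)] = Module.objects[valueIndex];
    });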

After playing with this for a while, I gave up on this approach for my own
purposes; the overhead of creating JS objects from WASM while parsing
JSOX (JSON) was much higher than just doing the parsing natively in JS...

Also, you don't really NEED any pre-js code... you can just do the require
in an init function, which can set up the interface area between the native
and JS code...
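
For the tfjs-node case in your question, that would look something like
this (just a sketch, assuming a plain node/CommonJS build where require()
is in scope for the generated JS; the names are illustrative):

    #include <emscripten.h>

    // Call once at startup: do the require from JS and stash the module
    // reference on Module, where later EM_JS/EM_ASM blocks can reach it.
    EM_JS(void, init_tf, (), {
      Module.tf = require('@tensorflow/tfjs-node');
    });

For the later loadGraphModel()/await part, EM_ASYNC_JS (with Asyncify) is
the thing to look at, since a plain EM_JS body can't await.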



On Thu, Mar 4, 2021 at 10:45 PM Jim Lloyd <[email protected]> wrote:

> I have a body of C++ code that I have successfully compiled with em++.
> Many of the unit tests run. I am unclear now how to achieve my end goal,
> which is to use this code in a node/express server that imports
> @tensorflow/tfjs-node and utilizes that module to perform Tensorflow
> predictions.
>
> I believe the general outline will require that I inject JavaScript code
> to load the TensorFlow module, probably using `--pre-js <file>`. If
> `--pre-js` is correct, what should the contents of `<file>` be? One
> line should be this:
>
>     const tf = require('@tensorflow/tfjs-node');
>
> My C++ code will at some point do something like this:
>
>     EM_JS(val, loadModel, (string modelUrl), {
>         return await tf.loadGraphModel(modelUrl);
>     });
>     val model = loadModel("http://...");
>
> Later in my code I will pass the model and a tensor into another EM_JS
> function to do the prediction, but I expect that once I understand the
> above I will be able to figure out the rest.
>
> How do I arrange for the tf module reference to be available within the
> EM_JS definition of the loadModel function?
>
> Of course, if there is already some demo somewhere that does something
> similar, a link to it may be all that I need. So far I have not found it
> after a couple hours of searching.
>
> Thanks.
>
