TL;DR: Imagine having to make all HTTP requests through a single port for
posting requests and a single port for receiving back all the results. Would
you want to program against that model?

What I'm concerned with is the case where one needs to run more than one
instance of the long-running or otherwise external process at once, possibly
delivering their results to different parts of the model. For example,
imagine that our model contained a list of simulations where we could set
parameters via Elm but the actual computation needed to be sent off to
JavaScript (or asm.js). We could start a computation via the command port
and get answers back via the subscription port, but because we have more
than one simulation to process, we might reasonably want to start the
computation for each of those simulations and get the results back with the
appropriate tagging. Maybe we do a lot with simulations and we want to have
different views with different sets of simulations and we would like to put
the code for dealing with the ports in one place.
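As a concrete sketch of the JavaScript side of that single request/response port pair (the port names `simulationRequests` and `simulationResults`, and the computation itself, are hypothetical, not from any particular app), the key point is that every request carries an id so the result can be routed back to the simulation that asked for it:

```javascript
// Stand-in for the real (possibly long-running) computation that Elm
// cannot do itself. Hypothetical: computes the mean of some values.
function runSimulation(params) {
  const sum = params.values.reduce((a, b) => a + b, 0);
  return { mean: sum / params.values.length };
}

// handleRequest is what we would pass to
//   app.ports.simulationRequests.subscribe(...)
// with `send` being app.ports.simulationResults.send. Every reply
// echoes the request's id so Elm can tag it for the right simulation.
function handleRequest(send, request) {
  try {
    send({ id: request.id, ok: runSimulation(request.params) });
  } catch (e) {
    send({ id: request.id, err: String(e) });
  }
}
```

Note that nothing here is specific to one simulation; the id is the only thing keeping multiple in-flight runs apart.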

From an API standpoint, I would argue that the most flexible option is a
way to provide the following from JavaScript:

runSimulation : SimulationParams -> Task SimulationError SimulationValue


These are flexible because we can chain them into bigger structures (and
maybe someday, though not today, not only spawn them but also cancel them).

Sticking with a more command-like API that is less composable but otherwise
has similar semantics, one could provide:

runSimulation : (Result SimulationError SimulationValue -> msg) ->
SimulationParams -> Cmd msg


We can build that using ports if we drop commands in favor of out messages
— in this case an out message asking to run a simulation with particular
parameters and a particular tagger function for the results. (We can build
it using commands if we write an effects manager but writing those is
discouraged.)

We might think that we could put the port logic in the Simulation module
and create a subscription to the results port for each model currently
awaiting the results of a simulation run. But this is difficult to make
work in practice because all of the models will be subscribing to the same
port and receiving the same messages via that port. Hence, we need some
way to globally manage the identifiers for either the runs or the models,
and by pushing the logic down toward the individual model instances we
lose the opportunity to provide that sort of global ID management.
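To make "global ID management" concrete, here is a sketch in JavaScript (names are mine, purely illustrative): one central registry hands out run ids and routes each tagged result arriving on the shared port to whoever started that run. Individual model instances cannot each do this themselves, because they would have no way to agree on unique ids.

```javascript
// One registry per app: allocates run ids and remembers, per id,
// who should receive the result that comes back on the shared port.
function makeRunRegistry() {
  let nextId = 0;
  const pending = new Map();
  return {
    // Called when a simulation is started; returns the id to send
    // out with the request.
    start(onResult) {
      const id = nextId++;
      pending.set(id, onResult);
      return id;
    },
    // Called for every message arriving on the shared results port;
    // only the starter of that particular run sees it.
    deliver(msg) {
      const onResult = pending.get(msg.id);
      if (onResult !== undefined) {
        pending.delete(msg.id);
        onResult(msg.result);
      }
    },
  };
}
```

The effects-manager and out-message approaches are, in essence, different ways of getting something shaped like this registry into one place in the program.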

Mark

On Wed, Dec 28, 2016 at 6:10 AM, GordonBGood <[email protected]> wrote:

>
> If we can do all of that, I don't see what Mark is worried about?  We
> don't have to have an Event Manager?  What's the point of a Router/Event
> Manager?  Ah, I think it's to do with queueing messages to process later
> whereas this will process them immediately?
>
> If this all works, we could write LRP's in JavaScript or optionally move
> them to Elm when it gets more efficient.
>
>

-- 
You received this message because you are subscribed to the Google Groups "Elm 
Discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
