Alex,
I think that, in general, you use -1 too generously (the same applies to
the "changes requested" status in PRs).
First of all, this thread follows [DISCUSS] etiquette, so there is not
really a proposal to -1.
Second, in my latest e-mail, based on your feedback, I summarized the
options (context or no context), so I think the proper answer to my
e-mail would be "I prefer option 2: no context, master workflow", which
is what I deduce from your -1.
Hopefully, I will be able to convince the user to use the second
approach.


On Tue, May 7, 2024 at 2:47 PM Alex Porcelli <a...@porcelli.me> wrote:

> I understand the willingness to simplify, but I think this risks
> opening a can of worms. As Enrique mentioned, Workflow is not a
> system of record... events are the way to go.
>
> Apache KIE takes a code-first approach, so I'd argue that users are
> expected to do some coding and to leverage best practices to decouple
> workflows into smaller event-based flows that Apache KIE already
> supports out of the box.
>
> -1 for me.
>
> On Tue, May 7, 2024 at 5:53 AM Francisco Javier Tirado Sarti
> <ftira...@redhat.com> wrote:
> >
> > Hi Enrique,
> > Thanks for the explanation.
> > As I said, the proposal is trying to implement the concept of shared
> > context information between workflows.
> > In a particular domain, several *independent* workflows might want to
> > share the same set of process variables (and their values).
> > One way of implementing that, without really changing the core you
> > are mentioning, is to have a workflow that sets the context (the
> > context setting can itself be automated) and have the other workflows
> > (the domain ones) use it by including the context as part of their
> > input parameters.
> > The other way would be for workflows (of that particular domain) to
> > explicitly read the context using some primitive (so you would have
> > to know that a particular variable is in the "context"). I find the
> > latter less desirable than the former: the workflow designer knows
> > that the workflow has a certain input; it does not really care how
> > that input is provided.
> > It is true that through events or REST invocation this merging can
> > be done manually (in fact, this is what the user is currently doing:
> > they read the output from Data Index and manually add it to the input
> > of the workflow to be executed), but I think it is a reasonable
> > request to ask for some facilities to perform this merge using just
> > the runtimes infrastructure (without Data Index being present).
> > It is true that the whole domain, at least in theory, can be
> > modelled as one huge workflow (this was in fact my first proposal to
> > the user, but it did not really appeal to them; I can try again,
> > though), where the first part is the "context" setting and then you
> > have multiple branches (one per independent workflow) waiting for
> > human intervention. In this approach, every independent workflow can
> > be invoked as a subflow (triggered by an event or HTTP call). But we
> > need to check how the whole thing would be visualized in Data Index.
> > So far, these are the options:
> > 1) Implement shared context through input parameters. There are two
> > alternative implementations:
> >   a) Change Kogito ProcessService (which is not part of the engine)
> >   to perform the merge when the configuration is present, and add an
> >   interface to the existing persistence addon to store the output
> >   when the process finishes, again when the configuration is present.
> >   b) Do not touch any existing class; add an addon (only active for
> >   users that want this behaviour) that listens for process start and
> >   performs the merge there, plus an additional addon for each
> >   supported DB to read the data to be merged (see the sketch below).
> >   This is more implementation work than a), so now you understand why
> >   I'm writing so much in favour of a) :)
> > 2) Do not implement shared context; instead, write the whole domain
> > as a single workflow that invokes the independent workflows as
> > subflows based on event triggering. We need to check how this is
> > visualized in Data Index and monitoring.
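> >
> > For illustration, here is a minimal sketch of the merge that option
> > 1b would perform at process start. ContextStore, its readOutput
> > method, and ContextMerger are hypothetical names standing in for the
> > per-database addon; this is a sketch of the intended semantics, not
> > an existing Kogito API.
> >
> >     import java.util.HashMap;
> >     import java.util.Map;
> >
> >     // Stand-in for the per-DB addon that loads workflow A's stored output.
> >     interface ContextStore {
> >         // Returns the persisted output of the finished instance, or an
> >         // empty map if nothing was stored for that id.
> >         Map<String, Object> readOutput(String processInstanceId);
> >     }
> >
> >     final class ContextMerger {
> >         private final ContextStore store;
> >
> >         ContextMerger(ContextStore store) {
> >             this.store = store;
> >         }
> >
> >         // Merges workflow A's output into the start parameters of
> >         // workflow B. The caller passes A's process instance id instead
> >         // of copying A's output by hand on every call.
> >         Map<String, Object> merge(String contextInstanceId,
> >                                   Map<String, Object> userInput) {
> >             Map<String, Object> merged =
> >                     new HashMap<>(store.readOutput(contextInstanceId));
> >             merged.putAll(userInput); // explicit user input wins over context
> >             return merged;
> >         }
> >     }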
> >
> >
> >
> > On Mon, May 6, 2024 at 7:11 PM Enrique Gonzalez Martinez <
> > egonza...@apache.org> wrote:
> >
> > > Hi Francisco,
> > > The workflow engine is based, at its very core, on two different
> > > primitives: events and activities.
> > >
> > > An event is something that happens that initiates an execution:
> > > catch/throw events of any type, timers, conditions...
> > > An activity is something that produces some sort of computation
> > > unit, like a script, a user task, etc.
> > >
> > > Anything that starts a process should be achieved by an event (a
> > > start event), and a human task is not really an event of any kind.
> > >
> > > There is a primitive called a start event with "parallel multiple"
> > > set to true, which covers multiple event definitions that must all
> > > happen in order to start a process. We don't have this sort of
> > > implementation in the engine right now (it is really hard to make
> > > it work and requires careful design), but it would match your use
> > > case if we had it: one event for the process being finished and
> > > another event triggered by some human sending a start request with
> > > its inputs.
> > >
> > > That being said, about this idea:
> > >
> > > This sort of idea would introduce human tasks as events into the
> > > very core of the engine. I don't see any justification for doing
> > > that.
> > >
> > > As we mentioned at some point, human tasks will become an entire
> > > subsystem of their own, as the current implementation does not meet
> > > the requirements of a proper subsystem with the features we had in
> > > v7.
> > >
> > > This introduces a concept of process orchestration based on human
> > > input, which defeats the purpose of a workflow, as you are
> > > introducing an arbitrary way of executing subprocesses or
> > > interdependent processes based on humans. Using the output of a
> > > human task to trigger the execution of a subprocess is not the same
> > > as using human input as some sort of gateway event.
> > >
> > > How to perform this:
> > > As Alex mentioned, you can achieve this in a very simple way:
> > > 1. The process finishes and sends a message to a Kafka queue.
> > > 2. A third-party system gets the event and allows you to
> > >    manipulate the input.
> > > 3. It sends the result to another queue.
> > > 4. A process listening to Kafka triggers a start event.
> > >
> > > If you don't like the third-party system, you can create a very
> > > simple process that reads from a stream, allows you to modify the
> > > input, and sends the outcome to another stream.
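> > >
> > > As a rough illustration of steps 2 and 3, a minimal bridge using
> > > plain kafka-clients could look like the sketch below. The topic
> > > names ("workflow-a-output", "workflow-b-start") and the
> > > pass-through enrichment are illustrative assumptions, not part of
> > > any Apache KIE API.
> > >
> > >     import java.time.Duration;
> > >     import java.util.List;
> > >     import java.util.Properties;
> > >     import org.apache.kafka.clients.consumer.ConsumerRecord;
> > >     import org.apache.kafka.clients.consumer.KafkaConsumer;
> > >     import org.apache.kafka.clients.producer.KafkaProducer;
> > >     import org.apache.kafka.clients.producer.ProducerRecord;
> > >
> > >     public class WorkflowBridge {
> > >         public static void main(String[] args) {
> > >             Properties props = new Properties();
> > >             props.put("bootstrap.servers", "localhost:9092");
> > >             props.put("group.id", "workflow-bridge");
> > >             props.put("key.deserializer",
> > >                 "org.apache.kafka.common.serialization.StringDeserializer");
> > >             props.put("value.deserializer",
> > >                 "org.apache.kafka.common.serialization.StringDeserializer");
> > >             props.put("key.serializer",
> > >                 "org.apache.kafka.common.serialization.StringSerializer");
> > >             props.put("value.serializer",
> > >                 "org.apache.kafka.common.serialization.StringSerializer");
> > >
> > >             try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
> > >                  KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
> > >                 consumer.subscribe(List.of("workflow-a-output"));
> > >                 while (true) {
> > >                     for (ConsumerRecord<String, String> rec :
> > >                             consumer.poll(Duration.ofSeconds(1))) {
> > >                         // Step 2: manipulate the input here; this
> > >                         // sketch just forwards the payload unchanged.
> > >                         String enriched = rec.value();
> > >                         // Step 3: publish to the topic that starts workflow B.
> > >                         producer.send(new ProducerRecord<>(
> > >                                 "workflow-b-start", rec.key(), enriched));
> > >                     }
> > >                 }
> > >             }
> > >         }
> > >     }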
> > >
> > > Cheers
> > >
> > >
> > >
> > > On Mon, May 6, 2024 at 5:16 PM Francisco Javier Tirado Sarti <
> > > ftira...@redhat.com> wrote:
> > >
> > > > Alex, I might be missing something, but I do not think this
> > > > scenario can be covered through event consumption. The key part
> > > > is that workflows of type B are manually executed by users, who
> > > > provide their own set of parameters. A workflow of type A just
> > > > sets a variable context which is shared by all workflows of type
> > > > B. To simulate such a context without introducing the concept
> > > > into the workflow definition itself, the properties set up by A
> > > > should be passed as input to B.
> > > >
> > > > On Mon, May 6, 2024 at 5:05 PM Alex Porcelli <porce...@apache.org>
> > > > wrote:
> > > >
> > > > > Isn’t this already achievable using events with different topics?
> > > > >
> > > > > -
> > > > > Alex
> > > > >
> > > > >
> > > > > On Mon, May 6, 2024 at 11:02 AM Francisco Javier Tirado Sarti <
> > > > > ftira...@redhat.com> wrote:
> > > > >
> > > > > > Hi,
> > > > > > This is related to issue
> > > > > > https://github.com/apache/incubator-kie-kogito-runtimes/issues/3495
> > > > > > We have a user who would like to reuse the result of one
> > > > > > workflow execution (let's call this workflow of type A) as
> > > > > > the input of several workflows (let's call them workflows of
> > > > > > type B).
> > > > > >
> > > > > > Workflow A is executed before all B workflows. Then the B
> > > > > > workflows are manually executed by users. The desired input
> > > > > > of a B workflow should be a merge of what the user provides
> > > > > > in the start request and the output of workflow A. To achieve
> > > > > > this, users are expected to include, in the start request of
> > > > > > a workflow of type B, the process instance id of workflow A
> > > > > > (so rather than taking the output of A and merging it for
> > > > > > every call, they just pass the process instance id; see the
> > > > > > sketch below).
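> > > > > >
> > > > > > As a hedged illustration of such a start request, assuming
> > > > > > the usual Kogito-generated REST endpoint per workflow and a
> > > > > > hypothetical "contextInstanceId" field for workflow A's
> > > > > > process instance id (the endpoint path and the field name are
> > > > > > illustrative assumptions, not an existing API):
> > > > > >
> > > > > >     import java.net.URI;
> > > > > >     import java.net.http.HttpClient;
> > > > > >     import java.net.http.HttpRequest;
> > > > > >     import java.net.http.HttpResponse;
> > > > > >
> > > > > >     public class StartWorkflowB {
> > > > > >         public static void main(String[] args) throws Exception {
> > > > > >             // "contextInstanceId" is the hypothetical reference to
> > > > > >             // workflow A; the other fields are the user's own input.
> > > > > >             String body = """
> > > > > >                 {
> > > > > >                   "contextInstanceId": "<process instance id of A>",
> > > > > >                   "userParam": "value provided by the user"
> > > > > >                 }""";
> > > > > >             HttpRequest request = HttpRequest.newBuilder()
> > > > > >                     .uri(URI.create("http://localhost:8080/workflow_b"))
> > > > > >                     .header("Content-Type", "application/json")
> > > > > >                     .POST(HttpRequest.BodyPublishers.ofString(body))
> > > > > >                     .build();
> > > > > >             HttpResponse<String> response = HttpClient.newHttpClient()
> > > > > >                     .send(request, HttpResponse.BodyHandlers.ofString());
> > > > > >             System.out.println(response.statusCode() + " " + response.body());
> > > > > >         }
> > > > > >     }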
> > > > > >
> > > > > > In order for this approach to work, the output of workflow A
> > > > > > has to be stored somewhere in the DB (currently the runtimes
> > > > > > DB only stores active process information). Since we do not
> > > > > > want all processes to keep their output information in the DB
> > > > > > (only workflows of type A), workflows of type A have to be
> > > > > > identified somehow.
> > > > > >
> > > > > > But before entering into more implementation details, what I
> > > > > > would like to know is whether this is a valid use case for
> > > > > > BPMN or not. The implementation implications are pretty
> > > > > > relevant: if it is a valid use case for both BPMN and SWF, we
> > > > > > can implement this functionality in the Kogito core, where we
> > > > > > can take advantage of the existing persistence addons and add
> > > > > > the newly required storage there. If not, we need to provide
> > > > > > an SWF-specific addon, with the additional storage, for each
> > > > > > existing persistence add-on.
> > > > > > Please share your thoughts.
> > > > > > Thanks in advance.
> > > > > >
> > > > >
> > > >
> > >
>
