I think we need a "kogito context variable" interface to read from and write
to, implemented with the constraints you mentioned.
I'm not against adding the new node you propose (which should use that API),
but it won't be useful for the SWF case (in my opinion, there is no need to
add a new node to the process instance execution list every time you want to
read from or write to the context).
In SWF we would add a reserved keyword, $WORKFLOW.context.<variable name>, to
read from/write to the context (using jq expressions), which directly invokes
the "kogito context variable" interface underneath (as part of the expression
evaluation).
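
To make this concrete, here is a rough sketch of that interface and of how
the reserved keyword could map onto it. All names below are hypothetical;
nothing of this exists yet:

    // The "kogito context variable" interface the jq evaluator would call
    // when it finds the reserved prefix. For instance, the expression
    //   .customer = $WORKFLOW.context.customerId
    // would translate into read("customerId") during evaluation, and an
    // assignment targeting the prefix would translate into write(...).
    public interface KogitoContextVariableStore {
        Object read(String variableName);
        void write(String variableName, Object value);
    }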

On Tue, May 7, 2024 at 7:13 PM Enrique Gonzalez Martinez <
egonza...@apache.org> wrote:

> Ok, so let's brief this a bit.
> The constraints I would set are:
> 1. Should be stored outside the process state.
> 2. Pluggable (database store as the default impl).
> 3. Should be lazily loaded. Not required to be accessed every time we access
> the process state, but should be involved automatically in any input/output,
> like a process or context variable.
> 4. Setup should be logically defined through the idiom definition but wired
> at the runtime level. That is to say, the workflow definition should say
> this store points to this logical store, and the runtime defines the mapping
> between the logical store and the physical storage.
> 5. The store should define what is stored in it. For instance, it has to
> store the variable name with its structure (that way we can use those
> identifiers like variables and we can compute the outcome type). A sketch
> follows below.
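>
> A rough sketch of an SPI honoring these constraints (all names are made up,
> just to make the idea concrete):
>
>     import java.util.Map;
>
>     // Pluggable store (2), referenced by its logical name from the
>     // workflow definition and mapped to physical storage at runtime (4).
>     public interface LogicalStore {
>         String name();
>         // Declared variables and their types (5), so identifiers can be
>         // used like variables and outcome types can be computed.
>         Map<String, Class<?>> schema();
>         // Lazy, on-demand access to state kept outside the process (1, 3).
>         Object load(String storageKey, String variable);
>         void save(String storageKey, String variable, Object value);
>     }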
>
> I would suggest an impl like this:
>
> 1. Create a new type of node. Let's call it "store".
> 2. As this node is a special type, we need a way to tell the engine it will
> be part of input/output. For this we can create a new type of connection
> (different from the engine path execution, DROOLS_CONNECTION... DROOLS_STORE
> or something like that).
> 3. Once we reach an activity, we look for these connections and add them to
> the inputs/outputs.
> 4. For inputs we call the load operation. For outputs we call save.
> 5. For computing how to load something, we have to compose the context. For
> trivial cases we can use process instance id + variable name. For cluster
> operation we can use something the instances share among them. The important
> thing is that the process needs to know how to compose that context (see the
> sketch below).
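>
> For instance, the key composition in point 5 could be as trivial as this
> (illustrative only):
>
>     // Trivial storage key: process instance id + variable name. For
>     // cluster cooperation, a correlation id shared by the instances
>     // would replace the process instance id.
>     static String storageKey(String processInstanceId, String variable) {
>         return processInstanceId + ":" + variable;
>     }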
>
>
> In the case of BPMN we have the data store for this. I don't know if there
> is something equivalent for SWF.
>
> That would be an initial idea.
>
> On Tue, May 7, 2024, 17:57, Francisco Javier Tirado Sarti <
> ftira...@redhat.com> wrote:
>
> > Hi Enrique,
> > Of course I'm open to exploring that path.
> > I guess we can have a table where we store the context variables, and
> > that table will be read every time we execute an instance that reads
> > information from the context.
> > There are some questions that we will need to answer.
> > How do we populate that context? Through another workflow? Can the
> context
> > be arbitrarily written?
> > What will be the context key? Just the property name (assuming it will be
> > unique across the application)? The property name plus a context id (to
> > allow reusing the same name for different domains)?
> > I'll let you propose the BPMN idiom and we can think of something equivalent
> > for SWF.
> > Thanks a lot.
> >
> >
> >
> > On Tue, May 7, 2024 at 5:27 PM Enrique Gonzalez Martinez <
> > egonza...@apache.org> wrote:
> >
> > > Hi Francisco,
> > > Regarding your notion of context: we had in the past something called
> > > globals. This was a store shared among processes, usually through the
> > > KIE session. Actually, not long ago I had to make a hack to support
> > > globals at the instance level.
> > > In this case I am more open to discussion, but let me shift the
> > > discussion to something more general, if you don't mind.
> > >
> > > Having stores shared among processes is not a bad idea, and we can make
> > > it work for process instance cooperation.
> > >
> > > Maybe we can create variables in the process that are stored outside
> > > the process state.
> > >
> > > In this case I think this is more sensible than using a human task as
> > > an event.
> > >
> > > In this case there are several options, and they do not affect the very
> > > core of the engine.
> > >
> > > WDYT, Francisco?
> > >
> > > PS: the downside of using globals is making things a bit slower, as you
> > > need to access extra storage when executing process instances.
> > >
> > > On Tue, May 7, 2024, 15:21, Francisco Javier Tirado Sarti <
> > > ftira...@redhat.com> wrote:
> > >
> > > > Alex,
> > > > I think, in general, you use -1 too generously (the same applies to
> > > > the "changes requested" status in PRs).
> > > > First of all, this is using the [DISCUSS] etiquette, so there is not
> > > > really a proposal to -1.
> > > > Second, in my latest e-mail, based on your feedback, I summarized the
> > > > options (context or no context), so I think the proper answer to my
> > > > e-mail should be "I prefer option 2, no context, master workflow",
> > > > which is what I deduce from your -1.
> > > > Hopefully, I will be able to convince the user to use the second
> > > > approach.
> > > >
> > > >
> > > > On Tue, May 7, 2024 at 2:47 PM Alex Porcelli <a...@porcelli.me>
> > > > wrote:
> > > >
> > > > > I understand the willingness to simplify, but I think this risks
> > > > > opening a can of worms. As mentioned by Enrique, a workflow is not a
> > > > > system of record... events are the way to go.
> > > > >
> > > > > Apache KIE takes a code-first approach, so I'd argue that it's
> > > > > expected that users will need to do some coding and leverage best
> > > > > practices to decouple workflows into smaller event-based flows that
> > > > > Apache KIE already supports out of the box.
> > > > >
> > > > > -1 for me.
> > > > >
> > > > > On Tue, May 7, 2024 at 5:53 AM Francisco Javier Tirado Sarti
> > > > > <ftira...@redhat.com> wrote:
> > > > > >
> > > > > > Hi Enrique,
> > > > > > Thanks for the explanation.
> > > > > > As I said, the proposal is trying to implement the concept of
> > > > > > shared context information between workflows.
> > > > > > In a particular domain, several *independent* workflows might want
> > > > > > to share the same set of process variables (and their values).
> > > > > > One way of implementing that, without really changing the core you
> > > > > > are mentioning, is to have a workflow that sets the context (the
> > > > > > context setting can itself be automated) and the other workflows
> > > > > > (the domain ones) that use it by including the context as part of
> > > > > > their input parameters.
> > > > > > The other way would be for workflows (of that particular domain)
> > > > > > to explicitly read the context using some primitive (so you would
> > > > > > have to know that a particular variable is in the "context"). I
> > > > > > find the latter less desirable than the former: the workflow
> > > > > > designer knows that the workflow has a certain input; it really
> > > > > > does not matter how this input is provided.
> > > > > > It is true that, through events or REST invocation, this merging
> > > > > > can be done manually (in fact, this is what the user is currently
> > > > > > doing: they read the output from Data Index and manually add it to
> > > > > > the input of the workflow to be executed), but I think it is a
> > > > > > reasonable request to ask for some facility to perform this merge
> > > > > > using just the runtimes infrastructure (without DI being present).
> > > > > > It is true that the whole domain, at least in theory, can be
> > > > > > modelled as one huge workflow (this was in fact my first proposal
> > > > > > to the user, but it did not really appeal to them; I can try again,
> > > > > > though), where the first part is the "context" setting and then you
> > > > > > have multiple branches (one per independent workflow) waiting for
> > > > > > human intervention. In this approach, every independent workflow
> > > > > > can be invoked as a subflow (triggered by an event or HTTP call).
> > > > > > But we need to check how the whole thing would be visualized in
> > > > > > Data Index.
> > > > > > So far, these are the options:
> > > > > > 1) Implement the shared context through input parameters. There are
> > > > > > two alternative implementations:
> > > > > > a) Change Kogito ProcessService (which is not part of the engine)
> > > > > > to perform the merge if the configuration is present, and add an
> > > > > > interface to the existing persistence addon to store the output
> > > > > > when the process finishes, again if the configuration is present.
> > > > > > b) Do not touch any existing class; add an addon (only active for
> > > > > > users that want this behaviour) that listens for process start and
> > > > > > performs the merge there, with an additional addon for each
> > > > > > supported DB to read the data to be merged (a sketch follows
> > > > > > below). This is more implementation work than a), so you now
> > > > > > understand why I'm writing so much to adopt a) :)
> > > > > > 2) Do not implement a shared context, but write the whole domain as
> > > > > > a single workflow that invokes independent workflows as subflows
> > > > > > based on event triggering. We need to check how this is visualized
> > > > > > in Data Index and monitoring.
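> > > > > >
> > > > > > To illustrate option 1.b, a rough sketch of such an addon. The
> > > > > > listener API is the standard KIE one; the ContextStore and the
> > > > > > "sourceInstanceId" variable are hypothetical, and the exact merge
> > > > > > point may differ in practice:
> > > > > >
> > > > > >     import java.util.Map;
> > > > > >     import org.kie.api.event.process.DefaultProcessEventListener;
> > > > > >     import org.kie.api.event.process.ProcessStartedEvent;
> > > > > >     import org.kie.api.runtime.process.WorkflowProcessInstance;
> > > > > >
> > > > > >     public class ContextMergeListener extends DefaultProcessEventListener {
> > > > > >
> > > > > >         // Hypothetical per-DB addon: returns the persisted output
> > > > > >         // of a finished instance, keyed by its process instance id.
> > > > > >         public interface ContextStore {
> > > > > >             Map<String, Object> outputOf(String processInstanceId);
> > > > > >         }
> > > > > >
> > > > > >         private final ContextStore store;
> > > > > >
> > > > > >         public ContextMergeListener(ContextStore store) {
> > > > > >             this.store = store;
> > > > > >         }
> > > > > >
> > > > > >         @Override
> > > > > >         public void beforeProcessStarted(ProcessStartedEvent event) {
> > > > > >             WorkflowProcessInstance pi =
> > > > > >                     (WorkflowProcessInstance) event.getProcessInstance();
> > > > > >             // Id of the finished workflow A, passed by the caller.
> > > > > >             Object ref = pi.getVariable("sourceInstanceId");
> > > > > >             if (ref != null) {
> > > > > >                 store.outputOf(ref.toString()).forEach(pi::setVariable);
> > > > > >             }
> > > > > >         }
> > > > > >     }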
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Mon, May 6, 2024 at 7:11 PM Enrique Gonzalez Martinez <
> > > > > > egonza...@apache.org> wrote:
> > > > > >
> > > > > > > Hi Francisco,
> > > > > > > The workflow engine is based, at its very core, on two different
> > > > > > > primitives: events and activities.
> > > > > > >
> > > > > > > Events are something that happens that initiates an execution:
> > > > > > > catch/throw events of any type, timers, conditions...
> > > > > > > An activity is something that produces some sort of computation
> > > > > > > unit, like a script, a user task, etc.
> > > > > > >
> > > > > > > Anything that starts a process should be achieved by an event
> > > > > > > (start), and a human task is not really an event of any kind.
> > > > > > >
> > > > > > > There is a primitive called a start event with "parallel
> > > > > > > multiple" set to true, which covers multiple event definitions
> > > > > > > that must all happen in order to start a process. We don't have
> > > > > > > this sort of implementation right now in the engine (it is really
> > > > > > > hard to make it work and requires careful design), but it would
> > > > > > > be your use case if we had it: one event for the process being
> > > > > > > finished, and another event triggered by some human sending a
> > > > > > > start request with its inputs.
> > > > > > >
> > > > > > > That being said, about this idea:
> > > > > > >
> > > > > > > This sort of idea would introduce human tasks as events into the
> > > > > > > very core of the engine. I don't see any justification for doing
> > > > > > > that.
> > > > > > >
> > > > > > > As we mentioned at some point, the human task will become an
> > > > > > > entire subsystem on its own, as it does not fit the requirements
> > > > > > > of a proper subsystem with the features we had in v7.
> > > > > > >
> > > > > > > This introduces a concept of process orchestration based on
> > > > > > > human input, which defeats the purpose of a workflow, as you are
> > > > > > > introducing an arbitrary way of executing subprocesses or
> > > > > > > interdependent processes based on humans. Using the output of a
> > > > > > > human task to trigger the execution of a subprocess is not the
> > > > > > > same as using human input as some sort of gateway event.
> > > > > > >
> > > > > > > How to perform this:
> > > > > > > As Alex mentioned, you can achieve this in a very simple way:
> > > > > > > 1. The process finishes and sends a message to a Kafka topic.
> > > > > > > 2. A third-party system gets the events and allows you to
> > > > > > > manipulate the input.
> > > > > > > 3. It sends the result to another topic.
> > > > > > > 4. A process listening to Kafka triggers a start event.
> > > > > > >
> > > > > > > If you don't like the third-party system, you can create a very
> > > > > > > simple process that reads from a stream, allows you to modify the
> > > > > > > input, and sends the outcome to another stream (see the sketch
> > > > > > > below).
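> > > > > > >
> > > > > > > A minimal sketch of that bridge with the plain Kafka client
> > > > > > > (topic names are made up, the merge is left as a placeholder,
> > > > > > > and in practice the payload would likely be a CloudEvent):
> > > > > > >
> > > > > > >     import java.time.Duration;
> > > > > > >     import java.util.List;
> > > > > > >     import java.util.Properties;
> > > > > > >     import org.apache.kafka.clients.consumer.ConsumerRecord;
> > > > > > >     import org.apache.kafka.clients.consumer.KafkaConsumer;
> > > > > > >     import org.apache.kafka.clients.producer.KafkaProducer;
> > > > > > >     import org.apache.kafka.clients.producer.ProducerRecord;
> > > > > > >
> > > > > > >     public class MergeBridge {
> > > > > > >         public static void main(String[] args) {
> > > > > > >             String pkg = "org.apache.kafka.common.serialization.String";
> > > > > > >             Properties props = new Properties();
> > > > > > >             props.put("bootstrap.servers", "localhost:9092");
> > > > > > >             props.put("group.id", "merge-bridge");
> > > > > > >             props.put("key.deserializer", pkg + "Deserializer");
> > > > > > >             props.put("value.deserializer", pkg + "Deserializer");
> > > > > > >             props.put("key.serializer", pkg + "Serializer");
> > > > > > >             props.put("value.serializer", pkg + "Serializer");
> > > > > > >             try (KafkaConsumer<String, String> in = new KafkaConsumer<>(props);
> > > > > > >                  KafkaProducer<String, String> out = new KafkaProducer<>(props)) {
> > > > > > >                 // Step 1: workflow A publishes its output here on completion.
> > > > > > >                 in.subscribe(List.of("workflow-a-finished"));
> > > > > > >                 while (true) {
> > > > > > >                     for (ConsumerRecord<String, String> rec : in.poll(Duration.ofSeconds(1))) {
> > > > > > >                         // Step 2: manipulate/merge the input here.
> > > > > > >                         String merged = rec.value();
> > > > > > >                         // Step 3: publish to the topic that starts workflow B.
> > > > > > >                         out.send(new ProducerRecord<>("workflow-b-start", rec.key(), merged));
> > > > > > >                     }
> > > > > > >                 }
> > > > > > >             }
> > > > > > >         }
> > > > > > >     }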
> > > > > > >
> > > > > > > Cheers
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Mon, May 6, 2024, 17:16, Francisco Javier Tirado Sarti <
> > > > > > > ftira...@redhat.com> wrote:
> > > > > > >
> > > > > > > > Alex, I might be missing something, but I do not think this
> > > > > > > > scenario can be covered through event consumption. The key part
> > > > > > > > is that workflows of type B are manually executed by users, who
> > > > > > > > provide their own set of parameters. The workflow of type A is
> > > > > > > > just setting a variable context which is shared by all
> > > > > > > > workflows of type B. To simulate such a context without
> > > > > > > > introducing the concept into the workflow definition itself,
> > > > > > > > the properties set up by A should be passed as input of B.
> > > > > > > >
> > > > > > > > On Mon, May 6, 2024 at 5:05 PM Alex Porcelli <porce...@apache.org>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Isn’t this already achievable using events with different
> > > > > > > > > topics?
> > > > > > > > >
> > > > > > > > > -
> > > > > > > > > Alex
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On Mon, May 6, 2024 at 11:02 AM Francisco Javier Tirado
> > > > > > > > > Sarti <ftira...@redhat.com> wrote:
> > > > > > > > >
> > > > > > > > > > Hi,
> > > > > > > > > > This is related to issue
> > > > > > > > > > https://github.com/apache/incubator-kie-kogito-runtimes/issues/3495
> > > > > > > > > > We have one user who would like to reuse the result of one
> > > > > > > > > > workflow execution (let's call this workflow of type A) as
> > > > > > > > > > the input of several workflows (let's call them workflows
> > > > > > > > > > of type B).
> > > > > > > > > >
> > > > > > > > > > Workflow A is executed before all B workflows. Then the B
> > > > > > > > > > workflows are manually executed by users. The desired input
> > > > > > > > > > of a B workflow should be a merge of what the user provides
> > > > > > > > > > in the start request and the output of workflow A. To
> > > > > > > > > > achieve this, users are expected to include, in the start
> > > > > > > > > > request of a workflow of type B, the process instance id of
> > > > > > > > > > workflow A (so rather than taking the output of A and
> > > > > > > > > > merging it for every call, they just pass the process
> > > > > > > > > > instance id).
> > > > > > > > > >
> > > > > > > > > > In order for this approach to work, the output of workflow
> > > > > > > > > > A has to be stored somewhere in the DB (currently the
> > > > > > > > > > runtimes DB only stores active process information). Since
> > > > > > > > > > we do not want all processes to keep their output
> > > > > > > > > > information in the DB (only workflows of type A), workflows
> > > > > > > > > > of type A have to be identified somehow.
> > > > > > > > > >
> > > > > > > > > > But before entering into more implementation details, what
> > > > > > > > > > I would like to know is whether this is a valid use case
> > > > > > > > > > for BPMN as well. The implementation implications are
> > > > > > > > > > pretty relevant: if it is a valid use case for both BPMN
> > > > > > > > > > and SWF, we can implement this functionality in the Kogito
> > > > > > > > > > core, where we can take advantage of the existing
> > > > > > > > > > persistence addons and add the newly required storage
> > > > > > > > > > there. If not, we need to provide a SWF-specific addon for
> > > > > > > > > > each existing persistence add-on with the additional
> > > > > > > > > > storage.
> > > > > > > > > > Please share your thoughts.
> > > > > > > > > > Thanks in advance.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > >
> > > > >