Are there features in Ignite that would support running an infinite job (one
that runs for as long as the cluster is up), for example continuously reading
values from a distributed queue? The goal is to implement a producer/consumer
pattern where there can be multiple producers, but the number of consumers is
limited, ideally per specific key/group, or, if that is not possible, just one
consumer per queue.
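
To make it concrete, this is roughly the kind of never-ending consumer I have
in mind (just a sketch; the queue name "events" and the String payload are
made up):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CollectionConfiguration;

public class EventConsumer {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CollectionConfiguration colCfg = new CollectionConfiguration();
        colCfg.setCollocated(true); // keep the whole queue on one node

        // Capacity 0 = unbounded queue; created if it does not exist yet.
        IgniteQueue<String> queue = ignite.queue("events", 0, colCfg);

        // Runs for as long as this node is up: take() blocks until an item arrives.
        while (true) {
            String event = queue.take();
            process(event);
        }
    }

    private static void process(String event) { /* application logic */ }
}
```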

If I asynchronously submit an affinity Ignite job with `queue.affinityRun()`,
what are the implications of this job never finishing? Will it occupy a thread
from the ExecutorService thread pool on the executing node forever?
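
Concretely, by the never-finishing affinity job I mean something along these
lines (again just a sketch, reusing the made-up "events" queue from above;
`IgniteQueue.affinityRun()` runs the closure on the node that holds the
collocated queue):

```java
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.lang.IgniteRunnable;

public class InfiniteAffinityJob {
    static void startConsumer(final IgniteQueue<String> queue) {
        queue.affinityRun(new IgniteRunnable() {
            @Override public void run() {
                // This loop never exits: does it hold a thread from the job
                // executor pool on the queue's node for the node's whole lifetime?
                while (true) {
                    String event = queue.take();
                    // ... process the event ...
                }
            }
        });
    }
}
```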

To give better context, this is the problem I am trying to solve (maybe there
are other approaches to it, and I am looking in a completely wrong
direction?):
- application events arrive periodically (driven by application state changes)
- I have to accumulate these events until the group of events is "complete"
(completion is defined by an application rule); until the group is complete,
nothing can be processed
- when the group is complete, I have to process all of the events in the
group as one complete chunk, while still accepting new events that now belong
to the next "incomplete" group
- then repeat from the beginning
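
To make the description below concrete, assume the events look something like
this (the names are made up; only the fact that an event can be attributed to
a group matters):

```java
import java.io.Serializable;

// Hypothetical event shape: each event knows which group it belongs to,
// so by inspecting an event I can tell whether it still belongs to the
// group being closed or already to the next, incomplete one.
class AppEvent implements Serializable {
    final long groupId;   // made-up grouping key
    final byte[] payload; // application data

    AppEvent(long groupId, byte[] payload) {
        this.groupId = groupId;
        this.payload = payload;
    }
}
```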

So far, I have come up with the following solution:
- collect and keep all the events in a distributed IgniteQueue
- when the application signals the completion of a group, I trigger
`queue.affinityRun()` (I have to peek before removing an event from the
queue, so I want to run the processing logic on the node where the queue is
stored; the queues are small and run in collocated mode, so the peek will not
make an unnecessary network call); see the peek/take sketch after this list
[the reason for the peek is that even when I receive the application event
signalling group completion, due to the way events are stored (in the queue)
I don't know where the group ends, only where it starts (the head of the
queue); but by looking into the event itself I can detect whether it still
belongs to the same group or already to a new, incomplete group. This is why
I have to peek: if I poll/take first, I would have to put the element back at
the head of the queue (which is obviously not possible, as it is a queue and
not a deque), or store this element/event somewhere else and, on the next
submitted job, start with this stored event as the "head" of the queue before
switching back to the real queue. As I don't want this extra complexity, I am
ready to pay the price of an extra peek before the take]
- implement a custom CollisionSpi which understands whether there is already
a running job for the given queue, and if so, keeps the newly submitted job
in the waiting list (a rough sketch of this logic is also shown after the
list)
[here again, because of how the events are stored (in the queue), I don't
allow multiple jobs to run against the same queue at the same time: taking an
element from the middle of a group that is already being processed is
obviously an error, so I have to limit the number of parallel jobs against a
given queue to 1]
- it also requires submitting a new Ignite job (distributed closure) against
the queue every time the application generates a group-completion event, so
that the queue processing gets scheduled (also see above on the overall
number of simultaneous jobs)
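
For the `affinityRun()` step, the processing closure I have in mind looks
roughly like this (only a sketch: it reuses the made-up AppEvent from above,
assumes the id of the just-completed group is known, and omits error
handling):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.ignite.IgniteQueue;
import org.apache.ignite.lang.IgniteRunnable;

public class GroupProcessor {
    // Submitted when the application signals that 'completedGroupId' is complete.
    // Runs on the node holding the collocated queue, so peek()/take() stay local.
    static void processCompletedGroup(final IgniteQueue<AppEvent> queue, final long completedGroupId) {
        queue.affinityRun(new IgniteRunnable() {
            @Override public void run() {
                List<AppEvent> group = new ArrayList<>();

                while (true) {
                    // Peek first: if the head already belongs to the next,
                    // still-incomplete group, it must not be consumed.
                    AppEvent head = queue.peek();

                    if (head == null || head.groupId != completedGroupId)
                        break;

                    group.add(queue.take()); // safe: still part of the completed group
                }

                // ... process 'group' as one complete chunk ...
            }
        });
    }
}
```

The peek/take pair is only safe because, per the CollisionSpi step, at most
one such job runs against a given queue at any time.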
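
And the CollisionSpi decision logic would be roughly this (again only a
sketch: a real SPI also needs the IgniteSpi lifecycle methods, and
queueNameOf() stands for whatever convention is used to attach the target
queue name to the job, e.g. a task session attribute, which is an assumption
on my side):

```java
import java.util.HashSet;
import java.util.Set;

import org.apache.ignite.spi.collision.CollisionContext;
import org.apache.ignite.spi.collision.CollisionJobContext;

// Decision logic only; the real SPI also needs the IgniteSpi lifecycle methods.
public class OneJobPerQueueCollisionLogic {
    public void onCollision(CollisionContext ctx) {
        // Queues that already have a job running (or just activated) against them.
        Set<String> busyQueues = new HashSet<>();

        for (CollisionJobContext active : ctx.activeJobs())
            busyQueues.add(queueNameOf(active));

        for (CollisionJobContext waiting : ctx.waitingJobs()) {
            if (busyQueues.add(queueNameOf(waiting)))
                waiting.activate(); // first job for this queue, let it run
            // otherwise leave it in the waiting list until the running job finishes
        }
    }

    // Hypothetical: however the submitter chose to attach the target queue name to the job.
    private String queueNameOf(CollisionJobContext jobCtx) {
        Object name = jobCtx.getTaskSession().getAttribute("queueName");
        return name == null ? "" : name.toString();
    }
}
```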

I thought about alternative solutions, but all of them turned out to be more
complex and to involve more moving parts (for example, with the distributed
queue Ignite manages atomicity and consistency, while with other approaches I
would have to do all of that manually, which I want to minimize), plus more
logic to maintain and verify for correctness.

Is there any other suitable alternative for the problem described above?




