Guys,
there is no problem with blocking thread monitoring. Please look at the
error message: "failureCtx=FailureContext
[type=SYSTEM_WORKER_TERMINATION, err=class
o.a.i.IgniteCheckedException: Node is stopping: grid-2]]". Some
critical worker was terminated unexpectedly. So the problem isn't
Folks,
What are the current timeouts? We need to know the probability of failures
in the dev environment. This affects usability.
--
Denis
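For reference, a minimal sketch of where such a timeout would be configured, assuming the Ignite 2.7 `IgniteConfiguration#setSystemWorkerBlockedTimeout` setter; the value shown is illustrative, not the shipped default:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Time (in ms) a system worker may stay blocked before the
         failure handler is triggered. Illustrative value only. -->
    <property name="systemWorkerBlockedTimeout" value="600000"/>
</bean>
```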
On Thu, Dec 27, 2018 at 4:59 AM Alexey Goncharuk
wrote:
> Nikolay,
>
> Yes, the fix is already in master. Looks like I was wrong, in your case
> failure
Nikolay,
Yes, the fix is already in master. Looks like I was wrong, in your case
failure handler is triggered by 'Node is stopping: grid-2'. Can you please
share the full trace?
On Thu, Dec 27, 2018 at 12:41, Nikolay Izhikov wrote:
> Alexey
>
> Is the fix for this issue already in master?
> I run tests on
Alexey
Is the fix for this issue already in master?
I ran the tests on the current master.
> Should we somehow announce it on the user-list or highlight on readme.io?
I don't think our users will be happy to be stuck with this behavior in
production.
Do I understand you correctly:
if someone uses 2.7.
Hi Nikolay,
This is the issue I mentioned in the "Critical worker threads liveness checking
drawbacks" topic, which I was expecting to be included in Ignite 2.7, but it
was not. To work around the issue, you should set
DataStorageConfiguration#setCheckpointReadLockTimeout to 0.
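As a sketch, the workaround above maps to the following Spring XML, assuming `setCheckpointReadLockTimeout` follows the usual Spring property naming for `DataStorageConfiguration`:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- 0 disables the checkpoint read lock timeout check. -->
            <property name="checkpointReadLockTimeout" value="0"/>
        </bean>
    </property>
</bean>
```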
Should we somehow
Hello, Igniters.
I ran into an issue with the critical system worker failure handler.
I just ran `IgniteDataFrameSuite` and it terminates on a random test.
My laptop doesn't have bleeding-edge hardware, so the tests can take a
significant amount of time.
Looks like our watchdog is too aggressive on development