Hi Sergey,

Can you please provide more information?
Have you changed the example (if so, can you share the changes you made)?
Does the example execute normally (without node failures)?

In the example, the semaphore is created in non-failover-safe mode,
which means it is not safe to use once it is broken (similar to
CyclicBarrier in java.util.concurrent).
The semaphore is also preserved when the first node fails (if backups
are configured), so after the first node fails, the (broken) semaphore
with the same name should still be in the cache.
This is expected behavior.
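To illustrate the CyclicBarrier analogy with plain java.util.concurrent: once a barrier is broken (e.g. because one waiting party is interrupted or fails), it stays broken for every subsequent caller until it is explicitly reset. This is a standalone sketch of that behavior, not Ignite code:

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BrokenBarrierDemo {
    public static void main(String[] args) throws Exception {
        CyclicBarrier barrier = new CyclicBarrier(2);

        Thread waiter = new Thread(() -> {
            try {
                barrier.await(); // blocks, waiting for a second party
            } catch (InterruptedException | BrokenBarrierException e) {
                System.out.println("waiter saw: " + e.getClass().getSimpleName());
            }
        });
        waiter.start();

        Thread.sleep(200);   // give the waiter time to block in await()
        waiter.interrupt();  // simulates one party failing -> barrier breaks
        waiter.join();

        // The barrier is now permanently broken: any later await() fails
        // immediately with BrokenBarrierException until reset() is called.
        System.out.println("barrier.isBroken() = " + barrier.isBroken());
    }
}
```

The Ignite side is analogous: a non-failover-safe semaphore that survives its creator's node failure stays in the broken state for everyone. If I recall the API correctly, the failover-safe flag is the third argument to Ignite.semaphore(name, cnt, failoverSafe, create); passing true there should make the semaphore survive node failures without breaking, but please double-check the javadoc for your version.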

If this is not the case (the test was executed normally), then please submit a
ticket describing your setup in more detail:
how many nodes, how many backups configured, etc.

Thanks!
Vladisav

On Tue, Nov 8, 2016 at 10:37 AM, Sergey Chugunov <sergey.chugu...@gmail.com>
wrote:

>  Hello folks,
>
> I found a reason why *IgniteSemaphoreExample* hangs when started twice
> without restarting a cluster; and it doesn't seem minor to me anymore.
>
> From here I'm going to refer to example's code so please have it opened.
>
> So, when the first instance of node running example code finishes and
> leaves the cluster, synchronization semaphore named
> "IgniteSemaphoreExample" goes to broken state on all other cluster nodes.
> If I restart the example without restarting all nodes of the cluster, the
> final *acquire* call on the semaphore on the client side hangs, because
> all other nodes treat it as broken and don't increase permits with their
> *release* calls on it.
>
> There is an interesting comment inside its *tryReleaseShared*
> implementation
> (BTW it is implemented in *GridCacheSemaphoreImpl*):
>
> "// If broken, return immediately, exception will be thrown anyway.
>  if (broken)
>    return true;"
>
> It seems that no exceptions are thrown either on the client side calling
> *acquire* or on the server side calling *release* on a broken semaphore.
>
> Does anybody know why it behaves in that way? Is it expected behavior at
> all, and if so, where is it documented?
>
> Thanks,
> Sergey Chugunov.
>
