Re: Questions about getAllInternal(...)

2018-09-09 Thread steve.hostett...@gmail.com
Hello Alexey,

Thanks for the answer. It does help, but I can't help thinking that this sort
of one-size-fits-all approach would have significant performance consequences
for simple use cases (read-only local or replicated caches).

Wouldn't it be useful to have the option of an on-heap, non-serialized cache
(local or replicated)?
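For what it's worth, Ignite 2.x already exposes an on-heap cache option that caches entries on the Java heap in addition to off-heap page memory. A minimal Spring XML sketch (the cache name is illustrative; property names are worth double-checking against the CacheConfiguration javadoc):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myReplicatedCache"/>
    <property name="cacheMode" value="REPLICATED"/>
    <!-- Cache entries on the Java heap in addition to off-heap page memory. -->
    <property name="onheapCacheEnabled" value="true"/>
</bean>
```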

Steve




--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


Re: Critical worker threads liveness checking drawbacks

2018-09-09 Thread David Harvey
It would be safer to restart the entire cluster than to remove the last
node for a cache that should be redundant.

On Sun, Sep 9, 2018, 4:00 PM Andrey Gura  wrote:

> Hi,
>
> I agree with Yakov that we can provide some option that manages worker
> liveness checker behavior in case of observing that some worker is
> blocked too long.
> At least it will provide a workaround for cases when node failure is too
> annoying.
>
> A backups count threshold sounds good, but I don't understand how it
> will help in case of cluster hanging.
>
> The simplest solution here is to raise an alert when some critical
> worker is blocked (we can improve WorkersRegistry for this purpose and
> expose the list of blocked workers) and optionally call a
> system-configured failure processor. BTW, the failure processor can be
> extended to perform additional checks (e.g. backup count) and decide
> whether it should stop the node or not.
> On Sat, Sep 8, 2018 at 3:42 PM Andrey Kuznetsov  wrote:
> >
> > David, Yakov, I understand your fears. But liveness checks deal with
> > _critical_ conditions, i.e. when such a condition is met we conclude the
> > node is totally broken, and there is no sense in keeping it alive
> > regardless of the data it contains. If we want to give it a chance, then
> > the condition (long fsync etc.) should not be considered critical at all.
> >
> > Sat, 8 Sep 2018 at 15:18, Yakov Zhdanov:
> >
> > > Agree with David. We need an option to set a backups count threshold
> > > (at runtime also!) that will not allow any automatic stop if there
> > > would be data loss. Andrey, what do you think?
> > >
> > > --Yakov
> > >
> >
> >
> > --
> > Best regards,
> >   Andrey Kuznetsov.
>


Re: Critical worker threads liveness checking drawbacks

2018-09-09 Thread Andrey Gura
Hi,

I agree with Yakov that we can provide some option that manages worker
liveness checker behavior in case of observing that some worker is
blocked too long.
At least it will provide a workaround for cases when node failure is too annoying.

A backups count threshold sounds good, but I don't understand how it will
help in case of cluster hanging.

The simplest solution here is to raise an alert when some critical
worker is blocked (we can improve WorkersRegistry for this purpose and
expose the list of blocked workers) and optionally call a
system-configured failure processor. BTW, the failure processor can be
extended to perform additional checks (e.g. backup count) and decide
whether it should stop the node or not.
On Sat, Sep 8, 2018 at 3:42 PM Andrey Kuznetsov  wrote:
>
> David, Yakov, I understand your fears. But liveness checks deal with
> _critical_ conditions, i.e. when such a condition is met we conclude the
> node is totally broken, and there is no sense in keeping it alive
> regardless of the data it contains. If we want to give it a chance, then
> the condition (long fsync etc.) should not be considered critical at all.
>
> Sat, 8 Sep 2018 at 15:18, Yakov Zhdanov:
>
> > Agree with David. We need an option to set a backups count threshold
> > (at runtime also!) that will not allow any automatic stop if there
> > would be data loss. Andrey, what do you think?
> >
> > --Yakov
> >
>
>
> --
> Best regards,
>   Andrey Kuznetsov.
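For illustration, the "failure processor with extra checks" idea above might reduce to decision logic like the following sketch. `BackupAwareFailureHandler` and its methods are hypothetical names, not Ignite's actual failure-handling API:

```java
public class BackupAwareFailureHandler {
    private final int minBackups; // configurable threshold (ideally changeable at runtime)

    public BackupAwareFailureHandler(int minBackups) {
        this.minBackups = minBackups;
    }

    /**
     * Decide whether a node with a blocked critical worker should be stopped.
     * Stopping is allowed only while enough backups remain alive, so that
     * removing the node cannot cause data loss.
     */
    public boolean shouldStopNode(boolean criticalWorkerBlocked, int aliveBackups) {
        return criticalWorkerBlocked && aliveBackups >= minBackups;
    }

    public static void main(String[] args) {
        BackupAwareFailureHandler handler = new BackupAwareFailureHandler(1);
        System.out.println(handler.shouldStopNode(true, 2));  // safe to stop
        System.out.println(handler.shouldStopNode(true, 0));  // keep alive: stopping would lose data
    }
}
```

The real handler would obtain the backup count from cluster metadata and would still raise the alert even when it decides not to stop the node.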


[GitHub] ignite pull request #4707: IGNITE-9384 Transaction status can be set incorre...

2018-09-09 Thread agura
GitHub user agura opened a pull request:

https://github.com/apache/ignite/pull/4707

IGNITE-9384 Transaction status can be set incorrectly due to a race on 
prepare phase



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/agura/incubator-ignite ignite-9384

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4707.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4707


commit b14943f33dea9e998ea6d5a9c2570d60d16458d2
Author: agura 
Date:   2018-09-09T19:21:36Z

IGNITE-9384 Transaction status can be set incorrectly due to a race on 
prepare phase




---


[jira] [Created] (IGNITE-9503) Visor CMD shows wrong REPLICATED cache max size

2018-09-09 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-9503:
---

 Summary: Visor CMD shows wrong REPLICATED cache max size
 Key: IGNITE-9503
 URL: https://issues.apache.org/jira/browse/IGNITE-9503
 Project: Ignite
  Issue Type: Task
  Components: visor
Affects Versions: 2.5
Reporter: Denis Magda
Assignee: Alexey Kuznetsov
 Fix For: 2.6


Started 2 nodes with a single replicated cache. Preloaded _50_ entries 
there. 

Visor CMD *cache* command shows a wrong total
{code}
+-------------------------------+------------+-------+---------------------------+-----------+-----------+-----------+-----------+
| Name(@)                       | Mode       | Nodes | Entries (Heap / Off-heap) | Hits      | Misses    | Reads     | Writes    |
+-------------------------------+------------+-------+---------------------------+-----------+-----------+-----------+-----------+
| CacheDataStreamerExample(@c0) | REPLICATED | 2     | min: 246106 (0 / 246106)  | min: 0    | min: 0    | min: 0    | min: 0    |
|                               |            |       | avg: 25.00 (0.00 / 25.00) | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |
|                               |            |       | max: 253894 (0 / 253894)  | max: 0    | max: 0    | max: 0    | max: 0    |
+-------------------------------+------------+-------+---------------------------+-----------+-----------+-----------+-----------+
{code}

You only find the correct total if *cache -a* is used and you sum the
entries on each node manually (253894 + 246106 = 500000):
{code}
+=============================+======+===========+==========+==============+====================+=============+
| Node ID8(@), IP             | CPUs | Heap Used | CPU Load | Up Time      | Size               | Hi/Mi/Rd/Wr |
+=============================+======+===========+==========+==============+====================+=============+
| 44A2CF9C(@n1), 192.168.1.64 | 4    | 19.63 %   | 0.43 %   | 00:01:46.229 | Total: 253894      | Hi: 0       |
|                             |      |           |          |              | Heap: 0            | Mi: 0       |
|                             |      |           |          |              | Off-Heap: 253894   | Rd: 0       |
|                             |      |           |          |              | Off-Heap Memory: 0 | Wr: 0       |
+-----------------------------+------+-----------+----------+--------------+--------------------+-------------+
| 72DEC7B5(@n0), 192.168.1.64 | 4    | 9.69 %    | 0.50 %   | 00:02:00.456 | Total: 246106      | Hi: 0       |
|                             |      |           |          |              | Heap: 0            | Mi: 0       |
|                             |      |           |          |              | Off-Heap: 246106   | Rd: 0       |
|                             |      |           |          |              | Off-Heap Memory: 0 | Wr: 0       |
+-----------------------------+------+-----------+----------+--------------+--------------------+-------------+
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: How to throttle/limit the cache-store read threads?

2018-09-09 Thread Saikat Maitra
Hi Mridul,

Have you considered creating a dedicated thread pool for the cache store
operations? Here is an example:

http://commons.apache.org/dormant/threadpool/

Another option would be the Hystrix Command thread pool, since you
mentioned that the cache store is a remote web service.

https://github.com/Netflix/Hystrix/wiki/How-To-Use#Command%20Thread-Pool
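The dedicated-pool idea could be sketched like this with plain java.util.concurrent (class and method names are illustrative, not an Ignite API; the submitted task stands in for the real REST call):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StorePool {
    // A fixed pool of 2 threads bounds the number of concurrent remote calls,
    // independently of Ignite's public thread pool size.
    static final ExecutorService STORE_POOL = Executors.newFixedThreadPool(2);

    /** Runs a remote lookup on the dedicated pool and waits for the result. */
    static String loadSync(String key) {
        try {
            // The lambda is a placeholder for the actual REST request.
            return STORE_POOL.submit(() -> "value-for-" + key).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(loadSync("k1")); // prints "value-for-k1"
        STORE_POOL.shutdown();
    }
}
```

cache.get and cache.getAll keep using Ignite's public pool; only the store's remote lookups funnel through the small dedicated pool.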

HTH
Regards
Saikat

On Sun, Sep 9, 2018 at 6:04 AM, Mridul Chopra 
wrote:

> Hello,
>
> I have implemented a cache store for read-through Ignite caching. Whenever
> there is a read-through operation through the cache store, I want to limit
> it to a single thread, or at most two. The reason is that the underlying
> cache store is a remote REST-based web service that can only support a
> limited number of connections, so I want to limit the number of requests
> sent for read-through. Please note that if I configure publicThreadPoolSize=1,
> all cache.get and cache.getAll operations would be single-threaded. That
> behaviour is not desired; I want throttling at the cache store level only.
> Thanks,
> Mridul
>


[jira] [Created] (IGNITE-9502) .NET: Add IoC/DI support

2018-09-09 Thread Artyom Sokolov (JIRA)
Artyom Sokolov created IGNITE-9502:
--

 Summary: .NET: Add IoC/DI support
 Key: IGNITE-9502
 URL: https://issues.apache.org/jira/browse/IGNITE-9502
 Project: Ignite
  Issue Type: Improvement
Reporter: Artyom Sokolov


Implement support for popular IoC/DI containers on the .NET platform,
including, but not limited to:

[Castle Windsor|http://www.castleproject.org/projects/windsor/]

[Autofac|https://autofac.org/]

A list of containers can be found 
[here|https://github.com/quozd/awesome-dotnet#ioc].





IoC/DI support in Apache Ignite.NET

2018-09-09 Thread Artyom Sokolov
Hello,

I would like to start implementing IoC/DI support (Autofac, Castle Windsor,
etc.) in Apache Ignite.NET.

Please add me as a contributor in JIRA, so I can create a ticket and start
working on this.

My JIRA username is applicazza.

Cheers,
Artyom.


How to throttle/limit the cache-store read threads?

2018-09-09 Thread Mridul Chopra
Hello,

I have implemented a cache store for read-through Ignite caching. Whenever
there is a read-through operation through the cache store, I want to limit
it to a single thread, or at most two. The reason is that the underlying
cache store is a remote REST-based web service that can only support a
limited number of connections, so I want to limit the number of requests
sent for read-through. Please note that if I configure publicThreadPoolSize=1,
all cache.get and cache.getAll operations would be single-threaded. That
behaviour is not desired; I want throttling at the cache store level only.
Thanks,
Mridul
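One way to cap concurrency at the store level, independent of Ignite's thread pools, is a semaphore guarding the remote call. A minimal sketch (the class and the returned value are illustrative; the real REST request would replace the placeholder):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class ThrottledStore {
    static final int MAX_CONCURRENT = 2;                 // cap on simultaneous remote calls
    static final Semaphore PERMITS = new Semaphore(MAX_CONCURRENT);
    static final AtomicInteger active = new AtomicInteger();
    static final AtomicInteger peak = new AtomicInteger();

    /** Stand-in for the read-through call; at most MAX_CONCURRENT run at once. */
    static String load(String key) {
        PERMITS.acquireUninterruptibly();
        try {
            int now = active.incrementAndGet();
            peak.accumulateAndGet(now, Math::max);       // record observed concurrency
            return "value-for-" + key;                   // real REST request goes here
        } finally {
            active.decrementAndGet();
            PERMITS.release();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            final int n = i;
            threads[i] = new Thread(() -> load("key-" + n));
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // peak never exceeds MAX_CONCURRENT even with 8 caller threads
        System.out.println("peak concurrency: " + peak.get());
    }
}
```

The calling threads (e.g. Ignite's public pool threads) simply block on the semaphore, so cache.get/getAll stay multi-threaded while the remote service sees at most two requests at a time.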


Re: .Net MethodVisitor ambiguous exception during get StringMethod "Contains"

2018-09-09 Thread Pavel Tupitsyn
Hi,

Do you mean that you can't compile the code?
It certainly compiles (under full .NET, .NET Core, and Mono, on Windows and
Linux), so it seems like some issue with your environment.
Can you do a fresh git checkout, then run `build.bat`?

Pavel

On Fri, Sep 7, 2018 at 5:16 AM Tâm Nguyễn Mạnh 
wrote:

> Hi,
>
> There is more than one Contains method for string, which leads to an
> ambiguous match exception.
>
> modules\platforms\dotnet\Apache.Ignite.Linq\Impl\MethodVisitor.cs:57
> ``` C#
> GetStringMethod("Contains", del: (e, v) => VisitSqlLike(e, v, "'%' || ? ||
> '%'")),
> ```
>
> I think it should be:
> ``` C#
> GetStringMethod("Contains", del: (e, v) => VisitSqlLike(e, v, "'%' || ? ||
> '%'"), argTypes:new []{typeof(string)}),
> ```
>
> --
> Thanks & Best Regards
>
> Tam, Nguyen Manh
>