Cache expiry policy is slow

2019-05-30 Thread shivakumar
Hi,
I have configured a cache expiry policy using cache templates for the cache
which I created in Ignite, as below:

[XML cache template configuration with a 10-minute expiry policy; the
snippet was stripped by the mailing-list archive]

According to this configuration, cache entries should be removed once they
are 10 minutes old, but Ignite is taking longer than that to remove them.
This is my observation:
After configuring the cache expiry policy as above, I batch-ingest records
into the table for 4 minutes (around 1 million records in 4 minutes). After
10 minutes Ignite starts removing entries from the table, and the number of
records starts decreasing when I monitor from the visor CLI. Since I
configured the expiry time as 10 minutes, all the entries should be removed
by the end of the 14th minute (because I ingested data from the 0th minute
to the 4th minute), but the last entries are removed only at the end of the
20th minute.
Does any tuning need to be done, or am I missing some configuration?
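
For reference, an equivalent programmatic expiry configuration looks
roughly like this (a sketch only; the cache name and exact settings are
assumed, not taken from my actual template):

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: 10-minute expiry from entry creation; "myCache" is a placeholder.
CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 10)));
// Eager TTL enables background cleanup; note that expired entries are
// removed by a cleanup thread in batches, so removal is not instantaneous.
ccfg.setEagerTtl(true);
```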

regards,
shiva




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite runs out of heap during query (via SQL)

2019-05-30 Thread Roman Guseinov
Hi Shane,

Your understanding is right. During query execution, Ignite copies cache
entries into heap memory.

Java heap size depends on the SQL queries you perform. If a query fetches
a lot of data, it makes sense to increase the maximum heap size.

If the queries use "order by", "group by" or joins, then I would recommend
creating indexes to avoid loading all rows of the table into heap memory.

Also, it is possible to restrict the number of concurrent SQL queries by
configuring the query thread pool size:
https://apacheignite.readme.io/docs/thread-pools#section-queries-pool 
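
A minimal sketch of that setting (the value 4 is only illustrative):

```java
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch: cap the query thread pool to limit how many SQL queries
// execute concurrently (and thus how much heap they can hold at once).
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setQueryThreadPoolSize(4); // illustrative value
```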

Best Regards,
Roman



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Trouble with continuous queries

2019-05-30 Thread Alexandr Shapkin
Wait a sec, I already posted an example, but for some reason it was not
sent properly.

Here we go:


class Program
{
    private static IContinuousQueryHandle _continuousQuery;
    const string CacheName = "sample";
    const string TableName = "entity";

    static void Main(string[] args)
    {
        var ignite = Ignition.Start();

        var cache = ignite.GetOrCreateCache<int, MyEnity>(
            new CacheConfiguration(CacheName) { SqlSchema = "PUBLIC" });

        var createTableSql =
            $"create table if not exists {TableName} (id int, field varchar, primary key (Id)) " +
            $"with \"key_type=int, value_type={typeof(MyEnity).FullName}\"";

        cache.Query(new SqlFieldsQuery(createTableSql)).GetAll();

        ICache<int, MyEnity> entityCache =
            ignite.GetCache<int, MyEnity>($"SQL_PUBLIC_{TableName.ToUpper()}");

        StartContinuousQuery(entityCache);

        var entity0 = new MyEnity { Id = 2, Field = "NEW" };
        entityCache.Put(entity0.Id, entity0);

        var entity = entityCache.Get(entity0.Id);
        Console.WriteLine(entity);

        var sql = $"update {TableName} set field = 'updated' where Id = {entity.Id}";
        var res = entityCache.Query(new SqlFieldsQuery(sql)).GetAll();
        Console.WriteLine(res.First()[0]);

        sql = $"insert into {TableName} (_key, field) VALUES(7, 'new value')";
        res = entityCache.Query(new SqlFieldsQuery(sql)).GetAll();
        Console.WriteLine(res.First()[0]);

        Console.ReadLine();

        _continuousQuery.Dispose();
        Ignition.Stop(null, true);
    }

    public static void StartContinuousQuery(ICache<int, MyEnity> cache)
    {
        var query = new ContinuousQuery<int, MyEnity>(new ClrCacheSyncEventListener());
        _continuousQuery = cache.QueryContinuous(query);
    }

    public class ClrCacheSyncEventListener : ICacheEntryEventListener<int, MyEnity>
    {
        public void OnEvent(IEnumerable<ICacheEntryEvent<int, MyEnity>> evts)
        {
            foreach (ICacheEntryEvent<int, MyEnity> cacheEntryEvent in evts)
            {
                Console.WriteLine($"Action happened: {cacheEntryEvent.EventType}");
            }
        }
    }

    public class MyEnity
    {
        public MyEnity()
        {
        }

        public int Id { get; set; }
        [QuerySqlField]
        public string Field { get; set; }

        public override string ToString()
        {
            return $"Id = {this.Id}; TextField = {this.Field};";
        }
    }
}



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error Running Gridgain's LoadCaches java application

2019-05-30 Thread Jay Fernandez
Hello, attached is the thread dump for the one node.

The client LoadCaches throws this warning over and over again when I turn
on verbose mode.
WARNING: Failed to wait for initial partition map exchange. Possible
reasons are:
  ^-- Transactions in deadlock.
  ^-- Long running transactions (ignore if this is the case).
  ^-- Unreleased explicit locks.
May 30, 2019 3:55:59 PM org.apache.ignite.logger.java.JavaLogger warning
WARNING: Still waiting for initial partition map exchange
[fut=GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent
[evtNode=TcpDiscoveryNode [id=aac48b1a-1a69-4046-a570-ca1346149a5b,
addrs=[0:0:0:0:0:0:0:1, 10.0.164.68, 127.0.0.1], sockAddrs=[
GNLT-T580Jfernandez.boston.gryphonnetworks.com/10.0.164.68:0,
/0:0:0:0:0:0:0:1:0, /127.0.0.1:0], discPort=0, order=2, intOrder=0,
lastExchangeTime=1559246117333, loc=true, ver=2.7.0#20181130-sha1:256ae401,
isClient=true], topVer=2, nodeId8=aac48b1a, msg=null, type=NODE_JOINED,
tstamp=1559246119378], crd=TcpDiscoveryNode
[id=da20f8f5-3889-4aed-a394-c789d75f336a, addrs=[0:0:0:0:0:0:0:1%lo,
10.128.0.10, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, /
127.0.0.1:47500, /10.128.0.10:47500], discPort=47500, order=1, intOrder=1,
lastExchangeTime=1559246119213, loc=false,
ver=2.7.0#20181130-sha1:256ae401, isClient=false],
exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
[topVer=2, minorTopVer=0], discoEvt=DiscoveryEvent
[evtNode=TcpDiscoveryNode [id=aac48b1a-1a69-4046-a570-ca1346149a5b,
addrs=[0:0:0:0:0:0:0:1, 10.0.164.68, 127.0.0.1], sockAddrs=[
GNLT-T580Jfernandez.boston.gryphonnetworks.com/10.0.164.68:0,
/0:0:0:0:0:0:0:1:0, /127.0.0.1:0], discPort=0, order=2, intOrder=0,
lastExchangeTime=1559246117333, loc=true, ver=2.7.0#20181130-sha1:256ae401,
isClient=true], topVer=2, nodeId8=aac48b1a, msg=null, type=NODE_JOINED,
tstamp=1559246119378], nodeId=aac48b1a, evt=NODE_JOINED], added=true,
initFut=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null,
hash=641664202], init=false, lastVer=null, partReleaseFut=null,
exchActions=ExchangeActions [startCaches=null, stopCaches=null,
startGrps=[], stopGrps=[], resetParts=null, stateChangeRequest=null],
affChangeMsg=null, initTs=1559246119400, centralizedAff=false,
forceAffReassignment=false, exchangeLocE=null,
cacheChangeFailureMsgSent=false, done=false, state=CLIENT,
registerCachesFuture=null, partitionsSent=false, partitionsReceived=false,
delayedLatestMsg=null, afterLsnrCompleteFut=GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=12139181], evtLatch=0,
remaining=[da20f8f5-3889-4aed-a394-c789d75f336a], super=GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=1103017075]]]


On Thu, May 30, 2019 at 5:25 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> Can you collect thread dumps from all nodes in the cluster, share those
> with us?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Thu, May 30, 2019 at 00:31, Jay Fernandez wrote:
>
>> This did stop the error from being logged.   However, when I start the
>> loadCaches program, nothing is logged and it seems to just hang.  The
>> ignite logs show that a client connected but nothing after that.  In
>> addition, the web console heap size monitoring jumps up right away and then
>> stops monitoring immediately after.
>>
>> On Tue, May 28, 2019 at 9:42 AM Jay Fernandez 
>> wrote:
>>
>>> Thanks for the reply Denis.  Is this the correct way to disable the checker?
>>>
>>> 
>>>
>>> On Fri, May 24, 2019 at 5:59 PM Denis Magda  wrote:
>>>
 Hi Jay,

 Could you please try to disable the "critical workers checker"?

 https://apacheignite.readme.io/docs/critical-failures-handling#section-critical-workers-health-check

 It will be disabled by default in Ignite 2.7.5 since it requires more
 automation and tuning.

 Let us know if it doesn't work.

 -
 Denis


 On Fri, May 24, 2019 at 9:57 AM jay.fernandez 
 wrote:

> Hello, very new to Ignite and excited about using the application.  I
> have
> installed one Apache Ignite 2.7 node on a GCP VM.  I have the web agent
> running locally and I am using Gridgain's Web Console.  I am getting an
> error trying to run the LoadCaches java application that the Gridgain
> Web
> Console generated based on my MySQL database.
>
> Logs from Ignite Server:
>
> May 24 16:54:50 gdw-mysql57 service.sh[26542]: [16:54:50] Ignite node
> started OK (id=1b7f4add)
> May 24 16:54:50 gdw-mysql57 service.sh[26542]: [16:54:50] Topology
> snapshot
> [ver=1, locNode=1b7f4add, servers=1, clients=0, state=ACTIVE, CPUs=2,
> offheap=1.5GB, heap=1.0GB]
> May 24 16:55:03 gdw-mysql57 service.sh[26542]: [16:55:03] Topology
> snapshot
> [ver=2, locNode=1b7f4add, servers=1, clients=1, state=ACTIVE, CPUs=10,
> offheap=1.5GB, heap=8.1GB]
>
>
> Error from the Java project below, any help would be appreciated.
>
> May 24, 2019 12:53:02 PM 

Re: Trouble with continuous queries

2019-05-30 Thread Mike Needham
My next hurdle is to get a remote listener from .NET working against the cache.

On Thu, May 30, 2019 at 1:47 PM Mike Needham  wrote:

> I was able to figure this out. I was missing
> the ignite.events().localListen(lsnr, EVT_CACHE_OBJECT_PUT);
>
> On Thu, May 30, 2019 at 12:43 PM Mike Needham  wrote:
>
>> But does ignite fire the ignite
>> event EventType.EVT_CACHE_OBJECT_PUT, EventType.EVT_CACHE_OBJECT_READ
>> and EventType.EVT_CACHE_OBJECT_REMOVED
>>
>> On Thu, May 30, 2019 at 12:18 PM Alexandr Shapkin 
>> wrote:
>>
>>> Hi, yes, it should work that way.
>>>
>>> I was able to catch all the events, even with raw SQL, using DBeaver:
>>> https://apacheignite-sql.readme.io/docs/sql-tooling
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>
>>
>> --
>> *Don't be afraid to be wrong. Don't be afraid to admit you don't have all
>> the answers. Don't be afraid to say "I think" instead of "I know."*
>>
>
>
> --
> *Don't be afraid to be wrong. Don't be afraid to admit you don't have all
> the answers. Don't be afraid to say "I think" instead of "I know."*
>


-- 
*Don't be afraid to be wrong. Don't be afraid to admit you don't have all
the answers. Don't be afraid to say "I think" instead of "I know."*


Re: Trouble with continuous queries

2019-05-30 Thread Mike Needham
I was able to figure this out. I was missing
the ignite.events().localListen(lsnr, EVT_CACHE_OBJECT_PUT);

On Thu, May 30, 2019 at 12:43 PM Mike Needham  wrote:

> But does ignite fire the ignite
> event EventType.EVT_CACHE_OBJECT_PUT, EventType.EVT_CACHE_OBJECT_READ
> and EventType.EVT_CACHE_OBJECT_REMOVED
>
> On Thu, May 30, 2019 at 12:18 PM Alexandr Shapkin 
> wrote:
>
>> Hi, yes, it should work that way.
>>
>> I was able to catch all the events, even with raw SQL, using DBeaver:
>> https://apacheignite-sql.readme.io/docs/sql-tooling
>>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
> --
> *Don't be afraid to be wrong. Don't be afraid to admit you don't have all
> the answers. Don't be afraid to say "I think" instead of "I know."*
>


-- 
*Don't be afraid to be wrong. Don't be afraid to admit you don't have all
the answers. Don't be afraid to say "I think" instead of "I know."*


Re: Trouble with continuous queries

2019-05-30 Thread Mike Needham
But does ignite fire the ignite
event EventType.EVT_CACHE_OBJECT_PUT, EventType.EVT_CACHE_OBJECT_READ
and EventType.EVT_CACHE_OBJECT_REMOVED

On Thu, May 30, 2019 at 12:18 PM Alexandr Shapkin  wrote:

> Hi, yes, it should work that way.
>
> I was able to catch all the events, even with raw SQL, using DBeaver:
> https://apacheignite-sql.readme.io/docs/sql-tooling
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
*Don't be afraid to be wrong. Don't be afraid to admit you don't have all
the answers. Don't be afraid to say "I think" instead of "I know."*


Re: Trouble with continuous queries

2019-05-30 Thread Alexandr Shapkin
Hi, yes, it should work that way.

I was able to catch all the events, even with raw SQL, using DBeaver:
https://apacheignite-sql.readme.io/docs/sql-tooling





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite runs out of heap during query (via SQL)

2019-05-30 Thread Shane Duan
Just wondering how the JVM heap is used in Ignite (version 2.7) during
queries. With persistence enabled, I am assuming everything is stored in
off-heap memory or on disk. But at query time, queries are failing because
Ignite is running out of heap space. In my test, I had to increase the heap
size for Ignite to 8 GB with about 5-10 concurrent queries.

My guess is that Ignite needs to process query results using the heap. Is
that right? If that is the case, any recommendations on JVM heap settings
for Ignite?

Thanks,
Shane


Re: Insert into select OOM exception on java heap

2019-05-30 Thread yann Blazart
It's an insert into ... select. We made "meta" tables to allow doing other
selects.

Or do you mean I could do a lazy select and then batch the inserts?

On Thu, May 30, 2019 at 18:15, Ilya Kasnacheev wrote:

> Hello!
>
> I think it would make better sense to mark already updated entries and
> update in batches until no unmarked entries are left.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Thu, May 30, 2019 at 19:14, yann Blazart wrote:
>
>> Hmmm. Can I use limit and offset?
>>
>> Doing limit 1, for example, and continuing while the insert count = 1?
>>
>>
>>
>> On Thu, May 30, 2019 at 17:57, Ilya Kasnacheev wrote:
>>
>>> Hello!
>>>
>>> I'm afraid you will have to split this query into smaller ones. Ignite
>>> doesn't really have lazy insert ... select, so the result set will have to
>>> be held in heap for some time.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> On Thu, May 30, 2019 at 18:36, yann Blazart wrote:
>>>
 Hello, we have 6 nodes configured with 3 GB heap and 30 GB off-heap each.

 We store lots of data in some partitioned tables, then we execute some
 "insert into ... select ... join ..." statements using SqlFieldsQuery (or
 SqlFieldsQueryEx).

 With tables of 5,000,000 rows, we ran into an OOM error, even with lazy
 set to true and skipReducerOnUpdate.

 How can we handle this, please?

 Regards.

>>>


Re: Insert into select OOM exception on java heap

2019-05-30 Thread Ilya Kasnacheev
Hello!

I think it would make better sense to mark already updated entries and
update in batches until no unmarked entries are left.
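
Sketched abstractly in plain Java (no Ignite APIs; the names here are
hypothetical stand-ins for the marker-column approach):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class BatchCopy {
    // Copy unmarked items in bounded batches until none remain, marking
    // each item as it is copied (a stand-in for "UPDATE ... SET copied = 1").
    // Only one batch is held in memory at a time, which bounds heap usage.
    public static <T> List<T> copyInBatches(List<T> source, Set<T> marked, int batchSize) {
        List<T> dest = new ArrayList<>();
        while (true) {
            List<T> batch = new ArrayList<>();
            for (T item : source) {
                if (!marked.contains(item)) {
                    batch.add(item);
                    if (batch.size() == batchSize)
                        break;
                }
            }
            if (batch.isEmpty())
                break;              // no unmarked entries left
            dest.addAll(batch);     // stand-in for the batched INSERT
            marked.addAll(batch);   // mark entries as processed
        }
        return dest;
    }
}
```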

Regards,
-- 
Ilya Kasnacheev


On Thu, May 30, 2019 at 19:14, yann Blazart wrote:

> Hmmm. Can I use limit and offset?
>
> Doing limit 1, for example, and continuing while the insert count = 1?
>
>
>
> On Thu, May 30, 2019 at 17:57, Ilya Kasnacheev wrote:
>
>> Hello!
>>
>> I'm afraid you will have to split this query into smaller ones. Ignite
>> doesn't really have lazy insert ... select, so the result set will have to
>> be held in heap for some time.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Thu, May 30, 2019 at 18:36, yann Blazart wrote:
>>
>>> Hello, we have 6 nodes configured with 3 GB heap and 30 GB off-heap each.
>>>
>>> We store lots of data in some partitioned tables, then we execute some
>>> "insert into ... select ... join ..." statements using SqlFieldsQuery (or
>>> SqlFieldsQueryEx).
>>>
>>> With tables of 5,000,000 rows, we ran into an OOM error, even with lazy
>>> set to true and skipReducerOnUpdate.
>>>
>>> How can we handle this, please?
>>>
>>> Regards.
>>>
>>


Re: Insert into select OOM exception on java heap

2019-05-30 Thread yann Blazart
Hmmm. Can I use limit and offset?

Doing limit 1, for example, and continuing while the insert count = 1?



On Thu, May 30, 2019 at 17:57, Ilya Kasnacheev wrote:

> Hello!
>
> I'm afraid you will have to split this query into smaller ones. Ignite
> doesn't really have lazy insert ... select, so the result set will have to
> be held in heap for some time.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Thu, May 30, 2019 at 18:36, yann Blazart wrote:
>
>> Hello, we have 6 nodes configured with 3 GB heap and 30 GB off-heap each.
>>
>> We store lots of data in some partitioned tables, then we execute some
>> "insert into ... select ... join ..." statements using SqlFieldsQuery (or
>> SqlFieldsQueryEx).
>>
>> With tables of 5,000,000 rows, we ran into an OOM error, even with lazy
>> set to true and skipReducerOnUpdate.
>>
>> How can we handle this, please?
>>
>> Regards.
>>
>


Re: Ignite Visor Cache command hangs indefinitely.

2019-05-30 Thread John Smith
Sorry, pressed enter too quickly.

So basically, I'm 100% sure that if the visor cache command cannot reach
the client node, it just sits there doing nothing.

On Thu, 30 May 2019 at 11:57, John Smith  wrote:

> Hi, running 2.7.0
>
> - I have a 4 node cluster and it seems to be running ok.
> - I have clients connecting and doing what they need to do.
> - The clients are set as client = true.
> - The clients are also connecting from various parts of the network.
>
> The problem with the ignite visor cache command is that if visor cannot
> reach a specific client node, it just seems to hang indefinitely.
>
> Choose node number ('c' to cancel) [0]: c
> visor> cache
>
> It just stays like that, no errors printed, nothing...
>


Ignite Visor Cache command hangs indefinitely.

2019-05-30 Thread John Smith
Hi, running 2.7.0

- I have a 4 node cluster and it seems to be running ok.
- I have clients connecting and doing what they need to do.
- The clients are set as client = true.
- The clients are also connecting from various parts of the network.

The problem with the ignite visor cache command is that if visor cannot
reach a specific client node, it just seems to hang indefinitely.

Choose node number ('c' to cancel) [0]: c
visor> cache

It just stays like that, no errors printed, nothing...


Re: Insert into select OOM exception on java heap

2019-05-30 Thread Ilya Kasnacheev
Hello!

I'm afraid you will have to split this query into smaller ones. Ignite
doesn't really have lazy insert ... select, so the result set will have to
be held in heap for some time.

Regards,
-- 
Ilya Kasnacheev


On Thu, May 30, 2019 at 18:36, yann Blazart wrote:

> Hello, we have 6 nodes configured with 3 GB heap and 30 GB off-heap each.
>
> We store lots of data in some partitioned tables, then we execute some
> "insert into ... select ... join ..." statements using SqlFieldsQuery (or
> SqlFieldsQueryEx).
>
> With tables of 5,000,000 rows, we ran into an OOM error, even with lazy
> set to true and skipReducerOnUpdate.
>
> How can we handle this, please?
>
> Regards.
>


Insert into select OOM exception on java heap

2019-05-30 Thread yann Blazart
Hello, we have 6 nodes configured with 3 GB heap and 30 GB off-heap each.

We store lots of data in some partitioned tables, then we execute some
"insert into ... select ... join ..." statements using SqlFieldsQuery (or
SqlFieldsQueryEx).

With tables of 5,000,000 rows, we ran into an OOM error, even with lazy
set to true and skipReducerOnUpdate.

How can we handle this, please?

Regards.


Re: How to know memory used by a cache or a set

2019-05-30 Thread yann Blazart
Hello. I'm almost sure I already did that. I will check tomorrow.

Thanks

On Mon, May 27, 2019 at 11:09, ibelyakov wrote:

> Hi,
>
> Did you turn on cache metrics for your data region?
>
> To turn the metrics on, use one of the following approaches:
> 1. Set DataRegionConfiguration.setMetricsEnabled(true) for every region you
> want to collect the metrics for.
> 2. Use the DataRegionMetricsMXBean.enableMetrics() method exposed by a
> special JMX bean.
>
> More information regarding cache metrics available here:
> https://apacheignite.readme.io/docs/cache-metrics
>
> Regards,
> Igor
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Toxiproxy

2019-05-30 Thread Delian
Thanks very much Mike. Will run through this and report back. Looks
promising.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Toxiproxy

2019-05-30 Thread Michael Cherkasov
Hi Delian,

I used it to test timeouts for client nodes. I attached an example; I think
it can be adapted for your purposes.

>I've now set up Toxiproxy with a proxy (per node) for discovery,
>communication, shared mem and timeserver as the config file for each node
>allows me to explicitly set ports for these.
Ignite uses only communication and discovery by default. Shared memory
can't be proxied and isn't used by default, and the time server is an
obsolete configuration that isn't used anymore.

So you need to proxy only Discovery and Communication. In the attached
example I created a server node that listens on port 47600 for discovery
and 47200 for communication.
The client node has the following address in its discovery list:
"localhost:47500", so at port 47500 we have Toxiproxy. Also note that the
very first server node needs to connect to itself, so I left
"localhost:47600" in the discovery list.
Okay, now the client node will connect to the server node via Toxiproxy.
But we can't specify a particular address/port for the client to
communicate with the server, because Ignite uses auto-discovery and all
nodes send their communication address on node join via the discovery
protocol. Here is the trick: to the server node I added an address
resolver. It is used for heterogeneous networks, where clients can't
connect directly to servers and need to use another set of addresses:

@NotNull private static AddressResolver getRslvr(String s) {
    return new AddressResolver() {
        @Override public Collection<InetSocketAddress> getExternalAddresses(
            InetSocketAddress addr) throws IgniteCheckedException {
            // Map the advertised port to the Toxiproxy port (original port - 100).
            List<InetSocketAddress> res = Collections.singletonList(
                new InetSocketAddress(addr.getHostName(),
                    addr.getPort() == 0 ? 0 : addr.getPort() - 100)
            );

            System.out.println(Thread.currentThread().getName() + " "
                + s + " resolve: " + addr + " -> " + res);
            return res;
        }
    };
}

So what does this code mean? The server listens on port 47200, but before
sending its address to the client, it asks the AddressResolver to convert
the address; in this case I just reduce the port number by 100, so the
client will get localhost:47100 as the communication address. However, even
with the new address localhost:47100, the server node will still send its
original address localhost:47200 to the client. To get rid of this original
address I added these lines:

Map<String, Collection<String>> userAttr =
    Collections.singletonMap("TcpCommunicationSpi.comm.tcp.addrs",
        Collections.<String>emptyList());

igniteCfg.setUserAttributes(userAttr);

So now the client will get an empty list instead of the original
communication address and will have to use the address returned by the
AddressResolver, which is localhost:47100, where we have Toxiproxy.

I think this example can be adapted to your case.
If you have any further questions, feel free to mail me. I would also
appreciate it if you shared the result of your work.

Thanks,
Mike.

On Wed, May 29, 2019 at 15:37, Delian wrote:

> Is anyone aware whether Toxiproxy can be set up to sit between Ignite nodes
> in order to look at how things behave under various network conditions ? I
> am new to Ignite and am wondering whether my thinking is flawed.
>
> I have a simple 3 node cluster using the static TCP IP finder - all fine.
> I've now set up Toxiproxy with a proxy (per node) for discovery,
> communication, shared mem and timeserver as the config file for each node
> allows me to explicitly set ports for these.  Finally, the ip finders in
> the
> node configs point to the cluster nodes going through ToxiProxy - not
> direct.
>
> Nodes fire up but don't cluster. I'm seeing a lot of activity in the Toxiproxy
> console whereby nodes are sending requests on ports other than the above
> (in some cases incrementing so I assume a range is being attempted). As I
> have not explicitly set these up in Toxiproxy the requests seem to get
> routed to the upstream node on 47500 (service disco) which is obviously
> wrong in some cases. I see a number of open ports for the process - some of
> which I have set but some not and they are not the same across the nodes.
>
> 1) Can I statically set all these ports (even if I knew what they were) so
> I
> can create proxies for them with the hope that allows me to cluster up ?
>
> 2) I believe a ring topology is in play - are the hosts/ip's set up in the
> service disco config always used, i.e. so everything goes through Toxiproxy
> or is there the possibility they will connect direct and bypass ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


ToxiproxyTest.java
Description: application/ms-java


Re: Trouble with continuous queries

2019-05-30 Thread Mike Needham
Do cache events fire when using SQL to update tables? My app is listening
to a queue for JSON payloads that carry the INS/UPD/DEL operations for the
cache entries. They are applied using SQL over the respective caches.

On Wed, May 29, 2019 at 8:14 AM Mike Needham  wrote:

> I have tried various things, but it never fires for a change in the
> cache. That is why I do not think it is set up correctly.
>
>
> On Tue, May 28, 2019 at 9:30 AM Alexandr Shapkin 
> wrote:
>
>> Hi,
>>
>> You can just save an instance of the continuous query somewhere and
>> dispose of it when required:
>>
>>
>>   var listener = new SomeListener();
>>   var someFilter = new CacheEntryEventFilter(
>>       _ignite.GetCluster().GetLocalNode(), Const.CacheName);
>>   var query = new ContinuousQuery(listener, someFilter);
>>
>>   // save the reference in a private field
>>   _continuousQuery = cache.QueryContinuous(query);
>>
>>
>> Regardless of the cache value.
>> As I understand it, you have a mixed-platform solution (Java + .NET).
>> This may lead to additional marshalling configuration.
>>
>> I will try the posted solution a bit later and reply
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
> --
> *Don't be afraid to be wrong. Don't be afraid to admit you don't have all
> the answers. Don't be afraid to say "I think" instead of "I know."*
>


-- 
*Don't be afraid to be wrong. Don't be afraid to admit you don't have all
the answers. Don't be afraid to say "I think" instead of "I know."*


AdaptiveLoadBalancingSpi not removing finished tasks from taskTops map

2019-05-30 Thread chris_d
Hi,

I have a question/comment about the behaviour of adaptive load balancing
with Ignite 2.7.

I was running a load test on our systems and noticed that eventually we were
running out of heap space. Most of the heap was being taken up by the
taskTops map within AdaptiveLoadBalancingSpi.

Each new task was adding topology data to that map but nothing was ever
getting removed.

There is a GridLocalEventListener registered in AdaptiveLoadBalancingSpi
that will remove tasks from the map if events of type
EVT_TASK_FINISHED/EVT_TASK_FAILED are received.

The problem is that those event types don't seem to be recorded by default.

It seems easy enough to sort out by using
IgniteConfiguration.setIncludeEventTypes(EventType.EVT_TASK_FINISHED,
EventType.EVT_TASK_FAILED), but it's not obvious that this is necessary in
the first place. It took me a fair bit of digging through the code and
debugging to work out what was going on.
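
For reference, the workaround sketched as programmatic configuration
(assuming you build IgniteConfiguration in code):

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.EventType;

// Sketch: record task lifecycle events so AdaptiveLoadBalancingSpi's
// event listener can evict finished/failed tasks from its taskTops map.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIncludeEventTypes(EventType.EVT_TASK_FINISHED, EventType.EVT_TASK_FAILED);
```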

Is it possible to improve the adaptive load balancing behaviour a bit?
Ensuring the finished/failed event tasks get recorded by default when
adaptive load balancing is used perhaps?

Thanks,
Chris.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Task flow implementation question

2019-05-30 Thread Pascoe Scholle
Thank you! I'll give that a go.

On Thu, May 30, 2019 at 11:16 AM Pavel Vinokurov 
wrote:

> Hi Pascoe,
>
> Please pay attention to the following example:
> https://apacheignite.readme.io/docs/continuous-mapping#section-example
>
> It demonstrates continuous mapping that could work in your case.
>
>
> Thanks,
> Pavel
>
On Thu, May 30, 2019 at 08:34, Pascoe Scholle wrote:
>
>> Hello everyone,
>>
>> So I am trying to put together a task flow chain. A task can have any
>> number of inputs and outputs, what I would call ports and each port has a
>> value stored in the cache.
>>
>> A task can have numerous predecessors.
>>
>> For example say I have two nodes which can be executed in parallel: one
>> generates a large 2d sparse array and saves this to the cache and the
>> second generates a vector which is also saved to cache. A successor is
>> linked to these two tasks, and has an input of type array and a second
>> input of type vector, looking like follows:
>>
>> GEN_MAT (e.g. 15 seconds)  - >
>>   MAT_VEC_MUL - >
>> GEN_VEC(e.g. 1 second)   - >
>>
>> As I have tried to show, GEN_MAT takes a lot longer; MAT_VEC_MUL can only
>> execute once all of its input ports are set.
>>
>> My question is how to implement this functionality effectively.
>> The input and output ports use a KEY:VALUE scheme, so with cache events,
>> all successor nodes can have their input ports listen for their KEY values
>> to be set, and it does work. But it feels very clunky. I am playing around
>> with setting attributes in ComputeTaskSession, but have not managed to get
>> it working. In a way cache events seem like the best option.
>>
>> Any recommendations or ideas would be really helpful, I am very new to
>> apache ignite and just programming in general.
>>
>> Thanks and kind regards,
>> Pascoe
>>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>


Re: Error Running Gridgain's LoadCaches java application

2019-05-30 Thread Ilya Kasnacheev
Hello!

Can you collect thread dumps from all nodes in the cluster, share those
with us?

Regards,
-- 
Ilya Kasnacheev


On Thu, May 30, 2019 at 00:31, Jay Fernandez wrote:

> This did stop the error from being logged.   However, when I start the
> loadCaches program, nothing is logged and it seems to just hang.  The
> ignite logs show that a client connected but nothing after that.  In
> addition, the web console heap size monitoring jumps up right away and then
> stops monitoring immediately after.
>
> On Tue, May 28, 2019 at 9:42 AM Jay Fernandez 
> wrote:
>
>> Thanks for the reply Denis.  Is this the correct way to disable the checker?
>>
>> 
>>
>> On Fri, May 24, 2019 at 5:59 PM Denis Magda  wrote:
>>
>>> Hi Jay,
>>>
>>> Could you please try to disable the "critical workers checker"?
>>>
>>> https://apacheignite.readme.io/docs/critical-failures-handling#section-critical-workers-health-check
>>>
>>> It will be disabled by default in Ignite 2.7.5 since it requires more
>>> automation and tuning.
>>>
>>> Let us know if it doesn't work.
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Fri, May 24, 2019 at 9:57 AM jay.fernandez 
>>> wrote:
>>>
 Hello, very new to Ignite and excited about using the application.  I
 have
 installed one Apache Ignite 2.7 node on a GCP VM.  I have the web agent
 running locally and I am using Gridgain's Web Console.  I am getting an
 error trying to run the LoadCaches java application that the Gridgain
 Web
 Console generated based on my MySQL database.

 Logs from Ignite Server:

 May 24 16:54:50 gdw-mysql57 service.sh[26542]: [16:54:50] Ignite node
 started OK (id=1b7f4add)
 May 24 16:54:50 gdw-mysql57 service.sh[26542]: [16:54:50] Topology
 snapshot
 [ver=1, locNode=1b7f4add, servers=1, clients=0, state=ACTIVE, CPUs=2,
 offheap=1.5GB, heap=1.0GB]
 May 24 16:55:03 gdw-mysql57 service.sh[26542]: [16:55:03] Topology
 snapshot
 [ver=2, locNode=1b7f4add, servers=1, clients=1, state=ACTIVE, CPUs=10,
 offheap=1.5GB, heap=8.1GB]


 Error from the Java project below, any help would be appreciated.

 May 24, 2019 12:53:02 PM java.util.logging.LogManager$RootLogger log
 WARNING: Failed to resolve default logging config file:
 config/java.util.logging.properties
 [12:53:02]__  
 [12:53:02]   /  _/ ___/ |/ /  _/_  __/ __/
 [12:53:02]  _/ // (7 7// /  / / / _/
 [12:53:02] /___/\___/_/|_/___/ /_/ /___/
 [12:53:02]
 [12:53:02] ver. 2.7.0#20181130-sha1:256ae401
 [12:53:02] 2018 Copyright(C) Apache Software Foundation
 [12:53:02]
 [12:53:02] Ignite documentation: http://ignite.apache.org
 [12:53:02]
 [12:53:02] Quiet mode.
 [12:53:02]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
 [12:53:02]   ^-- To see **FULL** console log here add
 -DIGNITE_QUIET=false
 or "-v" to ignite.{sh|bat}
 [12:53:02]
 [12:53:02] OS: Windows 10 10.0 amd64
 [12:53:02] VM information: Java(TM) SE Runtime Environment 1.8.0_201-b09
 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.201-b09
 [12:53:02] Please set system property '-Djava.net.preferIPv4Stack=true'
 to
 avoid possible problems in mixed environments.
 [12:53:02] Initial heap size is 510MB (should be no less than 512MB, use
 -Xms512m -Xmx512m).
 [12:53:02] Configured plugins:
 [12:53:02]   ^-- None
 [12:53:02]
 [12:53:02] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
 [tryStop=false, timeout=0, super=AbstractFailureHandler
 [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED
 [12:53:03] Message queue limit is set to 0 which may lead to potential
 OOMEs
 when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
 message queues growth on sender and receiver sides.
 [12:53:03] Security status [authentication=off, tls/ssl=off]
 [12:53:03] REST protocols do not start on client node. To start the
 protocols on client node set '-DIGNITE_REST_START_ON_CLIENT=true' system
 property.
 log4j:WARN No appenders could be found for logger
 (org.springframework.beans.factory.support.DefaultListableBeanFactory).
 log4j:WARN Please initialize the log4j system properly.
 log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
 for
 more info.
 May 24, 2019 12:53:18 PM org.apache.ignite.logger.java.JavaLogger error
 SEVERE: Blocked system-critical thread has been detected. This can lead
 to
 cluster-wide undefined behaviour [threadName=partition-exchanger,
 blockedFor=12s]
 May 24, 2019 12:53:18 PM java.util.logging.LogManager$RootLogger log
 SEVERE: Critical system error detected. Will be handled accordingly to
 configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false,
 timeout=0, super=AbstractFailureHandler
 [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]],
 failureCtx=FailureContext

Re: Task flow implementation question

2019-05-30 Thread Pavel Vinokurov
Hi Pascoe,

Please pay attention to the following example:
https://apacheignite.readme.io/docs/continuous-mapping#section-example

It demonstrates continuous mapping that could work in your case.


Thanks,
Pavel

On Thu, May 30, 2019 at 08:34, Pascoe Scholle wrote:

> Hello everyone,
>
> So I am trying to put together a task flow chain. A task can have any
> number of inputs and outputs, what I would call ports and each port has a
> value stored in the cache.
>
> A task can have numerous predecessors.
>
> For example say I have two nodes which can be executed in parallel: one
> generates a large 2d sparse array and saves this to the cache and the
> second generates a vector which is also saved to cache. A successor is
> linked to these two tasks, and has an input of type array and a second
> input of type vector, looking like follows:
>
> GEN_MAT (e.g. 15 seconds)  - >
>   MAT_VEC_MUL - >
> GEN_VEC(e.g. 1 second)   - >
>
> As I have tried to show, GEN_MAT takes a lot longer; MAT_VEC_MUL can only
> execute once all of its input ports are set.
>
> My question is how to implement this functionality effectively.
> The input and output ports use a KEY:VALUE scheme, so with cache events,
> all successor nodes can have their input ports listen for their KEY values
> to be set, and it does work. But it feels very clunky. I am playing around
> with setting attributes in ComputeTaskSession, but have not managed to get
> it working. In a way cache events seem like the best option.
>
> Any recommendations or ideas would be really helpful, I am very new to
> apache ignite and just programming in general.
>
> Thanks and kind regards,
> Pascoe
>


-- 

Regards

Pavel Vinokurov


Re: Compress binary object to save storage space

2019-05-30 Thread Ilya Kasnacheev
Hello!

Unfortunately, it is not implemented. You can, however, compress the values
yourself and uncompress them on use.
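
A rough sketch of the do-it-yourself route with java.util.zip (the class
and method names below are mine, not an Ignite API):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class CompressUtil {
    // Compress a byte array with DEFLATE before putting it into the cache.
    public static byte[] compress(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished())
            out.write(buf, 0, deflater.deflate(buf));
        deflater.end();
        return out.toByteArray();
    }

    // Uncompress on read, before handing the value back to the application.
    public static byte[] uncompress(byte[] data) {
        try {
            Inflater inflater = new Inflater();
            inflater.setInput(data);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            while (!inflater.finished())
                out.write(buf, 0, inflater.inflate(buf));
            inflater.end();
            return out.toByteArray();
        } catch (DataFormatException e) {
            throw new IllegalArgumentException("Not valid DEFLATE data", e);
        }
    }
}
```

The trade-off is extra CPU on every put/get, so it mainly pays off for
large, repetitive values.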
-- 
Ilya Kasnacheev


On Thu, May 30, 2019 at 01:05, wrote:

> Hello,
>
>
>
> Is there a way in ignite to compress big binary objects ? something like
> cacheConfiguration.setCompressed(true)?
>
> I'm using the latest ignite 2.7
>
>
>
>
>
> Thanks,
>
> Nadav
>
> System Architect
>
> +972-544821606
> DocAuthority.com 
>
>
>