write behind not working for RDBMS

2019-04-01 Thread himanshu
Hi,

I am using Ignite 2.7.0. When I try to test write-through and write-behind, I
run into issues: when I enable write-through I am able to write to the MySQL
database, but when I enable write-behind instead, data is not getting written
to the database. Any help is appreciated.



CacheConfiguration<Long, Employee> employeeCacheConfiguration =
    new CacheConfiguration<>("EmployeeCache");
employeeCacheConfiguration.setIndexedTypes(Long.class, Employee.class);

employeeCacheConfiguration.setWriteBehindEnabled(true);
// employeeCacheConfiguration.setWriteBehindFlushFrequency(1);
employeeCacheConfiguration.setWriteBehindBatchSize(1);
employeeCacheConfiguration.setWriteBehindFlushSize(1);

employeeCacheConfiguration.setReadThrough(true);
// employeeCacheConfiguration.setWriteThrough(true);

I tried various configurations (flush frequency, batch size, etc.) but
write-behind is still not working :(
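For reference, a likely cause (a sketch under assumptions, not a verified fix): in Ignite, write-behind is an asynchronous mode of write-through, so setWriteThrough(true) must remain enabled and a CacheStore factory must be configured, or the store is never invoked at all. EmployeeCacheStore below is a hypothetical CacheStore implementation standing in for whatever talks to MySQL:

```java
CacheConfiguration<Long, Employee> cfg = new CacheConfiguration<>("EmployeeCache");
cfg.setIndexedTypes(Long.class, Employee.class);

// The store that actually writes to MySQL; EmployeeCacheStore is a placeholder name.
cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(EmployeeCacheStore.class));

cfg.setReadThrough(true);
cfg.setWriteThrough(true);        // required: write-behind piggybacks on write-through
cfg.setWriteBehindEnabled(true);  // makes the store updates asynchronous
cfg.setWriteBehindFlushFrequency(2000);
```

With writeThrough left disabled, as in the commented-out line above, neither synchronous nor asynchronous store updates happen.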



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: JDBC Thin Client Set Streaming On/Off in node.js app

2019-04-01 Thread ilyn
Thank you so much Ilya!

I was able to test successfully; now I need to implement it in my real code.
Hopefully you won't be hearing from me for a while :)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


read through with spring data

2019-04-01 Thread himanshu
Hello Gurus,

I have two tables, Person and Contact; the Contact table has the person id as
a foreign key.
I have loaded 200k records into the database and I am trying to test
read-through with this data set using the Spring Data Ignite module.

When I call repository.get(primaryKey) it is able to load that object into
memory, but when I try the same with another attribute of the Person table,
one annotated with the @QuerySqlField annotation, it does not work.

When I have already loaded all the data into memory it works fine, but that
is not a read-through scenario, since all the data is in memory already.

Any help is appreciated
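For what it's worth (a general observation, not a confirmed diagnosis of this setup): read-through is triggered only by key-based cache operations; SQL queries over @QuerySqlField columns are evaluated against the data already in memory and do not consult the underlying CacheStore. A sketch of the distinction, using hypothetical names:

```java
// Key-based lookup: a cache miss goes to the CacheStore (read-through).
Person p = personCache.get(42L);

// SQL over an indexed field: evaluated against in-memory entries only.
// Rows still in the database but not yet loaded are simply not visible.
SqlFieldsQuery qry = new SqlFieldsQuery(
    "select id, name from Person where name = ?").setArgs("Alice");
List<List<?>> rows = personCache.query(qry).getAll();
```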



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Use on-heap and off-heap memory at the same time - recommended?

2019-04-01 Thread ibelyakov
Hello,

There are 2 types of eviction in Ignite:
1. Off-heap eviction works when Native Persistence is turned off and an
off-heap eviction policy is specified.
In this case entries are stored off-heap until the eviction policy limit is
reached; after that, entries are removed from off-heap memory according to
the policy.
https://apacheignite.readme.io/docs/evictions#section-off-heap-memory
2. On-heap eviction works when "onheapCacheEnabled" is turned on and an
on-heap eviction policy is specified.
In this case entries are stored off-heap as in the previous case, but an
additional copy is stored on-heap. If the on-heap eviction policy limit is
reached, entries are removed from the heap according to the policy.
https://apacheignite.readme.io/docs/evictions#section-on-heap-cache

More information regarding On-heap caching can be found here:
https://apacheignite.readme.io/docs/memory-configuration#section-on-heap-caching

In case you're using Native Persistence, there is a process similar to
eviction, called page replacement, but it's not configurable.
With Native Persistence, once you have reached the off-heap limit, entries
are written to the persistent storage.
https://apacheignite.readme.io/docs/distributed-persistent-store
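The two policies above can be sketched in configuration roughly as follows (a sketch under assumptions - the region and cache names are made up, and the limits are illustrative):

```java
// 1. Off-heap eviction (no persistence): evict data pages once the region fills up.
DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("inMemoryRegion")
    .setMaxSize(512L * 1024 * 1024)                         // 512 MB off-heap
    .setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);

// 2. On-heap eviction: keep an extra on-heap copy, bounded by an eviction policy.
CacheConfiguration<Long, String> cache = new CacheConfiguration<Long, String>("myCache")
    .setOnheapCacheEnabled(true)
    .setEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000));
```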

Regards,
Igor



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: When querying data, the field name needs to be all uppercase

2019-04-01 Thread Ilya Kasnacheev
Hello!

What's the dialect of this SQL?

This does not look like Ignite's CREATE TABLE, nor does it look like
Ignite's behavior.
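One general point that may still be relevant: Ignite's SQL engine (H2-based) folds unquoted identifiers to upper case, so an unquoted BuyID and BUYID name the same, upper-cased column; a column created with a quoted, mixed-case name must be quoted in queries as well. A hedged sketch:

```sql
-- Unquoted: stored as BUYID; "select BuyID ..." and "select BUYID ..." both match.
CREATE TABLE Sa (BuyID BIGINT PRIMARY KEY, Type SMALLINT);

-- Quoted: stored as BuyID exactly; must then be queried as "BuyID".
CREATE TABLE Sa2 ("BuyID" BIGINT PRIMARY KEY, "Type" SMALLINT);
SELECT "BuyID" FROM Sa2;
```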

Regards,
-- 
Ilya Kasnacheev


пн, 1 апр. 2019 г. в 09:58, gn01887818 :

> Use client mode.(java code)
>
> CREATE TABLE `Sa` (
>  `BuyID` bigint(20) unsigned NOT NULL DEFAULT 0,
>  `Type` smallint(5) unsigned NOT NULL DEFAULT 0
> );
>
> Select BuyID from Sa;  => No data found.
> Select BUYID from Sa;  => Returns data.
>
> It feels like the field names are being upper-cased.
>
> Is there a way to change the settings of the Ignite DB so that
> "Select BuyID from Sa;" also returns the data?
>
> Thank you
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Documentation gaps on Ignite filesytem

2019-04-01 Thread Ilya Kasnacheev
Hello!

"*can IGFS be deployed without Hadoop or any other secondary file stores*"
I guess the answer is 'yes'.

"I've tried to run *examples/config/filesystem* from the Ignite distribution
to no avail" - strange, I'm able to run:
% bin/ignite.sh examples/config/filesystem/example-igfs.xml
[18:20:32] Ignite node started OK (id=6ecc98b9)
[18:20:32] Topology snapshot [ver=1, locNode=6ecc98b9, servers=1,
clients=0, state=ACTIVE, CPUs=8, offheap=6.3GB, heap=1.0GB]

And then IGFS is there, I can put files and stuff.

Regards,
-- 
Ilya Kasnacheev


пн, 1 апр. 2019 г. в 10:46, maros.urbanec :

> Hi all, a set of questions apropos Ignite filesystem. Reading the
> documentation, it's not immediately clear whether IGFS caching HDFS is an
> option or a must. In other words - *can IGFS be deployed without Hadoop
> or any other secondary file stores*; simply storing its files in an
> Ignite cache? I've tried to run *examples/config/filesystem* from Ignite
> distribution to no avail, Ignite Java client always claims *"IGFS is not
> configured: igfs"* (running on Windows). Last, but not least - IGFS has
> no corresponding C++ API. There's some recurring talk on this mailing list
> about mounting IGFS as a userspace filesystem. Is this integration
> stipulated in Ignite's roadmap?
> --
> Sent from the Apache Ignite Users mailing list archive
>  at Nabble.com.
>


Re: ComputeTask with JSON POST data

2019-04-01 Thread kellan
Sorry, I'm not really familiar with Jetty. I see that there's a
GridJettyRestHandler class in Ignite, but I don't see any place in the Jetty
configuration where this class is referenced, so it's not really clear to me
how this class is being initialized, or how to override it.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: When I use Ignite create table ddl,can I set the db schema instead of the schema name "PUBLIC"?

2019-04-01 Thread kcheng.mvp
Thank you very much! That's really what I want! Hope it will be released in
2.8.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Use on-heap and off-heap memory at the same time - recommended?

2019-04-01 Thread jackluo923
I am wondering whether using on-heap memory along with off-heap memory +
native persistence is recommended, in order to take advantage of entry-level
LRU eviction. With this particular configuration, how would the cache work?

I.e., does an entry first get stored in on-heap memory; when the on-heap
memory is full, do entries get evicted and pushed to off-heap memory; and
when the off-heap memory is full, does the entry get persisted to persistent
storage?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: JVM tuning parameters

2019-04-01 Thread Andrey Gura
Hi,

I don't see any mentions of OOM. The provided log message reports blocking
of the db-checkpoint-thread; I think the worker tries to acquire the
checkpoint read lock.
The stack trace corresponds to the thread that detected the blocking. The
failure handler prints a thread dump to the log, and that thread dump can
help in analyzing the problem.

Also, a more detailed description of the case is required (is it just the
creation of 400 tables, or is data also being added to the tables?).

And finally... 1 core is too hard a restriction, in my view.

On Sat, Mar 30, 2019 at 9:56 AM Denis Magda  wrote:
>
> Hi,
>
> How does the JVM error look?
>
> Apart from that - Andrey, Igniters - the failure handler fired off, but I have 
> no clue from the shared logs what happened or how it is connected to the Java 
> heap issues. Should I expect to see anything in the logs not added to the 
> thread?
>
> --
> Denis Magda
>
>
> On Thu, Mar 28, 2019 at 6:58 AM ashfaq  wrote:
>>
>> Hi Team,
>>
>> We are installing Ignite in a Kubernetes environment with native persistence
>> enabled. When we try to create around 400 tables using the sqlline endpoint,
>> the pods restart after creating 200 tables with a JVM heap error, so we
>> increased the Java heap size from 1GB to 2GB, and this time it failed at
>> 300 tables.
>>
>> We would like to know how we can arrive at the right JVM heap size. We also
>> want to know how to configure things so that the pods are not restarted and
>> the cluster stays stable.
>>
>> Below are the current values that we have used.
>>
>> cpu - 1core
>> xms - 1GB
>> xmx - 2GB
>> RAM - 3GB
>>
>> Below is the error log:
>>
>> "Critical system error detected. Will be handled accordingly to configured
>> handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
>> super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]],
>> failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class
>> o.a.i.IgniteException: GridWorker [name=db-checkpoint-thread,
>> igniteInstanceName=null, finished=false, heartbeatTs=1553771825864]]] class
>> org.apache.ignite.IgniteException: GridWorker [name=db-checkpoint-thread,
>> igniteInstanceName=null, finished=false, heartbeatTs=1553771825864]
>> at
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)
>> at
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)
>> at
>> org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233)
>> at
>> org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297)
>> at
>> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.lambda$new$0(ServerImpl.java:2663)
>> at
>> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7181)
>> at
>> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2700)
>> at
>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
>> at
>> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119)
>> at
>> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Combine key-value with SQL operations over cache

2019-04-01 Thread Ilya Kasnacheev
Hello!

It's not obvious what you are asking, but I will try to answer anyway.

If you do an SQL update via REST, wait for it to return, and then do a get
operation, it should return the up-to-date value if any one of the following
is true:
- There's just one node in the cluster.
- There are no backups configured for this table.
- readFromBackup is false.
- The cache write synchronization mode is FULL_SYNC (this is configurable
when creating the table, btw).
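The last point can be sketched as follows (a sketch, not a verified configuration; the cache name is made up):

```java
// Programmatic cache configuration: make writes wait for all backups.
CacheConfiguration<Long, String> cfg = new CacheConfiguration<Long, String>("myCache")
    .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
```

If I recall correctly, the same can be requested in DDL with CREATE TABLE ... WITH "write_synchronization_mode=FULL_SYNC".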

Regards,
-- 
Ilya Kasnacheev


вс, 31 мар. 2019 г. в 15:17, matanlevy :

> Hi,
>
> I am using Ignite Cache in my program, with a REST client.
>
> as part of it I am combining simple key-value operations with SQL:
> basically, I use the simple key-value API for single-record operations,
> and SQL for deleting/getting multiple records.
>
> I saw here :
>
> https://apacheignite-sql.readme.io/docs/how-ignite-sql-works#concurrent-modifications
> how ignite SQL solves concurrency issues.
>
> does it also apply when combining SQL and simple key-value operations?
>
> i.e., how can I ensure that running a simple GetValue immediately after an
> SQL operation will return the up-to-date value?
>
> Thanks!
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ComputeTask with JSON POST data

2019-04-01 Thread Ilya Kasnacheev
Hello!

You can specify your own Jetty configuration, and in it you should be able
to define your own handlers.

Regards,
-- 
Ilya Kasnacheev


пн, 1 апр. 2019 г. в 14:38, kellan :

> But is there a built-in way to write a custom REST handler for Ignite? Or
> do
> I need to create a separate application that routes requests to Ignite?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: JDBC Thin Client Set Streaming On/Off in node.js app

2019-04-01 Thread Ilya Kasnacheev
Hello!

I think the problem is that you are running parallel operations on a single
connection. This is a no-no with Ignite's JDBC connection (and some others
as well, I would guess). A JDBC connection should only be used sequentially,
but your insertRow() runs the inserts in parallel.

Please see how I managed to sequentialize this operation; after running it I
now have 15 rows in the table:
function(callback) {
  const cities = [
    [16, 'New York', 'USA', 'New York', 8008278],
    [17, 'Los Angeles', 'USA', 'California', 3694820],
    [18, 'Chicago', 'USA', 'Illinois', 2896016],
    [4, 'Houston', 'USA', 'Texas', 1953631],
    [5, 'Philadelphia', 'USA', 'Pennsylvania', 1517550],
    [6, 'Moscow', 'RUS', 'Moscow (City)', 8389200],
    [7, 'St Petersburg', 'RUS', 'Pietari', 4694000],
    [8, 'Novosibirsk', 'RUS', 'Novosibirsk', 1398800],
    [9, 'Nizni Novgorod', 'RUS', 'Nizni Novgorod', 1357000],
    [10, 'Jekaterinburg', 'RUS', 'Sverdlovsk', 1266300],
    [11, 'Shanghai', 'CHN', 'Shanghai', 9696300],
    [12, 'Peking', 'CHN', 'Peking', 7472000],
    [13, 'Chongqing', 'CHN', 'Chongqing', 6351600],
    [14, 'Tianjin', 'CHN', 'Tianjin', 5286800],
    [15, 'Wuhan', 'CHN', 'Hubei', 4344600]
  ];
  var inserts = [];
  for (var city of cities) {
    var sql = "INSERT INTO BUTCH_city (id, name, countrycode, district, population) " +
        "VALUES(" + city[0] + ",'" + city[1] + "','" + city[2] + "','" + city[3] + "'," + city[4] + ");";
    // Capture the current sql value so each queued task inserts its own row.
    inserts.push((function() {
      var s = sql;
      return function(callback) {
        console.log((new Date()) + " insert sql : " + s);
        insertRow(conn, s, callback);
      };
    })());
  }
  // Run the inserts one after another, never in parallel.
  asyncjs.series(inserts, callback);
},

Sorry for my awful Node code.

Regards,
-- 
Ilya Kasnacheev


пт, 29 мар. 2019 г. в 19:48, ilyn :

> Ilya,
>
> Please see attached code.
>
> jdbcStream.js
> 
>
>
> Thanks!
> Ilyn
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Failed to start cache due to conflicting cache ID

2019-04-01 Thread Sergio Hernández Martínez
Hi all,

I am testing the fault tolerance of Ignite 2.7 without persistence. When the
ignite-0001 node goes down and comes back up again, I get the following error
message on the other 2 nodes (ignite-0002 and ignite-0003):

2019-04-01T10:17:18.881+00:00 ERROR - - - org.apache.ignite.logger.log4j2.Log4J2Logger
{"@message": "[:] - Failed to reinitialize local partitions (rebalancing will be stopped):
GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=5, minorTopVer=0],
discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=2a3e4f28-8d8c-4e11-8c58-8b4385592610,
addrs=[172.25.65.125], sockAddrs=[ignite-0001.ignite.test.com/172.25.65.125:47500],
discPort=47500, order=5, intOrder=4, lastExchangeTime=1554113838586, loc=false,
ver=2.7.0#20181130-sha1:256ae401, isClient=false], topVer=5, nodeId8=b329068b,
msg=Node joined: TcpDiscoveryNode [id=2a3e4f28-8d8c-4e11-8c58-8b4385592610,
addrs=[172.25.65.125], sockAddrs=[ignite-0001.ignite.test.com/172.25.65.125:47500],
discPort=47500, order=5, intOrder=4, lastExchangeTime=1554113838586, loc=false,
ver=2.7.0#20181130-sha1:256ae401, isClient=false], type=NODE_JOINED,
tstamp=1554113838860], nodeId=2a3e4f28, evt=NODE_JOINED]", "@data": {}}
org.apache.ignite.IgniteCheckedException: Failed to start cache due to
conflicting cache ID (change cache name and restart grid)
[cacheName=ignite-sys-cache, conflictingCacheName=ignite-sys-cache]
at org.apache.ignite.internal.processors.cache.GridCacheSharedContext.addCacheContext(GridCacheSharedContext.java:524) ~[ignite-core-2.7.0.jar:2.7.0]
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:2161) ~[ignite-core-2.7.0.jar:2.7.0]
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.startReceivedCaches(GridCacheProcessor.java:2063) ~[ignite-core-2.7.0.jar:2.7.0]
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:759) [ignite-core-2.7.0.jar:2.7.0]
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2667) [ignite-core-2.7.0.jar:2.7.0]
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2539) [ignite-core-2.7.0.jar:2.7.0]
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) [ignite-core-2.7.0.jar:2.7.0]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]

When this error occurs, the other 2 Ignite nodes stop with the same error.

In the config.xml file, discovery is configured by address, and I have the
following to keep the same nodeId and consistentId:

   
   
   

These parameters are set so that I can keep my task manager working.

What can be happening?

Thank you


Re: ComputeTask with JSON POST data

2019-04-01 Thread kellan
But is there a built-in way to write a custom REST handler for Ignite? Or do
I need to create a separate application that routes requests to Ignite?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: When I use Ignite create table ddl,can I set the db schema instead of the schema name "PUBLIC"?

2019-04-01 Thread Ilya Kasnacheev
Hello!

I guess you could watch https://issues.apache.org/jira/browse/IGNITE-5780

Regards,
-- 
Ilya Kasnacheev


сб, 30 мар. 2019 г. в 19:41, kcheng.mvp :

> For the case described in the thread
>
> http://apache-ignite-users.70518.x6.nabble.com/Re-Can-t-add-new-key-value-pair-to-existing-cache-via-sql-command-td25126.html#none
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t100/WX20190331-003651%402x.png>
>
>
>
>
> is there a JIRA tracking it? Or when will this limit be removed? I have
> used Ignite in production in one system - so far so good. I would like to
> use Ignite in another, bigger system (more tables, more business changes),
> and I just want to keep the tables of one business module in a single
> cache
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ComputeTask with JSON POST data

2019-04-01 Thread Ilya Kasnacheev
Hello!

You can implement a REST handler in your user code to run tasks with
arbitrary results.
However, the question of writing REST handlers for Jetty is not really
related to Ignite anymore.

Regards,
-- 
Ilya Kasnacheev


пт, 29 мар. 2019 г. в 23:09, kellan :

> How exactly would I go about doing that?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: key value cache - value size limits or concerns

2019-04-01 Thread Ilya Kasnacheev
Hello!

But if one uses collocated processing, such as cache.invoke(), then there's
no need to send large objects over the network (though you do need to
serialize and send your closure).

Regards,
-- 
Ilya Kasnacheev


сб, 30 мар. 2019 г. в 10:11, Denis Magda :

> A downside of big, growing values is that they have to be deserialized,
> updated, and serialized again on every update. Plus, you will send them
> over the network to your app. Finally, there is no way to index the lists.
>
> -
> Denis
>
>
> On Fri, Mar 29, 2019 at 7:30 AM wt  wrote:
>
>> Hi SCott
>>
>> access will be driven by data loads on the other end. I am looking at ODBC
>> and REST access, which will be used to load data into SQL Server
>>
>> here is a base of how it will look
>>
>> class data
>> {
>>  timespan loaddate
>>  string item
>> string value
>> }
>>
>> class alldata
>> {
>>list records
>> }
>>
>>
>> then when we query it will be something like this
>>
>> select item, value,loaddate from cache where loaddate > somevalue and key
>> =
>> somevalue
>>
>> They will likely run ETL with joins to the cache on key and date.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Clarity on Ignite filesytem

2019-04-01 Thread Vladimir Ozerov
Hi Maros,

Over the years of IGFS's existence we came to the conclusion that it doesn't
fit typical Ignite use cases well. Development of this component is currently
frozen, and most likely it will be removed in the near future.
Hence, there are no plans to improve it or extend it with new APIs.

On Mon, Apr 1, 2019 at 11:14 AM maros.urbanec 
wrote:

> Hi all, I've been recently looking into Ignite Filesystem (IGFS for short)
> and couldn't find much assistance by Google searches.
>
> Reading the documentation, it wasn't immediately obvious whether IGFS as a
> cache for HDFS is an option or a must. In other words - is it possible to
> use IGFS without Hadoop or any other secondary file stores; storing files
> in
> Ignite cache only?
>
> Secondarily, I've tried to start Ignite with
> examples\config\filesystem\example-igfs.xml configuration, only for Java
> client to obstinately claim "IGFS is not configured: igfs". The
> configuration is being read by Ignite, since enabling shared memory
> endpoint
> end up in error (I'm running on Windows). Is there any step I'm missing?
>
> As a last question - there is no C++ API for IGFS at this point. However, a
> recurring talk on these mailing lists is an integration of IGFS as a
> userspace filesystem. Is this item stipulated on a roadmap?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Clarity on Ignite filesytem

2019-04-01 Thread maros.urbanec
Hi all, I've been recently looking into Ignite Filesystem (IGFS for short)
and couldn't find much assistance by Google searches.

Reading the documentation, it wasn't immediately obvious whether IGFS as a
cache for HDFS is an option or a must. In other words - is it possible to
use IGFS without Hadoop or any other secondary file stores; storing files in
Ignite cache only?

Secondarily, I've tried to start Ignite with
examples\config\filesystem\example-igfs.xml configuration, only for Java
client to obstinately claim "IGFS is not configured: igfs". The
configuration is being read by Ignite, since enabling shared memory endpoint
end up in error (I'm running on Windows). Is there any step I'm missing?

As a last question - there is no C++ API for IGFS at this point. However, a
recurring talk on these mailing lists is an integration of IGFS as a
userspace filesystem. Is this item stipulated on a roadmap?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Primary partitions return zero partitions before rebalance.

2019-04-01 Thread Павлухин Иван
Hi,

Sorry for the late answer. The observed result seems expected to me. I
suppose the following:
1. EVT_CACHE_REBALANCE_STOPPED is fired when a particular node has loaded
all the partitions it will be responsible for.
2. All nodes in the cluster must become aware that the partition
assignment has changed, so a partition map exchange (PME) happens to make
all nodes aware of the new assignment.
3. Once the PME completes, all nodes will consistently treat the
just-entered node as primary for the corresponding set of partitions.

Do not hesitate to write back if you feel that something is going wrong.

вт, 19 мар. 2019 г. в 19:30, Koitoer :
>
> Hello Igniters
>
> The version of Ignite that we are using is 2.7.0. I'm adding the events that 
> I want to hear via the IgniteConfiguration using the `setIncludeEventTypes`
> Then using ignite.event().localListen(listenerPredicate, eventTypes);
>
> EVT_CACHE_REBALANCE_STARTED,
> EVT_CACHE_REBALANCE_STOPPED,
> EVT_CACHE_REBALANCE_PART_LOADED,
> EVT_CACHE_REBALANCE_PART_UNLOADED,
> EVT_CACHE_REBALANCE_PART_DATA_LOST
>
> Once I listen any of the events above, I used 
> `ignite.affinity(cacheName.name())`  to retrieve the Affinity function in 
> which I'm calling the `primaryPartitions` method or `allPartitions` using the 
> ClusterNode instance that represents `this` node.
>
> Once I hear the rebalance process stop event I created a thread in charge of 
> checking the partition assignment as follows.
>
> new Thread(() -> {
> for (int attempt = 0; attempt <= attempts; attempt++) {
> log.info("event=partitionAssignmentRetryLogic attempt={}, before={}, 
> now={}", attempt, assignedPartitions,
> affinity.primaryPartitions(clusterNode));
>
> try {
> if (affinity.primaryPartitions(clusterNode).length != 0) {
> log.info("event=partitionAssignmentRetryLogicSuccess");
> }
> TimeUnit.SECONDS.sleep(delay);
> } catch (Exception e) {
> log.error("event=ErrorOnTimerWait message={}", e.getMessage(), e);
> }
> }
> }).start();
>
>
> After a couple of attempts (some seconds), the `primaryPartitions` is 
> returning the correct set of partitions assigned to a node.  I will check the 
> AffinityAssignment for trying to do this in a cleaner way as you suggest.
>
>
> On Fri, Mar 15, 2019 at 12:11 PM Павлухин Иван  wrote:
>>
>> Hi,
>>
>> What Ignite version do you use?
>> How do you register your listener?
>> On what object do you call primaryPartitions/allPartitions?
>>
>> It is true that Ignite uses late affinity assignment. This means
>> that for each topology change (node enter or node leave) the partition
>> assignment changes twice. The first time, temporary backups are created
>> which are rebalanced from other nodes (EVT_CACHE_REBALANCE_STARTED
>> takes place here). The second time, redundant partition replicas are
>> marked as unusable (and unloaded after that)
>> (EVT_CACHE_REBALANCE_STOPPED). It is also useful to understand that
>> the Affinity interface calculates the partition distribution using the
>> affinity function, and such a distribution might differ from the real
>> partition assignment - it differs while rebalancing is in progress. See
>> the AffinityAssignment interface.
>>
>> ср, 13 мар. 2019 г. в 21:59, Koitoer :
>> >
>> > Hi All.
>> >
>> > I'm trying to follow the rebalance events of my ignite cluster so I'm able 
>> > to track which partitions are assigned to each node at any point in time. 
>> > I am listening to the `EVT_CACHE_REBALANCE_STARTED` and 
>> > `EVT_CACHE_REBALANCE_STOPPED`
>> > events from Ignite and that is working well, except in the case one node 
>> > crash and another take its place.
>> >
>> > My cluster is 5 nodes.
>> > Ex. Node 1 has let's say 100 partitions, after I kill this node the 
>> > partitions that were assigned to it, got rebalance across the entire 
>> > cluster, I'm able to track that done with the STOPPED event and checking 
>> > the affinity function in each one of them using the `primaryPartitions` 
>> > method gives me that, if I add all those numbers I get 1024 partitions, 
>> > which is why I was expected.
>> >
>> > However when a new node replaces the previous one, I see a rebalance 
>> > process occurs and now I'm getting that some of the partitions `disappear` 
>> > from the already existing nodes (which is expected as well as new node 
>> > will take some partitions from them) but when the STOPPED event is 
>> > listened by this new node if I call the `primaryPartitions` that one 
>> > returns an empty list, but if I used the  `allPartitions` method that one 
>> > give me a list (I think at this point is primary + backups).
>> >
>> > If I let pass some time and I execute the `primaryPartitions` method again 
>> > I am able to retrieve the partitions that I was expecting to see after the 
>> > STOPPED event comes. I read here 
>> > https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood#id-(PartitionMap)Exchange-under
>> >  the 

Documentation gaps on Ignite filesytem

2019-04-01 Thread maros.urbanec
Hi all, a set of questions apropos the Ignite filesystem. Reading the
documentation, it's not immediately clear whether IGFS caching HDFS is an
option or a must. In other words - *can IGFS be deployed without Hadoop or
any other secondary file stores*; simply storing its files in an Ignite
cache? I've tried to run /examples/config/filesystem/ from the Ignite
distribution to no avail; the Ignite Java client always claims /"IGFS is not
configured: igfs"/ (running on Windows). Last, but not least - IGFS has no
corresponding C++ API. There's some recurring talk on this mailing list
about mounting IGFS as a userspace filesystem. Is this integration
stipulated in Ignite's roadmap?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

When querying data, the field name needs to be all uppercase

2019-04-01 Thread gn01887818
Use client mode.(java code)

CREATE TABLE `Sa` (
 `BuyID` bigint(20) unsigned NOT NULL DEFAULT 0,
 `Type` smallint(5) unsigned NOT NULL DEFAULT 0
);

Select BuyID from Sa;  => No data found.
Select BUYID from Sa;  => Returns data.

It feels like the field names are being upper-cased.

Is there a way to change the settings of the Ignite DB so that
"Select BuyID from Sa;" also returns the data?

Thank you



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/