Re: Running heavy queries on Ignite cluster on backing store directly without impacting the cluster

2018-03-29 Thread Naveen
The 'lazy' flag is something we can use when we expect a huge result set and
want to minimize memory consumption at the cost of a small performance hit.
However, my requirement is to run a query that joins a couple of tables with a
complex WHERE clause to debug some data-related issues. This query may not
return a huge result set, but it scans the complete table to produce it, and
during that time it should not hurt cluster performance.

So my question is: can we apply something like the eviction mechanism here? With
eviction, if a record is not in RAM, Ignite goes to the backing store, fetches
the data, and also loads it into memory. In a similar way, can we hint the
cluster to run the query against the backing store instead of RAM, so that the
impact on RAM is smaller when such a query runs?

In our case, we have very high-end servers with 2 TB of RAM and 128 cores each.
RAM is utilized at 60 to 70%, but CPU resources are mostly idle.

I hope I was able to convey my ad-hoc requirement: running various SQL queries
to debug data-related issues without these queries affecting the performance of
the cluster.

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite Client Heap out of Memory issue

2018-03-29 Thread shawn.du






Hi,

My Ignite client hit a heap OOM yesterday. This is the first time we have
encountered this issue. Our Ignite version is 2.3.0. My Ignite client is
colocated within a Storm worker process, and this issue caused the Storm
worker to restart. I have several questions:

1) If Ignite runs in client mode, does it use off-heap memory? How do we set
the maximum on-heap/off-heap memory to use?
2) Our Storm worker has 8 GB of memory. The Ignite client printed an OOM, but
it didn't trigger the Storm worker to dump its heap. However, we got a heap
dump from an Ignite server, and that server didn't die. The server's heap
dump is very small, only 200 MB. So which process actually ran out of memory,
the worker or the Ignite server?

These are the logs. Thanks in advance.

Suppressed: org.apache.ignite.IgniteCheckedException: Failed to update keys on primary node.
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.UpdateErrors.addFailedKeys(UpdateErrors.java:124) ~[stormjar.jar:?]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse.addFailedKeys(GridNearAtomicUpdateResponse.java:342) ~[stormjar.jar:?]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1784) ~[stormjar.jar:?]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1627) ~[stormjar.jar:?]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:3054) ~[stormjar.jar:?]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$400(GridDhtAtomicCache.java:129) ~[stormjar.jar:?]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:265) ~[stormjar.jar:?]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:260) ~[stormjar.jar:?]
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060) ~[stormjar.jar:?]
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579) ~[stormjar.jar:?]
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378) ~[stormjar.jar:?]
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304) ~[stormjar.jar:?]
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99) ~[stormjar.jar:?]
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293) ~[stormjar.jar:?]
    at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555) ~[stormjar.jar:?]
    at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183) ~[stormjar.jar:?]
    at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126) ~[stormjar.jar:?]
    at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090) ~[stormjar.jar:?]
    at org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:505) ~[stormjar.jar:?]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
    Suppressed: java.lang.OutOfMemoryError: Java heap space
        at org.apache.ignite.internal.processors.cache.IncompleteCacheObject.<init>(IncompleteCacheObject.java:44) ~[stormjar.jar:?]
        at org.apache.ignite.internal.processors.cacheobject.IgniteCacheObjectProcessorImpl.toCacheObject(IgniteCacheObjectProcessorImpl.java:191) ~[stormjar.jar:?]
        at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.readIncompleteValue(CacheDataRowAdapter.java:404) ~[stormjar.jar:?]
        at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.readFragment(CacheDataRowAdapter.java:248) ~[stormjar.jar:?]
        at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:174) ~[stormjar.jar:?]
        at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102) ~[stormjar.jar:?]
        at org.apache.ignite.internal.processors.query.h2.database.H2RowFactory.getRow(H2RowFactory.java:62) ~[stormjar.jar:?]

Re: Unable to connect ignite pods in Kubernetes using Ip-finder

2018-03-29 Thread Denis Magda
Guys,

Thanks for reporting and sharing the resolution. We'll update Ignite
Kubernetes deployment doc:
https://issues.apache.org/jira/browse/IGNITE-8081

--
Denis

On Tue, Mar 27, 2018 at 10:10 AM, lukaszbyjos 
wrote:

> I have the same problem on GKE with 2.4. I found some helpful info there,
> but I see Ignite needs more permissions.
> I think this should be added to the Ignite Kubernetes deployment
> instructions.
>
> https://stackoverflow.com/a/49405879/2578137
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Upgrade from 2.1.0 to 2.4.0 resulting in error within transaction block

2018-03-29 Thread smurphy
Another update:

I changed the code (shown at the beginning of this thread) that deletes
records from:

SqlFieldsQuery sqlQuery = new SqlFieldsQuery("delete from EngineFragment where " + criteria());
fragmentCache.query(sqlQuery.setArgs(criteria.getArgs()));

to:

fragmentCache.remove(id);

This code works successfully within a transaction.
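For reference, a minimal sketch of a key-based remove inside an explicit Ignite transaction. The cache name, key/value types, and the OPTIMISTIC/SERIALIZABLE settings are illustrative assumptions, not the poster's actual configuration:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

public class TxRemoveSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // The cache must be TRANSACTIONAL for remove() to join a transaction.
            CacheConfiguration<Long, String> ccfg =
                    new CacheConfiguration<Long, String>("EngineFragment")
                            .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            IgniteCache<Long, String> fragmentCache = ignite.getOrCreateCache(ccfg);
            fragmentCache.put(1L, "fragment-1");

            // Key-based remove() participates in the transaction; the SQL DML
            // through cache.query() was the part that failed in this thread.
            try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                fragmentCache.remove(1L);
                tx.commit();
            }

            System.out.println(fragmentCache.containsKey(1L)); // prints "false"
        }
    }
}
```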






Re: how to organize groups in cluster?

2018-03-29 Thread mamaco
Hi Andrew,
Thank you for the response.
1. Yes, I will try this way to build sub-caches.
2. Yes, that makes sense.
3. I tested EvictionPolicy; it seems to be unavailable in Ignite 2.0+. Please
refer to my earlier discussion (with Denis). Is there something else that I
need to know?

I'm working on non-blocking real-time processes, and I hope to gain a better
understanding of the Ignite documentation.


Andrew Mashenkov wrote
> Hi,
> 
> 1. A NodeFilter can be used [1] to bind a cache to certain data nodes.
> 2. You can subscribe to cache updates via a ContinuousQuery [2].
> But keep in mind that the ContinuousQuery listener is called from
> performance-sensitive code, and it is usually a bad idea to perform blocking
> operations in it.
> 3. To keep only the top N entries in a cache, an EvictionPolicy [3] can be
> used. Note that EvictionPolicy doesn't support persistence.
> 
> Hope this helps.
> 
> [1]
> https://stackoverflow.com/questions/39612201/apache-ignite-how-to-deploy-cache-to-some-certain-cluster-group
> [2] https://apacheignite.readme.io/docs/continuous-queries
> [3]
> https://apacheignite.readme.io/docs/evictions#section-first-in-first-out-fifo-
> 
> On Thu, Mar 29, 2018 at 2:42 AM, vkulichenko 

> valentin.kulichenko@

> 
> wrote:
> 
>> Sorry, I'm still failing to understand what you're trying to achieve.
>> What
>> is
>> the reason to manually maintain a tree structure which is basically an
>> index? Why not use Ignite indexes that are provided out of the box?
>>
>> -Val
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
> 
> 
> 
> -- 
> Best regards,
> Andrey V. Mashenkov







Re: how to organize groups in cluster?

2018-03-29 Thread mamaco
Hi Val,
Here's the background:
I'm thinking about creating message pipelines right in the Ignite cluster.
It's a nightmare to store data in a centralized cache and run everything on
it, so my whole idea is to guarantee a *non-blocking process* and scalable
consumers.

1. (producer) put the real-time stream into an internal message pipeline
2. (consumer) top-N algorithm process
3. (consumer) cross-cache replication

// Topic-based message receiver
IgniteMessaging msg = ignite.message(ignite.cluster().forRemotes());
msg.localListen("Item", (nodeId, message) -> {
    this.topitems.Process((AdobeItem) message);
    return true;
});


// Algorithm class
public class AdobeTopViewedItems {
    private static Ignite ignite;
    private static IgniteCache<String, AdobeItem> cacheItems;

    private Map<String, AdobeItem> ListTopN = new LinkedHashMap<>();
    private AdobeItem smallest = null;
    private Integer TopN = Integer.parseInt(System.getProperty("TopN"));
    private static WebsocketUtil socket;

    public AdobeTopViewedItems(String accountid) {
        ignite = Ignition.ignite();
        cacheItems = ignite.cache("Items");
        Initialize(accountid);
        socket = new WebsocketUtil("s7biapp10", 9001, "ItemChannel");
    }

    public Map<String, AdobeItem> Get() {
        return ListTopN;
    }

    public void Initialize(String accountid) {
        this.ListTopN = load(accountid);
    }

    public Map<String, AdobeItem> load(String accountid) {
        Map<String, AdobeItem> output = new LinkedHashMap<>();
        String script = "from AdobeItem \n"
                + "where \n"
                + "datekey >= '" + DateUtil.DateTime2DateString(DateUtil.DateTime2Date(new Date())) + "' \n"
                + "and accountid = '" + accountid + "' \n"
                + "order by visits desc \n"
                + "limit " + Integer.toString(TopN) + " \n";
        SqlQuery<String, AdobeItem> sql = new SqlQuery<>(AdobeItem.class, script);
        QueryCursor<javax.cache.Cache.Entry<String, AdobeItem>> cursor = cacheItems.query(sql);
        for (javax.cache.Cache.Entry<String, AdobeItem> row : cursor) {
            AdobeItem entry = row.getValue();
            output.put(entry.key(), entry);
        }
        return output;
    }

    public void Process(AdobeItem item) {
        if (this.smallest == null) {
            this.smallest = item;
            this.ListTopN.put(item.key(), item);
            this.ListTopN = sort(ListTopN, false);
        } else {
            if (item.visits > this.smallest.visits) {
                this.smallest = item;
                this.ListTopN.put(item.key(), item);
                this.ListTopN = sort(ListTopN, false);
                send(item);
            }
        }
    }

    private void send(AdobeItem item) {
        String message = new Gson().toJson(item);
        socket.send(message, MessageRoute.server2client.toString());
    }

    private Map<String, AdobeItem> sort(Map<String, AdobeItem> map, Boolean descending) {
        final Boolean is_descending = descending;
        List<Entry<String, AdobeItem>> list = new LinkedList<>(map.entrySet());
        Collections.sort(list, new Comparator<Entry<String, AdobeItem>>() {
            public int compare(Entry<String, AdobeItem> o1, Entry<String, AdobeItem> o2) {
                if (is_descending) {
                    return o2.getValue().visits.compareTo(o1.getValue().visits);
                } else {
                    // The original had identical branches; ascending order
                    // presumably intended here.
                    return o1.getValue().visits.compareTo(o2.getValue().visits);
                }
            }
        });
        Map<String, AdobeItem> sortedMap = new LinkedHashMap<>();
        Integer counter = 0;
        for (Entry<String, AdobeItem> entry : list) {
            sortedMap.put(entry.getKey(), entry.getValue());
            counter += 1;
            smallest = entry.getValue();
            if (counter == TopN) break;
        }
        list.clear();
        list = null;
        return sortedMap;
    }
}




vkulichenko wrote
> Sorry, I'm still failing to understand what you're trying to achieve. What
> is
> the reason to manually maintain a tree structure which is basically an
> index? Why not use Ignite indexes that are provided out of the box?
> 
> -Val
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/







Re: Upgrade from 2.1.0 to 2.4.0 resulting in error within transaction block

2018-03-29 Thread smurphy
I neglected to say that I also removed Spring's ChainedTransactionManager and
used SpringTransactionManager in the annotation, which also resulted in the
same stack trace.

@Transactional("igniteTxMgr")
// code that deletes from the cache...

// Here is how the transaction manager is wired up:
@Bean
public PlatformTransactionManager igniteTxMgr(Ignite ignite) {
    SpringTransactionManager igniteTxMgr = new SpringTransactionManager();
    igniteTxMgr.setIgniteInstanceName(ignite.name());
    igniteTxMgr.setTransactionConcurrency(OPTIMISTIC);
    return igniteTxMgr;
}





Re: How to insert multiple rows/data into Cache once

2018-03-29 Thread Andrey Mashenkov
Hi,

Try using a DataStreamer for fast cache loading [1].
If you need SQL, you can try bulk-mode updates via the JDBC streaming mode [2].

Also, a COPY SQL command [3] will be available in the upcoming 2.5 release.
The feature is already in master, so you can try building from it. See the
example [4].

[1] https://apacheignite.readme.io/docs/data-streamers
[2]
https://apacheignite.readme.io/v2.0/docs/jdbc-driver#section-streaming-mode
[3] https://issues.apache.org/jira/browse/IGNITE-6917
[4]
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/sql/SqlJdbcCopyExample.java
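To make the first suggestion concrete, here is a minimal DataStreamer sketch. The cache name and key/value types are assumptions for illustration:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("myCache");

            // The streamer buffers entries into per-node batches and loads
            // them in parallel, typically much faster than individual put().
            try (IgniteDataStreamer<Integer, String> streamer =
                         ignite.dataStreamer("myCache")) {
                for (int i = 0; i < 1000; i++)
                    streamer.addData(i, "value-" + i);
            } // close() flushes any remaining buffered entries

            System.out.println(ignite.cache("myCache").size()); // prints 1000
        }
    }
}
```

The try-with-resources around the streamer matters: entries may sit in its buffers until flush() or close() is called.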

On Thu, Mar 29, 2018 at 11:30 AM,  wrote:

> Dear all,
>
>
>
> I am trying to use the SqlFieldsQuery API to insert data into one cache on
> Ignite.
>
> I can insert one row into the cache at a time.
>
> However, I don't know how to insert multiple rows into the cache at once.
>
> For example, I would like to insert 1000 rows into the cache in one
> operation.
>
>
>
> Here, I provide my code to everyone to reproduce my situation.
>
> public class IgniteCreateServer {
>
>   public class Person {
>     @QuerySqlField
>     private String firstName;
>     @QuerySqlField
>     private String lastName;
>
>     public Person(String firstName, String lastName) {
>       this.firstName = firstName;
>       this.lastName = lastName;
>     }
>   }
>
>   public static void main(String[] args) {
>     cacheConf.setName("igniteCache");
>     cacheConf.setIndexedTypes(String.class, String.class);
>     cacheConf.setCacheMode(CacheMode.REPLICATED);
>     cacheConf.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>     cfg.setCacheConfiguration(cacheConf);
>     Ignite igniteNode = Ignition.getOrStart(cfg);
>     IgniteCache cacheKeyvalue = igniteNode.getOrCreateCache(cacheConf);
>
>     long starttime, endtime;
>     starttime = System.currentTimeMillis();
>     int datasize = 10;
>     for (int i = 0; i < datasize; i++) {
>       cacheKeyvalue.put("key " + Integer.toString(i), Integer.toString(i));
>     }
>     endtime = System.currentTimeMillis();
>     System.out.println("write " + datasize + " pairkeyvalue data: spend "
>         + (endtime - starttime) + " milliseconds");
>
>     // ===================================================================
>
>     cacheCfg.setName("personCache");
>     cacheCfg.setIndexedTypes(String.class, Person.class);
>     cacheCfg.setCacheMode(CacheMode.REPLICATED);
>     cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>     IgniteCache cacheKeyTable = igniteNode.getOrCreateCache(cacheCfg);
>
>     long starttime1, endtime1;
>     starttime1 = System.currentTimeMillis();
>     for (int i = 0; i < datasize; i++) {
>       cacheKeyTable.query(new SqlFieldsQuery(
>           "INSERT INTO Person(_key, firstName, lastName) VALUES(?, ?, ?)")
>           .setArgs(i, "key " + Integer.toString(i), Integer.toString(i)));
>     }
>     endtime1 = System.currentTimeMillis();
>     System.out.println("write " + datasize + " table data: spend "
>         + (endtime1 - starttime1) + " milliseconds");
>   }
>
>
>
> My code's output is:
>
> “write 10 pairkeyvalue data: spend 4734 milliseconds
>
> write 10 table data: spend 2846 milliseconds”
>
>
>
> From the above result, I feel that inserting data into the cache via SQL
> is faster than using cache.put().
>
> I am not sure whether this is correct.
>
> In addition, it is important for me to insert data into the cache via SQL,
> so I would like to insert multiple rows at once to speed it up.
>
> If any further information is needed, I will gladly provide it as soon as
> possible.
>
>
>
> Thanks
>
>
>
> Rick
>
>
>
>
>
>
>
>
>
>
>
>
> --
> 本信件可能包含工研院機密資訊,非指定之收件者,請勿使用或揭露本信件內容,並請銷毀此信件。 This email may contain
> confidential information. Please do not use or disclose it in any way and
> delete it if you are not the intended recipient.
>



-- 
Best regards,
Andrey V. Mashenkov


Error while activating the ignite cluster.

2018-03-29 Thread Swetha
I'm trying to activate the cluster while loading data into an Ignite cache. It
hangs forever.
I found the logs below in /Ignite_Home/work/logs/***.log:

Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=d8b1c981, uptime=03:22:00.946]
^-- H/N/C [hosts=2, nodes=2, CPUs=48]
^-- CPU [cur=0.03%, avg=0%, GC=0%]
^-- PageMemory [pages=0]
^-- Heap [used=165MB, free=82.62%, comm=952MB]
^-- Non heap [used=49MB, free=-1%, comm=50MB]
^-- Public thread pool [active=0, idle=2, qSize=0]
^-- System thread pool [active=0, idle=1, qSize=0]
^-- Outbound messages queue [size=0]

The above message is printed repeatedly in the logs. Please help.





Re: Basic terms in Ignite

2018-03-29 Thread begineer
Thanks, Dave, for the very informative reply. I will come back with some more
questions.

Regards,






Re: Running heavy queries on Ignite cluster on backing store directly without impacting the cluster

2018-03-29 Thread Andrey Mashenkov
Hi Naveen,

You can try the 'lazy' flag for the query. It is available since Ignite 2.4,
which was released recently.
See the SqlFieldsQuery javadoc [1] and the JDBC driver docs [2] for details.

[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/SqlFieldsQuery.html#setLazy-boolean-
[2] https://apacheignite-sql.readme.io/docs/jdbc-client-driver
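A minimal sketch of the flag on the Java API; the cache setup and query here are illustrative, not taken from the original question:

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class LazyQuerySketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> ccfg =
                    new CacheConfiguration<Integer, String>("items")
                            .setIndexedTypes(Integer.class, String.class);
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);
            for (int i = 0; i < 5; i++)
                cache.put(i, "name-" + i);

            // setLazy(true) (Ignite 2.4+) makes the server stream result pages
            // instead of materializing the entire result set on the heap.
            SqlFieldsQuery qry = new SqlFieldsQuery(
                    "select _val from String order by _key").setLazy(true);
            try (QueryCursor<List<?>> cursor = cache.query(qry)) {
                for (List<?> row : cursor)
                    System.out.println(row.get(0));
            }
        }
    }
}
```

Closing the cursor promptly matters with lazy queries, since server-side resources are held while the cursor is open.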


On Thu, Mar 29, 2018 at 8:57 AM, Naveen  wrote:

> Hi
>
> I am using ignite 2.3 with native persistence layer as backing store
>
> We do have close to half to 1 billion records in each of the tables.
>
> There are some ad-hoc requirements to query the tables with different WHERE
> conditions. The columns we use in the WHERE clause may not have indexes,
> which can make the query slow to execute, but it should not slow down or
> crash the cluster. We are not using eviction; all our data resides in RAM.
>
> My question is: do we have any means to run the queries directly on the
> persistence store instead of on RAM, so that whatever queries we run will
> not impact the cluster?
>
> I hope you understood my requirement.
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


How to insert multiple rows/data into Cache once

2018-03-29 Thread linrick
Dear all,

I am trying to use the SqlFieldsQuery API to insert data into one cache on Ignite.

I can insert one row into the cache at a time.

However, I don't know how to insert multiple rows into the cache at once.

For example, I would like to insert 1000 rows into the cache in one operation.

Here, I provide my code to everyone to reproduce my situation.
public class IgniteCreateServer {
    public static class Person {
        @QuerySqlField
        private String firstName;
        @QuerySqlField
        private String lastName;

        public Person(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }
    }

    public static void main(String[] args) {
        // These declarations were missing from the original snippet:
        IgniteConfiguration cfg = new IgniteConfiguration();
        CacheConfiguration<String, String> cacheConf = new CacheConfiguration<>();

        cacheConf.setName("igniteCache");
        cacheConf.setIndexedTypes(String.class, String.class);
        cacheConf.setCacheMode(CacheMode.REPLICATED);
        cacheConf.setAtomicityMode(CacheAtomicityMode.ATOMIC);
        cfg.setCacheConfiguration(cacheConf);
        Ignite igniteNode = Ignition.getOrStart(cfg);
        IgniteCache<String, String> cacheKeyvalue = igniteNode.getOrCreateCache(cacheConf);

        long starttime, endtime;
        starttime = System.currentTimeMillis();
        int datasize = 10;
        for (int i = 0; i < datasize; i++) {
            cacheKeyvalue.put("key " + Integer.toString(i), Integer.toString(i));
        }
        endtime = System.currentTimeMillis();
        System.out.println("write " + datasize + " pairkeyvalue data: spend "
                + (endtime - starttime) + " milliseconds");

        // =====================================================================

        CacheConfiguration<String, Person> cacheCfg = new CacheConfiguration<>();
        cacheCfg.setName("personCache");
        cacheCfg.setIndexedTypes(String.class, Person.class);
        cacheCfg.setCacheMode(CacheMode.REPLICATED);
        cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
        IgniteCache<String, Person> cacheKeyTable = igniteNode.getOrCreateCache(cacheCfg);

        long starttime1, endtime1;
        starttime1 = System.currentTimeMillis();
        for (int i = 0; i < datasize; i++) {
            // _key must match the cache key type (String), so pass a String here.
            cacheKeyTable.query(new SqlFieldsQuery(
                    "INSERT INTO Person(_key, firstName, lastName) VALUES(?, ?, ?)")
                    .setArgs(Integer.toString(i), "key " + Integer.toString(i), Integer.toString(i)));
        }
        endtime1 = System.currentTimeMillis();
        System.out.println("write " + datasize + " table data: spend "
                + (endtime1 - starttime1) + " milliseconds");
    }
}

My code's output is:
“write 10 pairkeyvalue data: spend 4734 milliseconds
write 10 table data: spend 2846 milliseconds”

From the above result, I feel that inserting data into the cache via SQL is
faster than using cache.put().

I am not sure whether this is correct.

In addition, it is important for me to insert data into the cache via SQL,
so I would like to insert multiple rows at once to speed it up.
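One option worth noting: Ignite's SQL INSERT accepts several VALUES row lists in a single statement, which batches rows in one call. A hedged sketch; the Person class mirrors the snippet above, and the key values are illustrative:

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class MultiRowInsertSketch {
    public static class Person {
        @QuerySqlField private String firstName;
        @QuerySqlField private String lastName;

        public Person(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<String, Person> ccfg =
                    new CacheConfiguration<String, Person>("personCache")
                            .setIndexedTypes(String.class, Person.class);
            IgniteCache<String, Person> cache = ignite.getOrCreateCache(ccfg);

            // One INSERT carrying two VALUES row lists; arguments are consumed
            // left to right across the rows.
            cache.query(new SqlFieldsQuery(
                    "insert into Person(_key, firstName, lastName) "
                            + "values (?, ?, ?), (?, ?, ?)")
                    .setArgs("p1", "Ann", "Lee", "p2", "Bob", "Kim"));

            List<List<?>> rows = cache.query(
                    new SqlFieldsQuery("select count(*) from Person")).getAll();
            System.out.println(rows.get(0).get(0)); // prints 2
        }
    }
}
```

For very large loads, the DataStreamer or JDBC streaming mode mentioned in the reply above usually scales better than even batched INSERTs.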

If any further information is needed, I will gladly provide it as soon as
possible.

Thanks

Rick







--
本信件可能包含工研院機密資訊,非指定之收件者,請勿使用或揭露本信件內容,並請銷毀此信件。 This email may contain 
confidential information. Please do not use or disclose it in any way and 
delete it if you are not the intended recipient.


Re: how to organize groups in cluster?

2018-03-29 Thread Andrey Mashenkov
Hi,

1. A NodeFilter can be used [1] to bind a cache to certain data nodes.
2. You can subscribe to cache updates via a ContinuousQuery [2].
But keep in mind that the ContinuousQuery listener is called from
performance-sensitive code, and it is usually a bad idea to perform blocking
operations in it.
3. To keep only the top N entries in a cache, an EvictionPolicy [3] can be
used. Note that EvictionPolicy doesn't support persistence.

Hope this helps.

[1]
https://stackoverflow.com/questions/39612201/apache-ignite-how-to-deploy-cache-to-some-certain-cluster-group
[2] https://apacheignite.readme.io/docs/continuous-queries
[3]
https://apacheignite.readme.io/docs/evictions#section-first-in-first-out-fifo-
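A minimal ContinuousQuery sketch for point 2. The cache name is an illustrative assumption, and the sleep is a crude stand-in for proper synchronization in real code:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class ContinuousQuerySketch {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("items");

            ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

            // The local listener runs on the subscribing node for each matching
            // update. Keep it non-blocking: hand heavy work to another thread.
            qry.setLocalListener(events ->
                    events.forEach(e -> System.out.println("updated: " + e.getKey())));

            try (QueryCursor<?> cursor = cache.query(qry)) {
                cache.put(1, "first"); // triggers the listener
                Thread.sleep(1000);    // crude wait for the async notification
            } // closing the cursor unsubscribes the listener
        }
    }
}
```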

On Thu, Mar 29, 2018 at 2:42 AM, vkulichenko 
wrote:

> Sorry, I'm still failing to understand what you're trying to achieve. What
> is
> the reason to manually maintain a tree structure which is basically an
> index? Why not use Ignite indexes that are provided out of the box?
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov