Re: Segmentation policy configuration

2018-03-25 Thread Hemasundara Rao
Thank you very much Stan

Regards,
Hemasundar

On 23 March 2018 at 19:20, Stanislav Lukyanov wrote:

> Hi,
>
>
>
> There is an answer for this question already, maybe you didn’t receive it -
> http://apache-ignite-users.70518.x6.nabble.com/Segmentation-policy-configuration-tp20392p20628.html.
>
> If you have additional questions, please ask them in the original thread.
>
>
>
> Thanks,
>
> Stan
>
>
>
> From: Hemasundara Rao
> Sent: 23 March 2018 13:50
> To: user@ignite.apache.org
> Subject: Segmentation policy configuration
>
>
>
> Hi,
>
>  Actually I am looking for help on how to specify the following details in the XML
> configuration file:
>
>
>
> 1) SegmentationPolicy other than the default value
>
> 2) SegmentCheckFrequency
>
> 3) SegmentationResolveAttempts
>
> and other required settings in the configuration file.
>
>
>
> I want to set the "SegmentationPolicy" to "RESTART_JVM" or "NOOP".
>
>
>
>
>
> Thanks and Regards,
>
> Hemasundar.
>
>
>
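For reference, the three settings asked about above are plain IgniteConfiguration properties, so in the Spring XML file they become same-named <property> entries on the IgniteConfiguration bean. Below is a minimal Java sketch of the equivalent programmatic configuration; the values are only examples, and, as far as I know, the check frequency and resolve attempts only take effect once SegmentationResolver implementations are plugged in, which the open-source build does not ship:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.plugin.segmentation.SegmentationPolicy;

public class SegmentationConfigSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // What the node does when it finds itself segmented from the rest of the cluster.
        // RESTART_JVM relies on the node being started via ignite.sh/ignite.bat.
        cfg.setSegmentationPolicy(SegmentationPolicy.RESTART_JVM); // or NOOP / STOP

        // How often segment checks run (in milliseconds) and how many resolver
        // attempts are made before the policy above is applied.
        cfg.setSegmentCheckFrequency(10000);
        cfg.setSegmentationResolveAttempts(2);

        Ignition.start(cfg);
    }
}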


Re: Liquibase with Ignite?

2018-03-25 Thread 王 刚
Maybe because there is no callback function for
org.apache.ignite.IgniteJdbcThinDriver.

I use a class that mocks Liquibase; hope it can help you:



/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
package com.samples.vehicles.ignite.init;

import com.samples.vehicles.common.DomainUtils;

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.jdbc.core.JdbcTemplate;

/**
 * @author lenovo
 */
@Configuration
public class ILikeLiquibase {

    private final Logger log = LoggerFactory.getLogger(this.getClass());

    @Autowired
    private JdbcTemplate jdbcTemplate;

    private final String lock = UUID.randomUUID().toString();
    private final List<String> sqlList = new ArrayList<>();
    private final Map<String, String> sqlMap = new HashMap<>();

    // Add new scripts here. Each script has a unique, sortable id and runs at most once.
    private void loadScripts() {
        addSql("20180314WG03", "DROP TABLE IF EXISTS alarm_record_file;");
        addSql("20180314WG04", "CREATE TABLE alarm_record_file(fileossid VARCHAR PRIMARY KEY,alarm_id VARCHAR,"
                + "file_type Long, channel Long, create_time TIMESTAMP);");

        addSql("20180314WG05", "DROP TABLE IF EXISTS gps;");
        addSql("20180314WG06", "CREATE TABLE gps(id Long PRIMARY KEY,vehicle_id Long, time TIMESTAMP,driver_id Long,"
                + "status Long,lng Long,lat Long,height Long,speed Long,direction Long,"
                + "wirelessstrength Long,gnns Long,mileage Long);");

        addSql("20180323DXM01", "DROP TABLE IF EXISTS alarm_record_file;");
        addSql("20180323DXM02", "CREATE TABLE alarm_record_file(fileossid VARCHAR,alarm_id VARCHAR,"
                + "file_type Long, channel Long, create_time TIMESTAMP, PRIMARY KEY(fileossid,alarm_id));");
    }

    // Drop scripts that were already applied; fail if a previously applied script was changed.
    private void mergeScripts() {
        jdbcTemplate.query("select id, sql from Public.sqlScriptLog",
                (rs, row) -> {
                    String id = rs.getString("id");
                    String sql = rs.getString("sql");
                    if (sqlMap.containsKey(id) && !sqlMap.get(id).equals(sql)) {
                        throw new RuntimeException("Sql script changed id:" + id);
                    }
                    sqlList.remove(id);
                    return null;
                });
    }

    private void addSql(String id, String sql) {
        if (sqlList.contains(id)) {
            throw new RuntimeException("duplicate sql script");
        }
        sqlList.add(id);
        sqlMap.put(id, sql);
    }

    // Bookkeeping tables: one for the applied-script log, one for the startup lock.
    private void createTable() {
        jdbcTemplate.execute("CREATE TABLE IF NOT EXISTS sqlScriptLog("
                + " id VARCHAR PRIMARY KEY, sql VARCHAR)"
                + " WITH \"backups=2\";");
        jdbcTemplate.execute("CREATE TABLE IF NOT EXISTS sqlScriptLock("
                + " id VARCHAR PRIMARY KEY, lock LONG)"
                + " WITH \"backups=2\";");
    }

    // Try to take the lock; if another node holds it, wait briefly and retry.
    private void checkLock() {
        Integer nLock = jdbcTemplate.queryForObject("select count(*) from sqlScriptLock",
                Integer.class);
        if (nLock == 0) {
            addLock();
        }
        Integer myLock = jdbcTemplate.queryForObject("select count(*) from sqlScriptLock "
                + "where id = ? ", new Object[]{lock}, Integer.class);
        if (myLock == 0) {
            log.info("SqlScriptManagement waiting for lock ...");
            try {
                Thread.sleep(1L);
            } catch (Exception e) {
                // ignore and retry
            }
            checkLock();
        }
    }

    private void addLock() {
        log.info("SqlScriptManagement get lock ...");
        jdbcTemplate.update("insert into sqlScriptLock (id, lock) values (?, ?)",
                new Object[]{lock, 1});
    }

    private void removeLock() {
        log.info("SqlScriptManagement remove lock ...");
        jdbcTemplate.update("delete from sqlScriptLock "
                + "where id = ?", new Object[]{lock});
    }

    // Run the remaining scripts in id order and record each one in the log table.
    private void runSql() {
        Collections.sort(sqlList);
        log.info("SqlScriptManagement run scripts:" + DomainUtils.bean2json(sqlList));
        sqlList.forEach(id -> {
            log.info("Running sql script id:" + id);
            jdbcTemplate.execute(sqlMap.get(id));
            jdbcTemplate.update("insert into sqlScriptLog(id, sql) VALUES (?,?)",
                    ps -> {
                        ps.setString(1, id);
                        // Reconstructed from here down: the archived message is cut off at this point,
                        // so the second bind and the closing braces are an assumption.
                        ps.setString(2, sqlMap.get(id));
                    });
        });
    }
}


Re: Graph Query Integration

2018-03-25 Thread vkulichenko
There is no native graph support in Ignite. However, for certain use cases it
might be possible to store graph data in a set of caches, and then use
compute APIs [1] to do the processing.

[1] https://apacheignite.readme.io/docs/compute-grid
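To make that concrete, here is a rough sketch of the cache-plus-compute approach. It is not an official graph API; the cache name "graph", the adjacency-list layout and the degree computation are just made-up illustrations:

import java.util.Arrays;
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class GraphOnIgniteSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Vertices stored as adjacency lists: vertex id -> neighbour ids.
            IgniteCache<Long, List<Long>> graph = ignite.getOrCreateCache("graph");
            graph.put(1L, Arrays.asList(2L, 3L));
            graph.put(2L, Arrays.asList(3L));
            graph.put(3L, Arrays.asList());

            // Push the work to the node that owns vertex 1, so the adjacency
            // list is read locally instead of being fetched over the network.
            int degree = ignite.compute().affinityCall("graph", 1L, () -> {
                IgniteCache<Long, List<Long>> g = Ignition.localIgnite().cache("graph");
                return g.get(1L).size();
            });

            System.out.println("Degree of vertex 1: " + degree);
        }
    }
}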

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Liquibase with Ignite?

2018-03-25 Thread vkulichenko
Never heard of anyone doing this, but I don't see why it wouldn't work. Did
you have any issues while working with this combination?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to achieve writethrough with ignite data streamer

2018-03-25 Thread vkulichenko
Hm... Not sure what happened exactly in your case, but a cache store is never
deployed via peer class loading. The class must be explicitly deployed on every
node prior to startup.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: how to organize groups in cluster?

2018-03-25 Thread vkulichenko
Can you clarify what you mean by "real-time query" in this case? Why not just
start node C as a client and run a query from it?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: JTA transactions in Ignite

2018-03-25 Thread Prasad Bhalerao
Hi Val/Denis

Can you please answer this?

Thanks,
prasad

On Sat, Mar 24, 2018, 1:58 PM Prasad Bhalerao wrote:

> Hi,
> I can't use the write-through approach. I will be doing batch CRUD operations
> on Oracle tables to avoid frequent calls to the DB.
> Now, to keep the Ignite cache and the Oracle DB consistent, I have to use a JTA
> transaction.
> Note: I have also enabled Ignite persistence.
>
> Can you please provide an example of configuring JTA with Ignite?
> I have already gone through the Ignite documentation, but it's not clear to
> me how to use it.
> I have configured the transaction manager in the Ignite configuration as shown
> by Akash in his mail.
> But how do I enlist the Oracle data source in the transaction?
> As per the Ignite doc, it uses a user transaction. But how do I get the user
> transaction in the first place?
> How do I attach an XA data source to this user transaction?
> Please advise, I am stuck at this point.
>
>
> On Sat, Mar 24, 2018, 12:12 PM Denis Magda  wrote:
>
>> Do you use Ignite as a distributed cache in front of your database? If so,
>> then you can hook Ignite up with the database using the CacheStore interface.
>> After that, Ignite will write changes through automatically, including
>> transactions.
>>
>> Denis
>>
>> On Friday, March 23, 2018, Prasad Bhalerao wrote:
>>
>>> Can someone please answer this?
>>> I am also facing a similar problem.
>>>
>>>
>>>
>>>
>>> On Fri, Mar 23, 2018, 5:12 PM akash shinde wrote:
>>>
 Hello,
  My requirement is to perform an Ignite cache update operation and a
 database write operation in a single transaction.
 Right now I am trying to use JTA transaction management. I have
 created the IgniteConfiguration below using transactionConfiguration, but I am
 not sure how to attach an XAResource to this transaction manager.
 I will need two XAResources here: 1) the datasource (JDBC data source),
 2) the Ignite cache.

 Please suggest how I can achieve this JTA transaction using two
 XAResources.

 Please also suggest, after all this configuration, how to
 get the JtaTransactionManager instance to initiate the transaction. Currently
 I am using the Narayana JTA implementation.


 private IgniteConfiguration getIgniteConfiguration(IgniteSpringBean ignite) {

   IgniteConfiguration cfg = new IgniteConfiguration();
   cfg.setIgniteInstanceName("springDataNode");
   cfg.setPeerClassLoadingEnabled(false);
   cfg.setRebalanceThreadPoolSize(4);

   cfg.setTransactionConfiguration(transactionConfiguration());

   cfg.setCacheConfiguration(getIPContainerCacheConfiguration(),
       getIPContainerIPV4CacheConfiguration(),
       getUserPermissionCacheConfiguration(),
       getUserAccountCacheConfiguration());
   return cfg;
 }

 @Bean
 public TransactionConfiguration transactionConfiguration() {
   TransactionConfiguration configuration = new TransactionConfiguration();
   Factory factory = new Factory() {
     @Override
     public Object create() {
       return jtaTransactionManager();
     }
   };
   configuration.setTxManagerFactory(factory);
   configuration.setUseJtaSynchronization(true);
   return configuration;
 }

 @Bean
 public PlatformTransactionManager jtaTransactionManager() {
   JtaTransactionManager tm = new JtaTransactionManager();
   tm.setTransactionManager(transactionManager());
   return tm;
 }

 @Bean
 public TransactionManager transactionManager() {
   return new TransactionManagerImple();
 }


 Thanks,
 Akash

>>>
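For what it's worth, below is a rough, untested sketch of how the pieces above might be driven. The UserTransaction is obtained from Narayana itself, the Oracle side must be an XA-aware DataSource managed by the JTA provider (enlisting it is the provider's job, not Ignite's), the Ignite cache has to be TRANSACTIONAL, and names such as xaDataSource and the "accounts" cache are made up for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class JtaSketch {
    // xaDataSource must be an XA-capable DataSource managed by the JTA provider
    // (e.g. Narayana's transactional driver); a plain Oracle DataSource will not enlist.
    void transfer(Ignite ignite, DataSource xaDataSource) throws Exception {
        // Narayana's classic API for obtaining the UserTransaction outside a container.
        UserTransaction utx = com.arjuna.ats.jta.UserTransaction.userTransaction();

        utx.begin();
        try {
            // 1) JDBC work against Oracle, enlisted by the JTA provider.
            try (Connection conn = xaDataSource.getConnection();
                 PreparedStatement ps = conn.prepareStatement(
                         "update accounts set balance = balance - ? where id = ?")) {
                ps.setLong(1, 100L);
                ps.setLong(2, 1L);
                ps.executeUpdate();
            }

            // 2) Ignite cache work. With txManagerFactory / useJtaSynchronization
            //    configured as above, Ignite joins the surrounding JTA transaction.
            IgniteCache<Long, Long> accounts = ignite.cache("accounts");
            accounts.put(1L, accounts.get(1L) - 100L);

            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}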


Re: Any references - Syncing Ignite and Oracle DB with Oracle GoldenGate - updates from DB to ignite

2018-03-25 Thread vkulichenko
Naveen,

Ignite does not provide such an integration out of the box; however, there is a
commercial offering from GridGain for that:
https://docs.gridgain.com/docs/goldengate-replication

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ContinuousQuery - SqlFieldsQuery as InitialQuery

2018-03-25 Thread vkulichenko
This makes sense, and that's actually exactly how initialQuery works. It's
executed after the continuous query listener is deployed, so nothing is missed,
but duplicates are possible.
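To make that concrete, a small sketch of the pattern, assuming an existing IgniteCache<Integer, String> called cache; a ScanQuery stands in for the initial query here, since setInitialQuery expects a query that returns cache entries rather than SQL field rows:

// The listener is registered first, then the initial-query cursor replays the data
// that already existed, so nothing is missed but an entry may show up twice.
ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
qry.setInitialQuery(new ScanQuery<>());          // replays existing entries
qry.setLocalListener(events ->
    events.forEach(e -> System.out.println("update: " + e.getKey() + "=" + e.getValue())));

QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry);
// The cursor delivers the pre-existing data, after the listener is already in place.
for (Cache.Entry<Integer, String> e : cur)
    System.out.println("initial: " + e.getKey() + "=" + e.getValue());

// Later, when updates are no longer needed, close the cursor to unregister the listener.
cur.close();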

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cancelling a Continuous Query

2018-03-25 Thread au.fp2018
Thanks Slava!

I had dropped the initial query from my continuous query, so I didn't look
in that direction. You are right, it is clearly stated in the docs.

For future reference: this works even if the initial query is not
specified. According to the docs:

"Note that this works even if you didn't provide initial query. Cursor
will be empty in this case, but it will still unregister listeners when
QueryCursor.close() is called."

A classic case of RTFM :)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: eviction performance

2018-03-25 Thread Stanislav Lukyanov
Hi Scott,

With eagerTtl=false, each time you access an entry its TTL is automatically
checked - if it has expired, the entry is removed. It means that instead
of having a separate thread wake up and remove all of the expired
entries at the same time (taking a chunk of CPU and IO for a noticeable
period of time), you'll be removing them over time. On average, individual
accesses may become a bit more expensive (because some of them will trigger
a removal), but that allows you to avoid latency spikes.
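For reference, a sketch of what that configuration might look like; the cache name, the 10-minute TTL and the running ignite instance are placeholders:

CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");
// No background sweep of expired entries; each read/write checks (and drops)
// only the entry it touches once its TTL has passed.
ccfg.setEagerTtl(false);
// Entries expire 10 minutes after creation (CreatedExpiryPolicy/Duration from javax.cache.expiry).
ccfg.setExpiryPolicyFactory(
    CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 10)));
ignite.getOrCreateCache(ccfg);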

About the truncate - I think there is no API to do that based on partitions.
I guess you could try creating a new cache for each batch and destroying it
after you don't need it anymore, but it's hard to say whether it will be
more efficient.
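If you do try the cache-per-batch route, the mechanics would be roughly as follows; ignite, batchId and batchEntries are assumed to exist:

// One cache per batch; dropping the whole batch is then a single destroy call
// instead of per-entry removals.
IgniteCache<Long, byte[]> batchCache = ignite.getOrCreateCache("batch-" + batchId);
batchCache.putAll(batchEntries);
// ... when the batch is no longer needed:
ignite.destroyCache("batch-" + batchId);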

Stan



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cancelling a Continuous Query

2018-03-25 Thread Вячеслав Коптилин
Hello Andre,

As mentioned in the javadoc [1], you have to call the QueryCursor#close()
method [2] in order to stop receiving updates.

// Create a new continuous query (Integer/String stand in for the cache's key/value types).
ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
// Execute the query and get a cursor that iterates through the initial data.
QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry);
...
// Stop receiving updates.
cur.close();

[1] 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/ContinuousQuery.html

[2] 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/QueryCursor.html


Thanks!


2018-03-25 20:47 GMT+03:00 au.fp2018 :

> Hello All,
>
> What is the correct way to cancel a Continuous Query, and make sure all the
> resources used by the query are freed?
>
> I looked in the documentation and the examples, I didn't see any explicit
> reference to cancelling a running continuous query.
>
> Thanks,
> Andre
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Cancelling a Continuous Query

2018-03-25 Thread au.fp2018
Hello All,

What is the correct way to cancel a Continuous Query, and make sure all the
resources used by the query are freed?

I looked in the documentation and the examples, I didn't see any explicit
reference to cancelling a running continuous query.

Thanks,
Andre



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Basic terms in Ignite

2018-03-25 Thread David Harvey
1. You can call Ignition.start() twice in the same JVM and get two
Ignite nodes (there are some restrictions around joining the same cluster that I don't
fully understand). An Ignite node may be a client or a server. I believe you
can have two client nodes in the same JVM connected to two different
clusters. (For peer class loading, the terms master and workers are
introduced, just to be confusing, where in typical use cases the master is
the client and the workers are the servers.) See the sketch after this list.

2. A cache is broken, by default, into 1024 partitions, and the primary copy of
a partition exists on exactly one node, as does each backup copy of the
partition. A partitioned cache can be created on any "cluster group",
which is not, as you might think, a "group of clusters" but rather a
subset of the nodes in a cluster. Of course, it would not make sense to
have fewer nodes than the number of backups + 1.

3. In 2.4, if you are using Ignite Persistence, the cluster will remain
inactive on startup until the set of nodes you have defined as necessary
have started, and then it will activate automatically. Prior versions
needed to be activated manually. I'm not aware of any such controls without Ignite
Persistence for the case you describe.
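To make item 1 concrete, a small sketch of two nodes, one server and one client, living in the same JVM; the instance names and the client/server split are just an illustration:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class TwoNodesOneJvm {
    public static void main(String[] args) {
        // A server node and a client node in the same JVM; the instance names
        // must differ, otherwise the second start() call fails.
        Ignite server = Ignition.start(new IgniteConfiguration()
            .setIgniteInstanceName("server-node"));

        Ignite client = Ignition.start(new IgniteConfiguration()
            .setIgniteInstanceName("client-node")
            .setClientMode(true));

        System.out.println("Cluster size seen by the client: "
            + client.cluster().nodes().size());

        client.close();
        server.close();
    }
}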

On Sun, Mar 25, 2018 at 7:53 AM, begineer  wrote:

> Hi,
> I am confused about few terms used often in ignite docs. Could someone
> please clarify them.
>
> 1. What is difference between a Server, JVM and ignite node. Can one have
> multiple instances of other 2, like a jvm has two nodes or a node has 2
> jvms. Please explain in detail, I am very confused about them
>
> 2. In partitioned mode, can a partition span across multiple nodes. Again
> relating to first point. If I have 10 servers of 10GB each so total memory
> is 100 GB. Can a partition be created on 2-3 servers of 30 GB size. This is
> hypothetical example though.
>
> 3. Lets say I have 90 GB data to be loaded on 10 servers of 10GB each. Now
> if first ignite node joins the gird and starts loading data. There might be
> a chance that it loads so much data that it runs out of memory before new
> ones join the grid. How do we handle this scenario. Is there a way to wait
> for some condition before kicking of cache loading.
>
> Thanks
> Surinder
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



Basic terms in Ignite

2018-03-25 Thread begineer
Hi,
I am confused about a few terms used often in the Ignite docs. Could someone
please clarify them?

1. What is the difference between a server, a JVM and an Ignite node? Can one have
multiple instances of the other two, e.g. can a JVM have two nodes or a node have two
JVMs? Please explain in detail, I am very confused about them.

2. In partitioned mode, can a partition span multiple nodes? Again
relating to the first point: if I have 10 servers of 10 GB each, the total memory
is 100 GB. Can a partition of 30 GB be created across 2-3 servers? This is a
hypothetical example though.

3. Let's say I have 90 GB of data to be loaded on 10 servers of 10 GB each. Now
the first Ignite node joins the grid and starts loading data. There might be
a chance that it loads so much data that it runs out of memory before new
ones join the grid. How do we handle this scenario? Is there a way to wait
for some condition before kicking off cache loading?

Thanks
Surinder




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/