Re: sql insert with composite primary key

2017-11-29 Thread Fff
Hi,
Thanks for your help!
We define it using QueryEntity by providing the 'keyFields' property:




<property name="keyFields">
    <list>
        <value>triggerId</value>
        <value>biEventId</value>
    </list>
</property>



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re: Re: Re: JdbcQueryExecuteResult cast error

2017-11-29 Thread Denis Mekhanikov
Lucky,

If it's possible that this code is executed concurrently, then you need to
add additional synchronization. Otherwise, correct operation of the JDBC
driver is not guaranteed.
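To illustrate the kind of serialization Denis means, here is a minimal, self-contained sketch. `IgniteConn` is a hypothetical stand-in for a shared JDBC connection, not a real Ignite class; the point is only that every use of the shared object goes through one lock.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Stand-in for a shared JDBC connection that is not safe for concurrent use.
class IgniteConn {
    private boolean inUse = false;
    int conflicts = 0; // how many times two threads overlapped

    void execute() throws InterruptedException {
        synchronized (this) { if (inUse) conflicts++; inUse = true; }
        Thread.sleep(1); // pretend to talk to the server
        synchronized (this) { inUse = false; }
    }
}

public class SharedConnDemo {
    // Serialize every use of the single shared connection with one lock.
    static int runGuarded(int threads, int callsPerThread) throws Exception {
        IgniteConn conn = new IgniteConn();
        Object lock = new Object();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                for (int j = 0; j < callsPerThread; j++) {
                    synchronized (lock) { // the additional synchronization
                        try { conn.execute(); } catch (InterruptedException ignored) { }
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return conn.conflicts;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("conflicts=" + runGuarded(4, 50)); // prints conflicts=0
    }
}
```

Without the `synchronized (lock)` block the conflict counter can become non-zero, which corresponds to the undefined behavior of sharing one JDBC connection across threads.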

Denis

On Thu, Nov 30, 2017, 09:10 Lucky  wrote:

> Denis,
> I used java code,just like this:
> Connection conn = getConn();
> PreparedStatement stmt = conn.prepareStatement(sql);
> ResultSet rs =  stmt.executeQuery();
> public Connection getConn() throws SQLException{
> if (conn==null || conn.isClosed()){
> try {
> Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
> conn = DriverManager.getConnection("jdbc:ignite:thin://
> 192.168.1.2?distributedJoins=true");
> } catch (Exception e) {
> logger.error("IgniteSererFacade getConn error === ",e);
> }
> }
> return conn;
> }
>
> Yes, I used only one connection.
> The query executes many times in one operation, sometimes up to 30 times.
> If I create a new JDBC connection every time, the performance is not
> acceptable.
> This problem does not reproduce if I use DBeaver.
>
> Also, as I said before, if I exit the system and log in again, it works
> normally.
>
> Thank you very much.
>
>
>
> At 2017-11-29 18:37:33, "Denis Mekhanikov"  wrote:
>
> Lucky,
>
> What tool are you using to access Ignite over JDBC? Does this problem
> reproduce, if you use DBeaver, for example?
> As Taras said, looks like the same JDBC connection is used concurrently.
>
> Denis
>
>>
>>>


Re: Local node and remote node have different version numbers

2017-11-29 Thread Rajarshi Pain
Hi Evgenii,

We both tried to specify the port, but we are still getting the same error
when we both run it at the same time on our systems.

On Wed 29 Nov, 2017, 21:04 Evgenii Zhuravlev, 
wrote:

> Hi,
>
> In this case, you should specify ports for addresses, for example:
>
> 10.23.153.56:47500
>
>
> Evgenii
>
>
> 2017-11-29 18:20 GMT+03:00 Rajarshi Pain :
>
>> Hi,
>>
>> I was working on Ignite 2.3 and my colleague is working on 1.8. We are
>> running different applications on our local machines using the example
>> ignite xml, but it is failing with the exception below:
>>
>> We are using our local system IPs under TcpDiscoverySpi, but the nodes
>> are still detecting each other. How can I stop them from discovering each
>> other?
>>
>> <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>>   <property name="addresses">
>>     <list>
>>       <value>10.23.153.56</value>
>>     </list>
>>   </property>
>> </bean>
>>
>>
>> Exception:-
>>
>>  Local node and remote node have different version numbers (node will
>> not join, Ignite does not support rolling updates, so versions must be
>> exactly the same) [locBuildVer=2.3.0, rmtBuildVer=1.8.0, locNodeAddrs=[
>> 01HW1083820.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.152, /127.0.0.1],
>> rmtNodeAddrs=[01Hw1083774.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.149,
>> /127.0.0.1], locNodeId=c3e4f1ff-f6d7-4c26-9baf-e04f99a8eaac,
>> rmtNodeId=75e5e5bc-5de2-484a-9a91-a3a9f2ec48d3]
>>
>> --
>> Regards,
>> Rajarshi Pain
>>
>
> --

Thanks
Rajarshi


Re: Re: Re: Re: JdbcQueryExecuteResult cast error

2017-11-29 Thread Lucky
Denis,
I used java code,just like this:
Connection conn = getConn();
PreparedStatement stmt = conn.prepareStatement(sql);
ResultSet rs = stmt.executeQuery();

public Connection getConn() throws SQLException {
    if (conn == null || conn.isClosed()) {
        try {
            Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
            conn = DriverManager.getConnection(
                "jdbc:ignite:thin://192.168.1.2?distributedJoins=true");
        } catch (Exception e) {
            logger.error("IgniteSererFacade getConn error === ", e);
        }
    }
    return conn;
}


Yes, I used only one connection.
The query executes many times in one operation, sometimes up to 30 times.
If I create a new JDBC connection every time, the performance is not
acceptable.
This problem does not reproduce if I use DBeaver.


Also, as I said before, if I exit the system and log in again, it works
normally.


Thank you very much.




At 2017-11-29 18:37:33, "Denis Mekhanikov"  wrote:

Lucky,


What tool are you using to access Ignite over JDBC? Does this problem 
reproduce, if you use DBeaver, for example?
As Taras said, looks like the same JDBC connection is used concurrently.


Denis



Re: Redis problem with values > 8kb

2017-11-29 Thread Wolfram Huesken

Hello Alexey,

sorry for the delayed answer, here's the PHP example which demonstrates 
the problem: 
https://gist.github.com/wolframite/ee23b08bdd26bc1284cacc5259b850f8


Cheers
Wolfram

On 28/11/2017 03:15, Alexey Popov wrote:

Wolfram,

The buffer size is hardcoded now, but it could be made configurable if it
is a real issue.

Can you share a simple PHP sample?
I think the Java unit tests may miss some important details you have with
PHP. I wonder if I can test and find the missing piece of the puzzle :)

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Error running ignite in YARN

2017-11-29 Thread Juan Rodríguez Hortalá
Hi,

Any thoughts on this? Do you need any further details about the setup?

Thanks,

Juan

On Tue, Nov 21, 2017 at 8:59 AM, Juan Rodríguez Hortalá <
juan.rodriguez.hort...@gmail.com> wrote:

> Hi,
>
> Anyone might help a newbie ramping up with Ignite on YARN?
>
> Thanks,
>
> Juan
>
>
> On Sun, Nov 19, 2017 at 7:34 PM, Juan Rodríguez Hortalá <
> juan.rodriguez.hort...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm trying to run ignite on AWS EMR as a YARN application, using
>> zookeeper for node discovery. I have compiled ignite with
>>
>> ```
>> mvn clean package -DskipTests -Dignite.edition=hadoop
>> -Dhadoop.version=2.7.3
>> ```
>>
>> I'm using ignite_yarn.properties
>>
>> ```
>> # The number of nodes in the cluster.
>> IGNITE_NODE_COUNT=3
>>
>> # The number of CPU Cores for each Apache Ignite node.
>> IGNITE_RUN_CPU_PER_NODE=1
>>
>> # The number of Megabytes of RAM for each Apache Ignite node.
>> IGNITE_MEMORY_PER_NODE=500
>>
>> IGNITE_PATH=hdfs:///user/hadoop/ignite/apache-ignite-2.3.0-
>> hadoop-2.7.3.zip
>>
>> IGNITE_XML_CONFIG=hdfs:///user/hadoop/ignite/ignite_conf.xml
>>
>> # Local path
>> IGNITE_WORK_DIR=/mnt
>>
>> # Local path
>> IGNITE_RELEASES_DIR=/mnt
>>
>> IGNITE_WORKING_DIR=/mnt
>> ```
>>
>> and ignite_conf.xml as
>>
>> ```
>> <beans xmlns="http://www.springframework.org/schema/beans"
>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>>        http://www.springframework.org/schema/beans/spring-beans.xsd">
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>
>> 
>> 
>> 
>>   
>>   
>>
>>   > value="ip-10-0-0-173.ec2.internal:2181"/>
>>   
>>   
>> 
>> 
>> 
>>   
>> 
>> > value="config/ignite-log4j2.xml"/>
>>   
>> 
>> 
>> 
>> ```
>>
>> Then I launch the yarn job as
>>
>>
>> ```
>> IGNITE_YARN_JAR=/mnt/ignite/apache-ignite-2.3.0-src/modules/
>> yarn/target/ignite-yarn-2.3.0.jar
>>  yarn jar ${IGNITE_YARN_JAR} ${IGNITE_YARN_JAR}
>> /mnt/ignite/ignite_yarn.properties
>> ```
>>
>> The app launches and the application master is outputting logs, but
>> containers only last some seconds running, and the application is
>> constantly asking for more containers. For example, in the application
>> master log
>>
>> ```
>>
>> Nov 20, 2017 3:08:30 AM org.apache.ignite.yarn.ApplicationMaster 
>> onContainersAllocated
>> INFO: Launching container: container_1511142795395_0005_01_017079.
>> 17/11/20 03:08:30 INFO impl.ContainerManagementProtocolProxy: Opening proxy 
>> : ip-10-0-0-230.ec2.internal:8041
>> Nov 20, 2017 3:08:30 AM org.apache.ignite.yarn.ApplicationMaster 
>> onContainersAllocated
>> INFO: Launching container: container_1511142795395_0005_01_017080.
>> 17/11/20 03:08:30 INFO impl.ContainerManagementProtocolProxy: Opening proxy 
>> : ip-10-0-0-78.ec2.internal:8041
>> Nov 20, 2017 3:08:30 AM org.apache.ignite.yarn.ApplicationMaster 
>> onContainersAllocated
>> INFO: Launching container: container_1511142795395_0005_01_017081.
>> 17/11/20 03:08:30 INFO impl.ContainerManagementProtocolProxy: Opening proxy 
>> : ip-10-0-0-193.ec2.internal:8041
>> Nov 20, 2017 3:08:31 AM org.apache.ignite.yarn.ApplicationMaster 
>> onContainersCompleted
>> INFO: Container completed. Container id: 
>> container_1511142795395_0005_01_017080. State: COMPLETE.
>> Nov 20, 2017 3:08:31 AM org.apache.ignite.yarn.ApplicationMaster 
>> onContainersCompleted
>> INFO: Container completed. Container id: 
>> container_1511142795395_0005_01_017081. State: COMPLETE.
>> Nov 20, 2017 3:08:31 AM org.apache.ignite.yarn.ApplicationMaster 
>> onContainersCompleted
>> INFO: Container completed. Container id: 
>> container_1511142795395_0005_01_017079. State: COMPLETE.
>> Nov 20, 2017 3:08:31 AM org.apache.ignite.yarn.ApplicationMaster 
>> onContainersAllocated
>>
>> ```
>>
>> In the logs for a node manager I see containers seem to fail when they
>> are launched, because the corresponding bash command is not well formed
>>
>> ```
>> 2017-11-20 03:08:47,810 INFO org.apache.hadoop.yarn.server.
>> nodemanager.containermanager.container.ContainerImpl (AsyncDispatcher
>> event handler): Container container_1511142795395_0005_01_017281
>> transitioned from LOCALIZED to RUNNING
>> 2017-11-20 03:08:47,811 INFO org.apache.hadoop.yarn.server.
>> nodemanager.DefaultContainerExecutor (ContainersLauncher #4):
>> launchContainer: [bash, /mnt/yarn/usercache/hadoop/app
>> cache/application_1511142795395_0005/container_1511142795395
>> _0005_01_017281/default_container_executor.sh]
>> 2017-11-20 03:08:47,819 WARN org.apache.hadoop.yarn.server.
>> nodemanager.DefaultContainerExecutor (ContainersLauncher #4): Exit code
>> 

Re: get api takes more time when adding more nodes to cluster

2017-11-29 Thread Biren Shah
I am not sure how a client-server deployment would look in our case. Our
application starts Ignite in server mode, initializes caches, and later
accesses those caches. To switch to a client-server deployment, the
application would start a node in server mode and initialize caches, and
then start another node in client mode in the same JVM? That does not look
right. We don't want to separate these functionalities into different
applications/JVMs.

I can make the read-heavy caches replicated. That will improve get
operations on those caches. I'll try that out.

-Biren

On 11/29/17, 5:07 PM, "vkulichenko"  wrote:

Biren,

If half of your operations became multiple times slower, why would you
expect throughput to increase? In case you don't use collocation, I would
recommend you to switch to client-server deployment. Initial performance
with two nodes and fully replicated cache can be slower than now, but you
will see how it scales out.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: Ignite behaving strange with Spark SharedRDD in AWS EMR Yarn Client Mode

2017-11-29 Thread vkulichenko
I don't think raksja had an issue with only one record in the RDD.
IgniteRDD#count redirects directly to IgniteCache#size, so if it returns 1,
you indeed have only one entry in the cache for some reason.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Local node failure starts cluster-wide procedure

2017-11-29 Thread vkulichenko
Amit,

Does it work if you start without nohup?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: get api takes more time when adding more nodes to cluster

2017-11-29 Thread vkulichenko
Biren,

If half of your operations became multiple times slower, why would you
expect throughput to increase? In case you don't use collocation, I would
recommend you to switch to client-server deployment. Initial performance
with two nodes and fully replicated cache can be slower than now, but you
will see how it scales out.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: SQL Count(*) returns incorrect count

2017-11-29 Thread vkulichenko
Looks like you're running in embedded mode. In this mode server nodes are
started within Spark executors, so when an executor is stopped, some of the
data is lost. Try starting a separate Ignite cluster and creating the
IgniteContext with standalone=true.
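For example, from Java it would look roughly like this (a sketch; it assumes the JavaIgniteContext constructor overload that takes a standalone flag):

```java
JavaIgniteContext<String, Integer> ic =
    new JavaIgniteContext<>(sparkCtx, "ignite-config.xml", true /* standalone */);
```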

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: get api takes more time when adding more nodes to cluster

2017-11-29 Thread Biren Shah
Well, I am not expecting that by doubling the number of nodes I will get 2x
throughput. But it should scale at some linear rate, and it definitely
should not bring the throughput down. We have embedded Ignite in the
application. On start of the application we start Ignite in server mode,
initialize distributed caches, and load data from the persistent store.
Then the application reads from and writes to the caches.

-Biren 

On 11/29/17, 3:52 PM, "vkulichenko"  wrote:

Biren,

That's a wrong expectation because local in-memory read is drastically
faster than a network read. How do you choose a server node to read from?
What is overall use case?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: get api takes more time when adding more nodes to cluster

2017-11-29 Thread vkulichenko
Biren,

That's a wrong expectation because local in-memory read is drastically
faster than a network read. How do you choose a server node to read from?
What is overall use case?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: get api takes more time when adding more nodes to cluster

2017-11-29 Thread Biren Shah
Hi Val,

We are doing reads on the server only. I understand that gets will be slower
in the 4-node case, because some cache entries require a remote get. But it
should not bring down the overall throughput of the application. Just to
give you some context, with 2 nodes the application can process 500K data
points every minute. But when I add 2 more nodes to the cluster, the
throughput drops to 400K data points. The expectation is that with 4 nodes
in total we should be able to process more data points.

-Biren

On 11/29/17, 2:52 PM, "vkulichenko"  wrote:

Hi Biren,

Are you doing reads from a client or directly on server nodes? If the
latter, then I guess you just do not collocate properly. With two nodes and
one backup all data is available on both nodes, so any server side read
would be local. With four nodes some of them would be remote which is
obviously slower.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




IgniteCheckedException: Failed to register query type: QueryTypeDescriptorImpl

2017-11-29 Thread daniels
I'm using Ignite 2.0, and in my JUnit test Ignite throws the exception
below, but my application works properly; the error occurs only during the
JUnit test. The configurations are the same in both.
I debugged and noticed that almost the same query works properly in my
application but fails in the JUnit test.

Here is the query: CREATE TABLE ""person-default"".TWOINDEXFIELDENTITYDTO (_KEY
OTHER INVISIBLE*[*]* NOT NULL,_VAL OTHER INVISIBLE,_VER OTHER
INVISIBLE,INDEXFIELD1 VARCHAR,INDEXFIELD2 VARCHAR) ENGINE
""org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$H2TableEngine""
 

And here is the stack trace:

class org.apache.ignite.IgniteCheckedException: Failed to register query
type: QueryTypeDescriptorImpl [space=person-default,
name=TwoIndexFieldEntityDto, tblName=null, fields={indexField1=class
java.lang.String, indexField2=class java.lang.String},
idxs={twoindexfieldentitydto_indexfield1_asc_indexfield2_asc_idx=QueryIndexDescriptorImpl
[name=twoindexfieldentitydto_indexfield1_asc_indexfield2_asc_idx,
type=SORTED, inlineSize=-1]}, fullTextIdx=null, keyCls=class
java.lang.Object, valCls=class java.lang.Object,
keyTypeName=com.synisys.idm.shared.Identity,
valTypeName=com.synisys.idm.entity.caching.model.TwoIndexFieldEntityDto,
valTextIdx=false, typeId=0, affKey=null, keyFieldName=null,
valFieldName=null, obsolete=false]
at 
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1806)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement
"CREATE TABLE ""person-default"".TWOINDEXFIELDENTITYDTO (_KEY OTHER
INVISIBLE[*] NOT NULL,_VAL OTHER INVISIBLE,_VER OTHER INVISIBLE,INDEXFIELD1
VARCHAR,INDEXFIELD2 VARCHAR) ENGINE
""org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$H2TableEngine""
"; expected "(, FOR, UNSIGNED, NOT, NULL, AS, DEFAULT, GENERATED, NOT, NULL,
AUTO_INCREMENT, BIGSERIAL, SERIAL, IDENTITY, NULL_TO_DEFAULT, SEQUENCE,
SELECTIVITY, COMMENT, CONSTRAINT, PRIMARY, UNIQUE, NOT, NULL, CHECK,
REFERENCES, ,, )"; SQL statement:
CREATE TABLE "person-default".TwoIndexFieldEntityDto (_key OTHER INVISIBLE
NOT NULL,_val OTHER INVISIBLE,_ver OTHER INVISIBLE,indexField1
VARCHAR,indexField2 VARCHAR) engine
"org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$H2TableEngine"
[42001-193]







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: sql insert with composite primary key

2017-11-29 Thread vkulichenko
Hi,

How do you define the schema? Using DDL or in CacheConfiguration?

Basically, you don't need to use _key at all here since TriggerId and
BiEventId actually compose the primary key. But you need to make sure Ignite
knows about that. For example, if you create a table like this:

CREATE TABLE TriggerDefinition (triggerId int, biEventId int, nonKeyField
int, PRIMARY KEY (triggerId, biEventId))

you can insert like this:

INSERT INTO TriggerDefinition (triggerId, biEventId, nonKeyField) VALUES
(10, 20, 30);

Ignite will automatically figure out which fields should go to the key
object, and which ones to the value object.

If you use QueryEntity to configure schema, provide key field names via
'keyFields' property.
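For reference, this is roughly what the QueryEntity-based configuration looks like in Java. This is a sketch: the key/value class names are made up, and only the field names come from the thread.

```java
QueryEntity entity = new QueryEntity(
    "com.example.TriggerKey",          // hypothetical key class
    "com.example.TriggerDefinition");  // hypothetical value class

LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("triggerId", Integer.class.getName());
fields.put("biEventId", Integer.class.getName());
fields.put("nonKeyField", Integer.class.getName());
entity.setFields(fields);

// Tell Ignite which of the declared fields form the composite key.
entity.setKeyFields(new HashSet<>(Arrays.asList("triggerId", "biEventId")));

CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("TriggerDefinition");
ccfg.setQueryEntities(Collections.singletonList(entity));
```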

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: get api takes more time when adding more nodes to cluster

2017-11-29 Thread vkulichenko
Hi Biren,

Are you doing reads from a client or directly on server nodes? If the
latter, then I guess you just do not collocate properly. With two nodes and
one backup all data is available on both nodes, so any server side read
would be local. With four nodes some of them would be remote which is
obviously slower.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite & Cassandra configuration with code?

2017-11-29 Thread vkulichenko
Hi Kenan,

Nothing changed there, ticket is still open:
https://issues.apache.org/jira/browse/IGNITE-4555

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Failed to accept TCP connection. java.net.SocketException: Too many open files

2017-11-29 Thread slava.koptilin
Hi,

I think that most of the open file descriptors relate to partition files.
In terms of performance, opening and closing files on each read/write is
probably not a very good approach anyway.

Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: SQL Count(*) returns incorrect count

2017-11-29 Thread soroka21
I switched to Ignite 2.3.0 in the hope that it behaves better.
Unfortunately, it does not.

During the execution of the Spark job the number of cache rows is growing,
but after the Spark job completes it looks like some entries have been
removed. JavaIgniteRDD shows the correct count, but again the final result
is incorrect.

I was using sqlline to track the count of records during the Spark job
execution; it also shows the problem:

0: jdbc:ignite:thin://10.238.42.86/> select count(*) from data3;
++
|COUNT(*)|
++
| 6428   |
++
1 row selected (0.063 seconds)
0: jdbc:ignite:thin://10.238.42.86/> select count(*) from data3;
++
|COUNT(*)|
++
| 62501  |
++
1 row selected (0.096 seconds)
0: jdbc:ignite:thin://10.238.42.86/> select count(*) from data3;
++
|COUNT(*)|
++
| 268183 |
++
1 row selected (0.154 seconds)
0: jdbc:ignite:thin://10.238.42.86/> select count(*) from data3;
++
|COUNT(*)|
++
| 482616 |
++
1 row selected (0.159 seconds)
0: jdbc:ignite:thin://10.238.42.86/> select count(*) from data3;
++
|COUNT(*)|
++
| 722436 |
++
1 row selected (0.159 seconds)
0: jdbc:ignite:thin://10.238.42.86/> select count(*) from data3;
++
|COUNT(*)|
++
| 1017236| <---
Still looks good here
++
1 row selected (0.205 seconds)
0: jdbc:ignite:thin://10.238.42.86/> select count(*) from data3;
++
|COUNT(*)|
++
| 542535 |<---
Ooops!
++






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Failed to accept TCP connection. java.net.SocketException: Too many open files

2017-11-29 Thread slava.koptilin
Hello,

It looks like there is a new feature which allows decreasing the number of
open file handles.
As of v2.2, Apache Ignite provides a new concept: Cache Groups [1].
Caches within a single cache group share various internal structures, which
allows mixing the data of several caches in shared partition files. So this
feature should decrease the number of open file handles.

[1] https://apacheignite.readme.io/v2.3/docs/cache-groups
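As a sketch, this is what assigning two caches to one group looks like in Spring XML (the cache and group names here are made up):

```xml
<property name="cacheConfiguration">
    <list>
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="cacheA"/>
            <property name="groupName" value="group1"/>
        </bean>
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="cacheB"/>
            <property name="groupName" value="group1"/>
        </bean>
    </list>
</property>
```

Both caches then share one set of partition files instead of each opening its own.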

Thanks!




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Enabling REST api for a apache client node

2017-11-29 Thread Naveen Kumar
My understanding from some other in-memory product is:
We have seeders and leeches; seeders are the ones that hold the data, and
leeches are the ones exposed to the clients, responsible for processing the
incoming requests. The basic idea was to offload the connection/disconnection
activity from the seeders, since the seeders are already heavily loaded with
data processing etc.
So I was trying to correlate the same here: seeders are nothing but data
nodes in Ignite and leeches are client nodes, so whenever a consumer tries
to connect to the cluster, it will only establish a connection with an
Ignite client node.
My understanding of the Ignite client node may be wrong.


On 29-Nov-2017 10:24 PM, "slava.koptilin"  wrote:

> Hi Naveen,
>
> It seems there is no such XML property.
> Anyway, I don't think there is much sense to start REST server on a client
> node because it is always better to access server node directly if
> possible.
>
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: orm for ignite

2017-11-29 Thread nunoo
Found these very promising links :)))

https://dzone.com/articles/apache-ignite-with-jpa-a-missing-element
https://www.javacodegeeks.com/2017/07/apache-ignite-spring-data.html
https://apacheignite-mix.readme.io/v2.0/docs/spring-data



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Enabling REST api for a apache client node

2017-11-29 Thread slava.koptilin
Hi Naveen,

It seems there is no such XML property.
Anyway, I don't think there is much sense to start REST server on a client
node because it is always better to access server node directly if possible.

Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Index not getting created

2017-11-29 Thread Naveen Kumar
I am using 2.3.

What could be the issue with the CREATE INDEX command below?

: jdbc:ignite:thin://127.0.0.1>  select * from "Customer".CUSTOMER
where ACCOUNT_ID_LIST ='A10001';
+++++---+
|ACCOUNT_ID_LIST |  CUST_ADDRESS_ID_LIST  |
   PARTYROLE|   PARTY_STATUS_CODE|
  REFREE_ID   |
+++++---+
| A10001 | custAddressIdList1 |
partyrole1 | partyStatusCode1   |
refreeId1 |
+++++---+
1 row selected (3.324 seconds)
0: jdbc:ignite:thin://127.0.0.1>  select ACCOUNT_ID_LIST from
"Customer".CUSTOMER where ACCOUNT_ID_LIST ='A10001';
++
|ACCOUNT_ID_LIST |
++
| A10001 |
++
1 row selected (2.078 seconds)
0: jdbc:ignite:thin://127.0.0.1> CREATE INDEX idx_customer_accountId
ON "Customer".CUSTOMER (ACCOUNT_ID_LIST);
Error: Cache doesn't exist: Customer (state=5,code=0)
java.sql.SQLException: Cache doesn't exist: Customer
at 
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)
at sqlline.Commands.execute(Commands.java:823)
at sqlline.Commands.sql(Commands.java:733)
at sqlline.SqlLine.dispatch(SqlLine.java:795)
at sqlline.SqlLine.begin(SqlLine.java:668)
at sqlline.SqlLine.start(SqlLine.java:373)
at sqlline.SqlLine.main(SqlLine.java:265)
0: jdbc:ignite:thin://127.0.0.1>

-- 
Thanks & Regards,
Naveen Bandaru


Re: orm for ignite

2017-11-29 Thread nunoo
But that object is not modeled as a JPA entity, is it? That approach is
using a JDBC-like API, right?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: orm for ignite

2017-11-29 Thread Pavel Tupitsyn
Ignite has built-in object mapping.
Basically you put your objects into cache and it is possible to query them
with SQL.

On Wed, Nov 29, 2017 at 5:56 PM, nunoo  wrote:

> Is there any orm for ignite ? This question is part of my larger enquiry on
> how seamlessly it would be possible to migrate a spring based micro-service
> into a ignite-services based one. I'm aware of the need to change the
> inter-service communication part but I would like to maintain the internal
> business logic and the jpa model (assuming I would construct a properly
> mirrored ignite cache to support it, of course). Has anyone, for instance,
> created a Hibernate idiom for Ignite?
>
> Thank you very much.
> Nuno
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite node crashes after one query fetches many entries from cache

2017-11-29 Thread Nick Pordash
Ray,

I think to avoid excessive GC and OOM you could try switching to a lazy
result set:
https://apacheignite-sql.readme.io/docs/performance-and-debugging#result-set-lazy-load
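A sketch of what enabling it looks like (the table name is taken from the thread):

```java
SqlFieldsQuery qry = new SqlFieldsQuery("select * from table_name")
    .setLazy(true); // iterate the result set lazily instead of materializing it in memory
```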

- Nick

On Wed, Nov 29, 2017, 7:19 AM Anton Vinogradov 
wrote:

> Ray,
>
> Seems you're looking
> for org.apache.ignite.cache.query.SqlFieldsQuery#timeout?
>
> On Tue, Nov 28, 2017 at 5:30 PM, Alexey Kukushkin <
> kukushkinale...@gmail.com> wrote:
>
>> Ignite Developers,
>>
>> I know community is developing an "Internal Problems Detection" feature
>> .
>> Do you know if it addresses the problem Ray described below? May be we
>> already have a setting to prevent this from happening?
>>
>> On Tue, Nov 28, 2017 at 5:13 PM, Ray  wrote:
>>
>>> I try to fetch all the results of a table with billions of entries using
>>> sql
>>> like this "select * from table_name".
>>> As far as my understanding, Ignite will prepare all the data on the node
>>> running this query then return the results to the client.
>>> The problem is that after a while, the node crashes(probably because of
>>> long
>>> GC pause or running out of memory).
>>> Is node crashing the expected behavior?
>>> I mean it's unreasonable that Ignite node crashes after this kind of
>>> query.
>>>
>>> From my experience with other databases,  running this kind of full table
>>> scan will not crash the node.
>>>
>>> The optimal way for handling this kind of situation is Ignite node stays
>>> alive, the query will be stopped by Ignite node when the node find out it
>>> will run out of memory soon.
>>> Then an error response shall be returned to the client.
>>>
>>> Please advise me if this mechanism already exists and there is a hidden
>>> switch to turn it on.
>>> Thanks
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>
>>
>>
>> --
>> Best regards,
>> Alexey
>>
>
>


Apache Ignite & Cassandra configuration with code?

2017-11-29 Thread Kenan Dalley
Is it possible to fully configure the connection between Apache Ignite v2.3 &
Cassandra?  I started the Cassandra Integration with 1.8 and, at the time,
the configuration had to be done fully with xml.  I'm just curious if there
have been enough changes and updates over time (that I'm unaware of) that
now allow Cassandra to be configured fully in code (aka Java) without
needing xml just like it can be done for other types of connections.  Also,
if that's possible, can someone post a fully-workable example?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Local node and remote node have different version numbers

2017-11-29 Thread Evgenii Zhuravlev
Hi,

In this case, you should specify ports for addresses, for example:

10.23.153.56:47500
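So the ip-finder entry would look like this (a sketch; 47500 is the default discovery port, and the address comes from the thread):

```xml
<property name="addresses">
    <list>
        <value>10.23.153.56:47500</value>
    </list>
</property>
```

Giving each cluster a distinct port (with a matching localPort on TcpDiscoverySpi) keeps the two versions from finding each other.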


Evgenii


2017-11-29 18:20 GMT+03:00 Rajarshi Pain :

> Hi,
>
> I was working on Ignite 2.3 and my colleague is working on 1.8. We are
> running different applications on our local machines using the example
> ignite xml, but it is failing with the exception below:
>
> We are using our local system IPs under TcpDiscoverySpi, but the nodes
> are still detecting each other. How can I stop them from discovering each
> other?
>
> <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>   <property name="addresses">
>     <list>
>       <value>10.23.153.56</value>
>     </list>
>   </property>
> </bean>
>
>
> Exception:-
>
>  Local node and remote node have different version numbers (node will not
> join, Ignite does not support rolling updates, so versions must be exactly
> the same) [locBuildVer=2.3.0, rmtBuildVer=1.8.0, locNodeAddrs=[
> 01HW1083820.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.152, /127.0.0.1],
> rmtNodeAddrs=[01Hw1083774.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.149, /
> 127.0.0.1], locNodeId=c3e4f1ff-f6d7-4c26-9baf-e04f99a8eaac,
> rmtNodeId=75e5e5bc-5de2-484a-9a91-a3a9f2ec48d3]
>
> --
> Regards,
> Rajarshi Pain
>


Local node and remote node have different version numbers

2017-11-29 Thread Rajarshi Pain
Hi,

I was working on Ignite 2.3 and my colleague is working on 1.8. We are
running different applications on our local machines using the example
ignite xml, but it is failing with the exception below:

We are using our local system IPs under TcpDiscoverySpi, but the nodes are
still detecting each other. How can I stop them from discovering each
other?


<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
  <property name="addresses">
    <list>
      <value>10.23.153.56</value>
    </list>
  </property>
</bean>

Exception:-

 Local node and remote node have different version numbers (node will not
join, Ignite does not support rolling updates, so versions must be exactly
the same) [locBuildVer=2.3.0, rmtBuildVer=1.8.0, locNodeAddrs=[
01HW1083820.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.152, /127.0.0.1],
rmtNodeAddrs=[01Hw1083774.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.149, /
127.0.0.1], locNodeId=c3e4f1ff-f6d7-4c26-9baf-e04f99a8eaac,
rmtNodeId=75e5e5bc-5de2-484a-9a91-a3a9f2ec48d3]

-- 
Regards,
Rajarshi Pain


Re: Ignite node crashes after one query fetches many entries from cache

2017-11-29 Thread Anton Vinogradov
Ray,

Seems you're looking
for org.apache.ignite.cache.query.SqlFieldsQuery#timeout?

On Tue, Nov 28, 2017 at 5:30 PM, Alexey Kukushkin  wrote:

> Ignite Developers,
>
> I know community is developing an "Internal Problems Detection" feature
> .
> Do you know if it addresses the problem Ray described below? May be we
> already have a setting to prevent this from happening?
>
> On Tue, Nov 28, 2017 at 5:13 PM, Ray  wrote:
>
>> I try to fetch all the results of a table with billions of entries using
>> SQL like this: "select * from table_name".
>> As far as I understand, Ignite will prepare all the data on the node
>> running this query and then return the results to the client.
>> The problem is that after a while the node crashes (probably because of a
>> long GC pause or running out of memory).
>> Is the node crashing the expected behavior?
>> I mean, it's unreasonable that an Ignite node crashes after this kind of
>> query.
>>
>> From my experience with other databases, running this kind of full table
>> scan will not crash the node.
>>
>> The optimal way of handling this kind of situation is for the Ignite node
>> to stay alive: the query is stopped by the node when it finds out it
>> will run out of memory soon.
>> Then an error response is returned to the client.
>>
>> Please advise me if this mechanism already exists and there is a hidden
>> switch to turn it on.
>> Thanks
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
>
> --
> Best regards,
> Alexey
>


Re: Register Continuous query quite slow

2017-11-29 Thread Denis Mekhanikov
Jin,

It's not a bug. All nodes that will be executing the initial query should
have metadata of the query class.
When you register an initial query or a remote filter for a CQ, their
objects are sent to remote nodes, so a metadata update is triggered.

It's possible that the anonymous classes you are using as the remote filter
and initial query cause the whole outer class to be serialized.
Try making them static nested or top-level classes. That eliminates the
possibility that more data is serialized than needed.

Denis
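
The advice above — moving the remote filter into a static nested (or top-level) class so no outer instance is captured on serialization — might look like this sketch (the class, field, and filtering logic are illustrative, not Jin's actual code):

```java
import javax.cache.event.CacheEntryEvent;

import org.apache.ignite.cache.CacheEntryEventSerializableFilter;

public class Listeners {
    /**
     * Static nested class: serializing it does not drag the enclosing
     * object (and its classpath dependencies) along to remote nodes.
     */
    public static class KeyPrefixFilter
            implements CacheEntryEventSerializableFilter<String, String> {
        private final String prefix;

        public KeyPrefixFilter(String prefix) {
            this.prefix = prefix;
        }

        @Override
        public boolean evaluate(CacheEntryEvent<? extends String, ? extends String> evt) {
            // A plain string comparison, as in Jin's case; no outer-class state needed.
            return evt.getKey().startsWith(prefix);
        }
    }
}
```

An anonymous inner class would instead hold an implicit reference to its enclosing instance, which is what forces the extra classes and metadata onto the wire.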

вт, 28 нояб. 2017 г. в 18:41, guojin0...@126.com :

> denis, thanks for your reply.
>
> actually in my case, there is.no data in cache when registering CQ. the
> log shows that meta data thing will take serveral mins.
> currently i disabled the initial query and remotefilter.
> my concern is why those meta data exchange happys?  as you can see, my
> initial query and remotefilter just do a string compare and dont need any
> extra class help, all my logic is in local listener which just runs
> locally.
> i dont see necessary for that exchange in this case.  is that a bug?
> br,
> jin
> ---Original---
> *From:* "Denis Mekhanikov"
> *Date:* 2017/11/28 16:18:03
> *To:* "user";
> *Subject:* Re: Register Contentious query quit slow
>
> Jin,
>
> Regarding the problem with initial query: it's not binary metadata update,
> what causes the slow-down, but processing the data.
> Binary metadata update is done by just sending a few messages across the
> ring, it's not a big deal, you shouldn't care too much about it.
> You just have a lot of data, and all of it is processed by initial query.
> That's why you get such performance impact.
>
> Metadata does not contain Java classes, it's some internal information
> about structure of objects, that are stored in cache.
> So, if you put data of some type for the first time, it will cause
> metadata update messages to be sent across the ring.
>
> I'm not sure that I understand your goal completely, but you can take a
> look at the node filter
> cache configuration parameter; looks like it could help.
>
> As I already said, you shouldn't build a cluster on nodes, that are
> connected with an unreliable network. It will lead to stability issues like
> segmentation or split-brain.
> It will also lead to slow work of the discovery SPI, because nodes form a
> ring topology, and they may be connected in interleaving order (1st DC, then
> 2nd, then 1st, then 2nd again, etc).
>
> If you want to connect multiple DCs, you can consider using proprietary
> solutions for datacenter replication. Here is one of the options:
> https://www.gridgain.com/products/software/enterprise-edition/data-center-replication
> You can also implement it yourself, since Ignite has pluggable modules
> support.
>
> Denis
>
> вт, 28 нояб. 2017 г. в 3:58, gunman524 :
>
>> Denis,
>>
>> For this cross-region case, I have some thoughts to share; please help
>> verify whether this is already possible.
>>
>> 1. Two DCs:   A ---cross region--- B
>> 2. A has several Ignite nodes A1, A2, A3, and B has several Ignite
>> nodes B1, B2, B3, ...; they are all in the same cluster
>> 3. In Ignite, A1, A2, A3 can belong to the same group, say A', and B1, B2, B3
>> can be B'
>>
>> So, can Ignite do it this way:
>> 1. Data is deployed across A' and B' in REPLICATED mode, and within A' in
>> PARTITIONED mode
>> 2. The Continuous Query is registered only in A'
>>
>> In this way:
>> 1. As all nodes within A' are in the same DC, every action can be performed fast
>> 2. Meanwhile, as A' and B' hold the same data, we should not miss any event
>> 3. As data within A' and B' is deployed in PARTITIONED mode, each node does
>> not need to hold a huge amount of memory
>>
>> Thanks,
>> Jin
>>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


orm for ignite

2017-11-29 Thread nunoo
Is there any ORM for Ignite? This question is part of my larger enquiry into
how seamlessly a Spring-based micro-service could be migrated
into an Ignite-services-based one. I'm aware of the need to change the
inter-service communication part, but I would like to keep the internal
business logic and the JPA model (assuming I would construct a properly
mirrored Ignite cache to support it, of course). Has anyone, for instance,
created a Hibernate idiom for Ignite?

Thank you very much.
Nuno



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re:Poor performance select query with jdbc thin mode

2017-11-29 Thread Denis Mekhanikov
Lucky,

Try enabling the *enforceJoinOrder* parameter in the JDBC connection string
and let us know the result.
Your JDBC connection string should look like this:
jdbc:ignite:thin://127.0.0.1?*enforceJoinOrder=true*
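
In code, the flag just goes into the thin-driver URL — a sketch (host and query are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JoinOrderExample {
    public static void main(String[] args) throws Exception {
        // enforceJoinOrder=true tells the SQL engine to join tables
        // in exactly the order they appear in the query text.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:thin://127.0.0.1?enforceJoinOrder=true");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select ...")) { // your slow query here
            while (rs.next()) {
                // consume rows
            }
        }
    }
}
```

With the flag on, rewriting the query so the most selective table comes first can change the plan significantly, which is why it is worth comparing timings with and without it.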

Denis

ср, 29 нояб. 2017 г. в 10:22, Lucky :

> Have you come to a conclusion?
>
>
>
>
>
>
> At 2017-11-27 17:23:16, "Lucky"  wrote:
>
> Hi, Denis:
> 2.3 is better than 2.2,
> but the performance improvement is not obvious.
> It took 2.5 seconds on a single Ignite node, and 5 seconds in a
> cluster (3 Ignite nodes).
>
> The JDBC client driver is worse than the JDBC thin driver: it took
> about 1 second more.
>
> Are there any other suggestions?
> Thanks.
>
>
> 2017-11-24 20:25, Denis Mekhanikov
>  wrote:
>
> Hi!
>
> Please try running the same test on Ignite 2.3.
> Also try switching to the JDBC client driver
>  and let us
> know the results.
>
> Denis
>
>


Re: Trouble with v 2.3.0 on AIX

2017-11-29 Thread Michael Cherkasov
Hi Vladimir,

Unfortunately I don't have AIX or any other big-endian machine at hand, so
could you please assist me with fixing this bug.

Could you please run the following unit test on your AIX box:
org.apache.ignite.internal.direct.stream.v2.DirectByteBufferStreamImplV2ByteOrderSelfTest
Could you please share the results of this test?

Thanks,
Mike.

2017-11-02 12:03 GMT+03:00 Vladimir :

> Hi,
>
> Just upgraded from Ignite 2.1.0 to v.2.3.0. And now our several nodes
> cannot
> start on AIX 7. The error:
>
> ERROR 2017-11-02 11:59:01.331 [grid-nio-worker-tcp-comm-1-#22]
> org.apache.ignite.internal.util.nio.GridDirectParser: Failed to read
> message
> [msg=GridIoMessage [plc=0, topic=null, topicOrd=-1, ordered=false,
> timeout=0, skipOnTimeout=false, msg=null],
> buf=java.nio.DirectByteBuffer[pos=16841 lim=16844 cap=32768],
> reader=DirectMessageReader [state=DirectMessageState [pos=0,
> stack=[StateItem [stream=DirectByteBufferStreamImplV2
> [baseOff=1100144027456, arrOff=-1, tmpArrOff=0, tmpArrBytes=0,
> msgTypeDone=true, msg=GridDhtPartitionsFullMessage [parts=null,
> partCntrs=null, partCntrs2=null, partHistSuppliers=null,
> partsToReload=null,
> topVer=null, errs=null, compress=false, resTopVer=null, partCnt=0,
> super=GridDhtPartitionsAbstractMessage [exchId=GridDhtPartitionExchangeId
> [topVer=AffinityTopologyVersion [topVer=5, minorTopVer=0], discoEvt=null,
> nodeId=e3ac3f40, evt=NODE_JOINED], lastVer=GridCacheVersion [topVer=0,
> order=1509612963224, nodeOrder=0], super=GridCacheMessage [msgId=369,
> depInfo=null, err=null, skipPrepare=false]]], mapIt=null, it=null,
> arrPos=-1, keyDone=false, readSize=-1, readItems=0, prim=0, primShift=0,
> uuidState=0, uuidMost=0, uuidLeast=0, uuidLocId=0, lastFinished=true],
> state=0], StateItem [stream=DirectByteBufferStreamImplV2
> [baseOff=1100144027456, arrOff=-1, tmpArrOff=0, tmpArrBytes=0,
> msgTypeDone=true, msg=CacheGroupAffinityMessage [], mapIt=null, it=null,
> arrPos=-1, keyDone=true, readSize=7, readItems=2, prim=0, primShift=0,
> uuidState=0, uuidMost=0, uuidLeast=0, uuidLocId=0, lastFinished=true],
> state=0], StateItem [stream=DirectByteBufferStreamImplV2
> [baseOff=1100144027456, arrOff=-1, tmpArrOff=0, tmpArrBytes=0,
> msgTypeDone=true, msg=GridLongList [idx=0, arr=[]], mapIt=null, it=null,
> arrPos=-1, keyDone=false, readSize=512, readItems=47, prim=0, primShift=0,
> uuidState=0, uuidMost=0, uuidLeast=0, uuidLocId=0, lastFinished=true],
> state=0], StateItem [stream=DirectByteBufferStreamImplV2
> [baseOff=1100144027456, arrOff=-1, tmpArrOff=40, tmpArrBytes=40,
> msgTypeDone=false, msg=null, mapIt=null, it=null, arrPos=-1, keyDone=false,
> readSize=-1, readItems=0, prim=0, primShift=0, uuidState=0, uuidMost=0,
> uuidLeast=0, uuidLocId=0, lastFinished=true], state=0], null, null, null,
> null, null, null]], lastRead=true], ses=GridSelectorNioSessionImpl
> [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=1,
> bytesRcvd=404253, bytesSent=1989, bytesRcvd0=16886, bytesSent0=28,
> select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-1,
> igniteInstanceName=null, finished=false, hashCode=-2134841549,
> interrupted=false, runner=grid-nio-worker-tcp-comm-1-#22]]],
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> readBuf=java.nio.DirectByteBuffer[pos=16841 lim=16844 cap=32768],
> inRecovery=GridNioRecoveryDescriptor [acked=6, resendCnt=0, rcvCnt=0,
> sentCnt=6, reserved=true, lastAck=0, nodeLeft=false, node=TcpDiscoveryNode
> [id=7683662b-16c9-42b7-aa0d-8328a60fc58e, addrs=[127.0.0.1],
> sockAddrs=[/127.0.0.1:6250], discPort=6250, order=1, intOrder=1,
> lastExchangeTime=1509612963744, loc=false, ver=2.3.0#20171028-sha1:
> 8add7fd5,
> isClient=false], connected=true, connectCnt=11, queueLimit=131072,
> reserveCnt=175, pairedConnections=false],
> outRecovery=GridNioRecoveryDescriptor [acked=6, resendCnt=0, rcvCnt=0,
> sentCnt=6, reserved=true, lastAck=0, nodeLeft=false, node=TcpDiscoveryNode
> [id=7683662b-16c9-42b7-aa0d-8328a60fc58e, addrs=[127.0.0.1],
> sockAddrs=[/127.0.0.1:6250], discPort=6250, order=1, intOrder=1,
> lastExchangeTime=1509612963744, loc=false, ver=2.3.0#20171028-sha1:
> 8add7fd5,
> isClient=false], connected=true, connectCnt=11, queueLimit=131072,
> reserveCnt=175, pairedConnections=false], super=GridNioSessionImpl
> [locAddr=/127.0.0.1:6284, rmtAddr=/127.0.0.1:61790,
> createTime=1509613141318, closeTime=0, bytesSent=28, bytesRcvd=16886,
> bytesSent0=28, bytesRcvd0=16886, sndSchedTime=1509613141318,
> lastSndTime=1509613141318, lastRcvTime=1509613141318, readsPaused=false,
> filterChain=FilterChain[filters=[GridNioCodecFilter
> [parser=o.a.i.i.util.nio.GridDirectParser@8e66d834, directMode=true],
> GridConnectionBytesVerifyFilter], accepted=true]]]
> java.lang.IllegalArgumentException: null
> at java.nio.Buffer.position(Buffer.java:255) ~[?:1.8.0]
> at
> org.apache.ignite.internal.direct.stream.v2.DirectByteBufferStreamImplV2.
> 

Re: Lucene query syntaxt support in TextQuery ?

2017-11-29 Thread zbyszek
Val, thank you for confirmation.

zbyszek



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Regards the ContinuousQuery and MutableCacheEntryListenerConfiguration

2017-11-29 Thread aa...@tophold.com

BTW, when I use SQL to update the cache, the ContinuousQuery is not triggered,
while if I update the entries one by one it works.

Is this the reason?

SqlFieldsQuery update = new SqlFieldsQuery(UPDATE)
    .setArgs(Utils.utcEpochMills())
    .setTimeout(20, TimeUnit.SECONDS)
    .setCollocated(true)
    .setLocal(true);

Regards
Aaron


aa...@tophold.com
 
From: aa...@tophold.com
Date: 2017-11-29 19:22
To: user
Subject: Regards the ContinuousQuery and MutableCacheEntryListenerConfiguration
hi All, 

We use exactly the same configuration, with the same CacheEntryListener and
CacheEntryEventFilter, in both ContinuousQuery and
MutableCacheEntryListenerConfiguration.

But the ContinuousQuery never seems to trigger any events, while the
MutableCacheEntryListenerConfiguration keeps triggering them.

Also, ContinuousQuery has no interface to disable including the old value.


This does not work, even after the cache is updated:

// Generic type parameters below are assumed (<Object, AccountEntry>);
// they were stripped by the mail archive.
final ContinuousQuery<Object, AccountEntry> query = new ContinuousQuery<>();
query.setLocal(true);
query.setPageSize(1);
query.setTimeInterval(2_000);
final ScanQuery<Object, AccountEntry> scanQuery = new ScanQuery<>(new ScanDataFilter());
scanQuery.setLocal(true);
query.setInitialQuery(scanQuery);
query.setLocalListener(new DataCreateUpdateListener());
query.setRemoteFilterFactory(new CacheEntryEventFilterFactory());
try (QueryCursor<Cache.Entry<Object, AccountEntry>> cur = accountCache.query(query)) {
    for (Cache.Entry<Object, AccountEntry> row : cur) {
        processUpdate(row.getValue());
    }
}

This does work after cache updates trigger:

// Generic type parameters below are assumed (<Object, AccountEntry>);
// they were stripped by the mail archive.
MutableCacheEntryListenerConfiguration<Object, AccountEntry> mutableCacheEntryListenerConfiguration =
    new MutableCacheEntryListenerConfiguration<>(
        new Factory<CacheEntryListener<Object, AccountEntry>>() {
            private static final long serialVersionUID = 5358838258503369206L;

            @Override
            public CacheEntryListener<Object, AccountEntry> create() {
                return new DataCreateUpdateListener();
            }
        },
        new CacheEntryEventFilterFactory(),
        false,  // isOldValueRequired
        true    // isSynchronous
    );
ignite.cache(AccountEntry.IG_CACHE_NAME).registerCacheEntryListener(mutableCacheEntryListenerConfiguration);

Did I configure something wrong? Thanks for your advice!


Regards
Aaron


aa...@tophold.com


Enabling REST api for a apache client node

2017-11-29 Thread Naveen
How do we set '-DIGNITE_REST_START_ON_CLIENT=true' in the config XML? I
want to enable REST for a client node.

What is the XML property?

Regards

Naveen
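
For reference: IGNITE_REST_START_ON_CLIENT is, as far as I know, a JVM system property rather than an IgniteConfiguration XML property, so it is normally passed to the JVM instead of being set in the XML. A sketch (class name and classpath are placeholders):

```shell
# Pass it as a JVM argument when starting the client node:
java -DIGNITE_REST_START_ON_CLIENT=true -cp <your-classpath> com.example.ClientApp

# Or, when starting via ignite.sh, export it through JVM_OPTS first:
export JVM_OPTS="$JVM_OPTS -DIGNITE_REST_START_ON_CLIENT=true"
bin/ignite.sh config/client-config.xml
```

Alternatively, calling System.setProperty("IGNITE_REST_START_ON_CLIENT", "true") before Ignition.start() in application code should have the same effect.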



Regards the ContinuousQuery and MutableCacheEntryListenerConfiguration

2017-11-29 Thread aa...@tophold.com
hi All, 

We use exactly the same configuration, with the same CacheEntryListener and
CacheEntryEventFilter, in both ContinuousQuery and
MutableCacheEntryListenerConfiguration.

But the ContinuousQuery never seems to trigger any events, while the
MutableCacheEntryListenerConfiguration keeps triggering them.

Also, ContinuousQuery has no interface to disable including the old value.


This does not work, even after the cache is updated:

// Generic type parameters below are assumed (<Object, AccountEntry>);
// they were stripped by the mail archive.
final ContinuousQuery<Object, AccountEntry> query = new ContinuousQuery<>();
query.setLocal(true);
query.setPageSize(1);
query.setTimeInterval(2_000);
final ScanQuery<Object, AccountEntry> scanQuery = new ScanQuery<>(new ScanDataFilter());
scanQuery.setLocal(true);
query.setInitialQuery(scanQuery);
query.setLocalListener(new DataCreateUpdateListener());
query.setRemoteFilterFactory(new CacheEntryEventFilterFactory());
try (QueryCursor<Cache.Entry<Object, AccountEntry>> cur = accountCache.query(query)) {
    for (Cache.Entry<Object, AccountEntry> row : cur) {
        processUpdate(row.getValue());
    }
}

This does work after cache updates trigger:

// Generic type parameters below are assumed (<Object, AccountEntry>);
// they were stripped by the mail archive.
MutableCacheEntryListenerConfiguration<Object, AccountEntry> mutableCacheEntryListenerConfiguration =
    new MutableCacheEntryListenerConfiguration<>(
        new Factory<CacheEntryListener<Object, AccountEntry>>() {
            private static final long serialVersionUID = 5358838258503369206L;

            @Override
            public CacheEntryListener<Object, AccountEntry> create() {
                return new DataCreateUpdateListener();
            }
        },
        new CacheEntryEventFilterFactory(),
        false,  // isOldValueRequired
        true    // isSynchronous
    );
ignite.cache(AccountEntry.IG_CACHE_NAME).registerCacheEntryListener(mutableCacheEntryListenerConfiguration);

Did I configure something wrong? Thanks for your advice!


Regards
Aaron


aa...@tophold.com


Re: Re: Re: JdbcQueryExecuteResult cast error

2017-11-29 Thread Denis Mekhanikov
Lucky,

What tool are you using to access Ignite over JDBC? Does this problem
reproduce, if you use DBeaver, for example?
As Taras said, looks like the same JDBC connection is used concurrently.

Denis
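
If reworking the code is an option, one simple way to rule out concurrent use of a single connection is to keep one connection per thread. A sketch, assuming the thin-driver URL from the original post; a real connection pool would be the more robust choice:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class PerThreadConnections {
    private static final String URL =
        "jdbc:ignite:thin://192.168.1.2?distributedJoins=true";

    // Each thread lazily opens and then reuses its own connection,
    // so no two threads ever touch the same connection concurrently.
    private static final ThreadLocal<Connection> CONN = new ThreadLocal<>();

    public static Connection getConn() throws SQLException {
        Connection c = CONN.get();
        if (c == null || c.isClosed()) {
            c = DriverManager.getConnection(URL);
            CONN.set(c);
        }
        return c;
    }
}
```

This keeps the "reuse a connection across 30 queries" performance benefit within each thread while avoiding the interleaved request/response traffic that produces the cast errors above.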

ср, 29 нояб. 2017 г. в 5:17, Lucky :

> OK, this is the stack trace.
> Sorry, because of the company's privacy rules, I have hidden the actual
> paths of some of the class names.
>
> When I refresh the first time:
> Caused by: java.lang.ClassCastException:
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryExecuteResult
> cannot be cast to
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryMetadataResult
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinResultSet.meta(JdbcThinResultSet.java:1877)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinResultSet.getMetaData(JdbcThinResultSet.java:714)
> at com.XXX.XXXxx.sql.shell.KDResultSet.(KDResultSet.java:80)
> at
> com.XXX.XXXxx.sql.shell.KDPreparedStatement.executeQuery(KDPreparedStatement.java:393)
> at
> com.xx.jdbc.adapter.PreparedStatementHandle.executeQuery(PreparedStatementHandle.java:61)
> at
> com.XXX.XXXxx.xxx.impl.EntityAccess.execQuerySQL(EntityAccess.java:483)
> at com.XXX.XXXxx.xxx.impl.EntityAccess.select(EntityAccess.java:365)
> at
> com.XXX.XXXxx.xxx.impl.ObjectReader.innerSelect(ObjectReader.java:241)
> at com.XXX.XXXxx.xxx.impl.ObjectReader.select(ObjectReader.java:139)
> at com.XXX.XXXxx.xxx.MappingDAO.innerGetCollection(MappingDAO.java:785)
> at com.kXXX.XXXxx.xxx.MappingDAO.getCollection(MappingDAO.java:683)r
> at
> com.XXX.XXXxx.xxx.xxx.AbstractEntityControllerBean.innerGetCollection(AbstractEntityControllerBean.java:580)
> at
> com.XXX.XXXxx.xxx.app.AbstractCustomerGroupDetailControllerBean._getCollection(AbstractCustomerGroupDetailControllerBean.java:124)
> at
> com.XXX.XXXxx.xxx.app.AbstractCustomerGroupDetailControllerBean.getCollection(AbstractCustomerGroupDetailControllerBean.java:112)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> com.XXX.XXXxx.xxx.transaction.EJBTxFacade.TxInvokerBean.invoke(TxInvokerBean.java:125)
>
>  But when I refresh again, it changes to this:
>
> Caused by: java.sql.SQLException
>java.sql.SQLException: Failed to communicate with Ignite cluster.
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:681)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinResultSet.meta(JdbcThinResultSet.java:1877)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinResultSet.getMetaData(JdbcThinResultSet.java:714)
> at com.x..sql.shell.KDResultSet.getMetaData(KDResultSet.java:319)
> at
> com.apusic.jdbc.adapter.ResultSetHandle.getMetaData(ResultSetHandle.java:323)
> at
> com.x..x.x..impl.ImplUtils$FieldTypes.(ImplUtils.java:1344)
> at
> com.x..x..impl.ImplUtils.fillObjectValue(ImplUtils.java:307)
> at
> com.x..x..impl.EntityAccess.select(EntityAccess.java:367)
> at
> com.x..x..impl.ObjectReader.innerSelect(ObjectReader.java:241)
> at
> com.x..x..impl.ObjectReader.select(ObjectReader.java:139)
> at
> com.x..x..ORMappingDAO.innerGetCollection(ORMappingDAO.java:785)
> at
> com.x..x..ORMappingDAO.getCollection(ORMappingDAO.java:683)
> at
> com.x..framework.ejb.AbstractEntityControllerBean.innerGetCollection(AbstractEntityControllerBean.java:580)
> at
> com.x..x.x.app.AbstractCustomerGroupDetailControllerBean._getCollection(AbstractCustomerGroupDetailControllerBean.java:124)
> at
> com.x..x.x.app.AbstractCustomerGroupDetailControllerBean.getCollection(AbstractCustomerGroupDetailControllerBean.java:112)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> com.x..transaction.EJBTxFacade.TxInvokerBean.invoke(TxInvokerBean.java:125)
> at
> com.x..transaction.EJBTxFacade.TxInvokerBean.INVOKE_SUPPORTS(TxInvokerBean.java:64)
> at
> com.x..transaction.EJBTxFacade.TxInvokerBean_LocalObjectImpl_2.INVOKE_SUPPORTS(Unknown
> Source)
> at
> com.x..transaction.EJBTransactionProxy.invoke(EJBTransactionProxy.java:179)
> at
> com.x..transaction.EJBTransactionProxy.invoke(EJBTransactionProxy.java:324)
> at com.sun.proxy.$Proxy292.getCollection(Unknown Source)
> at
> com.x..x.x.CustomerGroupDetail.getCollection(CustomerGroupDetail.java:89)
> at
> 

Re: copyOnRead to false

2017-11-29 Thread ezhuravlev
Hi Colin,

I would suggest you send a new message to the user list with a detailed
description of your use case; I think the community could check it and give
some comments.

Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Clients reconnection problem

2017-11-29 Thread Ilya Kasnacheev
Hello!

Are you still having this problem?
There's an IgniteCheckedException for which we need full stack traces
(including causes) in order to proceed.

Regards,

-- 
Ilya Kasnacheev

2017-10-20 17:36 GMT+03:00 daniels :

> Hi, thank you for the response.
>
> No, there aren't any logs.
>
> The same log mentioned below appears for some time
>
> --
> ERROR  1 --- [er-#2%%] o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Failed
> to
> send message: null
>  java.io.IOException: Failed to get acknowledge for message:
> TcpDiscoveryClientMetricsUpdateMessage [super=TcpDiscoveryAbstractMessage
> [sndNodeId=null, id=12qwrji-d--fsadas, verifierNodeId=null, topVer=0,
> pendingIdx=0, failedNodes=null, isClient=true]]
> at
> org.apache.ignite.spi.discovery.tcp.ClientImpl$
> SocketWriter.body(ClientImpl.java:1246)
> ~[ignite-core-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> [ignite-core-2.0.0.jar:2.0.0]
> 
>
> then this log disappears and starts to show
> this log before restarting
>
>
> 
>  2017-10-19 13:35:09.197  WARN [um-role-service,,,] 1 --- [-#228%sys_grid%]
> o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Failed to reconnect to cluster
> (will retry): class o.a.i.IgniteCheckedException: Failed to deserialize
> object with given class loader: sun.misc.Launcher$AppClassLoader@5c647e05
> -
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Issues with multiple kafka connectors, questions regarding ignite caches

2017-11-29 Thread Alexey Kukushkin
Hi,

I opened a ticket for this issue:
https://issues.apache.org/jira/browse/IGNITE-7065. If no one picks it up by
next week, then I might find time to fix it myself (although I cannot
promise). I am not sure when the community is going to release 2.4, but I (or
whoever fixes it) could share a branch with you, and you could grab the
fix from that branch. The fix will be limited to the connector module, so it
should be safe to apply.


Re: Facing issue with Ignite 2.3 and vertx-ignite 3.5.0

2017-11-29 Thread Alexey Kukushkin
Hi,

No, you cannot activate the cluster through the XML config. Do you really need
persistence enabled? The cluster is active by default if you disable
persistence.

You might also consider creating a simple ignite.sh wrapper that would also
call "control.sh --activate" to activate the cluster. In this case your
cluster would be active once you start the first node.
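
Such a wrapper could be as simple as the following sketch (the install path, config file, and fixed sleep are assumptions; a production script should poll until the node is actually up before activating):

```shell
#!/bin/bash
# Hypothetical wrapper: start the node, wait briefly, then activate the cluster.
IGNITE_HOME=/opt/ignite   # assumed install location

"$IGNITE_HOME/bin/ignite.sh" "$IGNITE_HOME/config/my-config.xml" &

sleep 10   # crude wait for the node to come up

"$IGNITE_HOME/bin/control.sh" --activate
```

Activation is only needed once per cluster lifetime with persistence enabled; subsequent restarts with the same baseline topology re-activate automatically.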