Unsubscribe

2018-05-15 Thread Rajarshi Pain
-- 

Thanks
Rajarshi


Add ignite jars using maven

2018-02-11 Thread Rajarshi Pain
Hi,


Previously I was manually adding all the required jars to my build path; now I
am using Maven. Is there a way to add all the Ignite jars, along with their
dependencies, to the project at once, or do I have to add them one by one in
the pom file?
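
For illustration, a minimal sketch of the pom.xml dependencies (the module list
and the 2.3.0 version are assumptions; adjust them to the release you actually
use). Each Ignite module declares its own transitive dependencies, so Maven
pulls those in for you:

<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>2.3.0</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spring</artifactId>
    <version>2.3.0</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-indexing</artifactId>
    <version>2.3.0</version>
</dependency>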


-- 

Thanks
Rajarshi


Re: horizontal scaling

2018-01-29 Thread Rajarshi Pain
Hi Evgenii,

Yes, for now we are running all the nodes on the same physical server.

Thanks,
Raj

On Mon, Jan 29, 2018 at 7:03 PM, ezhuravlev <e.zhuravlev...@gmail.com>
wrote:

> Hi,
>
> > Right now we are running 4 nodes (same physical system)
>
> Do you mean that you run 4 nodes on the same physical machine and after
> this, you add 2 additional nodes on the same physical machine?
>
> Evgenii
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Regards,
Rajarshi Pain


horizontal scaling

2018-01-27 Thread Rajarshi Pain
Hello,

We are trying to scale our application up to achieve more throughput. Right now
we are running 4 nodes (same physical system) and getting almost 1200 TPS.

But when we add more nodes, performance decreases: with more than 6 nodes we
are not able to get above 600 TPS.

Our application runs on each node; we use *compute.run* to perform the work on
the local node.
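
For reference, a minimal sketch (assuming the Ignite 2.x API; the work itself
is elided) of what running a job only on the local node looks like:

Ignite ignite = Ignition.ignite();
IgniteCompute localCompute = ignite.compute(ignite.cluster().forLocal());
localCompute.run(() -> {
    // process one message / unit of work here
});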

We are trying to investigate why it gives lower TPS as we add nodes, but so far
we have not been able to figure out whether we are making a mistake or
something else is going wrong.

Would you be able to help me with how to investigate this and see whether each
node is working as expected, or whether there is a bottleneck that prevents it
from processing properly?

I am not able to share the code as it is on the client's machine.

-- 
Regards,
Raj


AffinityKey - best practice

2018-01-20 Thread Rajarshi Pain
Hello,

I am trying to explore the different functionality offered by Ignite (2.3), but
the AffinityKey concept is not quite clear to me, so I need some suggestions
from the Ignite experts.

We have a scenario where we will load all static data from the DB into caches
and run various validations on the incoming messages.

Will AffinityKey be useful in this scenario? Can we expect better performance
than an implementation without AffinityKey?

We will run a multi-node, multi-threaded application.
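
For context, a minimal sketch (the MessageKey and referenceId names are made
up) of how an affinity key is usually declared on a cache key class, so that a
message lands on the same node as the static-data entry it is validated against
and the validation avoids a network hop:

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class MessageKey {
    private long messageId;

    // Messages with the same referenceId are stored on the same node as the
    // static-data entry keyed by that referenceId.
    @AffinityKeyMapped
    private long referenceId;
}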

-- 
Regards,
Raj


Re: Exception Occuredjava.sql.BatchUpdateException: ORA-00001: unique constraint

2018-01-13 Thread Rajarshi Pain
Hello,

Can someone please help me with this scenario and tell me whether I am making a
mistake somewhere?

Thanks,
Raj

On Fri, Jan 12, 2018 at 11:57 PM, Rajarshi Pain <rajarsh...@gmail.com>
wrote:

> Hi,
>
> We were doing a POC (Ignite 2.3 with Oracle) to check how the cache persistence
> store works with writeBehind. We are updating data in batches rather than one
> record at a time.
>
> Although there are no duplicate records, we still get a "unique constraint"
> exception.
>
> We could not find any root cause for this, and we are not sure whether it is an
> issue on our side or a possible Ignite bug, as the error occurs randomly.
>
> Also, in another POC we have seen that not all the data is inserted into the
> database.
> For example, we pushed 10K records but can only see 9800 in the database,
> though there is no error or exception.
>
> Could you please shed some light on these issues?
>
> Sample Code:
>
> for (int i = 1; i <= 10; i++) {
>     frstnm = "test1name" + i;
>     scndnm = "test2name";
>     p1 = new Person(i, frstnm, scndnm);
>     cache.put((long) i, p1);
> }
>
>
> insert into PERSON (id, firstname, lastname) values (?, ?, ?)
>
> *inside write: *
>
> st.setLong(1, val.getId());
> st.setString(2, val.getFirstName());
> st.setString(3, val.getLastName());
> st.addBatch();
> if (counter == 100) {
>     System.out.println("Counter " + counter);
>     st.executeBatch();
>     counter = 0;
> }
>
> We got the exception after about 11K records:
>
> Exception Occuredjava.sql.BatchUpdateException: ORA-1: unique
> constraint
>
> --
> Regards,
> Rajarshi Pain
>



-- 
Regards,
Rajarshi Pain


Exception Occuredjava.sql.BatchUpdateException: ORA-00001: unique constraint

2018-01-12 Thread Rajarshi Pain
Hi,

We were doing a POC (Ignite 2.3 with Oracle) to check how the cache persistence
store works with writeBehind. We are updating data in batches rather than one
record at a time.

Although there are no duplicate records, we still get a "unique constraint"
exception.

We could not find any root cause for this, and we are not sure whether it is an
issue on our side or a possible Ignite bug, as the error occurs randomly.

Also, in another POC we have seen that not all the data is inserted into the
database.
For example, we pushed 10K records but can only see 9800 in the database,
though there is no error or exception.

Could you please shed some light on these issues?

Sample Code:

for (int i = 1; i <= 10; i++) {
    frstnm = "test1name" + i;
    scndnm = "test2name";
    p1 = new Person(i, frstnm, scndnm);
    cache.put((long) i, p1);
}


insert into PERSON (id, firstname, lastname) values (?, ?, ?)

*inside write: *

st.setLong(1, val.getId());
st.setString(2, val.getFirstName());
st.setString(3, val.getLastName());
st.addBatch();
if (counter == 100) {
    System.out.println("Counter " + counter);
    st.executeBatch();
    counter = 0;
}

We got the exception after about 11K records:

Exception Occuredjava.sql.BatchUpdateException: ORA-1: unique
constraint
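
For reference only, here is a minimal sketch of a JDBC-backed store whose
writeAll() builds and flushes its own batch per call. This is not the POC code:
PersonStore, the injected dataSource, and the column handling are assumptions,
and load()/delete() are left out of the sketch.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Collection;
import java.util.Collections;
import javax.cache.Cache;
import javax.cache.integration.CacheWriterException;
import javax.sql.DataSource;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class PersonStore extends CacheStoreAdapter<Long, Person> {
    private final DataSource dataSource; // assumed to be provided by the application

    public PersonStore(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override public Person load(Long key) {
        return null; // read-through not shown in this sketch
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends Person> entry) {
        writeAll(Collections.singleton(entry));
    }

    @Override public void writeAll(Collection<Cache.Entry<? extends Long, ? extends Person>> entries) {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement st = conn.prepareStatement(
                 "insert into PERSON (id, firstname, lastname) values (?, ?, ?)")) {
            for (Cache.Entry<? extends Long, ? extends Person> e : entries) {
                Person p = e.getValue();
                st.setLong(1, p.getId());
                st.setString(2, p.getFirstName());
                st.setString(3, p.getLastName());
                st.addBatch();
            }
            // Flush everything passed to this call, even if it is fewer than 100 rows.
            st.executeBatch();
        }
        catch (SQLException ex) {
            throw new CacheWriterException(ex);
        }
    }

    @Override public void delete(Object key) {
        // delete not shown in this sketch
    }
}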

-- 
Regards,
Rajarshi Pain


Re: java.lang.IllegalStateException: Cache has been closed:

2018-01-05 Thread Rajarshi Pain
Thanks, Denis.
Can you please tell me how try-with-resources works with Ignite caches?
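
For context, a minimal sketch (assuming the Ignite 2.x API and an already
started ignite instance) of the pattern Denis describes below: a cache obtained
inside try-with-resources is closed when the block exits, so later operations
on it fail, while a cache obtained outside stays usable.

try (IgniteCache<Long, String> cache = ignite.getOrCreateCache("ReferenceStatusCache")) {
    cache.put(1L, "approved");
} // cache.close() is called here; further use throws "Cache has been closed"

IgniteCache<Long, String> cache = ignite.getOrCreateCache("ReferenceStatusCache");
cache.put(1L, "approved"); // cache stays usable until it is closed explicitly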

On Thu, Jan 4, 2018 at 8:57 PM, Denis Mekhanikov <dmekhani...@gmail.com>
wrote:

> Hi Rajarshi!
>
> Well, it must be that you are actually closing the cache somewhere.
> Check that you didn't put the cache creation into a try-with-resources block.
> When *IgniteCache.close()* is called, the cache is destroyed.
>
> Denis
>
> чт, 4 янв. 2018 г. в 18:17, Rajarshi Pain <rajarsh...@gmail.com>:
>
>> Hi,
>>
>> I am using Ignite 2.3 for my application, but I got a "Cache has been closed"
>> exception while running it.
>>
>> This is a multi-threaded application, though I am testing it on a single
>> thread. It loads the reference numbers from the database into a cache during
>> startup, and while processing a message it checks whether that particular
>> reference number is present in that cache.
>>
>> If there is no entry it adds reference_number -> approved, otherwise
>> reference_number -> rejected.
>>
>> I was testing with 2000 messages, but for some of the messages the validation
>> throws a *Cache has been closed* exception.
>>
>> When I tested the same code with 10 messages I did not get any exception.
>>
>> I could not figure out why I am getting this exception while the cache is
>> active.
>>
>> *java.lang.IllegalStateException: Cache has been closed:
>> ReferenceStatusCache*
>> at org.apache.ignite.internal.processors.cache.
>> GatewayProtectedCacheProxy.checkProxyIsValid(GatewayProtectedCacheProxy.
>> java:1639)
>> at org.apache.ignite.internal.processors.cache.
>> GatewayProtectedCacheProxy.onEnter(GatewayProtectedCacheProxy.java:1654)
>> at org.apache.ignite.internal.processors.cache.
>> GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:869)
>> at sample.example.ProcessingConsumerThread.populateMsgObj(
>> ProcessingConsumerThread.java:232)
>> at sample.example.ProcessingConsumerThread..run(
>> ProcessingConsumerThread.java:164)
>> at org.apache.ignite.internal.processors.closure.
>> GridClosureProcessor$C4.execute(GridClosureProcessor.java:1944)
>> at org.apache.ignite.internal.processors.job.GridJobWorker$
>> 2.call(GridJobWorker.java:566)
>> at org.apache.ignite.internal.util.IgniteUtils.
>> wrapThreadLoader(IgniteUtils.java:6631)
>> at org.apache.ignite.internal.processors.job.GridJobWorker.
>> execute0(GridJobWorker.java:560)
>> at org.apache.ignite.internal.processors.job.GridJobWorker.
>> body(GridJobWorker.java:489)
>> at org.apache.ignite.internal.util.worker.GridWorker.run(
>> GridWorker.java:110)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(
>> ThreadPoolExecutor.java:1149)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
>> ThreadPoolExecutor.java:624)
>> at java.lang.Thread.run(Thread.java:748)
>>
>> --
>> Regards,
>> Raj
>>
>


-- 
Regards,
Rajarshi Pain


java.lang.IllegalStateException: Cache has been closed:

2018-01-04 Thread Rajarshi Pain
Hi,

I am using Ignite 2.3 for my application, but I got a "Cache has been closed"
exception while running it.

This is a multi-threaded application, though I am testing it on a single
thread. It loads the reference numbers from the database into a cache during
startup, and while processing a message it checks whether that particular
reference number is present in that cache.

If there is no entry it adds reference_number -> approved, otherwise
reference_number -> rejected.

I was testing with 2000 messages, but for some of the messages the validation
throws a *Cache has been closed* exception.

When I tested the same code with 10 messages I did not get any exception.

I could not figure out why I am getting this exception while the cache is
active.

*java.lang.IllegalStateException: Cache has been closed:
ReferenceStatusCache*
at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.checkProxyIsValid(GatewayProtectedCacheProxy.java:1639)
at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.onEnter(GatewayProtectedCacheProxy.java:1654)
at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:869)
at
sample.example.ProcessingConsumerThread.populateMsgObj(ProcessingConsumerThread.java:232)
at
sample.example.ProcessingConsumerThread..run(ProcessingConsumerThread.java:164)
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4.execute(GridClosureProcessor.java:1944)
at
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6631)
at
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
at
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

-- 
Regards,
Raj


Re: H2 console - GridH2QueryContext is not initialized

2018-01-04 Thread Rajarshi Pain
Hi Slava,

Yes, that might be the root cause, as I have set queryParallelism to 4.
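
For reference, a minimal sketch (the cache name, value type, and the value 4
are just examples from this thread) of where that setting lives:

CacheConfiguration<Long, TradingMember> ccfg = new CacheConfiguration<>("tradingMembersCache");
// Splits every SQL query over this cache into 4 parallel index segments.
ccfg.setQueryParallelism(4);
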
On Thu 4 Jan, 2018, 20:25 slava.koptilin,  wrote:

> Hi Rajarshi,
>
> I was able to reproduce the issue.
> It seems that using CacheConfiguration#QueryParallelism greater than 1 may
> lead to this behavior.
> I will investigate this use case further (will create a JIRA ticket for
> that
> bug).
>
> Thanks!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
-- 

Thanks
Rajarshi


Re: H2 console - GridH2QueryContext is not initialized

2018-01-03 Thread Rajarshi Pain
I am using 2.3

On Thu 4 Jan, 2018, 01:43 vkulichenko, 
wrote:

> Which version are you on?
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
-- 

Thanks
Rajarshi


Re: H2 console - GridH2QueryContext is not initialized

2018-01-02 Thread Rajarshi Pain
Hi Val,

I cannot share the code as it is on the client's machine.

My application is working perfectly. I have created some caches using Ignite's
getOrCreateCache and I am loading data from the database (using Spring). I have
created POJOs to hold the data in the different caches. There is no issue with
the data loading.

I have set IGNITE_H2_DEBUG_CONSOLE to true and placed the H2 properties file
under my user profile in Windows to run the H2 console, so that I can check
whether the data is loaded into the caches properly.
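
For what it is worth, a minimal sketch (assuming the property is set before the
node starts; the config file name is just an example) of enabling the same
debug console from code instead of an environment variable:

System.setProperty(IgniteSystemProperties.IGNITE_H2_DEBUG_CONSOLE, "true");
Ignite ignite = Ignition.start("config/example-ignite.xml");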

The issue is only with the H2 web console. After the application starts, it
opens the H2 console in the browser and shows all the caches that I have
created. But when I run any SELECT query on those caches from the H2 console I
get this error.

On Wed 3 Jan, 2018, 02:22 vkulichenko, 
wrote:

> Can you please show what exactly you're doing? How do you create the table?
> How do you enable the console and what are your next steps? Is the issue
> reproduced only in H2 console (i.e. what if you execute the query in Web
> Console for example)?
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
-- 

Thanks
Rajarshi


Re: H2 console - GridH2QueryContext is not initialized

2018-01-02 Thread Rajarshi Pain
Hi Slava,

Yes, I have set it to true, and after starting the application I am querying
the cache from the H2 console, but I still get the same error when running a
SELECT query.

Is there anything else I need to set for this, or how do I initialize
GridH2QueryContext?

On Thu 28 Dec, 2017, 19:11 slava.koptilin,  wrote:

> Hello,
>
> Please make sure that you started Ignite node and `IGNITE_H2_DEBUG_CONSOLE`
> system property was set to true.
>
> https://apacheignite-net.readme.io/docs/sql-queries#using-h2-debug-console
>
> Thanks!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
-- 

Thanks
Rajarshi


H2 console - GridH2QueryContext is not initialized

2017-12-27 Thread Rajarshi Pain
Hi,

I was trying to explore the H2 console in my application, but when I tried to
query a cache I got the error below.

Can you please tell me how to configure or fix this?


SELECT * FROM "tradingMembersCache".TRADINGMEMBERS;
General error: "java.lang.IllegalStateException: GridH2QueryContext is not
initialized."; SQL statement:
SELECT * FROM "tradingMembersCache".TRADINGMEMBERS [5-195]
org.h2.jdbc.JdbcSQLException: General error:
"java.lang.IllegalStateException: GridH2QueryContext is not initialized.";
SQL statement:
SELECT * FROM "tradingMembersCache".TRADINGMEMBERS [5-195]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:168)
at org.h2.message.DbException.convert(DbException.java:295)
at org.h2.command.Command.executeQuery(Command.java:215)
at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:187)
at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:165)
at org.h2.server.web.WebApp.getResult(WebApp.java:1380)
at org.h2.server.web.WebApp.query(WebApp.java:1053)
at org.h2.server.web.WebApp$1.next(WebApp.java:1015)
at org.h2.server.web.WebApp$1.next(WebApp.java:1002)
at org.h2.server.web.WebThread.process(WebThread.java:164)
at org.h2.server.web.WebThread.run(WebThread.java:89)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: GridH2QueryContext is not
initialized.
at org.apache.ignite.internal.processors.query.h2.opt.GridH2IndexBase.threadLocalSegment(GridH2IndexBase.java:178)
at org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.find(H2TreeIndex.java:177)
at org.apache.ignite.internal.processors.query.h2.opt.GridH2ScanIndex.find(GridH2ScanIndex.java:108)
at org.apache.ignite.internal.processors.query.h2.opt.GridH2ScanIndex.find(GridH2ScanIndex.java:103)
at org.h2.index.IndexCursor.find(IndexCursor.java:169)
at org.h2.table.TableFilter.next(TableFilter.java:468)
at org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow(Select.java:1452)
at org.h2.result.LazyResult.hasNext(LazyResult.java:79)
at org.h2.result.LazyResult.next(LazyResult.java:59)
at org.h2.command.dml.Select.queryFlat(Select.java:519)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:625)
at org.h2.command.dml.Query.queryWithoutCacheLazyCheck(Query.java:114)
at org.h2.command.dml.Query.query(Query.java:352)
at org.h2.command.dml.Query.query(Query.java:333)
at org.h2.command.CommandContainer.query(CommandContainer.java:113)
at org.h2.command.Command.executeQuery(Command.java:201)
... 9 more

-- 
Regards,
Rajarshi Pain


Re: What is the path for Log4j configuration?

2017-12-14 Thread Rajarshi Pain
Hello,
Check whether you have the log4j jar in your build path; I guess the jar is missing.

Thanks,
Raj

On Thu 14 Dec, 2017, 23:22 sherryhw,  wrote:

> Hi guys,
> I am using Log4j2 for Ignite logging, but I always have this error:
> "* Failed to instantiate [org.apache.ignite.logger.log4j2.Log4J2Logger]:
> Constructor threw exception; nested exception is class
> org.apache.ignite.IgniteCheckedException: Log4j configuration path was not
> found:config/log4j2.xml*"
>
> This is my configuration.
> *data-node-config.xml*
> <property name="gridLogger">
>     <bean class="org.apache.ignite.logger.log4j2.Log4J2Logger">
>         <constructor-arg type="java.lang.String" value="config/log4j2.xml"/>
>     </bean>
> </property>
> And my log4j2.xml is located in the same folder as data-node-config.xml; both
> of them are in the config folder.
> File structure:
> --src
>   |--main
> |--java
>   |--DataNodeStartUp.java
> |--resources
>   |--config
>  |--data-node-config.xml
>  |--log4j2.xml
> However, whenever I run the DataNodeStartUp.java main method, which calls
> "Ignite ignite = Ignition.start("config/data-node-config.xml");" to start my
> data node, it always fails with the error that the Log4j configuration path
> cannot be found.
> If I use the absolute path it works, so there is nothing wrong with my other
> configuration. What should the relative path of log4j2.xml be?
>
> Thanks!
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
-- 

Thanks
Rajarshi


Re: Failed to instantiate Spring XML application context

2017-12-14 Thread Rajarshi Pain
Thanks, Alex. I will try this tomorrow.
It is given incorrectly on the Apache Ignite website, which should be corrected.

*Link: *
https://apacheignite.readme.io/v1.0/docs/off-heap-memory#section-onheap_tiered



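For reference, the corrected snippet from that page (assuming Ignite 2.x; the
maxSize value is only an example) would use the FifoEvictionPolicy class name
that Alexey points out below:

<property name="evictionPolicy">
    <bean class="org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy">
        <property name="maxSize" value="100000"/>
    </bean>
</property>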

Thanks,
Raj

On Thu, Dec 14, 2017 at 5:43 PM, Alexey Kukushkin <kukushkinale...@gmail.com
> wrote:

> You have a wrong class name CacheFifoEvictionPolicy. It should be
> FifoEvictionPolicy
>



-- 
Regards,
Rajarshi Pain


Re: Failed to instantiate Spring XML application context

2017-12-13 Thread Rajarshi Pain
Hi Alex,

Can you please tell me which jar is missing?
If I remove the section below, it runs perfectly, so I guess there is some
issue with this portion.

<property name="memoryMode" value="ONHEAP_TIERED"/>
<property name="evictionPolicy">
    <bean class="org.apache.ignite.cache.eviction.fifo.CacheFifoEvictionPolicy"/>
</property>
On Wed 13 Dec, 2017, 22:47 afedotov,  wrote:

> Hi,
>
> Looks like you don't have required Ignite libs on your classpath. Please
> check your classpath for its presence.
>
> Kind regards,
> Alex
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
-- 

Thanks
Rajarshi


Re: javax.cache.CacheException: Failed to run map query remotely.

2017-12-13 Thread Rajarshi Pain
Hi Alex,

Sorry, there was a mistake on my end; that is why it was failing. I have fixed
it and now it is working.

Thanks,
Raj

On Wed, Dec 13, 2017 at 5:18 PM, afedotov <alexander.fedot...@gmail.com>
wrote:

> Hi Raj,
>
> What you provided is only the top-level error; there should be more details
> about the problem in the logs on the remote nodes.
> Probably the query could not be parsed, or something is wrong with the config.
> Could you please share the Ignite config and the query you run?
>
> Kind regards,
> Alex
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Regards,
Rajarshi Pain


Re: javax.cache.CacheException: Failed to run map query remotely.

2017-12-12 Thread Rajarshi Pain
Hi Alex,

I was running this in Eclipse, and this is the exception printed on the
console. May I know what else you need to investigate this?

Thanks
Raj

On Tue, Dec 12, 2017 at 10:44 PM, afedotov <alexander.fedot...@gmail.com>
wrote:

> Hi,
>
> Please share the full log.
>
> Kind regards,
> Alex
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Regards,
Rajarshi Pain


javax.cache.CacheException: Failed to run map query remotely.

2017-12-12 Thread Rajarshi Pain
Hi,

I am using Ignite 2.3 and was running a cache query to get some details, but I
got the error below while executing the query.

Can you please let me know why I am getting this error?

javax.cache.CacheException: Failed to run map query remotely.
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:748)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$8.iterator(IgniteH2Indexing.java:1212)
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:95)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$9.iterator(IgniteH2Indexing.java:1276)
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:95)
  at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4.execute(GridClosureProcessor.java:1944)
at
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6631)
at
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
at
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.cache.CacheException: Failed to execute map query on the
node: 50fd8f53-bb2b-4b32-bc14-08c31a858f9c, class
org.apache.ignite.IgniteCheckedException:Failed to execute SQL query.
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.fail(GridReduceQueryExecutor.java:274)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onFail(GridReduceQueryExecutor.java:264)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onMessage(GridReduceQueryExecutor.java:243)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.sendError(GridMapQueryExecutor.java:854)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0(GridMapQueryExecutor.java:731)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest(GridMapQueryExecutor.java:516)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onMessage(GridMapQueryExecutor.java:214)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor$1.applyx(GridReduceQueryExecutor.java:153)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor$1.applyx(GridReduceQueryExecutor.java:151)
at
org.apache.ignite.internal.util.lang.IgniteInClosure2X.apply(IgniteInClosure2X.java:38)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.send(IgniteH2Indexing.java:2191)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.send(GridReduceQueryExecutor.java:1420)
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:733)
... 17 more

-- 
Regards,
Rajarshi Pain


Re: Local node and remote node have different version numbers

2017-11-30 Thread Rajarshi Pain
The code is on the client's machine, so I cannot share it; the configuration is
in this mail chain. Yes, we are not using the multicast IP finder. But as we
could not fix this, my colleague is now using the multicast IP that is given on
the Ignite website.

On Thu 30 Nov, 2017, 15:44 Evgenii Zhuravlev, <e.zhuravlev...@gmail.com>
wrote:

> Are you sure that you are not using the Multicast IP finder? Please share the
> configuration for all the nodes that find each other, and the log files.
>
> 2017-11-30 9:32 GMT+03:00 Rajarshi Pain <rajarsh...@gmail.com>:
>
>> Hi Evgenii,
>>
>> We both tried to specify the port, but we are still getting the same error if
>> we are both running it at the same time on our systems.
>>
>> On Wed 29 Nov, 2017, 21:04 Evgenii Zhuravlev, <e.zhuravlev...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> In this case, you should specify ports for addresses, for example:
>>>
>>> 10.23.153.56:47500
>>>
>>>
>>> Evgenii
>>>
>>>
>>> 2017-11-29 18:20 GMT+03:00 Rajarshi Pain <rajarsh...@gmail.com>:
>>>
>>>> Hi,
>>>>
>>>> I am working on Ignite 2.3 and my colleague is working on 1.8. We are
>>>> running different applications on our local machines using the example
>>>> Ignite XML, but they fail with the exception below.
>>>>
>>>> We are using our local system IPs under TcpDiscoverySpi, but the nodes are
>>>> still detecting each other. How can I stop this and not let them discover
>>>> each other?
>>>>
>>>> <property name="discoverySpi">
>>>>   <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>>>>     <property name="ipFinder">
>>>>       <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>>>>         <property name="addresses">
>>>>           <list>
>>>>             <value>10.23.153.56</value>
>>>>           </list>
>>>>         </property>
>>>>       </bean>
>>>>     </property>
>>>>   </bean>
>>>> </property>
>>>>
>>>> Exception:-
>>>>
>>>>  Local node and remote node have different version numbers (node will
>>>> not join, Ignite does not support rolling updates, so versions must be
>>>> exactly the same) [locBuildVer=2.3.0, rmtBuildVer=1.8.0, locNodeAddrs=[
>>>> 01HW1083820.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.152, /127.0.0.1],
>>>> rmtNodeAddrs=[01Hw1083774.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.149,
>>>> /127.0.0.1], locNodeId=c3e4f1ff-f6d7-4c26-9baf-e04f99a8eaac,
>>>> rmtNodeId=75e5e5bc-5de2-484a-9a91-a3a9f2ec48d3]
>>>>
>>>> --
>>>> Regards,
>>>> Rajarshi Pain
>>>>
>>>
>>> --
>>
>> Thanks
>> Rajarshi
>>
>
> --

Thanks
Rajarshi


Re: Local node and remote node have different version numbers

2017-11-29 Thread Rajarshi Pain
Hi Evgenii,

We both tried to specify the port, but we are still getting the same error if
we are both running it at the same time on our systems.
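
For reference, a minimal sketch (assumptions: Ignite 2.x Java API; the ports
and address are only examples) of pinning discovery to one explicit
address:port with no port range, so that two clusters on the same machines do
not find each other:

TcpDiscoverySpi spi = new TcpDiscoverySpi();
spi.setLocalPort(47500);
spi.setLocalPortRange(0); // bind to exactly 47500, do not scan upwards

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Collections.singletonList("10.23.153.56:47500"));
spi.setIpFinder(ipFinder);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(spi);
// The other cluster would use a different port (for example 47510) in both places.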

On Wed 29 Nov, 2017, 21:04 Evgenii Zhuravlev, <e.zhuravlev...@gmail.com>
wrote:

> Hi,
>
> In this case, you should specify ports for addresses, for example:
>
> 10.23.153.56:47500
>
>
> Evgenii
>
>
> 2017-11-29 18:20 GMT+03:00 Rajarshi Pain <rajarsh...@gmail.com>:
>
>> Hi,
>>
>> I am working on Ignite 2.3 and my colleague is working on 1.8. We are running
>> different applications on our local machines using the example Ignite XML,
>> but they fail with the exception below.
>>
>> We are using our local system IPs under TcpDiscoverySpi, but the nodes are
>> still detecting each other. How can I stop this and not let them discover
>> each other?
>>
>> <property name="discoverySpi">
>>   <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>>     <property name="ipFinder">
>>       <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>>         <property name="addresses">
>>           <list>
>>             <value>10.23.153.56</value>
>>           </list>
>>         </property>
>>       </bean>
>>     </property>
>>   </bean>
>> </property>
>>
>> Exception:-
>>
>>  Local node and remote node have different version numbers (node will
>> not join, Ignite does not support rolling updates, so versions must be
>> exactly the same) [locBuildVer=2.3.0, rmtBuildVer=1.8.0, locNodeAddrs=[
>> 01HW1083820.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.152, /127.0.0.1],
>> rmtNodeAddrs=[01Hw1083774.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.149,
>> /127.0.0.1], locNodeId=c3e4f1ff-f6d7-4c26-9baf-e04f99a8eaac,
>> rmtNodeId=75e5e5bc-5de2-484a-9a91-a3a9f2ec48d3]
>>
>> --
>> Regards,
>> Rajarshi Pain
>>
>
> --

Thanks
Rajarshi


Local node and remote node have different version numbers

2017-11-29 Thread Rajarshi Pain
Hi,

I am working on Ignite 2.3 and my colleague is working on 1.8. We are running
different applications on our local machines using the example Ignite XML, but
they fail with the exception below.

We are using our local system IPs under TcpDiscoverySpi, but the nodes are
still detecting each other. How can I stop this and not let them discover each
other?

<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <property name="addresses">
          <list>
            <value>10.23.153.56</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>

Exception:-

 Local node and remote node have different version numbers (node will not
join, Ignite does not support rolling updates, so versions must be exactly
the same) [locBuildVer=2.3.0, rmtBuildVer=1.8.0, locNodeAddrs=[
01HW1083820.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.152, /127.0.0.1],
rmtNodeAddrs=[01Hw1083774.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.149, /
127.0.0.1], locNodeId=c3e4f1ff-f6d7-4c26-9baf-e04f99a8eaac,
rmtNodeId=75e5e5bc-5de2-484a-9a91-a3a9f2ec48d3]

-- 
Regards,
Rajarshi Pain


IOException: Bad file descriptor - while running application on remote node

2017-11-25 Thread Rajarshi Pain
Hi,

Something strange is happening. We are still trying to run our application on a
multi-node setup (same physical server).

When we call compute.broadcast we get an exception from the Kafka
producer.send(); it only happens in the multi-node setup. If we run the
application locally it works as expected.

Could you please let me know where the issue is? Is it on the Kafka side or on
the Ignite side?

*sample code:*

compute.broadcast(object1);

** object1 holds the record, the log object, and the Kafka producer instance
---
producer.send(new ProducerRecord<String, byte[]>(topicname,id,
binarydata));

*Exception:*

java.io.IOException: Bad file descriptor
at sun.nio.ch.EPollArrayWrapper.interrupt(Native Method)
at sun.nio.ch.EPollArrayWrapper.interrupt(EPollArrayWrapper.java:317)
at sun.nio.ch.EPollSelectorImpl.wakeup(EPollSelectorImpl.java:207)
at org.apache.kafka.common.network.Selector.wakeup(Selector.java:240)
at org.apache.kafka.clients.NetworkClient.wakeup(NetworkClient.java:497)
at
org.apache.kafka.clients.producer.internals.Sender.wakeup(Sender.java:674)
at
org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata(KafkaProducer.java:823)
at
org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:711)
at
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:701)
at
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:609)
at
sample.code.example.C_RTR_KafkaProducer2.send(C_RTR_KafkaProducer2.java:66)
at
sample.code.example.KafkaFileConsumerThread.sendToCOMMS(KafkaFileConsumerThread.java:320)
at
sample.code.example.KafkaFileConsumerThread.generateBinaryClearingMsg(KafkaFileConsumerThread.java:295)
at
sample.code.example.KafkaFileConsumerThread.populateClearingMsgObj(KafkaFileConsumerThread.java:207)
at
sample.code.example.KafkaFileConsumerThread.run(KafkaFileConsumerThread.java:154)
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4V2.execute(GridClosureProcessor.java:2215)
at
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:556)
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6564)
at
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:550)
at
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:479)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1180)
at
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1894)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1082)
at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:710)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:102)
at
org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:673)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
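
For reference only, a minimal sketch of an alternative that avoids carrying the
KafkaProducer inside the broadcast object and instead creates it on the node
that runs the job. This is not necessarily the fix for the exception above, and
the topic, key, payload, and broker address are placeholders:

final String topicname = "example-topic";   // placeholder
final String id = "msg-1";                  // placeholder
final byte[] binarydata = new byte[0];      // placeholder

compute.broadcast((IgniteRunnable)() -> {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // placeholder
    props.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer",
        "org.apache.kafka.common.serialization.ByteArraySerializer");

    // The producer is created (and closed) where the job actually runs.
    try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
        producer.send(new ProducerRecord<>(topicname, id, binarydata));
    }
});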


-- 
Regards,
Rajarshi Pain


Getting "BinaryObjectException: Failed to deserialize object" while trying to execute the application using multi node

2017-11-22 Thread Rajarshi Pain
at org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:168)
at
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:843)
... 33 more
Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to
deserialize object
[typeName=org.apache.kafka.clients.consumer.KafkaConsumer]
at
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:874)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
at
org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:679)
at
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:164)
... 34 more
Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to
read field [name=metrics]
at
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:168)
at
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:843)
... 39 more
Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to
deserialize object [typeName=org.apache.kafka.common.metrics.Metrics]
at
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:874)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
at
org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:679)
at
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:164)
... 40 more
Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to
read field [name=reporters]
at
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:168)
at
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:843)
... 45 more
Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to
deserialize object [typeName=org.apache.kafka.common.metrics.JmxReporter]
at
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:874)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
at
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1799)
at
org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2162)
at
org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2093)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
at
org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:679)
at
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:164)
... 46 more

-- 
Regards,
Rajarshi Pain