Re: Not able to save data in Ignite datagrid using JavaIgniteRDD

2016-04-20 Thread vijayendra bhati
Hi Alexei,
Here it is, the file StockSimulationsCacheWriter (2).java is the one which is 
not working.
Regards,Vij 

On Thursday, April 21, 2016 1:54 AM, Alexei Scherbakov 
 wrote:
 

 Hi,

It's very strange, because JavaIgniteContext is just a wrapper around 
IgniteContext.
Could you provide the source code describing your problem ?

2016-04-20 21:14 GMT+03:00 vijayendra bhati :

It's working now. I moved from JavaIgniteContext to IgniteContext, and invoked 
igniteContext.fromCache(cacheConfiguration) rather than 
javaIgniteContext.fromCache(cacheName).
Also, while initializing the cacheConfiguration, I added an index to it.
Thanks,Vij 

On Wednesday, April 20, 2016 11:17 PM, vijayendra bhati 
 wrote:
 

 Hi,
I am trying to save data in the Ignite data grid using JavaIgniteRDD by calling the 
method savePairs(). Somehow the job finishes properly and I am not getting any 
exception, yet there is no data in the cache. I am checking through the H2 debug 
console.
I am not able to understand what the reason could be. The other question is how to 
specify indexes when using JavaIgniteContext.
Regards,
Vij



   



-- 

Best regards,
Alexei Scherbakov


  

StockSimulationsCacheWriter.java
Description: Binary data


StockSimulationsCacheWriter (2).java
Description: Binary data


Re: Error running nodes in .net and c++

2016-04-20 Thread Murthy Kakarlamudi
Oh OK. Thanks for the information. I was able to start a Java-based server
node that reads data from SQL Server for my use case. Next I am going to
start C++ in client mode and see if I can access the cache.

Thanks,
Satya.

On Wed, Apr 20, 2016 at 4:31 AM, Vladimir Ozerov 
wrote:

> Hi Murthy,
>
> Yes, there will be more examples in further versions. Though, for now it
> is impossible to plug C++ based store, and this feature is not planned for
> 1.6 release. So I do not expect C++ examples with stores in 1.6.
> Instead, I'd better to look at Java or .NET examples with store as these
> platforms support plugable store implementations.
>
> Vladimir.
>
> On Tue, Apr 19, 2016 at 7:34 PM, Murthy Kakarlamudi 
> wrote:
>
>> Thanks Vladimir for the explanation. I am working on the workaround
>> suggested by Igor. I will reach out to the group if I run into any issues.
>>
>> One quick question. I am using 1.5 version. I only see 1 c++ example. Are
>> there more c++ examples in future versions? Especially around using stores.
>>
>> Regards
>> Satya.
>>
>> On Tue, Apr 19, 2016 at 9:20 AM, Vladimir Ozerov 
>> wrote:
>>
>>> Hi Murthy,
>>>
>>> Exception you observed is essentially not a bug, but rather expected
>>> behavior with current Ignite architecture. Ignite support transactions.
>>> When you initiate a transaction from a client node, only this node has the
>>> full set of updated keys, and hence only this node is able to propagate
>>> updates to underlying database within a single database transaction. For
>>> this reason, Ignite creates and initializes store on every node, even if
>>> this node is client.
>>>
>>> As Igor suggested, the best workaround for now is to rely on Java store
>>> because every node (Java, C++, .NET) has a Java inside and hence is able to
>>> work with Java-based store. On the other hand, I clearly understand that
>>> this architecture doesn't fit well in your use case and is not very
>>> convenient from user perspective. We will think about possible ways to
>>> resolve it.
>>>
>>> One very simple solution - do not initialize store if we know for sure
>>> that the client will not use it. For example, this is so in case of ATOMIC
>>> cache or asynchronous (write-behind) store.
>>>
>>> Vladimir.
>>>
>>>
>>>
>>> On Tue, Apr 19, 2016 at 2:31 PM, Murthy Kakarlamudi 
>>> wrote:
>>>
 OK Igor. Let me try from Java.

 From a high level, we have a backend application implemented in c++ and
 the front end is asp.net mvc. Data store is SQL Server.

 Use case is, I need to load data from SQL Server into Ignite Cache upon
 start up. .Net and C++ acting as clients need to access the cache and
 update it. Those updates should be written to the underlying SQL Server in
 an asynchronous way so as not to impact the cache performance.  The updates
 that gets written from .Net client need to be accessed by C++ client. We
 have a need to use SQL Queries to access cache from either of the clients.

 I can start the cache from Java server node. However, as .net and c++
 are being used in our application, we prefer sticking to those 2 and not
 introduce Java.

 Thanks,
 Satya.

 On Tue, Apr 19, 2016 at 6:30 AM, Igor Sapego 
 wrote:

> Right now I can see the following workaround for you: you can switch
> from .Net CacheStoreFactory to Java's one. This way all types of
> clients
> will be able to instantiate your cache.
>
> If you are willing to you can describe your use-case so we can
> try and find some other solution if this workaround is not suitable
> for you.
>
> Best Regards,
> Igor
>
> On Tue, Apr 19, 2016 at 1:06 PM, Murthy Kakarlamudi 
> wrote:
>
>> Thank You.
>> On Apr 19, 2016 6:01 AM, "Igor Sapego"  wrote:
>>
>>> Hi,
>>>
>>> It looks like a bug for me. I've submitted an issue for it - [1].
>>>
>>> [1] - https://issues.apache.org/jira/browse/IGNITE-3025.
>>>
>>> Best Regards,
>>> Igor
>>>
>>> On Mon, Apr 18, 2016 at 1:35 AM, Murthy Kakarlamudi <
>>> ksa...@gmail.com> wrote:
>>>
 The client node itself starts after making the change, but getting
 the below error trying to access the cache:

 [12:16:45] Topology snapshot [ver=2, servers=1, clients=1, CPUs=4,
 heap=1.4GB]

 >>> Cache node started.

 [12:16:45,439][SEVERE][exchange-worker-#38%null%][GridDhtPartitionsExchangeFuture]
 Failed to reinitialize local partitions (preloading will be stopped):
 GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=2,
 minorTopVer=1], nodeId=2bf10735, evt=DISCOVERY_CUSTOM_EVT]
 PlatformNoCallbackException []
 at
 

jar file update issue in ignite server side

2016-04-20 Thread Zhengqingzheng
Hi there,
In order to load data from the database, I have defined several Java classes and 
packaged them into a jar file stored inside the libs folder.
When I tried to load one table, I found a property inside one class that I needed to 
remove, so I built a new jar file and replaced the old one on the server side.
But I still get an exception like: Failed to find property in POJO 
class [class=org.apache.ignite.examples.model.IgniteMetaDataBase, 
prop=MAX_VALUE_SIZE].
I guess this is because the old jar file is not replaced by the new one at runtime. 
Am I right?
If so, it is inconvenient for developers to make changes during development. Is there 
any way to flush the old jar file and make the changes take effect immediately?
Exception info shows message like this:
[09:29:17,792][SEVERE][pub-#41%null%][GridJobWorker] Failed to execute job 
[jobId=ebe2f663451-bca27e02-84a7-421b-bcd4-9f9f34e58409, ses=GridJobSessionImpl 
[ses=GridTaskSessionImpl 
[taskName=o.a.i.i.processors.cache.GridCacheAdapter$LoadCacheClosure, 
dep=LocalDeployment [super=GridDeployment [ts=1461202136714, depMode=SHARED, 
clsLdr=sun.misc.Launcher$AppClassLoader@73d16e93, 
clsLdrId=a8e2f663451-bca27e02-84a7-421b-bcd4-9f9f34e58409, userVer=0, loc=true, 
sampleClsName=java.lang.String, pendingUndeploy=false, undeployed=false, 
usage=0]], 
taskClsName=o.a.i.i.processors.cache.GridCacheAdapter$LoadCacheClosure, 
sesId=dbe2f663451-bca27e02-84a7-421b-bcd4-9f9f34e58409, 
startTime=1461202157703, endTime=9223372036854775807, 
taskNodeId=bca27e02-84a7-421b-bcd4-9f9f34e58409, 
clsLdr=sun.misc.Launcher$AppClassLoader@73d16e93, closed=false, cpSpi=null, 
failSpi=null, loadSpi=null, usage=1, fullSup=false, 
subjId=bca27e02-84a7-421b-bcd4-9f9f34e58409, mapFut=IgniteFuture 
[orig=GridFutureAdapter [resFlag=0, res=null, startTime=1461202157735, 
endTime=0, ignoreInterrupts=false, lsnr=null, state=INIT]]], 
jobId=ebe2f663451-bca27e02-84a7-421b-bcd4-9f9f34e58409]]
class org.apache.ignite.IgniteException: javax.cache.CacheException: Failed to 
find property in POJO class 
[class=org.apache.ignite.examples.model.IgniteMetaDataBase, prop=MAX_VALUE_SIZE]
   at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1792)
   at 
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
   at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6397)
   at 
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:503)
   at 
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:456)
   at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   at java.lang.Thread.run(Thread.java:745)
Caused by: javax.cache.integration.CacheLoaderException: 
javax.cache.CacheException: Failed to find property in POJO class 
[class=org.apache.ignite.examples.model.IgniteMetaDataBase, prop=MAX_VALUE_SIZE]
   at 
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadCache(GridCacheStoreManagerAdapter.java:510)
   at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.localLoadCache(GridDhtCacheAdapter.java:514)
   at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.localLoadCache(IgniteCacheProxy.java:388)
   at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheClosure.call(GridCacheAdapter.java:5769)
   at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheClosure.call(GridCacheAdapter.java:5716)
   at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1789)
   ... 8 more
Caused by: javax.cache.CacheException: Failed to find property in POJO class 
[class=org.apache.ignite.examples.model.IgniteMetaDataBase, prop=MAX_VALUE_SIZE]
   at 
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore$PojoPropertiesCache.(CacheJdbcPojoStore.java:466)
   at 
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore$PojoPropertiesCache.(CacheJdbcPojoStore.java:407)
   at 
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore.prepareBuilders(CacheJdbcPojoStore.java:323)
   at 
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.getOrCreateCacheMappings(CacheAbstractJdbcStore.java:740)
   at 
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.loadCache(CacheAbstractJdbcStore.java:786)
   at 
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadCache(GridCacheStoreManagerAdapter.java:484)
   ... 13 more
[09:29:17,828][SEVERE][main][GridTaskWorker] Failed to obtain remote job result 
policy for result from ComputeTask.result(..) method (will fail the whole 
task): GridJobResultImpl [job=C2 [], sib=GridJobSiblingImpl 

Re: Maintaining relationships between tables

2016-04-20 Thread vkulichenko
ID is usually used as a cache key, so it's similar to a primary key in a
relational database. It should not and cannot be changed without creating a
new entry in the cache.

But generally you can atomically update two entries by enlisting them into
one transaction [1].

[1] https://apacheignite.readme.io/docs/transactions
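
For illustration, a hedged sketch of [1] (the cache names, keys and the
Department/Employee classes are made up here; both caches would need TRANSACTIONAL
atomicity mode):

Ignite ignite = Ignition.ignite();
IgniteCache<Integer, Department> deptCache = ignite.cache("departments");
IgniteCache<Integer, Employee> empCache = ignite.cache("employees");

try (Transaction tx = ignite.transactions().txStart()) {
    Department dept = deptCache.get(10);
    Employee emp = empCache.get(42);

    emp.setDeptId(dept.getId()); // keep the two entries consistent

    deptCache.put(10, dept);
    empCache.put(42, emp);

    tx.commit(); // both updates become visible atomically
}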

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Maintaining-relationships-between-tables-tp4236p4399.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Maven can't find ignite-hibernate-1.5.0.final

2016-04-20 Thread Dmitriy Setrakyan
Valya, any chance to host them in the GridGain Maven Repo? This way it
would be outside of Apache, but would still be available to Ignite users.

On Wed, Apr 20, 2016 at 12:27 PM, vkulichenko  wrote:

> Hibernate and other modules that depend on LGPL libraries are not deployed
> in
> Maven due to licensing issues. You can download binary build [1], find
> required modules in 'libs' folder and add them to your project manually.
>
> [1] https://ignite.apache.org/download.cgi#binaries
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Maven-can-t-find-ignite-hibernate-1-5-0-final-tp4349p4390.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: exception during starting client

2016-04-20 Thread vkulichenko
Hi,

This should be already fixed in master. Can you build from there and try?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/exception-during-starting-client-tp4396p4397.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


exception during starting client

2016-04-20 Thread tomk
Hello,  
Look at my error:

class org.apache.ignite.IgniteException: class
org.apache.ignite.binary.BinaryObjectException: Failed to register class.
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1792)
at
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6397)
at
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:503)
at
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:456)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.cache.integration.CacheLoaderException: class
org.apache.ignite.binary.BinaryObjectException: Failed to register class.
at
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadCache(GridCacheStoreManagerAdapter.java:510)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.localLoadCache(GridDhtCacheAdapter.java:514)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.localLoadCache(IgniteCacheProxy.java:388)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheClosure.call(GridCacheAdapter.java:5769)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheClosure.call(GridCacheAdapter.java:5716)
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1789)
... 8 more
Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to
register class.
at
org.apache.ignite.internal.binary.BinaryContext.registerUserClassDescriptor(BinaryContext.java:565)
at
org.apache.ignite.internal.binary.BinaryContext.registerClassDescriptor(BinaryContext.java:541)
at
org.apache.ignite.internal.binary.BinaryContext.descriptorForClass(BinaryContext.java:443)
at
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:145)
at
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:132)
at
org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:233)
at
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:441)
at
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toBinary(CacheObjectBinaryProcessorImpl.java:785)
at
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toCacheKeyObject(CacheObjectBinaryProcessorImpl.java:714)
at
org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheKeyObject(GridCacheContext.java:1808)
at
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter$3.apply(GridCacheStoreManagerAdapter.java:498)
at
mytest.ignite.CacheWriteThrough.loadCache(CacheWriteThrough.java:119)
at
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadCache(GridCacheStoreManagerAdapter.java:484)
... 13 more
Caused by: class org.apache.ignite.IgniteCheckedException: Type ID collision
detected [id=504247427, clsName1=mytest.ignite.KeyCache, clsName2=KeyCache]
at
org.apache.ignite.internal.MarshallerContextImpl.registerClassName(MarshallerContextImpl.java:116)
at
org.apache.ignite.internal.MarshallerContextAdapter.registerClass(MarshallerContextAdapter.java:157)
at
org.apache.ignite.internal.binary.BinaryContext.registerUserClassDescriptor(BinaryContext.java:562)



This is running within Spark Streaming (I am going to save data into the cache),
launched by spark-submit.
When I execute the Ignite-related code independently (as a simple test), it works.
With Spark Streaming (via spark-submit) it returns this error.

Could you help me, please?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/exception-during-starting-client-tp4396.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Maintaining relationships between tables

2016-04-20 Thread akritibahal91
I meant, for example, that we had to change the deptid for a particular department,
so it is changed in the department table. Now this deptid should be updated in the
employee table as well, right, for all those employees who had the previous deptid?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Maintaining-relationships-between-tables-tp4236p4395.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: I have problems with balancing cache partitioning among nodes

2016-04-20 Thread AJoshua
My goal is to compare performance between GigaSpaces and GridGain. In GigaSpaces I
have implemented partitioning in the same way as in GridGain: a GigaSpaces Grid
Service Container (GSC) corresponds to a node in GridGain, and each GSC is assigned
a number of partitions. The test is supposed to cover between 200 thousand and 500
thousand entries; perhaps that is still too little data to test on? Should I
increase the number of company keys?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/I-have-problems-with-balancing-cache-partitioning-among-nodes-tp4387p4393.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: BinaryObject performance issue

2016-04-20 Thread vkulichenko
Agree with Andrey. This use case doesn't look like a good fit for
BinaryObject; is there any particular reason for using it? Is a plain Map not
working for you?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/BinaryObject-performance-issue-tp4375p4392.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: can we make generic class for cachehibernatepojostore?

2016-04-20 Thread vkulichenko
Can you show the whole exception trace?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/can-we-make-generic-class-for-cachehibernatepojostore-tp4355p4391.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


I have problems with balancing cache partitioning among nodes

2016-04-20 Thread AJoshua
So here goes the scenario; we can assume the examples from the GridGain tutorial.
We have a person cache and I want to have only 8 partitions, with all persons
distributed among these partitions. We can imagine that I have 8 companies and each
person belongs to one of them, but I have no company cache, only a key 'compKey' of
type long. I have assigned @AffinityKeyMapped to compKey.

The problem is that with two nodes the data is partitioned in an unbalanced way, i.e.
node 1 keeps all people with compKey {2,4} while node 2 keeps compKey {1,3,5,6,7,8};
it ends up in this form in most cases.
So if we write 1,000 people, exactly the same number for each compKey (125 of each),
the people end up split between node 1 and node 2 as roughly 250 vs 750.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/I-have-problems-with-balancing-cache-partitioning-among-nodes-tp4387.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Maintaining relationships between tables

2016-04-20 Thread vkulichenko
Hi,

If you're going to use SQL queries, you can organize your data model just like you
would in a relational database. In your case you can have a departmentId field in the
Employee class and use this field to join the tables. If you update a Department,
there is nothing to update in the Employee entries, because they are still linked to
each other.

You can refer to the query example [1] for a better understanding.

[1]
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheQueryExample.java
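
For example, a hedged sketch of such a join (the cache names, fields and classes are
assumptions; the queried fields would need @QuerySqlField annotations as shown in [1]):

IgniteCache<Long, Employee> empCache = Ignition.ignite().cache("employees");

SqlFieldsQuery qry = new SqlFieldsQuery(
    "select e.name, d.name " +
    "from Employee e, \"departments\".Department d " +
    "where e.departmentId = d.id");

for (List<?> row : empCache.query(qry).getAll())
    System.out.println(row);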

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Maintaining-relationships-between-tables-tp4236p4386.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: can we make generic class for cachehibernatepojostore?

2016-04-20 Thread Ravi kumar Puri
I didn't get you... you mean I have to add the CacheHibernateStore class to the
client classpath?

And how can I load it on all the nodes?
As I use only one server and one node.
On 21-Apr-2016 00:24, "vkulichenko"  wrote:

> Hi Ravi,
>
> Currently you need to have the cache store implementation class on all
> nodes. So you need to add CacheHibernatePersonStore on client's classpath
> to
> make it work.
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/can-we-make-generic-class-for-cachehibernatepojostore-tp4355p4383.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


RE: Data lost when using write-behind

2016-04-20 Thread vkulichenko
Hi,

1.6 will be released soon, as far as I know, but I'm not sure this fix will
be included there, unless someone in the community picks it up.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Data-lost-when-using-write-behind-tp4265p4382.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Not able to save data in Ignite datagrid using JavaIgniteRDD

2016-04-20 Thread vijayendra bhati
It's working now. I moved from JavaIgniteContext to IgniteContext, and invoked 
igniteContext.fromCache(cacheConfiguration) rather than 
javaIgniteContext.fromCache(cacheName).
Also, while initializing the cacheConfiguration, I added an index to it.
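
For reference, a minimal sketch of that pattern (the String/Integer types, the cache
name and the existing sparkCtx / pairRdd variables are assumptions for illustration,
not taken from this thread):

import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.spark.JavaIgniteContext;
import org.apache.ignite.spark.JavaIgniteRDD;

CacheConfiguration<String, Integer> cacheCfg = new CacheConfiguration<>("sharesCache");
cacheCfg.setIndexedTypes(String.class, Integer.class); // index metadata declared up front

JavaIgniteContext<String, Integer> igniteCtx =
    new JavaIgniteContext<>(sparkCtx, "config/example-cache.xml");

// Building the RDD from the full CacheConfiguration (not just the cache name) lets
// the cache be created with the indexed types before savePairs() writes into it.
JavaIgniteRDD<String, Integer> igniteRdd = igniteCtx.fromCache(cacheCfg);
igniteRdd.savePairs(pairRdd); // pairRdd is a JavaPairRDD<String, Integer>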
Thanks,Vij 

On Wednesday, April 20, 2016 11:17 PM, vijayendra bhati 
 wrote:
 

 Hi,
I am trying to save data in the Ignite data grid using JavaIgniteRDD by calling the 
method savePairs(). Somehow the job finishes properly and I am not getting any 
exception, yet there is no data in the cache. I am checking through the H2 debug 
console.
I am not able to understand what the reason could be. The other question is how to 
specify indexes when using JavaIgniteContext.
Regards,
Vij



  

Not able to save data in Ignite datagrid using JavaIgniteRDD

2016-04-20 Thread vijayendra bhati
Hi,
I am trying to save data in the Ignite data grid using JavaIgniteRDD by calling the 
method savePairs(). Somehow the job finishes properly and I am not getting any 
exception, yet there is no data in the cache. I am checking through the H2 debug 
console.
I am not able to understand what the reason could be. The other question is how to 
specify indexes when using JavaIgniteContext.
Regards,
Vij



Re: C++ Distributed cache for caching files

2016-04-20 Thread rajs123
Code:

#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
#include <chrono>
#include <cerrno>
#include <cstdlib>
#include "ignite/ignite.h"
#include "ignite/ignition.h"


using namespace ignite;
using namespace cache;
using namespace std;

std::string get_file_contents(const char *filename)
{
    std::ifstream in(filename, std::ios::in | std::ios::binary);
    if (in)
    {
        return std::string((std::istreambuf_iterator<char>(in)),
            std::istreambuf_iterator<char>());
    }
    throw(errno);
}

void PutFile(Cache<std::string, std::string>& cache, const char* file) {
    std::string contents = get_file_contents(file);

    auto start = std::chrono::high_resolution_clock::now();
    cache.Put(file, contents);
    auto elapsed = std::chrono::high_resolution_clock::now() - start;
    long long putMicros =
        std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();

    start = std::chrono::high_resolution_clock::now();
    std::string val = cache.Get(file);
    elapsed = std::chrono::high_resolution_clock::now() - start;
    long long getMicros =
        std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();

    cout << "Put time: " << putMicros << std::endl;
    cout << "Get time: " << getMicros
        << " >>> Retrieved organization instance from cache: " << std::endl;

    /*auto start = std::chrono::high_resolution_clock::now();
      val = cache.Get(key);
      auto elapsed = std::chrono::high_resolution_clock::now() - start;
      long long microseconds =
          std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
      cout << "Get time: " << microseconds
          << " >>> Retrieved organization instance from cache: " << std::endl;*/
    //std::cout << val << std::endl;
    std::cout << std::endl;
}

int main(int argc, const char* argv[]) {

    IgniteConfiguration cfg;

    cfg.jvmInitMem = 512;
    cfg.jvmMaxMem = 1024 * 2;

    cfg.springCfgPath = "config/example-cache.xml";
    std::cout << std::endl;
    std::cout << ">>> Example started ..." << std::endl;
    try {
        Ignite grid = Ignition::Start(cfg);
        Cache<std::string, std::string> cache =
            grid.GetOrCreateCache<std::string, std::string>("example");
        //  Cache<std::string, std::string> cache =
        //      grid.GetCache<std::string, std::string>("example");

        PutFile(cache, argv[1]);

    } catch (IgniteError& err) {
        std::cout << "An error occurred: "
            << err.GetText() << std::endl;
    }

    std::cout << std::endl;
    std::cout << ">>> Example finished, press any key to exit ..."
        << std::endl;
    std::cout << std::endl;
    return 0;
}




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/C-Distributed-cache-for-caching-files-tp4158p4376.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


BinaryObject performance issue

2016-04-20 Thread dmreshet
Hello!
I need to put a map-like data structure with a dynamic list of fields into the cache.
I have found out that there is BinaryObject, which solves the problem of a dynamic
field list and improves cache query performance.
But I faced a performance issue.

I have a 3-node cluster with 5GB of RAM. I want to add 5,000 entries into the cache.
In case I put the plain map values it takes about *6.8 seconds*.
In case I put BinaryObject values it takes *382 seconds*.

I use an atomic partitioned cache. Here is a code example with BinaryObject:

Map<Person, List<Integer>> persons = ... // original data structure
IgniteCache<Integer, BinaryObject> personCache =
    Ignition.ignite().cache(PERSON_CACHE);

IgniteBinary binary = Ignition.ignite().binary();

persons.forEach((person, integers) -> {
    BinaryObjectBuilder valBuilder =
        binary.builder("categories");
    integers.stream().forEach((integer -> {
        valBuilder.setField(String.valueOf(integer), integer);
    }));
    personCache.put(person.getId(), valBuilder.build());
});


Is that expected behaviour? 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/BinaryObject-performance-issue-tp4375.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Maintaining relationships between tables

2016-04-20 Thread akritibahal91
Yes, could you explain how do I maintain relationships a bit more? I'm not
clear on this part.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Maintaining-relationships-between-tables-tp4236p4374.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Problem with connecting to node from java client

2016-04-20 Thread vijayendra bhati
Thanks Vladimir, it's working. I was putting Ignition.ignite() in a try-with-resources
block, and hence the instance was getting closed (Ignite implements the Closeable
interface) while I was calling my code in a loop with different arguments:
try(Ignite ignite = Ignition.ignite("myGrid"))
{}
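
A small sketch of the corrected pattern (the cache name, value types and the loop are
only illustrative):

// Obtain the already running instance without try-with-resources,
// so the node is not closed on every iteration.
Ignite ignite = Ignition.ignite("myGrid");

for (String arg : args) {
    IgniteCache<String, Double> cache = ignite.cache("myCache"); // assumed cache name
    // ... work with the cache for this argument ...
}

// Stop the node only once, after all the work is done (if this process owns it).
ignite.close();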
Regards,Vij 

On Wednesday, April 20, 2016 5:33 PM, Vladimir Ozerov 
 wrote:
 

 Hi Vij,
You should pass the name you specified in "gridName" property. This is "myGrid" 
in your case. 
If you cannot locate your node with this name, you could try calling 
Ignition.allGrids() method. It will return all currently started Ignite 
instances, so that you can try looking for your node there.
But please note, that these methods return only nodes started in the same 
process. That is, if you start a node in one process (e.g. from console) and 
try to access it in another process using Ignition.ignite(), this will not work 
of course.
Vladimir.
On Wed, Apr 20, 2016 at 2:24 PM, vijayendra bhati  
wrote:

I looked at Ignite Java doc, it says ignite() takes gridName.So even when I am 
passing  "myGrid" it is still not working and says Ignite instance with 
provided name doesn't exist. Did you call Ignition.start(..) to start an Ignite 
instance? [name=myGrid]

Regards,Vij 

On Wednesday, April 20, 2016 4:22 PM, vijayendra bhati 
 wrote:
 

 Hi,
I am trying to connect to Ignite Data Grid.Till now I was not using the 
property  .Once I have started to 
use it I am not able to connect to Data Grid.
Although I have invoked  Ignition.start("config/cache-client.xml"); still when 
I do Ignition.ignite("config/cache-client.xml") say "Ignite instance with 
provided name doesn't exist. Did you call Ignition.start(..) to start an Ignite 
instance? [name=config/risk-analytics-cache-client.xml]"
I am not able to understand what should be input to ignite() method.I have 
tried to pass "myGrid" as well and even cache name also.But nothing is 
working.Is there is any documentation which I can look at.
Is there is any sample / example to see how Ignite integration will happen with 
Spark, mainly I am interested in connecting to Data Grid on Spark nodes.
I am also unable to understand how to use Ignite in case of multi threading 
scenario.The reason being if I close cache in one thread it will impact  other 
thread.So does that mean I should close after all the threads have executed or 
I can ignite same cache in multiple threads and close them individually (which 
will not work)
Regards,Vijayendra Bhati


   



  

Re: Map-reduce proceesing

2016-04-20 Thread Vladimir Ozerov
Hi,

If you broadcast the job and want to iterate over the cache inside it, please
make sure that you iterate only over local entries (e.g.
IgniteCache.localEntries(), ScanQuery.setLocal(true), etc.). Otherwise your
jobs will duplicate work and performance will suffer.

Also please note that the returned result set might be incomplete if one of the
nodes fails during job processing. If you care about that, you should either
implement some failover or use Ignite's built-in queries (ScanQuery,
SqlQuery), which already take care of it.

Anyway, I strongly recommend focusing on SqlQuery first. You can
configure indexes on the cache and they can give you a great boost, because
instead of iterating over the whole cache, Ignite will use the indexes for fast
data lookups.
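
For instance, a hedged sketch of a node-local scan inside such a broadcast job (the
Person type, cache name and salary filter are placeholders, not from this thread):

IgniteCache<Integer, Person> cache = Ignition.ignite().cache("personCache");

ScanQuery<Integer, Person> qry = new ScanQuery<>((key, p) -> p.getSalary() > salary);
qry.setLocal(true); // only entries owned by this node, so broadcast jobs do not overlap

try (QueryCursor<Cache.Entry<Integer, Person>> cur = cache.query(qry)) {
    for (Cache.Entry<Integer, Person> e : cur)
        result.add(e.getValue()); // 'result' is the per-job list being built
}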

Vladimir.

On Wed, Apr 20, 2016 at 12:31 PM, dmreshet  wrote:

> Yes, I know.
> I want to compare performance of SQL,  SQL with indexes and MapReduce job.
> I have found that I can use broadcast to garantie that my MapReduce job
> will
> be executed on each node exactly once.
> So now my job uses code:
> /Collection result =
>
> ignite.compute(ignite.cluster()).broadcast((IgniteCallable>)
> () -> {...});/
>
> And than I will reduce the result.
>
> Is that the best practise to implement MapReduce job in case that I should
> process data from cache?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Map-reduce-proceesing-tp4357p4364.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Problem with connecting to node from java client

2016-04-20 Thread Vladimir Ozerov
Hi Vij,

You should pass the name you specified in "gridName" property. This is
"myGrid" in your case.

If you cannot locate your node with this name, you could try calling
*Ignition.allGrids()* method. It will return all currently started Ignite
instances, so that you can try looking for your node there.

But please note, that these methods return only nodes started in the same
process. That is, if you start a node in one process (e.g. from console)
and try to access it in another process using *Ignition.ignite()*, this
will not work of course.
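
For example (a minimal sketch; only the "myGrid" name comes from this thread):

Ignite byName = Ignition.ignite("myGrid");   // look up the instance by its gridName

for (Ignite ignite : Ignition.allGrids())    // list every instance started in this JVM
    System.out.println(ignite.name());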

Vladimir.

On Wed, Apr 20, 2016 at 2:24 PM, vijayendra bhati 
wrote:

> I looked at Ignite Java doc, it says ignite() takes gridName.So even when
> I am passing  "myGrid" it is still not working and says
> Ignite instance with provided name doesn't exist. Did you call
> Ignition.start(..) to start an Ignite instance? [name=myGrid]
>
> Regards,
> Vij
>
>
> On Wednesday, April 20, 2016 4:22 PM, vijayendra bhati <
> veejayend...@yahoo.com> wrote:
>
>
> Hi,
>
> I am trying to connect to Ignite Data Grid.Till now I was not using the
> property  .
> Once I have started to use it I am not able to connect to Data Grid.
>
> Although I have invoked  Ignition.start("config/cache-client.xml");
> still when I do Ignition.ignite("config/cache-client.xml") say *"Ignite
> instance with provided name doesn't exist. Did you call Ignition.start(..)
> to start an Ignite instance? [name=config/risk-analytics-cache-client.xml]"*
>
> I am not able to understand what should be input to ignite() method.I have
> tried to pass "myGrid" as well and even cache name also.But nothing is
> working.
> Is there is any documentation which I can look at.
>
> Is there is any sample / example to see how Ignite integration will happen
> with Spark, mainly I am interested in connecting to Data Grid on Spark
> nodes.
>
> I am also unable to understand how to use Ignite in case of multi
> threading scenario.The reason being if I close cache in one thread it will
> impact  other thread.So does that mean I should close after all the threads
> have executed or I can ignite same cache in multiple threads and close them
> individually (which will not work)
>
> Regards,
> Vijayendra Bhati
>
>
>
>


Re: Problem with connecting to node from java client

2016-04-20 Thread vijayendra bhati
I looked at the Ignite Javadoc; it says ignite() takes a gridName. But even when I am
passing "myGrid" it is still not working and says: Ignite instance with
provided name doesn't exist. Did you call Ignition.start(..) to start an Ignite
instance? [name=myGrid]

Regards,Vij 

On Wednesday, April 20, 2016 4:22 PM, vijayendra bhati 
 wrote:
 

 Hi,
I am trying to connect to the Ignite Data Grid. Until now I was not using the
gridName property; once I started using it I am not able to connect to the Data Grid.
Although I have invoked Ignition.start("config/cache-client.xml"), when I then call
Ignition.ignite("config/cache-client.xml") it says "Ignite instance with
provided name doesn't exist. Did you call Ignition.start(..) to start an Ignite
instance? [name=config/risk-analytics-cache-client.xml]"
I am not able to understand what the input to the ignite() method should be. I have
tried to pass "myGrid" as well, and even the cache name, but nothing is working. Is
there any documentation I can look at?
Is there any sample/example showing how Ignite integration works with Spark? Mainly I
am interested in connecting to the Data Grid on Spark nodes.
I am also unable to understand how to use Ignite in a multi-threading scenario: if I
close the cache in one thread it will impact other threads. Does that mean I should
close it only after all the threads have executed, or can I obtain the same cache in
multiple threads and close them individually (which will not work)?
Regards,
Vijayendra Bhati


  

Re: cache cannot load all the data into cache

2016-04-20 Thread Alexey Kuznetsov
Hi, Kevin.

Could you please make small example + db scripts to reproduce and debug
this issue?


On Wed, Apr 13, 2016 at 10:34 AM, Zhengqingzheng 
wrote:

> Dear all,
> I am trying to load two tables data into caches to speed up my queries.
> table1 contains 564 records, with one primary key as index.
> definition of table content from java  as follows:
> @QuerySqlField
> private String orgId;
>
> @QuerySqlField(index=true)
> private String objId;
>
> @QuerySqlField
> private int numRows;
>
> table2 contains 9626 records, with no primary key defined but a group
> index is defined.
> definition of table2 from java as follows:
> @QuerySqlField
> private String orgId;
>
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(
> name="objId_fieldName_idx", order=0, descending = true)})
> private String objId;
>
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(
> name="objId_fieldName_idx", order=1, descending = true)})
> private String fieldName;
>
> @QuerySqlField
> private int fieldNum;
>
> @QuerySqlField
> private int statVal;
>
> I defined two caches to load all the data from two tables:
> the first cache load data from table1, and works fine.
> but the second cache which load data from table2 cannot load all the data,
> only few of them.
> I think this is due to the configuration of cache2 was probability wrong[
> because cache content shows that objid was the unique key to retrieve the
> data record]:
>
> final String CACHE_NAME1 =
> IgniteMetaDatabaseFieldStat.class.getSimpleName() + "_Cache";
> CacheConfiguration<String, IgniteMetaDatabaseFieldStat> cfg =
> new CacheConfiguration<>(CACHE_NAME1);
>
> CacheJdbcPojoStoreExampleFactory<String, IgniteMetaDatabaseFieldStat> storeFactory =
> new CacheJdbcPojoStoreExampleFactory<>();
>
> storeFactory.setDialect(new OracleDialect());
>
> JdbcType jdbcType = new JdbcType();
>
> jdbcType.setCacheName(CACHE_NAME1);
> jdbcType.setDatabaseSchema("besdb");
> jdbcType.setDatabaseTable("data_base_field_stat");
>
> 
> jdbcType.setKeyType("java.lang.String");
> jdbcType.setKeyFields(new JdbcTypeField(Types.VARCHAR, "OBJID",
> String.class, "objId")
>/* ,new JdbcTypeField(Types.VARCHAR, "FIELDNAME", String.class,
> "fieldName")*/);
> 
>
> jdbcType.setValueType("org.apache.ignite.examples.model.IgniteMetaDatabaseFieldStat");
> jdbcType.setValueFields(
> new JdbcTypeField(Types.VARCHAR,"ORGID", String.class,
> "orgId"),
> new JdbcTypeField(Types.VARCHAR,"OBJID", String.class,
> "objId"),
> new JdbcTypeField(Types.VARCHAR,"FIELDNAME", String.class,
> "fieldName"),
> new JdbcTypeField(Types.INTEGER,"FIELDNUM", Integer.class,
> "fieldNum"),
> new JdbcTypeField(Types.INTEGER,"STAT_VAL", Integer.class,
> "statVal")
> );
>
>
> storeFactory.setTypes(jdbcType);
>
> cfg.setCacheStoreFactory(storeFactory);
>
> // Set atomicity as transaction, since we are showing transactions
> in the example.
> cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> cfg.setIndexedTypes(String.class,
> IgniteMetaDatabaseFieldStat.class);
>
> cfg.setReadThrough(true);
> cfg.setWriteThrough(true);
>
> cfg.setCacheMode(CacheMode.PARTITIONED);
> //cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> //cfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
> //cfg.setOffHeapMaxMemory(64 * 1024L * 1024L);
>
> //cfg.setStartSize(100 * 1024 * 1024);
> cfg.setBackups(0);
>
> please note the jdbcType.setKeyTypes and setKeyFields part. I want to use
> the groupIndex as the cache key setting, which was defined in the
> annotation part of
>  @QuerySqlField(orderedGroups={@QuerySqlField.Group(
> name="objId_fieldName_idx", order=0, descending = true)})
> private String objId;
>
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(
> name="objId_fieldName_idx", order=1, descending = true)})
> private String fieldName;
>
> but I don't know how to do that, in my example I just use objid as the
> key. In this case, if there are duplicate values come into the cache, the
> rest values was ignored.
>
> How to setup the correct key for jdbcTypes in my cache configuration?
>
>
> Best regards,
> Kevin
>
>


-- 
Alexey Kuznetsov
GridGain Systems
www.gridgain.com


Re: Map-reduce proceesing

2016-04-20 Thread dmreshet
Yes, I know.
I want to compare the performance of SQL, SQL with indexes, and a MapReduce job.
I have found that I can use broadcast to guarantee that my MapReduce job will
be executed on each node exactly once.
So now my job uses this code:

Collection<List<Person>> result =
    ignite.compute(ignite.cluster()).broadcast((IgniteCallable<List<Person>>)
        () -> {...});

And then I will reduce the result.

Is that the best practice for implementing a MapReduce job when I need to
process data from the cache?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Map-reduce-proceesing-tp4357p4364.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Map-reduce proceesing

2016-04-20 Thread Vladimir Ozerov
Hi,

There is no need to implement SQL queries using map-reduce. Ignite already
has its own query engine. Please refer to the
*org.apache.ignite.cache.query.SqlQuery* class and the *IgniteCache.query()* method.

Alternatively you can use scan queries for some cases. See
*org.apache.ignite.cache.query.ScanQuery*.
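
A short hedged sketch of both options (the Person class, cache name and salary value
are assumed from the question, not defined here):

IgniteCache<Integer, Person> cache = Ignition.ignite().cache("personCache");

// SQL query: requires Person.salary to be annotated with @QuerySqlField.
SqlQuery<Integer, Person> sql = new SqlQuery<>(Person.class, "salary > ?");
List<Cache.Entry<Integer, Person>> bySql = cache.query(sql.setArgs(50_000)).getAll();

// Scan query: no SQL schema needed; the predicate is evaluated on every owning node.
ScanQuery<Integer, Person> scan = new ScanQuery<>((k, p) -> p.getSalary() > 50_000);
List<Cache.Entry<Integer, Person>> byScan = cache.query(scan).getAll();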

Vladimir.

On Wed, Apr 20, 2016 at 10:41 AM, dmreshet  wrote:

> Hello!
> I want to implement SQL query in terms of MapReduce with
> ComputeTaskSplitAdapter.
>
> /select * from Person where salary > ?/
>
> And I want to know what is the best practise to do this?
>
> At this moment I am using cache.localEntries() to get all cache values at
> Map stage and it look's like it is not coorect, because there is no
> garanties that each task will be executed on different nodes of Ignite Data
> Grid.
>
> Here is an example of split method of  my ComputeTaskSplitAdapter  class
>
>
> /@Override
> protected Collection split(int gridSize, Integer
> salary) throws IgniteException {
> List jobs = new ArrayList<>(gridSize);
>
> for (int i = 0; i < gridSize; i++) {
> jobs.add(new ComputeJobAdapter() {
> @Override
> public Object execute() {
> IgniteCache cache =
> Ignition.ignite().cache(Executor.PERSON_CACHE);
> List list = new ArrayList<>();
> Iterable> entries =
> cache.localEntries();
> entries.forEach((entry -> {
> if (entry.getValue().getSalary() > salary) {
> list.add(entry.getValue());
> }
> }));
>
> return list;
> }
> });
> }
>
> return jobs;
> }
> /
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Map-reduce-proceesing-tp4357.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: ignite logging not captured in log file (log4j)

2016-04-20 Thread Vladimir Ozerov
Hi Binti,

This appears to be a problem with your log4j configuration. Please
double-check it on your side. I did the following:
1) Create a file with the same content as you provided.
2) Set it through system
property:  -Dlog4j.configuration=file:/c:/logger/logger.cfg
3) Run Ignite node.

Result: thousands of debug/info statements from Ignite.

Vladimir.

On Tue, Apr 19, 2016 at 10:28 PM, bintisepaha 
wrote:

> Hi,
>
> I am using log4j for clients connecting to the grid. sample log4j file
> below
> (file name = RGP-log4j.properties). I set this file name in VM arguments
> for
> both these args
>
> -Dlog4j.configuration=file:RGP-log4j.properties
> -Djava.util.logging.config.file=file:RGP-log4j.properties
>
> log4j.rootCategory=DEBUG, file
> log4j.appender.file=org.apache.log4j.RollingFileAppender
> log4j.appender.file.File=C:/tmp/logs/RGP.log
> log4j.appender.file.threshold=DEBUG
> log4j.appender.file.layout=org.apache.log4j.EnhancedPatternLayout
> log4j.appender.file.layout.ConversionPattern=[%d{dd MMM  HH:mm:ss.SSS
> z}] [%t] %-5p (%F:%L) %m%n
> log4j.appender.file.MaxFileSize=250MB
> log4j.appender.file.MaxBackupIndex=200
> log4j.logger.org.springframework=INFO
> log4j.logger.org.springframework.jms=INFO
> log4j.logger.org.apache.activemq=INFO
> log4j.logger.org.apache.commons=INFO
> log4j.logger.com.tudor.datagrid=INFO
> log4j.logger.org.apache.ignite=DEBUG
>
> However, on console I see the logging like below, but I do not see the
> below
> being logged to the file.
> This error happen when the grid is down and a client tries to connect to
> it.
> since it does not get logged on the client logs, only the exception stack
> trace, its hard to say which all grid nodes it tried to connect to. Is
> there
> anyway to enable the below logging to file. I am also worried that maybe
> some ignite level logging does not make it to the file, and we may be
> unaware of some exceptions/errors.
>
> [13:44:27]   /  _/ ___/ |/ /  _/_  __/ __/
> [13:44:27]  _/ // (7 7// /  / / / _/
> [13:44:27] /___/\___/_/|_/___/ /_/ /___/
> [13:44:27]
> [13:44:27] ver. 1.5.0-final#20151229-sha1:f1f8cda2
> [13:44:27] 2015 Copyright(C) Apache Software Foundation
> [13:44:27]
> [13:44:27] Ignite documentation: http://ignite.apache.org
> [13:44:27]
> [13:44:27] Quiet mode.
> [13:44:27]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
> or "-v" to ignite.{sh|bat}
> [13:44:27]
> [13:44:27] OS: Windows 7 6.1 amd64
> [13:44:27] VM information: Java(TM) SE Runtime Environment 1.7.0_45-b18
> Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 24.45-b08
> [13:44:27] Initial heap size is 95MB (should be no less than 512MB, use
> -Xms512m -Xmx512m).
> [13:44:29] Configured plugins:
> [13:44:29]   ^-- None
> [13:44:29]
> [13:44:30] Security status [authentication=off, tls/ssl=off]
> [13:45:54] Failed to connect to any address from IP finder (will retry to
> join topology every 2 secs): [grid-tp1-dev/10.22.50.95:47500,
> grid-tp1-dev/10.22.50.95:47501, grid-tp1-dev/10.22.50.95:47502,
> grid-tp1-dev/10.22.50.95:47503, grid-tp1-dev/10.22.50.95:47504,
> grid-tp1-dev/10.22.50.95:47505, grid-tp1-dev/10.22.50.95:47506,
> grid-tp1-dev/10.22.50.95:47507, grid-tp1-dev/10.22.50.95:47508,
> grid-tp1-dev/10.22.50.95:47509, grid-tp2-dev/10.22.50.249:47500,
> grid-tp2-dev/10.22.50.249:47501, grid-tp2-dev/10.22.50.249:47502,
> grid-tp2-dev/10.22.50.249:47503, grid-tp2-dev/10.22.50.249:47504,
> grid-tp2-dev/10.22.50.249:47505, grid-tp2-dev/10.22.50.249:47506,
> grid-tp2-dev/10.22.50.249:47507, grid-tp2-dev/10.22.50.249:47508,
> grid-tp2-dev/10.22.50.249:47509, grid-tp3-dev/10.22.50.250:47500,
> grid-tp3-dev/10.22.50.250:47501, grid-tp3-dev/10.22.50.250:47502,
> grid-tp3-dev/10.22.50.250:47503, grid-tp3-dev/10.22.50.250:47504,
> grid-tp3-dev/10.22.50.250:47505, grid-tp3-dev/10.22.50.250:47506,
> grid-tp3-dev/10.22.50.250:47507, grid-tp3-dev/10.22.50.250:47508,
> grid-tp3-dev/10.22.50.250:47509, grid-tp4-dev/10.22.50.251:47500,
> grid-tp4-dev/10.22.50.251:47501, grid-tp4-dev/10.22.50.251:47502,
> grid-tp4-dev/10.22.50.251:47503, grid-tp4-dev/10.22.50.251:47504,
> grid-tp4-dev/10.22.50.251:47505, grid-tp4-dev/10.22.50.251:47506,
> grid-tp4-dev/10.22.50.251:47507, grid-tp4-dev/10.22.50.251:47508,
> grid-tp4-dev/10.22.50.251:47509]
>
> However, thi sis what is logged in the log file
> [19 Apr 2016 13:45:54.345 EDT] [main] ERROR (ReconcilePositions.java:149)
> class org.apache.ignite.IgniteException: Failed to start manager:
> GridManagerAdapter [enabled=true,
> name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
> class org.apache.ignite.IgniteException: Failed to start manager:
> GridManagerAdapter [enabled=true,
> name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
> at
>
> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:882)
> at org.apache.ignite.Ignition.start(Ignition.java:350)
> at
>
> 

Re: Apache Ignite - YARN integration

2016-04-20 Thread Vladimir Ozerov
Vij,

Please try again. Problem should be resolved at the moment.

Vladimir.

On Tue, Apr 19, 2016 at 11:11 AM, Vladimir Ozerov 
wrote:

> Hi Vij,
>
> I am looking at the problem with "404" response at the moment. Will come
> back to you as soon as I have any additional information.
> As per Spark design, I hope other community members who a more familiar
> with it will answer you soon.
>
> Vladimir.
>
> On Tue, Apr 19, 2016 at 10:49 AM, vijayendra bhati  > wrote:
>
>> Still I have not been able to figure out how to resolve this issue but I
>> think I can change my approach.
>> I can start Ignite node on the same node of AWS cluster on which YARN
>> (Node manager) and HDFS (Data node) would be configured but I want to
>> understand is it necessary to run Ignite Node as YARN job to achieve data
>> locality while accessing Ignite data using Spark job ?
>>
>> Regards,
>> Vij
>>
>>
>> On Sunday, April 17, 2016 6:11 PM, vijayendra bhati <
>> veejayend...@yahoo.com> wrote:
>>
>>
>> Hi,
>> I am trying to run Ignite nodes over YARN cluster by following the
>> documentation given on YARN Deployment · Apache Ignite
>> 
>>
>>
>>
>>
>> I am using Cloudera supplied VM for initial installation purpose and have
>> downloaded Apache Ignite version 1.5.0.
>>
>> But I am getting error when I am trying to run below command -
>>
>> [cloudera@quickstart ignite-yarn]$ hadoop jar
>> /home/cloudera/vij/ignite_config/apache-ignite-fabric-1.5.0.final-bin/libs/optional/ignite-yarn/ignite-yarn-1.5.0.final.jar
>> ./home/cloudera/vij/ignite_config/apache-ignite-fabric-1.5.0.final-bin/libs/optional/ignite-yarn/ignite-yarn-1.5.0.final.jar
>> /home/cloudera/vij/ignite_config/cluster.properties
>> 16/04/17 05:29:16 INFO client.RMProxy: Connecting to ResourceManager at /
>> 0.0.0.0:8032
>> Exception in thread "main" java.lang.RuntimeException: Got unexpected
>> response code. Response code: 404
>> at
>> org.apache.ignite.yarn.IgniteProvider.updateIgnite(IgniteProvider.java:240)
>> at
>> org.apache.ignite.yarn.IgniteProvider.getIgnite(IgniteProvider.java:93)
>> at
>> org.apache.ignite.yarn.IgniteYarnClient.getIgnite(IgniteYarnClient.java:169)
>> at
>> org.apache.ignite.yarn.IgniteYarnClient.main(IgniteYarnClient.java:79)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>>
>> Can somebody help me what could be the issue ?Also I want to understand
>> whats the benefit of running Ignite cluster over YARN.We could run Ignite
>> nodes separately as well.One reason I could think is management of cluster
>> become easy, as you dont need to manually start Ignite nodes on each node.
>>
>> Regards,
>> Vijayendra Bhati
>>
>>
>>
>


Re: Affinity Collocation - Using CacheKeyConfiguration - Multiple fields

2016-04-20 Thread arthi
Hi Val,

I could get the values of each cache to collocate on nodes based on field A and
field B using the AffinityKey interface. But, on top of this, I want the values of
both caches with the same field A and field B to land on the SAME node as well, so
that the joins work. Currently, they appear on different nodes.

Should we do something with AffinityFunction -- which maps keys to nodes
across caches?

Please advise.

Thanks,
Arthi



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Affinity-Collocation-Using-CacheKeyConfiguration-Multiple-fields-tp3812p4359.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error running nodes in .net and c++

2016-04-20 Thread Vladimir Ozerov
Hi Murthy,

Yes, there will be more examples in further versions. Though, for now it is
impossible to plug in a C++ based store, and this feature is not planned for the 1.6
release, so I do not expect C++ examples with stores in 1.6.
Instead, I'd recommend looking at the Java or .NET examples with a store, as these
platforms support pluggable store implementations.

Vladimir.

On Tue, Apr 19, 2016 at 7:34 PM, Murthy Kakarlamudi 
wrote:

> Thanks Vladimir for the explanation. I am working on the workaround
> suggested by Igor. I will reach out to the group if I run into any issues.
>
> One quick question. I am using 1.5 version. I only see 1 c++ example. Are
> there more c++ examples in future versions? Especially around using stores.
>
> Regards
> Satya.
>
> On Tue, Apr 19, 2016 at 9:20 AM, Vladimir Ozerov 
> wrote:
>
>> Hi Murthy,
>>
>> Exception you observed is essentially not a bug, but rather expected
>> behavior with current Ignite architecture. Ignite support transactions.
>> When you initiate a transaction from a client node, only this node has the
>> full set of updated keys, and hence only this node is able to propagate
>> updates to underlying database within a single database transaction. For
>> this reason, Ignite creates and initializes store on every node, even if
>> this node is client.
>>
>> As Igor suggested, the best workaround for now is to rely on Java store
>> because every node (Java, C++, .NET) has a Java inside and hence is able to
>> work with Java-based store. On the other hand, I clearly understand that
>> this architecture doesn't fit well in your use case and is not very
>> convenient from user perspective. We will think about possible ways to
>> resolve it.
>>
>> One very simple solution - do not initialize store if we know for sure
>> that the client will not use it. For example, this is so in case of ATOMIC
>> cache or asynchronous (write-behind) store.
>>
>> Vladimir.
>>
>>
>>
>> On Tue, Apr 19, 2016 at 2:31 PM, Murthy Kakarlamudi 
>> wrote:
>>
>>> OK Igor. Let me try from Java.
>>>
>>> From a high level, we have a backend application implemented in c++ and
>>> the front end is asp.net mvc. Data store is SQL Server.
>>>
>>> Use case is, I need to load data from SQL Server into Ignite Cache upon
>>> start up. .Net and C++ acting as clients need to access the cache and
>>> update it. Those updates should be written to the underlying SQL Server in
>>> an asynchronous way so as not to impact the cache performance.  The updates
>>> that gets written from .Net client need to be accessed by C++ client. We
>>> have a need to use SQL Queries to access cache from either of the clients.
>>>
>>> I can start the cache from Java server node. However, as .net and c++
>>> are being used in our application, we prefer sticking to those 2 and not
>>> introduce Java.
>>>
>>> Thanks,
>>> Satya.
>>>
>>> On Tue, Apr 19, 2016 at 6:30 AM, Igor Sapego 
>>> wrote:
>>>
 Right now I can see the following workaround for you: you can switch
 from .Net CacheStoreFactory to Java's one. This way all types of clients
 will be able to instantiate your cache.

 If you are willing to you can describe your use-case so we can
 try and find some other solution if this workaround is not suitable
 for you.

 Best Regards,
 Igor

 On Tue, Apr 19, 2016 at 1:06 PM, Murthy Kakarlamudi 
 wrote:

> Thank You.
> On Apr 19, 2016 6:01 AM, "Igor Sapego"  wrote:
>
>> Hi,
>>
>> It looks like a bug for me. I've submitted an issue for it - [1].
>>
>> [1] - https://issues.apache.org/jira/browse/IGNITE-3025.
>>
>> Best Regards,
>> Igor
>>
>> On Mon, Apr 18, 2016 at 1:35 AM, Murthy Kakarlamudi > > wrote:
>>
>>> The client node itself starts after making the change, but getting
>>> the below error trying to access the cache:
>>>
>>> [12:16:45] Topology snapshot [ver=2, servers=1, clients=1, CPUs=4,
>>> heap=1.4GB]
>>>
>>> >>> Cache node started.
>>>
>>> [12:16:45,439][SEVERE][exchange-worker-#38%null%][GridDhtPartitionsExchangeFuture]
>>> Failed to reinitialize local partitions (preloading will be stopped):
>>> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=2,
>>> minorTopVer=1], nodeId=2bf10735, evt=DISCOVERY_CUSTOM_EVT]
>>> PlatformNoCallbackException []
>>> at
>>> org.apache.ignite.internal.processors.platform.callback.PlatformCallbackUtils.cacheStoreCreate(Native
>>> Method)
>>> at
>>> org.apache.ignite.internal.processors.platform.callback.PlatformCallbackGateway.cacheStoreCreate(PlatformCallbackGateway.java:63)
>>> at
>>> org.apache.ignite.internal.processors.platform.dotnet.PlatformDotNetCacheStore.initialize(PlatformDotNetCacheStore.java:338)
>>> at
>>> 

Map-reduce proceesing

2016-04-20 Thread dmreshet
Hello!
I want to implement an SQL query in terms of MapReduce with
ComputeTaskSplitAdapter:

select * from Person where salary > ?

And I want to know what the best practice is to do this.

At the moment I am using cache.localEntries() to get all cache values at the
Map stage, and it looks like this is not correct, because there is no
guarantee that each task will be executed on a different node of the Ignite Data
Grid.

Here is an example of the split() method of my ComputeTaskSplitAdapter class:


@Override
protected Collection<? extends ComputeJob> split(int gridSize, Integer
    salary) throws IgniteException {
    List<ComputeJob> jobs = new ArrayList<>(gridSize);

    for (int i = 0; i < gridSize; i++) {
        jobs.add(new ComputeJobAdapter() {
            @Override
            public Object execute() {
                IgniteCache<Integer, Person> cache =
                    Ignition.ignite().cache(Executor.PERSON_CACHE);
                List<Person> list = new ArrayList<>();
                Iterable<Cache.Entry<Integer, Person>> entries =
                    cache.localEntries();
                entries.forEach((entry -> {
                    if (entry.getValue().getSalary() > salary) {
                        list.add(entry.getValue());
                    }
                }));

                return list;
            }
        });
    }

    return jobs;
}





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Map-reduce-proceesing-tp4357.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: SQL Aliases are not interpreted correctly

2016-04-20 Thread jan.swaelens
Superb, that would really do the trick for my use cases!

best regards
jan



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SQL-Aliases-are-not-interpreted-correctly-tp4281p4356.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


can we make generic class for cachehibernatepojostore?

2016-04-20 Thread Ravi Puri
I have a query.

I have one class (CacheConfig) which builds the cache configuration, with its
class-specific Hibernate POJO store passed into the factory builder as a
Class parameter.


import javax.cache.configuration.Factory;
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.cache.store.CacheStore;
import org.apache.ignite.cache.store.CacheStoreSessionListener;
import org.apache.ignite.cache.store.hibernate.CacheHibernateStoreSessionListener;
import org.apache.ignite.configuration.CacheConfiguration;

import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;

public class CacheConfig {

    private static final long serialVersionUID = 1L;
    private static final String HIBERNATE_CFG = "hibernate.cfg.xml";

    @SuppressWarnings({ "unchecked", "rawtypes" })
    public static CacheConfiguration loadConfig(String cacheName, Class<? extends CacheStore> storeClass) {
        CacheConfiguration cacheCfg = new CacheConfiguration<>(cacheName);

        // Set atomicity to TRANSACTIONAL, since we are demonstrating transactions in the example.
        cacheCfg.setAtomicityMode(TRANSACTIONAL);

        // Configure the store factory: the concrete store class is passed in by the caller.
        cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(storeClass));

        // Configure the Hibernate session listener.
        cacheCfg.setCacheStoreSessionListenerFactories(new Factory<CacheStoreSessionListener>() {
            private static final long serialVersionUID = 1L;

            @Override
            public CacheStoreSessionListener create() {
                CacheHibernateStoreSessionListener lsnr = new CacheHibernateStoreSessionListener();

                lsnr.setHibernateConfigurationPath(HIBERNATE_CFG);

                return lsnr;
            }
        });

        cacheCfg.setReadThrough(true);
        cacheCfg.setWriteThrough(true);

        System.out.println("ended configuration loadConfig");

        return cacheCfg;
    }
}


So from main I can do:

CacheConfiguration cacheCfg =
    CacheConfig.loadConfig("CacheName", CacheHibernatePersonStore.class);
IgniteCache cache = Ignition.ignite().getOrCreateCache(cacheCfg);

cache.loadCache(null, 100_00);


This loads all the data through CacheHibernatePersonStore (which extends
CacheStoreAdapter); in short, it loads all the data on the server side.

Now I am using a client node with a different ignite.xml.

From this client (a different Eclipse package with a different config) I want
to access the data that was loaded on the server side; the client connects to
the server fine. But when I try to fetch the data, it shows this error:

Failed to create an instance of CacheHibernatePersonStore class

How can I load it?
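
Not an authoritative answer, but a sketch of the pattern that usually applies here: Ignite initializes the configured cache store on every node, including clients, so CacheHibernatePersonStore, hibernate.cfg.xml and the Hibernate jars must also be on the client's classpath; the client can then look the cache up by name instead of redefining its configuration. The file name client-ignite.xml is a placeholder.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class ClientAccess {
    public static void main(String[] args) {
        // Start this JVM as a client node using the client-side Spring config.
        Ignition.setClientMode(true);

        try (Ignite ignite = Ignition.start("client-ignite.xml")) {
            // Look the already-started cache up by name; its configuration
            // (including the store factory) comes from the cluster.
            IgniteCache<Object, Object> cache = ignite.cache("CacheName");

            System.out.println("cache.size() = " + cache.size());
        }
    }
}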




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/can-we-make-generic-class-for-cachehibernatepojostore-tp4355.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: Data lost when using write-behind

2016-04-20 Thread wang shuai
Thank you, vkulichenko.

The ticket is planned to be fixed in version 1.6. Do you know when the target
date of the 1.6 release is?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Data-lost-when-using-write-behind-tp4265p4354.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to auto generate spring configuration file ?

2016-04-20 Thread Alexey Kuznetsov
Kamal, I created an issue for this; you can track it in JIRA.

https://issues.apache.org/jira/browse/IGNITE-3030

On Tue, Apr 19, 2016 at 1:00 PM, Alexey Kuznetsov 
wrote:

> Hi, Kamal!
>
> Thank you for feedback on web console.
>
> I will take a look and create issues in JIRA for these properties.
> I will let you know in this thread.
>
> On Tue, Apr 19, 2016 at 12:32 PM, Kamal C  wrote:
>
>> Thanks Alexey!
>>
>> This is what I've been looking for. Still, some keys are not supported in
>> web-console:
>>
>> 1. userAttributes
>> 2. failoverSpi
>> 3. gridLogger
>> ...
>>
>> How to configure userAttributes in XML ?
>>
>> --Kamal
>>
>>
>> On Mon, Apr 18, 2016 at 7:22 PM, Alexey Kuznetsov <
>> akuznet...@gridgain.com> wrote:
>>
>>> Hi, Kamal!
>>>
>>> You could try web console
>>> https://ignite.apache.org/addons.html#web-console
>>> It will generate XML and Java code for you.
>>>
>>> On Mon, Apr 18, 2016 at 8:19 PM, Kamal C  wrote:
>>>
 Hi,

 Ignite can be configured either through IgniteConfiguration or by
 passing bean XML file.

 In the XML file approach, typing the property keys seems to be error-prone.

 e.g.




 <bean class="org.apache.ignite.configuration.IgniteConfiguration">
   ...
 </bean>


 How can the configuration file be auto-generated from the bean object?
 [or]
 Is there a sample file available which contains all the property keys?
 (Users could copy-paste it and edit only the values.)

 --Kamal

>>>
>>>
>>>
>>> --
>>> Alexey Kuznetsov
>>> GridGain Systems
>>> www.gridgain.com
>>>
>>
>>
>
>
> --
> Alexey Kuznetsov
> GridGain Systems
> www.gridgain.com
>



-- 
Alexey Kuznetsov
GridGain Systems
www.gridgain.com
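
Regarding the userAttributes question above, which was not answered in this thread — a hedged sketch using the programmatic IgniteConfiguration.setUserAttributes API (in Spring XML the same userAttributes property can be set with a map element); the attribute name and value below are invented for illustration.

import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class UserAttributesExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Arbitrary example attribute; it becomes visible via ClusterNode.attribute("node.role").
        cfg.setUserAttributes(Collections.singletonMap("node.role", "worker"));

        try (Ignite ignite = Ignition.start(cfg)) {
            Object role = ignite.cluster().localNode().attribute("node.role");

            System.out.println("node.role = " + role);
        }
    }
}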


Re: SQL Aliases are not interpreted correctly

2016-04-20 Thread Alexey Kuznetsov
Hi Jan!

It seems that we can fix this issue with the help of field aliases.

I created issues to support field alias generation:
 https://issues.apache.org/jira/browse/IGNITE-3028 (Schema Import utility)
 https://issues.apache.org/jira/browse/IGNITE-3029 (Web Console)


Thanks!

On Wed, Apr 20, 2016 at 3:51 AM, vkulichenko 
wrote:

> Hi Jan,
>
> The first approach you mentioned should work for you; Ignite uses object
> field names as SQL names. There is currently no way to generate classes
> with
> DB names used as field names, but as Alexey mentioned, this is something
> that can be added later.
>
> In the meantime, you can modify POJOs and CacheConfig manually.
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/SQL-Aliases-are-not-interpreted-correctly-tp4281p4341.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Alexey Kuznetsov
GridGain Systems
www.gridgain.com
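
For reference, a hedged sketch of configuring field aliases by hand — roughly the kind of configuration the tickets above are about generating — assuming your Ignite version exposes QueryEntity.setAliases; the class, field and column names are invented.

import java.util.Collections;
import java.util.LinkedHashMap;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

public class AliasConfigExample {
    public static CacheConfiguration<Object, Object> personCacheConfig() {
        QueryEntity entity = new QueryEntity("java.lang.Long", "com.example.Person");

        // Java field name -> SQL type.
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("firstName", "java.lang.String");
        entity.setFields(fields);

        // Java field name -> SQL column name (the alias used in queries).
        entity.setAliases(Collections.singletonMap("firstName", "FIRST_NAME"));

        CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("personCache");
        ccfg.setQueryEntities(Collections.singletonList(entity));

        return ccfg;
    }
}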


re: re: ignite problem when loading large amount of data into cache

2016-04-20 Thread Zhengqingzheng
Hi Val,
When the exception occurred, I checked the forum and reset the Java VM size to
30 GB, and I also split my table into 10 smaller tables, each containing 1 GB of data.
At that point I saw your suggestion about the off-heap settings. I don't want to
reload all the data again, so I asked whether it is possible to make the
configuration take effect immediately at runtime.

Btw, I see there are backup settings in partitioned mode, like this:
But I did not see where those backups are stored. Are there any settings, like
in Redis, which automatically load dump files when a node recovers from a
crash?

Regards,
Kevin  
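
On the backups question just above, a hedged note: backup copies of a partitioned cache are kept in memory on other nodes (controlled by the backups property), and persistence/recovery in Ignite 1.x goes through a configured CacheStore rather than a Redis-style dump file. A minimal sketch of the backups setting:

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class BackupConfigExample {
    public static CacheConfiguration<Object, Object> withBackups(String cacheName) {
        CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>(cacheName);

        ccfg.setCacheMode(CacheMode.PARTITIONED);

        // One backup copy of every partition, held in memory on another node.
        ccfg.setBackups(1);

        return ccfg;
    }
}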

-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: April 20, 2016 12:42
To: user@ignite.apache.org
Subject: Re: re: ignite problem when loading large amount of data into cache

Kevin,

The configuration of an existing cache can't be changed at runtime. The only
option is to destroy the cache and create it again with new parameters (you
will lose all in-memory data, of course). What's the use case where you need
this?

-Val
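
To illustrate the destroy-and-recreate option above — a sketch only, using the Ignite 1.x off-heap cache properties; the cache name and memory limit are placeholders:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class RecreateCacheExample {
    public static IgniteCache<Object, Object> recreateWithOffHeap(Ignite ignite) {
        // Destroying the cache drops all of its in-memory data cluster-wide.
        ignite.destroyCache("bigCache");

        CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("bigCache");

        // Ignite 1.x off-heap settings: keep entries off-heap, cap off-heap memory at 20 GB.
        ccfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
        ccfg.setOffHeapMaxMemory(20L * 1024 * 1024 * 1024);

        // Data has to be reloaded into the newly created cache.
        return ignite.getOrCreateCache(ccfg);
    }
}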



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ignite-problem-when-loading-large-amount-of-data-into-cache-tp4324p4347.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Question for data persistence

2016-04-20 Thread wang shuai
Thank you, Alexei.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Question-for-data-persistence-tp4264p4350.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.