Re: Does Ignite support SQL statement of "INSERT"?

2016-11-14 Thread Alexey Kuznetsov
Hi,

No, Ignite does not support SQL DELETE for now.
You may watch this issue: https://issues.apache.org/jira/browse/IGNITE-2294
I hope this will be available in the upcoming Ignite 1.8 release.

Meanwhile, you could try something like this:
1) select _key from Tbl where <your condition>
2) collect all of the keys into a set
3) cache.removeAll(set)
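The three steps above can be sketched in Java roughly as follows (a non-authoritative sketch against the Ignite 1.x cache API; the cache name, table name "Tbl", and the "someField" condition are placeholders, not from the thread):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class DeleteByCondition {
    // Removes all cache entries whose rows match the WHERE clause.
    // "Tbl" and "someField" are placeholders for your own schema.
    public static void deleteMatching(Ignite ignite, Object condValue) {
        IgniteCache<Object, Object> cache = ignite.cache("myCache");

        // 1) Select the keys of the matching rows.
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "select _key from Tbl where someField = ?").setArgs(condValue);

        // 2) Put all keys into a set.
        Set<Object> keys = new HashSet<>();
        for (List<?> row : cache.query(qry).getAll())
            keys.add(row.get(0));

        // 3) Remove all matching entries in one call.
        cache.removeAll(keys);
    }
}
```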

Will this work for you?


On Tue, Nov 15, 2016 at 11:31 AM, ght230  wrote:

> Does Ignite SQL statement support "DELETE" operation now?
>
> If not, What is the best way when I want to remove some data from cache in
> the specified condition?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Does-Ignite-support-SQL-statement-
> of-INSERT-tp1838p8981.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Alexey Kuznetsov


Re: Does Ignite support SQL statement of "INSERT"?

2016-11-14 Thread ght230
Does Ignite SQL statement support "DELETE" operation now?

If not, what is the best way to remove data matching a specified condition
from the cache?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Does-Ignite-support-SQL-statement-of-INSERT-tp1838p8981.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How can I detect DB writing abnormal in case of write-behind?

2016-11-14 Thread ght230
I do not understand what you mean: "In this process, something wrong in the
DB, then data can not be written into it."
--> In one case, some field was set up incorrectly in the DB, e.g. the
field length in the DB is less than that in the POJO.
In another case, the DB went down while write-behind was in progress.
In both of these cases the data cannot be written into the DB.

How did you detect this?
--> This is exactly what I want to know.
In the first case above, it will cause Ignite to go down.
Are there any cache metrics I can use to find out that the DB is
abnormal?

If the "write behind" flag is set to true, data is inserted into the DB
asynchronously (in a dedicated thread). You have to wait until the data is
saved into the DB.

Via cache metrics you can watch the number of "put" operations on the cache
(org.apache.ignite.cache.CacheMetrics#getCachePuts).
--> I had tried to use getCachePuts, but it is always 0.
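One common reason for getCachePuts() staying at 0 is that cache statistics are disabled by default and must be switched on in the cache configuration. A minimal sketch, assuming the Ignite 1.x API (cache name and key/value types are illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class MetricsExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> ccfg =
                new CacheConfiguration<>("myCache");

            // Metrics such as getCachePuts() are only collected when
            // statistics are explicitly enabled on the configuration.
            ccfg.setStatisticsEnabled(true);

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);
            cache.put(1, "a");

            // With statistics enabled, the put counter is updated.
            System.out.println("puts = " + cache.metrics().getCachePuts());
        }
    }
}
```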



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-can-I-detect-DB-writing-abnormal-in-case-of-write-behind-tp8954p8979.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cluster hung after a node killed

2016-11-14 Thread javastuff....@gmail.com
Do you want Ignite to be running in DEBUG, or should System.out output be
enough from all 3 nodes?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p8978.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Checkingpointing

2016-11-14 Thread vkulichenko
Yes, checkpoints are the feature of the Compute Grid. What kind of
checkpoints do you want to create? Can you give more details on the use
case?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Checkingpointing-tp8933p8977.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cluster hung after a node killed

2016-11-14 Thread vkulichenko
Hi Sam,

Please attach full logs and full thread dumps if you want someone to take a
look. There is not enough information in your message to understand the
reason of the issue.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p8976.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: rest-http can't get data if key is Integer or others

2016-11-14 Thread vkulichenko
Only string keys are currently supported by HTTP REST. This will be improved
when we have full JSON support (probably next year). In the meantime I would
recommend using the Ignite API directly if you need support for other data
types.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/rest-http-can-t-get-data-if-key-is-Integer-or-others-tp8762p8975.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Multiple servers in a Ignite Cluster

2016-11-14 Thread vkulichenko
Tracy,

You can limit the set of nodes where the cache is deployed via the
CacheConfiguration.setNodeFilter() configuration property. Generally, all
your nodes should be in the same cluster, but you can create multiple roles
and logical cluster groups.
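A brief sketch of such a node filter (hedged: the attribute name "cache.role" and its value are invented for illustration; in practice the attribute would be set on each node via IgniteConfiguration.setUserAttributes()):

```java
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.lang.IgnitePredicate;

public class NodeFilterExample {
    public static CacheConfiguration<Integer, String> dataNodeCache() {
        CacheConfiguration<Integer, String> ccfg =
            new CacheConfiguration<>("myCache");

        // Deploy the cache only on nodes whose user attribute
        // "cache.role" equals "data" (attribute name is illustrative).
        ccfg.setNodeFilter(new IgnitePredicate<ClusterNode>() {
            @Override public boolean apply(ClusterNode node) {
                return "data".equals(node.attribute("cache.role"));
            }
        });

        return ccfg;
    }
}
```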

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Multiple-servers-in-a-Ignite-Cluster-tp8840p8974.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How can I obtain a list of executing jobs on an ignite node

2016-11-14 Thread vkulichenko
How do you identify a job in a bad state? What exactly does this mean? If
you can detect this state within the job, you can simply throw an exception
from it and it will automatically be failed over to another node.
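For example, a job can request failover explicitly by throwing ComputeJobFailoverException from its execute() method. A sketch (the bad-state check and work methods are placeholders, not from the thread):

```java
import org.apache.ignite.compute.ComputeJobAdapter;
import org.apache.ignite.compute.ComputeJobFailoverException;

public class SelfCheckingJob extends ComputeJobAdapter {
    @Override public Object execute() {
        if (isInBadState())
            // Asks Ignite to resubmit this job to another node.
            throw new ComputeJobFailoverException("Bad state detected, failing over.");

        return doWork();
    }

    // Placeholder: the application-specific bad-state detection.
    private boolean isInBadState() { return false; }

    // Placeholder: the actual job logic.
    private Object doWork() { return "done"; }
}
```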

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-can-I-obtain-a-list-of-executing-jobs-on-an-ignite-node-tp8841p8973.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Problem with Continuous query for listening updates

2016-11-14 Thread vkulichenko
Can you create a small standalone project that will reproduce the issue? It
looks like we're missing something here.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Problem-with-Continuous-query-for-listening-updates-tp8709p8972.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Multithreading SQL queries in Apache Ignite

2016-11-14 Thread vkulichenko
Hi,

A single query is currently not parallelized at the per-node level. I.e., a
query will be split across nodes, but within each node there will be a
single thread processing it. However, if you issue multiple concurrent
queries, they will be executed concurrently in the system thread pool. As
for performance, I recommend creating a benchmark and checking what numbers
you get with different settings.
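As an illustration, several queries can be submitted from separate client threads so they run concurrently in the server-side system pool. A sketch against the Ignite 1.x query API (the table "Person" and the "bucket" column are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ConcurrentQueries {
    // Runs the same parameterized query for several buckets in parallel.
    public static void runConcurrently(IgniteCache<?, ?> cache) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<List<List<?>>>> futs = new ArrayList<>();

        for (int i = 0; i < 4; i++) {
            final int bucket = i;
            futs.add(pool.submit(() -> cache.query(
                new SqlFieldsQuery(
                    "select count(*) from Person where bucket = ?")
                    .setArgs(bucket)).getAll()));
        }

        for (Future<List<List<?>>> f : futs)
            System.out.println(f.get()); // each result was computed concurrently

        pool.shutdown();
    }
}
```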

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Multithreading-SQL-queries-in-Apache-Ignite-tp8944p8971.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: CacheInterceptor with client nodes

2016-11-14 Thread vkulichenko
Hi,

What exactly are you trying to do? What's the use case?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/CacheInterceptor-with-client-nodes-tp8964p8970.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Question about backups

2016-11-14 Thread Andrey Gura
Hi,

A replicated cache has a number of backups equal to the number of cluster
nodes minus one, because one copy is the primary. Since the number of nodes
in the cluster can change over time, we use the Integer.MAX_VALUE magic
number for backups in the case of a replicated cache. The number of backups
does not depend on the cache memory mode, and you should not specify which
node is primary and which is backup; that is the affinity function's
responsibility. If you set the backups parameter to zero (or any other
value) for a replicated cache, it will be ignored.

In fact, partitioned and replicated caches are implemented in the same way,
but there are some implementation differences because our focus is high
performance.


On Tue, Nov 15, 2016 at 1:43 AM, styriver  wrote:

> Hello I am dumping the cache configuration for my defined caches. I am
> seeing
> this as the backup number
> memMode=OFFHEAP_TIERED cacheMode=REPLICATED, atomicityMode=TRANSACTIONAL,
> atomicWriteOrderMode=null, backups=2147483647
>
> I am not setting the backups property in any of my configurations so this
> must be the default. This is the same number for both the OFFHEAP_TIERED
> and
> ONHEAP_TIERED. We have two server nodes and am not specifying any of the
> nodes as primary or backup. Wondering what the implications of having this
> number set to this value. Is it only applicable if the cacheMode is
> PARTIONED? Like to know if I should set this to zero or not?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Question-about-backups-tp8968.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Question about backups

2016-11-14 Thread styriver
Hello, I am dumping the cache configuration for my defined caches. I am
seeing this as the backup number:
memMode=OFFHEAP_TIERED cacheMode=REPLICATED, atomicityMode=TRANSACTIONAL,
atomicWriteOrderMode=null, backups=2147483647

I am not setting the backups property in any of my configurations, so this
must be the default. This is the same number for both OFFHEAP_TIERED and
ONHEAP_TIERED. We have two server nodes and are not specifying any of the
nodes as primary or backup. I am wondering what the implications of having
this number set to this value are. Is it only applicable if the cacheMode
is PARTITIONED? I would like to know whether I should set this to zero or
not.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Question-about-backups-tp8968.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Apache Spark & Ignite Integration

2016-11-14 Thread Denis Magda
Hi,

Here is the ticket
https://issues.apache.org/jira/browse/IGNITE-3084 


Feel free to paste your questions there as well so that the implementer takes 
them into account.

—
Denis

> On Nov 14, 2016, at 6:14 AM, pragmaticbigdata  wrote:
> 
> Ok. Is there a jira task that I can track for the dataframes and datasets
> support?
> 
> I do have a couple of follow up questions to understand the memory
> representation of the shared RDD support that ignite brings with the spark
> integration. 
> 
> 1. Could you detail on how are shared RDD's implemented when ignite is
> deployed in a standalone mode? Assuming we have a ignite cluster consisting
> a cached named "partitioned" would creating a IgniteRDD through val
> sharedRDD: IgniteRDD[Int,Int] = ic.fromCache("partitioned")  create another
> copy of the cache on the spark executor jvm or would the spark executor
> operate on the original copy of the cache that is present on the ignite
> nodes? I am more interested in understanding the performance impact of data
> shuffling or movement if there is any.
> 
> 2. Since spark does not have transaction support, how I can use the ACID
> transaction support that Ignite provides when updating RDD's? A code example
> would be helpful if possible.
> 
> Thanks.
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Apache-Spark-Ignite-Integration-tp8556p8951.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: Remote Server Thread Not exit when Job finished,Cause out of memory

2016-11-14 Thread Denis Magda
You may want to implement the ComputeJobMasterLeaveAware [1] interface for
your compute jobs. The interface implementation will be called on the
server side for every job that was spawned by a client node that has been
shut down.

Besides, you can refer to this example [2] that demonstrates how to use the 
interface.

However, I still don’t see a reason why you spawn 6 threads for every
compute job. In general you can reuse existing threads from the public pool
by relying on ComputeJobContinuation [3], whose usage is demonstrated in
this example [4].
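A rough sketch of a master-leave-aware job (hedged: the work loop and its unit-of-work method are placeholders; the interface and its onMasterNodeLeft callback come from the Ignite compute API referenced in [1]):

```java
import org.apache.ignite.compute.ComputeJobAdapter;
import org.apache.ignite.compute.ComputeJobMasterLeaveAware;
import org.apache.ignite.compute.ComputeTaskSession;

public class MasterAwareJob extends ComputeJobAdapter
    implements ComputeJobMasterLeaveAware {

    private volatile boolean stopped;

    @Override public Object execute() {
        // Placeholder work loop; checks the flag between units of work.
        while (!stopped)
            doUnitOfWork();

        return null;
    }

    // Called on the server for every job whose submitting client
    // (master) node has left the cluster.
    @Override public void onMasterNodeLeft(ComputeTaskSession ses) {
        stopped = true;
    }

    private void doUnitOfWork() { /* placeholder */ }
}
```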

[1] 
https://ignite.apache.org/releases/1.7.0/javadoc/org/apache/ignite/compute/ComputeJobMasterLeaveAware.html
 

[2] 
https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/compute/masterleave/ComputeMasterLeaveAwareExample.java
[3] 
https://ignite.apache.org/releases/1.7.0/javadoc/org/apache/ignite/compute/ComputeJobContinuation.html
 

[4] 
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/computegrid/ComputeFibonacciContinuationExample.java
 


—
Denis

> On Nov 14, 2016, at 5:41 AM, alex  wrote:
> 
> Hi vdpyatkov, I tried it like this, but the problem still exists. The
> client code is as below:
>
>    ClusterGroup rmts = getIgnite().cluster().forRemotes();
>    IgniteCompute compute = getIgnite().compute(rmts).withAsync();
>    compute.apply(new IgniteClosure<String, String>() {
>        @Override
>        public String apply(String o) {
>            return o;
>        }
>    }, Arrays.asList("Print words using runnable".split(" ")));
>
>    IgniteFuture<Collection<String>> future = compute.future();
>    future.cancel();
>
>    getIgnite().close();
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Remote-Server-Thread-Not-exit-when-Job-finished-Cause-out-of-memory-tp8934p8947.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Cluster hung after a node killed

2016-11-14 Thread javastuff....@gmail.com
Hi,

I have configured the cache as an off-heap partitioned cache, running 3
nodes on separate machines. I loaded some data into the cache using my
application's normal operations.

I used "kill -9" to kill node 3.

Node 2 shows the warning below on the console every 10 seconds:

11:03:03,320 WARNING
[org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager]
(exchange-worker-#256%TESTNODE%) Failed to wait for partition map exchange
[topVer=AffinityTopologyVersion [topVer=3, minorTopVer=0],
node=8cc0ac24-24b9-4d69-8472-b6a567f4d907]. Dumping pending objects that
might be the cause:

Node 1 looks fine. However, the application does not work anymore, and a
thread dump shows it is waiting on a cache put:

java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x0007ecbd4a38> (a
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache$AffinityReadyFuture)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:159)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:117)
at
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.awaitTopologyVersion(GridAffinityAssignmentCache.java:523)
at
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:434)
at
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.nodes(GridAffinityAssignmentCache.java:387)
at
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.nodes(GridCacheAffinityManager.java:259)
at
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primary(GridCacheAffinityManager.java:295)
at
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primary(GridCacheAffinityManager.java:286)
at
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primary(GridCacheAffinityManager.java:310)
at
org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.entryExx(GridDhtColocatedCache.java:176)
at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.entryEx(GridNearTxLocal.java:1251)
at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.enlistWriteEntry(IgniteTxLocalAdapter.java:2354)
at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.enlistWrite(IgniteTxLocalAdapter.java:1990)
at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.putAsync0(IgniteTxLocalAdapter.java:2902)
at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.putAsync(IgniteTxLocalAdapter.java:1859)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2240)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2238)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4351)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2238)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2215)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1214)


Is there any specific configuration I need to provide for self-recovery of
the cluster? Losing cache data is fine; the data is backed up in a
persistent store, e.g. a database.

-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


CacheInterceptor with client nodes

2016-11-14 Thread Игорь Гнатюк
Hi, all!

I've tried to add a CacheInterceptor to an Apache Ignite cache. It works
fine on server nodes, but when I try to access the cache (a "get"
operation) from a client node, the interceptor is invoked locally (on the
client node). Is there any way to make it work only on the server nodes,
before sending results to the clients?


Re: C++ API can't build successfully in my linux environment

2016-11-14 Thread smile
Thank you for your answer. I downloaded Ignite from this link:

http://ignite.apache.org/download.cgi#sources

and downloaded Ignite 1.7.0, which was released on 2016-08-05.

I will try the link that you gave for the latest Ignite, and I will give
feedback when I have tried it!

Thanks again!






------ Original Message ------
From: "Igor Sapego"
Sent: Monday, Nov 14, 2016, 11:41 PM
To: "user"
Subject: Re: C++ API can't build successfully in my linux environment



Here is the link: 
https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/


Best Regards,
Igor



 
On Mon, Nov 14, 2016 at 6:41 PM, Igor Sapego  wrote:
Or you can try nightly build if you need binaries:

Best Regards,
Igor



 
On Mon, Nov 14, 2016 at 6:28 PM, Igor Sapego  wrote:
Can you try master? I'm pretty sure I was fixing these issues.

Best Regards,
Igor



 
On Mon, Nov 14, 2016 at 6:19 PM, smile  wrote:
I used ignite1.7.0

------ Original Message ------
From: "Igor Sapego"
Sent: Monday, Nov 14, 2016, 10:09 PM
To: "user"
Subject: Re: C++ API can't build successfully in my linux environment


Hi,

Which version do you use?


Best Regards,
Igor



 
On Mon, Nov 14, 2016 at 4:59 PM, smile  wrote:
Hi, all
I build the C++ API in a Linux environment (CentOS, g++ version 4.4.6). I
get a lot of build errors, and I have to modify the code, including:

1. ignite_error.h: ~IgniteError(); changed to ~IgniteError() throw();

2. ignite_error.cpp: IgniteError::~IgniteError() changed to
IgniteError::~IgniteError() throw();

3. nullPtr in java.cpp changed to NULL

4. explicit SharedPointer(T* ptr) in concurrent.h changed to:

explicit SharedPointer(T* ptr)

{

/*

if (ptr)

{

impl = new SharedPointerImpl(ptr, 
reinterpret_cast());

ImplEnableShared(ptr, impl);

}

else

impl = 0;

*/

SharedPointer(ptr, );

 

}

   Otherwise, the following error is reported:

   ../common/include/ignite/common/concurrent.h: In constructor 
'ignite::common::concurrent::SharedPointer::SharedPointer(T*) [with T = 
ignite::impl::cache::CacheImpl]':

../core/include/ignite/cache/cache.h:71:   instantiated from 
'ignite::cache::Cache::Cache(ignite::impl::cache::CacheImpl*) [with K = 
int, V = int]'

../core/include/ignite/ignite.h:133:   instantiated from 
'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*, 
ignite::IgniteError*) [with K = int, V = int]'

../core/include/ignite/ignite.h:112:   instantiated from 
'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*) [with 
K = int, V = int]'

src/ignite.cpp:207:   instantiated from here

../common/include/ignite/common/concurrent.h:145: error: address of overloaded 
function with no contextual type information




  OK, I have modified them, and then the build succeeds, but when I run the
example it core dumps, and I think that memory is corrupted while the
process is running.




 How can I solve it ?

 Thank you very much!

Re: C++ API can't build successfully in my linux environment

2016-11-14 Thread Igor Sapego
Here is the link:
https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/

Best Regards,
Igor

On Mon, Nov 14, 2016 at 6:41 PM, Igor Sapego  wrote:

> Or you can try nightly build if you need binaries:
>
> Best Regards,
> Igor
>
> On Mon, Nov 14, 2016 at 6:28 PM, Igor Sapego  wrote:
>
>> Can you try master? I'm pretty sure I was fixing these issues.
>>
>> Best Regards,
>> Igor
>>
>> On Mon, Nov 14, 2016 at 6:19 PM, smile  wrote:
>>
>>> I used ignite1.7.0
>>> -- Original Message --
>>> *From:* "Igor Sapego"
>>> *Sent:* Monday, Nov 14, 2016, 10:09 PM
>>> *To:* "user";
>>> *Subject:* Re: C++ API can't build successfully in my linux environment
>>> Hi,
>>>
>>> Which version do you use?
>>>
>>> Best Regards,
>>> Igor
>>>
>>> On Mon, Nov 14, 2016 at 4:59 PM, smile  wrote:
>>>
 Hi, all
 I build the C++ API in a Linux environment (CentOS, g++ version 4.4.6).
 I get a lot of build errors, and I have to modify the code, including:

 1. ignite_error.h: ~IgniteError(); changed to ~IgniteError() throw();

 2. ignite_error.cpp: IgniteError::~IgniteError() changed to
 IgniteError::~IgniteError() throw();

 3. nullPtr in java.cpp changed to NULL

 4. explicit SharedPointer(T* ptr) in concurrent.h changed to:

 explicit SharedPointer(T* ptr)

 {

 /*

 if (ptr)

 {

 impl = new SharedPointerImpl(ptr,
 reinterpret_cast(
 terDefaultDeleter));

 ImplEnableShared(ptr, impl);

 }

 else

 impl = 0;

 */

 SharedPointer(ptr, >>> >);

 }

否则 (Otherwise), the following error is reported:

 *   ../common/include/ignite/common/concurrent.h: In constructor
 'ignite::common::concurrent::SharedPointer::SharedPointer(T*) [with T =
 ignite::impl::cache::CacheImpl]':*

 *../core/include/ignite/cache/cache.h:71:   instantiated from
 'ignite::cache::Cache::Cache(ignite::impl::cache::CacheImpl*) [with K
 = int, V = int]'*

 *../core/include/ignite/ignite.h:133:   instantiated from
 'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*,
 ignite::IgniteError*) [with K = int, V = int]'*

 *../core/include/ignite/ignite.h:112:   instantiated from
 'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*)
 [with K = int, V = int]'*

 *src/ignite.cpp:207:   instantiated from here*

 *../common/include/ignite/common/concurrent.h:145: error: address of
 overloaded function with no contextual type information*


 *  OK, I have modified them, and then the build succeeds, but when I run
 the example it core dumps, and I think that memory is corrupted while
 the process is running.*


 * How can I solve it ?*

  Thank you very much!

>>>
>>>
>>
>


Re: C++ API can't build successfully in my linux environment

2016-11-14 Thread Igor Sapego
Or you can try nightly build if you need binaries:

Best Regards,
Igor

On Mon, Nov 14, 2016 at 6:28 PM, Igor Sapego  wrote:

> Can you try master? I'm pretty sure I was fixing these issues.
>
> Best Regards,
> Igor
>
> On Mon, Nov 14, 2016 at 6:19 PM, smile  wrote:
>
>> I used ignite1.7.0
>> -- Original Message --
>> *From:* "Igor Sapego"
>> *Sent:* Monday, Nov 14, 2016, 10:09 PM
>> *To:* "user";
>> *Subject:* Re: C++ API can't build successfully in my linux environment
>> Hi,
>>
>> Which version do you use?
>>
>> Best Regards,
>> Igor
>>
>> On Mon, Nov 14, 2016 at 4:59 PM, smile  wrote:
>>
>>> Hi, all
>>> I build the C++ API in a Linux environment (CentOS, g++ version 4.4.6).
>>> I get a lot of build errors, and I have to modify the code, including:
>>>
>>> 1. ignite_error.h: ~IgniteError(); changed to ~IgniteError() throw();
>>>
>>> 2. ignite_error.cpp: IgniteError::~IgniteError() changed to
>>> IgniteError::~IgniteError() throw();
>>>
>>> 3. nullPtr in java.cpp changed to NULL
>>>
>>> 4. explicit SharedPointer(T* ptr) in concurrent.h changed to:
>>>
>>> explicit SharedPointer(T* ptr)
>>>
>>> {
>>>
>>> /*
>>>
>>> if (ptr)
>>>
>>> {
>>>
>>> impl = new SharedPointerImpl(ptr,
>>> reinterpret_cast(
>>> terDefaultDeleter));
>>>
>>> ImplEnableShared(ptr, impl);
>>>
>>> }
>>>
>>> else
>>>
>>> impl = 0;
>>>
>>> */
>>>
>>> SharedPointer(ptr, );
>>>
>>> }
>>>
>>> Otherwise, the following error is reported:
>>>
>>> *   ../common/include/ignite/common/concurrent.h: In constructor
>>> 'ignite::common::concurrent::SharedPointer::SharedPointer(T*) [with T =
>>> ignite::impl::cache::CacheImpl]':*
>>>
>>> *../core/include/ignite/cache/cache.h:71:   instantiated from
>>> 'ignite::cache::Cache::Cache(ignite::impl::cache::CacheImpl*) [with K
>>> = int, V = int]'*
>>>
>>> *../core/include/ignite/ignite.h:133:   instantiated from
>>> 'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*,
>>> ignite::IgniteError*) [with K = int, V = int]'*
>>>
>>> *../core/include/ignite/ignite.h:112:   instantiated from
>>> 'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*)
>>> [with K = int, V = int]'*
>>>
>>> *src/ignite.cpp:207:   instantiated from here*
>>>
>>> *../common/include/ignite/common/concurrent.h:145: error: address of
>>> overloaded function with no contextual type information*
>>>
>>>
>>> *  OK, I have modified them, and then the build succeeds, but when I run
>>> the example it core dumps, and I think that memory is corrupted while
>>> the process is running.*
>>>
>>>
>>> * How can I solve it ?*
>>>
>>>  Thank you very much!
>>>
>>
>>
>


Re: C++ API can't build successfully in my linux environment

2016-11-14 Thread Igor Sapego
Can you try master? I'm pretty sure I was fixing these issues.

Best Regards,
Igor

On Mon, Nov 14, 2016 at 6:19 PM, smile  wrote:

> I used ignite1.7.0
> -- Original Message --
> *From:* "Igor Sapego"
> *Sent:* Monday, Nov 14, 2016, 10:09 PM
> *To:* "user";
> *Subject:* Re: C++ API can't build successfully in my linux environment
> Hi,
>
> Which version do you use?
>
> Best Regards,
> Igor
>
> On Mon, Nov 14, 2016 at 4:59 PM, smile  wrote:
>
>> Hi, all
>> I build the C++ API in a Linux environment (CentOS, g++ version 4.4.6).
>> I get a lot of build errors, and I have to modify the code, including:
>>
>> 1. ignite_error.h: ~IgniteError(); changed to ~IgniteError() throw();
>>
>> 2. ignite_error.cpp: IgniteError::~IgniteError() changed to
>> IgniteError::~IgniteError() throw();
>>
>> 3. nullPtr in java.cpp changed to NULL
>>
>> 4. explicit SharedPointer(T* ptr) in concurrent.h changed to:
>>
>> explicit SharedPointer(T* ptr)
>>
>> {
>>
>> /*
>>
>> if (ptr)
>>
>> {
>>
>> impl = new SharedPointerImpl(ptr,
>> reinterpret_cast(
>> terDefaultDeleter));
>>
>> ImplEnableShared(ptr, impl);
>>
>> }
>>
>> else
>>
>> impl = 0;
>>
>> */
>>
>> SharedPointer(ptr, );
>>
>> }
>>
>> Otherwise, the following error is reported:
>>
>> *   ../common/include/ignite/common/concurrent.h: In constructor
>> 'ignite::common::concurrent::SharedPointer::SharedPointer(T*) [with T =
>> ignite::impl::cache::CacheImpl]':*
>>
>> *../core/include/ignite/cache/cache.h:71:   instantiated from
>> 'ignite::cache::Cache::Cache(ignite::impl::cache::CacheImpl*) [with K
>> = int, V = int]'*
>>
>> *../core/include/ignite/ignite.h:133:   instantiated from
>> 'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*,
>> ignite::IgniteError*) [with K = int, V = int]'*
>>
>> *../core/include/ignite/ignite.h:112:   instantiated from
>> 'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*)
>> [with K = int, V = int]'*
>>
>> *src/ignite.cpp:207:   instantiated from here*
>>
>> *../common/include/ignite/common/concurrent.h:145: error: address of
>> overloaded function with no contextual type information*
>>
>>
>> *  OK, I have modified them, and then the build succeeds, but when I run
>> the example it core dumps, and I think that memory is corrupted while
>> the process is running.*
>>
>>
>> * How can I solve it ?*
>>
>>  Thank you very much!
>>
>
>


Re: C++ API can't build successfully in my linux environment

2016-11-14 Thread smile
I used ignite1.7.0

------ Original Message ------
From: "Igor Sapego"
Sent: Monday, Nov 14, 2016, 10:09 PM
To: "user"
Subject: Re: C++ API can't build successfully in my linux environment


Hi,

Which version do you use?


Best Regards,
Igor



 
On Mon, Nov 14, 2016 at 4:59 PM, smile  wrote:
Hi, all
  I build the C++ API in a Linux environment (CentOS, g++ version 4.4.6).
I get a lot of build errors, and I have to modify the code, including:

1. ignite_error.h: ~IgniteError(); changed to ~IgniteError() throw();

2. ignite_error.cpp: IgniteError::~IgniteError() changed to
IgniteError::~IgniteError() throw();

3. nullPtr in java.cpp changed to NULL

4. explicit SharedPointer(T* ptr) in concurrent.h changed to:

 explicit SharedPointer(T* ptr)


 {


 /*


 if (ptr)


 {


 impl = new SharedPointerImpl(ptr, 
reinterpret_cast());


 ImplEnableShared(ptr, impl);


 }


 else


 impl = 0;


 */


 SharedPointer(ptr, );

 


 }

 Otherwise, the following error is reported:

 ../common/include/ignite/common/concurrent.h: In constructor 
'ignite::common::concurrent::SharedPointer::SharedPointer(T*) [with T = 
ignite::impl::cache::CacheImpl]':

../core/include/ignite/cache/cache.h:71:  instantiated from 
'ignite::cache::Cache::Cache(ignite::impl::cache::CacheImpl*) [with K = 
int, V = int]'

../core/include/ignite/ignite.h:133:  instantiated from 
'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*, 
ignite::IgniteError*) [with K = int, V = int]'

../core/include/ignite/ignite.h:112:  instantiated from 
'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*) [with 
K = int, V = int]'

src/ignite.cpp:207:  instantiated from here

../common/include/ignite/common/concurrent.h:145: error: address of overloaded 
function with no contextual type information




 OK, I have modified them, and then the build succeeds, but when I run the
example it core dumps, and I think that memory is corrupted while the
process is running.




How can I solve it ?

Thank you very much!

Re: Cache Memory Behavior \ GridDhtLocalPartition

2016-11-14 Thread Isaeed Mohanna
Hi,
My cache key class is java.util.UUID.
I am not using any collocation affinity. Could you please elaborate on how
I can use a constant affinity function to check whether a cache entry still
exists on a backup?

Thanks

On Mon, Nov 14, 2016 at 4:50 PM, Andrey Mashenkov 
wrote:

> Hi,
>
> On a remove operation, the entry should be removed from the primary node
> and the backup nodes as well.
> Can you reproduce the issue? Can you check whether the entry was removed
> only from the primary node and still exists on a backup, e.g. using a
> constant affinity function?
>
> I think it's possible that the backup is not being cleaned due to key
> serialization issues. Would you provide the key class implementation?
>
> On Sun, Nov 13, 2016 at 4:14 PM, Isaeed Mohanna  wrote:
>
>> Hi
>> There is no eviction policy, since entries in the caches are removed by
>> my application (by calling IgniteCache.remove).
>>
>> Digging through the core dump, I can see that most resident items are
>> cache entries where cacheContext.cacheName points to EventsCache. As I
>> have mentioned before, this cache has very frequent writes and deletions
>> of events (I am using remove to delete the events). However, this cache
>> is also atomic, partitioned, and has at least one backup, so in case a
>> node fails the event is not lost. When calling remove on a cache, is the
>> backup of an entry removed as well? Is it possible that the backup is not
>> being cleaned?
>>
>> Currently I am using the default garbage collector settings, and I can't
>> see any spikes in performance due to GC. Since the memory runs out over
>> several days, I am not sure whether I am creating data faster than the GC
>> is able to reclaim it. I will try manually triggering a GC when the
>> system is about to crash to see whether forcing a GC cleans the memory.
>>
>> Thank you for your help
>>
>> On Fri, Nov 11, 2016 at 11:59 AM, Andrey Mashenkov <
>> amashen...@gridgain.com> wrote:
>>
>>> Hi Isaeed Mohanna,
>>>
>>> I don't see any eviction or expiry policy configured. Is entry deletion
>>> performed by your application?
>>>
>>> Have you tried to detect which cache's size grows unexpectedly?
>>> Have you analysed GC logs or tried to tune GC? Actually, you may be putting
>>> data faster than garbage is collected. This page may be helpful:
>>> http://apacheignite.gridgain.org/v1.7/docs/performance-tips#tune-garbage-collection.
>>>
>>> Also you can get profile (with e.g. JavaFlightRecorder) of grid under
>>> load to understand what is really going on.
>>>
>>> Please let me know, if there are any issues.
>>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 10:10 AM, Isaeed Mohanna 
>>> wrote:
>>>
 Hi
 My cache configurations appear below.

 // Cache 1 - a cache of ~15 entities that has a date stamp that is
 updated every 30 - 120 seconds
 CacheConfiguration Cache1Cfg = new CacheConfiguration<>();
 Cache1Cfg.setName("Cache1Name");
 Cache1Cfg .setCacheMode(CacheMode.REPLICATED);
 Cache1Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
 Cache1Cfg .setStartSize(50);

 // Cache 2 - A cache used as an ignite queue with frequent inserts and
 removal from the queue
 CacheConfiguration Cache2Cfg = new CacheConfiguration<>();
 Cache2Cfg .setName("Cache2Name");
 Cache2Cfg .setCacheMode(CacheMode.REPLICATED);
 Cache2Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);

 // Cache 3 - hundreds of entities updated daily
 CacheConfiguration Cache3Cfg = new CacheConfiguration<>();
 Cache3Cfg .setName("Cache3Name");
 Cache3Cfg .setCacheMode(CacheMode.REPLICATED);
 Cache3Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
 Cache3Cfg .setIndexedTypes(UUID.class, SomeClass.class);

 // Cache 4 - Cache with very few writes and reads
 CacheConfiguration Cache4Cfg = new CacheConfiguration<>();
 Cache4Cfg .setName("Cache4Name");
 Cache4Cfg .setCacheMode(CacheMode.REPLICATED);
 Cache4Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);

 // Events Cache - cache with very frequent writes and delete, acts as
 events queue
 CacheConfiguration eventsCacheConfig= new CacheConfiguration<>();
 eventsCacheConfig.setName("EventsCache");
 eventsCacheConfig.setCacheMode(CacheMode.PARTITIONED);
 eventsCacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
 eventsCacheConfig.setIndexedTypes(UUID.class, SomeClass.class);
 eventsCacheConfig.setBackups(1);
 eventsCacheConfig.setOffHeapMaxMemory(0);

 // Failed Events Cache - cache with less writes and reads stores failed
 events
 CacheConfiguration failedEventsCacheConfig = new
 CacheConfiguration<>();
 failedEventsCacheConfig.setName("FailedEventsCache");
 failedEventsCacheConfig.setCacheMode(CacheMode.PARTITIONED);
 failedEventsCacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
 failedEventsCacheConfig.setIndexedTypes(UUID.class, EventEntity.class);
 

Re: How can I detect DB writing abnormal in case of write-behind?

2016-11-14 Thread Vladislav Pyatkov
Hi,

I do not understand what you mean by: "In this process, something wrong in the
DB, then data can not be written into it."

How did you detect this?
If the "write behind" flag is set to true, data will be inserted into the DB
asynchronously (in a dedicated thread). You should wait until the data has been
saved into the DB.

Using cache metrics you can watch the number of "put" operations on a cache
(org.apache.ignite.cache.CacheMetrics#getCachePuts).
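As a rough sketch, a watchdog could also poll the write-behind metrics; this assumes statistics are enabled on the cache (CacheConfiguration.setStatisticsEnabled(true)) and uses getters from the public CacheMetrics interface:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheMetrics;

public class WriteBehindWatchdog {
    /** Prints write-behind health indicators for the given cache. */
    public static void report(Ignite ignite, String cacheName) {
        IgniteCache<?, ?> cache = ignite.cache(cacheName);
        CacheMetrics m = cache.metrics();

        // Total number of put operations seen by the cache.
        System.out.println("puts: " + m.getCachePuts());

        // Entries waiting to be flushed to the store; a buffer that only
        // grows suggests the underlying DB is failing or too slow.
        System.out.println("write-behind buffer size: " + m.getWriteBehindBufferSize());

        // Entries that could not be flushed and are being retried.
        System.out.println("flush retries: " + m.getWriteBehindErrorRetryCount());
    }
}
```

Calling report() periodically and alerting when the buffer size keeps growing is one way to detect a failing store early.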

On Mon, Nov 14, 2016 at 5:44 PM, ght230  wrote:

> Hello:
>
> I am trying to put some data to a cache configured with
> write-through and write-behind.
>
> In this process, something goes wrong in the DB, and then the data
> cannot be written into it.
>
> I want to know how can I detect the database error as soon as possible?
>
> Can I detect it by the metrics of the CacheMetrics?
>
> If the answer is "YES", there are so many metrics in the
> class "org.apache.ignite.cache.CacheMetrics", which one can I use?
>



-- 
Vladislav Pyatkov


Re: Cache Memory Behavior \ GridDhtLocalPartition

2016-11-14 Thread Andrey Mashenkov
Hi,

On a remove operation, the entry should be removed from the primary node and
from backup nodes as well.
Can you reproduce the issue? Can you check if the entry was removed only from
the primary node and still exists on a backup, e.g. using a constant affinity function?

I think it's possible that the backup is not being cleaned, due to key
serialization issues. Would you provide the key class implementation?
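A sketch of inspecting the key-to-node mapping with the public Affinity API (the cache name "EventsCache" is taken from the configuration earlier in this thread):

```java
import java.util.Collection;
import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

public class BackupMappingCheck {
    /** Prints which nodes hold the primary and backup copies of a key. */
    public static void printMapping(Ignite ignite, UUID key) {
        Affinity<UUID> aff = ignite.affinity("EventsCache");

        // Primary node for the key.
        ClusterNode primary = aff.mapKeyToNode(key);
        System.out.println("primary: " + primary.id());

        // Primary followed by backups; after cache.remove(key) the entry
        // should be absent from every node in this collection.
        Collection<ClusterNode> owners = aff.mapKeyToPrimaryAndBackups(key);
        for (ClusterNode n : owners)
            System.out.println("owner: " + n.id());
    }
}
```

Running this for a removed key and then checking each listed node (e.g. with a local peek) would show whether a stale backup copy remains.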

On Sun, Nov 13, 2016 at 4:14 PM, Isaeed Mohanna  wrote:

> Hi
> There is no eviction policy since entries in the caches are removed by my
> application. (calling IgniteCache.remove ).
>
> Digging through the core dump I can see that most resident items are cache
> entries where the cacheContext.cacheName points to EventsCache, as i have
> mentioned before this cache has very frequent writes and deletions of
> events (i am using remove to delete the events), however this cache is also
> atomic,partitioned and have a backup of at least one so in case a node
> fails the event is not lost. when calling remove on a cache, is the backup
> of an entry removed as well? is it possible that the backup is not being
> cleaned?
>
> Currently i am using the default garbage collector settings, i can't see
> any spikes in performance due to GC, since i experience memory outage in
> several days i am not sure i am collecting data more than the GC is able to
> claim, I will try manually performing a GC when the system is about to
> crash to see whether forcing a GC will clean the memory.
>
> Thank you for your help
>
> On Fri, Nov 11, 2016 at 11:59 AM, Andrey Mashenkov <
> amashen...@gridgain.com> wrote:
>
>> Hi Isaeed Mohanna,
>>
>> I don't see any eviction or expiry policy configured. Is entry deletion
>> performed by your application?
>>
>> Have you tried to detect which cache's size grows unexpectedly?
>> Have you analysed GC logs or tried to tune GC? Actually, you may be putting
>> data faster than garbage is collected. This page may be helpful:
>> http://apacheignite.gridgain.org/v1.7/docs/performance-tips#tune-garbage-collection.
>>
>> Also you can get profile (with e.g. JavaFlightRecorder) of grid under
>> load to understand what is really going on.
>>
>> Please let me know, if there are any issues.
>>
>>
>>
>> On Thu, Nov 10, 2016 at 10:10 AM, Isaeed Mohanna 
>> wrote:
>>
>>> Hi
>>> My cache configurations appear below.
>>>
>>> // Cache 1 - a cache of ~15 entities that has a date stamp that is
>>> updated every 30 - 120 seconds
>>> CacheConfiguration Cache1Cfg = new CacheConfiguration<>();
>>> Cache1Cfg.setName("Cache1Name");
>>> Cache1Cfg .setCacheMode(CacheMode.REPLICATED);
>>> Cache1Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>> Cache1Cfg .setStartSize(50);
>>>
>>> // Cache 2 - A cache used as an ignite queue with frequent inserts and
>>> removal from the queue
>>> CacheConfiguration Cache2Cfg = new CacheConfiguration<>();
>>> Cache2Cfg .setName("Cache2Name");
>>> Cache2Cfg .setCacheMode(CacheMode.REPLICATED);
>>> Cache2Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>>
>>> // Cache 3 - hundreds of entities updated daily
>>> CacheConfiguration Cache3Cfg = new CacheConfiguration<>();
>>> Cache3Cfg .setName("Cache3Name");
>>> Cache3Cfg .setCacheMode(CacheMode.REPLICATED);
>>> Cache3Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>> Cache3Cfg .setIndexedTypes(UUID.class, SomeClass.class);
>>>
>>> // Cache 4 - Cache with very few writes and reads
>>> CacheConfiguration Cache4Cfg = new CacheConfiguration<>();
>>> Cache4Cfg .setName("Cache4Name");
>>> Cache4Cfg .setCacheMode(CacheMode.REPLICATED);
>>> Cache4Cfg .setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>>
>>> // Events Cache - cache with very frequent writes and delete, acts as
>>> events queue
>>> CacheConfiguration eventsCacheConfig= new CacheConfiguration<>();
>>> eventsCacheConfig.setName("EventsCache");
>>> eventsCacheConfig.setCacheMode(CacheMode.PARTITIONED);
>>> eventsCacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>> eventsCacheConfig.setIndexedTypes(UUID.class, SomeClass.class);
>>> eventsCacheConfig.setBackups(1);
>>> eventsCacheConfig.setOffHeapMaxMemory(0);
>>>
>>> // Failed Events Cache - cache with less writes and reads stores failed
>>> events
>>> CacheConfiguration failedEventsCacheConfig = new
>>> CacheConfiguration<>();
>>> failedEventsCacheConfig.setName("FailedEventsCache");
>>> failedEventsCacheConfig.setCacheMode(CacheMode.PARTITIONED);
>>> failedEventsCacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>> failedEventsCacheConfig.setIndexedTypes(UUID.class, EventEntity.class);
>>> failedEventsCacheConfig.setBackups(1);
>>> failedEventsCacheConfig.setOffHeapMaxMemory(0);
>>>
>>> // In addition i have one atomic reference
>>> AtomicConfiguration atomicCfg = new AtomicConfiguration();
>>> atomicCfg.setCacheMode(CacheMode.REPLICATED);
>>> Thanks again
>>>
>>> On Wed, Nov 9, 2016 at 5:26 PM, Andrey Mashenkov <
>>> amashen...@gridgain.com> wrote:
>>>
 Hi Isaeed Mohanna,

 Would you please provide 

How can I detect DB writing abnormal in case of write-behind?

2016-11-14 Thread ght230
Hello:

I am trying to put some data to a cache configured with write-through and 
write-behind.

In this process, something goes wrong in the DB, and then the data cannot be
written into it.

I want to know how can I detect the database error as soon as possible?

Can I detect it by the metrics of the CacheMetrics?

If the answer is "YES", there are so many metrics in the class 
"org.apache.ignite.cache.CacheMetrics", which one can I use?


Re: java.lang.ClassNotFoundException: Failed to peer load class

2016-11-14 Thread vdpyatkov
Hi Alsex,

Can you please provide the class com.testlab.api.inf.dao.RepositoryDao?
There may be a serialization issue with that particular class.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/java-lang-ClassNotFoundException-Failed-to-peer-load-class-tp8778p8953.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Multithreading SQL queries in Apache Ignite

2016-11-14 Thread Andrey Gura
From my point of view, it depends. Only performance measurements can give an
answer.

On Mon, Nov 14, 2016 at 5:06 PM, rishi007bansod 
wrote:

> I have set it to default value i.e. double the number of cores. But will it
> improve performance if I increase it further?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Multithreading-SQL-queries-in-
> Apache-Ignite-tp8944p8949.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Multithreading SQL queries in Apache Ignite

2016-11-14 Thread rishi007bansod
I have set it to default value i.e. double the number of cores. But will it
improve performance if I increase it further?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Multithreading-SQL-queries-in-Apache-Ignite-tp8944p8949.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Apache Spark & Ignite Integration

2016-11-14 Thread pragmaticbigdata
Ok. Is there a jira task that I can track for the dataframes and datasets
support?

I do have a couple of follow up questions to understand the memory
representation of the shared RDD support that ignite brings with the spark
integration. 

1. Could you detail how shared RDDs are implemented when Ignite is
deployed in standalone mode? Assuming we have an Ignite cluster containing
a cache named "partitioned", would creating an IgniteRDD through val
sharedRDD: IgniteRDD[Int,Int] = ic.fromCache("partitioned") create another
copy of the cache on the Spark executor JVM, or would the Spark executor
operate on the original copy of the cache that is present on the Ignite
nodes? I am mostly interested in understanding the performance impact of data
shuffling or movement, if there is any.

2. Since Spark does not have transaction support, how can I use the ACID
transaction support that Ignite provides when updating RDDs? A code example
would be helpful if possible.

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Apache-Spark-Ignite-Integration-tp8556p8951.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: C++ API can't build successfully in my linux environment

2016-11-14 Thread Igor Sapego
Hi,

Which version do you use?

Best Regards,
Igor

On Mon, Nov 14, 2016 at 4:59 PM, smile  wrote:

> Hi, all
> I build the C++ API in a Linux environment (CentOS), and the g++
> version is 4.4.6. I get a lot of build errors, and I have to modify the code,
> including:
>
> 1. ignite_error.h: change ~IgniteError(); to ~IgniteError() throw();
>
> 2. ignite_error.cpp: change IgniteError::~IgniteError() to
> IgniteError::~IgniteError() throw();
>
> 3. java.cpp: change nullPtr to NULL
>
> 4. concurrent.h: change explicit SharedPointer(T* ptr) to:
>
> explicit SharedPointer(T* ptr)
>
> {
>
> /*
>
> if (ptr)
>
> {
>
> impl = new SharedPointerImpl(ptr, reinterpret_cast<
> SharedPointerImpl::DeleterType>());
>
> ImplEnableShared(ptr, impl);
>
> }
>
> else
>
> impl = 0;
>
> */
>
> SharedPointer(ptr, );
>
> }
>
> Otherwise, the following error message is reported:
>
> *   ../common/include/ignite/common/concurrent.h: In constructor
> 'ignite::common::concurrent::SharedPointer::SharedPointer(T*) [with T =
> ignite::impl::cache::CacheImpl]':*
>
> *../core/include/ignite/cache/cache.h:71:   instantiated from
> 'ignite::cache::Cache::Cache(ignite::impl::cache::CacheImpl*) [with K
> = int, V = int]'*
>
> *../core/include/ignite/ignite.h:133:   instantiated from
> 'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*,
> ignite::IgniteError*) [with K = int, V = int]'*
>
> *../core/include/ignite/ignite.h:112:   instantiated from
> 'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*)
> [with K = int, V = int]'*
>
> *src/ignite.cpp:207:   instantiated from here*
>
> *../common/include/ignite/common/concurrent.h:145: error: address of
> overloaded function with no contextual type information*
>
>
> *  OK, I have modified them and then the build succeeds, but when I run the
> example it core dumps, and I think that memory is being corrupted while the
> process is running.*
>
>
> * How can I solve it ?*
>
>  Thank you very much!
>


C++ API can't build successfully in my linux environment

2016-11-14 Thread smile
Hi, all
I build the C++ API in a Linux environment (CentOS), and the g++ version
is 4.4.6. I get a lot of build errors, and I have to modify the code, including:

1. ignite_error.h: change ~IgniteError(); to ~IgniteError() throw();

2. ignite_error.cpp: change IgniteError::~IgniteError() to
IgniteError::~IgniteError() throw();

3. java.cpp: change nullPtr to NULL

4. concurrent.h: change explicit SharedPointer(T* ptr) to:

explicit SharedPointer(T* ptr)

{

/*

if (ptr)

{

impl = new SharedPointerImpl(ptr,
reinterpret_cast<SharedPointerImpl::DeleterType>());

ImplEnableShared(ptr, impl);

}

else

impl = 0;

*/

SharedPointer(ptr, );

 

}

Otherwise, the following error is reported:

   ../common/include/ignite/common/concurrent.h: In constructor 
'ignite::common::concurrent::SharedPointer::SharedPointer(T*) [with T = 
ignite::impl::cache::CacheImpl]':

../core/include/ignite/cache/cache.h:71:   instantiated from 
'ignite::cache::Cache::Cache(ignite::impl::cache::CacheImpl*) [with K = 
int, V = int]'

../core/include/ignite/ignite.h:133:   instantiated from 
'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*, 
ignite::IgniteError*) [with K = int, V = int]'

../core/include/ignite/ignite.h:112:   instantiated from 
'ignite::cache::Cache ignite::Ignite::GetOrCreateCache(const char*) [with 
K = int, V = int]'

src/ignite.cpp:207:   instantiated from here

../common/include/ignite/common/concurrent.h:145: error: address of overloaded 
function with no contextual type information




  OK, I have modified them and then the build succeeds, but when I run the
example it core dumps, and I think that memory is being corrupted while the
process is running.




 How can I solve it ?

 Thank you very much!

Re: Remote Server Thread Not exit when Job finished,Cause out of memory

2016-11-14 Thread alex
Hi vdpyatkov, I tried it like this, but the problem still exists. The client
code is as below:

ClusterGroup rmts = getIgnite().cluster().forRemotes();
IgniteCompute compute = getIgnite().compute(rmts).withAsync();
compute.apply(new IgniteClosure<String, String>() {
@Override
public String apply(String o) {
return o;
}
}, Arrays.asList("Print words using runnable".split(" ")));

IgniteFuture future = compute.future();
future.cancel();

getIgnite().close();



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Remote-Server-Thread-Not-exit-when-Job-finished-Cause-out-of-memory-tp8934p8947.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Multithreading SQL queries in Apache Ignite

2016-11-14 Thread Andrey Gura
Hi,

Of course all requests will be processed concurrently by the server. At the
moment, queries are executed in the Ignite system pool. You can adjust the size
of this pool if needed using the IgniteConfiguration.setSystemThreadPoolSize()
method.
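As a minimal sketch of adjusting that pool at startup (the value 64 is only an example; the setter is part of the public IgniteConfiguration API):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ServerWithLargerSystemPool {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // SQL queries are executed in the system pool; the default size is
        // derived from the CPU count. 64 is only an example value.
        cfg.setSystemThreadPoolSize(64);

        Ignition.start(cfg);
    }
}
```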

On Mon, Nov 14, 2016 at 4:10 PM, rishi007bansod 
wrote:

> In my case I have data present on 1 server node and 25 clients connected to
> this server, concurrently firing sql queries. So, does Ignite by default
> parallelizes these queries or do we have to do some settings? Can we apply
> some kind of multithreading on server side to handle these queries for
> performance improvement?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Multithreading-SQL-queries-in-
> Apache-Ignite-tp8944.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Problem with Continuous query for listening updates

2016-11-14 Thread Andry
Have you used the configuration from the ignite-examples module
("examples/config/example-ignite.xml")?

I asked my colleagues to run that code and they got the same result as me.

Also, I've found that if I disable peer class loading (peerClassLoadingEnabled in
example-default.xml) it works as expected for that example.
But in our project we have already disabled peer class loading, and only removing
indexedTypes from the cache configuration makes the old value in the update
event work as expected.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Problem-with-Continuous-query-for-listening-updates-tp8709p8945.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Multithreading SQL queries in Apache Ignite

2016-11-14 Thread rishi007bansod
In my case I have data present on 1 server node and 25 clients connected to
this server, concurrently firing sql queries. So, does Ignite by default
parallelizes these queries or do we have to do some settings? Can we apply
some kind of multithreading on server side to handle these queries for
performance improvement?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Multithreading-SQL-queries-in-Apache-Ignite-tp8944.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: start C++ server in linux, it report "Failed to initialize JVM" error

2016-11-14 Thread vdpyatkov
Hi,

Ignite is fully compatible with JDK 7.
I think you are trying to run Ignite on a JDK version lower than 7.

Class file version 51.0 corresponds to JDK 1.7.

Please make sure that Ignite runs on the correct JDK (check your environment).



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/start-C-server-in-linux-it-report-Failed-to-initialize-JVM-error-tp8907p8943.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Partitioning on a non-uniform cluster

2016-11-14 Thread Andrey Gura
Hi,

In order to implement a resource-aware affinity function you can use node
attributes (available via the ClusterNode interface). Note that node attributes
must be initialized before the Ignite node is started (except for the default
node attributes, which can be found in the IgniteNodeAttributes class).

Also, you can start one Ignite instance on the 16GB nodes and two instances on
the 32GB nodes. In this case you should configure RendezvousAffinityFunction
with the excludeNeighbors == true flag in order to increase cluster reliability.
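A minimal sketch of setting such an attribute before start; the attribute name "cache.capacity.gb" is our own convention for this example, not an Ignite constant:

```java
import java.util.Collections;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NodeWithCapacityAttribute {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // User attribute set before start. A custom AffinityFunction could
        // later read it via ClusterNode.attribute("cache.capacity.gb") and
        // assign proportionally more partitions to larger nodes.
        cfg.setUserAttributes(Collections.singletonMap("cache.capacity.gb", 32));

        Ignition.start(cfg);
    }
}
```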


On Mon, Nov 14, 2016 at 3:09 PM, Krzysztof  wrote:

> Hello,
>
> Judging by the documentation and some discussions on this list, can you
> confirm that Ignite cache does not take into account different memory
> settings, i.e. if we have various nodes with 16GB and 32GB allocated for
> cache, there would be no two times more partitions assigned to larger
> nodes?
>
> In order not to underutilize larger nodes or overfill smaller nodes, we
> would have to develop our own affinity strategy via AffinityFunction in
> order to make it cache-size aware?
>
> RendezvousAffinityFunction seems to be completely resource-blind?
>
> Could you please clarify what would be the best way to achieve balanced
> distribution cluster memory-wise?
>
> Thanks
> Krzysztof
>
>
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Partitioning-on-a-non-uniform-cluster-tp8940.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Remote Server Thread Not exit when Job finished,Cause out of memory

2016-11-14 Thread Vladislav Pyatkov
Hi Alex,

You should do like this:

IgniteCompute compute = ignite.compute().withAsync();
compute.apply(...)
IgniteFuture future = compute.future();
...
future.cancel();

And handle the Thread.interrupted() flag inside the closure.
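A sketch of a closure that cooperates with cancellation; the interrupt check is the part the closure author must add (the class name here is illustrative):

```java
import org.apache.ignite.lang.IgniteClosure;

public class CancellableClosure implements IgniteClosure<String, String> {
    @Override public String apply(String word) {
        // future.cancel() interrupts the thread running the job on the
        // server; a long-running closure should poll the flag and return
        // early instead of continuing its work.
        if (Thread.currentThread().isInterrupted())
            return null;

        return word;
    }
}
```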

On Mon, Nov 14, 2016 at 2:35 PM, alex  wrote:

> Thanks vdpyatkov.
>
> Currently, the number of threads is not the problem.
>
> The problem is: when the client finishes, how do we finish the threads on the
> server node which were created by this client?
>
> For example, client code is:
>
> String cacheKey = "jobIds";
> String cname = "myCacheName";
> ClusterGroup rmts =ignite.cluster().forRemotes();
> IgniteCache<String, List<String>> cache = ignite.getOrCreateCache(cname);
> List<String> jobList = cache.get(cacheKey);
> Collection<String> res = ignite.compute(rmts).apply(
> new IgniteClosure<String, String>() {
> @Override
> public String apply(String word) {
> return word;
> }
> },
> jobList
> );
> ignite.close();
> System.out.println("ignite Closed");
>
> if (res == null) {
> System.out.println("Error: Result is null");
> return;
> }
>
> res.forEach(s -> {
> System.out.println(s);
> });
> System.out.println("Finished!");
>
>
> When the client initiates an Ignite instance, the server side creates 6 threads
> for this computing job.
> After the client program exits, the 6 threads are still alive on the server,
> and never exit until I kill the server.
> How can I finish these threads gracefully after the client job has finished?
>
> Thanks for any suggestions
>
>
> vdpyatkov wrote
> > Hi Alex,
> > I think, these threads are executing into pools of threads, and number of
> > threads always restricted by pool size[1].
> > You can configure sizes manually:
> >
> >
> > 
> >
> > 
> > [1]:
> > https://apacheignite.readme.io/v1.7/docs/performance-tips#
> configure-thread-pools
>
>
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Remote-Server-Thread-Not-exit-when-
> Job-finished-Cause-out-of-memory-tp8934p8939.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Partitioning on a non-uniform cluster

2016-11-14 Thread Krzysztof
Hello,

Judging by the documentation and some discussions on this list, can you
confirm that Ignite cache does not take into account different memory
settings, i.e. if we have various nodes with 16GB and 32GB allocated for
cache, there would be no two times more partitions assigned to larger nodes?

In order not to underutilize larger nodes or overfill smaller nodes, we
would have to develop our own affinity strategy via AffinityFunction in
order to make it cache-size aware?

RendezvousAffinityFunction seems to be completely resource-blind?

Could you please clarify what would be the best way to achieve balanced
distribution cluster memory-wise?

Thanks
Krzysztof







--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Partitioning-on-a-non-uniform-cluster-tp8940.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Remote Server Thread Not exit when Job finished,Cause out of memory

2016-11-14 Thread alex
Thanks vdpyatkov.

Currently, the number of threads is not the problem.

The problem is: when the client finishes, how do we finish the threads on the
server node which were created by this client?

For example, client code is:

String cacheKey = "jobIds";
String cname = "myCacheName";
ClusterGroup rmts =ignite.cluster().forRemotes();
IgniteCache<String, List<String>> cache = ignite.getOrCreateCache(cname);
List<String> jobList = cache.get(cacheKey);
Collection<String> res = ignite.compute(rmts).apply(
new IgniteClosure<String, String>() {
@Override
public String apply(String word) {
return word;
}
},
jobList
);
ignite.close();
System.out.println("ignite Closed");

if (res == null) {
System.out.println("Error: Result is null");
return;
}

res.forEach(s -> {
System.out.println(s);
});
System.out.println("Finished!");


When the client initiates an Ignite instance, the server side creates 6 threads
for this computing job.
After the client program exits, the 6 threads are still alive on the server,
and never exit until I kill the server.
How can I finish these threads gracefully after the client job has finished?

Thanks for any suggestions


vdpyatkov wrote
> Hi Alex,
> I think, these threads are executing into pools of threads, and number of
> threads always restricted by pool size[1].
> You can configure sizes manually:
> 
>   
> 
>   
> 
> [1]:
> https://apacheignite.readme.io/v1.7/docs/performance-tips#configure-thread-pools







--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Remote-Server-Thread-Not-exit-when-Job-finished-Cause-out-of-memory-tp8934p8939.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Remote Server Thread Not exit when Job finished,Cause out of memory

2016-11-14 Thread vdpyatkov
Hi Alex,
I think these threads are executed in thread pools, and the number of
threads is always restricted by the pool size [1].
You can configure the sizes manually:

  
  

[1]:
https://apacheignite.readme.io/v1.7/docs/performance-tips#configure-thread-pools



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Remote-Server-Thread-Not-exit-when-Job-finished-Cause-out-of-memory-tp8934p8938.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Cache setIndexedTypes Question

2016-11-14 Thread vdpyatkov
Hi,

You can use CacheConfiguration.setIndexedTypes(Key.class, Val.class) together
with annotations, or (not both) a QueryEntity and configure the indexes in it.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cache-setIndexedTypes-Question-tp8895p8937.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Remote Server Thread Not exit when Job finished,Cause out of memory

2016-11-14 Thread alex
Hi. When I start a remote compute job, calling call() or affinityCall(), the
remote server will create 6 threads, and these threads never exit, just like
the VisualVM screenshot shows:


 

thread name from "utility-#153%null%" to "marshaller-cache-#14i%null%"

After the local client gets the results from the server, it uses ignite.close()
to close the connection. But the 6 threads are still alive and never end until
the server node exits.

The problem is that when I create hundreds of jobs, the huge number of
threads will cause an out-of-memory condition.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Remote-Server-Thread-Not-exit-when-Job-finished-Cause-out-of-memory-tp8934.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.