Re: SQLQuery with simple Join return no results

2017-09-01 Thread Denis Magda
Yes, this is one of the ways.

Denis

On Friday, September 1, 2017, matt  wrote:

> OK thanks for that. So does that then mean that the key type (K) for my
> Cache
> needs to be AffinityKey ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Quick questions on Evictions

2017-09-01 Thread John Wilson
Hi,

I have been reading through the Ignite docs and I still have these questions. I
would appreciate your answers.

Assume my Ignite native persistence is *not* enabled:


   1. If on-heap cache is also not enabled, then there are no entry-based
   evictions, right?
   2. If on-heap cache is now enabled, does a write to the on-heap cache also
   result in write-through or write-behind behavior to the off-heap entry?
   3. If on-heap cache is not enabled but data page eviction mode is
   enabled, then where are evicted pages from off-heap written to?

and, need confirmation on how data page eviction is implemented:

4. When a data page eviction is initiated, Ignite works by iterating
through each entry in the page and evicting entries one by one. It may
happen that certain entries are involved in active transactions and
hence may not be evicted at all.


Thanks,


Re: SQLQuery with simple Join return no results

2017-09-01 Thread matt
OK thanks for that. So does that then mean that the key type (K) for my Cache
needs to be AffinityKey ?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: SQLQuery with simple Join return no results

2017-09-01 Thread Denis Magda
If you use Strings as the keys you won't get affinity collocation set up 
properly, and distributed joins will return an incomplete result. One of the 
keys has to comprise the "parent" class's key, which will serve as the affinity 
key. Look at the example here:
https://apacheignite.readme.io/docs/affinity-collocation#section-collocate-data-with-data
 


As for the non-collocated joins suggested by Roger (qry.setDistributedJoins( 
true)), I would use them only if it's impossible to set up collocation 
between the 2 entities. That's not your case from what I see. Non-collocated 
joins are slower than collocated ones.
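A minimal sketch of the collocation Denis describes, assuming B entries should be stored alongside their "parent" A (the helper name and ids below are made up for illustration):

```java
import org.apache.ignite.cache.affinity.AffinityKey;

// Sketch: key B entries with an AffinityKey whose affinity component is
// the parent A's id, so each B lands on the same node as its parent A.
class CollocationKeys {
    static AffinityKey<String> bKey(String bId, String parentAId) {
        // bId identifies the entry; parentAId drives partition mapping.
        return new AffinityKey<>(bId, parentAId);
    }
}
```

With keys built this way the two entities end up on the same node, so the join no longer needs qry.setDistributedJoins(true).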

—
Denis


> On Sep 1, 2017, at 2:53 PM, Roger Fischer (CW)  wrote:
> 
> Hi Matt,
> 
> are the objects to join collocated, ie. do they have the same affinity key? 
> If yes, it should work (it worked for me).
> 
> If no, you need to enable distributed joins for the query. See the middle 
> line.
> 
>SqlFieldsQuery qry = new SqlFieldsQuery( stmt);
>qry.setDistributedJoins( true);
>queryCursor = aCache.query( qry);
> 
> Roger
> 
> -Original Message-
> From: matt [mailto:goodie...@gmail.com ] 
> Sent: Friday, September 01, 2017 1:52 PM
> To: user@ignite.apache.org 
> Subject: SQLQuery with simple Join return no results
> 
> I have 2 caches defined, both with String keys, and classes that make use of 
> the Ignite annotations for indexes and affinity. I've got 3 different nodes 
> running, and the code I'm using to populate the cache w/test data works, and 
> I can see each node is updated with its share of the data. My index types are 
> set on the caches as well.
> 
> If I do a ScanQuery, I can see that all of the fields and IDs are correct, 
> Ignite returns them all. But when doing a SqlQuery, I get nothing back.
> Ignite is not complaining about the query, it's just returning an empty 
> cursor.
> 
> If I remove the Join, results are returned.
> 
> So I'm wondering if this is related to the way I've set up my affinity 
> mapping. It's basically setup like the code below... and the query looks like 
> this:
> 
> "from B, A WHERE B.id = A.bID"
> 
> Any ideas on what I'm doing wrong here?
> 
> class A implements Serializable {
>  @QuerySqlField(index = true)
>  String id;
> 
>  @QuerySqlField(index = true)
>  String bId;  
> 
>  @AffinityKeyMapped
>  @QuerySqlField(index = true)
>  String group;
> }
> 
> class B implements Serializable {
>  @QuerySqlField(index = true)
>  String id;
> 
>  @AffinityKeyMapped
>  @QuerySqlField(index = true)
>  String group;
> }
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> 


RE: SQLQuery with simple Join return no results

2017-09-01 Thread Roger Fischer (CW)
Hi Matt,

are the objects to join collocated, i.e. do they have the same affinity key? If 
yes, it should work (it worked for me).

If no, you need to enable distributed joins for the query. See the middle line.

SqlFieldsQuery qry = new SqlFieldsQuery(stmt);
qry.setDistributedJoins(true);
queryCursor = aCache.query(qry);

Roger

-Original Message-
From: matt [mailto:goodie...@gmail.com] 
Sent: Friday, September 01, 2017 1:52 PM
To: user@ignite.apache.org
Subject: SQLQuery with simple Join return no results

I have 2 caches defined, both with String keys, and classes that make use of 
the Ignite annotations for indexes and affinity. I've got 3 different nodes 
running, and the code I'm using to populate the cache w/test data works, and I 
can see each node is updated with its share of the data. My index types are set 
on the caches as well.

If I do a ScanQuery, I can see that all of the fields and IDs are correct, 
Ignite returns them all. But when doing a SqlQuery, I get nothing back.
Ignite is not complaining about the query, it's just returning an empty cursor.

If I remove the Join, results are returned.

So I'm wondering if this is related to the way I've set up my affinity mapping. 
It's basically setup like the code below... and the query looks like this:

"from B, A WHERE B.id = A.bID"

Any ideas on what I'm doing wrong here?

class A implements Serializable {
  @QuerySqlField(index = true)
  String id;
  
  @QuerySqlField(index = true)
  String bId;  

  @AffinityKeyMapped
  @QuerySqlField(index = true)
  String group;
}

class B implements Serializable {
  @QuerySqlField(index = true)
  String id;

  @AffinityKeyMapped
  @QuerySqlField(index = true)
  String group;
}



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Memory is not going down after cache.clean()

2017-09-01 Thread davida
Thanks Dmitry,

Is it possible to limit the maximum memory size, if maxSize is configured
with memory policies for a cache without persistence ?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Memory is not going down after cache.clean()

2017-09-01 Thread Denis Magda
Yes, the maxSize parameter always limits the amount of RAM available to an 
Ignite node (it doesn't matter whether you use persistence or not).

https://apacheignite.readme.io/docs/memory-configuration 
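A sketch of that configuration against the Ignite 2.1-era API (the policy name and 2 GB figure are illustrative; 2.3+ replaced these classes with DataRegionConfiguration):

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.MemoryPolicyConfiguration;

// Sketch: cap off-heap memory at 2 GB via a named memory policy.
class MemoryLimitConfig {
    static IgniteConfiguration limitedTo2Gb() {
        MemoryPolicyConfiguration plc = new MemoryPolicyConfiguration();
        plc.setName("2gb_policy");                 // illustrative name
        plc.setMaxSize(2L * 1024 * 1024 * 1024);   // hard 2 GB cap

        MemoryConfiguration memCfg = new MemoryConfiguration();
        memCfg.setMemoryPolicies(plc);
        memCfg.setDefaultMemoryPolicyName("2gb_policy");

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setMemoryConfiguration(memCfg);
        return cfg;
    }
}
```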


—
Denis

> On Sep 1, 2017, at 2:19 PM, davida  wrote:
> 
> Thanks Dmitry,
> 
> Is it possible to limit the maximum memory size, if maxSize is configured
> with memory policies for a cache without persistence ?
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/




SQLQuery with simple Join return no results

2017-09-01 Thread matt
I have 2 caches defined, both with String keys, and classes that make use of
the Ignite annotations for indexes and affinity. I've got 3 different nodes
running, and the code I'm using to populate the cache w/test data works, and
I can see each node is updated with its share of the data. My index types
are set on the caches as well.

If I do a ScanQuery, I can see that all of the fields and IDs are correct,
Ignite returns them all. But when doing a SqlQuery, I get nothing back.
Ignite is not complaining about the query, it's just returning an empty
cursor.

If I remove the Join, results are returned.

So I'm wondering if this is related to the way I've set up my affinity
mapping. It's basically setup like the code below... and the query looks
like this:

"from B, A WHERE B.id = A.bID"

Any ideas on what I'm doing wrong here?

class A implements Serializable {
  @QuerySqlField(index = true)
  String id;
  
  @QuerySqlField(index = true)
  String bId;  

  @AffinityKeyMapped
  @QuerySqlField(index = true)
  String group;
}

class B implements Serializable {
  @QuerySqlField(index = true)
  String id;

  @AffinityKeyMapped
  @QuerySqlField(index = true)
  String group;
}



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

2017-09-01 Thread Dmitri Bronnikov
Last time I tried, which was early 2017, cache.getEntry() would pull an entry
from Cassandra (or whichever database is backing the cache) for me, while
cache.get() wouldn't. I then found somewhere, in the docs or on the board, that
this is to be expected. Can someone confirm? I was mostly interested in SQL,
which definitely won't see entries that aren't preloaded into caches, so I
don't clearly remember the "get" vs "getEntry" differences.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite 2.0 out of memory

2017-09-01 Thread ignite_user2016
Thanks for the quick response.

How many nodes do you have on one machine? We have a single Ignite node on
the machine.

How many visor clients on that machine? We mostly monitor Ignite using
ignitevisor, and it might have happened that a visor client did not shut down
correctly.

I think it could be an issue with Nagios monitoring, so we will see if we run
into the same issue again; we will also try to get the logs from the ops team.

Thanks ..



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

2017-09-01 Thread Evgenii Zhuravlev
I would recommend changing

ignite
    .getOrCreateCache(new CacheConfiguration<>(Application.TEST_CACHE_NAME))

to

ignite
    .cache(Application.TEST_CACHE_NAME)

just to check whether you are really accessing the same cache that was created
from your XML file, rather than creating a new one with an empty
CacheConfiguration.

Kind Regards,

Evgenii


2017-09-01 16:35 GMT+03:00 Kenan Dalley :

> FYI, my update wasn't that my problem was solved, but that the website got
> rid of the work that I did to post the code.  Please take a look at the
> code
> that I've posted yesterday because I'm still having the issue.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite 2.0 out of memory

2017-09-01 Thread Evgenii Zhuravlev
This error is not about heap memory; it indicates that you started more
threads within one process than the OS allows. On Linux you can raise the
limit with ulimit to overcome the issue.

Also, how many nodes do you have on one machine? How many visor clients on
that machine? Could you share logs from the nodes? Could it be possible that
you ran more than one visor and just forgot about it, because it's not shown
in the topology?

Evgenii


2017-09-01 20:48 GMT+03:00 ignite_user2016 :

> Hello igniters,
>
> I see the following warning in the log -
>
> [17:23:29,070][WARN ][main][TcpCommunicationSpi] Message queue limit is set
> to 0 which may lead to potential OOMEs when running cache operations in
> FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and
> receiver sides.
>
> Is this warning relevant to following error ?
>
> And then our server went down with following error -
>
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method) ~[?:1.8.0_121]
> at java.lang.Thread.start(Thread.java:714) ~[?:1.8.0_121]
> at
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(
> GridDiscoveryManager.java:726)
> ~[ignite-core-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.internal.IgniteKernal.startManager(
> IgniteKernal.java:1745)
> ~[ignite-core-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:968)
> [ignite-core-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(
> IgnitionEx.java:1895)
> [ignite-core-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(
> IgnitionEx.java:1647)
> [ignite-core-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1075)
> [ignite-core-2.0.0.jar:2.0.0]
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:595)
> [ignite-core-2.0.0.jar:2.0.0]
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:519)
> [ignite-core-2.0.0.jar:2.0.0]
> at org.apache.ignite.Ignition.start(Ignition.java:322)
> [ignite-core-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.visor.commands.open.VisorOpenCommand.open(
> VisorOpenCommand.scala:251)
> [ignite-visor-console-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.visor.commands.open.VisorOpenCommand.open(
> VisorOpenCommand.scala:219)
> [ignite-visor-console-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.visor.commands.open.VisorOpenCommand$$anonfun$2.
> apply(VisorOpenCommand.scala:306)
> [ignite-visor-console-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.visor.commands.open.VisorOpenCommand$$anonfun$2.
> apply(VisorOpenCommand.scala:306)
> [ignite-visor-console-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.visor.commands.VisorConsole.mainLoop(VisorConsole.scala:
> 217)
> [ignite-visor-console-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.visor.commands.VisorConsole$.delayedEndpoint$org$apache$
> ignite$visor$commands$VisorConsole$1(VisorConsole.scala:329)
> [ignite-visor-console-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.visor.commands.VisorConsole$delayedInit$body.apply(
> VisorConsole.scala:318)
> [ignite-visor-console-2.0.0.jar:2.0.0]
> at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> [scala-library-2.11.7.jar:?]
> at
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
> [scala-library-2.11.7.jar:?]
> at scala.App$$anonfun$main$1.apply(App.scala:76)
> [scala-library-2.11.7.jar:?]
> at scala.App$$anonfun$main$1.apply(App.scala:76)
> [scala-library-2.11.7.jar:?]
> at scala.collection.immutable.List.foreach(List.scala:381)
> [scala-library-2.11.7.jar:?]
> at
> scala.collection.generic.TraversableForwarder$class.
> foreach(TraversableForwarder.scala:35)
> [scala-library-2.11.7.jar:?]
> at scala.App$class.main(App.scala:76) [scala-library-2.11.7.jar:?]
> at
> org.apache.ignite.visor.commands.VisorConsole$.main(
> VisorConsole.scala:318)
> [ignite-visor-console-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.visor.commands.VisorConsole.main(VisorConsole.scala)
> [ignite-visor-console-2.0.0.jar:2.0.0]
> [22:25:28,566][ERROR][main][G] Failed to initialize striped pool.
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method) ~[?:1.8.0_121]
> at java.lang.Thread.start(Thread.java:714) ~[?:1.8.0_121]
> at
> org.apache.ignite.internal.util.StripedExecutor$Stripe.
> start(StripedExecutor.java:444)
> ~[ignite-core-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.internal.util.StripedExecutor.(
> StripedExecutor.java:84)
> [ignite-core-2.0.0.jar:2.0.0]
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(
> IgnitionEx.java:1729)
> [ignite-core-2.0.0.jar:2.0.0]
> at
> 

Re: Task management - MapReduce & ForkJoin performance penalty

2017-09-01 Thread ezhuravlev
Hi,

I've added Thread.sleep(200) to the jobs to simulate a small load.
Here is what I've got:
1 node: 1 task, 2000 jobs: ~25 sec
2 nodes (on the same machine): 1 task, 2000 jobs: ~13 sec

What I want to say here is that this overhead will not be noticeable with real jobs.

As for configuration changes, you could change
igniteConfiguration.setPublicThreadPoolSize(). By default it is set to the CPU
count (with hyperthreading), but for small tasks you can make it bigger, for
example CPU count * 2.
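A sketch of the suggested tweak (the multiplier is an example value, not a recommendation):

```java
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch: enlarge the public (compute) thread pool so many short
// jobs can overlap instead of queueing behind one-per-core threads.
class ComputePoolConfig {
    static IgniteConfiguration biggerPublicPool() {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setPublicThreadPoolSize(
            Runtime.getRuntime().availableProcessors() * 2);
        return cfg;
    }
}
```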

Also, Ignite ships with a benchmarks module; you can check it and find
benchmarks for the Compute Grid.

All the best,
Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite 2.0 out of memory

2017-09-01 Thread ignite_user2016
Hello igniters,

I see the following warning in the log - 

[17:23:29,070][WARN ][main][TcpCommunicationSpi] Message queue limit is set
to 0 which may lead to potential OOMEs when running cache operations in
FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and
receiver sides.

Is this warning relevant to following error ? 

And then our server went down with following error - 

java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method) ~[?:1.8.0_121]
at java.lang.Thread.start(Thread.java:714) ~[?:1.8.0_121]
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:726)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1745)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:968)
[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1895)
[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1647)
[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1075)
[ignite-core-2.0.0.jar:2.0.0]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:595)
[ignite-core-2.0.0.jar:2.0.0]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:519)
[ignite-core-2.0.0.jar:2.0.0]
at org.apache.ignite.Ignition.start(Ignition.java:322)
[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.visor.commands.open.VisorOpenCommand.open(VisorOpenCommand.scala:251)
[ignite-visor-console-2.0.0.jar:2.0.0]
at
org.apache.ignite.visor.commands.open.VisorOpenCommand.open(VisorOpenCommand.scala:219)
[ignite-visor-console-2.0.0.jar:2.0.0]
at
org.apache.ignite.visor.commands.open.VisorOpenCommand$$anonfun$2.apply(VisorOpenCommand.scala:306)
[ignite-visor-console-2.0.0.jar:2.0.0]
at
org.apache.ignite.visor.commands.open.VisorOpenCommand$$anonfun$2.apply(VisorOpenCommand.scala:306)
[ignite-visor-console-2.0.0.jar:2.0.0]
at
org.apache.ignite.visor.commands.VisorConsole.mainLoop(VisorConsole.scala:217)
[ignite-visor-console-2.0.0.jar:2.0.0]
at
org.apache.ignite.visor.commands.VisorConsole$.delayedEndpoint$org$apache$ignite$visor$commands$VisorConsole$1(VisorConsole.scala:329)
[ignite-visor-console-2.0.0.jar:2.0.0]
at
org.apache.ignite.visor.commands.VisorConsole$delayedInit$body.apply(VisorConsole.scala:318)
[ignite-visor-console-2.0.0.jar:2.0.0]
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
[scala-library-2.11.7.jar:?]
at
scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
[scala-library-2.11.7.jar:?]
at scala.App$$anonfun$main$1.apply(App.scala:76)
[scala-library-2.11.7.jar:?]
at scala.App$$anonfun$main$1.apply(App.scala:76)
[scala-library-2.11.7.jar:?]
at scala.collection.immutable.List.foreach(List.scala:381)
[scala-library-2.11.7.jar:?]
at
scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
[scala-library-2.11.7.jar:?]
at scala.App$class.main(App.scala:76) [scala-library-2.11.7.jar:?]
at
org.apache.ignite.visor.commands.VisorConsole$.main(VisorConsole.scala:318)
[ignite-visor-console-2.0.0.jar:2.0.0]
at
org.apache.ignite.visor.commands.VisorConsole.main(VisorConsole.scala)
[ignite-visor-console-2.0.0.jar:2.0.0]
[22:25:28,566][ERROR][main][G] Failed to initialize striped pool.
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method) ~[?:1.8.0_121]
at java.lang.Thread.start(Thread.java:714) ~[?:1.8.0_121]
at
org.apache.ignite.internal.util.StripedExecutor$Stripe.start(StripedExecutor.java:444)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.util.StripedExecutor.(StripedExecutor.java:84)
[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1729)
[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1647)
[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1075)
[ignite-core-2.0.0.jar:2.0.0]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:595)
[ignite-core-2.0.0.jar:2.0.0]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:519)
[ignite-core-2.0.0.jar:2.0.0]
at org.apache.ignite.Ignition.start(Ignition.java:322)
[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.visor.commands.open.VisorOpenCommand.open(VisorOpenCommand.scala:251)
[ignite-visor-console-2.0.0.jar:2.0.0]
at

Re: Limiting the size of Persistent Store and clearing data on restart

2017-09-01 Thread Denis Mekhanikov
Sorry, I didn't know that. I found a bunch of matches for the "swapEnabled"
keyword on the master branch of the project, mostly in benchmark
configurations. These properties should probably be removed so as not to
confuse readers.

пт, 1 сент. 2017 г. в 19:23, Dmitriy Setrakyan :

> On Fri, Sep 1, 2017 at 1:41 PM, Denis Mekhanikov 
> wrote:
>
>> Regarding your second question: looks like you don't actually need
>> persistence. Its purpose is the opposite: to save cache data between
>> restarts.
>> If you use persistence to store more data than RAM available, then you
>> can enable swap space:
>> https://apacheignite.readme.io/v1.9/docs/off-heap-memory#section-swap-space
>>
>
> Denis, starting with 2.1, Ignite does not have the swap space anymore,
> since it never worked well and kept hurting performance. We now recommend
> that users use persistence whenever the data size is bigger than the
> available memory size.
>


Re: Limiting the size of Persistent Store and clearing data on restart

2017-09-01 Thread Dmitriy Setrakyan
On Tue, Aug 29, 2017 at 7:00 PM, userx  wrote:

> Hi All,
>
> 1) Like the conventional configuration of
> https://apacheignite.readme.io/docs/memory-configuration#
> section-memory-policies
> where in we could limit the size of the memory, how do we define a
> configuration which can limit the size of Persistent Store so that say out
> of a 500GB HDisk, I don't want this Ignite related data to go beyond 300GB
> ?
>

You would have to control your data on disk yourself. Evicting from disk
is very sensitive and use-case specific, so it would be almost impossible
to automate.


>
> 2) Also, is there a configuration such that when Ignite data grid servers
> are restarted (say some code change either on client or server side),
> whatever was persisted before gets wiped out completely on all the
> participating nodes and only fresh persisted data is there ?
>

How about calling destroyCache(...) on startup?


Re: Limiting the size of Persistent Store and clearing data on restart

2017-09-01 Thread Dmitriy Setrakyan
On Fri, Sep 1, 2017 at 1:41 PM, Denis Mekhanikov 
wrote:

> Regarding your second question: looks like you don't actually need
> persistence. Its purpose is the opposite: to save cache data between
> restarts.
> If you use persistence to store more data than RAM available, then you can
> enable swap space: https://apacheignite.readme.io/v1.9/docs/off-heap-mem
> ory#section-swap-space
>

Denis, starting with 2.1, Ignite does not have the swap space anymore,
since it never worked well and kept hurting performance. We now recommend
that users use persistence whenever the data size is bigger than the
available memory size.


Re: Cache Events for a specific cache.

2017-09-01 Thread dkarachentsev
Hi Chris,

Your listener receives a CacheEvent, which contains the cache name, key, and
value. You may just check the cache name and react accordingly [1], or use a
ContinuousQuery as you suggested.

[1]
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheEventsExample.java
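A small sketch of option [1], assuming a cache named "myCache" (a placeholder) and that these event types have been enabled via IgniteConfiguration.setIncludeEventTypes(...):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

// Sketch: listen locally for puts/removes and react only when the
// event comes from one specific cache.
class CacheEventFilterExample {
    static void listen(Ignite ignite) {
        IgnitePredicate<Event> lsnr = evt -> {
            CacheEvent ce = (CacheEvent) evt;
            if ("myCache".equals(ce.cacheName()))      // filter by cache name
                System.out.println("Key changed: " + ce.key());
            return true;                               // keep listening
        };
        ignite.events().localListen(lsnr,
            EventType.EVT_CACHE_OBJECT_PUT,
            EventType.EVT_CACHE_OBJECT_REMOVED);
    }
}
```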

Thanks!
-Dmitry.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: fetching all the tasks already scheduled and to know the status of the task

2017-09-01 Thread afedotov
Hi,

You could use *ignite*.compute().activeTaskFutures() to get the map of all
the active tasks started 
from the *ignite* node.
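A minimal sketch of that call (the printout is just illustrative):

```java
import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.compute.ComputeTaskFuture;
import org.apache.ignite.lang.IgniteUuid;

// Sketch: enumerate tasks started from this node that are still active,
// keyed by their session id.
class ActiveTasksExample {
    static void printActiveTasks(Ignite ignite) {
        Map<IgniteUuid, ComputeTaskFuture<Object>> futs =
            ignite.compute().activeTaskFutures();
        futs.forEach((sesId, fut) ->
            System.out.println(sesId + " done=" + fut.isDone()));
    }
}
```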

Kind regards,
Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Memory is not going down after cache.clean()

2017-09-01 Thread dkarachentsev
Hi,

Ignite can only increase memory consumption; there is no way to reduce it.
In practice that's not needed, because it's enough to set the desired memory
size and, if persistence is enabled, swap data to disk when the limit is
hit.

Thanks!
-Dmitry.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

2017-09-01 Thread Kenan Dalley
FYI, my update wasn't that my problem was solved, but that the website got
rid of the work that I did to post the code.  Please take a look at the code
that I've posted yesterday because I'm still having the issue.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Configuring path of binary_meta, db directories etc

2017-09-01 Thread userx
Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Limiting the size of Persistent Store and clearing data on restart

2017-09-01 Thread Denis Mekhanikov
Regarding your second question: looks like you don't actually need
persistence. Its purpose is the opposite: to save cache data between
restarts.
If you use persistence to store more data than RAM available, then you can
enable swap space:
https://apacheignite.readme.io/v1.9/docs/off-heap-memory#section-swap-space

вт, 29 авг. 2017 г. в 20:06, ezhuravlev :

> Hi,
>
> 1) There is no possibility to limit Persistent Store size right now, but
> you
> always can use separate disk volume with size 300GB for your case to use it
> for Persistent Store.
>
> 2) Well, it's not a usual use case, but after stopping the node you can clear
> data from the work folder or just invoke removeAll on the cache
>
> Evgenii
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Limiting-the-size-of-Persistent-Store-and-clearing-data-on-restart-tp16507p16515.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: IgniteCache.localEntries(CachePeekMode... peekModes) result

2017-09-01 Thread Evgenii Zhuravlev
Hi,

> when we call cache.*localEntries* and we get the Iterable<Cache.Entry<K, V>>
> as a result, does it mean that those entries were "transferred" to heap?

No, if you use the Iterator you will get data from off-heap to heap one entry
at a time, so you will never face OutOfMemory problems.
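A sketch of that lazy iteration (cache name and types are made up): the Iterable only materializes entries on heap as the iterator advances, not all at once.

```java
import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CachePeekMode;

// Sketch: walk local off-heap entries; each entry is brought to heap
// only when the iterator reaches it.
class LocalEntriesExample {
    static void scanLocal(IgniteCache<String, String> cache) {
        for (Cache.Entry<String, String> e :
                 cache.localEntries(CachePeekMode.OFFHEAP))
            System.out.println(e.getKey() + " -> " + e.getValue());
    }
}
```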

Thanks,
Evgenii

2017-09-01 10:34 GMT+03:00 Krzysztof Chmielewski <
krzysiek.chmielew...@gmail.com>:

> Hello All,
>
> I have a question about the result of IgniteCache's
> *localEntries(CachePeekMode... peekModes)* method.
> In our case we are using Ignite 2.1.0.
>
> We are broadcasting a closure to our Ignite nodes to call
> igniteCache.localEntries. We are using offHeap mode, so obviously all
> entries are stored offHeap. But when we call cache.*localEntries* and we
> get the Iterable<Cache.Entry<K, V>> as a result, does it mean that those
> entries were "transferred" to heap?
>
> I'm asking because in ScanQuery object we can set the pageSize to get only
> a certain number of entries at once, and further pages are fetched
> automatically.
>
> Thanks,
> Krzysztof
>


Re: hashcode() collisions for data having more than 2 power 32(around 4.3 billion) entries in ignite cache

2017-09-01 Thread Denis Mekhanikov
Hi!

The hash code used by Ignite is indeed 32-bit, but it is enough in
most use-cases.

If you put so many objects into a cache, then collisions will definitely
occur, but they will be resolved using the 'equals' method. Collisions happen
on much smaller datasets too; this is part of the normal operation of a cache
or a hash table, but they may make the cache work slower.
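The same resolution mechanism can be seen with a plain java.util.HashMap; the Key class below is a contrived example that forces every key to collide:

```java
import java.util.HashMap;
import java.util.Map;

// Two distinct keys that share a hash code: the map keeps both entries,
// because equals() is consulted after the hash narrows the bucket.
public class CollisionDemo {
    public static final class Key {
        final String name;
        public Key(String name) { this.name = name; }
        @Override public int hashCode() { return 42; } // force collisions
        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).name.equals(name);
        }
    }

    public static void main(String[] args) {
        Map<Key, String> map = new HashMap<>();
        map.put(new Key("a"), "first");
        map.put(new Key("b"), "second");
        System.out.println(map.size()); // 2 -- both entries survive the collision
    }
}
```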

Do you have a use-case when you want to put more than 4 billion objects
into cache?

Thank you!
Denis.

пт, 1 сент. 2017 г. в 11:20, kotamrajuyashasvi :

> Hi
>
> I am using a user-defined Java object as a cache key in my Ignite cache.
> Ignite uses the cache key object's overridden hashcode() method during
> mapping in the cache. But hashcode() returns an int, which is 32 bits. Now
> if my cache has more than 2^32 entries (around 4.3 billion), collisions
> definitely occur. How do I tackle this situation? Does Ignite provide an
> option to use some other, user-implemented method that returns a hash code
> with more bits (64, 128, 256, etc.) for mapping?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


hashcode() collisions for data having more than 2 power 32(around 4.3 billion) entries in ignite cache

2017-09-01 Thread kotamrajuyashasvi
Hi

I am using a user-defined Java object as a cache key in my Ignite cache.
Ignite uses the cache key object's overridden hashcode() method during
mapping in the cache. But hashcode() returns an int, which is 32 bits. Now
if my cache has more than 2^32 entries (around 4.3 billion), collisions
definitely occur. How do I tackle this situation? Does Ignite provide an
option to use some other, user-implemented method that returns a hash code
with more bits (64, 128, 256, etc.) for mapping?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


IgniteCache.localEntries(CachePeekMode... peekModes) result

2017-09-01 Thread Krzysztof Chmielewski
Hello All,

I have a question about the result of IgniteCache's
*localEntries(CachePeekMode... peekModes)* method.
In our case we are using Ignite 2.1.0.

We are broadcasting a closure to our Ignite nodes to call
igniteCache.localEntries. We are using offHeap mode, so obviously all
entries are stored offHeap. But when we call cache.*localEntries* and we
get the Iterable<Cache.Entry<K, V>> as a result, does it mean that those
entries were "transferred" to heap?

I'm asking because in ScanQuery object we can set the pageSize to get only
a certain number of entries at once, and further pages are fetched
automatically.

Thanks,
Krzysztof