Re: Query where clause on byte array

2019-06-28 Thread Ilya Lantukh
Hi,

Theoretically, you can create an index and use >= and <= comparisons for
any data type. In your particular case, I think, using BigInteger is the
most straightforward approach.
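
For illustration, a minimal sketch of what this could look like (MyValue and the
cache/field names are just placeholders; whether BigInteger fields can be indexed
and range-compared this way is worth verifying on your Ignite version):

import java.math.BigInteger;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class BigIntegerIndexSketch {
    /** Hypothetical value object with an indexed 128-bit field. */
    static class MyValue {
        @QuerySqlField(index = true)
        private BigInteger bigId;

        MyValue(BigInteger bigId) {
            this.bigId = bigId;
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, MyValue> ccfg = new CacheConfiguration<>("myCache");
            ccfg.setIndexedTypes(Long.class, MyValue.class);

            IgniteCache<Long, MyValue> cache = ignite.getOrCreateCache(ccfg);
            cache.put(1L, new MyValue(new BigInteger("170141183460469231731687303715884105727")));

            // Range filter on the indexed field; the arguments are BigInteger instances.
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "select _key from MyValue where bigId >= ? and bigId <= ?")
                .setArgs(BigInteger.ZERO, BigInteger.valueOf(2).pow(127));

            cache.query(qry).getAll().forEach(row -> System.out.println(row.get(0)));
        }
    }
}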

Hope this helps.

On Fri, Jun 28, 2019 at 9:39 AM Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Hi,
>
> I want to store a 128-bit number as one of the fields in the value object in
> my cache.
>
> I can do that using multiple ways.
> e.g.
> 1) I can store the 128-bit number using java.math.BigInteger.
>   a) But if I store it using BigInteger, can I create an index on this
> field?
>   b) How can I use this field in a where clause to filter the data?
>
> 2) I can store the 128-bit number using a byte array (byte[]).
> a) Can I create an index on a byte array?
> b) Can I use this byte array field in a where clause to filter the data?
> e.g. mostly the where clause would be bytearr >= ? and bytearr
> <= ?
>
> 3) I can implement my own Number data type, e.g. Int128, using two long
> numbers, and implement a compareTo method, which is a bit tricky.
>a) I can create an index on nested objects, but how can I use this Int128
> data type field in a where clause and make use of the overridden compareTo method
> to filter the data?
>
> Can someone please advise?
>
>
>
> Thanks,
> Prasad
>
>
>


Re: Partition map exchange in detail

2018-09-12 Thread Ilya Lantukh
Pavel K., can you please answer about Zookeeper discovery?

On Wed, Sep 12, 2018 at 5:49 PM, eugene miretsky 
wrote:

> Thanks for the patience with my questions - just trying to understand the
> system better.
>
> 3) I was referring to https://apacheignite.readme.io/docs/
> zookeeper-discovery#section-failures-and-split-brain-handling. How come
> it doesn't get the node to shut down?
> 4) Are there any docs/JIRAs that explain how counters are used, and why
> they are required in the state?
>
> Cheers,
> Eugene
>
>
> On Wed, Sep 12, 2018 at 10:04 AM Ilya Lantukh 
> wrote:
>
>> 3) Such mechanics will be implemented in IEP-25 (linked above).
>> 4) Partition map states include update counters, which are incremented on
>> every cache update and play an important role in new state calculation. So,
>> technically, every cache operation can lead to a partition map change, and
>> for obvious reasons we can't route them through the coordinator. Ignite is a
>> more complex system than Akka or Kafka, and such simple solutions won't work
>> here (in the general case). However, it is true that PME could be simplified or
>> completely avoided for certain cases, and the community is currently working
>> on such optimizations (https://issues.apache.org/jira/browse/IGNITE-9558
>> for example).
>>
>> On Wed, Sep 12, 2018 at 9:08 AM, eugene miretsky <
>> eugene.miret...@gmail.com> wrote:
>>
>>> 2b) I had a few situations where the cluster went into a state where PME
>>> constantly failed, and could never recover. I think the root cause was that
>>> a transaction got stuck and didn't timeout/rollback.  I will try to
>>> reproduce it again and get back to you
>>> 3) If a node is down, I would expect it to get detected and the node to
>>> get removed from the cluster. In such case, PME should not even be
>>> attempted with that node. Hence you would expect PME to fail very rarely
>>> (any faulty node will be removed before it has a chance to fail PME)
>>> 4) Don't all partition map changes go through the coordinator? I believe
>>> a lot of distributed systems work in this way (all decisions are made by
>>> the coordinator/leader) - In Akka the leader is responsible for making all
>>> cluster membership changes, in Kafka the controller does the leader
>>> election.
>>>
>>> On Tue, Sep 11, 2018 at 11:11 AM Ilya Lantukh 
>>> wrote:
>>>
>>>> 1) It is.
>>>> 2a) Ignite has retry mechanics for all messages, including PME-related
>>>> ones.
>>>> 2b) In this situation PME will hang, but it isn't a "deadlock".
>>>> 3) Sorry, I didn't understand your question. If a node is down, but
>>>> DiscoverySpi doesn't detect it, it isn't a PME-related problem.
>>>> 4) How can you ensure that partition maps on the coordinator are the
>>>> *latest* without "freezing" cluster state for some time?
>>>>
>>>> On Sat, Sep 8, 2018 at 3:21 AM, eugene miretsky <
>>>> eugene.miret...@gmail.com> wrote:
>>>>
>>>>> Thanks!
>>>>>
>>>>> We are using persistence, so I am not sure if shutting down nodes will
>>>>> be the desired outcome for us, since we would need to modify the baseline
>>>>> topology.
>>>>>
>>>>> A couple more follow-up questions
>>>>>
>>>>> 1) Is PME triggered when client nodes join as well? We are using the Spark
>>>>> client, so new nodes are created/destroyed every time.
>>>>> 2) It sounds to me like there is a potential for the cluster to get
>>>>> into a deadlock if
>>>>>a) a single PME message is lost (PME never finishes, there are no
>>>>> retries, and all future operations are blocked on the pending PME)
>>>>>b) one of the nodes has a long running/stuck pending operation
>>>>> 3) Under what circumstance can PME fail, while DiscoverySpi fails to
>>>>> detect the node being down? We are using ZookeeperSpi so I would expect the
>>>>> split brain resolver to shut down the node.
>>>>> 4) Why is PME needed? Doesn't the coordinator know the latest
>>>>> topology/partition map of the cluster through regular gossip?
>>>>>
>>>>> Cheers,
>>>>> Eugene
>>>>>
>>>>> On Fri, Sep 7, 2018 at 5:18 PM Ilya Lantukh 
>>>>> wrote:
>>>>>

Re: Partition map exchange in detail

2018-09-12 Thread Ilya Lantukh
3) Such mechanics will be implemented in IEP-25 (linked above).
4) Partition map states include update counters, which are incremented on
every cache update and play an important role in new state calculation. So,
technically, every cache operation can lead to a partition map change, and
for obvious reasons we can't route them through the coordinator. Ignite is a
more complex system than Akka or Kafka, and such simple solutions won't work
here (in the general case). However, it is true that PME could be simplified or
completely avoided for certain cases, and the community is currently working
on such optimizations (https://issues.apache.org/jira/browse/IGNITE-9558
for example).

On Wed, Sep 12, 2018 at 9:08 AM, eugene miretsky 
wrote:

> 2b) I had a few situations where the cluster went into a state where PME
> constantly failed, and could never recover. I think the root cause was that
> a transaction got stuck and didn't timeout/rollback.  I will try to
> reproduce it again and get back to you
> 3) If a node is down, I would expect it to get detected and the node to
> get removed from the cluster. In such case, PME should not even be
> attempted with that node. Hence you would expect PME to fail very rarely
> (any faulty node will be removed before it has a chance to fail PME)
> 4) Don't all partition map changes go through the coordinator? I believe a
> lot of distributed systems work in this way (all decisions are made by the
> coordinator/leader) - In Akka the leader is responsible for making all
> cluster membership changes, in Kafka the controller does the leader
> election.
>
> On Tue, Sep 11, 2018 at 11:11 AM Ilya Lantukh 
> wrote:
>
>> 1) It is.
>> 2a) Ignite has retry mechanics for all messages, including PME-related
>> ones.
>> 2b) In this situation PME will hang, but it isn't a "deadlock".
>> 3) Sorry, I didn't understand your question. If a node is down, but
>> DiscoverySpi doesn't detect it, it isn't a PME-related problem.
>> 4) How can you ensure that partition maps on the coordinator are the *latest*
>> without "freezing" cluster state for some time?
>>
>> On Sat, Sep 8, 2018 at 3:21 AM, eugene miretsky <
>> eugene.miret...@gmail.com> wrote:
>>
>>> Thanks!
>>>
>>> We are using persistence, so I am not sure if shutting down nodes will
>>> be the desired outcome for us, since we would need to modify the baseline
>>> topology.
>>>
>>> A couple more follow-up questions
>>>
>>> 1) Is PME triggered when client nodes join as well? We are using the Spark
>>> client, so new nodes are created/destroyed every time.
>>> 2) It sounds to me like there is a potential for the cluster to get
>>> into a deadlock if
>>>a) a single PME message is lost (PME never finishes, there are no
>>> retries, and all future operations are blocked on the pending PME)
>>>b) one of the nodes has a long running/stuck pending operation
>>> 3) Under what circumstance can PME fail, while DiscoverySpi fails to
>>> detect the node being down? We are using ZookeeperSpi so I would expect the
>>> split brain resolver to shut down the node.
>>> 4) Why is PME needed? Doesn't the coordinator know the latest
>>> topology/partition map of the cluster through regular gossip?
>>>
>>> Cheers,
>>> Eugene
>>>
>>> On Fri, Sep 7, 2018 at 5:18 PM Ilya Lantukh 
>>> wrote:
>>>
>>>> Hi Eugene,
>>>>
>>>> 1) PME happens when topology is modified (TopologyVersion is
>>>> incremented). The most common events that trigger it are: node
>>>> start/stop/fail, cluster activation/deactivation, dynamic cache start/stop.
>>>> 2) It is done by a separate ExchangeWorker. Events that trigger PME are
>>>> transferred using DiscoverySpi instead of CommunicationSpi.
>>>> 3) All nodes wait for all pending cache operations to finish and then
>>>> send their local partition maps to the coordinator (oldest node). Then
>>>> coordinator calculates new global partition maps and sends them to every
>>>> node.
>>>> 4) All cache operations.
>>>> 5) Exchange is never retried. The Ignite community is currently working on
>>>> PME failure handling that should kick all problematic nodes after timeout
>>>> is reached (see https://cwiki.apache.org/confluence/display/IGNITE/IEP-
>>>> 25%3A+Partition+Map+Exchange+hangs+resolving for details), but it
>>>> isn't done yet.
>>>> 6) You shouldn't consider PME failure as an error by itself, but rather
>>>> as

Re: Partition map exchange in detail

2018-09-11 Thread Ilya Lantukh
1) It is.
2a) Ignite has retry mechanics for all messages, including PME-related ones.
2b) In this situation PME will hang, but it isn't a "deadlock".
3) Sorry, I didn't understand your question. If a node is down, but
DiscoverySpi doesn't detect it, it isn't a PME-related problem.
4) How can you ensure that partition maps on the coordinator are the *latest*
without "freezing" cluster state for some time?

On Sat, Sep 8, 2018 at 3:21 AM, eugene miretsky 
wrote:

> Thanks!
>
> We are using persistence, so I am not sure if shutting down nodes will be
> the desired outcome for us, since we would need to modify the baseline
> topology.
>
> A couple more follow-up questions
>
> 1) Is PME triggered when client nodes join as well? We are using the Spark
> client, so new nodes are created/destroyed every time.
> 2) It sounds to me like there is a potential for the cluster to get into
> a deadlock if
>a) a single PME message is lost (PME never finishes, there are no
> retries, and all future operations are blocked on the pending PME)
>b) one of the nodes has a long running/stuck pending operation
> 3) Under what circumstance can PME fail, while DiscoverySpi fails to
> detect the node being down? We are using ZookeeperSpi so I would expect the
> split brain resolver to shut down the node.
> 4) Why is PME needed? Doesn't the coordinator know the latest
> topology/partition map of the cluster through regular gossip?
>
> Cheers,
> Eugene
>
> On Fri, Sep 7, 2018 at 5:18 PM Ilya Lantukh  wrote:
>
>> Hi Eugene,
>>
>> 1) PME happens when topology is modified (TopologyVersion is
>> incremented). The most common events that trigger it are: node
>> start/stop/fail, cluster activation/deactivation, dynamic cache start/stop.
>> 2) It is done by a separate ExchangeWorker. Events that trigger PME are
>> transferred using DiscoverySpi instead of CommunicationSpi.
>> 3) All nodes wait for all pending cache operations to finish and then
>> send their local partition maps to the coordinator (oldest node). Then
>> coordinator calculates new global partition maps and sends them to every
>> node.
>> 4) All cache operations.
>> 5) Exchange is never retried. The Ignite community is currently working on
>> PME failure handling that should kick all problematic nodes after timeout
>> is reached (see https://cwiki.apache.org/confluence/display/IGNITE/IEP-
>> 25%3A+Partition+Map+Exchange+hangs+resolving for details), but it isn't
>> done yet.
>> 6) You shouldn't consider PME failure as an error by itself, but rather as
>> a result of some other error. The most common reason for a PME hang-up is a
>> pending cache operation that couldn't finish. Check your logs - it should
>> list pending transactions and atomic updates. Search for "Found long
>> running" substring.
>>
>> Hope this helps.
>>
>> On Fri, Sep 7, 2018 at 11:45 PM, eugene miretsky <
>> eugene.miret...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> Our cluster occasionally fails with "partition map exchange failure"
>>> errors, I have searched around and it seems that a lot of people have had a
>>> similar issue in the past. My high-level understanding is that when one of
>>> the nodes fails (out of memory, exception, GC etc.) nodes fail to exchange
>>> partition maps. However, I have a few questions
>>> 1) When does partition map exchange happen? Periodically, when a node
>>> joins, etc.
>>> 2) Is it done in the same thread as communication SPI, or is a separate
>>> worker?
>>> 3) How does the exchange happen? Via a coordinator, peer to peer, etc?
>>> 4) What does the exchange block?
>>> 5) When is the exchange retried?
>>> 6) How to resolve the error? The only thing I have seen online is to
>>> decrease failureDetectionTimeout
>>>
>>> Our settings are
>>> - Zookeeper SPI
>>> - Persistence enabled
>>>
>>> Cheers,
>>> Eugene
>>>
>>
>>
>>
>> --
>> Best regards,
>> Ilya
>>
>


-- 
Best regards,
Ilya


Re: Partition map exchange in detail

2018-09-07 Thread Ilya Lantukh
Hi Eugene,

1) PME happens when topology is modified (TopologyVersion is incremented).
The most common events that trigger it are: node start/stop/fail, cluster
activation/deactivation, dynamic cache start/stop.
2) It is done by a separate ExchangeWorker. Events that trigger PME are
transferred using DiscoverySpi instead of CommunicationSpi.
3) All nodes wait for all pending cache operations to finish and then send
their local partition maps to the coordinator (oldest node). Then
coordinator calculates new global partition maps and sends them to every
node.
4) All cache operations.
5) Exchange is never retried. The Ignite community is currently working on PME
failure handling that should kick all problematic nodes after timeout is
reached (see
https://cwiki.apache.org/confluence/display/IGNITE/IEP-25%3A+Partition+Map+Exchange+hangs+resolving
for details), but it isn't done yet.
6) You shouldn't consider PME failure as an error by itself, but rather as a
result of some other error. The most common reason for a PME hang-up is a
pending cache operation that couldn't finish. Check your logs - it should
list pending transactions and atomic updates. Search for "Found long
running" substring.

Hope this helps.

On Fri, Sep 7, 2018 at 11:45 PM, eugene miretsky 
wrote:

> Hello,
>
> Our cluster occasionally fails with "partition map exchange failure"
> errors, I have searched around and it seems that a lot of people have had a
> similar issue in the past. My high-level understanding is that when one of
> the nodes fails (out of memory, exception, GC etc.) nodes fail to exchange
> partition maps. However, I have a few questions
> 1) When does partition map exchange happen? Periodically, when a node
> joins, etc.
> 2) Is it done in the same thread as communication SPI, or is a separate
> worker?
> 3) How does the exchange happen? Via a coordinator, peer to peer, etc?
> 4) What does the exchange block?
> 5) When is the exchange retried?
> 6) How to resolve the error? The only thing I have seen online is to
> decrease failureDetectionTimeout
>
> Our settings are
> - Zookeeper SPI
> - Persistence enabled
>
> Cheers,
> Eugene
>



-- 
Best regards,
Ilya


Re: IgniteCheckedException: Error while creating file page store caused by NullPointerException

2018-06-20 Thread Ilya Lantukh
Hi,

This is clearly a usability issue in Ignite. I've created a ticket for it:
https://issues.apache.org/jira/browse/IGNITE-8839.

On Tue, Jun 19, 2018 at 6:17 PM, aealexsandrov 
wrote:

> As a workaround, you can try to add execution rights (like in your example)
> to all files under the work directory.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Ilya


Re: Effective Data through DataStream

2018-04-25 Thread Ilya Lantukh
Hi,

The Ignite DataStreamer protocol has metadata overhead: 11 bytes per
key-value pair, plus metadata for the whole batch, whose size depends on many
factors. So the 400 MB of raw data that you observe is to be expected.

On Tue, Apr 24, 2018 at 9:42 PM, mimmo_c  wrote:

> Hi,
>
> Someone could help me?
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Ilya


Re: FsyncModeFileWriteAheadLogManager cannot be cast to FileWriteAheadLogManager ERROR

2018-04-12 Thread Ilya Lantukh
Hi,

This is a bug that has already been fixed in
https://issues.apache.org/jira/browse/IGNITE-7865.

As a workaround, you can change the WALMode value from FSYNC to any other mode,
or start Ignite with the JVM property
-DIGNITE_WAL_FSYNC_WITH_DEDICATED_WORKER=true.
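
For illustration, a minimal sketch of the first workaround using Java-based
configuration (the XML equivalent is analogous; the commented-out line shows the
property-based alternative from above):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

public class WalModeWorkaroundSketch {
    public static void main(String[] args) {
        // Workaround 1: switch the WAL mode away from FSYNC (LOG_ONLY shown here).
        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setWalMode(WALMode.LOG_ONLY);

        // Workaround 2 (alternative): keep FSYNC but set the JVM property before start,
        // equivalent to -DIGNITE_WAL_FSYNC_WITH_DEDICATED_WORKER=true on the command line.
        // System.setProperty("IGNITE_WAL_FSYNC_WITH_DEDICATED_WORKER", "true");

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("WAL mode: " + cfg.getDataStorageConfiguration().getWalMode());
        }
    }
}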

On Thu, Apr 12, 2018 at 2:18 PM, NO <727418...@qq.com> wrote:

> Hi
>
> AM using 2.4
>
> When I stop a node, other nodes will have an error log as follows
> =LOG==
> [2018-04-12T19:03:39,767][ERROR][sys-#317][GridCacheIoManager] Failed
> processing message [senderId=b541279a-78fb-4ab0-a9b2-a50f8f36823d, 
> msg=GridDhtPartitionsFullMessage
> [parts={-2100569601=GridDhtPartitionFullMap 
> [nodeId=b541279a-78fb-4ab0-a9b2-a50f8f36823d,
> nodeOrder=1, updateSeq=26, size=7], 1813334792=GridDhtPartitionFullMap
> [nodeId=b541279a-78fb-4ab0-a9b2-a50f8f36823d, nodeOrder=1, updateSeq=27,
> size=7]}, partCntrs=o.a.i.i.processors.cache.distributed.dht.preloader.
> IgniteDhtPartitionCountersMap@1804adec, partCntrs2=o.a.i.i.processors.
> cache.distributed.dht.preloader.IgniteDhtPartitionCountersMap2@13e9b15e,
> partHistSuppliers=o.a.i.i.processors.cache.distributed.dht.preloader.
> IgniteDhtPartitionHistorySuppliersMap@7e26cb82, partsToReload=o.a.i.i.
> processors.cache.distributed.dht.preloader.IgniteDhtPartitionsToReloadMap@275b7c67,
> topVer=AffinityTopologyVersion [topVer=10, minorTopVer=0], errs={},
> compress=false, resTopVer=AffinityTopologyVersion [topVer=10,
> minorTopVer=0], partCnt=2, super=GridDhtPartitionsAbstractMessage 
> [exchId=GridDhtPartitionExchangeId
> [topVer=AffinityTopologyVersion [topVer=10, minorTopVer=0],
> discoEvt=null, nodeId=2e0ca1a6, evt=NODE_LEFT], lastVer=GridCacheVersion
> [topVer=135010956, order=1523530985052, nodeOrder=8],
> super=GridCacheMessage [msgId=26154, depInfo=null, err=null,
> skipPrepare=false
> java.lang.ClassCastException: org.apache.ignite.internal.
> processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager cannot
> be cast to org.apache.ignite.internal.processors.cache.persistence.
> wal.FileWriteAheadLogManager
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.preloader.GridDhtPartitionsExchangeFuture.logExchange(
> GridDhtPartitionsExchangeFuture.java:1639) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.preloader.GridDhtPartitionsExchangeFuture.onDone(
> GridDhtPartitionsExchangeFuture.java:1620) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.preloader.GridDhtPartitionsExchangeFuture.processFullMessage(
> GridDhtPartitionsExchangeFuture.java:2953) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.preloader.GridDhtPartitionsExchangeFuture.access$1400(
> GridDhtPartitionsExchangeFuture.java:124) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.preloader.GridDhtPartitionsExchangeFuture$5.apply(
> GridDhtPartitionsExchangeFuture.java:2684) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.preloader.GridDhtPartitionsExchangeFuture$5.apply(
> GridDhtPartitionsExchangeFuture.java:2672) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.util.future.GridFutureAdapter.
> notifyListener(GridFutureAdapter.java:383) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.util.future.GridFutureAdapter.
> listen(GridFutureAdapter.java:353) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.preloader.GridDhtPartitionsExchangeFuture.onReceiveFullMessage(
> GridDhtPartitionsExchangeFuture.java:2672) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.cache.
> GridCachePartitionExchangeManager.processFullPartitionUpdate(
> GridCachePartitionExchangeManager.java:1481)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.cache.
> GridCachePartitionExchangeManager.access$1100(
> GridCachePartitionExchangeManager.java:133) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.cache.
> GridCachePartitionExchangeManager$3.onMessage(
> GridCachePartitionExchangeManager.java:339) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.cache.
> GridCachePartitionExchangeManager$3.onMessage(
> GridCachePartitionExchangeManager.java:337) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.cache.
> GridCachePartitionExchangeManager$MessageHandler.apply(
> GridCachePartitionExchangeManager.java:2689)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.cache.
> GridCachePartitionExchangeManager$MessageHandler.apply(
> GridCachePartitionExchangeManager.java:2668)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.cache.GridC

Re: Distributed transaction (Executing task on client as well as on key owner node)

2018-02-12 Thread Ilya Lantukh
Hi,

The fact that the code from invoke(...) is executed on the node that initiated the
transaction ("near node" in Ignite terminology) is a known issue. There is
a ticket for it (https://issues.apache.org/jira/browse/IGNITE-3471), but it
hasn't been fixed yet.

To achieve your initial goal, you might want to start the transaction on the
primary node for your key. This can be achieved by using
ignite.compute().affinityRun(...), but in this case you have to start the
transaction inside the affinityRun closure.

Like this:

ignite.compute().affinityRun(cacheName, key,
    () -> {
        try (Transaction tx =
                 Ignition.ignite().transactions().txStart(...)) {
            cache.invoke(key, entryProcessor);

            tx.commit();
        }
    }
);

In this case you will minimize the overhead of modifying the entry in the cache -
the entryProcessor will be executed only on the nodes that own the key, and the
stored value shouldn't be transferred between nodes at all.

Hope this helps.



On Mon, Feb 12, 2018 at 1:57 PM, Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Hi,
>
> I am trying to test the distributed transaction support using the following
> piece of code. While debugging the code I observed that the code executes on
> the client node first, and after doing the commit the code executes on the node
> which owns that key.
>
> What I am trying to do is to collocate the data to avoid the network call,
> as my data in the real use case is going to be big. But while debugging the code,
> I observed that the entry processor first executes on the client node, gets all the
> data and executes the task, and after the commit executes the same code on the
> remote node.
>
> Can someone please explain this behavior? My use case is to execute the task
> on the nodes which own the data in a single transaction.
>
> private static void executeEntryProcessorTransaction(IgniteCache<Long, Person> cache) {
>     Person val = null;
>     try (Transaction tx = Ignition.ignite().transactions().txStart(
>             TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
>         long myid = 6L;
>         CacheEntryProcessor entryProcessor = new MyEntryProcessor();
>         cache.invoke(myid, entryProcessor);
>         System.out.println("Overwrote old value: " + val);
>         val = cache.get(myid);
>         System.out.println("Read value: " + val);
>
>         tx.commit();
>         System.out.println("Read value after commit: " + cache.get(myid));
>     }
> }
>
>
>
> Thanks,
> Prasad
>



-- 
Best regards,
Ilya


Re: Distributed transaction support in Ingnite

2018-02-08 Thread Ilya Lantukh
Hi Prasad,

Your approach is incorrect: the function that you passed into
ignite.compute().affinityRun(...) will be executed outside of the transaction
scope. If you want to execute your code on the affinity node to modify a
value in the cache, you should use the IgniteCache.invoke(...) method - it will be
a part of the transaction.

Hope this helps.
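
For illustration, a minimal sketch of invoke(...) used inside a transaction
(AppendProcessor and the Integer/String cache types are just placeholders):

import java.io.Serializable;
import javax.cache.processor.EntryProcessor;
import javax.cache.processor.MutableEntry;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class InvokeInTxSketch {
    /** Placeholder processor that appends a suffix to the stored value. */
    static class AppendProcessor implements EntryProcessor<Integer, String, Object>, Serializable {
        @Override public Object process(MutableEntry<Integer, String> entry, Object... args) {
            entry.setValue(entry.getValue() + "#Modified");
            return null;
        }
    }

    static void update(Ignite ignite, IgniteCache<Integer, String> cache, int key) {
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
            // invoke(...) runs on the nodes that own the key and participates in the transaction.
            cache.invoke(key, new AppendProcessor());

            tx.commit();
        }
    }
}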

On Thu, Feb 8, 2018 at 5:14 PM, Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Hi,
>
> Does Ignite support distributed transactions in the case of collocated
> computation?
>
> I started two Ignite nodes and then pushed the data to the cache using the
> following code. Please check the code given below. In this code I am
> rolling back the transaction at the end of the compute affinityRun. But after
> doing the rollback, the values in the map are not getting restored to the
> previous version.
>
> Can anyone please help? Am I doing something wrong?
>
>
> public static void main(String[] args) throws Exception {
>     Ignition.setClientMode(true);
>
>     try (Ignite ignite = Ignition.start("ignite-configuration.xml")) {
>         IgniteCache<Integer, String> cache = ignite.getOrCreateCache("ipcache1");
>
>         for (int i = 0; i < 10; i++)
>             cache.put(i, Integer.toString(i));
>
>         for (int i = 0; i < 10; i++)
>             System.out.println("Got [key=" + i + ", val=" + cache.get(i) + ']');
>
>         System.out.println("Node Started");
>
>         final IgniteCache<Integer, String> cache1 = ignite.cache("ipcache1");
>         IgniteTransactions transactions = ignite.transactions();
>         Transaction tx = transactions.txStart(TransactionConcurrency.OPTIMISTIC,
>             TransactionIsolation.SERIALIZABLE);
>
>         for (int i = 0; i < 10; i++) {
>             int key = i;
>
>             ignite.compute().affinityRun("ipcache1", key,
>                 () -> {
>                     System.out.println("Co-located using affinityRun [key= " + key
>                         + ", value=" + cache1.localPeek(key) + ']');
>
>                     String s = cache1.get(key);
>                     s = s + "#Modified";
>                     cache1.put(key, s);
>                 }
>             );
>         }
>         tx.rollback();
>         System.out.println("RolledBack...");
>         for (int i = 0; i < 10; i++)
>             System.out.println("Got [key=" + i + ", val=" + cache.get(i) + ']');
>     }
> }
>
>
>
> Thanks,
> Prasad
>



-- 
Best regards,
Ilya


Re: Runtime.availableProcessors() returns hardware's CPU count which is the issue with Ignite in Kubernetes

2017-12-26 Thread Ilya Lantukh
Hi Yakov,

I think that a property IGNITE_NODES_PER_HOST, as you suggested, would be
confusing, because users might want to reduce the amount of available resources
for an Ignite node not only because they run multiple nodes per host, but also
because they run other software. Also, in my opinion, all types of system
resources (CPU, memory, network) shouldn't be scaled using the same value.

So I'd prefer to have IGNITE_CONCURRENCY_LEVEL or
IGNITE_AVAILABLE_PROCESSORS, as it was originally suggested.

On Tue, Dec 26, 2017 at 4:05 PM, Yakov Zhdanov  wrote:

> Cross-posting to dev list.
>
> Guys,
>
> Suggestion below makes sense to me. Filed a ticket
> https://issues.apache.org/jira/browse/IGNITE-7310
>
> Perhaps, Arseny would like to provide a PR himself ;)
>
> --Yakov
>
> 2017-12-26 14:32 GMT+03:00 Arseny Kovalchuk :
>
> > Hi guys.
> >
> > Ignite configures all thread pools, selectors, etc. based on
> > Runtime.availableProcessors(), which does not seem correct in a containerized
> > environment. In Kubernetes with Docker that method returns the CPU count of a
> > Node/machine, which is 64 in our particular case. But those 64 CPUs and their
> > timings are shared with other stuff on the node, like other Pods and services.
> > The appropriate value of available cores for a Pod is usually configured as a
> > CPU Resource and estimated based on different things, taking performance into
> > account. The general idea is that if you want to run several Pods on the same
> > node, they all should request fewer resources than the node provides. So we
> > give 4-8 cores for an Ignite instance in Kubernetes, but Ignite's thread pools
> > are configured as if they get all 64 CPUs, and in turn we get a lot of threads
> > for a Pod with only 4-8 cores available.
> >
> > Now we manually set appropriate values for all available properties which
> > relate to thread pools.
> >
> > Would it be correct to have one environment variable, say
> > IGNITE_CONCURRENCY_LEVEL, which will be used as a reference value for those
> > configurations and by default equals Runtime.availableProcessors()?
> >
> > Thanks.
> >
> > ​
> > Arseny Kovalchuk
> >
> > Senior Software Engineer at Synesis
> > skype: arseny.kovalchuk
> > mobile: +375 (29) 666-16-16
> > ​LinkedIn Profile ​
> >
>



-- 
Best regards,
Ilya


Re: Data region's pages fill factor

2017-11-05 Thread Ilya Lantukh
Hi Andrey,

Looks like the tooltip is wrong and should be fixed.
Your interpretation of FreeListImpl.fillFactor() is correct - the former
value is the used space, the latter one is the total allocated space. So your
pages are 99% full.
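
With the numbers you quote below, that is 409825856 / 413892608 ≈ 0.99, i.e.
roughly 99% of the allocated page space is in use.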

On Sun, Nov 5, 2017 at 4:08 AM, Andrey Kornev 
wrote:

> Oops! Hit "Send" a bit too soon.
>
> On the other hand, FreeListImpl.fillFactor() (used by the MBean under the
> hood to compute the PageFillFactor value) returns a tuple where loadSize
> == 409825856 and totalSize == 413892608. If my interpretation of these
> values is correct, shouldn't PageFillFactor be described as "the percentage
> of used space", or in other words, how full the pages are?
>
> Or, is it something else?
>
> Thanks
> Andrey
> --
> *From:* Andrey Kornev 
> *Sent:* Saturday, November 4, 2017 5:55 PM
> *To:* user@ignite.apache.org
> *Subject:* Data region's pages fill factor
>
> Hello,
>
> I'm a bit confused about the meaning of PagesFillFactor MBean attribute.
> According to the documentation (as well as a tooltip in the JMX MBean
> Browser -- here's a snapshot https://snag.gy/57FVw8.jpg), the attribute
> indicates "the percentage of space that is still free and can be filled
> in". I'm consistently getting  0.99... Does it mean that my pages are 99%
> empty?
>
>
>


-- 
Best regards,
Ilya


Re: affinityRun then invoke

2017-08-30 Thread Ilya Lantukh
Hi Matt,

In your case ignite.compute().affinityRun(...) is redundant - if you simply
call cache.invoke(...), it will send your EntryProcessor to the primary
node for the specified key, where it will be executed.

Hope this helps.
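
For illustration, a minimal sketch of the create-or-modify logic expressed
directly as an entry processor (the Item class and the "modified" handling are
just placeholders for your modify/create logic):

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryProcessor;

public class DirectInvokeSketch {
    /** Placeholder value type for the entries keyed under the parent field. */
    static class Item {
        String state;
    }

    static boolean process(IgniteCache<String, Item> cache, String key) {
        // CacheEntryProcessor is Serializable, so the lambda can be shipped to the primary node.
        return cache.invoke(key, (CacheEntryProcessor<String, Item, Boolean>) (entry, args) -> {
            if (entry.exists()) {
                Item it = entry.getValue();
                it.state = "modified";      // stands in for modify(entry)
                entry.setValue(it);         // write the change back
            }
            else
                entry.setValue(new Item()); // stands in for create(entry)

            return true;
        });
    }
}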

On Wed, Aug 30, 2017 at 4:39 PM, matt  wrote:

> I'm using @AffinityKeyMapped to ensure that all items of a particular
> field (the "parent" in my case) are processed on the same node. When I want
> to process an entry, I'm essentially doing
> ignite.compute().affinityRun("my-cache", key, () -> this::processEntry);
> where processEntry does: cache.invoke(key,
> (entry, args) -> { if (entry.exists()) { modify(entry); } else {
> create(entry); } return true; });
> Is this generally a valid way to deal
> with atomically updating entries within a partitioned cache? Thanks, - Matt
> --
> Sent from the Apache Ignite Users mailing list archive
>  at Nabble.com.
>



-- 
Best regards,
Ilya


Re: IgniteAtomicSequence durability

2017-08-17 Thread Ilya Lantukh
Hi Mike,

Ignite 2.1 will store data structures (including AtomicSequence) on disk if
you have configured persistent store globally
(IgniteConfiguration.getPersistentStoreConfiguration() != null).
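
For illustration, a minimal sketch against the Ignite 2.1 API (the sequence name
is arbitrary):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicSequence;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

public class PersistentSequenceSketch {
    public static void main(String[] args) {
        // Enable the persistent store globally (Ignite 2.1 API); data structures
        // such as IgniteAtomicSequence are then stored on disk as well.
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setPersistentStoreConfiguration(new PersistentStoreConfiguration());

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.active(true); // with persistence enabled, the cluster must be activated

            IgniteAtomicSequence seq = ignite.atomicSequence("mySeq", 0, true);
            System.out.println("Next value: " + seq.incrementAndGet());
        }
    }
}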

On Thu, Aug 17, 2017 at 11:48 AM, Michael Griggs <
michael.gri...@gridgain.com> wrote:

> Dmitry,
>
> Can Ignite 2.1 persist an Ignite atomicSequence on disk?
>
> Regards
> Mike
>
> -Original Message-
> From: dkarachentsev [mailto:dkarachent...@gridgain.com]
> Sent: 16 August 2017 15:15
> To: user@ignite.apache.org
> Subject: Re: IgniteAtomicSequence durability
>
> Hi Mike,
>
> All atomics may be configured with
> IgniteConfiguration.setAtomicConfiguration() or Ignite.atomicSequence(),
> where you can specify the number of backups or set it as REPLICATED, but you
> cannot configure a persistent store.
>
> Only Ignite 2.1 can persist data structures on disk.
>
> Thanks!
> -Dmitry.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/
> IgniteAtomicSequence-durabili
> ty-tp16220p16229.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
>


-- 
Best regards,
Ilya


Re: Cache.get does not move entry from disk to RAM

2017-08-07 Thread Ilya Lantukh
Hi,

Can you describe how you observed that the entry isn't stored in RAM, and
write a simple reproducer, if possible?

By design, the page with the requested data should be loaded from disk to off-heap
RAM when it is accessed for the first time. That page should be evicted only
when the memory segment gets full. For me, on v2.1 it works as expected.

On Sun, Aug 6, 2017 at 8:27 AM, iostream  wrote:

> Hi,
>
> I am using v2.1 with the persistence store enabled. I observed that cache.get()
> does not move the requested entry from disk to RAM. Is this expected
> behaviour?
>
> Is there any best practice to warm up RAM with a subset of data from the
> durable memory when I restart my cluster?
>
> Thanks!
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Cache-get-does-not-move-entry-from-disk-
> to-RAM-tp16016.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Ilya


Re: Ignite2.0 Topology

2017-07-21 Thread Ilya Lantukh
Yes, with the default configuration Ignite will try to connect to every host in
the same IP network.

On Fri, Jul 21, 2017 at 5:54 PM, devis76 
wrote:

> Thank you for your mail, I will be on vacation until August 8th with no
> access to mail or voice mail so please expect some delay in my answers.
>
>
> urgent request write to [hidden email]
> 
>
> Best regards
> Devis Balsemin
>
>
> --
> View this message in context: Re: Ignite2.0 Topology
> 
>
> Sent from the Apache Ignite Users mailing list archive
>  at Nabble.com.
>



-- 
Best regards,
Ilya


Re: Ignite2.0 Topology

2017-07-21 Thread Ilya Lantukh
Hi Ajay,

By default Ignite uses a discovery mechanism based on IP multicast (
https://en.wikipedia.org/wiki/IP_multicast)
More information about discovery configuration can be found here:
https://apacheignite.readme.io/docs/cluster-config.

Hope this helps.
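
For illustration, a minimal sketch of making discovery explicit with a static IP
finder instead of multicast (the addresses are placeholders):

import java.util.Arrays;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class StaticDiscoverySketch {
    public static void main(String[] args) {
        // Static IP finder: only the listed hosts are contacted, instead of
        // multicast-based discovery over the whole network.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("10.0.0.1:47500..47509", "10.0.0.2:47500..47509"));

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi().setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration().setDiscoverySpi(discoSpi);

        Ignition.start(cfg);
    }
}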

On Fri, Jul 21, 2017 at 4:19 PM, Ajay  wrote:

> Hi,
>
> How are Ignite servers connecting with each other even though I did not
> configure any IP details in the configuration XML?
>
> Thanks,
> Ajay.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite2-0-Topology-tp15236.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Ilya


Re: Problem with Messages after client reconnect

2017-07-20 Thread Ilya Lantukh
Hi Sebastian,

Thanks for example and detailed steps to reproduce.

Unfortunately, this is a flaw in the current binary metadata exchange
algorithm. I have created a ticket for it -
https://issues.apache.org/jira/browse/IGNITE-5794.

To avoid this issue for now, you should either:
- ensure that there is always at least 1 server node alive, or
- restart all client nodes after the last server node has left the cluster.


On Thu, Jul 20, 2017 at 9:18 AM, Sebastian Sindelar <
sebastian.sinde...@ibh-ks.de> wrote:

> Hi.
>
>
>
> Just figured out why the example setup doesn't produce the error. The
> following TestClient will always cause the exception at the server.
>
>
>
> public class TestClient {
>
>     public static void main(String[] args) {
>         IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
>         igniteConfiguration.setClientMode(true);
>         igniteConfiguration.setMetricsLogFrequency(0);
>         Ignite ignite = Ignition.start(igniteConfiguration);
>
>         Scanner scanner = new Scanner(System.in);
>         while (true) {
>             String message = scanner.nextLine();
>             IgniteMessaging messaging = ignite.message(ignite.cluster());
>             messaging.send("test", new ComplexMessage(message));
>         }
>     }
> }
>
>
>
> Ignite.message(…) is now called before each message is sent. In the main
> application it is called only once.
>
> I also noticed I didn't describe the steps to reproduce the problem in much
> detail yesterday evening.
>
>
>
>1. Start 1 Client and 1 Server
>2. Type in some message in the client console and hit Enter
>3. Check that message is received and logged by the server
>4. Stop the Server
>5. Start the Server and wait for the client to reconnect
>6. Type in another message in the client console and hit Enter
>7. See the Exception at the server log
>
>
>
> Also one additional note: this only happens if all cluster servers are
> offline and the cluster gets restarted. If you have multiple servers there
> is no problem. At least one server must survive to keep the record of how
> to unmarshal the sent class.
>
>
>
> Best regards,
> Sebastian Sindelar
>
> *Von:* Sebastian Sindelar [mailto:sebastian.sinde...@ibh-ks.de]
> *Gesendet:* Mittwoch, 19. Juli 2017 16:53
> *An:* user@ignite.apache.org
> *Betreff:* Problem with Messages after client reconnect
>
>
>
> Hi.
>
>
>
> I just started with Ignite messaging and I encountered the following
> problem:
>
>
>
> I have a simple Setup with 1 server and 1 client. Sending messages from
> the client to the server works fine. But after a server restart I encounter
> the problem that when I send a message with a custom class I get an
> unmarshalling error:
>
>
>
>
>
> *org.apache.ignite.IgniteCheckedException*: null
>
>at org.apache.ignite.internal.util.IgniteUtils.unmarshal(
> *IgniteUtils.java:9893*) ~[ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.GridIoManager$
> GridUserMessageListener.onMessage(*GridIoManager.java:2216*)
> ~[ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.
> GridIoManager.invokeListener(*GridIoManager.java:1257*)
> [ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.
> GridIoManager.access$2000(*GridIoManager.java:114*)
> [ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.GridIoManager$
> GridCommunicationMessageSet.unwind(*GridIoManager.java:2461*)
> [ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.GridIoManager.
> unwindMessageSet(*GridIoManager.java:1217*) [ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.
> GridIoManager.access$2300(*GridIoManager.java:114*)
> [ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.
> GridIoManager$8.run(*GridIoManager.java:1186*)
> [ignite-core-2.0.0.jar:2.0.0]
>
>at java.util.concurrent.ThreadPoolExecutor.runWorker(
> *ThreadPoolExecutor.java:1142*) [na:1.8.0_112]
>
>at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> *ThreadPoolExecutor.java:617*) [na:1.8.0_112]
>
>at java.lang.Thread.run(*Thread.java:745*) [na:1.8.0_112]
>
> Caused by: *java.lang.NullPointerException*: null
>
>at org.apache.ignite.internal.processors.cache.binary.
> CacheObjectBinaryProcessorImpl.metadata(
> *CacheObjectBinaryProcessorImpl.java:492*) ~[ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.processors.cache.binary.
> CacheObjectBinaryProcessorImpl$2.metadata(
> *CacheObjectBinaryProcessorImpl.java:174*) ~[ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.binary.BinaryCon

Re: Small value for IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE

2017-07-03 Thread Ilya Lantukh
Yes, such races during pre-loading are possible even if you configured 0
backups.

On Fri, Jun 30, 2017 at 7:31 PM, charly  wrote:

> It helps a lot. Thanks for the detailed example. I have one last question.
> Would it still be a potential issue to lower the queue size if we don't
> have any backups? Like if a new node pre-loads from the primary, won't any new
> changes also be populated to the soon-to-be new primary?
>
> Sent from my iPhone
>
> On Jun 30, 2017, at 11:27 AM, Ilya Lantukh [via Apache Ignite Users] <[hidden email]> wrote:
>
> Hi Charly,
>
> There is no special thread in charge of clearing the queue. An entry is
> cleared when you try to add a new entry to the queue but it is already full.
> So, once the queue becomes full, it will always stay full, and lowering the queue
> size obviously reduces heap consumption. If the cache is destroyed, the queue
> should be cleared immediately.
>
> However, if queue size is too low, it may lead to data inconsistency on
> unstable topology.
>
> Here is an example:
> Node A just joined the cluster, became the primary node for some partitions
> and started preloading from node B, which is a backup. Node B sends value V
> for key K, but before it is received and processed on node A, the user executes
> cache.remove(K).
> - If queue size is 0, node A now doesn't have any information that entry
> was just removed and has to store (K, V) pair from node B. On the other
> hand, node B will remove K and now primary and backup nodes have different
> data.
> - If queue size is N > 0, node A will save a tombstone entry for K. It
> will have all the necessary information to understand that the (K, V) pair from
> node B is older than the current value and not store it. The tombstone will be
> cleared after N other keys are removed.
>
> Hope this helps.
>
>
> On Thu, Jun 22, 2017 at 2:11 AM, charly <[hidden email]> wrote:
>
>> Hey everyone,
>>
>> We use Ignite 1.8 and see a difference in heap used when lowering the value
>> for IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE. I'm not sure I understand
>> what is going to happen if we decrease that value in production. Could someone
>> clear that up for me please? From what I understand, deleted entries go in
>> that queue for cleaning, where one thread per cache is in charge of cleaning
>> the queue. But what if the queue is full? I might have missed that
>> information in the documentation, my bad if I did.
>>
>> Maybe on the same subject, is it going to slow down the freeing up of the heap
>> after cache.destroy()? We have a use-case where we delete 4 caches at the
>> same time containing ~500k entries, heavily indexed. When doing so, it takes
>> a few minutes for the heap to decrease, as you can see here
>> http://imgur.com/a/kPsSc . We would not want to extend the time to free
>> up
>> memory even more.
>>
>> More information about our setup:
>>  - on heap (we'll eventually move to off heap but back then we could not
>> make Ignite 1.8 to free up memory at all with our version of glibc. It
>> would
>> work with jmalloc but that was not permitted on production at that time).
>>  - no backup
>>  - atomicity mode: atomic
>>  - cache mode: partitioned
>>  - cluster of 4 nodes
>>  - jvm options: -J-DIGNITE_LONG_OPERATIONS_DUMP_TIMEOUT=30 -J-Xms12g
>> -J-Xmx12g -J-server -J-XX:+UseParNewGC -J-XX:+UseConcMarkSweepGC
>> -J-XX:+UseTLAB -J-XX:NewSize=128m -J-XX:MaxNewSize=128m
>> -J-XX:MaxTenuringThreshold=0 -J-XX:SurvivorRatio=1024
>> -J-XX:+UseCMSInitiatingOccupancyOnly -J-XX:CMSInitiatingOccupancyFr
>> action=10
>> -J-XX:MaxGCPauseMillis=1000 -J-XX:InitiatingHeapOccupancyPercent=10
>> -J-XX:+UseCompressedOops -J-XX:ParallelGCThreads=8 -J-XX:ConcGCThreads=8
>> -J-XX:+PrintGCDetails -J-XX:+PrintGCTimeStamps -J-XX:+PrintGCDateStamps
>> -J-XX:+UseGCLogFileRotation -J-XX:NumberOfGCLogFiles=10
>> -J-XX:GCLogFileSize=100M -J-Xloggc:/tmp/ignite-gc.log
>> -J-Dcom.sun.management.jmxremote.port=49130
>>
>> Thanks for your help,
>> Charly
>>
>>
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Small-value-for-IGNITE-ATOMIC-CACHE-DELETE
>> -HISTORY-SIZE-tp14037.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
> Best regards,
> Ilya
>
>

Re: Small value for IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE

2017-06-30 Thread Ilya Lantukh
Hi Charly,

There is no special thread in charge of clearing the queue. An entry is
cleared when you try to add a new entry to the queue but it is already full.
So, once the queue becomes full, it will always stay full, and lowering the queue
size obviously reduces heap consumption. If the cache is destroyed, the queue
should be cleared immediately.

However, if queue size is too low, it may lead to data inconsistency on
unstable topology.

Here is an example:
Node A just joined the cluster, became the primary node for some partitions and
started preloading from node B, which is a backup. Node B sends value V for
key K, but before it is received and processed on node A, the user executes
cache.remove(K).
- If the queue size is 0, node A now doesn't have any information that the entry
was just removed and has to store the (K, V) pair from node B. On the other
hand, node B will remove K, and now the primary and backup nodes have different
data.
- If the queue size is N > 0, node A will save a tombstone entry for K. It will
have all the necessary information to understand that the (K, V) pair from node B
is older than the current value and not store it. The tombstone will be cleared
after N other keys are removed.

Hope this helps.


On Thu, Jun 22, 2017 at 2:11 AM, charly  wrote:

> Hey everyone,
>
> We use Ignite 1.8 and see a difference in heap used when lowering the value
> for IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE. I'm not sure I understand
> what is going to happen if we decrease that value in production. Could someone
> clear that up for me please? From what I understand, deleted entries go in
> that queue for cleaning, where one thread per cache is in charge of
> cleaning
> the queue. But what if the queue is full? I might have missed that
> information in the documentation, my bad if I did.
>
> Maybe on the same subject, is it going to slow down the freeing up of the heap
> after cache.destroy()? We have a use-case where we delete 4 caches at the
> same time containing ~500k entries, heavily indexed. When doing so, it takes
> a few minutes for the heap to decrease, as you can see here
> http://imgur.com/a/kPsSc . We would not want to extend the time to free up
> memory even more.
>
> More information about our setup:
>  - on heap (we'll eventually move to off heap but back then we could not
> make Ignite 1.8 to free up memory at all with our version of glibc. It
> would
> work with jmalloc but that was not permitted on production at that time).
>  - no backup
>  - atomicity mode: atomic
>  - cache mode: partitioned
>  - cluster of 4 nodes
>  - jvm options: -J-DIGNITE_LONG_OPERATIONS_DUMP_TIMEOUT=30 -J-Xms12g
> -J-Xmx12g -J-server -J-XX:+UseParNewGC -J-XX:+UseConcMarkSweepGC
> -J-XX:+UseTLAB -J-XX:NewSize=128m -J-XX:MaxNewSize=128m
> -J-XX:MaxTenuringThreshold=0 -J-XX:SurvivorRatio=1024
> -J-XX:+UseCMSInitiatingOccupancyOnly -J-XX:CMSInitiatingOccupancyFraction
> =10
> -J-XX:MaxGCPauseMillis=1000 -J-XX:InitiatingHeapOccupancyPercent=10
> -J-XX:+UseCompressedOops -J-XX:ParallelGCThreads=8 -J-XX:ConcGCThreads=8
> -J-XX:+PrintGCDetails -J-XX:+PrintGCTimeStamps -J-XX:+PrintGCDateStamps
> -J-XX:+UseGCLogFileRotation -J-XX:NumberOfGCLogFiles=10
> -J-XX:GCLogFileSize=100M -J-Xloggc:/tmp/ignite-gc.log
> -J-Dcom.sun.management.jmxremote.port=49130
>
> Thanks for your help,
> Charly
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Small-value-for-IGNITE-ATOMIC-CACHE-
> DELETE-HISTORY-SIZE-tp14037.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Ilya


Re: Why is custom cacheStore.write() being called in clientMode?

2017-06-06 Thread Ilya Lantukh
Hi Rick,

Regarding your example: in an Ignite cache you can update keys 2 and 4 in the
same transaction without any problems, but it might be impossible to
perform a distributed transaction in the DB behind the CacheStore (it must
transactionally save key 2 from node 1 and key 4 from node 2). To solve this
problem, we put both keys into the CacheStore from the same node - the one that
initiated the transaction ("near node" in Ignite terminology).

Hope this helps.

On Tue, Jun 6, 2017 at 6:10 PM, Nikolai Tikhonov 
wrote:

> Rick,
>
> > What you are saying is that I cannot update keys 2 and 4 in the same
> transaction, correct?
> No, it will be updated in the same transaction. I explained why
> Ignite can't update the store from DHT nodes (the nodes which own this data) and
> why Ignite propagates updates to the store from the client node.
>
> On Tue, Jun 6, 2017 at 5:42 PM, rick_tem  wrote:
>
>> Hi Nikolai,
>>
>> Thanks for your reply.  It is appreciated!  Thanks for your answer to 2) I
>> will look into it. 3) and 4) are really the same issue I am trying to
>> understand how it works.
>>
>> With regards to 1) below, we aren't speaking about distributed databases,
>> but distributed caches that are java JVMs.  But isn't that what a JTA
>> transaction manager is supposed to do?  ie handle distributed
>> transactions?
>> if I enlist MQ and JBoss in the same transaction, that is two separate JVMs,
>> and I believe it should work as one atomic transaction...
>>
>> But regardless, I believe this is what you are saying here:  Please
>> correct
>> me if I am wrong.  Say I have keys 1, 2, 3 on node 1 and keys 4, 5, 6 on
>> node 2.  What you are saying is that I cannot update keys 2 and 4 in the
>> same transaction, correct?  This is because they live in two different
>> JVMs...If this is the case, that is a severe limitation as then I need to
>> know which node my data is on.  What would your recommendation be here
>> then
>> for write-through cache?  Have everything replicated?  It is a requirement
>> that the transaction be rock solid in whatever model I implement.  I
>> cannot
>> afford to lose writes or have half-committed data.
>>
>> Thanks,
>> Rick
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Why-is-custom-cacheStore-write-being-called
>> -in-clientMode-tp13309p13424.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


-- 
Best regards,
Ilya


Re: Correct Way to Store Data

2017-05-26 Thread Ilya Lantukh
Hi Matt,

From what I've seen, the most commonly used approach is the one you took:
have caches associated with object classes. This approach is efficient and
completely corresponds to "the Ignite way".

Having a separate cache for each product is definitely not a good idea,
especially if you have thousands of products and that number is going to
increase rapidly. Every cache requires additional memory to store its
internal data structures. In addition, you will have to perform a dynamic
cache start when a new product is added, which is a relatively expensive
operation and causes the grid to pause all other operations for some time.
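
For illustration, a minimal sketch of the per-class layout with the product id as
the affinity key (SaleKey and the String values are simplified placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.affinity.AffinityKeyMapped;
import org.apache.ignite.configuration.CacheConfiguration;

public class PerClassCachesSketch {
    /** Hypothetical key for a sale, collocated with its product via the affinity field. */
    static class SaleKey {
        private final long saleId;

        @AffinityKeyMapped
        private final long productId;

        SaleKey(long saleId, long productId) {
            this.saleId = saleId;
            this.productId = productId;
        }
    }

    static void createCaches(Ignite ignite) {
        // One cache per object class ("products", "sales", ...), as discussed above.
        IgniteCache<Long, String> products =
            ignite.getOrCreateCache(new CacheConfiguration<Long, String>("products"));
        IgniteCache<SaleKey, String> sales =
            ignite.getOrCreateCache(new CacheConfiguration<SaleKey, String>("sales"));

        products.put(42L, "product 42");
        // All sales keyed with productId = 42 map to the same partition as product 42.
        sales.put(new SaleKey(1, 42), "sale 1 of product 42");
    }
}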

Hope this helps.


On Fri, May 26, 2017 at 10:51 AM, Matt  wrote:

> Hello,
>
> Right now I have a couple of caches associated with the kind of objects I
> store. For instance I have one cache for products, one for sales, one for
> stats, etc. I use the id of the product as the affinity key in all cases.
>
> Some questions I have regarding this approach...
>
> *1.* I get the impression I'm not doing it "the Ignite way", since I'm
> only storing one kind of object (ie, objects of only one class) in each
> cache. The approach I'm using is equivalent to having a PostgreSQL schema
> for products, another one for sales and a third for stats. Is that right?
>
> *2.* I believe it would make more sense to have only one cache (for
> instance, "analytics") and save all objects there (products, sales and
> stats). That would be equivalent to having one single scheme and inside it
> one table for each class I store. Right?
>
> *3.* Is there any problem in terms of performance or is it a bad practice
> to have one cache with all products and one cache per product with all
> objects related to that particular product? I think some queries would run
> much faster that way since all objects in a certain cache are related to
> the same product, there is no need to filter by sales or stats with a
> certain product id.
>
> *4.* What's the best approach or which one is more commonly used?
>
> As a side note, in all 3 cases I'll use as the affinity key the id of the
> product, except for the "products" cache in #3, which would be stored in a
> single node. Also, right now I'm storing about 10k products but that number
> increases as clients arrive, so I expect the cardinality to increase
> rapidly.
>
> Cheers,
> Matt
>



-- 
Best regards,
Ilya


Re: restore Java Object from BinaryObject

2017-02-09 Thread Ilya Lantukh
Hi,

In your case you should disable compact footer. See
https://ignite.apache.org/releases/mobile/org/apache/ignite/configuration/BinaryConfiguration.html#isCompactFooter()
.
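
For illustration, a minimal sketch of disabling the compact footer (this setting
should match on every node that writes or reads those serialized bytes):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CompactFooterOffSketch {
    public static void main(String[] args) {
        BinaryConfiguration binCfg = new BinaryConfiguration();
        // Keep the full schema in each serialized object instead of relying on
        // cluster-held metadata, so bytes stored outside Ignite stay self-contained.
        binCfg.setCompactFooter(false);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setBinaryConfiguration(binCfg);

        Ignition.start(cfg);
    }
}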

On Thu, Feb 9, 2017 at 1:28 PM, shawn.du  wrote:

> Hi,
>
> I implemented a CacheStore; this CacheStore persists a BinaryObject
> into bytes and stores them in a MySQL blob.
> Exceptions occurred when calling the loadCache function:
> binaryObject.deserialize() throws exceptions like "Cannot find
> metadata for object with compact footer: -1615140068".
> If I just put the BinaryObject into the cache without deserialization,
> it is OK. When I get it and use it, it will throw the exception again.
> How do I fix it? Thanks in advance.
>
> public class BlobCacheStore extends CacheStoreAdapter {
>
>     public void loadCache(IgniteBiInClosure clo, Object... args) {
>         init();
>         String sql = TEMPLATE_LOAD_SQL.replace(FAKE_TABLE_NAME, tableName);
>         try (Connection connection = dataSource.getConnection();
>              Statement statement = connection.createStatement();
>              ResultSet rs = statement.executeQuery(sql)) {
>             while (rs.next()) {
>                 String key = rs.getString("aKey");
>                 Blob blob = rs.getBlob("val");
>                 BinaryObject binaryObject = ignite.configuration().getMarshaller()
>                     .unmarshal(blob.getBytes(1, (int) blob.length()),
>                         getClass().getClassLoader());
>                 blob.free();
>                 ignite.cache(cacheName).put(key, binaryObject.deserialize()); // here it throws the exception
>             }
>         }
>         catch (Exception e) {
>             throw new IgniteException(e);
>         }
>     }
> }
>
>
> Thanks
> Shawn
>
>


-- 
Best regards,
Ilya


Re: Persistent local storage

2017-01-17 Thread Ilya Lantukh
Hi,

Yes, it is possible to configure such behavior by using CacheStore [1]. You
can use it to persist data in any kind of database, global or local.

[1] https://apacheignite.readme.io/docs/persistent-store
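
For illustration, a minimal sketch of wiring a store into a cache configuration
with read/write-through enabled (MyLocalStore is a hypothetical implementation;
[1] describes the CacheStore interface in detail):

import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheStoreWiringSketch {
    /** Hypothetical store persisting entries to node-local storage (DB, files, ...). */
    public static class MyLocalStore extends CacheStoreAdapter<Long, String> {
        @Override public String load(Long key) {
            return null; // read the value for 'key' from the local store
        }

        @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) {
            // persist entry.getKey() / entry.getValue() to the local store
        }

        @Override public void delete(Object key) {
            // remove 'key' from the local store
        }
    }

    static CacheConfiguration<Long, String> cacheConfig() {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyLocalStore.class));
        ccfg.setReadThrough(true);   // cache misses go to the store
        ccfg.setWriteThrough(true);  // updates are propagated to the store
        return ccfg;
    }
}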

-- 
Best regards,
Ilya