Re: C# CacheStoreAdapter - Customizing Load, LoadCache methods

2020-11-23 Thread Pavel Tupitsyn
Ok, here is the problem, per documentation [1]:
"In case of partitioned caches, keys that are not mapped to this node,
either as primary or backups, will be automatically discarded by the cache."

Since you have two nodes in the cluster but call LocalLoadCache on only one
node, the cache entries that belong to the other node are discarded.
You can do one of the following:
1. Call LoadCache instead of LocalLoadCache, so that the cache store is invoked
on every node. This will run the same Oracle query multiple times, which may
be suboptimal.
2. Use a DataStreamer [2] instead of LoadCache to load all the data from a
single node (see the sketch below).

[1] https://apacheignite-net.readme.io/docs/data-loading#icacheloadcache
[2] https://apacheignite-net.readme.io/docs/data-streamers
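
For option 2, a minimal C# sketch of streaming the rows into the cache from a
single node (assuming Ignite.NET; the cache name, key/value types and row
source are placeholders, not your actual code):

    using System.Collections.Generic;
    using Apache.Ignite.Core;
    using Apache.Ignite.Core.Datastream;

    public static class StreamerLoader
    {
        // Loads key/value pairs into the cache from one node only.
        // Unlike LocalLoadCache, the streamer routes each entry to its
        // primary/backup nodes, so nothing is discarded.
        public static void Load(IIgnite ignite, IEnumerable<KeyValuePair<long, string>> rows)
        {
            using (IDataStreamer<long, string> streamer =
                ignite.GetDataStreamer<long, string>("myCache"))
            {
                foreach (var row in rows)
                    streamer.AddData(row.Key, row.Value); // buffered and sent in batches

                streamer.Flush(); // push any remaining buffered entries
            } // Dispose() closes the streamer and waits for completion
        }
    }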

On Tue, Nov 24, 2020 at 1:25 AM ABDumalagan 
wrote:

> 1. I currently have just 2 Ignite nodes -- the first one started remotely to
> create the cluster and the second one (this one) started programmatically with C#.
>
> 2. Adding Thread.Sleep(5000) doesn't change the result, unfortunately.
>
>


Re: C# CacheStoreAdapter - Customizing Load, LoadCache methods

2020-11-23 Thread ABDumalagan
1. I currently have just 2 Ignite nodes -- the first one started remotely to
create the cluster and the second one (this one) started programmatically with C#.
2. Adding Thread.Sleep(5000) doesn't change the result, unfortunately.





Re: Apache Ignite Clientside NearCache and Serverside Eviction blocking cluster

2020-11-23 Thread pvprsd
Hi,

Did anyone get a chance to look into this issue? Is this a supported
configuration for Ignite? Since these two features (client-side near cache and
server-side eviction) are very common, I assume that many projects are
using this combination.

If there is no configuration fix for this problem, can it be reported as a
defect in Ignite?

Many thanks in advance.

Thanks,
Prasad






Out-of-memory issue on single node cache with durable persistence

2020-11-23 Thread Scott Prater
Hello,

I recently ran into an out-of-memory error on a durable persistent cache I
set up a few weeks ago.  I have a single node, with durable persistence
enabled, as well as WAL archiving.  I'm running Ignite ver.
2.8.1#20200521-sha1:86422096.

I looked at the stack trace, but I couldn't get a clear fix on what part of
the system ran out of memory, or what parameters I should change to fix the
problem.  From what I could tell of the stack dump, it looks like the WAL
archive ran out of memory;  but the memory usage report that occurred just
a minute before the exception showed plenty of memory was available.

Can someone with more experience tuning Ignite memory point me towards the
configuration parameters I should adjust?  Below are my log and my
configuration. (I have read the wiki page on memory tuning, but I'm happy
to be referred back to it.)
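
For reference, a minimal sketch of the storage-related settings usually
involved in this kind of tuning, in Ignite.NET form (the sizes and paths here
are illustrative assumptions only, not the actual configuration shown below):

    using Apache.Ignite.Core;
    using Apache.Ignite.Core.Configuration;

    var cfg = new IgniteConfiguration
    {
        DataStorageConfiguration = new DataStorageConfiguration
        {
            // Off-heap region backing the persistent caches.
            DefaultDataRegionConfiguration = new DataRegionConfiguration
            {
                Name = "default_region",
                PersistenceEnabled = true,
                MaxSize = 6L * 1024 * 1024 * 1024,            // 6 GB off-heap cap
                CheckpointPageBufferSize = 512L * 1024 * 1024 // buffer used during checkpoints
            },
            WalMode = WalMode.LogOnly,                 // write-ahead log durability mode
            WalSegmentSize = 64 * 1024 * 1024,         // size of each WAL segment
            WalPath = "/data/ignite/wal",              // hypothetical locations
            WalArchivePath = "/data/ignite/wal-archive"
        }
    };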

The log, with the metrics right before the OOM exception, then the OOM
exception:

[2020-11-22T19:20:39,787][INFO ][grid-timeout-worker-#22][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=2845fe3e, uptime=5 days, 15:08:38.033]
^-- Cluster [hosts=1, CPUs=4, servers=1, clients=0, topVer=1,
minorTopVer=1]
^-- Network [addrs=[0:0:0:0:0:0:0:1%lo, xxx.xxx.xxx.xxx, 127.0.0.1,
yyy.yyy.yyy.yyy], discoPort=47500, commPort=47100]
^-- CPU [CPUs=4, curLoad=0.33%, avgLoad=0.29%, GC=0%]
^-- Heap [used=316MB, free=62.34%, comm=812MB]
^-- Off-heap memory [used=4288MB, free=33.45%, allocated=6344MB]
^-- Page memory [pages=1085139]
^--   sysMemPlc region [type=internal, persistence=true,
lazyAlloc=false,
  ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.99%,
allocRam=100MB, allocTotal=0MB]
^--   default_region region [type=default, persistence=true,
lazyAlloc=true,
  ...  initCfg=256MB, maxCfg=6144MB, usedRam=4288MB, freeRam=30.2%,
allocRam=6144MB, allocTotal=4240MB]
^--   metastoreMemPlc region [type=internal, persistence=true,
lazyAlloc=false,
  ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.94%,
allocRam=0MB, allocTotal=0MB]
^--   TxLog region [type=internal, persistence=true, lazyAlloc=false,
  ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
allocRam=100MB, allocTotal=0MB]
^-- Ignite persistence [used=4240MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=6, qSize=0]
[2020-11-22T19:21:15,585][ERROR][db-checkpoint-thread-#63][] Critical
system error detected. Will be handled accordingly to configured handler
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
failureCtx=FailureContext [type=CRITICAL_ERROR,
err=java.lang.OutOfMemoryError]]
java.lang.OutOfMemoryError: null
at sun.misc.Unsafe.allocateMemory(Native Method) ~[?:1.8.0_121]
at org.apache.ignite.internal.util.GridUnsafe.allocateMemory(GridUnsafe.java:1205) ~[ignite-core-2.9.0.jar:2.9.0]
at org.apache.ignite.internal.util.GridUnsafe.allocateBuffer(GridUnsafe.java:264) ~[ignite-core-2.9.0.jar:2.9.0]
at org.apache.ignite.internal.processors.cache.persistence.wal.ByteBufferExpander.<init>(ByteBufferExpander.java:36) ~[ignite-core-2.9.0.jar:2.9.0]
at org.apache.ignite.internal.processors.cache.persistence.wal.AbstractWalRecordsIterator.<init>(AbstractWalRecordsIterator.java:125) ~[ignite-core-2.9.0.jar:2.9.0]
at org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$RecordsIterator.<init>(FileWriteAheadLogManager.java:2701) ~[ignite-core-2.9.0.jar:2.9.0]
at org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$RecordsIterator.<init>(FileWriteAheadLogManager.java:2637) ~[ignite-core-2.9.0.jar:2.9.0]
at org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.replay(FileWriteAheadLogManager.java:944) ~[ignite-core-2.9.0.jar:2.9.0]
at org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.replay(FileWriteAheadLogManager.java:920) ~[ignite-core-2.9.0.jar:2.9.0]
at org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry$GroupStateLazyStore.initIfNeeded(CheckpointEntry.java:347) ~[ignite-core-2.9.0.jar:2.9.0]
at org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry$GroupStateLazyStore.access$300(CheckpointEntry.java:243) ~[ignite-core-2.9.0.jar:2.9.0]
at org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry.initIfNeeded(CheckpointEntry.java:122) ~[ignite-core-2.9.0.jar:2.9.0]
at org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry.groupState(CheckpointEntry.java:104) ~[ignite-core-2.9.0.jar:2.9.0]
at

Re: C# CacheStoreAdapter - Customizing Load, LoadCache methods

2020-11-23 Thread Pavel Tupitsyn
1. How many Ignite nodes do you have?
2. What if you add Thread.Sleep(5000) before the last Console.WriteLine?
Does the resulting number change?

On Mon, Nov 23, 2020 at 6:01 PM ABDumalagan 
wrote:

> 1. Your program worked for me!
>
> 2. I added something to my LoadCache(Action<object, object> act, params
> object[] args) method in OracleStore.cs. I added the following 3 lines
> after the while loop:
>
> reader.Dispose();
>
> cmd.Dispose();
>
> con.Dispose();
>
> The console returned a non-zero cache size of 5,136; however, the number of
> rows I wanted to load (and counted by the method) is 10,000. I was
> wondering what happened to the other ~5,000 rows and why they aren't in
> the cache?
>
>
>


Re: C# CacheStoreAdapter - Customizing Load, LoadCache methods

2020-11-23 Thread ABDumalagan
1. Your program worked for me!

2. I added something to my LoadCache(Action<object, object> act, params object[]
args) method in OracleStore.cs. I added the following 3 lines after the
while loop:
reader.Dispose();
cmd.Dispose();
con.Dispose();
The console returned a non-zero cache size of 5,136; however, the number of
rows I wanted to load (and counted by the method) is 10,000. I was wondering
what happened to the other ~5,000 rows and why they aren't in the cache?





Re: Ignite communicating with non ignite servers

2020-11-23 Thread Evgenii Zhuravlev
Hi,

Can you please tell me what scan you were running? I want to reproduce this
issue using tenable.sc.

Thank you,
Evgenii


On Tue, Sep 22, 2020 at 06:55, Ilya Kasnacheev wrote:

> Hello!
>
> I don't think it should cause heap dumps. Here you are showing just a
> warning. This warning may be ignored.
>
> It's outside the scope of Apache Ignite to prevent something else from trying
> to connect to it. If you have invasive security port scanning, you should
> expect to see warnings/errors in the logs of any network application.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Tue, Sep 22, 2020 at 16:26, ignite_user2016 wrote:
>
>> We have SSL enabled on all servers, but somehow it is attempting an SSL
>> connection and causing heap dumps. Is there a way to stop the external
>> server from trying to connect to Ignite?
>>
>> 2020-09-10 22:52:47,029 WARN [grid-nio-worker-tcp-comm-3-#27%NAME_GRID%]
>> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi Client
>> disconnected abruptly due to network connection loss or because the
>> connection was left open on application shutdown. [cls=class
>> o.a.i.i.util.nio.GridNioException, msg=Failed to decode SSL data:
>> GridSelectorNioSessionImpl [worker=DirectNioClientWorker
>> [super=AbstractNioClientWorker [idx=3, bytesRcvd=13315002728, bytesSent=0,
>> bytesRcvd0=18, bytesSent0=0, select=true, super=GridWorker
>> [name=grid-nio-worker-tcp-comm-3, igniteInstanceName=WebGrid,
>> finished=false, heartbeatTs=1599796365124, hashCode=1230825885,
>> interrupted=false, runner=grid-nio-worker-tcp-comm-3-#27%WebGrid%]]],
>> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
>> readBuf=java.nio.DirectByteBuffer[pos=18 lim=18 cap=32768],
>> inRecovery=null,
>> outRecovery=null, closeSocket=true,
>>
>> outboundMessagesQueueSizeMetric=o.a.i.i.processors.metric.impl.LongAdderMetric@69a257d1
>> ,
>> super=GridNioSessionImpl [locAddr=/*IG_SERVER1*:47101, rmtAddr=/*SEC_SCAN*
>> SERVER:52082, createTime=1599796365124, closeTime=0, bytesSent=0,
>> bytesRcvd=18, bytesSent0=0, bytesRcvd0=18, sndSchedTime=1599796365124,
>> lastSndTime=1599796365124, lastRcvTime=1599796367026, readsPaused=false,
>> filterChain=FilterChain[filters=[GridNioCodecFilter
>> [parser=o.a.i.i.util.nio.GridDirectParser@20ca1d6a, directMode=true],
>> GridConnectionBytesVerifyFilter, SSL filter], accepted=true,
>> markedForClose=false]]]
>>
>>
>>
>>
>


Re: Nodes failed to join the cluster after restarting

2020-11-23 Thread Ivan Bessonov
Hi,

Sadly, the logs from the latest message show nothing. There are no visible
issues with the code either; I already checked it. Sorry to say, but what
we need are additional logs in the Ignite code and a stable reproducer, and
we have neither.

I don't think you should worry about it; it's most likely a bug that only
occurs once.

On Thu, Nov 19, 2020 at 02:50, Cong Guo wrote:

> Hi,
>
> I attach the log from the only working node while two others are
> restarted. There is no error message other than the "failed to join"
> message. I do not see any clue in the log. I cannot reproduce this issue
> either. That's why I am asking about the code. Maybe you know certain
> suspicious places. Thank you.
>
> On Wed, Nov 18, 2020 at 2:45 AM Ivan Bessonov 
> wrote:
>
>> Sorry, I see that you use TcpDiscoverySpi.
>>
>> On Wed, Nov 18, 2020 at 10:44, Ivan Bessonov wrote:
>>
>>> Hello,
>>>
>>> These parameters are configured automatically; I know that you don't
>>> configure them. And given that all "automatic" configuration is
>>> completed, the chances of seeing the same bug again are low.
>>>
>>> Understanding the reason is tricky; we would need to debug the starting
>>> node or at least add more logs. Is this possible? I see that you're asking
>>> me about the code.
>>>
>>> Knowing the content of "ver" and "histCache.toArray()" in
>>> "org.apache.ignite.internal.processors.metastorage.persistence.DistributedMetaStorageImpl#collectJoiningNodeData"
>>> would certainly help.
>>> More specifically: ver.id() and
>>> Arrays.stream(histCache.toArray()).map(item -> Arrays.toString(item.keys())).collect(Collectors.joining(","))
>>>
>>> Honestly, I have no idea how your situation is even possible; otherwise
>>> we would find the solution rather quickly. Needless to say, I can't
>>> reproduce it. The error message that you see was created for the case when
>>> you join your node to the wrong cluster.
>>>
>>> Do you have any custom code during the node start? And one more question
>>> - what discovery SPI are you using? TCP or Zookeeper?
>>>
>>>
>>> On Wed, Nov 18, 2020 at 02:29, Cong Guo wrote:
>>>
 Hi,

 The parameter values on the two other nodes are the same. Actually, I do
 not configure these values; when you enable native persistence, you
 see these logs by default. Nothing is special. When this error occurs
 on the restarting node, nothing happens on the two other nodes. When I restart
 the second node, it also fails with the same error.

 I will still need to restart the nodes in the future, one by one,
 without stopping the service. This issue may happen again. The workaround
 requires deactivating the cluster and stopping the service, which does not work in
 a production environment.

 I think we need to fix this bug or at least understand the reason to
 avoid it. Could you please tell me where this version value could be
 modified when a node just starts? Do you have any guess about this bug now?
 I can help analyze the code. Thank you.

 On Tue, Nov 17, 2020 at 4:09 AM Ivan Bessonov 
 wrote:

> Thank you for the reply!
>
> Right now the only existing distributed properties I see are these:
> - Baseline parameter 'baselineAutoAdjustEnabled' was changed from
> 'null' to 'false'
> - Baseline parameter 'baselineAutoAdjustTimeout' was changed from
> 'null' to '30'
> - SQL parameter 'sql.disabledFunctions' was changed from 'null' to
> '[FILE_WRITE, CANCEL_SESSION, MEMORY_USED, CSVREAD, LINK_SCHEMA,
> MEMORY_FREE, FILE_READ, CSVWRITE, SESSION_ID, LOCK_MODE]'
>
> I wonder what values they have on nodes that rejected the new node. I
> suggest sending logs of those nodes as well.
> Right now I believe that this bug won't happen again on your
> installation, but it only makes it more elusive...
>
> The most probable reason is that the node (somehow) initialized some
> properties with defaults before joining the cluster, while the cluster didn't
> have those values at all.
> The rule is that an activated cluster can't accept changed properties
> from a joining node. So, the workaround would be deactivating the cluster,
> joining the node and activating it again. But as I said, I don't think that
> you'll see this bug ever again.
>
> On Tue, Nov 17, 2020 at 07:34, Cong Guo wrote:
>
>> Hi,
>>
>> Please find the attached log for a complete but failed reboot. You
>> can see the exceptions.
>>
>> On Mon, Nov 16, 2020 at 4:00 AM Ivan Bessonov 
>> wrote:
>>
>>> Hello,
>>>
>>> there must be a bug somewhere during node start: the node updates its
>>> distributed metastorage content and tries to join an already activated
>>> cluster, thus creating a conflict. It's hard to tell the exact data that
>>> caused the conflict, especially without any logs.
>>>
>>> Topic that you mentioned (
>>> 

Re: [2.8.1]Checking optimistic transaction state on remote nodes

2020-11-23 Thread Ilya Kasnacheev
Hello!

You can set the default concurrency mode and isolation level for transactions by
specifying them in TransactionConfiguration. Otherwise, you are correct.
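
A minimal sketch of this in Ignite.NET form (the cluster can equally be
configured via Spring XML or Java; the values below are just the usual
defaults made explicit):

    using System;
    using Apache.Ignite.Core;
    using Apache.Ignite.Core.Transactions;

    var cfg = new IgniteConfiguration
    {
        TransactionConfiguration = new TransactionConfiguration
        {
            // Used when a transaction is started without explicit arguments.
            DefaultTransactionConcurrency = TransactionConcurrency.Pessimistic,
            DefaultTransactionIsolation = TransactionIsolation.RepeatableRead,
            DefaultTimeout = TimeSpan.FromSeconds(30)
        }
    };

    using (var ignite = Ignition.Start(cfg))
    using (var tx = ignite.GetTransactions().TxStart()) // picks up the defaults above
    {
        // ... cache operations ...
        tx.Commit();
    }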

Regards,
-- 
Ilya Kasnacheev


On Mon, Nov 23, 2020 at 14:49, 38797715 <38797...@qq.com> wrote:

> Hi Ilya,
>
> So, to confirm again: according to the log message, an optimistic
> transaction and READ_COMMITTED are used for a single data operation on a
> transactional cache?
>
> And if a transaction is explicitly started, the default concurrency mode
> and isolation level are PESSIMISTIC and REPEATABLE_READ?
> On 2020/11/20 at 7:50 PM, Ilya Kasnacheev wrote:
>
> Hello!
>
> It will happen when the node has left but the transaction has to be
> committed.
>
> Most operations on a transactional cache will involve implicit transactions,
> so there you go.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Thu, Nov 19, 2020 at 16:46, 38797715 <38797...@qq.com> wrote:
>
>> Hi community,
>>
>> Although there is a transactional cache, no transaction operations are
>> performed, yet there is a lot of the output below in the log. Why?
>>
>> [2020-11-16 14:01:44,947][INFO ][sys-stripe-8-#9][IgniteTxManager]
>> Checking optimistic transaction state on remote nodes [tx=GridDhtTxLocal
>> [nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
>> nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
>> nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
>> nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
>> nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
>> [nearOnOriginatingNode=false, nearNodes=KeySetView [],
>> dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55,
>> 3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false,
>> super=IgniteTxLocalAdapter [completedBase=null,
>> sndTransformedVals=false, depEnabled=false,
>> txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false,
>> useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion
>> [topVer=216485010, order=1607062856849, nodeOrder=1],
>> writeVer=GridCacheVersion [topVer=216485023, order=1607062856850,
>> nodeOrder=1], implicit=true, loc=true, threadId=24070,
>> startTime=1605506134277, nodeId=2b0db4f4-86d1-42c2-babf-f6318bd932e5,
>> startVer=GridCacheVersion [topVer=216485010, order=1607062856849,
>> nodeOrder=1], endVer=null, isolation=READ_COMMITTED,
>> concurrency=OPTIMISTIC, timeout=0, sysInvalidate=false, sys=false,
>> plc=2, commitVer=null, finalizing=RECOVERY_FINISH, invalidParts=null,
>> state=PREPARED, timedOut=false, topVer=AffinityTopologyVersion
>> [topVer=117, minorTopVer=0], mvccSnapshot=null, skipCompletedVers=false,
>> parentTx=null, duration=370668ms, onePhaseCommit=false], size=1]]],
>> fut=GridCacheTxRecoveryFuture [trackable=true,
>> futId=81c3b7af571-1093b7fe-20ae-4c3f-9adb-4ecac23c136e,
>> tx=GridDhtTxLocal [nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
>> nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
>> nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
>> nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
>> nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
>> [nearOnOriginatingNode=false, nearNodes=KeySetView [],
>> dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55,
>> 3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false,
>> super=IgniteTxLocalAdapter [completedBase=null,
>> sndTransformedVals=false, depEnabled=false,
>> txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false,
>> useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion
>> [topVer=216485010, order=1607062856849, nodeOrder=1],
>> writeVer=GridCacheVersion [topVer=216485023, order=1607062856850,
>> nodeOrder=1], implicit=true, loc=true, threadId=24070,
>> startTime=1605506134277, nodeId=2b0db4f4-86d1-42c2-babf-f6318bd932e5,
>> startVer=GridCacheVersion [topVer=216485010, order=1607062856849,
>> nodeOrder=1], endVer=null, isolation=READ_COMMITTED,
>> concurrency=OPTIMISTIC, timeout=0, sysInvalidate=false, sys=false,
>> plc=2, commitVer=null, finalizing=RECOVERY_FINISH, invalidParts=null,
>> state=PREPARED, timedOut=false, topVer=AffinityTopologyVersion
>> [topVer=117, minorTopVer=0], mvccSnapshot=null, skipCompletedVers=false,
>> parentTx=null, duration=370668ms, onePhaseCommit=false], size=1]]],
>> failedNodeIds=SingletonSet [a7eded9b-4078-4ee5-a1dd-426b8debc203],
>> nearTxCheck=false, innerFuts=EmptyList [],
>> super=GridCompoundIdentityFuture [super=GridCompoundFuture [rdc=Bool
>> reducer: true, initFlag=0, lsnrCalls=0, done=false, cancelled=false,
>> err=null, futs=EmptyList []
>> [2020-11-16 14:01:44,947][INFO ][sys-stripe-8-#9][IgniteTxManager]
>> Finishing prepared transaction [commit=true, tx=GridDhtTxLocal
>> [nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
>> nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
>> nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
>> nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
>> nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
>> [nearOnOriginatingNode=false, 

Re: [2.8.1]Checking optimistic transaction state on remote nodes

2020-11-23 Thread 38797715

Hi Ilya,

So, to confirm again: according to the log message, an optimistic
transaction and READ_COMMITTED are used for a single data operation on a
transactional cache?


And if a transaction is explicitly started, the default concurrency mode
and isolation level are PESSIMISTIC and REPEATABLE_READ?


On 2020/11/20 at 7:50 PM, Ilya Kasnacheev wrote:

Hello!

It will happen when the node has left but the transaction has to be 
committed.


Most operations on a transactional cache will involve implicit
transactions, so there you go.


Regards,
--
Ilya Kasnacheev


On Thu, Nov 19, 2020 at 16:46, 38797715 <38797...@qq.com> wrote:


Hi community,

Although there is a transactional cache, no transaction operations are
performed, yet there is a lot of the output below in the log. Why?

[2020-11-16 14:01:44,947][INFO ][sys-stripe-8-#9][IgniteTxManager]
Checking optimistic transaction state on remote nodes
[tx=GridDhtTxLocal
[nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
[nearOnOriginatingNode=false, nearNodes=KeySetView [],
dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55,
3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false,
super=IgniteTxLocalAdapter [completedBase=null,
sndTransformedVals=false, depEnabled=false,
txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false,
useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion
[topVer=216485010, order=1607062856849, nodeOrder=1],
writeVer=GridCacheVersion [topVer=216485023, order=1607062856850,
nodeOrder=1], implicit=true, loc=true, threadId=24070,
startTime=1605506134277, nodeId=2b0db4f4-86d1-42c2-babf-f6318bd932e5,
startVer=GridCacheVersion [topVer=216485010, order=1607062856849,
nodeOrder=1], endVer=null, isolation=READ_COMMITTED,
concurrency=OPTIMISTIC, timeout=0, sysInvalidate=false, sys=false,
plc=2, commitVer=null, finalizing=RECOVERY_FINISH, invalidParts=null,
state=PREPARED, timedOut=false, topVer=AffinityTopologyVersion
[topVer=117, minorTopVer=0], mvccSnapshot=null,
skipCompletedVers=false,
parentTx=null, duration=370668ms, onePhaseCommit=false], size=1]]],
fut=GridCacheTxRecoveryFuture [trackable=true,
futId=81c3b7af571-1093b7fe-20ae-4c3f-9adb-4ecac23c136e,
tx=GridDhtTxLocal [nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
[nearOnOriginatingNode=false, nearNodes=KeySetView [],
dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55,
3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false,
super=IgniteTxLocalAdapter [completedBase=null,
sndTransformedVals=false, depEnabled=false,
txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false,
useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion
[topVer=216485010, order=1607062856849, nodeOrder=1],
writeVer=GridCacheVersion [topVer=216485023, order=1607062856850,
nodeOrder=1], implicit=true, loc=true, threadId=24070,
startTime=1605506134277, nodeId=2b0db4f4-86d1-42c2-babf-f6318bd932e5,
startVer=GridCacheVersion [topVer=216485010, order=1607062856849,
nodeOrder=1], endVer=null, isolation=READ_COMMITTED,
concurrency=OPTIMISTIC, timeout=0, sysInvalidate=false, sys=false,
plc=2, commitVer=null, finalizing=RECOVERY_FINISH, invalidParts=null,
state=PREPARED, timedOut=false, topVer=AffinityTopologyVersion
[topVer=117, minorTopVer=0], mvccSnapshot=null,
skipCompletedVers=false,
parentTx=null, duration=370668ms, onePhaseCommit=false], size=1]]],
failedNodeIds=SingletonSet [a7eded9b-4078-4ee5-a1dd-426b8debc203],
nearTxCheck=false, innerFuts=EmptyList [],
super=GridCompoundIdentityFuture [super=GridCompoundFuture [rdc=Bool
reducer: true, initFlag=0, lsnrCalls=0, done=false, cancelled=false,
err=null, futs=EmptyList []
[2020-11-16 14:01:44,947][INFO ][sys-stripe-8-#9][IgniteTxManager]
Finishing prepared transaction [commit=true, tx=GridDhtTxLocal
[nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
[nearOnOriginatingNode=false, nearNodes=KeySetView [],
dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55,
3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false,

Re: IgniteSecurity vs GridSecurityProcessor

2020-11-23 Thread Ilya Kasnacheev
Hello!

Please refer to this specific ticket:
https://issues.apache.org/jira/browse/IGNITE-9560

As well as this Javadoc of the new class:

/**
 * Ignite Security Processor.
 * <p>
 * The differences between {@code IgniteSecurity} and {@code GridSecurityProcessor} are:
 * <ul>
 * <li>{@code IgniteSecurity} allows to define a current security context by
 * {@link #withContext(SecurityContext)} or {@link #withContext(UUID)} methods.
 * <li>{@code IgniteSecurity} doesn't require to pass {@code SecurityContext} to authorize operations.
 * <li>{@code IgniteSecurity} doesn't extend {@code GridProcessor} interface
 * sequentially it doesn't have any methods of the lifecycle of {@code GridProcessor}.
 * </ul>
 */


Regards,
-- 
Ilya Kasnacheev


On Fri, Nov 20, 2020 at 19:26, Vishwas Bm wrote:

> Hi,
>
> We were using 2.7.6 and had implemented a custom security plugin for
> authorization and authentication by implementing GridSecurityProcessor.
>
> Now in 2.9 we see that a new interface, IgniteSecurity, is provided.
> May I know the difference between the two interfaces, as both look
> similar, and what is the appropriate place to implement them?
>
> Also, in 2.7.6 there was a class called SecurityContextHolder to hold the
> context.
> In 2.9 we no longer see that class; instead we see a class called
> OperartionClassContext.
> How do we use this new class when using a custom security plugin?
>
>
>
> Regards,
> Vishwas
>


RE: Getting error Node is out of topology (probably, due to short-time network problems)

2020-11-23 Thread ibelyakov
Hi,

According to the provided log, I see a "Blocked system-critical thread has been
detected" message, and the node was segmented since it was unable to respond to
another node. Most probably this is caused by JVM pauses, possibly related
to GC.

Do you collect GC logs for the nodes?

You can find information on how to enable GC logs here:
https://ignite.apache.org/docs/latest/perf-and-troubleshooting/troubleshooting#detailed-gc-logs
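
For reference, a minimal sketch of passing GC-logging options to the JVM from
an Ignite.NET node (the flags are standard Java 8 options and the log path is
an assumption; on Java 9+ use -Xlog:gc* instead):

    using Apache.Ignite.Core;

    var cfg = new IgniteConfiguration
    {
        JvmOptions = new[]
        {
            "-XX:+PrintGCDetails",           // detailed GC logging (Java 8 style)
            "-XX:+PrintGCDateStamps",
            "-Xloggc:/var/log/ignite/gc.log" // hypothetical log location
        }
    };

    using (var ignite = Ignition.Start(cfg))
    {
        // The node now runs with GC logging enabled.
    }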

Igor





Re: C# CacheStoreAdapter - Customizing Load, LoadCache methods

2020-11-23 Thread Pavel Tupitsyn
Your code seems to be correct. It works for me in a simplified form:
https://gist.github.com/ptupitsyn/a64c899b32b73ab55cb706cd4a09e6e9

1. Can you try the program above - does it work for you?
2. Can you confirm that the Oracle query returns a non-empty result set?


On Mon, Nov 23, 2020 at 3:00 AM ABDumalagan 
wrote:

> I see - am I dealing with 1.) in my case?
>
> When I hover over the method LoadCache(Action<object, object> act, params
> object[] args) in Visual Studio, it says that "...This method is called
> whenever
> Apache.Ignite.Core.Cache.ICache.LocalLoadCache(Apache.Ignite.Core.Cache.ICacheEntryFilter,
> params object[]) method is invoked which is usually to preload the cache
> from persistent storage".
>
> From the .NET docs, it says that for LocalLoadCache(ICacheEntryFilter, Object[]),
> the loaded values will then be given to the optionally passed-in predicate
> and, if the predicate returns true, will be stored in the cache. If the predicate
> is null, then all loaded values will be stored in the cache.
>
> In my case, I call cache.LocalLoadCache(null) in Program.cs,
> where it then calls LoadCache(Action<object, object> act, params
> object[] args) in OracleStore.cs,
> and the Associate IDs are being printed to the console.
>
> However, when I try to print the cache size to the console, it returns 0 -
> does this mean that the values are not being stored in the cache?
>
> Do I need to do something else to make sure the values are loaded into the
> cache? I thought that LocalLoadCache would just put all the values queried
> from the underlying storage into the cache.
>
>
>
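
For reference, a minimal sketch of the cache store pattern being discussed: a
CacheStoreAdapter whose LoadCache hands every row to the act callback and
disposes the ADO.NET objects deterministically (the class, connection string
and SQL are placeholders, not the actual OracleStore.cs):

    using System;
    using Apache.Ignite.Core.Cache.Store;
    using Oracle.ManagedDataAccess.Client;

    public class SampleOracleStore : CacheStoreAdapter<long, string>
    {
        public override void LoadCache(Action<long, string> act, params object[] args)
        {
            // 'using' guarantees Dispose() of the reader, command and connection
            // even if an exception is thrown while reading.
            using (var con = new OracleConnection("connection-string-here"))
            using (var cmd = new OracleCommand("SELECT id, name FROM associates", con))
            {
                con.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        act(reader.GetInt64(0), reader.GetString(1)); // hand each row to Ignite
                }
            }
        }

        public override string Load(long key) => null;        // no read-through in this sketch
        public override void Write(long key, string val) { }  // no write-through
        public override void Delete(long key) { }
    }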


Ignite persistence: Data of node lost that got excluded from baseline topology

2020-11-23 Thread VincentCE
Hi!

In our project we are currently using Ignite 2.8.1 without native persistence
enabled. Now we would like to enable this feature to prevent data loss during
node restarts.

Important: we use AttributeNodeFilters to separate our data, e.g. data of
type1 only lives in the type1 cluster group, and so on.

I have two questions regarding *native persistence*:

1. After some time we would like to shut down the type1 nodes to save
resources (but possibly use them again in the future if business
requires it -> we would like their data to remain safely persisted).
However, in order to start up the cluster again, the type1 nodes need to be
excluded from the baseline topology. But then, if we later want to reuse the
type1 nodes, their data gets deleted as soon as they rejoin the baseline
topology. Is there a way to prevent this?

2. When using AttributeNodeFilter, it seems that we need to use fixed
consistentIds (e.g. consistentId = "type1-0"), since otherwise a type2 node
could reuse the persistence storage directory of a type1 node, which would
ultimately break the data separation when new data is loaded into the caches.
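
For question 2, a minimal sketch of pinning a node's consistent ID and role
attribute at startup, in Ignite.NET form (the attribute name, value and ID are
placeholders for whatever the node filter actually checks):

    using System.Collections.Generic;
    using Apache.Ignite.Core;

    var cfg = new IgniteConfiguration
    {
        // Fixed per node, so the node always reuses the same persistence directory.
        ConsistentId = "type1-0",
        UserAttributes = new Dictionary<string, object>
        {
            ["data.type"] = "type1" // hypothetical attribute matched by the node filter
        }
    };

    var ignite = Ignition.Start(cfg);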

Thanks in advance!


