Failed to process selector key (Connection reset by peer)

2021-03-05 Thread erathina
Hi, we have an Ignite cluster set up with two Ignite servers. At certain times
during the week, we get these error messages in a sequence that we believe
is causing the JVM heap usage to grow. We have Xmx and Xms set to 2 GB,
using JDK 11. The Ignite version used is 2.8.0. We know 2 GB is very small, but
we believe increasing the heap size allocation is not going to solve the issue.
The exact stack trace is:


Mar 02, 2021 1:45:20 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to process selector key [ses=GridSelectorNioSessionImpl
[worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
lim=8192 cap=8192], super=AbstractNioClientWorker [idx=3, bytesRcvd=0,
bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
[name=grid-nio-worker-client-listener-3, igniteInstanceName=null,
finished=false, heartbeatTs=1614667518323, hashCode=92764489,
interrupted=false, runner=grid-nio-worker-client-listener-3-#133]]],
writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null,
closeSocket=true, outboundMessagesQueueSizeMetric=null,
super=GridNioSessionImpl [locAddr=/x.x.x.x:x, rmtAddr=/x.x.x.x:x,
createTime=1614667512243, closeTime=0, bytesSent=0, bytesRcvd=517,
bytesSent0=0, bytesRcvd0=0, sndSchedTime=1614667512243,
lastSndTime=1614667512243, lastRcvTime=1614667512273, readsPaused=false,
filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
GridNioCodecFilter [parser=ClientListenerBufferedParser, directMode=false]],
accepted=true, markedForClose=false]]]
java.io.IOException: Connection reset by peer
at java.base/sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276)
at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:245)
at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:223)
at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:358)
at org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1162)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2449)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2216)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1857)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.base/java.lang.Thread.run(Thread.java:834)


The server crashes with a Java OOM. Analyzing the biggest objects in the .hprof
file at the time of the OOM, we saw this:

It looks like the ClientListenerNioServerBuffer alone is consuming 1 GB of
memory at the time of the crash. Shouldn't this buffer be cleared when there is
any issue with NCs?

Other threads suggest increasing the socket timeout or reducing the failure
detection timeout. Although I will try them out, I am skeptical that those
fixes will work.
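
For reference, here is a minimal sketch of where such timeouts live in the Java
configuration API; the specific properties and values below are illustrative
assumptions, not a tuned recommendation for this cluster:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.ClientConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class TimeoutConfigSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Failure detection for server nodes (default 10_000 ms) and client nodes (default 30_000 ms).
        cfg.setFailureDetectionTimeout(10_000);
        cfg.setClientFailureDetectionTimeout(30_000);

        // Close idle thin-client/JDBC connections instead of keeping their buffers around
        // (0, the default, means connections are never closed for idleness).
        cfg.setClientConnectorConfiguration(new ClientConnectorConfiguration().setIdleTimeout(60_000));

        // Give up on a communication connection if a socket write stalls for this long.
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setSocketWriteTimeout(5_000);
        cfg.setCommunicationSpi(commSpi);

        Ignition.start(cfg);
    }
}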

Any help is appreciated!

Thanks!





Loading data to cache using a data streamer

2021-03-05 Thread Josh Katz
The client node gets disconnected while trying to load data into the cache using
a data streamer...

How can we overcome this issue and load data into the cache reliably?

See Exception call stack:

Apache.Ignite.Core.Common.ClientDisconnectedException: Client node disconnected: ignite-instance-dab0a266-a4fb-403f-863c-289d3ed18ab7 ---> Apache.Ignite.Core.Common.JavaException: class org.apache.ignite.IgniteClientDisconnectedException: Client node disconnected: ignite-instance-dab0a266-a4fb-403f-863c-289d3ed18ab7
   at org.apache.ignite.internal.GridKernalGatewayImpl.readLock(GridKernalGatewayImpl.java:93)
   at org.apache.ignite.internal.cluster.ClusterGroupAdapter.guard(ClusterGroupAdapter.java:175)
   at org.apache.ignite.internal.cluster.ClusterGroupAdapter.forPredicate(ClusterGroupAdapter.java:379)
   at org.apache.ignite.internal.cluster.ClusterGroupAdapter.forCacheNodes(ClusterGroupAdapter.java:593)
   at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.allowOverwrite(DataStreamerImpl.java:496)
   at org.apache.ignite.internal.processors.platform.datastreamer.PlatformDataStreamer.processInLongOutLong(PlatformDataStreamer.java:187)
   at org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inLongOutLong(PlatformTargetProxyImpl.java:55)
   at Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.ExceptionCheck()
   at Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.CallLongMethod(GlobalRef obj, IntPtr methodId, Int64* argsPtr)
   at Apache.Ignite.Core.Impl.Unmanaged.UnmanagedUtils.TargetInLongOutLong(GlobalRef target, Int32 opType, Int64 memPtr)
   at Apache.Ignite.Core.Impl.PlatformJniTarget.InLongOutLong(Int32 type, Int64 val)
   --- End of inner exception stack trace ---
   at Apache.Ignite.Core.Impl.PlatformJniTarget.InLongOutLong(Int32 type, Int64 val)
   at Apache.Ignite.Core.Impl.Datastream.DataStreamerImpl`2.set_AllowOverwrite(Boolean value)
   at PopulateCache(String accountStyle, DateTime startDate, DateTime endDate)
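
For what it's worth, the usual recovery pattern on the Java side is to wait on
the reconnect future carried by the disconnect exception and then retry; a
minimal sketch against the Java API follows (the poster uses the .NET client,
so this is only the analogous pattern, and the cache name, key type, and retry
policy are made-up placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteClientDisconnectedException;
import org.apache.ignite.IgniteDataStreamer;

public class StreamWithReconnectSketch {
    public static void loadAll(Ignite ignite, Iterable<Integer> keys) {
        while (true) {
            try (IgniteDataStreamer<Integer, Integer> streamer = ignite.dataStreamer("myCache")) {
                for (Integer k : keys)
                    streamer.addData(k, k); // placeholder values

                streamer.flush();
                return; // all data handed off successfully
            }
            catch (IgniteClientDisconnectedException e) {
                // Block until the client node reconnects to the cluster, then retry the load.
                e.reconnectFuture().get();
            }
        }
    }
}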

Thanks,
Josh Katz

--
Please follow the hyperlink to important 
disclosures.https://www.dodgeandcox.com/disclosures/email_disclosure_funds.html



Re: [2.10 branch]cpp thin client transaction :Transaction with id 1 not found.

2021-03-05 Thread 38797715

Hello Igor,

Thank you very much for your hard work!

On 2021/3/5 6:50 PM, Igor Sapego wrote:
Guys, I just want to notify you that the issue is fixed and is 
included in Ignite-2.10


Best Regards,
Igor


On Thu, Feb 18, 2021 at 3:41 AM 18624049226 <18624049...@163.com> wrote:


Hello Ilya,

https://issues.apache.org/jira/browse/IGNITE-14204


On 2021/2/18 12:14 AM, Ilya Kasnacheev wrote:

Hello!

I confirm that I see this issue. Can you please file a ticket
against IGNITE JIRA?

Thanks,
-- 
Ilya Kasnacheev



Tue, Feb 16, 2021 at 11:58, jjimeno <jjim...@omp.com>:

Hello!

In fact, it's very simple:

int main()
   {
   IgniteClientConfiguration cfg;

   cfg.SetEndPoints("10.250.0.10, 10.250.0.4");

   try
      {
      IgniteClient client = IgniteClient::Start(cfg);

      CacheClient cache = client.GetOrCreateCache("vds");

      ClientTransactions transactions = client.ClientTransactions();

      ClientTransaction tx = transactions.TxStart(PESSIMISTIC, READ_COMMITTED);

      cache.Put(1, 1);

      tx.Commit();
      }
   catch (IgniteError & err)
      {
      std::cout << "An error occurred: " << err.GetText() << std::endl;

      return err.GetCode();
      }

   return 0;
   }

Not always, but sometimes, I get a "stack overflow" error, which makes me
think there is a concurrency problem in the code.

Cluster configuration:


Error:


Just in case, the C++ version I'm currently using is:
685c1b70ca (HEAD -> master, origin/master, origin/HEAD) IGNITE-13865 Support DateTime as a key or value in .NET and Java (#8580)

Let me know if you need anything else.







Re: [2.9.1]Failed to find DHT update future for deferred update response

2021-03-05 Thread 38797715

Hi Ilya,

It's not easy to build a reproducible environment; this is probably a usage
problem rather than a bug.


In particular, I want to know how the DataStreamer behaves in case of node
failure and whether it has failover capability.


On 2021/3/5 8:16 PM, Ilya Kasnacheev wrote:

Hello!

Do you happen to have a reproducer for this issue? I've not seen 
anything similar.


Regards,
--
Ilya Kasnacheev


Wed, Mar 3, 2021 at 12:31, 38797715 <38797...@qq.com>:


Hi team,

When using the DataStreamer to write a large amount of data at high
speed, if one server node fails, the other nodes print a lot of the
following messages and eventually fail as well:

[2021-03-03 15:49:44,516][WARN ][sys-stripe-6-#7][atomic] Failed
to find DHT update future for deferred update response
[futId=142939814, nodeId=bfe0d1a9-8e0c-4a9e-8b62-041f2f252a80,
res=GridDhtAtomicDeferredUpdateResponse [futIds=GridLongList
[idx=256,

arr=[142939814,142731432,142827914,142939816,142731434,142939818,142939820,142939822,142939824,142939826,142939828,142731436,142939830,142731438,142939832,142939834,142731440,142827916,142731442,142827918,142827920,142939836,142827922,142939838,142827924,142827926,142939840,142939842,142827928,142731444,142939844,142827930,142731446,142939846,142827932,142939848,142731448,142827934,142939850,142827936,142939852,142827938,142827940,142939854,142827942,142939856,142939858,142827944,142827946,142939860,142827948,142939862,142827950,142731450,142939864,142827952,142731452,142939866,142939868,142827954,142731454,142827956,142731456,142827958,142939870,142827960,142731458,142939872,142827962,142939874,142731460,142939876,142939878,142731462,142731464,142939880,142731466,142939882,142939884,142731468,142939886,142731470,142939888,142731472,142731474,142939890,142731476,142939892,142731478,142939894,142939896,142731480,142731482,142731484,142731486,142731488,142731490,142731492,142731494,142939898,142939900,142731496,142731498,142939902,142939904,142731500,142939906,142939908,142731502,142939910,142731504,142939912,142939914,142731506,142939916,142731508,142731510,142939918,142939920,142939922,142731512,142939924,142731514,142939926,142731516,142939928,142731518,142939930,142731520,142939932,142939934,142939936,142731522,142939938,142731524,142939940,142731526,142939942,142939944,142731528,142939946,142731530,142939948,142939950,142731532,142939952,142939954,142731534,142939956,142731536,142939958,142731538,142939960,142731540,142731542,142939962,142939964,142731544,142731546,142731548,142939966,142731550,142939968,142731552,142939970,142731554,142731556,142939972,142731558,142731560,142731562,142939974,142731564,142939976,142939978,142939980,

What I want to ask is:

1. Are these logs related to node failures during DataStreamer writes?

2. Does the DataStreamer have a failover mechanism? We know that the
DataStreamer sends data to specific nodes in batches. When a node
fails, what is the behavior of the DataStreamer? (See the sketch below.)
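
For what it's worth, the futures returned by addData() are one way to observe,
from the public Java API, whether a given batch was applied and to schedule a
retry; a minimal sketch (the cache name and retry handling are placeholders,
and this is not a statement about the streamer's internal failover behavior):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class StreamerBatchTrackingSketch {
    public static Map<Integer, Throwable> load(Ignite ignite, Map<Integer, String> data) {
        Map<Integer, Throwable> failed = new ConcurrentHashMap<>();

        try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
            streamer.allowOverwrite(true); // needed if existing keys may be re-sent on retry

            data.forEach((k, v) ->
                // addData() returns a future that completes when the batch
                // containing this entry has been processed (or has failed).
                streamer.addData(k, v).listen(fut -> {
                    try {
                        fut.get();
                    }
                    catch (Exception e) {
                        failed.put(k, e); // candidate for a retry by the caller
                    }
                }));

            streamer.flush(); // push out any partially filled buffers
        }

        return failed;
    }
}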



Re: [2.9.1]Failed to find DHT update future for deferred update response

2021-03-05 Thread Ilya Kasnacheev
Hello!

Do you happen to have a reproducer for this issue? I've not seen anything
similar.

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 3, 2021 at 12:31, 38797715 <38797...@qq.com>:

> Hi team,
>
> When using the DataStreamer to write a large amount of data at high speed, if
> one server node fails, the other nodes print a lot of the following
> messages and eventually fail as well:
>
> [2021-03-03 15:49:44,516][WARN ][sys-stripe-6-#7][atomic] Failed to find
> DHT update future for deferred update response [futId=142939814,
> nodeId=bfe0d1a9-8e0c-4a9e-8b62-041f2f252a80,
> res=GridDhtAtomicDeferredUpdateResponse [futIds=GridLongList [idx=256,
> arr=[142939814,142731432,142827914,142939816,142731434,142939818,142939820,142939822,142939824,142939826,142939828,142731436,142939830,142731438,142939832,142939834,142731440,142827916,142731442,142827918,142827920,142939836,142827922,142939838,142827924,142827926,142939840,142939842,142827928,142731444,142939844,142827930,142731446,142939846,142827932,142939848,142731448,142827934,142939850,142827936,142939852,142827938,142827940,142939854,142827942,142939856,142939858,142827944,142827946,142939860,142827948,142939862,142827950,142731450,142939864,142827952,142731452,142939866,142939868,142827954,142731454,142827956,142731456,142827958,142939870,142827960,142731458,142939872,142827962,142939874,142731460,142939876,142939878,142731462,142731464,142939880,142731466,142939882,142939884,142731468,142939886,142731470,142939888,142731472,142731474,142939890,142731476,142939892,142731478,142939894,142939896,142731480,142731482,142731484,142731486,142731488,142731490,142731492,142731494,142939898,142939900,142731496,142731498,142939902,142939904,142731500,142939906,142939908,142731502,142939910,142731504,142939912,142939914,142731506,142939916,142731508,142731510,142939918,142939920,142939922,142731512,142939924,142731514,142939926,142731516,142939928,142731518,142939930,142731520,142939932,142939934,142939936,142731522,142939938,142731524,142939940,142731526,142939942,142939944,142731528,142939946,142731530,142939948,142939950,142731532,142939952,142939954,142731534,142939956,142731536,142939958,142731538,142939960,142731540,142731542,142939962,142939964,142731544,142731546,142731548,142939966,142731550,142939968,142731552,142939970,142731554,142731556,142939972,142731558,142731560,142731562,142939974,142731564,142939976,142939978,142939980,
>
> What I want to ask is:
>
> 1. Are these logs related to node failures during DataStreamer writes?
>
> 2. Does the DataStreamer have a failover mechanism? We know that the
> DataStreamer sends data to specific nodes in batches. When a node fails,
> what is the behavior of the DataStreamer?
>


Re: Unable to evict cache entry in all nodes synchronously

2021-03-05 Thread Ilya Kasnacheev
Hello!

Do you have a reproducer which shows that entries are still contained in
cache after remove/clear?

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 5, 2021 at 08:22, ashishg :

> Can we get an acknowledgement from all cluster nodes that the cache has been
> updated or cleared?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
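
One knob that is relevant to this question is the cache write synchronization
mode: with FULL_SYNC, an update or removal only completes after both primary
and backup copies have been written, whereas the default PRIMARY_SYNC waits
for the primary only. A minimal configuration sketch (cache name and backup
count are placeholders; whether this fully covers the poster's eviction
scenario is an assumption):

import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class FullSyncCacheSketch {
    public static CacheConfiguration<Integer, String> cacheConfig() {
        return new CacheConfiguration<Integer, String>("myCache")
            // Wait for primaries AND backups before completing an update/remove;
            // the default PRIMARY_SYNC only waits for the primary copy.
            .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC)
            .setBackups(1);
    }
}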


Re: Exception while running Select Query in ignite 2.8.1 - Fetched result set was too large

2021-03-05 Thread Ilya Kasnacheev
Hello!

Please check out
https://lists.apache.org/thread.html/rf9a3f10fb719f9c1e31516992cc624f9898b08857c34b6e5a3a0fc13%40%3Cuser.ignite.apache.org%3E

Regards,
-- 
Ilya Kasnacheev


Thu, Mar 4, 2021 at 08:52, Veena :

> Hi,
>
> We are getting the exception below when executing a join query using the cache API.
> We are using a left outer join on two tables, one with 200,000 (2 lakh) rows and
> another with 80K, and the output has around 60K rows with various WHERE conditions.
>
>
> Cache.executequery() is returning the exception below.
>
> Error:
> class org.apache.ignite.IgniteException: Fetched result set was too large.
> exception
> javax.cache.CacheException: Failed to run reduce query locally. Failed to
> execute SQL query.
> General error: "class org.apache.ignite.IgniteException: Fetched result set
> was too large."; SQL statement:
>
> We are using an Ignite persistence-enabled cache and have already tried the
> lazy loading and enforce-join-order options.
> We tried executing the join query in DBeaver and got the same exception.
>
> Will increasing RAM help, or is there any other solution? Please
> suggest.
>
> Thanks in advance.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite hanging with log "TcpCommunicationSpi - TCP client created"

2021-03-05 Thread Ilya Kasnacheev
Hello!

This looks like a case of half-open connections with very long retransmits:
one host acknowledges that the connection has failed while the other does not.

Please provide complete logs from both directions.

Regards,
-- 
Ilya Kasnacheev


Thu, Mar 4, 2021 at 08:42, swara :

> Hi,
>
>
> In our application, Ignite suddenly hangs (threads blocked) for 1
> to 5 hours, and then all threads are released at the same time and work
> normally.
>
> The log message we are getting repeatedly at that time is:
>
> log info: o.a.i.s.c.tcp.TcpCommunicationSpi - TCP client created
> [client=null, node addrs=[/*.*.*.*:48112, /127.0.0.1:48112],
> duration=10143ms]
>
> Ignite version: 2.8.1
>
> Please tell me the solution to avoid hanging.
>
> Thank You
> Swara
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite 2.8.1 ContinuousQueryWithTransformer: ClassCastException on PME

2021-03-05 Thread Ilya Kasnacheev
Hello!

Can you please provide a runnable reproducer project? Save me the trouble
of pasting all these snippets and writing the boilerplate.

Regards,
-- 
Ilya Kasnacheev


Thu, Mar 4, 2021 at 18:21, :

> Ilya, unfortunately, I am unable to reproduce this issue in a pet project.
>
> I have faced this issue on Ignite 2.9.1 again when I brought one
> of the two nodes of a cluster down:
>
> org.apache.ignite.IgniteCheckedException: com.devexperts.tos.riskmonitor.domain.RmAccount cannot be cast to com.devexperts.tos.riskmonitor.cluster.cache.cq.CacheKeyWithEventType
> at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7563) [ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:260) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:209) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:160) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3342) [ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3163) [ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) [ignite-core-2.9.1.jar:2.9.1]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]
> Caused by: java.lang.ClassCastException: com.devexperts.tos.riskmonitor.domain.RmAccount cannot be cast to com.devexperts.tos.riskmonitor.cluster.cache.cq.CacheKeyWithEventType
> at com.devexperts.tos.riskmonitor.cluster.cache.distributiontracker.IgniteCacheKeysDistributionTracker$LocalCacheListener.onUpdated(IgniteCacheKeysDistributionTracker.java:365) ~[main/:?]
> at org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.notifyLocalListener(CacheContinuousQueryHandler.java:1128) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.notifyCallback0(CacheContinuousQueryHandler.java:954) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.notifyCallback(CacheContinuousQueryHandler.java:895) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.addBackupNotification(GridContinuousProcessor.java:1162) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler$2.flushBackupQueue(CacheContinuousQueryHandler.java:512) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager.flushBackupQueue(CacheContinuousQueryManager.java:691) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onDone(GridDhtPartitionsExchangeFuture.java:2394) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:3972) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:3687) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1729) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:943) ~[ignite-core-2.9.1.jar:2.9.1]
> at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3314) ~[ignite-core-2.9.1.jar:2.9.1]
> ... 3 more
>
> Cache:
>
>
> new CacheConfiguration()
> .setSqlSchema("PUBLIC")
> .setCacheMode(CacheMode.PARTITIONED)
> // This means that we won't be able to use ACID compliant transactions
> .setAtomicityMode(CacheAtomicityMode.ATOMIC)
> // Wait only primary nodes to finish write operations, do not wait backup nodes
>
> 

Re: [2.10 branch]cpp thin client transaction :Transaction with id 1 not found.

2021-03-05 Thread Igor Sapego
Guys, I just want to notify you that the issue is fixed and is included in
Ignite-2.10

Best Regards,
Igor


On Thu, Feb 18, 2021 at 3:41 AM 18624049226 <18624049...@163.com> wrote:

> Hello Ilya,
>
> https://issues.apache.org/jira/browse/IGNITE-14204
> On 2021/2/18 12:14 AM, Ilya Kasnacheev wrote:
>
> Hello!
>
> I confirm that I see this issue. Can you please file a ticket against
> IGNITE JIRA?
>
> Thanks,
> --
> Ilya Kasnacheev
>
>
> Tue, Feb 16, 2021 at 11:58, jjimeno :
>
>> Hello!
>>
>> In fact, it's very simple:
>>
>> int main()
>>{
>>IgniteClientConfiguration cfg;
>>
>>cfg.SetEndPoints("10.250.0.10, 10.250.0.4");
>>
>>try
>>   {
>>   IgniteClient client = IgniteClient::Start(cfg);
>>
>>   CacheClient cache =
>> client.GetOrCreateCache> int32_t>("vds");
>>
>>   ClientTransactions transactions = client.ClientTransactions();
>>
>>   ClientTransaction tx = transactions.TxStart(PESSIMISTIC,
>> READ_COMMITTED);
>>
>>   cache.Put(1, 1);
>>
>>   tx.Commit();
>>   }
>>catch (IgniteError & err)
>>   {
>>   std::cout << "An error occurred: " << err.GetText() << std::endl;
>>
>>   return err.GetCode();
>>   }
>>
>>return 0;
>>}
>>
>> Not always, but sometimes, I get a "stack overflow" error, which makes me
>> think there is a concurrency problem in the code.
>>
>> Cluster configuration:
>> 
>>
>>
>> Error:
>> 
>>
>> Just in case, the C++ version I'm currently using is:
>> 685c1b70ca (HEAD -> master, origin/master, origin/HEAD) IGNITE-13865
>> Support
>> DateTime as a key or value in .NET and Java (#8580)
>>
>> Let me know if you need anything else.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: ContinuousQuery and injected bean

2021-03-05 Thread mvolkomorov
Initially I need a singleton service on a single node to collect events from the
data nodes and process them locally using my injected bean properties. It seems
the remoteFilterFactory requires the service to be remote too.
Could somebody clarify how exactly making service properties static
affects service marshalling?
When a continuous query is held as an Ignite service property, should it be
static or not?
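
For orientation, the general shape of a continuous query with a remote filter
factory in the Java API is sketched below. Everything reachable from the remote
filter instance is marshalled and shipped to the data nodes, while the local
listener stays on the originating node, which is why node-local dependencies
(such as an injected bean) are normally kept in the local listener or made
static/transient on the filter. The cache name and filter logic are
placeholders, not the poster's code:

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryEventSerializableFilter;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class ContinuousQuerySketch {
    public static QueryCursor<?> listen(Ignite ignite) {
        IgniteCache<Integer, String> cache = ignite.cache("events");

        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

        // Runs on the node that started the query; this is where an
        // injected bean can safely be used, since it is never marshalled.
        qry.setLocalListener(evts ->
            evts.forEach(e -> System.out.println("update: " + e.getKey())));

        // The filter instance (and its non-static, non-transient fields) is
        // serialized and deployed to remote nodes, so keep it lightweight.
        qry.setRemoteFilterFactory(FactoryBuilder.factoryOf(
            (CacheEntryEventSerializableFilter<Integer, String>) evt -> evt.getValue() != null));

        return cache.query(qry);
    }
}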


