Re: Getting "Failed to deserialize object" exception while trying to print value from Cache.Entry object

2021-03-01 Thread Alex Plehanov
Hello,

You can only use classes that are already deployed to the server as a query
filter. In your case, the class ConnectAndExecuteTestDataInJava exists only on
the client side, and the server knows nothing about it. Unlike Ignite nodes,
Ignite thin clients don't have the P2P class deployment feature.
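
For illustration, a minimal sketch of two thin-client approaches that avoid shipping a
filter class to the server (the address, cache name, and Integer/String types are
assumptions taken from the code quoted below): fetch the known key directly with get(),
or run the ScanQuery without a server-side filter and filter on the client.

import javax.cache.Cache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientQueryWithoutServerFilter {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("XX.XXX.XX.XXX:10800");
        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, String> cache = client.cache("myCache");

            // Option 1: a single known key does not need a ScanQuery at all.
            System.out.println("Key 31 = " + cache.get(31));

            // Option 2: scan without a server-side filter and filter on the client,
            // so no client-only class has to be deserialized on the server.
            try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(new ScanQuery<Integer, String>())) {
                for (Cache.Entry<Integer, String> e : cur) {
                    if (e.getKey() == 31)
                        System.out.println("Key = " + e.getKey() + ", Value = " + e.getValue());
                }
            }
        }
    }
}

Note that Option 2 streams all entries to the client, so it only makes sense for small
caches or ad-hoc checks.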

Mon, Mar 1, 2021 at 21:05, ChandanS:

> Hi,
>
> I am working on implementing Ignite in AWS as an EC2 cluster, and I am following the
> documentation from
> "https://www.gridgain.com/docs/latest/installation-guide/aws/manual-install-on-ec2"
> for my POC. I am using three EC2 instances to form a cluster with the
> gridgain-community-8.8.1 package ("./gridgain-community-8.8.1/bin/ignite.sh
> aws-static-ip-finder.xml"). I am able to start Ignite as a cluster, load data
> into the Ignite cache, and print the cache entries. Below is my code snippet:
>
> public class ConnectAndExecuteTestDataInJava {
>     public static void main(String args[]) throws Exception {
>         ConnectAndExecuteTestDataInJava igniteTestObj = new ConnectAndExecuteTestDataInJava();
>         igniteTestObj.connectToIgniteClusterAndExecuteData();  // working fine
>         igniteTestObj.printIgniteCache();  // working fine
>         igniteTestObj.queryIgniteCache();  // not working
>     }
>
>     private void connectToIgniteClusterAndExecuteData() {
>         try {
>             System.out.println("Starting the client Program");
>             ClientConfiguration cfg = new ClientConfiguration().setAddresses("XX.XXX.XX.XXX:10800");
>             IgniteClient client = Ignition.startClient(cfg);
>             System.out.println("Connection successful");
>
>             ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");
>
>             System.out.println("Caching data");
>             for (int i = 0; i <= 100; ++i) {
>                 cache.put(i, "String -" + i);
>             }
>
>             System.out.println("Cache created with 100 sample data");
>             client.close();
>         } catch (Exception excp) {
>             excp.printStackTrace();
>         }
>     }
>
>     private void printIgniteCache() {
>         System.out.println("Creating client connection");
>         ClientConfiguration cfg = new ClientConfiguration().setAddresses("XX.XXX.XX.XXX:10800");
>         System.out.println("Connection successful");
>         try (IgniteClient client = Ignition.startClient(cfg)) {
>             System.out.println("Getting data from the cache");
>             ClientCache<Integer, String> cache = client.cache("myCache");
>             System.out.println("Data retrieving");
>
>             // Get data from the cache
>             for (int i = 0; i < cache.size(); ++i) {
>                 System.out.println(cache.get(i));  // prints the cache entries like "String -11"
>             }
>             System.out.println("Data retrieved");
>         } catch (Exception e) {
>             System.out.println("Error connecting to client, program will exit");
>             e.printStackTrace();
>         }
>     }
>
>     private void queryIgniteCache() {
>         System.out.println("Creating client connection to query");
>         ClientConfiguration cfg = new ClientConfiguration().setAddresses("XX.XXX.XX.XXX:10800");
>         System.out.println("Connection successful");
>         try (IgniteClient client = Ignition.startClient(cfg)) {
>             System.out.println("Init cache");
>             ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");
>
>             System.out.println("Creating filter 1");
>             IgniteBiPredicate<Integer, String> filter1 = (key, p) -> key.equals(new Integer(31));
>             System.out.println("Applying filter 1");
>             QueryCursor<Cache.Entry<Integer, String>> qryCursor1 = cache.query(new ScanQuery<>(filter1));
>             System.out.println("Printing filter 1");
>             qryCursor1.forEach(
>                 entry -> System.out.println("Key1 = " + entry.getKey() + ", Value1 = " + entry.getValue()));  // throwing exception here
>             qryCursor1.close();
>
>             System.out.println("Filter data retrieved");
>         } catch (Exception e) {
>             System.out.println("Error connecting to client, program will exit");
>             e.printStackTrace();
>         }
>     }
> }
>
> Now, in the above code, connecting to the Ignite cache and printing the cache
> using a for loop works as expected, but when using Cache.Entry<Integer, String>
> in the queryIgniteCache() method, it throws a "Failed to deserialize
> object" exception. I am running the program from another EC2 instance's
> Session Manager terminal; the exception stack trace is below:
>
> Applying filter 1
> Printing filter 1
> Error connecting to client, program will exit
> org.apache.ignite.client.ClientException: Ignite failed to process request
> [2]: Failed to deserialize object
> [typeName=java.lang.invoke.SerializedLambda] (server status code [1])
> at
>
> 

Re[6]: Mixing persistent and in memory cache

2021-03-01 Thread Zhenya Stanilovsky


OK, I found it!
18:36:07 noringBase.info            INFO   Topology snapshot [ver=2, 
locNode=2bf85583, servers=2, clients=0, state=ACTIVE, CPUs=8, offheap=6.3GB, 
heap=3.5GB]
18:36:07 noringBase.info            INFO     ^-- Baseline [id=0, size=1, 
online=1, offline=0]
You called the baseline command after the first node was started, so you have 2 live
nodes with only one of them in the baseline.
 
Clear your persistence directories and rewrite the code like this:
startNode(...a..)
ignite = startNode(..b...)
ignite.cluster().state(ClusterState.ACTIVE);
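
For illustration, a minimal sketch of that ordering (the node configurations are
placeholders; assumes Ignite 2.9+, where cluster().state(ClusterState.ACTIVE) is available):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StartBothNodesThenActivate {
    public static void main(String[] args) {
        // cfgA()/cfgB() stand in for the two persistent node configurations from the test.
        Ignite igA = Ignition.start(cfgA());
        Ignite igB = Ignition.start(cfgB());

        // Activate only after BOTH nodes have joined, so the baseline topology
        // records both of them instead of just the first node.
        igB.cluster().state(ClusterState.ACTIVE);
    }

    private static IgniteConfiguration cfgA() { return new IgniteConfiguration().setIgniteInstanceName("a"); }
    private static IgniteConfiguration cfgB() { return new IgniteConfiguration().setIgniteInstanceName("b"); }
}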
 
 
> 
>> 
>>>Hi Zhenya,
>>>To be on the safe side, I increased the sleep to 5 sec. I also removed the 
>>>setBackups(2) – no difference – the test is still failing!
>>> 
>>>Newest log file with corresponding source attached.
>>> 
>>>I added more specific logs:
>>> 
>>>18:36:07 lusterTest.testMem2    INFO   >> value has been written to 
>>>'a': aval
>>>18:36:12 lusterTest.testMem2    INFO   >> value retrieved from 'b': 
>>>null
>>> 
>>>---
>>>Kind regards
>>> 
>>>Stephan Hesse
>>>Managing Director
>>> 
>>>DICOS GmbH Kommunikationssysteme
>>>Alsfelder Straße 11, 64289 Darmstadt
>>> 
>>>Phone: +49 6151 82787 27, Mobile: +49 1761 82787 27
>>> 
>>>www.dicos.de
>>> 
>>>DICOS GmbH Kommunikationssysteme, Darmstadt, Amtsgericht Darmstadt HRB 7024,
>>>Managing Directors: Dr. Winfried Geyer, Stephan Hesse, Waldemar Wiesner
>>> 
>>> 
>>> 
>>>From: Zhenya Stanilovsky < arzamas...@mail.ru >
>>>Sent: Monday, March 1, 2021 7:15 AM
>>>To: user@ignite.apache.org
>>>Subject: Re[4]: Mixing persistent and in memory cache
>>> 
>>>
>>>Hi Stephan, according to the logs, rebalancing is still in progress (probably a slow
>>>network?). Will the test pass if you increase the sleep interval? 2 sec, for example?
>>>Additionally, there is no need to set .setBackups(2) on a CacheMode.REPLICATED cache;
>>>please check the documentation.
>>> 
>>> 
Hi Zhenya,
 
Your 2nd point: yes, the cache itself has been propagated.
 
Please be aware that I have successfully used the same test with only the
in-memory region as well as with only the persistent region. Only when I
combine both does synchronization stop working for the in-memory region.
 
 
Please find attached the log file (both Ignite nodes run in the same 
process and contribute to this log file) as well as the current Junit test.
 
In the log file you will find:
 
The node startup:
 starting node A
 starting node B
 
The test case startup:
 testMem2
 
The test stops with:
java.lang.AssertionError: expected:<aval> but was:<null>
    …
    at de.dicos.cpcfe.ignite.IgniteClusterTest.testMem2(IgniteClusterTest.java:175)
 
---
Kind regards
 
Stephan Hesse
Managing Director
 
DICOS GmbH Kommunikationssysteme
Alsfelder Straße 11, 64289 Darmstadt
 
Phone: +49 6151 82787 27, Mobile: +49 1761 82787 27
 
www.dicos.de
 
DICOS GmbH Kommunikationssysteme, Darmstadt, Amtsgericht Darmstadt HRB 7024,
Managing Directors: Dr. Winfried Geyer, Stephan Hesse, Waldemar Wiesner
 
 
 
From: Zhenya Stanilovsky < arzamas...@mail.ru >
Sent: Friday, February 26, 2021 6:57 AM
To: user@ignite.apache.org
Subject: Re[2]: Mixing persistent and in memory cache
 

Hi Stephan, something is probably wrong with the configuration … it is not an expected
issue.
*  Please attach or send me ignite.log from all server nodes.
*  If you change the second call:

IgniteCache kva = getInMemoryKeyValue(igA);
IgniteCache kvb = getInMemoryKeyValue(igB); ← here

to something like:

IgniteCache kvb = getInMemoryKeyValue2(igB);

private IgniteCache getInMemoryKeyValue2(Ignite ignite)
{
    return ignite.cache(new CacheConfiguration() <---

just to check that the cache has already been created.

Will ignite.cache() see the previously created cache?
 
thanks !
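
For illustration, a minimal sketch of the check suggested above (the cache name "memkv"
comes from the code below; the generic parameters are assumptions):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class CachePropagationCheck {
    /** ignite.cache() only returns an existing cache and never creates one (unlike getOrCreateCache()). */
    static IgniteCache<Object, Object> getInMemoryKeyValue2(Ignite ignite) {
        IgniteCache<Object, Object> cache = ignite.cache("memkv");
        if (cache == null)
            System.out.println("Cache 'memkv' has not been propagated to node " + ignite.name());
        return cache;
    }
}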
 
 
>Hi Zhenya, thanks for this suggestion.
>
>However, neither setting CacheWriteSynchronizationMode to FULL_SYNC nor
>setting it to FULL_ASYNC changes anything: in-memory cache changes do not get
>propagated:
>
>private IgniteCache getInMemoryKeyValue(Ignite ignite)
>{
>    return ignite.getOrCreateCache(new CacheConfiguration()
>        .setName("memkv")
>        .setCacheMode(CacheMode.REPLICATED)
>        .setDataRegionName(NodeController.IN_MEMORY_REGION)
>        .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC)
>        .setBackups(2));
>}
>
>
>
>
>--
>Sent from:  http://apache-ignite-users.70518.x6.nabble.com/
 
 
 
 
>>> 
>>> 
>>> 
>>>  
>> 
>> 
>> 
>> 

Getting "Failed to deserialize object" exception while trying print value from Cache.Entry oblect

2021-03-01 Thread ChandanS
Hi,

I am working on implementing Ignite in AWS as an EC2 cluster, and I am following the
documentation from
"https://www.gridgain.com/docs/latest/installation-guide/aws/manual-install-on-ec2"
for my POC. I am using three EC2 instances to form a cluster with the
gridgain-community-8.8.1 package ("./gridgain-community-8.8.1/bin/ignite.sh
aws-static-ip-finder.xml"). I am able to start Ignite as a cluster, load data
into the Ignite cache, and print the cache entries. Below is my code snippet:

import javax.cache.Cache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;
import org.apache.ignite.lang.IgniteBiPredicate;

public class ConnectAndExecuteTestDataInJava {
    public static void main(String args[]) throws Exception {
        ConnectAndExecuteTestDataInJava igniteTestObj = new ConnectAndExecuteTestDataInJava();
        igniteTestObj.connectToIgniteClusterAndExecuteData();  // working fine
        igniteTestObj.printIgniteCache();  // working fine
        igniteTestObj.queryIgniteCache();  // not working
    }

    private void connectToIgniteClusterAndExecuteData() {
        try {
            System.out.println("Starting the client Program");
            ClientConfiguration cfg = new ClientConfiguration().setAddresses("XX.XXX.XX.XXX:10800");
            IgniteClient client = Ignition.startClient(cfg);
            System.out.println("Connection successful");

            ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");

            System.out.println("Caching data");
            for (int i = 0; i <= 100; ++i) {
                cache.put(i, "String -" + i);
            }

            System.out.println("Cache created with 100 sample data");
            client.close();
        } catch (Exception excp) {
            excp.printStackTrace();
        }
    }

    private void printIgniteCache() {
        System.out.println("Creating client connection");
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("XX.XXX.XX.XXX:10800");
        System.out.println("Connection successful");
        try (IgniteClient client = Ignition.startClient(cfg)) {
            System.out.println("Getting data from the cache");
            ClientCache<Integer, String> cache = client.cache("myCache");
            System.out.println("Data retrieving");

            // Get data from the cache
            for (int i = 0; i < cache.size(); ++i) {
                System.out.println(cache.get(i));  // prints the cache entries like "String -11"
            }
            System.out.println("Data retrieved");
        } catch (Exception e) {
            System.out.println("Error connecting to client, program will exit");
            e.printStackTrace();
        }
    }

    private void queryIgniteCache() {
        System.out.println("Creating client connection to query");
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("XX.XXX.XX.XXX:10800");
        System.out.println("Connection successful");
        try (IgniteClient client = Ignition.startClient(cfg)) {
            System.out.println("Init cache");
            ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");

            System.out.println("Creating filter 1");
            IgniteBiPredicate<Integer, String> filter1 = (key, p) -> key.equals(new Integer(31));
            System.out.println("Applying filter 1");
            QueryCursor<Cache.Entry<Integer, String>> qryCursor1 = cache.query(new ScanQuery<>(filter1));
            System.out.println("Printing filter 1");
            qryCursor1.forEach(
                entry -> System.out.println("Key1 = " + entry.getKey() + ", Value1 = " + entry.getValue()));  // throwing exception here
            qryCursor1.close();

            System.out.println("Filter data retrieved");
        } catch (Exception e) {
            System.out.println("Error connecting to client, program will exit");
            e.printStackTrace();
        }
    }
}

Now, in the above code, connecting to the Ignite cache and printing the cache
using a for loop works as expected, but when using Cache.Entry<Integer, String>
in the queryIgniteCache() method, it throws a "Failed to deserialize
object" exception. I am running the program from another EC2 instance's
Session Manager terminal; the exception stack trace is below:

Applying filter 1
Printing filter 1
Error connecting to client, program will exit
org.apache.ignite.client.ClientException: Ignite failed to process request
[2]: Failed to deserialize object
[typeName=java.lang.invoke.SerializedLambda] (server status code [1])
at
org.apache.ignite.internal.client.thin.TcpClientChannel.convertException(TcpClientChannel.java:365)
at
org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:326)
at
org.apache.ignite.internal.client.thin.TcpClientChannel.service(TcpClientChannel.java:242)
at
org.apache.ignite.internal.client.thin.ReliableChannel.lambda$service$1(ReliableChannel.java:193)
at
org.apache.ignite.internal.client.thin.ReliableChannel.applyOnDefaultChannel(ReliableChannel.java:807)
at

Re: wal issues

2021-03-01 Thread ткаленко кирилл
There may be several reasons:
1) Rare checkpoints;
2) Historical rebalancing;
3) Large transactions.

reason='too big size of WAL without checkpoint' means that there are too many WAL
segments without a checkpoint. This is possible under a heavy data load.
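
For illustration, a hedged sketch of the knobs usually involved; the property names are
real DataStorageConfiguration settings, but the values below are examples rather than
recommendations:

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class WalTuningSketch {
    public static IgniteConfiguration config() {
        DataStorageConfiguration ds = new DataStorageConfiguration()
            // Checkpoint more often so fewer WAL segments accumulate between checkpoints
            // (the default is 180 000 ms).
            .setCheckpointFrequency(60_000)
            // Cap the WAL archive size on disk; 4 GB here is just an example value.
            .setMaxWalArchiveSize(4L * 1024 * 1024 * 1024);

        ds.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        return new IgniteConfiguration().setDataStorageConfiguration(ds);
    }
}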

01.03.2021, 19:35, "narges saleh" :
> Hi All,
>
> What could be the reason for the wal folder filling up and the message "too 
> big size of WAL without checkpoint"? Too small or too big checkpoint interval 
> or buffer?
>
> [2021-02-28 09:12:12,9811][INFO 
> ][db-checkpoint-thread-#108][GridCacheDatabaseSharedManager] Checkpoint 
> started [checkpointId=5c947ef6-d9c3-40a2-b99e-09ffdaddd304, 
> startPtr=FileWALPointer [idx=25903, fileOff=12783980, len=228894], 
> checkpointBeforeLockTime=92ms, checkpointLockWait=0ms, 
> checkpointListenersExecuteTime=19ms, checkpointLockHoldTime=22ms, 
> walCpRecordFsyncDuration=58ms, writeCheckpointEntryDuration=9ms, 
> splitAndSortCpPagesDuration=214ms,  pages=229075, reason='too big size of WAL 
> without checkpoint']
>
> thanks


wal issues

2021-03-01 Thread narges saleh
Hi All,

What could be the reason for the wal folder filling up and the message "too
big size of WAL without checkpoint"? Too small or too big checkpoint
interval or buffer?

[2021-02-28 09:12:12,9811][INFO
][db-checkpoint-thread-#108][GridCacheDatabaseSharedManager] Checkpoint
started [checkpointId=5c947ef6-d9c3-40a2-b99e-09ffdaddd304,
startPtr=FileWALPointer [idx=25903, fileOff=12783980, len=228894],
checkpointBeforeLockTime=92ms, checkpointLockWait=0ms,
checkpointListenersExecuteTime=19ms, checkpointLockHoldTime=22ms,
walCpRecordFsyncDuration=58ms, writeCheckpointEntryDuration=9ms,
splitAndSortCpPagesDuration=214ms,  pages=229075, *reason='too big size of
WAL without checkpoint']*

thanks


Re: BinaryObjectException: Conflicting Enum Values

2021-03-01 Thread Ilya Kasnacheev
Hello!

I guess this information was stored somewhere and is causing conflicts now.
You can try dropping the binary_meta/ directory from the Ignite work directory on all
nodes; hopefully it will be repopulated correctly. Make sure to back it up
first!

Regards,
-- 
Ilya Kasnacheev


Thu, Feb 25, 2021 at 21:21, Mitchell Rathbun (BLOOMBERG/ 731 LEX) <mrathb...@bloomberg.net>:

> We have recently been seeing the following error:
>
> SEVERE: Failed to serialize object
> [typeName=com.bloomberg.aim.wingman.cachemgr.Ts3DataCache$Ts3CalcrtKey]
> class org.apache.ignite.binary.BinaryObjectException: Failed to write
> field [name=calcrtType]
> at
> org.apache.ignite.internal.binary.BinaryFieldAccessor.write(BinaryFieldAccessor.java:164)
> at
> org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:822)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:232)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:165)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:152)
> at
> org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:248)
> at
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:548)
> at
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toBinary(CacheObjectBinaryProcessorImpl.java:1403)
> at
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toCacheKeyObject(CacheObjectBinaryProcessorImpl.java:1198)
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.addData(DataStreamerImpl.java:751)
> at
> com.bloomberg.aim.wingman.cachemgr.Ts3DataCache.lambda$null$29(Ts3DataCache.java:1457)
> at java.base/java.lang.Iterable.forEach(Iterable.java:75)
> at
> com.bloomberg.aim.wingman.cachemgr.Ts3DataCache.lambda$updateMetadataAsyncHelper$30(Ts3DataCache.java:1456)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: class org.apache.ignite.binary.BinaryObjectException:
> Conflicting enum values. Name 'INT' uses ordinal value (2) that is also
> used for name 'INTEGER'
> [typeName='com.bloomberg.aim.wingman.common.dto.RealCalcrtFieldType']
> at
> org.apache.ignite.internal.binary.BinaryUtils.mergeEnumValues(BinaryUtils.java:2501)
> at
> org.apache.ignite.internal.binary.BinaryUtils.mergeMetadata(BinaryUtils.java:1028)
> at
> org.apache.ignite.internal.processors.cache.binary.BinaryMetadataTransport.requestMetadataUpdate(BinaryMetadataTransport.java:211)
> at
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.addMeta(CacheObjectBinaryProcessorImpl.java:603)
> at
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.addMeta(CacheObjectBinaryProcessorImpl.java:285)
> at
> org.apache.ignite.internal.binary.BinaryContext.registerUserClassDescriptor(BinaryContext.java:764)
> at
> org.apache.ignite.internal.binary.BinaryContext.registerDescriptor(BinaryContext.java:723)
> at
> org.apache.ignite.internal.binary.BinaryContext.registerClass(BinaryContext.java:581)
> at
> org.apache.ignite.internal.binary.BinaryContext.registerClass(BinaryContext.java:556)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.doWriteEnum(BinaryWriterExImpl.java:829)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.writeEnumField(BinaryWriterExImpl.java:1323)
> at
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.write0(BinaryFieldAccessor.java:670)
> at
> org.apache.ignite.internal.binary.BinaryFieldAccessor.write(BinaryFieldAccessor.java:157)
> ... 16 more
>
>
> As part of the key in one of our classes, we have an enum, for which one
> of the values is INT. It used to be INTEGER, but that was like 5 months
> ago. Looking in this cache, there are multiple entries with INT being used,
> and none with INTEGER. So why are we still getting this error? Writes with
> INT have clearly worked in the past, and INTEGER is not in the cache and
> hasn't been used in a long time. We recently updated from 2.7.5 to 2.9.1 as
> well, not sure if that is related.
>
>


Re: Set Thin Client ClassLoader

2021-03-01 Thread Ilya Kasnacheev
Hello!

Have you tried the solution posted there?

Regards,
-- 
Ilya Kasnacheev


Sat, Feb 27, 2021 at 01:10, PunxsutawneyPhil3:

> Stack Overflow link
> <https://stackoverflow.com/questions/66393352/is-there-a-way-to-set-an-ignite-thin-client-classloader>
> to this question as well.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Client node disconnected

2021-03-01 Thread Ilya Kasnacheev
Hello!

You can set clientReconnectDisabled to 'true' on the client nodes. In this
case the client will not try to reconnect and will instead produce an
error. When you see this error, you can create a new client, which will
hopefully not have these problems.
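
For illustration, a minimal sketch of a client configuration with reconnect disabled
(the discovery address is a placeholder):

import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class ClientWithoutAutoReconnect {
    public static Ignite start() {
        TcpDiscoverySpi disco = new TcpDiscoverySpi()
            .setClientReconnectDisabled(true);  // fail instead of silently retrying forever
        disco.setIpFinder(new TcpDiscoveryVmIpFinder()
            .setAddresses(Collections.singletonList("server-host:47500..47509")));

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientMode(true)
            .setDiscoverySpi(disco);

        // On disconnect this node now throws instead of hanging in reconnect loops,
        // so the application can dispose of it and start a fresh client node.
        return Ignition.start(cfg);
    }
}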

Regards,
-- 
Ilya Kasnacheev


Wed, Feb 24, 2021 at 14:14, Oğuzhan Melez:

>
> Thank you. So what should I do? The client node disconnected after this error,
> and the client cannot reconnect to the cluster until I reboot my application,
> the client node, and the server node. How can the client node reconnect to the cluster?
>
> Ilya Kasnacheev wrote on Wed, Feb 24, 2021 at 13:57:
>
>> Hello!
>>
>> Looks like network problems, long GC on server node or some kind of
>> deadlock on server node which prevents it from responding.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Wed, Feb 24, 2021 at 13:09, oguzhan:
>>
>>> Hello,
>>>
>>> We have 1 client node and 1 server node and we are using ignite version
>>> 2.9.1.
>>>
>>> Our application is scheduled to do the same jobs every day. The
>>> application did not get any errors for 2 weeks, but 2 weeks later we started
>>> getting the error you can see below (we get such an error about every 2
>>> weeks):
>>>
>>> I hope you can help me solve this problem. Thanks and best regards...
>>>
>>>
>>> 2021-02-14 02:07:34 WARN  tcp-client-disco-reconnector-#7-#77756
>>> TcpDiscoverySpi:576 - Failed to connect to any address from IP finder
>>> (will
>>> retry to join topology every 2000 ms; change 'reconnectDelay' to
>>> configure
>>> the frequency of retries): [/127.0.0.1:47500, /127.0.0.1:47501,
>>> /127.0.0.1:47502, /127.0.0.1:47503, /127.0.0.1:47504, /127.0.0.1:47505,
>>> /127.0.0.1:47506, /127.0.0.1:47507, /127.0.0.1:47508, /127.0.0.1:47509]
>>> 2021-02-14 02:07:37 INFO  grid-timeout-worker-#206 IgniteKernal:566 -
>>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>> ^-- Node [id=2fefd66f, uptime=4 days, 13:33:34.341]
>>> ^-- Cluster [hosts=1, CPUs=16, servers=1, clients=1, topVer=2,
>>> minorTopVer=18985]
>>> ^-- Network [addrs=[10.86.26.180, 127.0.0.1], discoPort=0,
>>> commPort=47101]
>>> ^-- CPU [CPUs=16, curLoad=1.07%, avgLoad=0.05%, GC=0.1%]
>>> ^-- Heap [used=865MB, free=92.96%, comm=12274MB]
>>> ^-- Off-heap memory [used=0MB, free=100%, allocated=0MB]
>>> ^-- Page memory [pages=0]
>>> ^--   sysMemPlc region [type=internal, persistence=false,
>>> lazyAlloc=false,
>>>   ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
>>> allocRam=0MB]
>>> ^--   TxLog region [type=internal, persistence=false,
>>> lazyAlloc=false,
>>>   ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
>>> allocRam=0MB]
>>> ^--   Default_Region region [type=default, persistence=false,
>>> lazyAlloc=true,
>>>   ...  initCfg=256MB, maxCfg=32768MB, usedRam=0MB, freeRam=100%,
>>> allocRam=0MB]
>>> ^-- Outbound messages queue [size=0]
>>> ^-- Public thread pool [active=0, idle=0, qSize=0]
>>> ^-- System thread pool [active=0, idle=81, qSize=0]
>>> 2021-02-14 02:07:38 ERROR tcp-client-disco-sock-writer-#2-#230
>>> TcpDiscoverySpi:586 - Failed to send message: null
>>> java.io.IOException: Failed to get acknowledge for message:
>>> TcpDiscoveryClientMetricsUpdateMessage [super=TcpDiscoveryAbstractMessage
>>> [sndNodeId=null, id=1d467368771-2fefd66f-0954-45dd-aa32-a33e58567950,
>>> verifierNodeId=null, topVer=0, pendingIdx=0, failedNodes=null,
>>> isClient=true]]
>>> at
>>>
>>> org.apache.ignite.spi.discovery.tcp.ClientImpl$SocketWriter.body(ClientImpl.java:1471)
>>> at
>>> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:58)
>>> 2021-02-14 02:07:44 WARN  tcp-comm-worker-#1-#216
>>> TcpCommunicationSpi:576 -
>>> Handshake timed out (will stop attempts to perform the handshake)
>>> [node=6953d599-d606-4781-a6ba-43de7aff59e4,
>>> connTimeoutStrategy=ExponentialBackoffTimeoutStrategy [maxTimeout=60,
>>> totalTimeout=1, startNanos=1671033974906026, currTimeout=60],
>>> err=Operation timed out [timeoutStrategy=
>>> ExponentialBackoffTimeoutStrategy
>>> [maxTimeout=60, totalTimeout=1, startNanos=1671033974906026,
>>> currTimeout=60]], addr=/127.0.0.1:47100,
>>> failureDetectionTimeoutEnabled=true, timeout=0]
>>> 2021-02-14 02:07:54 WARN  tcp-comm-worker-#1-#216
>>> TcpCommunicationSpi:576 -
>>> Handshake timed out (will stop attempts to perform the handshake)
>>> [node=6953d599-d606-4781-a6ba-43de7aff59e4,
>>> connTimeoutStrategy=ExponentialBackoffTimeoutStrategy [maxTimeout=60,
>>> totalTimeout=1, startNanos=1671044002786218, currTimeout=60],
>>> err=Operation timed out [timeoutStrategy=
>>> ExponentialBackoffTimeoutStrategy
>>> [maxTimeout=60, totalTimeout=1, startNanos=1671044002786218,
>>> currTimeout=60]], addr=dwccatp01/10.86.26.180:47100,
>>> failureDetectionTimeoutEnabled=true, timeout=0]
>>> 2021-02-14 02:08:06 ERROR grid-timeout-worker-#206 G:581 - Blocked
>>> 

Re: Database has been closed

2021-03-01 Thread Ilya Kasnacheev
Hello!

You don't seem to have the INFO log level enabled, so unfortunately there's not much to
see there. Please enable the INFO level; it should also add thread
dumps automatically, I think.

Regards,
-- 
Ilya Kasnacheev


Mon, Mar 1, 2021 at 13:48, DonTequila:

> Hi Ilya,
>
> The complete log is at the link below; can you take a look?
> ignite-2dc05669.gz
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2401/ignite-2dc05669.gz>
>
>
> I will try to add code that dumps stacktrace and threads in such cases, but
> I'm not sure yet how to do this.
>
> Thanks,
> Thomas.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Database has been closed

2021-03-01 Thread DonTequila
Hi Ilya,

The complete log is at the link below; can you take a look?
ignite-2dc05669.gz
<http://apache-ignite-users.70518.x6.nabble.com/file/t2401/ignite-2dc05669.gz>

I will try to add code that dumps stacktrace and threads in such cases, but
I'm not sure yet how to do this.
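
In case it helps, a minimal sketch (not Ignite-specific, and not from the original
message) of dumping every thread's stack programmatically, e.g. from the catch block
around the failing call:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public final class ThreadDumper {
    /** Prints the name, state and full stack trace of every live thread to stderr. */
    public static void dumpAllThreads() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {  // include locked monitors/synchronizers
            System.err.println("\"" + info.getThreadName() + "\" state=" + info.getThreadState());
            for (StackTraceElement el : info.getStackTrace())
                System.err.println("    at " + el);
        }
    }
}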

Thanks,
Thomas.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Database has been closed

2021-03-01 Thread Ilya Kasnacheev
Hello!

Unfortunately, we can't get any further here without a thread dump or, at least, a
complete log from the beginning.

Regards,
-- 
Ilya Kasnacheev


Thu, Feb 25, 2021 at 11:07, DonTequila:

> Hi again,
>
> Is there anything that could help me identify the potential reason for the
> read lock timeout without having a thread dump? Are there settings that I can
> tune to increase this timeout and prevent the node/database from closing?
>
> Thanks.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite PME query

2021-03-01 Thread VeenaMithare
Hi Kamlesh, 

PME-related questions have been answered on this forum before.
If you search for them, you may find answers to some of your questions.

regards,
Veena.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/