Re: Cache Expiry policy not working..

2020-08-26 Thread kay
Hi,

I can still get the remaining cached data, so the issue you shared is not my
problem.

It does not happen every time; sometimes the data is just not expired.

Here is my node log from after startup, when the data was not expired:
Cache1-2-2.zip

This time, 17 entries were not expired.
nodeId : 55a72003-066e-4504-bd0e-638ab19c5127
partition Id : 4,5,7,9,12,13,16,20,22,24,27,33,39,42,47,48,49

Please check and analyze the log file to see why those entries were not expired.

I'll wait for your reply.
Thank you so much.







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite client node raises "Sequence was removed from cache" after Ignite Server node restarts

2020-08-26 Thread xmw45688
USE CASE - use IgniteAtomicLong for table sequence generation (this may not be
the correct approach in a distributed environment).

Ignite Server (Ignite started in server mode) - apache-ignite-2.8.0.20190215
daily build
Ignite Service (Ignite started in client mode) - uses Ignite Spring to
initialize the sequence, see the code snippet below.

Code snippet:

IgniteAtomicLong userSeq;

@Autowired
UserRepository userRepository;

@Autowired
Ignite igniteInstance;

@PostConstruct
@Override
public void initSequence() {
    Long maxId = userRepository.getMaxId();
    if (maxId == null) {
        maxId = 0L;
    }
    LOG.info("Max User id: {}", maxId);
    userSeq = igniteInstance.atomicLong("userSeq", maxId, true);
    userSeq.getAndSet(maxId);
}

@Override
public Long getNextSequence() {
    return userSeq.incrementAndGet();
}

Exception:
This code works well until the Ignite server is restarted (the Ignite service
is not restarted). It raises "Sequence was removed from cache" after the
Ignite server node restarts.

2020-08-11 16:14:46 [http-nio-8282-exec-3] ERROR
c.p.c.p.service.PersistenceService - Error while saving entity:
java.lang.IllegalStateException: Sequence was removed from cache: userSeq
at
org.apache.ignite.internal.processors.datastructures.AtomicDataStructureProxy.removedError(AtomicDataStructureProxy.java:145)
at
org.apache.ignite.internal.processors.datastructures.AtomicDataStructureProxy.checkRemoved(AtomicDataStructureProxy.java:116)
at
org.apache.ignite.internal.processors.datastructures.GridCacheAtomicLongImpl.incrementAndGet(GridCacheAtomicLongImpl.java:94)

Tried to reinitialize when the server node is down, but this raises another
exception - "cannot start/stop cache within lock or transaction".

How can such issues be solved? Any suggestions are appreciated.

@Override
public Long getNextSequence() {
    if (userSeq == null || userSeq.removed()) {
        initSequence();
    }
    return userSeq.incrementAndGet();
}
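
For what it's worth, here is a minimal sketch of one possible workaround: only
re-create the atomic when no transaction is active, since starting the backing
cache is not allowed inside a lock or transaction. The field and method names
are taken from the snippet above; the rest is an untested illustration, not a
definitive fix.

@Override
public Long getNextSequence() {
    if (userSeq == null || userSeq.removed()) {
        // Re-creating the atomic starts its backing cache, which Ignite forbids
        // inside an active transaction or lock, so only do it when none is active.
        if (igniteInstance.transactions().tx() == null) {
            initSequence();
        } else {
            throw new IllegalStateException(
                "Sequence 'userSeq' was removed and cannot be re-created inside a transaction");
        }
    }
    return userSeq.incrementAndGet();
}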






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


How to confirm that disk compression is in effect?

2020-08-26 Thread 38797715

Hi,

We turned on disk compression to observe the trend in execution time and disk
space usage.


Our expectation was that with disk compression turned on, more CPU would be
used but less disk space would be occupied, and, because more data is written
per unit of time, the overall execution time would be shortened when memory is
insufficient.


However, we found that the execution time and disk consumption do not change
significantly. We tested diskPageCompressionLevel values of 0, 10 and 17.


Our test method is as follows:
The ignite-compress module has been added to the classpath.

The Ignite configuration is as follows:


[The attached Ignite XML configuration was stripped by the mailing list
archive; only the Spring beans header survived.]
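
Since the XML above was lost, here is a minimal programmatic sketch of the kind
of configuration being described. The LZ4 algorithm, cache name and 8 KB page
size are assumptions rather than values from the original post; note that disk
page compression only has an effect when the data storage page size is larger
than the filesystem block size (typically 4 KB).

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.DiskPageCompression;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DiskCompressionSketch {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        // Page size must exceed the filesystem block size, otherwise compression is a no-op.
        storageCfg.setPageSize(8 * 1024);
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        CacheConfiguration<Long, Object> cacheCfg = new CacheConfiguration<>("test");
        // Requires the ignite-compress module on the classpath.
        cacheCfg.setDiskPageCompression(DiskPageCompression.LZ4);
        cacheCfg.setDiskPageCompressionLevel(10); // one of the levels tested above

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);
        cfg.setCacheConfiguration(cacheCfg);

        Ignite ignite = Ignition.start(cfg);
        ignite.cluster().active(true); // clusters with persistence start inactive
    }
}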

Re: Hibernate 2nd Level query cache with Ignite

2020-08-26 Thread Evgenii Zhuravlev
Hi,

Can you please share full logs from client and server nodes?

Thanks,
Evgenii

Wed, 26 Aug 2020 at 14:26, Tathagata Roy :

> Hi,
>
>
>
> I am trying to do a POC on hibernate 2nd level cache with Apache Ignite.
> With this configuration I was able to make it work
>
>
>
> spring.jpa.properties.hibernate.cache.use_second_level_cache=true
> spring.jpa.properties.hibernate.cache.use_query_cache=true
> spring.jpa.properties.hibernate.generate_statistics=false
> spring.jpa.properties.hibernate.cache.region.factory_class=org.apache.ignite.cache.hibernate.HibernateRegionFactory
> spring.jpa.properties.org.apache.ignite.hibernate.default_access_type=READ_ONLY
>
>
>
>
>
> <dependency>
>   <groupId>org.gridgain</groupId>
>   <artifactId>ignite-hibernate_5.3</artifactId>
>   <version>8.7.23</version>
>   <exclusions>
>     <exclusion>
>       <groupId>org.hibernate</groupId>
>       <artifactId>hibernate-core</artifactId>
>     </exclusion>
>   </exclusions>
> </dependency>
>
>
>
> @Bean
> @ConditionalOnMissingBean
> public IgniteConfiguration igniteConfiguration(DiscoverySpi discoverySpi,
> CommunicationSpi communicationSpi) {
>     IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
>     igniteConfiguration.setClientMode(clientMode);
>     igniteConfiguration.setMetricsLogFrequency(0);
>
>     igniteConfiguration.setGridLogger(new Slf4jLogger());
>
>     igniteConfiguration.setDiscoverySpi(discoverySpi);
>     igniteConfiguration.setCommunicationSpi(communicationSpi);
>     igniteConfiguration.setFailureDetectionTimeout(failureDetectionTimeout);
>
>     CacheConfiguration cc = new CacheConfiguration<>();
>     cc.setName("Entity1");
>     cc.setCacheMode(CacheMode.REPLICATED);
>
>     CacheConfiguration cc1 = new CacheConfiguration<>();
>     cc1.setName("default-query-results-region");
>     cc1.setCacheMode(CacheMode.REPLICATED);
>
>     CacheConfiguration cc2 = new CacheConfiguration<>();
>     cc2.setName("default-update-timestamps-region");
>     cc2.setCacheMode(CacheMode.REPLICATED);
>
>     igniteConfiguration.setCacheConfiguration(cc);
>
>     return igniteConfiguration;
> }
>
>
>
>
>
>
>
> I am testing this with an external Ignite node, but if the external node
> is restarted, I see the following error when trying to access Entity1
>
>
>
> "errorMessage": "class
> org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed
> to perform cache operation (cache is stopped): Entity1; nested exception is
> java.lang.IllegalStateException: class
> org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed
> to perform cache operation (cache is stopped): Entity1",
>
>
>
> It looks like the issue is the one reported here:
>
>
>
>
> https://stackoverflow.com/questions/46053089/ignite-cache-reconnection-issue-cache-is-stopped
>
> https://issues.apache.org/jira/browse/IGNITE-5789
>
>
>
>
>
> Is there any way to make this work without restarting the client
> application?
>


Hibernate 2nd Level query cache with Ignite

2020-08-26 Thread Tathagata Roy
Hi,

I am trying to do a POC on hibernate 2nd level cache with Apache Ignite. With 
this configuration I was able to make it work

spring.jpa.properties.hibernate.cache.use_second_level_cache=true
spring.jpa.properties.hibernate.cache.use_query_cache=true
spring.jpa.properties.hibernate.generate_statistics=false
spring.jpa.properties.hibernate.cache.region.factory_class=org.apache.ignite.cache.hibernate.HibernateRegionFactory
spring.jpa.properties.org.apache.ignite.hibernate.default_access_type=READ_ONLY




<dependency>
  <groupId>org.gridgain</groupId>
  <artifactId>ignite-hibernate_5.3</artifactId>
  <version>8.7.23</version>
  <exclusions>
    <exclusion>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>



@Bean
@ConditionalOnMissingBean
public IgniteConfiguration igniteConfiguration(DiscoverySpi discoverySpi, 
CommunicationSpi communicationSpi) {
IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
igniteConfiguration.setClientMode(clientMode);
igniteConfiguration.setMetricsLogFrequency(0);

igniteConfiguration.setGridLogger(new Slf4jLogger());

igniteConfiguration.setDiscoverySpi(discoverySpi);
igniteConfiguration.setCommunicationSpi(communicationSpi);
igniteConfiguration.setFailureDetectionTimeout(failureDetectionTimeout);

CacheConfiguration cc = new CacheConfiguration<>();
cc.setName("Entity1");
cc.setCacheMode(CacheMode.REPLICATED);



CacheConfiguration cc1 = new CacheConfiguration<>();
cc1.setName("default-query-results-region");
cc1.setCacheMode(CacheMode.REPLICATED);



CacheConfiguration cc2 = new CacheConfiguration<>();
cc2.setName("default-update-timestamps-region");
cc2.setCacheMode(CacheMode.REPLICATED);

igniteConfiguration.setCacheConfiguration(cc);



return igniteConfiguration;
}




I am testing this with an external Ignite node, but if the external node is
restarted, I see the following error when trying to access Entity1

"errorMessage": "class 
org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed to 
perform cache operation (cache is stopped): Entity1; nested exception is 
java.lang.IllegalStateException: class 
org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed to 
perform cache operation (cache is stopped): Entity1",

It looks like the issue is the one reported here:

https://stackoverflow.com/questions/46053089/ignite-cache-reconnection-issue-cache-is-stopped
https://issues.apache.org/jira/browse/IGNITE-5789


Is there any way to make this work without restarting the client application?


Re: Cache Expiry policy not working..

2020-08-26 Thread Evgenii Zhuravlev
Hi,

It looks like a slightly different problem then. As far as I can see, the
only issue here is related to the cache size metric, not to the get operations.
It is a known issue: https://issues.apache.org/jira/browse/IGNITE-9474
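
For readers hitting the same symptom, here is a minimal sketch of the expiry
setup under discussion, assuming a 4-minute created-entry expiry as in the
report above; the cache name and types are hypothetical. With eagerTtl enabled
(the default), expired entries are also cleaned up in the background rather
than only on access, although the size metric issue tracked in IGNITE-9474 can
still make the reported cache size lag.

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.configuration.CacheConfiguration;

public class ExpiryConfigSketch {
    static CacheConfiguration<String, String> expiringCacheConfig() {
        CacheConfiguration<String, String> cfg = new CacheConfiguration<>("myCache");
        // Entries expire 4 minutes after creation.
        cfg.setExpiryPolicyFactory(
            CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 4)));
        cfg.setEagerTtl(true); // background cleanup of expired entries
        return cfg;
    }
}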

Best Regards,
Evgenii

Tue, 25 Aug 2020 at 21:22, kay :

> Hello, there is a get method in my code,
>
> but that method is not for the expiry check; it is there to verify that the
> data was saved correctly.
>
> I checked the cache size in the GridGain Web Console 4 hours after putting
> the data (the expiry policy is 4 minutes).
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Shared counter

2020-08-26 Thread Srikanta Patanjali
Hi Bastien,

Ignite provides distributed data structures, and in your case an atomic long
would work as a distributed counter.

For more info, see https://apacheignite.readme.io/docs/atomic-types
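
A minimal sketch of using IgniteAtomicLong as the shared counter (the counter
name is illustrative); note that, as raised in the original question, any
decrement on node exit would still have to be done by the application:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicLong;
import org.apache.ignite.Ignition;

public class SharedCounterSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Create the counter if it does not exist yet, starting at 0.
        IgniteAtomicLong counter = ignite.atomicLong("sharedCounter", 0, true);

        long value = counter.incrementAndGet(); // atomic, visible to every node
        counter.decrementAndGet();

        System.out.println("Counter value: " + value);
    }
}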


Regards,
Srikanta

On Wed, 26 Aug 2020, 12:24 pm Bastien Durel,  wrote:

> Hello,
>
> I wish to know if there is a supported way to implement some kind of
> shared counter in Ignite, where any node could increment or decrement a
> value, and which would be decremented automatically if a node leaves
> the cluster.
> I know I can use an AtomicInteger, but there would be no decrement on
> exit, I guess?
>
> Should I use a cache with  (summing all counters) and
> manually evict rows when I get a EVT_NODE_FAILED/EVT_NODE_LEFT event,
> or is there a better way ?
>
> Thanks,
>
> --
> Bastien Durel
> DATA
> Intégration des données de l'entreprise,
> Systèmes d'information décisionnels.
>
> bastien.du...@data.fr
> tel : +33 (0) 1 57 19 59 28
> fax : +33 (0) 1 57 19 59 73
> 12 avenue Raspail, 94250 GENTILLY France
> www.data.fr
>
>


Ignite's memory consumption

2020-08-26 Thread Dana Milan
Hi all Igniters,

I am trying to minimize Ignite's memory consumption on my server.

Some background:
My server has 16GB RAM, and is supposed to run applications other than
Ignite.
I use Ignite to store a cache. I use the TRANSACTIONAL_SNAPSHOT mode and I
don't use persistence (configuration file attached). To read and update the
cache I use SQL queries, through ODBC Client in C++ and through an
embedded client-mode node in C#.
My data consists of a table with 5 columns, and I guess around tens of
thousands of rows.
Ignite metrics tell me that my data takes 167MB ("CFGDataRegion region
[used=167MB, free=67.23%, comm=256MB]", This region contains mainly this
one cache).

At the beginning, when I didn't tune the JVM at all, the Apache.Ignite
process consumed around 1.6-1.9GB of RAM.
After doing some reading and research, I now use the following JVM options,
which have brought the process down to around 760MB:
-J-Xms512m
-J-Xmx512m
-J-Xmn64m
-J-XX:+UseG1GC
-J-XX:SurvivorRatio=128
-J-XX:MaxGCPauseMillis=1000
-J-XX:InitiatingHeapOccupancyPercent=40
-J-XX:+DisableExplicitGC
-J-XX:+UseStringDeduplication

Currently Ignite is up for 29 hours on my server. When I only started the
node, the Apache.Ignite process consumed around 600MB (after my data
insertion, which doesn't change much after), and as stated, now it consumes
around 760MB. I've been monitoring it every once in a while and this is not
a sudden rise, it has been rising slowly but steadily ever since the node
has started.
I used DBeaver to look into the node metrics system view, and I turned on the
garbage collector logs. The garbage collector log shows that the heap is
constantly growing, but I guess this is due to the SQL queries and their
results being stored there. (There are a few queries per second; the results
normally contain one row but can contain tens or hundreds of rows.)
After every garbage collection the heap usage is between 80-220MB, which is
in accordance with what I see under the HEAP_MEMORY_USED system view metric.
Also, I can see that NONHEAP_MEMORY_COMMITTED is around 102MB and
NONHEAP_MEMORY_USED is around 98MB.

My question is, what could be causing the constant growth in memory usage?
What else consumes memory that doesn't appear in these metrics?
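
In case it helps narrow things down, here is a minimal sketch (assuming the
Ignite 2.8 metrics API) that prints per-region off-heap usage next to JVM heap
usage, so growth can be attributed to one side or the other. Some counters may
require metricsEnabled=true on the data region configuration.

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;

public class MemoryBreakdown {
    static void printMemory(Ignite ignite) {
        // Off-heap (data region) usage as Ignite sees it.
        for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
            System.out.printf("Region %s: allocated pages=%d, physical memory=%d bytes%n",
                m.getName(), m.getTotalAllocatedPages(), m.getPhysicalMemorySize());
        }
        // JVM heap usage for comparison.
        Runtime rt = Runtime.getRuntime();
        System.out.printf("JVM heap used: %d bytes%n", rt.totalMemory() - rt.freeMemory());
    }
}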

Thanks for your help!

[The attached Ignite XML configuration file was stripped by the mailing list
archive; its contents are not recoverable here.]

Increase the indexing speed while loading the cache from an RDBMS

2020-08-26 Thread Srikanta Patanjali
Currently I'm using Apache Ignite v2.8.1 to preload a cache from the RDBMS.
There are two tables, each with 27M rows. The index is defined on a single
column of type String in the 1st table and Integer in the 2nd table. Together
the total size of the two tables is around 120GB.

The preloading process (triggered using loadCacheAsync() from within a Java
app) takes about 45 hours. The cache is persistence-enabled, and a common EBS
volume (SSD) is used for both the WAL and the other storage locations.

I'm unable to figure out where the bottleneck is.

Apart from defining separate paths for the WAL and the persistence files, is
there any other way to load the cache faster (with indexing enabled)?
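
If it is of use, here is a minimal sketch of an alternative preload path that
reads from JDBC and feeds an IgniteDataStreamer instead of loadCache. The cache
name, key/value types and SQL below are placeholders, not values from the
original setup; streaming batches writes per node and is usually faster for
bulk initial loads.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class PreloadWithStreamer {
    static void preload(Ignite ignite) throws Exception {
        try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache");
             Connection conn = DriverManager.getConnection("jdbc:postgresql://db-host/db");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, val FROM source_table")) {

            streamer.allowOverwrite(false);   // pure insert, the fastest mode
            streamer.perNodeBufferSize(1024); // tune the per-node batch size

            while (rs.next())
                streamer.addData(rs.getLong(1), rs.getString(2));
        } // closing the streamer flushes any remaining buffered entries
    }
}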


Thanks,
Srikanta


Re: Lag before records are visible after transaction commit

2020-08-26 Thread ssansoy
As an update, if I update the printBs method to also try a
cache.getAll(keys), it still exhibits the same problem (missing records):

private int printBs(long id) {
    IgniteCache<String, BinaryObject> cacheB = ignite.cache("B").withKeepBinary();

    ScanQuery<String, BinaryObject> scanQuery = new ScanQuery<>(
            (IgniteBiPredicate<String, BinaryObject>) (key, value) ->
                    value.field("PARENT_ID").equals(id));

    Set<String> keys = new HashSet<>();
    for (int i = 0; i < 100; i++) {
        keys.add("ID_" + id + "_B_" + i);
    }
    Map<String, BinaryObject> getResults = cacheB.getAll(keys);
    List<Entry<String, BinaryObject>> scanResults = cacheB.query(scanQuery).getAll();

    int scanResultsSize = scanResults.size();
    int getResultsSize = getResults.size();

    LOGGER.debug("Received {} scan results, {} getAll results",
            scanResultsSize, getResultsSize);
    return Math.max(scanResultsSize, getResultsSize);
}
}



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 2.8.1 : Server cluster node startup issue

2020-08-26 Thread akorensh
Veena,
  Ignite shouldn't be waiting for the storage SPI. It first starts all of its
components, and when an event is ready to be recorded, the storage SPI is
invoked. This scenario tests out OK.
  Send a reproducer and I'll take a look.
Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Lag before records are visible after transaction commit

2020-08-26 Thread ssansoy
Here is a reproducer for this, by the way.

Run the main class with program argument READER and again with argument
WRITER.
In the console for WRITER, press a key (this will generate an A and 100
associated Bs).
READER subscribes to A and gets the associated Bs with a scan query.
However, it takes a number of retries before all 100 arrive.
package com.testproject.server;

import java.util.Arrays;
import java.util.List;
import java.util.Scanner;
import javax.cache.Cache.Entry;
import javax.cache.CacheException;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryListenerException;
import javax.cache.event.CacheEntryUpdatedListener;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lang.IgniteAsyncCallback;
import org.apache.ignite.lang.IgniteBiPredicate;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TransactionProblem{

private static final Logger LOGGER =
LoggerFactory.getLogger(TransactionProblem.class);

private static class TestIgniteConfiguration extends IgniteConfiguration
{

public TestIgniteConfiguration(String name){
setWorkDirectory("c:\\data\\testproject\\"+name);
TcpDiscoveryVmIpFinder tcpPortConfig = new
TcpDiscoveryVmIpFinder();
tcpPortConfig.setAddresses(Arrays.asList("localhost:47500",
"localhost:47501"));
TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
discoverySpi.setIpFinder(tcpPortConfig);
setDiscoverySpi(discoverySpi);
setPeerClassLoadingEnabled(true);
}
}

private static class TestCacheConfiguration extends CacheConfiguration {
public TestCacheConfiguration(String name){
super(name);
setRebalanceMode(CacheRebalanceMode.SYNC);
setCacheMode(CacheMode.REPLICATED);
   
setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
}
}

@IgniteAsyncCallback
private static class ACallback implements
CacheEntryUpdatedListener<String, BinaryObject> {

private final Ignite ignite;

public ACallback(Ignite ignite) {
this.ignite = ignite;
}

@Override
public void onUpdated(
Iterable<CacheEntryEvent<? extends String, ? extends BinaryObject>> cacheEntryEvents)
throws CacheEntryListenerException {

cacheEntryEvents.forEach(e -> {
LOGGER.info("Continuous update: {}", e);
BinaryObject b = e.getValue();
long id = b.field("ID");
LOGGER.info("ID is {}", id);
// find the B's for this A
// keep retrying until 100 are seen
int count=0;
long start = System.currentTimeMillis();
while(count<100){
count = printBs(id);
}
long end = System.currentTimeMillis();
LOGGER.info("Took {} ms to receive all B's",
(end-start));
}
);
}

private int printBs(long id) {
IgniteCache<String, BinaryObject> cacheB = ignite.cache("B").withKeepBinary();

ScanQuery<String, BinaryObject> scanQuery = new ScanQuery<>(
(IgniteBiPredicate<String, BinaryObject>) (key, value) ->
value.field("PARENT_ID").equals(id));

cacheB.query(scanQuery);
List<Entry<String, BinaryObject>> scanResults = cacheB.query(scanQuery).getAll();
LOGGER.debug("Received {} scan results", scanResults.size());
return scanResults.size();
}
}


public static void main(String[] args){
String type = args.length>0?args[0]:"BLANK";
if(!"READER".equals(type) && !"WRITER".equals(type)){
throw new UnsupportedOperationException("Unknown option
"+type+". Choose one one READER or WRITER");
}

Ignite ignite = Ignition.start(new TestIgniteConfiguration(type));

LOGGER.info("Node was successfully started");

IgniteCache 

Re: How to solve the problem of single quotes in SQL statements?

2020-08-26 Thread 38797715

OK, solved:
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka''bul','AFG','Kabol',178);


On 2020/8/26 7:10 PM, 38797715 wrote:

Hi,

for example:

CREATE TABLE City (   ID INT(11),   Name CHAR(35),   CountryCode 
CHAR(3),   District CHAR(20),   Population INT(11),   PRIMARY KEY (ID, 
CountryCode) ) WITH "template=partitioned, backups=1, 
affinityKey=CountryCode, CACHE_NAME=City, KEY_TYPE=demo.model.CityKey, 
VALUE_TYPE=demo.model.City";


INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka'bul','AFG','Kabol',178);


The Name field's value contains a single quote.

The following forms all throw exceptions:

INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka\'bul','AFG','Kabol',178);


INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka\\'bul','AFG','Kabol',178);


INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,"Ka'bul",'AFG','Kabol',178);


I wonder if there are any other solutions besides the PreparedStatement?
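
For completeness, here is a minimal sketch of the PreparedStatement alternative
mentioned above, using the Ignite JDBC thin driver. The connection URL is an
assumption (a node listening on the default thin-driver port on localhost),
while the table and values come from the example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class InsertWithQuote {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO City(ID, Name, CountryCode, District, Population) " +
                 "VALUES (?, ?, ?, ?, ?)")) {
            ps.setInt(1, 1);
            ps.setString(2, "Ka'bul"); // no escaping needed with a bound parameter
            ps.setString(3, "AFG");
            ps.setString(4, "Kabol");
            ps.setInt(5, 178);
            ps.executeUpdate();
        }
    }
}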



Re: ignite partition mode

2020-08-26 Thread itsmeravikiran.c
By using the Ignite Web Console.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


How to solve the problem of single quotes in SQL statements?

2020-08-26 Thread 38797715

Hi,

for example:

CREATE TABLE City (   ID INT(11),   Name CHAR(35),   CountryCode 
CHAR(3),   District CHAR(20),   Population INT(11),   PRIMARY KEY (ID, 
CountryCode) ) WITH "template=partitioned, backups=1, 
affinityKey=CountryCode, CACHE_NAME=City, KEY_TYPE=demo.model.CityKey, 
VALUE_TYPE=demo.model.City";


INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka'bul','AFG','Kabol',178);


The Name field's value contains a single quote.

The following forms all throw exceptions:

INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka\'bul','AFG','Kabol',178);


INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka\\'bul','AFG','Kabol',178);


INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,"Ka'bul",'AFG','Kabol',178);


I wonder if there are any other solutions besides the PreparedStatement?




Shared counter

2020-08-26 Thread Bastien Durel
Hello,

I wish to know if there is a supported way to implement some kind of
shared counter in Ignite, where any node could increment or decrement a
value, and which would be decremented automatically if a node leaves
the cluster.
I know I can use an AtomicInteger, but there would be no decrement on
exit, I guess?

Should I use a cache with  (summing all counters) and
manually evict rows when I get a EVT_NODE_FAILED/EVT_NODE_LEFT event,
or is there a better way ?
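
For what it's worth, here is a minimal sketch of the cache-plus-events approach
described above, with one counter entry per node keyed by node id. All names
are illustrative, and EVT_NODE_LEFT / EVT_NODE_FAILED must be enabled via
IgniteConfiguration.setIncludeEventTypes(...) for the listener to fire.

import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class PerNodeCounter {
    static void setup(Ignite ignite) {
        IgniteCache<UUID, Long> counters = ignite.getOrCreateCache("nodeCounters");

        // Each node maintains its own slice of the counter under its node id;
        // the global value is the sum of all entries.
        UUID localId = ignite.cluster().localNode().id();
        counters.put(localId, 0L);

        // Drop a node's slice when it leaves or fails.
        IgnitePredicate<Event> onNodeGone = evt -> {
            DiscoveryEvent de = (DiscoveryEvent) evt;
            counters.remove(de.eventNode().id());
            return true; // keep listening
        };
        ignite.events().localListen(onNodeGone, EventType.EVT_NODE_LEFT,
            EventType.EVT_NODE_FAILED);
    }
}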

Thanks,

-- 
Bastien Durel
DATA
Intégration des données de l'entreprise,
Systèmes d'information décisionnels.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
12 avenue Raspail, 94250 GENTILLY France
www.data.fr



Re: 2.8.1 : Server cluster node startup issue

2020-08-26 Thread VeenaMithare
Thanks Alex,
Will try and see if I can create a reproducer.

>>If Ignite hasn't fully started, it will wait( U.awaitQuiet(startLatch); )
until it has, before returning the instance.

Does Ignition.start() wait for the storage SPI to return from record() before
it completes startup? Since the storage SPI is waiting on startLatch and the
main thread is waiting for the storage SPI to return, is there a deadlock?

regards,
Veena.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/