Re: Blocked system-critical thread has been detected - After upgrade to 2.8.1

2020-07-07 Thread Manu
Hi Alex

We can't share GC logs as they contain sensitive data. We worked around it by
creating a new data cluster with persistence enabled and moving the data from
the problematic cluster to the new one.

As far as we can see, the problem seems to be in the checkpoint process: for
some unknown reason (maybe related to the migration from 2.7.6 to 2.8.1 and
the new changes in persistence management) the checkpoint thread is blocked.

Anyway, we will keep an eye on this topic.

Thank you very much, greetings!




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Blocked system-critical thread has been detected - After upgrade to 2.8.1

2020-07-01 Thread Manu
Hi Alex, thanks so much.

We have reduced the topology to the picture below (1 server node and 3 clients).

- 1 Ignite server node: IMDB with persistence enabled
- 3 Ignite client nodes: for SQL query, messaging (topic, queue) and
countdown latches.

All pluggable elements (TOPIC listener and QUEUE listener) are online.

This topology works perfectly with 2.7.6, but not with 2.8.1...

We also detected that the failure (blocked thread) occurs when the pluggable
modules are online (green lines and blocks) and we make only 1 request (i.e.,
not under heavy load).

[Inline topology image not preserved by the archive.]





Blocked system-critical thread has been detected - After upgrade to 2.8.1

2020-06-30 Thread Manu
Hi!

We had been working with Ignite 2.7.6 without incidents; since upgrading to
2.8.1 (same machine, same resources) we are getting "Blocked system-critical
thread" errors, and the Ignite server nodes stop responding.

We have noticed that after several hours (about 8 or 9) it recovers by
itself, but after some queries, creating countdown latches, queues and topics
stops working again.

We have tried modifying the number of threads and timeouts, without success.

Any idea?

Thanks!!

logs-from-ignite-server-data-in-ignite-server-data-0-7.txt

  





Re: Critical System error Detected

2020-05-15 Thread Manu
Try this; you just need the multicast group (it must be the same on clients and
servers within the same cluster):

[XML configuration stripped by the mailing-list archive; only the multicast
group name "PIE" survives.]




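Since the XML did not survive the archive, here is a hedged Java equivalent of a multicast-based discovery setup; the group address and class name below are examples, not values from the original post:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder;

public class MulticastDiscovery {
    public static void main(String[] args) {
        // The multicast group must be identical on all nodes of the cluster.
        TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
        ipFinder.setMulticastGroup("228.10.10.157"); // example value

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        spi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(spi);

        Ignition.start(cfg);
    }
}
```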



Re: NATIVE PERSISTENCE: Cache data is destroyed after disable WAL and restart

2019-05-04 Thread Manu
Thanks Denis!

Regards.

Manu.





NATIVE PERSISTENCE: Cache data is destroyed after disable WAL and restart

2019-04-28 Thread Manu
Hi! 

I have a question: is it normal that, if the WAL is deactivated for a
persisted cache, the persisted content of the cache is completely destroyed
when the server node(s) restart?

I need to disable the WAL for large, heavy ingestion processes, but the
ingestion may eventually fail (OS or machine crash), so the WAL state is not
re-enabled. In this situation, if I restart a server node, the cache's
persistence directory is deleted and recreated, so the data is lost.

Thanks! 

This is the method that does this hellish thing:
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.beforeCacheGroupStart
 

Process to reproduce it:

1. Start one or more server nodes with native persistence enabled
2. Create a cache (natively persisted) and store some data
3. Disable WAL for the cache: ignite.cluster().disableWal("TheCacheName")
4. Restart the server node(s)
5. Check that the cache directory was deleted and recreated; all data is lost.
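The reproduction steps above can be sketched as follows (a minimal sketch; the cache name is taken from the post, everything else is an assumption):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class WalDisableRepro {
    public static void main(String[] args) {
        // Step 1: start a server node with native persistence enabled.
        IgniteConfiguration cfg = new IgniteConfiguration();
        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storage);

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cluster().active(true);

            // Step 2: create a persisted cache and store some data.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("TheCacheName");
            for (int i = 0; i < 100; i++)
                cache.put(i, "value-" + i);

            // Step 3: disable WAL for the cache.
            ignite.cluster().disableWal("TheCacheName");
        }
        // Steps 4-5: restarting the node with WAL still disabled is what,
        // per the report, deletes and recreates the cache directory.
    }
}
```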

Call stack on server node start:
*org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.beforeCacheGroupStart*
org.apache.ignite.internal.processors.cache.ClusterCachesInfo.registerCacheGroup
org.apache.ignite.internal.processors.cache.ClusterCachesInfo.registerNewCache
org.apache.ignite.internal.processors.cache.ClusterCachesInfo.processJoiningNode
org.apache.ignite.internal.processors.cache.ClusterCachesInfo.onStart
*org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCachesOnStart*
org.apache.ignite.internal.processors.cache.GridCacheProcessor.onReadyForRead
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.notifyMetastorageReadyForRead
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readMetastore
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.notifyMetaStorageSubscribersOnReadyForRead
org.apache.ignite.internal.IgniteKernal.start
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start
org.apache.ignite.internal.IgnitionEx.start0
org.apache.ignite.internal.IgnitionEx.startConfigurations
org.apache.ignite.internal.IgnitionEx.start
org.apache.ignite.internal.IgnitionEx.start
org.apache.ignite.internal.IgnitionEx.start
org.apache.ignite.internal.IgnitionEx.start
org.apache.ignite.Ignition.start
org.apache.ignite.startup.cmdline.CommandLineStartup.main

Ignite version 2.7.0





Re: Is Pagination not supported in Ignite Spring data?

2019-02-16 Thread Manu
Hi! 

Hawkore's team has partially reimplemented Ignite's spring-data 2.0 module
to provide full support for (dynamic) projections, Page responses, SpEL...
Until the changes are approved by the Ignite community you can use it (it uses
Spring Data 5.1.4.RELEASE, compatible with Spring Boot 2.1.4.RELEASE).

Take a look at https://github.com/hawkore/ignite-hk/modules/spring-data-2.0





Re: How to use multiple schemas in ignite?

2019-02-16 Thread Manu
Hi!

“When the CREATE TABLE command is executed, the name of the cache is
generated with the following format- SQL_{SCHEMA_NAME}_{TABLE}. Use the
CACHE_NAME parameter to override the default name.” 

So if you want to create a table under a specific schema, use CREATE TABLE
"SCHEMA_NAME".TABLE ... via the SQL fields query REST API operation
(https://apacheignite.readme.io/docs/rest-api#sql-fields-query-execute);
please note that the operation's parameter cacheName is only used as the base
schema for statement execution, so you can set it to any available cache.

Refer to https://apacheignite-sql.readme.io/docs/create-table for more info
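For illustration only (this helper is not an Ignite API), the default cache-name format quoted above works out to:

```java
public class DefaultCacheName {
    /** Default cache name generated by CREATE TABLE: SQL_{SCHEMA_NAME}_{TABLE}. */
    static String defaultCacheName(String schemaName, String table) {
        return "SQL_" + schemaName + "_" + table;
    }

    public static void main(String[] args) {
        // e.g. CREATE TABLE "MYSCHEMA".CITY(...) yields cache "SQL_MYSCHEMA_CITY"
        System.out.println(defaultCacheName("MYSCHEMA", "CITY"));
    }
}
```

Remember that the CACHE_NAME parameter of CREATE TABLE overrides this default.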





Re: Baseline Nodes not able to join the cluster

2019-01-13 Thread Manu
Hi!
You need one work directory (not only WAL) for each server node.

If you have persistence enabled, once the server nodes start you need to
activate the cluster: ignite.cluster().active(true). This creates the cluster
baseline topology. Please note that once the cluster is activated, if you add
a new server node you need to update the baseline with the new server node's
id to make the new node join the cluster.

Take a look at https://apacheignite.readme.io/v2.5.0/docs/cluster-activation
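A minimal sketch of activation plus a baseline update (the config file name is an assumption):

```java
import java.util.ArrayList;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class BaselineUpdate {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("server-config.xml"); // assumed config file

        // First activation: creates the baseline from the current server nodes.
        if (!ignite.cluster().active())
            ignite.cluster().active(true);

        // After a NEW server node has joined, reset the baseline to the
        // current set of server nodes so the newcomer is included:
        ignite.cluster().setBaselineTopology(
            new ArrayList<>(ignite.cluster().forServers().nodes()));
    }
}
```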







Re: Baseline Nodes not able to join the cluster

2019-01-12 Thread Manu
Hi! Could you try to configure a different work directory per node?





Re: Text Query via SQL or REST API?

2019-01-10 Thread Manu
Hi! Take a look at
https://github.com/hawkore/examples-apache-ignite-extensions/ — they have
implemented a solution for persisted Lucene indexes that supports SQL
searching.






Re: How to detect sql schema changes and make update

2019-01-10 Thread Manu
Hi! Take a look at
https://github.com/hawkore/examples-apache-ignite-extensions/ — they have
implemented a solution to detect changes on query entities and propagate the
changes over the cluster (fields, indexes and re-indexation).





Re: Graph Query Integration

2019-01-09 Thread Manu
Hi! Take a look at
https://github.com/hawkore/examples-apache-ignite-extensions/ — they have
implemented a solution for persisted Lucene and spatial indexes.





Re: Text Query question

2019-01-09 Thread Manu
Hi! Take a look at
https://github.com/hawkore/examples-apache-ignite-extensions/ — they have
implemented a solution for persisted Lucene and spatial indexes.





Lucene CorruptIndexException (checksum failed) on GridLuceneIndex - suggested patch

2018-04-05 Thread Manu
Hi,

*GridLuceneOutputStream* has a bug in its */copyBytes/* method, and
*GridLuceneInputStream* in its */readBytes/* method for direct calls from
GridLuceneOutputStream.

In both methods the internal GridLuceneOutputStream CRC is not updated, so we
get /org.apache.lucene.index.CorruptIndexException: checksum failed
(hardware problem?) [...]/ when use of the Lucene index is intensive and
Lucene internally tries to merge it.

Suggested patch to fix CorruptIndexException on GridLuceneIndex
<http://apache-ignite-users.70518.x6.nabble.com/file/t547/FIX-IGNITE-LUCENE-STREAM-CRC.patch>
  

Hope it helps!!

Bye!

Manu





Re: Ignite 2.3 - replicated cache lost data after restart cluster nodes with persistence enabled

2017-10-20 Thread Manu
Hi,

after the restart, the data seems not to be consistent.

We waited until rebalance was fully completed before restarting the cluster,
to check whether durable-memory data rebalance works correctly and SQL
queries still work.
Another question (it's not this case): what happens if one cluster node
crashes in the middle of the rebalance process?

Thanks!





Ignite 2.3 - replicated cache lost data after restart cluster nodes with persistence enabled

2017-10-20 Thread Manu
To reproduce: 
1. Create a replicated cache with multiple indexed types, with some indexes
2. Start the first server node
3. Insert data into the cache (100 entries)
4. Start a second server node

At this point all seems OK; the data appears to be successfully rebalanced,
judging by SQL queries (count(*)).

5. Stop the server nodes
6. Restart the server nodes
7. SQL queries (count(*)) now return less data





Re: Affinity key does not seem to be working

2017-06-22 Thread Manu
AffinityKeyMapped is only processed on cache keys, not on cache values.

Try cache.put(keyEntityWithAffinityKeyMappedAnnotation, value)
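For illustration, a hypothetical key class (the name and fields are assumptions, not from the original code) — the annotation must live on a field of the KEY:

```java
import java.io.Serializable;
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

// Affinity is derived from the clientId field of the KEY, so all entries
// sharing a clientId land on the same node.
public class TestItemKey implements Serializable {
    private static final long serialVersionUID = 1L;

    private final int id;

    @AffinityKeyMapped
    private final String clientId;

    public TestItemKey(int id, String clientId) {
        this.id = id;
        this.clientId = clientId;
    }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof TestItemKey)) return false;
        TestItemKey k = (TestItemKey) o;
        return id == k.id && clientId.equals(k.clientId);
    }

    @Override public int hashCode() {
        return 31 * id + clientId.hashCode();
    }
}
```

Usage would then be cache.put(new TestItemKey(i, "testId"), item) instead of keying by a plain Integer.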


On 22 Jun 2017, at 13:16, tuco.ramirez [via Apache Ignite Users]
wrote:

Hi,

I have a simple use case, but affinity key does not seem to be working.
The AffinityKey is placed on clientId, which is the same for everyone, so all
the data should go to one node.
However, Ignite Visor shows that the data is different on each node, with
each node having 3000+ entries.
Using 1.9.0 also leads to the same behavior.

Below is the code.

TestItem Class


import java.io.Serializable;

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class TestItem implements Serializable {

  private static final long serialVersionUID = 1L;

  @AffinityKeyMapped
private String clientId;

  private int counter;

public String getClientId() {
return clientId;
}

public void setClientId(String clientId) {
this.clientId = clientId;
}

public int getCounter() {
return counter;
}

public void setCounter(int counter) {
this.counter = counter;
}



public TestItem(String clientId, int counter) {
super();
this.clientId = clientId;
this.counter = counter;
}

}


import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class CreateCache {
    public static void main(String[] args) throws Exception {
        Ignition.setClientMode(true);

        IgniteConfiguration conf = new IgniteConfiguration();
        conf.setPeerClassLoadingEnabled(true);
        TcpDiscoverySpi discovery = new TcpDiscoverySpi();

        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();

        ipFinder.setAddresses(Arrays.asList("ip1", "ip2", "ip3"));

        discovery.setIpFinder(ipFinder);

        conf.setDiscoverySpi(discovery);
        Ignite ignite = Ignition.start(conf);

        ignite.getOrCreateCache("TESTITEMCACHE").destroy();
        pushItems(ignite);

        Ignition.stop(true);
    }

    private static void pushItems(Ignite ignite) {
        CacheConfiguration<Integer, TestItem> itemCfg = new CacheConfiguration<>("TESTITEMCACHE");
        itemCfg.setCacheMode(CacheMode.PARTITIONED);
        itemCfg.setIndexedTypes(Integer.class, TestItem.class);

        IgniteCache<Integer, TestItem> skuCache = ignite.createCache(itemCfg);

        System.out.println("putting data");
        long t1 = System.currentTimeMillis();
        Map<Integer, TestItem> skuMap = new HashMap<>();

        for (int i = 0; i < 1; i++) {
            TestItem item = new TestItem("testId", i);
            skuMap.put(i, item);
        }
        System.out.println(" sku map size " + skuMap.size());
        skuCache.putAll(skuMap);

        long t2 = System.currentTimeMillis();
        System.out.println("put data in ms " + (t2 - t1));

        Ignition.stop(false);
    }
}



Output of ignite visor


visor> cache -a
Time of the snapshot: 06/22/17, 16:42:16
+===================================================================================================================+
| Name(@)            | Mode        | Nodes | Entries (Heap / Off-heap) | Hits      | Misses    | Reads     | Writes    |
+===================================================================================================================+
| TESTITEMCACHE(@c0) | PARTITIONED | 3     | min: 3096 (3096 / 0)      | min: 0    | min: 0    | min: 0    | min: 0    |
|                    |             |       | avg: .33 (.33 / 0.00)     | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |
|                    |             |       | max: 3496 (3496 / 0)      | max: 0    | max: 0    | max: 0    | max: 0    |
+-------------------------------------------------------------------------------------------------------------------+

Cache 'TESTITEMCACHE(@c0)':
+--------------------------------------------+
| Name(@)                | TESTITEMCACHE(@c0)|
| Nodes                  | 3                 |
| Total size Min/Avg/Max | 3096 / .33 / 3496 |
| Heap size [message truncated in the archive]

Re: How to continuously subscribe for event?

2016-10-24 Thread Manu
Hi,

You need to return true from the apply method to continuously listen.
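A minimal sketch (the event type chosen here is an example):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class ContinuousEventListener {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        IgnitePredicate<Event> listener = evt -> {
            System.out.println("Got event: " + evt.name());
            return true; // true = keep listening; false would unregister the listener
        };

        // Note: the event type must also be enabled via
        // IgniteConfiguration#setIncludeEventTypes.
        ignite.events().localListen(listener, EventType.EVT_CACHE_OBJECT_PUT);
    }
}
```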



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-continuously-subscribe-for-event-tp8438p8442.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Jdbc connection

2016-10-24 Thread Manu
If you use the Ignite JDBC driver, to ensure you always get a valid Ignite
instance before calling an Ignite operation, I recommend using a DataSource
implementation that validates the connection before calls and creates new
ones otherwise.

For common operations with an Ignite instance, I use this method to ensure a
*good* Ignite instance without waits or reconnection handling... maybe
there are some other mechanisms... but who cares? ;)

public Ignite getIgnite() {
    if (this.ignite != null) {
        try {
            // Ensure this Ignite instance is STARTED and connected.
            this.ignite.getOrCreateCache("default");
        } catch (IllegalStateException e) {
            this.ignite = null;
        } catch (IgniteClientDisconnectedException cause) {
            this.ignite = null;
        } catch (CacheException e) {
            if (e.getCause() instanceof IgniteClientDisconnectedException) {
                this.ignite = null;
            } else if (e.getCause() instanceof IgniteClientDisconnectedCheckedException) {
                this.ignite = null;
            } else {
                throw e;
            }
        }
    }
    if (this.ignite == null) {
        this.createIgniteInstance();
    }
    return ignite;
}

You can also wait for reconnection using this catch block instead of the one
above... but as I said... who cares?... sometimes reconnection waits are not
desirable...
[...]
try {
    // Ensure this Ignite instance is STARTED and connected.
    this.ignite.getOrCreateCache("default");
} catch (IllegalStateException e) {
    this.ignite = null;
} catch (IgniteClientDisconnectedException cause) {
    LOG.warn("Client disconnected from cluster. Waiting for reconnect...");
    cause.reconnectFuture().get(); // Wait for reconnect.
} catch (CacheException e) {
    if (e.getCause() instanceof IgniteClientDisconnectedException) {
        LOG.warn("Client disconnected from cluster. Waiting for reconnect...");
        IgniteClientDisconnectedException cause =
            (IgniteClientDisconnectedException) e.getCause();
        cause.reconnectFuture().get(); // Wait for reconnect.
    } else if (e.getCause() instanceof IgniteClientDisconnectedCheckedException) {
        LOG.warn("Client disconnected from cluster. Waiting for reconnect...");
        IgniteClientDisconnectedCheckedException cause =
            (IgniteClientDisconnectedCheckedException) e.getCause();
        cause.reconnectFuture().get(); // Wait for reconnect.
    } else {
        throw e;
    }
}
[...]



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Jdbc-connection-tp8431p8441.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Jdbc connection

2016-10-24 Thread Manu
You are right: if the connection is closed due to cluster *client* node
disconnection, the client will automatically recreate the connection using
the discovery configuration. Pooling is also supported, but N pooled
instances of org.apache.ignite.internal.jdbc2.JdbcConnection for the same URL
on the same Java VM will use the same single Ignite instance...



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Jdbc-connection-tp8431p8440.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Jdbc connection

2016-10-24 Thread Manu
Hi,

As you know, org.apache.ignite.internal.jdbc2.JdbcConnection is an
implementation of java.sql.Connection; it always works in client mode (this
flag is hardcoded to true when the XML configuration passed in the connection
URL is loaded) and in read mode (only SELECT). On the same Java VM instance,
the connection (Ignite instance) is cached internally in JdbcConnection by
URL, so for the same connection (type, path, collocation...) you only have
(and need) one Ignite instance. For more info check
https://apacheignite.readme.io/docs/jdbc-driver

As a java.sql.Connection, you could use a javax.sql.DataSource implementation
to manage it and check connection status (validation query), etc., but you
don't need a pool. For example:

[DataSource XML configuration stripped by the mailing-list archive.]
[...]
This is the client Ignite configuration with a default cache (a dummy one,
without data, only used to validate the client connection) used in the URL of
collocatedDbcpIgniteDataGridDataSource:

[XML configuration stripped by the mailing-list archive; only fragments
survive, including a cache named "default" and java.lang.String indexed
types.]
[...]
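A hedged sketch of plain JDBC usage against such a client configuration (the config path and the Person table are assumptions):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IgniteJdbcExample {
    public static void main(String[] args) throws Exception {
        // Register the Ignite JDBC driver (the "cfg" form that starts a
        // client node from an XML configuration).
        Class.forName("org.apache.ignite.IgniteJdbcDriver");

        // The "cache=default" part names the cache used for validation,
        // matching the dummy cache described above; the file path is assumed.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:cfg://cache=default@file:///path/to/client-config.xml");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name FROM Person")) {
            while (rs.next())
                System.out.println(rs.getString(1));
        }
    }
}
```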



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Jdbc-connection-tp8431p8436.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Data Streamer

2016-10-22 Thread Manu
Hi,

You are creating a new data streamer on each loop call...
[...]
for (int i = 0; i < 100; i++) {
    // CacheManager.getInstallBaseCache().put(name + "-" + i, new TestPojo());
    CacheManager.getInstance().dataStreamer(CACHE).addData(name + "-" + i, new TestPojo());
}
[...]

Ignite does this every time you call the dataStreamer(cacheName) method...
[...]
/**
 * @param cacheName Cache name ({@code null} for default cache).
 * @return Data loader.
 */
public DataStreamerImpl dataStreamer(@Nullable String cacheName) {
    if (!busyLock.enterBusy())
        throw new IllegalStateException("Failed to create data streamer (grid is stopping).");

    try {
        final DataStreamerImpl ldr = new DataStreamerImpl<>(ctx, cacheName, flushQ);

        ldrs.add(ldr);
[...]

So try to create the data streamer instance only once:
[...]
*IgniteDataStreamer stream = CacheManager.getInstance().dataStreamer(CACHE);*

for (int i = 0; i < 100; i++) {
    stream.addData(name + "-" + i, new TestPojo());
}
[...]

Another improvement is to send the data in a buffered fashion, so you reduce
calls to the cluster... try stream.addData(data); // where data is a buffered
Map of entries
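Putting the advice together, a minimal end-to-end sketch (the cache name and the TestPojo stub are placeholders):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerExample {
    static class TestPojo { } // stand-in for the poster's value class

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("TESTCACHE");

            // One streamer for the whole load; closing it flushes
            // any remaining buffered entries to the cluster.
            try (IgniteDataStreamer<String, TestPojo> streamer =
                     ignite.dataStreamer("TESTCACHE")) {
                for (int i = 0; i < 100; i++)
                    streamer.addData("name-" + i, new TestPojo());
            }
        }
    }
}
```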



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Data-Streamer-tp8409p8427.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Can't increase the speed of loadCache() when increasing more Ignite node

2016-10-18 Thread Manu
Of course, it's not trivial... and changes to the database are required (a
new field on the primary table (better), or a new "extended partition table"
with a 1-to-1 relationship to the primary table (primary table id,
partitionId)), but using a CacheStoreAdapter implementation it's not that
complex. I would do:

1. Override the "write" method in your CacheStoreAdapter implementation to
ensure new entries get the proper partition when they are written to the
database: with ignite.affinity(cacheName).partition(entryKey), implement the
insert/update of the new partition field on the primary table or on the
"extended partition table" (you will need to do 2 inserts on write: one on
the primary table with the entity and a second one on the extended table with
the partitionId).
2. Override loadCache(IgniteBiInClosure clo, Object... args) in your
CacheStoreAdapter implementation with a flag argument to allow loading the
cache in full-scan mode or in partition mode (using the partitionId field
created on the primary table, or a join with the "extended partition table").
3. Call cache.load(fullScanFlag) for full-scan mode.
4. Once loaded, iterate the cache and put each entry back to force a re-write
with the correct partition.
5. Now the table (primary or "partition table") is updated with the correct
partitions.
6. From now on you can call cache.load(null) (for convenience, partition mode
by default without params) and each node will load its own partitioned data.

Take a look at Affinity Collocation
(https://apacheignite.readme.io/docs/affinity-collocation) to improve
performance, and at other important recommendations for SQL joins with
partitioned caches.
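A hedged sketch of steps 1 and 2 above. The table PERSONS, the column PARTITION_ID, the cache name "persons", the MERGE syntax, and the String value type are all assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;
import javax.sql.DataSource;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.lang.IgniteBiInClosure;

public class PartitionAwarePersonStore extends CacheStoreAdapter<Long, String> {
    private final Ignite ignite;
    private final DataSource ds;

    public PartitionAwarePersonStore(Ignite ignite, DataSource ds) {
        this.ignite = ignite;
        this.ds = ds;
    }

    /** Step 1: persist the entry's partition id alongside the entity. */
    @Override public void write(Cache.Entry<? extends Long, ? extends String> e) {
        int part = ignite.affinity("persons").partition(e.getKey());
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement(
                 "MERGE INTO PERSONS (ID, NAME, PARTITION_ID) VALUES (?, ?, ?)")) {
            ps.setLong(1, e.getKey());
            ps.setString(2, e.getValue());
            ps.setInt(3, part);
            ps.executeUpdate();
        } catch (SQLException ex) {
            throw new CacheWriterException(ex);
        }
    }

    /** Step 2: full-scan mode when args[0] == TRUE, partition mode otherwise. */
    @Override public void loadCache(IgniteBiInClosure<Long, String> clo, Object... args) {
        boolean fullScan = args != null && args.length > 0 && Boolean.TRUE.equals(args[0]);
        try (Connection c = ds.getConnection()) {
            if (fullScan) {
                try (PreparedStatement ps = c.prepareStatement("SELECT ID, NAME FROM PERSONS");
                     ResultSet rs = ps.executeQuery()) {
                    while (rs.next())
                        clo.apply(rs.getLong(1), rs.getString(2));
                }
            } else {
                // Each node loads only the partitions it owns (step 6).
                for (int part : ignite.affinity("persons")
                        .primaryPartitions(ignite.cluster().localNode())) {
                    try (PreparedStatement ps = c.prepareStatement(
                             "SELECT ID, NAME FROM PERSONS WHERE PARTITION_ID = ?")) {
                        ps.setInt(1, part);
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next())
                                clo.apply(rs.getLong(1), rs.getString(2));
                        }
                    }
                }
            }
        } catch (SQLException ex) {
            throw new CacheLoaderException(ex);
        }
    }

    @Override public String load(Long key) { return null; /* omitted for brevity */ }
    @Override public void delete(Object key) { /* omitted for brevity */ }
}
```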



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Can-t-increase-the-speed-of-loadCache-when-increasing-more-Ignite-node-tp8336p8347.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Can't increase the speed of loadCache() when increasing more Ignite node

2016-10-18 Thread Manu
Have you tried partitioning your data? It's pretty simple: by adding a field
(integer partitionId) to your table, each node will load only its own
partitions. You can see an example here:
http://apacheignite.gridgain.org/docs/data-loading#section-partition-aware-data-loading



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Can-t-increase-the-speed-of-loadCache-when-increasing-more-Ignite-node-tp8336p8340.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: SQL Queries - propagate "new" CacheConfiguration.queryEntities over the cluster on an already started cache

2016-06-23 Thread Manu
Done. 

Time to  :P



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SQL-Queries-propagate-new-CacheConfiguration-queryEntities-over-the-cluster-on-an-already-started-cae-tp5802p5851.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: SQL Queries - propagate "new" CacheConfiguration.queryEntities over the cluster on an already started cache

2016-06-23 Thread Manu
Thanks!

I almost have the change: queryEntities changes are propagated to the H2
tables and index tree over the cluster... preserving old indexes. I'll let
you know when it's done... working on version 1.6.0.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SQL-Queries-propagate-new-CacheConfiguration-queryEntities-over-the-cluster-on-an-already-started-cae-tp5802p5834.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.