Re: Native persistence and upgrading

2020-12-14 Thread xero
Thanks for the confirmation, Stanislav. Is this compatibility between minor
versions a commitment from the team, or is it specific to the versions I
mentioned?
If this is a backward-compatibility guarantee that you intend to maintain, I
think it's worth mentioning in the documentation, because it is a great
feature.

Thank you



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Native persistence and upgrading

2020-12-10 Thread xero
Hi Dimitry and community,
Is this still true? My intention is to do it between versions 2.7.6 and
2.8.1/2.9. Basically, I want to update only the docker image, keeping the
volumes, so that I can recover the persisted data. I couldn't find
documentation on this topic.

On the other hand, release 2.9 introduced Cluster Snapshots. Are these
snapshots version-agnostic, or are there considerations about which versions
are compatible with a created snapshot? Could this be an alternative way to
solve my issue (upgrading without losing the persisted data)?
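In case it helps frame the question, the 2.9 snapshot API itself is a one-liner. A minimal sketch, assuming a node already started with persistence enabled (the snapshot name is arbitrary):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class SnapshotSketch {
    public static void main(String[] args) {
        // Assumes a default configuration with persistence enabled.
        try (Ignite ignite = Ignition.start()) {
            // Blocks until the snapshot completes cluster-wide.
            ignite.snapshot().createSnapshot("backup_before_upgrade").get();
        }
    }
}
```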

Thanks in advance for the time.





Re: client hangs forever trying to join the cluster (ClientImp joinLatch.await())

2020-10-27 Thread xero
Hi, 
Thanks for the recommendations.

In this case, neither the server nor the client showed memory issues (heap and
available memory in the container). The GC pauses were very short too.

The configured timeouts are the defaults:
clientFailureDetectionTimeout = 30000
failureDetectionTimeout = 10000

The latch was not notified for more than 24 hours, and the timeout is 30
seconds. How can this explain the node hanging for a day? That's why I was
thinking about a message that got lost.

Do you think that using different values for those parameters would avoid
this scenario?
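For reference, a sketch of raising both timeouts on the IgniteConfiguration; the values here are illustrative, not a verified fix:

```java
import org.apache.ignite.configuration.IgniteConfiguration;

public class TimeoutConfig {
    public static IgniteConfiguration configure() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Default is 30000 ms; used by servers to check client nodes.
        cfg.setClientFailureDetectionTimeout(60_000);

        // Default is 10000 ms; used between server nodes.
        cfg.setFailureDetectionTimeout(30_000);

        return cfg;
    }
}
```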





client hangs forever trying to join the cluster (ClientImp joinLatch.await())

2020-10-27 Thread xero
Hello,
We recently had a production incident in which our application got stuck
connecting to the cluster. The *IgnitionEx start0* method was blocked for
more than 24 hours waiting for that latch to be notified, but that never
happened. Finally, the container was restarted to recover the service.

This is the stack trace of that thread:

 


This happened close to an Ignite server node restart due to SEGMENTATION.
These are some lines I extracted from the logs of that server that may be
relevant (not sure, though).

/2020-10-22T13:33:03.348+00:00 a5912bf99152 ignite:
tcp-disco-msg-worker-#2|WARN |o.a.i.s.d.tcp.TcpDiscoverySpi|Node is out of
topology (probably, due to short-time network problems).

2020-10-22T13:33:03.349+00:00 a5912bf99152 ignite:
disco-event-worker-#66|WARN |o.a.i.i.m.d.GridDiscoveryManager|Local node
SEGMENTED: TcpDiscoveryNode [id=2296e9a7-96d6-44d9-af3b-4e22e33261ea,
addrs=[10.133.3.6, 127.0.0.1], sockAddrs=[/127.0.0.1:47500,
a5912bf99152/10.133.3.6:47500], discPort=47500, order=276, intOrder=142,
lastExchangeTime=1603373583342, loc=true, ver=2.7.6#20190911-sha1:21f7ca41,
isClient=false]

2020-10-22T13:33:04.232+00:00 a5912bf99152 ignite:
node-stopper|ERROR|ROOT|Stopping local node on Ignite failure:
[failureCtx=FailureContext [type=SEGMENTATION, err=null]]

2020-10-22T13:33:09.312+00:00 a5912bf99152 ignite: exchange-worker-#67|INFO
|o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture|Coordinator changed, send
partitions to new coordinator [ver=AffinityTopologyVersion [topVer=284,
minorTopVer=0], crd=6293444a-0f6d-4946-b357-85a6d195a244,
newCrd=ad701f62-28ee-4028-8981-8a19dd5de1f8]

2020-10-22T13:33:09.313+00:00 a5912bf99152 ignite: exchange-worker-#67|INFO
|o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture|Coordinator failed, node
is new coordinator [ver=AffinityTopologyVersion [topVer=284, minorTopVer=0],
prev=ad701f62-28ee-4028-8981-8a19dd5de1f8
]/


During those 24 hours there were hundreds of messages about
SYSTEM_WORKER_BLOCKED, but that event is ignored by the failure handler:

/2020-10-22 06:33:12.732 PDT [grid-timeout-worker-#119]  ERROR root 
  
-Critical system error detected. Will be handled accordingly to configured
handler [hnd=ExpressIgnitionFailureHandler [], failureCtx=FailureContext
[type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker
[name=tcp-client-disco-msg-worker, igniteInstanceName=null, finished=false,
heartbeatTs=1603373580648]]]/


Based on the logs, it seems there was a network glitch during that interval,
at the same time the client was trying to join the cluster.
Do you think these events can be related to the blocked start0 method? Is it
possible that the glitch/coordinator change caused the join request/response
to be lost, making that latch block forever?

Any suggestions for handling this case? (Is there any 2.8.1 or 2.9 change
that may apply?)
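One mitigation sketch, assuming nothing about your setup beyond TCP discovery: bounding the join with TcpDiscoverySpi.setJoinTimeout, so the client fails with an exception instead of blocking on the join latch indefinitely. The 60-second value is illustrative:

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class JoinTimeoutConfig {
    public static IgniteConfiguration clientConfig() {
        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        // Default is 0 (wait forever); a positive value aborts the join
        // attempt with an exception after the given time.
        spi.setJoinTimeout(60_000);

        cfg.setDiscoverySpi(spi);
        return cfg;
    }
}
```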
Thanks for your time.


Re: Node is unable to join cluster because it has destroyed caches

2020-06-03 Thread xero
Hi,
I tried your suggestion of using a NodeFilter, but it is not solving the
issue. Using a NodeFilter by consistent id to create the cache on only one
node still creates persistence information on every node:

In the node for which the filter is true (directory size 75MB):
//work/db/node01-0da087c4-c11a-47ce-ad53-0380f0d2c51a//cache-tempBCK0-cd982aa5-c27f-4582-8a3b-b34c5c60a49c/

In the node for which the filter is false  (directory size 8k):
//work/db/node01-0da087c4-c11a-47ce-ad53-0380f0d2c51a//cache-tempBCK0-cd982aa5-c27f-4582-8a3b-b34c5c60a49c/

If the cache is destroyed while *any* of these nodes is down, the node won't
join the cluster again, throwing the exception:
/Caused by: class org.apache.ignite.spi.IgniteSpiException: Joining node has
caches with data which are not presented on cluster, it could mean that they
were already destroyed, to add the node to cluster - remove directories with
the caches[tempBCK0-cd982aa5-c27f-4582-8a3b-b34c5c60a49c]/
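For completeness, this is roughly the node-filter setup being described — a sketch, not a fix for the rejoin problem; "node-01" is a placeholder consistent id:

```java
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.lang.IgnitePredicate;

public class FilteredCacheConfig {
    public static CacheConfiguration<Object, Object> config() {
        CacheConfiguration<Object, Object> ccfg =
            new CacheConfiguration<>("tempBCK0");

        // Cache is deployed only on the node with the matching consistent id.
        ccfg.setNodeFilter((IgnitePredicate<ClusterNode>) node ->
            "node-01".equals(String.valueOf(node.consistentId())));

        return ccfg;
    }
}
```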





Re: Node is unable to join cluster because it has destroyed caches

2020-06-02 Thread xero
Hi, thanks for the prompt response.
We can have several of these caches, one per query being executed (it is an
exceptional case, but under load there can be several simultaneously), so we
would like to keep persistence to take advantage of swapping to disk in case
memory is not enough. That's why we didn't move these caches to a region
without persistence.

Do you think this scenario only happens with persistence enabled? Is it
possible to remove the validation?

Is this the expected behavior when persistence is enabled? I thought the node
would know that those caches were removed in its absence and would delete
the data accordingly.


Node is unable to join cluster because it has destroyed caches

2020-06-02 Thread xero
Hi Ignite team,
We have a use case where a small portion of the dataset must answer
successive queries that could be relatively expensive. For this, we create a
temporary cache with that small subset of the dataset and operate on that
new cache. At the end of the process, that cache is destroyed. The caches
are created *partitioned* and *persistence* is enabled.

The problem we are facing is the following. If one of those "temporal"
caches is destroyed while a node is rebooting (which is very likely, since
they have a short life), the node is unable to rejoin the cluster because it
contains information about a cache that no longer exists. This is the
exception:

2020-06-02T02:56:22.326+00:00 fa72f64b5d0f ignite: Caused by: class
org.apache.ignite.spi.IgniteSpiException: Joining node has caches with data
which are not presented on cluster, it could mean that they were already
destroyed, to add the node to cluster - remove directories with the caches
[cache-tempX, cache-tempY]

Is this an expected behavior? Is it possible to skip this validation? Is
there any way to indicate that it is a temporary cache?

Thanks
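One possible direction, sketched under the assumption that the temporary caches could tolerate losing data on restart: placing them in a data region with persistence disabled, so destroying them while a node is down leaves no on-disk state to trip the join validation. Region name and sizes are illustrative:

```java
import org.apache.ignite.configuration.*;

public class TempRegionConfig {
    public static IgniteConfiguration config() {
        // In-memory region dedicated to short-lived caches.
        DataRegionConfiguration tempRegion = new DataRegionConfiguration()
            .setName("temp-caches")
            .setPersistenceEnabled(false)
            .setMaxSize(512L * 1024 * 1024);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(
            new DataStorageConfiguration().setDataRegionConfigurations(tempRegion));

        // Temporary caches opt into the non-persistent region by name.
        CacheConfiguration<Object, Object> temp = new CacheConfiguration<>("cache-tempX");
        temp.setDataRegionName("temp-caches");
        cfg.setCacheConfiguration(temp);

        return cfg;
    }
}
```

The trade-off, as noted in the follow-up discussion, is losing disk overflow for these caches when memory runs short.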




Re: Slow cache updates with indexing module enabled

2020-02-13 Thread xero
Hi Andrei, thanks for taking the time to answer my question. I will consider
your suggestion if we decide to switch to a multiple-tables approach that
would require those JOIN considerations. But in this case we have only one
cache, and the operation we are executing is an update. We tried using SQL
UPDATE, but we also tried using a CacheEntryProcessor directly. My question
is: what happens to all those indexes when an entry is updated but none of
the indexed fields (except one) is changed? In our case, we are only
flipping a boolean value of a single field. Does this change trigger updates
in ALL the indexes associated with the cache?

Cache is like this (with indexes on all fields):
id|(other fields)|segment_1|segment_2|segment_3|...|segment_99|segment_100

Then we try updating a batch of entries with an invokeAll using a
CacheEntryProcessor:

    public Void process(MutableEntry<Object, BinaryObject> entry, Object... arguments) {
        final BinaryObjectBuilder builder =
            entry.getValue().toBuilder().setField("SEGMENT_1", true);
        entry.setValue(builder.build());

        return null;
    }
When we update an entry's SEGMENT_1 field to true, are the other 99 indexes
updated?
The tickets I mentioned seem to be related, but I would like your
confirmation.





Slow cache updates with indexing module enabled

2020-02-12 Thread xero
Hi,
We are experiencing slow updates to a cache with multiple indexed fields
(around 25 indexes during testing, but we expect many more) for updates that
change only one field. Basically, we have a customer*->belongsto->*segment
relationship, with one column per segment. Only that one column is updated,
with a 1 or 0, depending on whether the customer belongs to the segment.

During testing, we tried dropping half of the unrelated indexes (indexes
over fields that are not being updated), and performance doubled: we went
from roughly 1k ops to 2k ops.

We found these cases that may be related:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-19%3A+SQL+index+update+optimizations
https://issues.apache.org/jira/browse/IGNITE-7015?src=confmacro

Could you please confirm whether IGNITE-7015 could be related to this
scenario? If yes, do you have any plans to continue developing the fix?
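In case it is useful context, the mitigation we tested (fewer indexes) can be expressed by declaring fields and indexes explicitly through QueryEntity, so that only the segments actually queried carry an index. All names below are illustrative, not our real schema:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class SelectiveIndexes {
    public static CacheConfiguration<Integer, Object> config() {
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("id", Integer.class.getName());
        fields.put("segment_1", Boolean.class.getName());
        fields.put("segment_2", Boolean.class.getName());

        QueryEntity qe = new QueryEntity(Integer.class.getName(), "Customer")
            .setFields(fields)
            // Index only the segments that are actually queried.
            .setIndexes(Arrays.asList(new QueryIndex("segment_1")));

        return new CacheConfiguration<Integer, Object>("customer-cache")
            .setQueryEntities(Arrays.asList(qe));
    }
}
```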


We are using Ignite 2.7.6 with 10 nodes, 2 backups, indexing module enabled
and persistence.

Cache Configuration: [name=xdp-contactcomcast-1, grpName=null,
memPlcName=xdp, storeConcurrentLoadAllThreshold=5, rebalancePoolSize=2,
rebalanceTimeout=1, evictPlc=null, evictPlcFactory=null,
onheapCache=false, sqlOnheapCache=false, sqlOnheapCacheMaxSize=0,
evictFilter=null, eagerTtl=true, dfltLockTimeout=0, nearCfg=null,
writeSync=PRIMARY_SYNC, storeFactory=null, storeKeepBinary=false,
loadPrevVal=false, aff=RendezvousAffinityFunction [parts=1024, mask=1023,
exclNeighbors=false, exclNeighborsWarn=false, backupFilter=null,
affinityBackupFilter=null], cacheMode=PARTITIONED, atomicityMode=ATOMIC,
backups=2, invalidate=false, tmLookupClsName=null, rebalanceMode=ASYNC,
rebalanceOrder=0, rebalanceBatchSize=524288, rebalanceBatchesPrefetchCnt=2,
maxConcurrentAsyncOps=500, sqlIdxMaxInlineSize=-1, writeBehindEnabled=false,
writeBehindFlushSize=10240, writeBehindFlushFreq=5000,
writeBehindFlushThreadCnt=1, writeBehindBatchSize=512,
writeBehindCoalescing=true, maxQryIterCnt=1024,
affMapper=org.apache.ignite.internal.processors.cache.CacheDefaultBinaryAffinityKeyMapper@db5e319,
rebalanceDelay=0, rebalanceThrottle=0, interceptor=null,
longQryWarnTimeout=3000, qryDetailMetricsSz=0, readFromBackup=true,
nodeFilter=IgniteAllNodesPredicate [], sqlSchema=XDP_CONTACTCOMCAST_1,
sqlEscapeAll=false, cpOnRead=true, topValidator=null, partLossPlc=IGNORE,
qryParallelism=1, evtsDisabled=false, encryptionEnabled=false]


Thanks,




Configuration of IndexingSpi per cache?

2019-06-10 Thread xero
Hi,
Is it possible to configure a custom IndexingSpi per cache? I'd like to have
a custom indexing mechanism to improve the full-text capabilities, but I
would like to enable it only for some caches. Is that possible? Is it
possible to configure multiple IndexingSpi instances?
Will adding a custom IndexingSpi to the IgniteConfiguration affect any of
the built-in indexing capabilities of Ignite?
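From what I can tell, IgniteConfiguration.setIndexingSpi takes a single SPI for the whole node, so per-cache behavior would have to be dispatched inside the SPI itself, e.g. on the cacheName argument. A rough, unverified skeleton (PerCacheIndexingSpi and its cache set are hypothetical):

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
import java.util.Set;
import javax.cache.Cache;
import org.apache.ignite.spi.IgniteSpiAdapter;
import org.apache.ignite.spi.IgniteSpiException;
import org.apache.ignite.spi.indexing.IndexingQueryFilter;
import org.apache.ignite.spi.indexing.IndexingSpi;

// Hypothetical SPI: only caches listed in 'indexedCaches' are indexed;
// everything else is ignored.
public class PerCacheIndexingSpi extends IgniteSpiAdapter implements IndexingSpi {
    private final Set<String> indexedCaches;

    public PerCacheIndexingSpi(Set<String> indexedCaches) {
        this.indexedCaches = indexedCaches;
    }

    @Override public void store(String cacheName, Object key, Object val, long expirationTime) {
        if (indexedCaches.contains(cacheName)) {
            // Custom full-text indexing would go here.
        }
    }

    @Override public void remove(String cacheName, Object key) {
        if (indexedCaches.contains(cacheName)) {
            // Remove the entry from the custom index.
        }
    }

    @Override public Iterator<Cache.Entry<?, ?>> query(String cacheName,
        Collection<Object> params, IndexingQueryFilter filters) throws IgniteSpiException {
        // No custom query support in this sketch.
        return Collections.emptyIterator();
    }

    @Override public void spiStart(String igniteInstanceName) throws IgniteSpiException { /* no-op */ }
    @Override public void spiStop() throws IgniteSpiException { /* no-op */ }
}
```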

Thanks





Re: BinaryObjectImpl.deserializeValue with specific ClassLoader

2018-11-27 Thread xero
Hi,
We are also facing this issue while trying to retrieve domain objects inside
a compute task.
Do you have plans to add this feature in the near future? Maybe in the 2.7
release?

Regards






Re: IgniteUtils enables strictPostRedirect in a static block

2018-09-18 Thread xero
Hi,
Does anyone have information about this?

Thanks.






Using compute().call from a Service has P2P class loading feature?

2018-09-14 Thread xero
Hello,
I have a question about P2P class loading.

My topology is 1 dedicated server and 1 client.
I have deployed a Service on the Ignite client node (selecting clusterGroup
forClients), from which I'm trying to execute a compute() on the server node.
The IgniteCallable that I'm sending to call() references the user's classes.
I would expect P2P class loading to transfer those classes to the server
node (as the service is running on a client node, the classpath has the
mentioned classes).

This is not working: I'm getting a ClassNotFoundException for the user class.
Should this work?

Thanks!





IgniteUtils enables strictPostRedirect in a static block

2018-09-11 Thread xero
Hello Igniters,
We noticed that the IgniteUtils class has a static initialization block
(line 796 in version 2.6) in which system properties are changed. In
particular, the property "http.strictPostRedirect" is set to "true".
This can change how an application behaves when it references any class that
triggers the static block.

Is there any reason to have this property configured this way?

As a workaround, we are forcing the initialization of this class in order to
override the property value back to false in a controlled way. We identified
that the new-version checker could be using it, but we would like to know if
disabling this property could cause any additional issues.
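To make the mechanism concrete without depending on Ignite, here is a self-contained stand-in (Library mimics the IgniteUtils static block; it is not Ignite code): force class initialization early, then reset the property in a controlled way.

```java
public class StaticBlockDemo {
    static class Library {
        static {
            // Mimics the IgniteUtils static block: a system property is
            // changed as a side effect of class loading.
            System.setProperty("http.strictPostRedirect", "true");
        }
        static void touch() { /* forces class initialization */ }
    }

    public static void main(String[] args) {
        Library.touch(); // triggers the static block
        System.out.println(System.getProperty("http.strictPostRedirect")); // true

        // Workaround: override the value back in a controlled way.
        System.setProperty("http.strictPostRedirect", "false");
        System.out.println(System.getProperty("http.strictPostRedirect")); // false
    }
}
```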

Any information would be appreciated.

Thanks





RE: Does event listener keep the order of events as they occur on the cluster?

2018-07-25 Thread xero
Hi,
It is not mandatory to provide a filter.
For example, this will get all the events from "myCache":

    ContinuousQuery<BinaryObject, BinaryObject> qry = new ContinuousQuery<>();
    qry.setLocalListener(new MyEventListener());
    myCache.withKeepBinary().query(qry);

    @IgniteAsyncCallback
    private class MyEventListener
        implements CacheEntryUpdatedListener<BinaryObject, BinaryObject> {
        @Override
        public void onUpdated(
                Iterable<CacheEntryEvent<? extends BinaryObject, ? extends BinaryObject>> evts) {
            // Do what you want here
        }
    }





Re: Ignite Transaction and Cache Events

2018-07-15 Thread xero
Thanks for replying, PRasad.
I understand your approach, but I still don't get /when/ I should group all
events for the same transaction. In my tests I get onSessionStart and
onSessionEnd for each cache (both sharing the same cix). I would love to
have a post-commit hook, because at that point I'm sure that I have collected
all the events. (In this example I have caches A and B but, in reality,
there could be many of them.)

Example implementing CacheStoreSessionListener to log events:
em.getTransaction().begin();
A a = new A();
a.setData(10);
em.persist(a);

B b = new B();
b.setIntData(20);
em.persist(b);

em.getTransaction().commit();

Triggered events:
OnSessionStart cacheName:a_entity
xid:e1fceee9461--0888-5e7a--0001
OnSessionStart cacheName:b_entity
xid:e1fceee9461--0888-5e7a--0001
OnSessionEnd cacheName:a_entity
xid:e1fceee9461--0888-5e7a--0001 commit:true
OnSessionEnd cacheName:b_entity
xid:e1fceee9461--0888-5e7a--0001 commit:true


Is there a way to get only one onSessionEnd event, or to get an event
associated with the transaction?

Thanks!





Ignite Transaction and Cache Events

2018-07-13 Thread xero
Hi,
Do you know if there is a mechanism to subscribe to transaction events (pre-
or post-commit)?

My scenario is the following:
I have 2 caches (A and B) that are modified in the same transaction. On the
other hand, I have a third cache, C, whose entries are calculated based on A
and B values.
I would like to update C once the transaction that modifies A and B commits.

I plan to collect the modifications of caches A and B by subscribing to the
events (which carry the associated transaction id in the cix field). I would
like to group all the events associated with a certain cix, but I cannot
find a way to hook into the transaction's commit event to ensure that all
the cache events were triggered.
Then, if the tx modified 5 entries in A and 5 entries in B, I would like to
group them once I have collected all 10 events.
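Absent a commit hook, the grouping step itself can be sketched in plain Java. This is not an Ignite API, just a collector keyed by the transaction id; knowing when a group is complete still requires the missing commit event:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TxEventCollector {
    // Events collected so far, grouped by transaction id (xid/cix).
    private final Map<String, List<String>> byXid = new ConcurrentHashMap<>();

    /** Records one cache event; returns a snapshot of the events seen for the xid. */
    public List<String> collect(String xid, String event) {
        List<String> evts = byXid.computeIfAbsent(xid, k -> new ArrayList<>());
        synchronized (evts) {
            evts.add(event);
            return new ArrayList<>(evts);
        }
    }
}
```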

Any help would be greatly appreciated



