Re: Query on Ignite Visor usage

2020-12-08 Thread Evgenii Zhuravlev
Hi,

I don't think it's related to the commands that you run from Visor;
it's more about the way Visor connects to the cluster. Internally, it
starts a daemon node, which basically has the same discovery and
communication mechanisms as other nodes. So, other nodes will try to
communicate with this node, and it makes sense to place it closer to the
server nodes.

Evgenii

Tue, Dec 8, 2020 at 09:02, vbm :

> Hi,
>
> I was going over the ignite confluence page on Ignite 3.0 wishlist.
>
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+3.0+Wishlist
>
> I saw the below line in the page:
> *"Visorcmd - a node that is always connected to the grid may be a
> destabilizing behavior; actionable utilities like control.sh make more
> sense"*
>
>
> Based on the above, I had one question: how much is it recommended to use
> Visor to debug the Ignite cluster?
> Will it cause any overhead when a cache or node command is run from Visor?
>
>
>
> Regards,
> Vishwas
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite communicating with non ignite servers

2020-11-23 Thread Evgenii Zhuravlev
Hi,

Can you please tell me what scan you were running? I want to reproduce this
issue using tenable.sc.

Thank you,
Evgenii


Tue, Sep 22, 2020 at 06:55, Ilya Kasnacheev :

> Hello!
>
> I don't think it should cause heap dumps. Here you are showing just a
> warning. This warning may be ignored.
>
> It's outside the scope of Apache Ignite to disable something else that tries
> to connect to it. If you have invasive security port scanning, you should
> expect to see warnings/errors in the logs of any network application.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Tue, Sep 22, 2020 at 16:26, ignite_user2016 :
>
>> We have SSL enabled on all servers, but somehow it is trying to attempt a
>> connection over SSL, causing heap dumps. Is there a way to stop the
>> external
>> server from trying to connect to Ignite?
>>
>> 2020-09-10 22:52:47,029 WARN [grid-nio-worker-tcp-comm-3-#27%NAME_GRID%]
>> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi Client
>> disconnected abruptly due to network connection loss or because the
>> connection was left open on application shutdown. [cls=class
>> o.a.i.i.util.nio.GridNioException, msg=Failed to decode SSL data:
>> GridSelectorNioSessionImpl [worker=DirectNioClientWorker
>> [super=AbstractNioClientWorker [idx=3, bytesRcvd=13315002728, bytesSent=0,
>> bytesRcvd0=18, bytesSent0=0, select=true, super=GridWorker
>> [name=grid-nio-worker-tcp-comm-3, igniteInstanceName=WebGrid,
>> finished=false, heartbeatTs=1599796365124, hashCode=1230825885,
>> interrupted=false, runner=grid-nio-worker-tcp-comm-3-#27%WebGrid%]]],
>> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
>> readBuf=java.nio.DirectByteBuffer[pos=18 lim=18 cap=32768],
>> inRecovery=null,
>> outRecovery=null, closeSocket=true,
>>
>> outboundMessagesQueueSizeMetric=o.a.i.i.processors.metric.impl.LongAdderMetric@69a257d1
>> ,
>> super=GridNioSessionImpl [locAddr=/*IG_SERVER1*:47101, rmtAddr=/*SEC_SCAN*
>> SERVER:52082, createTime=1599796365124, closeTime=0, bytesSent=0,
>> bytesRcvd=18, bytesSent0=0, bytesRcvd0=18, sndSchedTime=1599796365124,
>> lastSndTime=1599796365124, lastRcvTime=1599796367026, readsPaused=false,
>> filterChain=FilterChain[filters=[GridNioCodecFilter
>> [parser=o.a.i.i.util.nio.GridDirectParser@20ca1d6a, directMode=true],
>> GridConnectionBytesVerifyFilter, SSL filter], accepted=true,
>> markedForClose=false]]]
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Ignite 2.8.1: Database Closed error

2020-11-19 Thread Evgenii Zhuravlev
Hi,

I don't think it's related to the discovery. In the log, before the OOM you can
see a long JVM pause:
 Possible too long JVM pause: 2323 milliseconds.

So, you probably used more heap memory than you had.

What version do you use? How much heap memory do you have? What do you do
with the cluster (SQL, key-value operations, etc.)? Also, can you share full
logs?

I would recommend collecting a heap dump (for example, by starting the JVM
with -XX:+HeapDumpOnOutOfMemoryError) and checking which objects use the
memory there.

Evgenii

Thu, Nov 19, 2020 at 09:16, Mahesh Renduchintala <
mahesh.renduchint...@aline-consulting.com>:

> Hi,
>
>
> Any pointers on what the below error means?
> There seems to be an out of memory on DiscoverySpi. What can cause this?
>
>
>
>
> ^--   TxLog region [used=0MB, free=100%, comm=100MB]
> ^-- Ignite persistence [used=50071MB]
> ^--   sysMemPlc region [used=0MB]
> ^--   default region [used=50071MB]
> ^--   metastoreMemPlc region [used=0MB]
> ^--   TxLog region [used=0MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=1, qSize=0]
> [16:29:45,630][INFO][exchange-worker-#85][GridCachePartitionExchangeManager]
> Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
> [topVer=2016, minorTopVer=0], force=true, evt=DISCOVERY>
> [16:30:12,877][WARNING][jvm-pause-detector-worker][IgniteKernal] Possible
> too long JVM pause: 2323 milliseconds.
> [16:30:12,877][INFO][tcp-disco-srvr-[:47500]-#3][TcpDiscoverySpi] TCP
> discovery accepted incoming connection [rmtAddr=/192.168.10.137,
> rmtPort=39599]
> [16:30:12,882][SEVERE][tcp-disco-client-message-worker-[9a0c020b
> 192.168.1.9:61059]-#1560][TcpDiscoverySpi] Runtime error caught during
> grid runnable execution: GridWorker [name=tcp-disco-client-message>
> java.lang.OutOfMemoryError: Java heap space
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at
> java.util.concurrent.LinkedBlockingDeque.pollFirst(LinkedBlockingDeque.java:522)
> at
> java.util.concurrent.LinkedBlockingDeque.poll(LinkedBlockingDeque.java:684)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7761)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7697)
> at
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:61)
> [16:30:12,883][INFO][tcp-disco-srvr-[:47500]-#3][TcpDiscoverySpi] TCP
> discovery spawning a new thread for connection [rmtAddr=/192.168.10.137,
> rmtPort=39599]
> [16:30:12,883][SEVERE][query-#218652][GridMapQueryExecutor] Failed to
> execute local query.
> class org.apache.ignite.IgniteCheckedException: Failed to execute SQL
> query. The database has been closed [90098-197]
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(IgniteH2Indexing.java:874)
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:955)
> at
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0(GridMapQueryExecutor.java:412)
> at
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest(GridMapQueryExecutor.java:241)
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage(IgniteH2Indexing.java:2186)
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$17(IgniteH2Indexing.java:2139)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage(GridIoManager.java:3386)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1847)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1472)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.access$5200(GridIoManager.java:229)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1367)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.h2.jdbc.JdbcSQLException: The database has been closed
> [90098-197]
> at
> org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
> at org.h2.message.DbException.get(DbException.java:179)
> at org.h2.message.DbException.get(DbException.java:155)
>  

Re: L2-cache slow/not working as intended

2020-11-06 Thread Evgenii Zhuravlev
Hi,

How many nodes do you have? Can you check the same scenario with one node
only? How do you run the queries? Is the client on the same machine as a
server node?

I would recommend enabling DEBUG logs for the org.apache.ignite.cache.hibernate
package. The DEBUG logs will show all get and put operations for the hibernate
cache.

Best Regards,
Evgenii

Thu, Nov 5, 2020 at 01:59, Bastien Durel :

> Hello,
>
> I'm using an Ignite cluster to back a hibernate-based application. I
> configured the L2 cache as explained in
>
> https://ignite.apache.org/docs/latest/extensions-and-integrations/hibernate-l2-cache
>
> (config below)
>
> I ran a test reading a 1M-element cache with a consumer counting
> elements. It's very slow: more than 5 minutes to run.
>
> The session metrics say it was the L2C puts that took most of the time (5
> minutes and 3 seconds of a 5:12 operation)
>
> INFO  [2020-11-05 09:51:15,694]
> org.hibernate.engine.internal.StatisticalLoggingSessionEventListener:
> Session Metrics {
> 33350 nanoseconds spent acquiring 1 JDBC connections;
> 25370 nanoseconds spent releasing 1 JDBC connections;
> 571572 nanoseconds spent preparing 1 JDBC statements;
> 1153110307 nanoseconds spent executing 1 JDBC statements;
> 0 nanoseconds spent executing 0 JDBC batches;
> 303191158712 nanoseconds spent performing 100 L2C puts;
> 23593547 nanoseconds spent performing 1 L2C hits;
> 0 nanoseconds spent performing 0 L2C misses;
> 370656057 nanoseconds spent executing 1 flushes (flushing a total of
> 101 entities and 2 collections);
> 4684 nanoseconds spent executing 1 partial-flushes (flushing a total
> of 0 entities and 0 collections)
> }
>
> It seems long, even for 1M puts, but OK, let's say the L2C is
> initialized now, and it will be better next time? So I ran the query
> again, but it took 5+ minutes again ...
>
> INFO  [2020-11-05 09:58:02,538]
> org.hibernate.engine.internal.StatisticalLoggingSessionEventListener:
> Session Metrics {
> 28982 nanoseconds spent acquiring 1 JDBC connections;
> 25974 nanoseconds spent releasing 1 JDBC connections;
> 52468 nanoseconds spent preparing 1 JDBC statements;
> 1145821128 nanoseconds spent executing 1 JDBC statements;
> 0 nanoseconds spent executing 0 JDBC batches;
> 303763054228 nanoseconds spent performing 100 L2C puts;
> 1096985 nanoseconds spent performing 1 L2C hits;
> 0 nanoseconds spent performing 0 L2C misses;
> 317558122 nanoseconds spent executing 1 flushes (flushing a total of
> 101 entities and 2 collections);
> 5500 nanoseconds spent executing 1 partial-flushes (flushing a total
> of 0 entities and 0 collections)
> }
>
> Why did the L2 cache have to be filled again? Isn't its purpose to
> share it between Sessions?
>
> Actually, disabling it makes the test run in less than 6 seconds.
>
> Why is the L2C working that way?
>
> Regards,
>
>
> **
>
> I'm running 2.9.0 from Debian package
>
> Hibernate properties :
> hibernate.cache.use_second_level_cache: true
> hibernate.generate_statistics: true
> hibernate.cache.region.factory_class:
> org.apache.ignite.cache.hibernate.HibernateRegionFactory
> org.apache.ignite.hibernate.ignite_instance_name: ClusterWA
> org.apache.ignite.hibernate.default_access_type: READ_ONLY
>
> Method code:
> @GET
> @Timed
> @UnitOfWork
> @Path("/events/speed")
> public Response getAllEvents(@Auth AuthenticatedUser auth) {
> AtomicLong id = new AtomicLong();
> StopWatch watch = new StopWatch();
> watch.start();
> evtDao.findAll().forEach(new Consumer<Event>() {
>
> @Override
> public void accept(Event t) {
> long cur = id.incrementAndGet();
> if (cur % 65536 == 0)
> logger.debug("got element#{}",
> cur);
> }
> });
> watch.stop();
> return Response.ok().header("X-Count",
> Long.toString(id.longValue())).entity(new Time(watch)).build();
> }
>
> Event cache config:
>
>
> <bean class="org.apache.ignite.configuration.CacheConfiguration">
>   <property name="name" value="EventCache" />
>   <property name="cacheMode" value="PARTITIONED" />
>   <property name="atomicityMode" value="TRANSACTIONAL" />
>   <property name="..." /> <!-- property name stripped by the archive -->
>   <property name="..." value="true" /> <!-- property name stripped by the archive -->
>   <property name="..." value="true" /> <!-- property name stripped by the archive -->
>   <property name="dataRegionName" value="Disk" />
>
>   <property name="keyConfiguration">
>     <list>
>       <bean class="org.apache.ignite.cache.CacheKeyConfiguration">
>         ...
>       </bean>
>     </list>
>   </property>
> </bean>

Re: Eviction policy enablement leads to ignite cluster blocking, it does not use pages from freelist

2020-10-05 Thread Evgenii Zhuravlev
Hi Prasad,

What operations do you run on the cluster? What is the size of the objects? Is
it possible to share full logs from the nodes? Do you have some kind of small
reproducer for this issue? It would be really helpful.

Thanks,
Evgenii

Mon, Oct 5, 2020 at 07:53, Prasad Pillala :

> Hi,
>
>
>
> evictDataPage() always leads to the Ignite cluster being blocked, for some reason.
>
> This method does not seem to consider the freelist, which still has
> some/many pages available. But evictDataPage() keeps trying to evict a few
> entries from filled pages, and after some time (a few minutes, after it
> reached the evictionThreshold memory) it does not get any pages/entries to
> evict. It starts reporting "Too many failed attempts to evict page: 30".
>
>
>
> My IgniteConfiguration is as follows:
>
> DataRegionConfiguration:
>
> dataRegionConfig.setMaxSize(8L * 1024 * 1024 * 1024); // 8GB
>
> dataRegionConfig.setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU); // tried LRU2 as well
>
> ...
>
> igniteDataCfg.setPageSize(pageSizeKB); // 16KB
>
>
>
>Ignite version - 2.8.0
>
>
>
> Using only off-heap for caching. DataRegion persistence is disabled, as we
> have 3rd-party persistence configured with read-through & write-through
> enabled.
>
>
>
> When I tried different evictionThreshold values, I still got the same result. Not
> sure what the problem with my configuration is.
>
>
>
> Many thanks in advance for your help.
>
>
>
>
>
>


Re: Ignite communicating with non ignite servers

2020-09-21 Thread Evgenii Zhuravlev
Hi,

What security scan tool do you use?

Evgenii

Mon, Sep 21, 2020 at 09:03, ignite_user2016 :

> Recently, we migrated Ignite to JDK 11. All works well, except that when we run
> our
> security scan, the Ignite node tries to connect to those servers, resulting in
> out-of-memory and heap dump errors.
>
> Is it possible to stop that scan server from connecting to Ignite?
>
> Any configuration?
>
> Help is much appreciated.
>
> I have also observed that Ignite Visor is broken: it can't give us
> the states for nodes, memory and CPU.
>
> Thanks..
> Rishi
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache ignite statefulsets pods abruptly restarts

2020-09-21 Thread Evgenii Zhuravlev
There is no such thing as an "on-heap cache only". It's possible to enable an
additional cache level in the heap, but it will still store all data in the
off-heap memory. So, right now you need at least 10.25 GB + 8 GB + the
checkpoint buffer size for your Ignite node (with default settings the
checkpoint buffer for an 8 GB persistent region is 2 GB, so roughly 20 GB in
total against the 11 GB pod limit).

Evgenii

Mon, Sep 21, 2020 at 09:29, Sanjaya :

> Hi All,
>
> In our production environment, Ignite v2.8.1 is installed as Kubernetes
> StatefulSet pods inside an Azure Kubernetes cluster. There are 2 pods
> running.
>
> Ignite persistence is enabled, with on-heap cache only.
>
> The pod is running with the below guaranteed resources:
> Memory: 11 GB
> CPU: 3 cores
>
> Ignite is given a heap of 10.25 GB.
> The total data region size is 8 GB.
>
>
> We are getting the below error when 2 caches are joined without any
> indexing; one of the pods' JVM simply restarts, and we are not sure what's
> going on.
> The use case is that the Ignite cache grid holds all master data, loads it
> from Postgres, and is planned to be called from 30+ different pods for the
> same kind of queries.
>
> We are completely stuck on this use case, and are wondering whether Ignite
> is right for it.
>
>
> The stack trace as is below
> =
>   AND (A__Z0.ASSET_UID = B__Z1.ASSET_UID
> ORDER BY 9, 1]
> [09:43:10,370][WARNING][jvm-pause-detector-worker][IgniteKernal] Possible
> too long JVM pause: 872 milliseconds.
> [09:43:10,630][WARNING][client-connector-#52][IgniteH2Indexing] Long
> running
> query is finished [time=4316ms, type=MAP, distributedJoin=false,
> enforceJoinOrder=true, lazy=false, schema=CRTX, node=TcpDiscoveryNode
> [id=4093191a-f958-4b4b-bf55-ae774d450fa2,
> consistentId=4ed84cd6-d24c-4b2e-b61b-e747b0a6e6ba, addrs=ArrayList
> [10.188.0.108, 127.0.0.1], sockAddrs=HashSet
> [ignite-0.ignite.ignite.svc.cluster.local/10.188.0.108:47500,
> /127.0.0.1:47500], discPort=47500, order=2, intOrder=2,
> lastExchangeTime=1600681390383, loc=true, ver=2.8.1#20200521-sha1:86422096,
> isClient=false], reqId=145, segment=0, sql='SELECT
> A__Z0.ASSET_UID __C0_0,
> A__Z0.ATTRIBUTE_CODE __C0_1,
> B__Z1.TYPE __C0_2,
> A__Z0.NUMVALUE __C0_3,
> A__Z0.UNIT_SYMBOL __C0_4,
> A__Z0.ALNVALUE __C0_5,
> A__Z0.CHANGEDATE __C0_6,
> B__Z1.CHANGEDATE __C0_7,
> A__Z0.ORG_ID __C0_8
> FROM CRTX.ASSET B__Z1
>  INNER JOIN CRTX.ASSETSPEC A__Z0
>  ON TRUE
> WHERE (B__Z1.LOCATION_UID = 'R02ERUS010843') AND ((A__Z0.ORG_ID = ?4) AND
> (((A__Z0.CHANGEDATE > ?2) OR (B__Z1.CHANGEDATE > ?3)) AND ((B__Z1.TYPE =
> ?1)
> AND (A__Z0.ASSET_UID = B__Z1.ASSET_UID
> ORDER BY 9, 1', plan=SELECT
> A__Z0.ASSET_UID AS __C0_0,
> A__Z0.ATTRIBUTE_CODE AS __C0_1,
> B__Z1.TYPE AS __C0_2,
> A__Z0.NUMVALUE AS __C0_3,
> A__Z0.UNIT_SYMBOL AS __C0_4,
> A__Z0.ALNVALUE AS __C0_5,
> A__Z0.CHANGEDATE AS __C0_6,
> B__Z1.CHANGEDATE AS __C0_7,
> A__Z0.ORG_ID AS __C0_8
> FROM CRTX.ASSET B__Z1
> /* CRTX.ASSET.__SCAN_ */
> /* WHERE (B__Z1.LOCATION_UID = 'R02ERUS010843')
> AND (B__Z1.TYPE = ?1)
> */
> /* scanCount: 377126 */
> INNER JOIN CRTX.ASSETSPEC A__Z0
> /* CRTX."_key_PK": ASSET_UID = B__Z1.ASSET_UID */
> ON 1=1
> WHERE (B__Z1.LOCATION_UID = 'R02ERUS010843')
> AND ((A__Z0.ORG_ID = ?4)
> AND (((A__Z0.CHANGEDATE > ?2)
> OR (B__Z1.CHANGEDATE > ?3))
> AND ((B__Z1.TYPE = ?1)
> AND (A__Z0.ASSET_UID = B__Z1.ASSET_UID
> ORDER BY 9, 1]
> /opt/ignite/apache-ignite/bin/ignite.sh: line 207:74 Killed
>
> "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON:-}
> -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp
> "${CP}" ${MAIN_CLASS} "${CONFIG}"
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Lag before records are visible after transaction commit

2020-09-15 Thread Evgenii Zhuravlev
Ilya,

This won't help, since the problem here is that the CQ doesn't return all the
needed keys.

Evgenii

Tue, Sep 15, 2020 at 02:28, Ilya Kasnacheev :

> Hello!
>
> Maybe the keys could be queued from the CQ to be revisited later with a
> transaction-per-key approach.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Sep 14, 2020 at 21:15, Evgenii Zhuravlev  >:
>
>> No, I don't see other ways to do this transactionally, as CQ itself is
>> not transactional.
>>
>> Evgenii
>>
>> Thu, Sep 10, 2020 at 00:52, ssansoy :
>>
>>> unfortunately the 's' on B here can't be derived from a number 0..n -
>>> e.g. it
>>> isn't a numeric id.
>>>
>>> E.g. in practice lets say:
>>>
>>> A is a "Location"
>>> it has properties: "city", "street" etc
>>>
>>> B is a "Person" with key:
>>> p = city
>>> q = street
>>> r = social security number
>>>
>>> E.g. an A and associated B's are updated in a transaction; we want our
>>> client app to see the updated A and B's where the Person lives at that
>>> Location.
>>>
>>> E.g. A is updated and our continuous query on A picks up:
>>> city = London
>>> street = Downing Street
>>>
>>> We would like to say:
>>> Select * from B where city="London" and street="Downing Street"
>>>
>>> Is there any way at all in Ignite to do this transactionally, so if an A
>>> and
>>> associated B's are updated in one transaction (e.g. a street is renamed
>>> from
>>> "Downing Street" to "Regent Street"), then our client app can see them
>>> consistently?
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>


Re: Lag before records are visible after transaction commit

2020-09-14 Thread Evgenii Zhuravlev
No, I don't see other ways to do this transactionally, as CQ itself is not
transactional.

Evgenii

Thu, Sep 10, 2020 at 00:52, ssansoy :

> unfortunately the 's' on B here can't be derived from a number 0..n - e.g.
> it
> isn't a numeric id.
>
> E.g. in practice lets say:
>
> A is a "Location"
> it has properties: "city", "street" etc
>
> B is a "Person" with key:
> p = city
> q = street
> r = social security number
>
> E.g. an A and associated B's are updated in a transaction; we want our
> client app to see the updated A and B's where the Person lives at that
> Location.
>
> E.g. A is updated and our continuous query on A picks up:
> city = London
> street = Downing Street
>
> We would like to say:
> Select * from B where city="London" and street="Downing Street"
>
> Is there any way at all in Ignite to do this transactionally, so if an A
> and
> associated B's are updated in one transaction (e.g. a street is renamed
> from
> "Downing Street" to "Regent Street"), then our client app can see them
> consistently?
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Lag before records are visible after transaction commit

2020-09-09 Thread Evgenii Zhuravlev
Yes, but if you know the number of entries B for this object A, then you
can get all the objects using s, which will be 0..n.
Evgenii

Mon, Sep 7, 2020 at 06:38, ssansoy :

> Thanks Evgenii,
>
> Sorry to keep revisiting this - maybe I am misunderstanding, but don't we
> also need 's' to be able to query B by key? E.g. the key of B consists of
> {q, r, s}. We only have q and r from the parent A.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Lag before records are visible after transaction commit

2020-09-04 Thread Evgenii Zhuravlev
You can put the number of entries in cache B related to this object A right
in the object A. After that, you can use this number to construct the keys of
all the objects from cache B, as you already know q and r. But it depends on
the use case.
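
For illustration, a minimal sketch of that idea in Java. The classes A, B and
BKey and their accessors are hypothetical placeholders for your model:

    // Hypothetical sketch: A stores how many B entries belong to it, so a
    // reader can rebuild every B key (q, r, s) with s = 0..count-1 and
    // fetch them all by key, without SQL.
    A parent = cacheA.get(aKey);
    Set<BKey> keys = new HashSet<>();
    for (int s = 0; s < parent.getChildCount(); s++)
        keys.add(new BKey(parent.getQ(), parent.getR(), s));
    Map<BKey, B> children = cacheB.getAll(keys);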

Evgenii

Fri, Sep 4, 2020 at 03:21, ssansoy :

> Thanks Evgenii,
>
> Could you please elaborate a bit on how the get would work here.
>
> E.g. parent object A has properties p, q, r
> child object B has properties q, r, s, t
>
> {q, r, s} are the primary key of B (as defined in backing SQL table DDL
> which is how the cache was created)
>
> When an A update comes in with values p1, q1, r1, we were doing a select *
> from B where q=q1 and r=r1, which would return multiple records.
>
> Is there an equivalent using igniteCacheForB.get(key)? What would the key be
> here?
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite test takes several days

2020-09-03 Thread Evgenii Zhuravlev
As far as I can see, it's configured to run for a very long time. Anyway, it
is a part of IgnitePerformanceTestSuite, which contains long-running
performance tests, and I believe they shouldn't be run for each commit. So, I
think you can just disable this test suite.

Evgenii

Fri, Aug 28, 2020 at 10:52, Cong Guo :

> Hi,
>
> I want to run all the tests. Actually I want to apply the patch for
> https://issues.apache.org/jira/browse/IGNITE-10959 to 2.8.1. I find that
> even for the original 2.8.1 source code, the test takes a long time. I
> think there must be an env or configuration issue. Do I need any special
> configuration for the ignite core test? Thank you.
>
>
> On Thu, Aug 27, 2020 at 9:53 AM Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> Hi,
>>
>> No, it's not normal. Do you really want to run all the tests locally, or
>> do you just want to build the project? If you just want to build it, I
>> suggest skipping the tests with the -Dmaven.test.skip=true flag (i.e.
>> mvn clean package -Dmaven.test.skip=true).
>> Evgenii
>>
>> Thu, Aug 27, 2020 at 06:33, Cong Guo :
>>
>>> Hi,
>>>
>>> I try to build the ignite-core on my workstation. I use the original
>>> ignite-2.8.1 source package. The test, specifically
>>> GridCacheWriteBehindStoreLoadTest, has been running for several days. Is it
>>> normal? I run "mvn clean package" directly. Should I configure anything in
>>> advance? Thank you.
>>>
>>


Re: Lag before records are visible after transaction commit

2020-09-03 Thread Evgenii Zhuravlev
Yes, it is expected that ScanQuery and ContinuousQuery are not
transactional.

> Getting all the records will pull everything back onto the client and we would
have to filter locally if I am understanding correctly?
There is no need to get all the entries from the cache; you can get entries
with certain keys. In your example, you can get all the entries based on the
generated keys if you know the number of inserted entries. This number, for
example, can be inserted as a part of the first object.

Evgenii

Wed, Sep 2, 2020 at 08:53, ssansoy :

> Thanks for looking into it. Is this expected?
> Just wondering how another node can ever be transactionally notified of an
> update in an event-driven way if continuous queries don't support
> transactions?
>
> Using getAll isn't a practical workaround unfortunately, as we want to get
> the B records based on the value of some of its fields. E.g. a scan query
> with a filter on the child, or an SQL fields query. Getting all the records
> will pull everything back onto the client and we would have to filter locally
> if I am understanding correctly?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Lag before records are visible after transaction commit

2020-09-02 Thread Evgenii Zhuravlev
Hi,

To make this work, you can change the transaction isolation from READ_COMMITTED
to SERIALIZABLE and replace the scanQuery with getAll. In this case, the getAll
operation will wait for the locked keys. Note that running cache operations in
the CQ listener thread may cause a deadlock, so it's better to use another
thread for that.
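
A rough sketch of that combination, assuming the PESSIMISTIC concurrency used
earlier in this thread; cacheB and keysOfInterest are illustrative names:

    // Run in a separate thread, not in the CQ callback itself, to avoid deadlocks.
    try (Transaction tx = ignite.transactions().txStart(
            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE)) {
        // getAll blocks until keys locked by the writing transaction are released
        Map<BKey, B> rows = cacheB.getAll(keysOfInterest);
        tx.commit();
    }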

Best Regards,
Evgenii


Re: Lag before records are visible after transaction commit

2020-09-01 Thread Evgenii Zhuravlev
Hi,

I checked this reproducer. The Continuous Query itself is not transactional,
and it looks like it can't be used for this at the moment. So, it gets the
notification before the other entries were committed.

Best Regards,
Evgenii

Tue, Sep 1, 2020 at 00:34, ssansoy :

> Hi, is anyone able to help look into/reproduce this with the example code
> given? Thanks!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Item not found, B+Tree is corrupted and critical error detected after add nodes to baseline topology

2020-08-31 Thread Evgenii Zhuravlev
Can you export logs from your system in a different format? I don't really
understand how it can be analyzed by a person in this format.

Evgenii

Mon, Aug 31, 2020 at 00:36, Steven Zheng :

> Thanks Evgenii, but the original log has not been saved, and I can only
> provide JSON-format logs.
> Best Regards,
> -
> Steven Zheng
> E-mail: closee...@gmail.com
>
>
> Evgenii Zhuravlev  wrote on Fri, Aug 28, 2020 at 6:45 AM:
>
>> Hi,
>>
>> Can you attach logs in normal format? It's really hard to read it. Also,
>> please attach full logs from nodes, not only the stacktrace.
>>
>> Thanks,
>> Evgenii
>>
>> Tue, Aug 25, 2020 at 19:27, Steven Zheng :
>>
>>> Hi community,
>>> Currently I have 25 nodes in my ignite cluster and all of them were
>>> added into the baseline, and I was trying to add another 5 nodes into it.
>>> The data in my cluster is about 8TB and the persistence is enabled.
>>> At first I start all the 5 nodes; then execute in the command line:
>>> ```
>>> bin/control.sh --baseline add ${my_node_id}
>>> ```
>>> Meanwhile, there is still read/write workloads on the cluster.
>>>
>>> After a few minutes, a critical error detected , several nodes crashed
>>> and emits the logs like this(json format):
>>> ```
>>> {
>>>   "message": "Critical system error detected. Will be handled
>>> accordingly to configured handler [hnd=StopNodeFailureHandler
>>> [super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
>>> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
>>> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class
>>> o.a.i.i.processors.cache.persistence.tree.CorruptedTreeException: B+Tree is
>>> corrupted [pages(groupId, pageId)=[IgniteBiTuple [val1=-1526563570,
>>> val2=844420635165881]], cacheId=-2021712086, cacheName=MY_SQL_TABLE,
>>> indexName=AFFINITY_KEY, msg=Runtime failure on search row: Row@d00bc1[
>>> key: SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf_KEY
>>> [idHash=2015873411, hash=-2113698004, CARDNUMBER=25323111322, IP=], val:
>>> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf
>>> [idHash=1729397221, hash=2090272279, MY_BIZ_UID=0, ETL_TIME=1596023962011]
>>> ][ 25323111322, , 0, 1596023962011 ",
>>>   "loggerFqcn": "org.apache.logging.log4j.spi.AbstractLogger",
>>>   "thrown": {
>>> "localizedMessage": "B+Tree is corrupted [pages(groupId,
>>> pageId)=[IgniteBiTuple [val1=-1526563570, val2=844420635165881]],
>>> cacheId=-2021712086, cacheName=MY_SQL_TABLE, indexName=AFFINITY_KEY,
>>> msg=Runtime failure on search row: Row@d00bc1[ key:
>>> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf_KEY
>>> [idHash=2015873411, hash=-2113698004, CARDNUMBER=25323111322, IP=], val:
>>> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf
>>> [idHash=1729397221, hash=2090272279, MY_BIZ_UID=0, ETL_TIME=1596023962011]
>>> ][ 25323111322, , 0, 1596023962011 ]]",
>>> "message": "B+Tree is corrupted [pages(groupId,
>>> pageId)=[IgniteBiTuple [val1=-1526563570, val2=844420635165881]],
>>> cacheId=-2021712086, cacheName=MY_SQL_TABLE, indexName=AFFINITY_KEY,
>>> msg=Runtime failure on search row: Row@d00bc1[ key:
>>> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf_KEY
>>> [idHash=2015873411, hash=-2113698004, CARDNUMBER=25323111322, IP=], val:
>>> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf
>>> [idHash=1729397221, hash=2090272279, MY_BIZ_UID=0, ETL_TIME=1596023962011]
>>> ][ 25323111322, , 0, 1596023962011 ]]",
>>> "commonElementCount": 0,
>>> "cause": {
>>>   "localizedMessage": "java.lang.IllegalStateException: Item not
>>> found: 27",
>>>   "message": "java.lang.IllegalStateException: Item not found: 27",
>>>   "commonElementCount": 21,
>>>   "cause": {
>>> "localizedMessage": "Item not found: 27",
>>> "message": "Item not found: 27",
>>> "commonElementCount": 21,
>>> "name": "java.lang.IllegalStateException",
>>> "extendedStackTrace": [{
>>> "line": 351,
>>> "method": &qu

Re: Hibernate 2nd Level query cache with Ignite

2020-08-28 Thread Evgenii Zhuravlev
Hi,

In the ticket you've shared before, the reproducer has the cache object
creation after the reconnect. This should be done in this case too. The thing
is that the client reconnects to an absolutely different cluster, as the old
one was in-memory and was fully stopped. To avoid this situation you can start
more than one server node. At the same time, I think that on the
ignite-hibernate integration side, we should add handling of the reconnect for
this scenario.
All cache proxy objects should be recreated after the reconnect.

I created a ticket for this fix:
https://issues.apache.org/jira/browse/IGNITE-13391
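
For reference, the usual pattern for recreating a cache proxy after a client
reconnect looks roughly like this (a sketch; the cache name is taken from this
thread, and the key variable and error handling are placeholders):

    IgniteCache<Object, Object> cache = ignite.cache("Entity1");
    try {
        cache.get(key);
    } catch (IgniteClientDisconnectedException e) {
        e.reconnectFuture().get();        // block until the client reconnects
        cache = ignite.cache("Entity1");  // re-acquire the proxy on the new cluster
    }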

Best Regards,
Evgenii


Thu, Aug 27, 2020 at 08:48, Tathagata Roy :

> If it helps here is my dependency tree
>
>
>
>
>
>
>
> *From:* Tathagata Roy
> *Sent:* Thursday, August 27, 2020 8:35 AM
> *To:* user 
> *Subject:* RE: Hibernate 2nd Level query cache with Ignite
>
>
>
> Thanks for responding @Evgenii
>
>
>
> Attaching the logs for both. There is no significant info in the Ignite
> server log. In the application logs, everything after line 6629 is from when
> the application was able to reconnect with the server after the server restart.
>
>
>
> *From:* Evgenii Zhuravlev 
> *Sent:* Wednesday, August 26, 2020 4:58 PM
> *To:* user 
> *Subject:* Re: Hibernate 2nd Level query cache with Ignite
>
>
>
> Hi,
>
>
>
> Can you please share full logs from client and server nodes?
>
>
>
> Thanks,
>
> Evgenii
>
>
>
> Wed, Aug 26, 2020 at 14:26, Tathagata Roy  >:
>
> Hi,
>
>
>
> I am trying to do a POC of the hibernate 2nd-level cache with Apache Ignite.
> With this configuration I was able to make it work:
>
>
>
> spring.jpa.properties.hibernate.cache.use_second_level_cache=true
> spring.jpa.properties.hibernate.cache.use_query_cache=true
> spring.jpa.properties.hibernate.generate_statistics=false
> spring.jpa.properties.hibernate.cache.region.factory_class=org.apache.ignite.cache.hibernate.HibernateRegionFactory
> spring.jpa.properties.org.apache.ignite.hibernate.default_access_type=READ_ONLY
>
>
>
>
>
> <dependency>
>     <groupId>org.gridgain</groupId>
>     <artifactId>ignite-hibernate_5.3</artifactId>
>     <version>8.7.23</version>
>     <exclusions>
>         <exclusion>
>             <groupId>org.hibernate</groupId>
>             <artifactId>hibernate-core</artifactId>
>         </exclusion>
>     </exclusions>
> </dependency>
>
>
>
> @Bean
> @ConditionalOnMissingBean
> public IgniteConfiguration igniteConfiguration(DiscoverySpi discoverySpi,
>         CommunicationSpi communicationSpi) {
>     IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
>     igniteConfiguration.setClientMode(clientMode);
>     igniteConfiguration.setMetricsLogFrequency(0);
>
>     igniteConfiguration.setGridLogger(new Slf4jLogger());
>
>     igniteConfiguration.setDiscoverySpi(discoverySpi);
>     igniteConfiguration.setCommunicationSpi(communicationSpi);
>     igniteConfiguration.setFailureDetectionTimeout(failureDetectionTimeout);
>
>     CacheConfiguration cc = new CacheConfiguration<>();
>     cc.setName("Entity1");
>     cc.setCacheMode(CacheMode.REPLICATED);
>
>     CacheConfiguration cc1 = new CacheConfiguration<>();
>     cc1.setName("default-query-results-region");
>     cc1.setCacheMode(CacheMode.REPLICATED);
>
>     CacheConfiguration cc2 = new CacheConfiguration<>();
>     cc2.setName("default-update-timestamps-region");
>     cc2.setCacheMode(CacheMode.REPLICATED);
>
>     igniteConfiguration.setCacheConfiguration(cc);
>
>     return igniteConfiguration;
> }
>
>
>
>
>
>
>
> I am testing this with an external Ignite node, but if the external Ignite
> node is restarted, I see the error when trying to access Entity1:
>
>
>
> "errorMessage": "class
> org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed
> to perform cache operation (cache is stopped): Entity1; nested exception is
> java.lang.IllegalStateException: class
> org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed
> to perform cache operation (cache is stopped): Entity1",
>
>
>
> It looks like the issue is as reported here:
>
> https://stackoverflow.com/questions/46053089/ignite-cache-reconnection-issue-cache-is-stopped
>
> https://issues.apache.org/jira/browse/IGNITE-5789
>
>
>
>
>
> Is there any other way to make this work without restarting the client
> application?
>
>


Re: Item not found, B+Tree is corrupted and critical error detected after add nodes to baseline topology

2020-08-27 Thread Evgenii Zhuravlev
Hi,

Can you attach the logs in a normal format? It's really hard to read them.
Also, please attach full logs from the nodes, not only the stacktrace.

Thanks,
Evgenii

Tue, Aug 25, 2020 at 19:27, Steven Zheng :

> Hi community,
> Currently I have 25 nodes in my ignite cluster and all of them were added
> into the baseline, and I was trying to add another 5 nodes into it. The
> data in my cluster is about 8TB and the persistence is enabled.
> At first I start all the 5 nodes; then execute in the command line:
> ```
> bin/control.sh --baseline add ${my_node_id}
> ```
> Meanwhile, there is still read/write workloads on the cluster.
>
> After a few minutes, a critical error detected , several nodes crashed and
> emits the logs like this(json format):
> ```
> {
>   "message": "Critical system error detected. Will be handled accordingly
> to configured handler [hnd=StopNodeFailureHandler
> [super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class
> o.a.i.i.processors.cache.persistence.tree.CorruptedTreeException: B+Tree is
> corrupted [pages(groupId, pageId)=[IgniteBiTuple [val1=-1526563570,
> val2=844420635165881]], cacheId=-2021712086, cacheName=MY_SQL_TABLE,
> indexName=AFFINITY_KEY, msg=Runtime failure on search row: Row@d00bc1[
> key: SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf_KEY
> [idHash=2015873411, hash=-2113698004, CARDNUMBER=25323111322, IP=], val:
> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf
> [idHash=1729397221, hash=2090272279, MY_BIZ_UID=0, ETL_TIME=1596023962011]
> ][ 25323111322, , 0, 1596023962011 ",
>   "loggerFqcn": "org.apache.logging.log4j.spi.AbstractLogger",
>   "thrown": {
> "localizedMessage": "B+Tree is corrupted [pages(groupId,
> pageId)=[IgniteBiTuple [val1=-1526563570, val2=844420635165881]],
> cacheId=-2021712086, cacheName=MY_SQL_TABLE, indexName=AFFINITY_KEY,
> msg=Runtime failure on search row: Row@d00bc1[ key:
> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf_KEY
> [idHash=2015873411, hash=-2113698004, CARDNUMBER=25323111322, IP=], val:
> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf
> [idHash=1729397221, hash=2090272279, MY_BIZ_UID=0, ETL_TIME=1596023962011]
> ][ 25323111322, , 0, 1596023962011 ]]",
> "message": "B+Tree is corrupted [pages(groupId, pageId)=[IgniteBiTuple
> [val1=-1526563570, val2=844420635165881]], cacheId=-2021712086,
> cacheName=MY_SQL_TABLE, indexName=AFFINITY_KEY, msg=Runtime failure on
> search row: Row@d00bc1[ key:
> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf_KEY
> [idHash=2015873411, hash=-2113698004, CARDNUMBER=25323111322, IP=], val:
> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf
> [idHash=1729397221, hash=2090272279, MY_BIZ_UID=0, ETL_TIME=1596023962011]
> ][ 25323111322, , 0, 1596023962011 ]]",
> "commonElementCount": 0,
> "cause": {
>   "localizedMessage": "java.lang.IllegalStateException: Item not
> found: 27",
>   "message": "java.lang.IllegalStateException: Item not found: 27",
>   "commonElementCount": 21,
>   "cause": {
> "localizedMessage": "Item not found: 27",
> "message": "Item not found: 27",
> "commonElementCount": 21,
> "name": "java.lang.IllegalStateException",
> "extendedStackTrace": [{
> "line": 351,
> "method": "findIndirectItemIndex",
> "exact": false,
> "file": "AbstractDataPageIO.java",
> "location": "ignite-core-2.8.0-SNAPSHOT.jar",
> "class":
> "org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO",
> "version": "2.8.0-SNAPSHOT"
>   },
>   {
> "line": 459,
> "method": "getDataOffset",
> "exact": false,
> "file": "AbstractDataPageIO.java",
> "location": "ignite-core-2.8.0-SNAPSHOT.jar",
> "class":
> "org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO",
> "version": "2.8.0-SNAPSHOT"
>   },
>   {
> "line": 501,
> "method": "readPayload",
> "exact": false,
> "file": "AbstractDataPageIO.java",
> "location": "ignite-core-2.8.0-SNAPSHOT.jar",
> "class":
> "org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO",
> "version": "2.8.0-SNAPSHOT"
>   },
>   {
> "line": 325,
> "method": "readIncomplete",
> "exact": false,
> "file": "CacheDataRowAdapter.java",
> "location": "ignite-core-2.8.0-SNAPSHOT.jar",
> "class":
> "org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter",
> "version": "2.8.0-SNAPSHOT"
>   },
>   {
> "line": 261,
> 

Re: Ignite test takes several days

2020-08-27 Thread Evgenii Zhuravlev
Hi,

No, it's not normal. Do you really want to run all the tests locally, or do
you just want to build the project? If you just want to build it, I suggest
skipping the tests with the -Dmaven.test.skip=true flag (i.e. mvn clean
package -Dmaven.test.skip=true).

Evgenii

Thu, Aug 27, 2020 at 06:33, Cong Guo :

> Hi,
>
> I try to build the ignite-core on my workstation. I use the original
> ignite-2.8.1 source package. The test, specifically
> GridCacheWriteBehindStoreLoadTest, has been running for several days. Is it
> normal? I run "mvn clean package" directly. Should I configure anything in
> advance? Thank you.
>


Re: Cache Expiry policy not working..

2020-08-27 Thread Evgenii Zhuravlev
Hi,
Please share the full Maven project with a reproducer then. I wasn't able to
reproduce the same behaviour with the code you shared before.

Evgenii

Thu, Aug 27, 2020 at 01:40, kay :

> Hi, I didn't mention the cache name.
>
> The cache name is NC_INITPGECONTRACT_CACHE.
>
> Thank you.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Hibernate 2nd Level query cache with Ignite

2020-08-26 Thread Evgenii Zhuravlev
Hi,

Can you please share full logs from client and server nodes?

Thanks,
Evgenii

Wed, Aug 26, 2020 at 14:26, Tathagata Roy :

> Hi,
>
>
>
> I am trying to do a POC of the hibernate 2nd-level cache with Apache Ignite.
> With this configuration I was able to make it work:
>
>
>
> spring.jpa.properties.hibernate.cache.use_second_level_cache=true
> spring.jpa.properties.hibernate.cache.use_query_cache=true
> spring.jpa.properties.hibernate.generate_statistics=false
> spring.jpa.properties.hibernate.cache.region.factory_class=org.apache.ignite.cache.hibernate.HibernateRegionFactory
> spring.jpa.properties.org.apache.ignite.hibernate.default_access_type=READ_ONLY
>
>
>
>
>
> <dependency>
>     <groupId>org.gridgain</groupId>
>     <artifactId>ignite-hibernate_5.3</artifactId>
>     <version>8.7.23</version>
>     <exclusions>
>         <exclusion>
>             <groupId>org.hibernate</groupId>
>             <artifactId>hibernate-core</artifactId>
>         </exclusion>
>     </exclusions>
> </dependency>
>
>
>
> @Bean
> @ConditionalOnMissingBean
> public IgniteConfiguration igniteConfiguration(DiscoverySpi discoverySpi,
>         CommunicationSpi communicationSpi) {
>     IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
>     igniteConfiguration.setClientMode(clientMode);
>     igniteConfiguration.setMetricsLogFrequency(0);
>
>     igniteConfiguration.setGridLogger(new Slf4jLogger());
>
>     igniteConfiguration.setDiscoverySpi(discoverySpi);
>     igniteConfiguration.setCommunicationSpi(communicationSpi);
>     igniteConfiguration.setFailureDetectionTimeout(failureDetectionTimeout);
>
>     CacheConfiguration cc = new CacheConfiguration<>();
>     cc.setName("Entity1");
>     cc.setCacheMode(CacheMode.REPLICATED);
>
>     CacheConfiguration cc1 = new CacheConfiguration<>();
>     cc1.setName("default-query-results-region");
>     cc1.setCacheMode(CacheMode.REPLICATED);
>
>     CacheConfiguration cc2 = new CacheConfiguration<>();
>     cc2.setName("default-update-timestamps-region");
>     cc2.setCacheMode(CacheMode.REPLICATED);
>
>     igniteConfiguration.setCacheConfiguration(cc);
>
>     return igniteConfiguration;
> }
>
>
>
>
>
>
>
> I am testing this with an external Ignite node, but if the external Ignite
> node is restarted, I see the error when trying to access Entity1:
>
>
>
> "errorMessage": "class
> org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed
> to perform cache operation (cache is stopped): Entity1; nested exception is
> java.lang.IllegalStateException: class
> org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed
> to perform cache operation (cache is stopped): Entity1",
>
>
>
> It looks like the issue is as reported here:
>
>
>
>
> https://stackoverflow.com/questions/46053089/ignite-cache-reconnection-issue-cache-is-stopped
>
> https://issues.apache.org/jira/browse/IGNITE-5789
>
>
>
>
>
> Is there any other way to make this work without restarting the client
> application?
>


Re: Cache Expiry policy not working..

2020-08-26 Thread Evgenii Zhuravlev
Hi,

It looks like a slightly different problem then. As far as I can see, the
only issue here is related to the cache size, not to the get operations. It
is a known issue: https://issues.apache.org/jira/browse/IGNITE-9474

Best Regards,
Evgenii

Tue, Aug 25, 2020 at 21:22, kay :

> Hello, there is a get method in my code,
>
> but that method is not for the expiry check; it is there to check whether
> the data was saved correctly.
>
> I checked the cache size in the GridGain Web Console 4 hours after the data
> was put (the expiry policy is 4 minutes).
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Lag before records are visible after transaction commit

2020-08-25 Thread Evgenii Zhuravlev
Hi,

It looks like you need to use
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/CacheWriteSynchronizationMode.html:
just set it to FULL_SYNC for the cache using
CacheConfiguration.setWriteSynchronizationMode.
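
For example, a minimal sketch (the cache name and types are illustrative):

    CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
    // tx.commit() will then return only after all participating remote nodes
    // have applied the update
    ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);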

Evgenii

Tue, Aug 25, 2020 at 02:27, ssansoy :

> Hi, I am performing the following operation on node 1 of my 3 node cluster
>
> (All caches use CacheRebalanceMode.SYNC,
> CacheWriteSynchronizationMode.FULL_SYNC, CacheAtomicityMode.TRANSACTIONAL):
>
>
>   try (Transaction tx = ignite.transactions().txStart(
> TransactionConcurrency.PESSIMISTIC,
> TransactionIsolation.READ_COMMITTED,
> transactionTimeout, igniteTransactionBatchSize)) {
>
> // write 1 record to cache A
> // write 11 records to cache B
>
> tx.commit()
>
> }
>
>
> How should I expect the updated A and B records to appear on some other
> node, e.g. node 2?
> I was expecting them both to become visible together at exactly the same
> time. I am using CacheMode.REPLICATED. I am not seeing this, however - there
> seems to be a delay between these A and B records being made
> available.
>
> On node 2, I am performing a continuous query on A, and in the local listen
> for A,
> I am fetching those 11 B records related to A (using an SQLFieldsQuery)
> that
> were updated
> in the same transaction.
> After the tx commit on node 1, my local listen for A is called, and I try
> and fetch the Bs.
> However, there seems to be a delay in seeing these B records - they are not
> always returned by my query.
> If I put a sleep in there and try the SQLFieldsQuery again, I do get all
> the
> B's.
>
> 2020-08-21 16:25:05,484 [callback-#192] DEBUG x.TableDataSelector [] -
> Executing SQL query SqlFieldsQuery [sql=SELECT * FROM B WHERE A_FK =
> 'TEST4', args=null, collocated=false, timeout=-1, enforceJoinOrder=false,
> distributedJoins=false, replicatedOnly=false, lazy=false, schema=null,
> updateBatchSize=1]
> 2020-08-21 16:25:05,486 [callback-#192] DEBUG x.TableDataSelector [] -
> Received 3 results
> 2020-08-21 16:25:05,486 [callback-#192] DEBUG x.TableDataSelector [] -
> Trying again in 5 seconds
> 2020-08-21 16:25:10,486 [callback-#192] DEBUG x.TableDataSelector [] -
> Received 11 results
>
> My local listener for A is annotated with @IgniteAsyncCallback, in case that
> matters. Anything obviously wrong here?
> My requirement is that node 2 has access to A and all the associated
> updated
> B's that
> were committed in the same transaction.
>
> Thanks!
> Sham
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cache Expiry policy not working..

2020-08-25 Thread Evgenii Zhuravlev
Well, in the code you've shared you're not waiting between the puts and the
checks after that. I used this expiry policy for the cache:
FactoryBuilder.factoryOf(new CreatedExpiryPolicy(new Duration(SECONDS, 1))),
added Thread.sleep(2000) between the puts and gets in your code, and the
ExpiryPolicy worked for me without any issues.
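
For reference, a minimal sketch of that configuration (the cache name is
illustrative):

    CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<>("test-cache");
    // entries expire 1 second after creation
    ccfg.setExpiryPolicyFactory(FactoryBuilder.factoryOf(
        new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 1))));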

Evgenii

Tue, Aug 25, 2020 at 02:31, kay :

> Hello, here is my code for test.
>
>
> public class CachePutLoopTest {
>
>     /**
>      * @param args
>      */
>     public static void main(String[] args) {
>         // Takes the cache's ip/port as parameters and stores data on a specific node
>         System.out.println("┌ Cache Rebalance Put/Get Start ");
>         ClientConfiguration cfg = new ClientConfiguration().setAddresses(args[0]);
>         IgniteClient igniteClient = Ignition.startClient(cfg);
>
>         ClientCache<Integer, Integer> testCache = igniteClient.cache(args[1]);
>         System.out.println("│ ■ Cache Name : " + testCache.getName());
>         System.out.println("│ ■ Cache Put Start");
>
>         for (int i = 0; i < Integer.parseInt(args[2]); i++) {
>             testCache.put(i, i + 1);
>         } // end for(i)
>
>         System.out.println("│ ■ Cache Put End");
>         System.out.println("│ ■ Cache Get Start");
>         for (int i = 0; i < Integer.parseInt(args[2]); i++) {
>             System.out.println("│ ■ Data Get : " + testCache.get(i));
>         } // end for(i)
>         System.out.println("│ ■ Cache Get End");
>         System.out.println("└ Cache Rebalance Put/Get End ");
>         System.out.println();
>         System.out.println();
>
>     } // end of main
>
> } // end of CachePutLoopTest.java
>
>
> and I use the GridGain Web Console for monitoring,
> so I could see the remaining cache data.
>
>
> Thank you so much.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cache configuration

2020-08-17 Thread Evgenii Zhuravlev
Hi,

You can add a cache configuration whose name ends with * (for example,
cache-*) to the XML file. After this, caches with names that fit this
template (cache-1 for the template cache-*) will use its cache configuration.
Evgenii

Sun, Aug 16, 2020 at 07:03, C Ravikiran :

> As for the below, I would have to change the XML and the Java code as well.
>
> Is there any other possibility: without changing the Java code, and
> changing only the XML file, can we achieve this cache configuration?
>
> As we don't have access to the Java code; we only have access to the
> configuration XML file.
>
> Could you please help me with this?
>
> Regards,
> Ravikiran C
>
>
>
>
> On Sun, 16 Aug, 2020, 12:57 am John Smith,  wrote:
>
>> You can create templates in the XML, and programmatically, when you call
>> getOrCreate(), you can specify the template to use and pass in a random
>> name for the cache ...
>>
>>
>> https://apacheignite.readme.io/docs/cache-template#:~:text=Cache%20templates%20are%20useful%20when,CREATE%20TABLE%20and%20REST%20commands
>> .
>>
>> On Sat., Aug. 15, 2020, 8:53 a.m. itsmeravikiran.c, <
>> itsmeravikira...@gmail.com> wrote:
>>
>>> My cache ids are dynamic.
>>> Is it possible to add the cache configuration in XML?
>>> I have checked: the name property is mandatory, but I cannot add the name,
>>> as it's a dynamic name.
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>


Re: I have an exception while trying to send the "evt=NODE_JOINED" message

2020-08-14 Thread Evgenii Zhuravlev
Hi,

You can use an address resolver for that:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/BasicAddressResolver.html
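
A minimal sketch, using the private/public addresses from this thread as
example values:

    // Map the internal address to the externally reachable one, so that the
    // node advertises an IP other nodes can actually connect to.
    Map<String, String> addrMap = new HashMap<>();
    addrMap.put("10.18.122.155", "51.15.203.48"); // private -> public
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setAddressResolver(new BasicAddressResolver(addrMap)); // may throw UnknownHostException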

Best Regards,
Evgenii

Fri, Aug 14, 2020 at 14:30, Homer Kommrad :

> Hello again,
>
> my problem's root cause was using Scaleway instances as server nodes.
> Scaleway developer instances do not know about their public IPs, so they were
> sending local addresses in the addrs and sockAddrs arrays. When I set up my
> server nodes on DigitalOcean instances, everything was solved.
>
> example from Scaleway, which is not working:
> addrs=ArrayList [0:0:0:0:0:0:0:1%lo, 10.18.122.155, 127.0.0.1,
> 2001:bc8:1824:154d:0:0:0:1%ens2], sockAddrs=HashSet
> [2001:bc8:1824:154d:0:0:0:1%ens2:47500, /10.18.122.155:47500,
> /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500]
>
> example DigitalOcean node's arrays:
> addrs=ArrayList [0:0:0:0:0:0:0:1%lo, 10.15.0.5, 127.0.0.1,
> 188.166.219.XXX], sockAddrs=HashSet
> [/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500, /10.15.0.5:47500,
> /188.166.219.XXX:47500]
>
>
> How can we set the Scaleway server and client nodes' configurations so that
> they send the correct public IPs for TCP communication?
>
> On Wed, Aug 12, 2020 at 12:53 PM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> I recommend setting the additional JVM arg -Djava.net.preferIPv4Stack=true
>> on all nodes. You have IPv6 addresses there, which can
>> cause issues in some cases.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>>
>> вт, 11 авг. 2020 г. в 23:40, Homer Kommrad :
>>
>>> Thank you for the quick response. First of all, none of the servers
>>> have a firewall. And all the Ignite instances are run as root, on machines
>>> that I run solely for Ignite. So I really don't understand how they can
>>> have any connection permission problems at all.
>>>
>>> For now , I'll also provide the small java snippet I run for this test:
>>>
>>> IgniteConfiguration cfg = new IgniteConfiguration();
>>> cfg.setClientMode(true);
>>>
>>> cfg.setPeerClassLoadingEnabled(true);
>>>
>>> cfg.setWorkDirectory("/tmp/");
>>>
>>> TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
>>>
>>> ipFinder.setAddresses(Arrays.asList("51.15.203.48","51.15.88.216"));
>>>
>>> cfg.setDiscoverySpi(new
>>> TcpDiscoverySpi().setLocalAddress("68.183.91.137").setIpFinder(ipFinder));
>>>
>>> Ignite ignite=Ignition.start(cfg);
>>>
>>> On Tue, Aug 11, 2020 at 10:06 PM Evgenii Zhuravlev <
>>> e.zhuravlev...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> It looks like the node can't establish a connection to the remote node
>>>> using the communication channel. I would recommend checking that
>>>> all ports are open and there is no firewall. Also, you can check whether a
>>>> connection can be established using telnet or any other tool. If you're
>>>> sure that a connection can be established between these 2 nodes using the
>>>> communication port, please share full logs (not just the console output)
>>>> from all nodes.
>>>>
>>>> Evgenii
>>>>
>>>> Tue, Aug 11, 2020 at 10:14, Homer Kommrad :
>>>>
>>>>> Hello,
>>>>>
>>>>> I have a cluster of 2 server nodes and I'm trying to connect them via
>>>>> a client node on another server. When I try to connect, my client node
>>>>> connects and I can see the topology changing, 1 client up. But right
>>>>> after that, I keep getting the exception for socketTimeout.
>>>>>
>>>>> such is the log:
>>>>>
>>>>> Aug 11, 2020 5:03:47 PM java.util.logging.LogManager$RootLogger log
>>>>> WARNING: Failed to resolve default logging config file:
>>>>> config/java.util.logging.properties
>>>>> [17:03:48]__  
>>>>> [17:03:48]   /  _/ ___/ |/ /  _/_  __/ __/
>>>>> [17:03:48]  _/ // (7 7// /  / / / _/
>>>>> [17:03:48] /___/\___/_/|_/___/ /_/ /___/
>>>>> [17:03:48]
>>>>> [17:03:48] ver. 2.8.1#20200521-sha1:86422096
>>>>> [17:03:48] 2020 Copyright(C) Apache Software Foundation
>>>>> [17:03:48]
>>>>> [17:03:48] Ig

Re: How can I find out if indexes are used in a query?

2020-08-12 Thread Evgenii Zhuravlev
Well, it uses the index now:
 /*
PUBLIC.IDX_CELL_ENODEB_ID: PERIOD_START_TIME < TIMESTAMP '2020-08-04
00:00:00'\nAND REGION_ID = 'NORTHEAST'\nAND
PERIOD_START_TIME >= TIMESTAMP '2020-08-03 00:00:00'\n */

Is it better now, after using the index hint? How much time was it
taking before?

Evgenii

Wed, Aug 12, 2020 at 16:23, Axel Luft :

> I have the following index created:
> CELL_CREATE_INDEX = '''
> CREATE INDEX IF NOT EXISTS idx_cell_enodeb_id ON erilte01_cell
> (PERIOD_START_TIME,REGION_ID)'''
>
> So I used now the created INDEX and the EXPLAIN tells me I used it:
>
> ['PLAN']
> ["SELECT\nP__Z0.PERIOD_START_TIME AS __C0_0,\nP__Z0.REGION_ID AS
> __C0_1,\nP__Z0.MARKET_ID AS __C0_2,\n
> SUM(P__Z0.EUTRANCELLFDD_PMUETHPTIMEDL) AS __C0_3,\n
> SUM(P__Z0.EUTRANCELLFDD_PMPDCPVOLDLDRB) AS __C0_4,\n
> SUM(P__Z0.EUTRANCELLFDD_PMPDCPVOLDLDRBLASTTTI) AS __C0_5\nFROM
> PUBLIC.ERILTE01_CELL P__Z0 USE INDEX (IDX_CELL_ENODEB_ID)\n/*
> PUBLIC.IDX_CELL_ENODEB_ID: PERIOD_START_TIME < TIMESTAMP '2020-08-04
> 00:00:00'\nAND REGION_ID = 'NORTHEAST'\nAND
> PERIOD_START_TIME >= TIMESTAMP '2020-08-03 00:00:00'\n */\nWHERE
> (UPPER(P__Z0.MEASUREMENTNAME) = 'EUTRANCELLFDD')\nAND ((P__Z0.DATALEVEL
> = 'RAW')\nAND ((P__Z0.DATATYPE = 'RAW')\nAND
> ((P__Z0.PERIOD_START_TIME < TIMESTAMP '2020-08-04 00:00:00')\nAND
> ((P__Z0.REGION_ID = 'NORTHEAST')\nAND (P__Z0.PERIOD_START_TIME >=
> TIMESTAMP '2020-08-03 00:00:00')\nGROUP BY P__Z0.PERIOD_START_TIME,
> P__Z0.REGION_ID, P__Z0.MARKET_ID"]
> ['SELECT\nEUTRANCELLFDD__Z1.PERIOD_START_TIME AS PERIOD_START_TIME,\n
>
> EUTRANCELLFDD__Z1.REGION,\nEUTRANCELLFDD__Z1.MARKET,\n
> ROUND(DECODE((NVL(EUTRANCELLFDD__Z1.PMUETHPTIMEDL, 0) / 1000), 0, 0,
> ((NVL(EUTRANCELLFDD__Z1.PMPDCPVOLDLDRB, 0) -
> NVL(EUTRANCELLFDD__Z1.PMPDCPVOLDLDRBLASTTTI, 0)) /
> (NVL(EUTRANCELLFDD__Z1.PMUETHPTIMEDL, 0) / 1000))), 6) AS DLTHROUGHPU\nFROM
> (\nSELECT\n__C0_0 AS PERIOD_START_TIME,\n__C0_1 AS
> REGION,\n__C0_2 AS MARKET,\nCAST(CAST(SUM(__C0_3) AS
> DOUBLE)
> AS DOUBLE) AS PMUETHPTIMEDL,\nCAST(CAST(SUM(__C0_4) AS DOUBLE) AS
> DOUBLE) AS PMPDCPVOLDLDRB,\nCAST(CAST(SUM(__C0_5) AS DOUBLE) AS
> DOUBLE) AS PMPDCPVOLDLDRBLASTTTI\nFROM PUBLIC.__T0\nGROUP BY
> __C0_0,
> __C0_1, __C0_2\n) EUTRANCELLFDD__Z1\n/* SELECT\n__C0_0 AS
> PERIOD_START_TIME,\n__C0_1 AS REGION,\n__C0_2 AS
> MARKET,\n
> CAST(CAST(SUM(__C0_3) AS DOUBLE) AS DOUBLE) AS PMUETHPTIMEDL,\n
> CAST(CAST(SUM(__C0_4) AS DOUBLE) AS DOUBLE) AS PMPDCPVOLDLDRB,\n
> CAST(CAST(SUM(__C0_5) AS DOUBLE) AS DOUBLE) AS PMPDCPVOLDLDRBLASTTTI\n
> FROM PUBLIC.__T0\n/++ PUBLIC."merge_scan" ++/\nGROUP BY __C0_0,
> __C0_1, __C0_2\n */']
>
> Here is the query on 2.5Million rows, on a table that has ~1500 columns:
> ALL_QUERY = '''select
> EUTRANCELLFDD.period_start_time period_start_time,
> EUTRANCELLFDD.REGION,
> EUTRANCELLFDD.MARKET,
> round(DECODE((NVL(EUTRANCELLFDD.PMUETHPTIMEDL,0)/1000), 0,
>
> 0,(NVL(EUTRANCELLFDD.PMPDCPVOLDLDRB,0)-NVL(EUTRANCELLFDD.PMPDCPVOLDLDRBLASTTTI,0))/(NVL(EUTRANCELLFDD.PMUETHPTIMEDL,0)/1000)),
> 6) DLThroughpu
>   from
>
>   (
>   select
> period_start_time,
> p.REGION_ID REGION,
> p.MARKET_ID MARKET,
> SUM(EUTRANCELLFDD_PMUETHPTIMEDL) PMUETHPTIMEDL,
> SUM(EUTRANCELLFDD_PMPDCPVOLDLDRB) PMPDCPVOLDLDRB,
> SUM(EUTRANCELLFDD_PMPDCPVOLDLDRBLASTTTI) PMPDCPVOLDLDRBLASTTTI
>
>   from
> erilte01_cell p
> USE INDEX(idx_cell_enodeb_id)
>   where
>  p.REGION_ID in ( 'NORTHEAST' )  and
> period_start_time >= '2020-08-03 00:00:00' and
> period_start_time < '2020-08-04 00:00:00'   and
> p.datatype ='RAW' and
> p.datalevel ='RAW' and
> UPPER(p.measurementname) = UPPER('EUTRANCELLFDD')
>   group by
> period_start_time,
> p.REGION_ID,
> p.MARKET_ID) EUTRANCELLFDD'''
>
> And it returns in about 7 seconds on an 18-host cluster with about 3 TB of
> memory.
> It seems way too slow to me.
>
> I am loading via the SQL interface. Is there a performance difference when
> reading if I use the cache?
> And nothing is persistent.
> AL
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How can I find out if indexes are used in a query?

2020-08-11 Thread Evgenii Zhuravlev
Hi,

EXPLAIN should show all the indexes. Are you sure that these indexes can be
used for your query? You can also use the USE INDEX hint to force the use of
an index.
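
For example, a minimal sketch of checking the plan from Java (the names here
are assumptions: the cache name follows the SQL_<SCHEMA>_<TABLE> convention
and the config path is a placeholder; adjust both and the query to your
setup):

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ExplainCheck {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("client-config.xml"); // hypothetical config file
        IgniteCache<?, ?> cache = ignite.cache("SQL_PUBLIC_ERILTE01_CELL");

        // The used index shows up in the /* ... */ comment of the printed plan.
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "EXPLAIN SELECT COUNT(*) FROM erilte01_cell USE INDEX (idx_cell_enodeb_id) " +
            "WHERE period_start_time >= '2020-08-03 00:00:00' " +
            "AND period_start_time < '2020-08-04 00:00:00'");

        for (List<?> row : cache.query(qry).getAll())
            System.out.println(row.get(0));
    }
}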

Best Regards,
Evgenii

вт, 11 авг. 2020 г. в 16:22, Axel Luft :

> EXPLAIN PLAN doesn't show any indexes used although I created them. We are
> using python3 (SQL interface). The aggregation query on 1,000,000 rows is
> very slow at about 4 seconds.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: I have an exception while trying to send the "evt=NODE_JOINED" message

2020-08-11 Thread Evgenii Zhuravlev
Hi,

It looks like the node can't establish a connection over the communication
channel to the remote node. I would recommend checking that all ports are
open and there is no firewall. Also, you can check that a connection can be
established using telnet or any other tool. If you're sure that a connection
can be established between these 2 nodes using the communication port,
please share the full logs (not the console output) from all nodes.

Evgenii

вт, 11 авг. 2020 г. в 10:14, Homer Kommrad :

> Hello ,
>
> I have a cluster of 2 server nodes and I'm trying to connect them via a
> client node on another server. When I try to connect , my client node
> connects and I can see the topology changing , 1 Client up . But right
> after that , I keep getting the exception for socketTimeout .
>
> such is the log:
>
> Aug 11, 2020 5:03:47 PM java.util.logging.LogManager$RootLogger log
> WARNING: Failed to resolve default logging config file:
> config/java.util.logging.properties
> [17:03:48]__  
> [17:03:48]   /  _/ ___/ |/ /  _/_  __/ __/
> [17:03:48]  _/ // (7 7// /  / / / _/
> [17:03:48] /___/\___/_/|_/___/ /_/ /___/
> [17:03:48]
> [17:03:48] ver. 2.8.1#20200521-sha1:86422096
> [17:03:48] 2020 Copyright(C) Apache Software Foundation
> [17:03:48]
> [17:03:48] Ignite documentation: http://ignite.apache.org
> [17:03:48]
> [17:03:48] Quiet mode.
> [17:03:48]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
> [17:03:48]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
> or "-v" to ignite.{sh|bat}
> [17:03:48]
> [17:03:48] OS: Linux 4.15.0-112-generic amd64
> [17:03:48] VM information: OpenJDK Runtime Environment
> 1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 Private Build OpenJDK 64-Bit Server
> VM 25.265-b01
> [17:03:48] Please set system property '-Djava.net.preferIPv4Stack=true' to
> avoid possible problems in mixed environments.
> [17:03:48] Initial heap size is 16MB (should be no less than 512MB, use
> -Xms512m -Xmx512m).
> [17:03:48] Configured plugins:
> [17:03:48]   ^-- None
> [17:03:48]
> [17:03:48] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
> [tryStop=false, timeout=0, super=AbstractFailureHandler
> [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITIC
> AL_OPERATION_TIMEOUT
> [17:03:49] Message queue limit is set to 0 which may lead to potential
> OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due
> to message queues growth on sender and receiver sides.
> [17:03:49] Security status [authentication=off, tls/ssl=off]
> [17:03:50] REST protocols do not start on client node. To start the
> protocols on client node set '-DIGNITE_REST_START_ON_CLIENT=true' system
> property.
> [17:03:52] Nodes started on local machine require more than 80% of
> physical RAM what can lead to significant slowdown due to swapping (please
> decrease JVM heap size, data region size or checkpoint buffer
> size) [required=239MB, available=985MB]
> Aug 11, 2020 5:04:02 PM org.apache.ignite.logger.java.JavaLogger error
> SEVERE: Failed to send message to remote node [node=TcpDiscoveryNode
> [id=7899fe2d-bd77-4077-8cce-a0550f1cab62, consistentId=22,
> addrs=ArrayList [0:0:0:0:0:0:0:1%lo, 10.69.44.105, 127.0.0.1, 2
> 001:bc8:628:1634:0:0:0:1%ens2], sockAddrs=HashSet [/10.69.44.105:47500,
> /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500,
> 2001:bc8:628:1634:0:0:0:1%ens2:47500], discPort=47500, order=116,
> intOrder=59, lastExc
> hangeTime=1597165431145, loc=false, ver=2.8.1#20200521-sha1:86422096,
> isClient=false], msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8,
> ordered=false, timeout=0, skipOnTimeout=false, msg=GridDhtP
> artitionsSingleMessage [parts=null, partCntrs=null, partsSizes=null,
> partHistCntrs=null, err=null, client=true, exchangeStartTime=4152061099919,
> finishMsg=null, super=GridDhtPartitionsAbstractMessage [ex
> chId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
> [topVer=201, minorTopVer=0], discoEvt=DiscoveryEvent
> [evtNode=TcpDiscoveryNode [id=5aa5cfc5-8722-4e2f-bddf-35af5df73808,
> consistentId=5aa5c
> fc5-8722-4e2f-bddf-35af5df73808, addrs=ArrayList [68.183.91.137],
> sockAddrs=HashSet [/68.183.91.137:0], discPort=0, order=201, intOrder=0,
> lastExchangeTime=1597165430185, loc=true, ver=2.8.1#20200521-sha
> 1:86422096, isClient=true], topVer=201, nodeId8=5aa5cfc5, msg=null,
> type=NODE_JOINED, tstamp=1597165432178], nodeId=5aa5cfc5, evt=NODE_JOINED],
> lastVer=GridCacheVersion [topVer=0, order=1597165429407, no
> deOrder=0], super=GridCacheMessage [msgId=1, depInfo=null,
> lastAffChangedTopVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0],
> err=null, skipPrepare=false]
> class org.apache.ignite.IgniteCheckedException: Failed to connect to node
> (is node still alive?). Make sure that each ComputeTask and cache
> Transaction has a timeout set in order to prevent parties from
> waiting forever in case of network issues
> 

Re: build failure for module hibernate 5.3

2020-08-11 Thread Evgenii Zhuravlev
Hi,

You can use it from this repository:
https://gridgainsystems.com/nexus/content/repositories/external/org/gridgain/ignite-hibernate_5.3/
I believe there are no changes between these versions.

Best Regards,
Evgenii

вт, 11 авг. 2020 г. в 10:53, Tathagata Roy :

> I am trying to do a PoC on using Apache Ignite as a Hibernate 2nd level
> cache, following the example given at
> https://apacheignite-mix.readme.io/v2.1/docs/hibernate-l2-cache#maven-configuration
> .
>
>
>
> Since we are using Hibernate 5.3.7 in Spring Boot, I need to use
> ignite-hibernate_5.3. Unfortunately this artifact id is not present in any
> public maven repository, hence as mentioned
> https://apacheignite.readme.io/docs/maven-setup# I need to build it
> locally and push it to my maven repository.
>
>
>
> When I am trying to build using the command
>
>
>
> *mvn clean install -DskipTests -Plgpl -pl modules/hibernate -am*
>
>
>
> I am getting the error
>
>
>
> The following artifacts could not be resolved:
> org.jacorb:jacorb:jar:2.2.3-jonas-patch-20071018,
> org.jacorb:jacorb-idl:jar:2.2.3-jonas-patch-20071018
>
>
>
>
>
> I am not seeing any reference to the org.jacorb:jacorb ,
> org.jacorb:jacorb-idl in any pom nor is it available in any public
> repository. This is happening for both the 2.8.1 tag and also the current
> SNAPSHOT. What should be done to fix it, or is there some public repository
> where I can get ignite-hibernate_5.3?
>
>


Re: Is there a way for client to lazy join the cluster?

2020-08-06 Thread Evgenii Zhuravlev
It should be handled on your application side. For example, you can
initialize the Ignite instance in a separate thread and add a check to
other API invocations that the instance has been initialized.
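
A minimal sketch of that idea (the config file name is just a placeholder):

import java.util.concurrent.atomic.AtomicReference;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class LazyIgnite {
    private static final AtomicReference<Ignite> ref = new AtomicReference<>();

    /** Called once on application startup; does not block. */
    public static void init() {
        new Thread(() -> ref.set(Ignition.start("client-config.xml")), "ignite-init")
            .start();
    }

    /** Called before every cache operation. */
    public static Ignite ignite() {
        Ignite ignite = ref.get();
        if (ignite == null)
            throw new IllegalStateException("Cluster is not available yet");
        return ignite;
    }
}

This way the application can start and serve requests while the connection
attempt keeps running in the background.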

Evgenii

чт, 6 авг. 2020 г. в 09:03, John Smith :

> I'm testing failover scenarios and currently I have the full cluster shut
> off. I would still like my application to continue working even if the
> cache is not there...
>
> When my application starts...
>
> It calls Ignition.start(config)
>
> The application will not start until Ignition.start(config) finishes, i.e.
> until I start the cluster back up.
>


Re: Best first steps for performance tuning

2020-08-06 Thread Evgenii Zhuravlev
Hi Devin,

Yes, you're right: the first step could be increasing the amount of off-heap
memory used for data (the data region size). By default, Ignite uses 20% of
the available RAM.
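
For example, a sketch of setting an explicit region size (the 16 GB figure is
just a placeholder to adjust for your hosts):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RegionSizeExample {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Cap the default region explicitly instead of relying on 20% of RAM.
        storageCfg.getDefaultDataRegionConfiguration()
            .setPersistenceEnabled(true)            // native persistence is in use
            .setMaxSize(16L * 1024 * 1024 * 1024);  // e.g. 16 GB off-heap

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        Ignite ignite = Ignition.start(cfg);
    }
}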

After that, I would recommend finding where the bottleneck is for your
system - you can check CPU, disk and network to find it.

Best Regards,
Evgenii

чт, 6 авг. 2020 г. в 12:49, Devin Bost :

> While looking at the docs, there are a lot of available parameters for
> performance tuning.
> We have several high-velocity Ignite operations, and Ignite is having
> trouble keeping up. We are using native persistence, and I suspect we will
> get more value from increasing the amount of memory used since I think the
> default memory utilization is low if I remember correctly.
>
> We're wondering what the first few things should be for us to look at for
> performance tuning and wondering if anyone has some guidance so we know
> what to start with.
>
> Thanks,
>
> Devin G. Bost
>


Re: ignite metrics - cache vs cachelocal

2020-08-03 Thread Evgenii Zhuravlev
Hi,

Cache metrics contain information from all nodes in the cluster, while
cache local metrics relate to one (local) node only.

Evgenii

пн, 3 авг. 2020 г. в 15:43, scottmf :

> hi,
> In JMX with Ignite I see cache and cachelocal metrics.  What's the
> difference?  Many of the metrics seem to overlap.
>
> thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Question in Continuous Query Remote Filter

2020-08-03 Thread Evgenii Zhuravlev
I believe Java doc should be enough for that:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/ContinuousQuery.html

It says:

To stop receiving updates, call the QueryCursor.close() method:

 cur.close();


Evgenii


сб, 1 авг. 2020 г. в 00:11, Devakumar J :

> Hi,
>
> Thanks for the reply.
>
> Do we have any documentation reference for stopping/unsubscribing registered
> CQ listeners?
>
> Thanks & Regards,
> Devakumar J
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Question in Continuous Query Remote Filter

2020-07-31 Thread Evgenii Zhuravlev
Hi,

You can just stop the CQ and then register it again if you don't want to
get notifications for some period of time.
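
A rough sketch of that (cache types and the listener body are illustrative):

import javax.cache.event.CacheEntryUpdatedListener;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class PausableListener {
    private QueryCursor<?> cur;

    /** Start (or resume) receiving notifications. */
    void resume(IgniteCache<Integer, String> cache) {
        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
        qry.setLocalListener((CacheEntryUpdatedListener<Integer, String>)evts ->
            evts.forEach(e -> System.out.println("Updated: " + e.getKey())));
        cur = cache.query(qry);
    }

    /** Pause: closing the cursor deregisters the CQ on all nodes. */
    void pause() {
        if (cur != null)
            cur.close();
    }
}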

Evgenii

пт, 31 июл. 2020 г. в 01:51, Devakumar J :

> Hi All,
>
> We have a setup of 2 servers and 1 client Node. Client node registers CQ on
> set of caches.
>
> I just want to temporarily pause and resume CQ notifications based on
> certain action at client node.
>
> I was trying to achieve this through a remote filter and Ignite messaging. I
> mean, the client publishes a message to all the server nodes to flip a
> boolean flag, and the remote filter will return true or false based on the
> flag.
>
> The behavior was inconsistent. Sometimes it works, and sometimes I don't
> even see the CQ registered properly; when I query SYS.CONTINUOUS_QUERIES, I
> get some invalid state exception.
>
> Is there any other way of pausing and resuming CQ notifications based on a
> client-side event?
>
> Thanks & Regards,
> Devakumar J
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Log file not found with Ignite 2.8.1

2020-07-31 Thread Evgenii Zhuravlev
Are you sure that this configuration file was applied? On node start, Ignite
prints information about the logger to System.out, so I would recommend
checking the console.

Best Regards,
Evgenii

чт, 30 июл. 2020 г. в 05:51, manueltg89 :

> Hi all!
>
> I have a Ignite app with version 2.8.1, I've followed this steps
> https://apacheignite.readme.io/docs/logging for Log4j, but my app is not
> created the log file. I attach my log4j.xml file.
>
> Any suggestion?
> Thanks in advance. log4j.xml
> 
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Are CPU Metrics logged by Ignite correct?

2020-07-28 Thread Evgenii Zhuravlev
Hi,

It's a known issue related to Java 11:
https://issues.apache.org/jira/browse/IGNITE-13306

Evgenii

пн, 27 июл. 2020 г. в 03:17, Mat :

> Tested with Windows and Linux K8 containers all on Java 11.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Are CPU Metrics logged by Ignite correct?

2020-07-24 Thread Evgenii Zhuravlev
Hi,

What OS and java version do you use?

Evgenii

пт, 24 июл. 2020 г. в 12:32, Mat :

> Ignite is logging "strange" CPU metrics for me:
>
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=c2a76580, name=embedded, uptime=00:01:00.153]
> ^-- H/N/C [hosts=1, nodes=1, CPUs=12]
> ^-- *CPU [cur=-100%, avg=-96.15%, GC=0%]*
> ^-- PageMemory [pages=200]
> ^-- Heap [used=337MB, free=95.84%, comm=508MB]
> ^-- Off-heap [used=0MB, free=99.89%, comm=592MB]
> ^--   DefaultDataRegion region [used=0MB, free=100%, comm=512MB]
> ^--   sysMemPlc region [used=0MB, free=99.21%, comm=40MB]
> ^--   TxLog region [used=0MB, free=100%, comm=40MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=2, qSize=0]
>
>
> I have tested 2.8.1, 2.8.0 and 2.7.0 as well as multiple environments, and
> the bottom line is that CPU always shows negative values that cannot be
> correlated with the real CPU usage. Am I missing something?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Odd behavior/bug with DataRegion

2020-07-24 Thread Evgenii Zhuravlev
Hi,

There are a couple of known issues related to small data regions. I have
seen this behavior before, and you shouldn't see this issue for regions with
bigger sizes. Also, if you have really big objects, you might need to
change emptyPagesPoolSize to make sure that there are no objects
bigger than this value * 4K (the default page size).
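
For reference, a sketch of where that knob lives (the sizes are the ones from
this thread, not recommendations):

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class EmptyPagesPool {
    public static IgniteConfiguration config() {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("smallRegion")
            .setMaxSize(20L * 1024 * 1024)   // the 20 MB region from the report
            // Keep enough empty pages around so the largest object
            // (up to emptyPagesPoolSize * 4K page size) can always be stored.
            .setEmptyPagesPoolSize(510);

        return new IgniteConfiguration().setDataStorageConfiguration(
            new DataStorageConfiguration().setDataRegionConfigurations(region));
    }
}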

Best Regards,
Evgenii

чт, 23 июл. 2020 г. в 23:40, Victor :

> Update,
>
> Interestingly, I took the value to the wire, setting 'emptyPagesPoolSize' to
> 510 (< 512), and that seems to have done the trick. At least in my past 5
> test runs the preload has gone through fine.
>
> Right now I have set it to pretty much the max value, since there is no good
> way to identify what the runtime max value of an instance would be while
> creating a cache.
>
> Any other ideas on how to go about setting a safe value for this property?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: custom ignite-log4j.xml when using stock docker ignite image

2020-07-23 Thread Evgenii Zhuravlev
Hi Maxim,

Do you plan to use persistence or to attach a disk for the work directory?
If so, you can just put the log configuration file there and reference it
from the Ignite XML configuration file.

Evgenii

ср, 22 июл. 2020 г. в 08:13, Maxim Volkomorov <2201...@gmail.com>:

> Hi!
>
> Is there a simple way to use a custom ignite-log4j.xml config, nested in the
> URI ignite-config.xml, when Ignite is running from the stock Docker image?
>
> Same question for custom app.properties beans, nested in config.xml.
>
> Our docker run cmd:
> docker run -it --net=localhost -e
> "CONFIG_URI=http://host/ignite-config.xml" apacheignite/ignite
>


Re: Checking dataregion limits

2020-07-23 Thread Evgenii Zhuravlev
Hi,

>What is the best way to check for data region limits?

*How about DataRegionMetricsMXBeanImpl.getMaxSize?*

1. Why does the max size not show exactly what I set? E.g. if I set the size
as 20 MB (i.e. 20 * 1024 * 1024 = 20971520), the value for OffHeapSize is
19922944. So why not exact?

*I think the difference is overhead for metadata information (19922944 bytes
is exactly 19 MB, i.e. 1 MB less than configured).*

2. The initial size under the same MBean shows simply 19, whereas I set it
to 19 MB (in bytes). Any idea why that is?

*Because this metric shows the size in megabytes: dataRegCfg.getInitialSize()
/ (1024 * 1024) = 19.*

3. I expected "OffheapUsedSize" to be incremented every time I add data to
this cache till it hits the max size. However, it always stays at 0. The
only value that increments is "TotalAllocatedSize". Is that the right
attribute to check for data size increments, or should other attributes be
checked?

*Are you sure that you have metrics enabled for this DataRegion?*

4. With no data yet, "TotalAllocatedSize" still shows some amount of
allocation. Like in the above case of a max size of 20 MB, I could see
TotalAllocatedSize already at 8 MB, even before data was added to the cache.

*Ignite preallocates memory in chunks.*

5. Finally, if "TotalAllocatedSize" is indeed the attribute to track the size
increment, I should expect eviction to kick in when its value reaches 90% of
the max size. Is this understanding correct?

*Well, not really. If you have a configured eviction policy, you can also
configure the threshold there, but TotalAllocatedSize won't be reduced after
eviction; it can only grow.*

*Best Regards,*

*Evgenii*


вт, 21 июл. 2020 г. в 21:06, Victor :

> Hi,
>
> What is the best way to check for data region limits. Currently i am using
> below mbeans attributes to monitor this,
>
> 1. CacheClusterMetricsMXBeanImpl / CacheLocalMetricsMXBeanImpl
> Size - provides the total entries.
>
> 2. DataRegionMetricsMXBeanImpl
> OffHeapSize - Shows close to the max size i have set for the cache. Not
> exact though.
> TotalAllocatedSize - Seems to increase as data is added to the cache.
>
> Few queries,
> 1. Why does the max size not show exactly what I set? E.g. if I set the size
> as 20 MB (i.e. 20 * 1024 * 1024 = 20971520), the value for OffHeapSize is
> 19922944. So why not exact?
> 2. The initial size under the same MBean shows simply 19, whereas I set it
> to 19 MB (in bytes). Any idea why that is?
> 3. I expected "OffheapUsedSize" to be incremented every time I add data to
> this cache till it hits the max size. However, it always stays at 0. The
> only value that increments is "TotalAllocatedSize". Is that the right
> attribute to check for data size increments, or should other attributes be
> checked?
> 4. With no data yet, "TotalAllocatedSize" still shows some amount of
> allocation. Like in the above case of a max size of 20 MB, I could see
> TotalAllocatedSize already at 8 MB, even before data was added to the
> cache.
> 5. Finally, if "TotalAllocatedSize" is indeed the attribute to track the
> size increment, I should expect eviction to kick in when its value reaches
> 90% of the max size. Is this understanding correct?
>
> I'll run some more tests.
>
> Victor
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: java.lang.StackOverflowError when put a value on Ignite cache 2.7.6

2020-07-23 Thread Evgenii Zhuravlev
Hi,

Did you find the object which caused this error? Can you share the
reproducer with us?

Thank you,
Evgenii


вт, 21 июл. 2020 г. в 23:15, abraham :

> I am using Ignite 2.7.6 in a cluster of 2 servers with persistence
> enabled.
>
>
>
> It is working fine, but suddenly a java.lang.StackOverflowError happens. Do
> you know if it is a bug in the 2.7.6 version, or maybe I am doing something
> wrong?
>
> Error trace is attached.  ignite-2.txt
> 
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite DefaultDataRegion config

2020-07-22 Thread Evgenii Zhuravlev
Hi, if you don't plan to use it at all, then yes, 40 MB should be fine.

Evgenii

вт, 21 июл. 2020 г. в 22:17, kay :

> Hello!
>
> What size should I give for the DefaultDataRegion?
> Every cache has a specific region (I'm not going to use the defaultDataRegion).
>
> Is 40 MB enough, if I will not use the defaultDataRegion?
>
> Thank you so much.
> I will wait for a reply!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache Ignite CacheRebalanceMode Is Not Respected By Nodes

2020-07-21 Thread Evgenii Zhuravlev
Hi,

CacheRebalanceMode is responsible for a different thing - it kicks in when
data needs to be rebalanced due to a topology (or baseline topology) change.
It's not responsible for data distribution between nodes for put
operations. So, when you insert data, part of this data belongs to
partitions that are not owned by the local node.

To achieve what you want, you can just create 2 different caches with a
NodeFilter:
https://www.javadoc.io/doc/org.apache.ignite/ignite-core/latest/org/apache/ignite/util/AttributeNodeFilter.html
Using that, you can avoid data movement between nodes, and your thin client
will see these caches too.
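
A rough sketch of that approach (the "group" attribute name and the cache
names are arbitrary): each server node starts with its own attribute value,
and each cache is pinned to the matching node.

import java.util.Collections;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.util.AttributeNodeFilter;

public class PinnedCaches {
    public static IgniteConfiguration nodeConfig(String group) {
        // On node 1 pass "A", on node 2 pass "B".
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setUserAttributes(Collections.singletonMap("group", group));

        // Each cache keeps its partitions only on nodes carrying the
        // matching attribute, so its data never moves to the other node.
        CacheConfiguration<Object, Object> cacheA = new CacheConfiguration<>("DEALIO_A")
            .setNodeFilter(new AttributeNodeFilter("group", "A"));
        CacheConfiguration<Object, Object> cacheB = new CacheConfiguration<>("DEALIO_B")
            .setNodeFilter(new AttributeNodeFilter("group", "B"));

        return cfg.setCacheConfiguration(cacheA, cacheB);
    }
}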

Evgenii




ср, 15 июл. 2020 г. в 07:58, cparaskeva :

> The setup: Hello folks I have a simple Apache Ignite setup with two Ignite
> instances configured as server nodes over C# and one Ignite instance as a
> client node over java.
>
> What is the goal: Populate data on instance 1 and instance 2 but avoid
> movement of data between them. In other words, data received on each node
> must stay on that node. Then the Java client is used to run queries against
> the two nodes, either combined (distributed join) or per node (using
> affinity).
>
> The issue: With one server node everything works as expected; however, with
> more than one server node, the cluster's data is balanced between the x
> member nodes even if I have explicitly set the CacheRebalanceMode to None,
> which should disable the rebalancing between the nodes. The insert time is
> increased by 4x-10x, as a function of each node's populated data.
>
> P.S. I have tried changing the cache mode from Partitioned to Local, where
> each node keeps the data isolated in its internal H2 DB; however, in that
> case the Java client is unable to detect the nodes or read any data from the
> cache of each node.
>
> Java Client Node
>
> IgniteConfiguration cfg = new IgniteConfiguration();
> // Enable client mode.
> cfg.setClientMode(true);
>
> // Setting up an IP Finder to ensure the client can locate the
> servers.
> TcpDiscoveryMulticastIpFinder ipFinder = new
> TcpDiscoveryMulticastIpFinder();
>
> ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500
> ..47509"));
> cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
>
> // Configure Ignite to connect with .NET nodes
> cfg.setBinaryConfiguration(new BinaryConfiguration()
> .setNameMapper(new BinaryBasicNameMapper(true))
> .setCompactFooter(true)
>
> // Start Ignite in client mode.
> Ignite ignite = Ignition.start(cfg);
>
>
> IgniteCache cache0 = ignite.cache(CACHE_NAME);
> IgniteCache cache =
> cache0.withKeepBinary();
>
> // execute some queries to nodes
> C# Server Node
>
>
>IIgnite _ignite =
> Ignition.Start(IgniteUtils.DefaultIgniteConfig()));
>
> // Create new cache and configure queries for Trade binary
> types.
> // Note that there are no such classes defined.
> var cache0 = _ignite.GetOrCreateCache<AffinityKey, Trade>("DEALIO");
>
> // Switch to binary mode to work with data in serialized
> form.
> _cache = cache0.WithKeepBinary<IBinaryObject, IBinaryObject>();
>
>//populate some data ...
>
> public static IgniteConfiguration DefaultIgniteConfig()
> {
> return new IgniteConfiguration
> {
>
>
> PeerAssemblyLoadingMode =
> PeerAssemblyLoadingMode.CurrentAppDomain,
> BinaryConfiguration = new BinaryConfiguration
> {
> NameMapper = new BinaryBasicNameMapper { IsSimpleName =
> true },
> CompactFooter = true,
> TypeConfigurations = new[] {
> new BinaryTypeConfiguration(typeof(Trade)) {
> Serializer = new IgniteTradeSerializer()
> }
> }
> },
> DiscoverySpi = new TcpDiscoverySpi
> {
> IpFinder = new TcpDiscoveryMulticastIpFinder
> {
> Endpoints = new[] { "127.0.0.1:47500..47509" }
> },
> SocketTimeout = TimeSpan.FromSeconds(0.10)
> },
> Logger = new IgniteNLogLogger(),
> CacheConfiguration = new[]{
> new CacheConfiguration{
> PartitionLossPolicy=PartitionLossPolicy.Ignore,
> RebalanceMode=CacheRebalanceMode.None,
> Name = CACHE_NAME,
> CacheMode = CacheMode.Partitioned,
> Backups = 0,
> QueryEntities = new[] {
> new QueryEntity(typeof(AffinityKey),
> typeof(Trade))
>   

Re: How to do cache.get() on SQL table by primary key with multiple columns?

2020-07-15 Thread Evgenii Zhuravlev
John,

Then you should just get a new builder every time you need it:
myIgniteInstance.binary().builder("MyKey"). I don't see why you would need to
reuse the builder across multiple threads here.

Evgenii

ср, 15 июл. 2020 г. в 14:34, John Smith :

> I'm using it in Vertx.io. if you understand the concept a bit. I have 2
> vertices.
>
> I create 2 instances of BinaryObjectBuilder
>
> Each builder creates a new object (binary key) per "event" that comes in.
>
> So if I get 2 events then each builder will build one...
>
> If I get 3 events, the 3rd event will wait until one of the event loops
> can process the next event...
>
>
>
> On Wed., Jul. 15, 2020, 3:43 p.m. Evgenii Zhuravlev, <
> e.zhuravlev...@gmail.com> wrote:
>
>> 1. This builder can be used for making one object, do you want to
>> construct one object from multiple threads?
>> 2. No, you still can work with BinaryObjects instead of actual classes.
>>
>> Evgenii
>>
>> ср, 15 июл. 2020 г. в 08:50, John Smith :
>>
>>> Hi Evgenii, it works good. I have two questions...
>>>
>>> 1- Is the BinaryObjectBuilder obtained from
>>> myIgniteInstance.binary().builder("MyKey"); thread safe? Can I pass the
>>> same builder to multiple instances of my cache "repository" wrapper I wrote?
>>> 2- If we want to use the actual MyKey class then I suppose that needs to
>>> be in the classpath on all nodes?
>>>
>>> On Wed, 15 Jul 2020 at 10:43, John Smith  wrote:
>>>
>>>> Ok I will try it...
>>>>
>>>> On Tue, 14 Jul 2020 at 22:34, Evgenii Zhuravlev <
>>>> e.zhuravlev...@gmail.com> wrote:
>>>>
>>>>> John,
>>>>>
>>>>> It's not necessary to have class at all, you can specify any type, you
>>>>> just need to use this type when creating binary object for this key.
>>>>>
>>>>> вт, 14 июл. 2020 г. в 17:50, John Smith :
>>>>>
>>>>>> I just used two columns as primary key...
>>>>>>
>>>>>> Of I use key_type and specify a type does that class need to exist in
>>>>>> the class path of the server nodes?
>>>>>>
>>>>>> Like if I have
>>>>>>
>>>>>> class MyKeyClass {
>>>>>>Integer col1;
>>>>>>Integer col2;
>>>>>> }
>>>>>>
>>>>>> Does this class need to be loaded in all nodes or ignite can figure
>>>>>> it out and marshal it?
>>>>>>
>>>>>> On Tue., Jul. 14, 2020, 6:50 p.m. Evgenii Zhuravlev, <
>>>>>> e.zhuravlev...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi John,
>>>>>>>
>>>>>>> To do this, you need to create a key object with the same type as
>>>>>>> you have for the table. If you don't specify KEY_TYPE in the create 
>>>>>>> table
>>>>>>> script, it will be generated automatically. I would recommend to 
>>>>>>> specify it
>>>>>>> for the command(just type name, if you don't have a class) and, when you
>>>>>>> need to get data using key-value API, just create a binary object of 
>>>>>>> this
>>>>>>> type with these fields:
>>>>>>> https://www.gridgain.com/docs/latest/developers-guide/key-value-api/binary-objects#creating-and-modifying-binary-objects
>>>>>>>
>>>>>>> Evgenii
>>>>>>>
>>>>>>> вт, 14 июл. 2020 г. в 07:18, John Smith :
>>>>>>>
>>>>>>>> Hi, I have an SQL table
>>>>>>>>
>>>>>>>> create table if not exists my_table (
>>>>>>>> column1 int,
>>>>>>>> column2 int,
>>>>>>>> column3 varchar(16),
>>>>>>>> PRIMARY KEY (column1, column2)
>>>>>>>> ) with "template=replicatedTpl";
>>>>>>>>
>>>>>>>> and I'm creating my near cache as follows...
>>>>>>>>
>>>>>>>> IgniteCache myCache;
>>>>>>>>
>>>>>>>> NearCacheConfiguration nearConfig = new
>>>>>>>> NearCacheConfiguration<>();
>>>>>>>> nearConfig.setNearEvictionPolicyFactory(new
>>>>>>>> LruEvictionPolicyFactory<>(1024));
>>>>>>>>
>>>>>>>> myCache =
>>>>>>>> this.ignite.getOrCreateNearCache(SQL_PUBLIC_MY_TABLE, nearConfig)
>>>>>>>> .withExpiryPolicy(new AccessedExpiryPolicy(new
>>>>>>>> Duration(TimeUnit.HOURS, 1)));
>>>>>>>>
>>>>>>>> So if I use myCache.get()...
>>>>>>>>
>>>>>>>> 1- How do I specify the primary key if it's 2 columns?
>>>>>>>> 2- I assume the data will be put in near cache?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>


Re: How to do cache.get() on SQL table by primary key with multiple columns?

2020-07-15 Thread Evgenii Zhuravlev
1. This builder can be used for building one object; do you want to construct
one object from multiple threads?
2. No, you can still work with BinaryObjects instead of the actual classes.

Evgenii

ср, 15 июл. 2020 г. в 08:50, John Smith :

> Hi Evgenii, it works good. I have two questions...
>
> 1- Is the BinaryObjectBuilder obtained from
> myIgniteInstance.binary().builder("MyKey"); thread safe? Can I pass the
> same builder to multiple instances of my cache "repository" wrapper I wrote?
> 2- If we want to use the actual MyKey class then I suppose that needs to
> be in the classpath on all nodes?
>
> On Wed, 15 Jul 2020 at 10:43, John Smith  wrote:
>
>> Ok I will try it...
>>
>> On Tue, 14 Jul 2020 at 22:34, Evgenii Zhuravlev 
>> wrote:
>>
>>> John,
>>>
>>> It's not necessary to have class at all, you can specify any type, you
>>> just need to use this type when creating binary object for this key.
>>>
>>> вт, 14 июл. 2020 г. в 17:50, John Smith :
>>>
>>>> I just used two columns as primary key...
>>>>
>>>> Of I use key_type and specify a type does that class need to exist in
>>>> the class path of the server nodes?
>>>>
>>>> Like if I have
>>>>
>>>> class MyKeyClass {
>>>>Integer col1;
>>>>Integer col2;
>>>> }
>>>>
>>>> Does this class need to be loaded in all nodes or ignite can figure it
>>>> out and marshal it?
>>>>
>>>> On Tue., Jul. 14, 2020, 6:50 p.m. Evgenii Zhuravlev, <
>>>> e.zhuravlev...@gmail.com> wrote:
>>>>
>>>>> Hi John,
>>>>>
>>>>> To do this, you need to create a key object with the same type as you
>>>>> have for the table. If you don't specify KEY_TYPE in the create table
>>>>> script, it will be generated automatically. I would recommend to specify 
>>>>> it
>>>>> for the command(just type name, if you don't have a class) and, when you
>>>>> need to get data using key-value API, just create a binary object of this
>>>>> type with these fields:
>>>>> https://www.gridgain.com/docs/latest/developers-guide/key-value-api/binary-objects#creating-and-modifying-binary-objects
>>>>>
>>>>> Evgenii
>>>>>
>>>>> вт, 14 июл. 2020 г. в 07:18, John Smith :
>>>>>
>>>>>> Hi, I have an SQL table
>>>>>>
>>>>>> create table if not exists my_table (
>>>>>> column1 int,
>>>>>> column2 int,
>>>>>> column3 varchar(16),
>>>>>> PRIMARY KEY (column1, column2)
>>>>>> ) with "template=replicatedTpl";
>>>>>>
>>>>>> and I'm creating my near cache as follows...
>>>>>>
>>>>>> IgniteCache myCache;
>>>>>>
>>>>>> NearCacheConfiguration nearConfig = new
>>>>>> NearCacheConfiguration<>();
>>>>>> nearConfig.setNearEvictionPolicyFactory(new
>>>>>> LruEvictionPolicyFactory<>(1024));
>>>>>>
>>>>>> myCache =
>>>>>> this.ignite.getOrCreateNearCache(SQL_PUBLIC_MY_TABLE, nearConfig)
>>>>>> .withExpiryPolicy(new AccessedExpiryPolicy(new
>>>>>> Duration(TimeUnit.HOURS, 1)));
>>>>>>
>>>>>> So if I use myCache.get()...
>>>>>>
>>>>>> 1- How do I specify the primary key if it's 2 columns?
>>>>>> 2- I assume the data will be put in near cache?
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>


Re: Ignite Native Persistence With write behind additional store

2020-07-15 Thread Evgenii Zhuravlev
There is a typo in my previous message; I meant: "Storages will be
synchronized in case one of the nodes fails."

There is no reference for this type of configuration, since there is no
guarantee of consistency. If you still want to use it, you can just
combine the configuration for persistence with a configuration for a 3rd
party cache store. Both can be found in the documentation and in the Ignite
examples.
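
For illustration only, a minimal sketch of such a combined setup (MyStore is
a placeholder for a real 3rd party store implementation; the consistency
caveat above still applies):

import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistencePlusStore {
    /** Stand-in for a real 3rd party store (JDBC, Cassandra, ...). */
    public static class MyStore extends CacheStoreAdapter<Integer, String> {
        @Override public String load(Integer key) { return null; }
        @Override public void write(Cache.Entry<? extends Integer, ? extends String> e) { /* write to DB */ }
        @Override public void delete(Object key) { /* remove from DB */ }
    }

    public static IgniteConfiguration config() {
        // Native persistence for the default data region.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        // Write-behind 3rd party store on top of it.
        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
        cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyStore.class));
        cacheCfg.setWriteThrough(true);
        cacheCfg.setWriteBehindEnabled(true);

        return new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg)
            .setCacheConfiguration(cacheCfg);
    }
}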

Evgenii

ср, 15 июл. 2020 г. в 08:22, Devakumar J :

> Hi,
>
> Thanks for the reply.
>
> Is there a reference for configuring a custom store along with native
> persistence enabled?
>
> Thanks & Regards,
> Devakumar J
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to do cache.get() on SQL table by primary key with multiple columns?

2020-07-14 Thread Evgenii Zhuravlev
John,

It's not necessary to have a class at all; you can specify any type, you just
need to use this type when creating the binary object for this key.

вт, 14 июл. 2020 г. в 17:50, John Smith :

> I just used two columns as primary key...
>
> Of I use key_type and specify a type does that class need to exist in the
> class path of the server nodes?
>
> Like if I have
>
> class MyKeyClass {
>Integer col1;
>Integer col2;
> }
>
> Does this class need to be loaded in all nodes or ignite can figure it out
> and marshal it?
>
> On Tue., Jul. 14, 2020, 6:50 p.m. Evgenii Zhuravlev, <
> e.zhuravlev...@gmail.com> wrote:
>
>> Hi John,
>>
>> To do this, you need to create a key object with the same type as you
>> have for the table. If you don't specify KEY_TYPE in the create table
>> script, it will be generated automatically. I would recommend to specify it
>> for the command(just type name, if you don't have a class) and, when you
>> need to get data using key-value API, just create a binary object of this
>> type with these fields:
>> https://www.gridgain.com/docs/latest/developers-guide/key-value-api/binary-objects#creating-and-modifying-binary-objects
>>
>> Evgenii
>>
>> вт, 14 июл. 2020 г. в 07:18, John Smith :
>>
>>> Hi, I have an SQL table
>>>
>>> create table if not exists my_table (
>>> column1 int,
>>> column2 int,
>>> column3 varchar(16),
>>> PRIMARY KEY (column1, column2)
>>> ) with "template=replicatedTpl";
>>>
>>> and I'm creating my near cache as follows...
>>>
>>> IgniteCache myCache;
>>>
>>> NearCacheConfiguration nearConfig = new
>>> NearCacheConfiguration<>();
>>> nearConfig.setNearEvictionPolicyFactory(new
>>> LruEvictionPolicyFactory<>(1024));
>>>
>>> myCache =
>>> this.ignite.getOrCreateNearCache(SQL_PUBLIC_MY_TABLE, nearConfig)
>>> .withExpiryPolicy(new AccessedExpiryPolicy(new Duration(TimeUnit.HOURS,
>>> 1)));
>>>
>>> So if I use myCache.get()...
>>>
>>> 1- How do I specify the primary key if it's 2 columns?
>>> 2- I assume the data will be put in near cache?
>>>
>>>
>>>
>>>
>>>


Re: Ignite Native Persistence With write behind additional store

2020-07-14 Thread Evgenii Zhuravlev
Hi,

There is no guarantee of data consistency between Ignite persistence and a
3rd party DB in this case. Storages will be synchronized in case one of the
nodes fails. You can try to run some explicit data consistency checks, but I
believe it won't be easy under load.

Evgenii

вт, 14 июл. 2020 г. в 08:23, Devakumar J :

> Hi,
>
> I am exploring ways to do data backup along with native persistence. Is it
> possible to achieve this using cache store implementation. So that data
> will
> be persisted in disk as well as replica copy.
>
> Thanks,
> Devakumar J
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to do cache.get() on SQL table by primary key with multiple columns?

2020-07-14 Thread Evgenii Zhuravlev
Hi John,

To do this, you need to create a key object with the same type as you have
for the table. If you don't specify KEY_TYPE in the CREATE TABLE script, it
will be generated automatically. I would recommend specifying it in the
command (just a type name, if you don't have a class) and, when you need to
get data using the key-value API, just create a binary object of this type
with these fields:
https://www.gridgain.com/docs/latest/developers-guide/key-value-api/binary-objects#creating-and-modifying-binary-objects
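
A sketch of the key-value read under these assumptions: the table was created
WITH "KEY_TYPE=MyKey", the cache name follows the SQL_<SCHEMA>_<TABLE>
convention, and the binary field names match the (upper-case) SQL column
names:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

public class BinaryKeyGet {
    static BinaryObject read(Ignite ignite, int col1, int col2) {
        // Build a key of the type declared via KEY_TYPE in CREATE TABLE.
        BinaryObject key = ignite.binary().builder("MyKey")
            .setField("COLUMN1", col1)
            .setField("COLUMN2", col2)
            .build();

        // The cache backing the SQL table, accessed in binary form.
        IgniteCache<BinaryObject, BinaryObject> cache =
            ignite.cache("SQL_PUBLIC_MY_TABLE").withKeepBinary();

        return cache.get(key);
    }
}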

Evgenii

вт, 14 июл. 2020 г. в 07:18, John Smith :

> Hi, I have an SQL table
>
> create table if not exists my_table (
> column1 int,
> column2 int,
> column3 varchar(16),
> PRIMARY KEY (column1, column2)
> ) with "template=replicatedTpl";
>
> and I'm creating my near cache as follows...
>
> IgniteCache myCache;
>
> NearCacheConfiguration nearConfig = new
> NearCacheConfiguration<>();
> nearConfig.setNearEvictionPolicyFactory(new
> LruEvictionPolicyFactory<>(1024));
>
> myCache =
> this.ignite.getOrCreateNearCache(SQL_PUBLIC_MY_TABLE, nearConfig)
> .withExpiryPolicy(new AccessedExpiryPolicy(new Duration(TimeUnit.HOURS,
> 1)));
>
> So if I use myCache.get()...
>
> 1- How do I specify the primary key if it's 2 columns?
> 2- I assume the data will be put in near cache?
>
>
>
>
>


Re: What does all partition owners have left the grid on the client side mean?

2020-07-07 Thread Evgenii Zhuravlev
John,

Unfortunately, I didn't find messages about lost partitions for this cache;
there is a chance that it happened earlier. What partition loss policy do
you have?

The logs say that there is a problem with partition distribution:
 "Local node affinity assignment distribution is not ideal [cache=cache1,
expectedPrimary=512.00, actualPrimary=493, expectedBackups=512.00,
actualBackups=171, warningThreshold=50.00%]"
How do you restart nodes? Do you wait until rebalancing is completed?

Evgenii



пт, 3 июл. 2020 г. в 09:03, John Smith :

> Hi Evgenii, did you have a chance to look at the latest logs?
>
> On Thu, 25 Jun 2020 at 11:32, John Smith  wrote:
>
>> Ok
>>
>> stdout.copy.zip
>>
>> https://www.dropbox.com/sh/ejcddp2gcml8qz2/AAD_VfUecE0hSNZX7wGbfDh3a?dl=0
>>
>> On Thu, 25 Jun 2020 at 11:01, John Smith  wrote:
>>
>>> Because in between it's all the business logs. Let me make sure I didn't
>>> filter anything relevant. So maybe in those 13 hours nothing happened?
>>>
>>>
>>> On Thu, 25 Jun 2020 at 10:53, Evgenii Zhuravlev <
>>> e.zhuravlev...@gmail.com> wrote:
>>>
>>>> This doesn't seem to be a full log. There is a gap for more than 13
>>>> hours in the log :
>>>> {"appTimestamp":"2020-06-23T23:06:41.658+00:00","threadName":"ignite-update-notifier-timer","level":"WARN","loggerName":"org.apache.ignite.internal.processors.cluster.GridUpdateNotifier","message":"New
>>>> version is available at ignite.apache.org: 2.8.1"}
>>>> {"appTimestamp":"2020-06-24T12:58:42.294+00:00","threadName":"disco-event-worker-#35%xx%","level":"INFO","loggerName":"org.apache.ignite.internal.managers.discovery.GridDiscoveryManager","message":"Node
>>>> left topology: TcpDiscoveryNode [id=02949ae0-4eea-4dc9-8aed-b3f50e8d7238,
>>>> addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, xxx.xxx.xxx.73],
>>>> sockAddrs=[0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
>>>> xx-task-0003/xxx.xxx.xxx.73:0], discPort=0, order=1258, intOrder=632,
>>>> lastExchangeTime=1592890182021, loc=false,
>>>> ver=2.7.0#20181130-sha1:256ae401, isClient=true]"}
>>>>
>>>> I don't see any exceptions in the log. When did the issue happen? Can
>>>> you share the full log?
>>>>
>>>> Evgenii
>>>>
>>>> чт, 25 июн. 2020 г. в 07:36, John Smith :
>>>>
>>>>> Hi Evgenii, same folder shared stdout.copy
>>>>>
>>>>> Just in case:
>>>>> https://www.dropbox.com/sh/ejcddp2gcml8qz2/AAD_VfUecE0hSNZX7wGbfDh3a?dl=0
>>>>>
>>>>> On Wed, 24 Jun 2020 at 21:23, Evgenii Zhuravlev <
>>>>> e.zhuravlev...@gmail.com> wrote:
>>>>>
>>>>>> No, it's not. It's not clear when it happened and what was with the
>>>>>> cluster and the client node itself at this moment.
>>>>>>
>>>>>> Evgenii
>>>>>>
>>>>>> ср, 24 июн. 2020 г. в 18:16, John Smith :
>>>>>>
>>>>>>> Ok I'll try... The stack trace isn't enough?
>>>>>>>
>>>>>>> On Wed., Jun. 24, 2020, 4:30 p.m. Evgenii Zhuravlev, <
>>>>>>> e.zhuravlev...@gmail.com> wrote:
>>>>>>>
>>>>>>>> John, right, didn't notice them before. Can you share the full log
>>>>>>>> for the client node with an issue?
>>>>>>>>
>>>>>>>> Evgenii
>>>>>>>>
>>>>>>>> ср, 24 июн. 2020 г. в 12:29, John Smith :
>>>>>>>>
>>>>>>>>> I thought I did! The link doesn't have them?
>>>>>>>>>
>>>>>>>>> On Wed., Jun. 24, 2020, 2:43 p.m. Evgenii Zhuravlev, <
>>>>>>>>> e.zhuravlev...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Can you share full log files from server nodes?
>>>>>>>>>>
>>>>>>>>>> ср, 24 июн. 2020 г. в 10:47, John Smith :
>>>>>>>>>>
>>>>>>>>>>> The logs for server are here:
>>>>>>>>>>> https://www.dropbox.com/sh/ejcddp2gcml8qz2/AAD_VfUecE0hSNZX7wGbfDh3a?dl=0
>>>>>>>>>

Re: What does all partition owners have left the grid on the client side mean?

2020-06-25 Thread Evgenii Zhuravlev
This doesn't seem to be a full log. There is a gap of more than 13 hours
in the log:
{"appTimestamp":"2020-06-23T23:06:41.658+00:00","threadName":"ignite-update-notifier-timer","level":"WARN","loggerName":"org.apache.ignite.internal.processors.cluster.GridUpdateNotifier","message":"New
version is available at ignite.apache.org: 2.8.1"}
{"appTimestamp":"2020-06-24T12:58:42.294+00:00","threadName":"disco-event-worker-#35%xx%","level":"INFO","loggerName":"org.apache.ignite.internal.managers.discovery.GridDiscoveryManager","message":"Node
left topology: TcpDiscoveryNode [id=02949ae0-4eea-4dc9-8aed-b3f50e8d7238,
addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, xxx.xxx.xxx.73],
sockAddrs=[0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
xx-task-0003/xxx.xxx.xxx.73:0], discPort=0, order=1258, intOrder=632,
lastExchangeTime=1592890182021, loc=false,
ver=2.7.0#20181130-sha1:256ae401, isClient=true]"}

I don't see any exceptions in the log. When did the issue happen? Can you
share the full log?

Evgenii

чт, 25 июн. 2020 г. в 07:36, John Smith :

> Hi Evgenii, same folder shared stdout.copy
>
> Just in case:
> https://www.dropbox.com/sh/ejcddp2gcml8qz2/AAD_VfUecE0hSNZX7wGbfDh3a?dl=0
>
> On Wed, 24 Jun 2020 at 21:23, Evgenii Zhuravlev 
> wrote:
>
>> No, it's not. It's not clear when it happened and what was with the
>> cluster and the client node itself at this moment.
>>
>> Evgenii
>>
>> ср, 24 июн. 2020 г. в 18:16, John Smith :
>>
>>> Ok I'll try... The stack trace isn't enough?
>>>
>>> On Wed., Jun. 24, 2020, 4:30 p.m. Evgenii Zhuravlev, <
>>> e.zhuravlev...@gmail.com> wrote:
>>>
>>>> John, right, didn't notice them before. Can you share the full log for
>>>> the client node with an issue?
>>>>
>>>> Evgenii
>>>>
>>>> ср, 24 июн. 2020 г. в 12:29, John Smith :
>>>>
>>>>> I thought I did! The link doesn't have them?
>>>>>
>>>>> On Wed., Jun. 24, 2020, 2:43 p.m. Evgenii Zhuravlev, <
>>>>> e.zhuravlev...@gmail.com> wrote:
>>>>>
>>>>>> Can you share full log files from server nodes?
>>>>>>
>>>>>> ср, 24 июн. 2020 г. в 10:47, John Smith :
>>>>>>
>>>>>>> The logs for server are here:
>>>>>>> https://www.dropbox.com/sh/ejcddp2gcml8qz2/AAD_VfUecE0hSNZX7wGbfDh3a?dl=0
>>>>>>>
>>>>>>> The error from the client:
>>>>>>>
>>>>>>> javax.cache.CacheException: class
>>>>>>> org.apache.ignite.internal.processors.cache.CacheInvalidStateException:
>>>>>>> Failed to execute cache operation (all partition owners have left the 
>>>>>>> grid,
>>>>>>> partition data has been lost) [cacheName=cache1, part=580,
>>>>>>> key=UserKeyCacheObjectImpl [part=580, val=14385045508, 
>>>>>>> hasValBytes=false]]
>>>>>>> at
>>>>>>> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
>>>>>>> at
>>>>>>> org.apache.ignite.internal.processors.cache.IgniteCacheFutureImpl.convertException(IgniteCacheFutureImpl.java:62)
>>>>>>> at
>>>>>>> org.apache.ignite.internal.util.future.IgniteFutureImpl.get(IgniteFutureImpl.java:137)
>>>>>>> at
>>>>>>> com.xx.common.vertx.ext.data.impl.IgniteCacheRepository.lambda$executeAsync$d94e711a$1(IgniteCacheRepository.java:55)
>>>>>>> at
>>>>>>> org.apache.ignite.internal.util.future.AsyncFutureListener$1.run(AsyncFutureListener.java:53)
>>>>>>> at
>>>>>>> com.xx.common.vertx.ext.data.impl.VertxIgniteExecutorAdapter.lambda$execute$0(VertxIgniteExecutorAdapter.java:18)
>>>>>>> at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:369)
>>>>>>> at
>>>>>>> io.vertx.core.impl.WorkerContext.lambda$wrapTask$0(WorkerContext.java:35)
>>>>>>> at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76)
>>>>>>> at
>>>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>>>>> at
>>>>>

Re: What does all partition owners have left the grid on the client side mean?

2020-06-24 Thread Evgenii Zhuravlev
No, it's not. It's not clear when it happened and what the state of the
cluster and the client node itself was at that moment.

Evgenii

ср, 24 июн. 2020 г. в 18:16, John Smith :

> Ok I'll try... The stack trace isn't enough?
>
> On Wed., Jun. 24, 2020, 4:30 p.m. Evgenii Zhuravlev, <
> e.zhuravlev...@gmail.com> wrote:
>
>> John, right, didn't notice them before. Can you share the full log for
>> the client node with an issue?
>>
>> Evgenii
>>
>> ср, 24 июн. 2020 г. в 12:29, John Smith :
>>
>>> I thought I did! The link doesn't have them?
>>>
>>> On Wed., Jun. 24, 2020, 2:43 p.m. Evgenii Zhuravlev, <
>>> e.zhuravlev...@gmail.com> wrote:
>>>
>>>> Can you share full log files from server nodes?
>>>>
>>>> ср, 24 июн. 2020 г. в 10:47, John Smith :
>>>>
>>>>> The logs for server are here:
>>>>> https://www.dropbox.com/sh/ejcddp2gcml8qz2/AAD_VfUecE0hSNZX7wGbfDh3a?dl=0
>>>>>
>>>>> The error from the client:
>>>>>
>>>>> javax.cache.CacheException: class
>>>>> org.apache.ignite.internal.processors.cache.CacheInvalidStateException:
>>>>> Failed to execute cache operation (all partition owners have left the 
>>>>> grid,
>>>>> partition data has been lost) [cacheName=cache1, part=580,
>>>>> key=UserKeyCacheObjectImpl [part=580, val=14385045508, hasValBytes=false]]
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.IgniteCacheFutureImpl.convertException(IgniteCacheFutureImpl.java:62)
>>>>> at
>>>>> org.apache.ignite.internal.util.future.IgniteFutureImpl.get(IgniteFutureImpl.java:137)
>>>>> at
>>>>> com.xx.common.vertx.ext.data.impl.IgniteCacheRepository.lambda$executeAsync$d94e711a$1(IgniteCacheRepository.java:55)
>>>>> at
>>>>> org.apache.ignite.internal.util.future.AsyncFutureListener$1.run(AsyncFutureListener.java:53)
>>>>> at
>>>>> com.xx.common.vertx.ext.data.impl.VertxIgniteExecutorAdapter.lambda$execute$0(VertxIgniteExecutorAdapter.java:18)
>>>>> at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:369)
>>>>> at
>>>>> io.vertx.core.impl.WorkerContext.lambda$wrapTask$0(WorkerContext.java:35)
>>>>> at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76)
>>>>> at
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>>> at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>> at
>>>>> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>>>> at java.lang.Thread.run(Thread.java:748)
>>>>> Caused by:
>>>>> org.apache.ignite.internal.processors.cache.CacheInvalidStateException:
>>>>> Failed to execute cache operation (all partition owners have left the 
>>>>> grid,
>>>>> partition data has been lost) [cacheName=cache1, part=580,
>>>>> key=UserKeyCacheObjectImpl [part=580, val=14385045508, hasValBytes=false]]
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.validatePartitionOperation(GridDhtTopologyFutureAdapter.java:169)
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.validateCache(GridDhtTopologyFutureAdapter.java:116)
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.init(GridPartitionedSingleGetFuture.java:208)
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.getAsync0(GridDhtAtomicCache.java:1428)
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$1600(GridDhtAtomicCache.java:135)
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$16.apply(GridDhtAtomicCache.java:474)
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$16.apply(GridDhtAtomicCache.java:472)
>>>>>

Re: What does all partition owners have left the grid on the client side mean?

2020-06-24 Thread Evgenii Zhuravlev
John, right, didn't notice them before. Can you share the full log for the
client node with an issue?

Evgenii

ср, 24 июн. 2020 г. в 12:29, John Smith :

> I thought I did! The link doesn't have them?
>
> On Wed., Jun. 24, 2020, 2:43 p.m. Evgenii Zhuravlev, <
> e.zhuravlev...@gmail.com> wrote:
>
>> Can you share full log files from server nodes?
>>
>> ср, 24 июн. 2020 г. в 10:47, John Smith :
>>
>>> The logs for server are here:
>>> https://www.dropbox.com/sh/ejcddp2gcml8qz2/AAD_VfUecE0hSNZX7wGbfDh3a?dl=0
>>>
>>> The error from the client:
>>>
>>> javax.cache.CacheException: class
>>> org.apache.ignite.internal.processors.cache.CacheInvalidStateException:
>>> Failed to execute cache operation (all partition owners have left the grid,
>>> partition data has been lost) [cacheName=cache1, part=580,
>>> key=UserKeyCacheObjectImpl [part=580, val=14385045508, hasValBytes=false]]
>>> at
>>> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
>>> at
>>> org.apache.ignite.internal.processors.cache.IgniteCacheFutureImpl.convertException(IgniteCacheFutureImpl.java:62)
>>> at
>>> org.apache.ignite.internal.util.future.IgniteFutureImpl.get(IgniteFutureImpl.java:137)
>>> at
>>> com.xx.common.vertx.ext.data.impl.IgniteCacheRepository.lambda$executeAsync$d94e711a$1(IgniteCacheRepository.java:55)
>>> at
>>> org.apache.ignite.internal.util.future.AsyncFutureListener$1.run(AsyncFutureListener.java:53)
>>> at
>>> com.xx.common.vertx.ext.data.impl.VertxIgniteExecutorAdapter.lambda$execute$0(VertxIgniteExecutorAdapter.java:18)
>>> at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:369)
>>> at
>>> io.vertx.core.impl.WorkerContext.lambda$wrapTask$0(WorkerContext.java:35)
>>> at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>> at
>>> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>> at java.lang.Thread.run(Thread.java:748)
>>> Caused by:
>>> org.apache.ignite.internal.processors.cache.CacheInvalidStateException:
>>> Failed to execute cache operation (all partition owners have left the grid,
>>> partition data has been lost) [cacheName=cache1, part=580,
>>> key=UserKeyCacheObjectImpl [part=580, val=14385045508, hasValBytes=false]]
>>> at
>>> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.validatePartitionOperation(GridDhtTopologyFutureAdapter.java:169)
>>> at
>>> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.validateCache(GridDhtTopologyFutureAdapter.java:116)
>>> at
>>> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.init(GridPartitionedSingleGetFuture.java:208)
>>> at
>>> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.getAsync0(GridDhtAtomicCache.java:1428)
>>> at
>>> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$1600(GridDhtAtomicCache.java:135)
>>> at
>>> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$16.apply(GridDhtAtomicCache.java:474)
>>> at
>>> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$16.apply(GridDhtAtomicCache.java:472)
>>> at
>>> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:761)
>>> at
>>> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.getAsync(GridDhtAtomicCache.java:472)
>>> at
>>> org.apache.ignite.internal.processors.cache.GridCacheAdapter.getAsync(GridCacheAdapter.java:4749)
>>> at
>>> org.apache.ignite.internal.processors.cache.GridCacheAdapter.getAsync(GridCacheAdapter.java:1477)
>>> at
>>> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.getAsync(IgniteCacheProxyImpl.java:937)
>>> at
>>> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.getAsync(GatewayProtectedCacheProxy.java:652)
>>> at
>>> com.xx.common.vertx.ext.data.impl.IgniteCacheRepository.lambda$get$1(IgniteCacheRepository.java:28)
>>> at
>>> com.xx.common.

Re: What does all partition owners have left the grid on the client side mean?

2020-06-24 Thread Evgenii Zhuravlev
eout.java:53)
> at io.reactivex.Completable.subscribe(Completable.java:2309)
> at io.reactivex.internal.operators.completable.CompletablePeek.subscribeActual(CompletablePeek.java:51)
> at io.reactivex.Completable.subscribe(Completable.java:2309)
> at io.reactivex.internal.operators.completable.CompletableResumeNext.subscribeActual(CompletableResumeNext.java:41)
> at io.reactivex.Completable.subscribe(Completable.java:2309)
> at io.reactivex.internal.operators.completable.CompletableToFlowable.subscribeActual(CompletableToFlowable.java:32)
> at io.reactivex.Flowable.subscribe(Flowable.java:14918)
> at io.reactivex.Flowable.subscribe(Flowable.java:14865)
> at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.onNext(FlowableFlatMap.java:163)
> at io.reactivex.internal.operators.flowable.FlowableFromIterable$IteratorSubscription.slowPath(FlowableFromIterable.java:236)
> at io.reactivex.internal.operators.flowable.FlowableFromIterable$BaseRangeSubscription.request(FlowableFromIterable.java:124)
> at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drainLoop(FlowableFlatMap.java:546)
> at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drain(FlowableFlatMap.java:366)
> at io.reactivex.internal.operators.flowable.FlowableFlatMap$InnerSubscriber.onComplete(FlowableFlatMap.java:678)
> at io.reactivex.internal.observers.SubscriberCompletableObserver.onComplete(SubscriberCompletableObserver.java:33)
> at io.reactivex.internal.operators.completable.CompletableResumeNext$ResumeNextObserver.onComplete(CompletableResumeNext.java:68)
> at io.reactivex.internal.operators.completable.CompletablePeek$CompletableObserverImplementation.onComplete(CompletablePeek.java:115)
> at io.reactivex.internal.operators.completable.CompletableTimeout$TimeOutObserver.onComplete(CompletableTimeout.java:87)
> at io.reactivex.internal.operators.completable.CompletableCreate$Emitter.onComplete(CompletableCreate.java:64)
> at com.xx.common.vertx.ext.kafka.impl.KafkaProcessorImpl.lambda$null$3(KafkaProcessorImpl.java:86)
> at io.vertx.core.impl.FutureImpl.dispatch(FutureImpl.java:105)
> at io.vertx.core.impl.FutureImpl.tryComplete(FutureImpl.java:150)
> at io.vertx.core.impl.FutureImpl.tryComplete(FutureImpl.java:157)
> at io.vertx.core.impl.FutureImpl.complete(FutureImpl.java:118)
> at com.xx.impl.MtEventProcessor.lambda$process$0(MtEventProcessor.java:83)
> at io.vertx.ext.web.client.impl.HttpContext.handleDispatchResponse(HttpContext.java:310)
> at io.vertx.ext.web.client.impl.HttpContext.execute(HttpContext.java:297)
> at io.vertx.ext.web.client.impl.HttpContext.next(HttpContext.java:272)
> at io.vertx.ext.web.client.impl.predicate.PredicateInterceptor.handle(PredicateInterceptor.java:69)
> at io.vertx.ext.web.client.impl.predicate.PredicateInterceptor.handle(PredicateInterceptor.java:32)
> at io.vertx.ext.web.client.impl.HttpContext.next(HttpContext.java:269)
> at io.vertx.ext.web.client.impl.HttpContext.fire(HttpContext.java:279)
> at io.vertx.ext.web.client.impl.HttpContext.dispatchResponse(HttpContext.java:240)
> at io.vertx.ext.web.client.impl.HttpContext.lambda$null$2(HttpContext.java:370)
> ... 7 common frames omitted
>
> On Wed, 24 Jun 2020 at 13:28, John Smith  wrote:
>
>> Not sure about the wrong configuration... All the apps work; this seems to
>> happen every few weeks. We don't have any particularly heavy load.
>>
>> I just bounced the client application and the errors went away.
>>
>> On Wed, 24 Jun 2020 at 12:57, Evgenii Zhuravlev 
>> wrote:
>>
>>> Hi,
>>>
>>> It means that there are no nodes in the cluster that hold certain
>>> partitions. So, probably you have a wrong configuration, or some of the
>>> nodes left the cluster and you don't have backups in the cluster for these
>>> partitions. I believe more can be found in the logs.
>>>
>>> Evgenii
>>>
>>> ср, 24 июн. 2020 г. в 09:52, John Smith :
>>>
>>>> Also I'm assuming that the thin client wouldn't be susceptible to this
>>>> error?
>>>>
>>>> On Wed, 24 Jun 2020 at 12:38, John Smith 
>>>> wrote:
>>>>
>>>>> The cluster is showing active when running control.sh
>>>>>
>>>>> But the client is showing "all partition owners have left the grid"
>>>>>
>>>>> The client node is marked as client=true so it's not a server node.
>>>>>
>>>>> Is this split brain as well?
>>>>>
>>>>


Re: What does all partition owners have left the grid on the client side mean?

2020-06-24 Thread Evgenii Zhuravlev
Hi,

It means that there are no nodes in the cluster that hold certain
partitions. So, probably you have a wrong configuration, or some of the
nodes left the cluster and you don't have backups in the cluster for these
partitions. I believe more can be found in the logs.

Evgenii

ср, 24 июн. 2020 г. в 09:52, John Smith :

> Also I'm assuming that the thin client wouldn't be susceptible to this
> error?
>
> On Wed, 24 Jun 2020 at 12:38, John Smith  wrote:
>
>> The cluster is showing active when running control.sh
>>
>> But the client is showing "all partition owners have left the grid"
>>
>> The client node is marked as client=true so it's not a server node.
>>
>> Is this split brain as well?
>>
>


Re: How to fix Ignite node segmentation without restart

2020-06-17 Thread Evgenii Zhuravlev
>I do not want to restart it and I cannot do a failover because a network
issue just happened and the stand-by may be invalid. The fix is to always
restart the slave.
You can enable CacheWriteSynchronizationMode.FULL_SYNC and there will be no
differences between primary and backup partitions. In this case, you can
just restart your master node - the backup node will have valid data.
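
For reference, a minimal sketch of such a configuration via the Java API
(the cache name is illustrative):

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("replicatedCache");
ccfg.setCacheMode(CacheMode.REPLICATED);
// FULL_SYNC: a write returns only after both primary and backup copies are updated
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);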

There is no way to join nodes after segmentation without restarting one of
the nodes.

Evgenii



вт, 16 июн. 2020 г. в 06:26, Actarus :

> Hello,
>
> I'm running Apache Ignite (2.4.0) embedded into a java application that
> runs
> in a master/slave architecture. This means that there are only ever two
> nodes in a grid, in FULL_SYNC, REPLICATED mode. Only the master application
> writes to the grid, the slave only reads from it when it gets promoted to
> master on a failover.
>
> In such an architecture, network segmentation issues mean different things.
> Typically I see that for handling segmentation, the node that experienced
> the issue would need to be restarted. However in this scenario if the
> master
> is segmented, I do not want to restart it and I cannot do a failover
> because
> a network issue just happened and the stand-by may be invalid. The fix is
> to
> always restart the slave.
>
> However I notice that regardless of handling the EVT_NODE_SEGMENTED event,
> adding a SegmentationProcess, running with SegmentationPolicy.NOOP and
> having a segmentation plugin and always returning true/OK, I find that the
> node that runs in master always remains in segmented state, and it is
> impossible for it to re-join a cluster after restarting the slave node.
>
> Is there some mechanism I can use to tell the node within my master process
> to completely ignore segmentation? Or tell it that it is fine so that
> discovery can still happen after I restart the slave node? Currently I used
> port  with TcpDiscoverySpi with hard-coded addresses (master and slave
> IP addresses). When the master node is segmented (by simulating network
> issues on the command-line) it appears there's no way for the discovery to
> recover - port  is shut down, and the slave node always comes up blind
> to the master.
>
> I would appreciate any insights on this issue. Thank you.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite persistence and activation

2020-06-16 Thread Evgenii Zhuravlev
Hi,

All caches, including caches for atomic structures and in-memory caches,
are not available before activation. I believe it makes sense to move your
code so that it runs after the activation event:
https://apacheignite.readme.io/docs/baseline-topology#cluster-activationdeactivation-events
.
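
A rough sketch of such a listener (assumes EVT_CLUSTER_ACTIVATED is enabled
via IgniteConfiguration.setIncludeEventTypes; the bootstrap call is
illustrative):

import org.apache.ignite.events.EventType;

ignite.events().localListen(evt -> {
    // The cluster is active at this point, so caches and atomic structures can be used
    ignite.atomicReference("bootstrapRef", "initial", true); // illustrative bootstrap logic
    return true; // keep the listener registered
}, EventType.EVT_CLUSTER_ACTIVATED);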

Evgenii

чт, 11 июн. 2020 г. в 05:18, steve.hostettler :

> Hello.
>
> I am trying to implement Ignite persistence, but I stumbled upon the
> following problems/questions. It is required to activate the cluster, that
> much is clear, but I have bootstrap code that uses technical caches that
> I do not want to persist and, more problematic, I need to use
> ignite.atomicReference as part of the initialization of the node.
>
> I assume that I need to create another region that is not persisted for
> the so-called system caches, but what do I do with ignite.atomicReference?
>
>
> Thanks in advance
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Using native persistence to "extend" memory

2020-06-16 Thread Evgenii Zhuravlev
Steve,

Actually, disabling WAL is a good option for your use case. The checkpoint
mechanism is the same with WAL disabled; the only difference is that the node
does not write WAL records to disk on each operation. Usually, it makes
sense to disable WAL for initial loading - when you can lose the data in
case of failure and start data loading again. For your use case, if you
don't care about restore, you can just disable it:
https://www.gridgain.com/docs/latest/developers-guide/persistence/native-persistence#disabling-wal
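
For reference, a minimal sketch of toggling WAL around a bulk load from the
Java API (assumes Ignite 2.4+; the cache name and load routine are
illustrative):

ignite.cluster().disableWal("myCache"); // skip WAL while loading
loadData(ignite);                       // hypothetical bulk-load routine
ignite.cluster().enableWal("myCache");  // triggers a checkpoint so the loaded data becomes durable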

Best Regards,
Evgenii



вт, 16 июн. 2020 г. в 17:02, Denis Magda :

> Steve,
>
> Please check these generic recommendations if you haven't done so already:
> https://apacheignite.readme.io/docs/durable-memory-tuning#native-persistence-related-tuning
>
> Otherwise, send us a note if you come across any bottlenecks or issues so
> that we can give you more specific recommendations.
>
> -
> Denis
>
>
> On Tue, Jun 16, 2020 at 3:25 PM steve.hostettler <
> steve.hostett...@gmail.com> wrote:
>
>> Thanks a lot for the recommendation. So keeping the WAL, disabling
>> archiving.
>> I understand all records are kept on disk.
>>
>> Thanks again. Anything else?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: HashMap warning when using PutAll()

2020-06-12 Thread Evgenii Zhuravlev
The same works for other operations too (like removeAll).

Evgenii

пт, 12 июн. 2020 г. в 07:22, Evgenii Zhuravlev :

> Raymond,
>
> Collections used in putAll should be sorted, because otherwise if they
> have the same entries but in a different order, it can lead to the classic
> deadlock. It is expected behavior.
>
> Best Regards,
> Evgenii
>
> чт, 11 июн. 2020 г. в 21:38, Raymond Wilson :
>
>> We are using Ignite v2.8.0 and the C# client. Some of our operations use
>> PutAll() to save a collection of items in a single operation. This
>> operation is emitting the following warning into the log:
>>
>> 2020-06-10 15:04:14,199 [77] WRN [ImmutableClientServer]
>>  Unordered map java.util.HashMap is
>> used for putAll operation on cache Spatial-SubGridDirectory-Immutable. This
>> can lead to a distributed deadlock. Switch to a sorted map like TreeMap
>> instead.
>>
>> Does this require a Jira ticket?
>>
>> Thanks,
>> Raymond.
>>
>>
>> --
>> <http://www.trimble.com/>
>> Raymond Wilson
>> Solution Architect, Civil Construction Software Systems (CCSS)
>> 11 Birmingham Drive | Christchurch, New Zealand
>> +64-21-2013317 Mobile
>> raymond_wil...@trimble.com
>>
>>
>> <https://worksos.trimble.com/?utm_source=Trimble_medium=emailsign_campaign=Launch>
>>
>


Re: HashMap warning when using PutAll()

2020-06-12 Thread Evgenii Zhuravlev
Raymond,

Collections used in putAll should be sorted, because otherwise if they have
the same entries but in a different order, it can lead to the classic
deadlock. It is expected behavior.
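
A minimal sketch of the fix on the Java side (presumably the Ignite.NET
equivalent is to pass a SortedDictionary):

import java.util.Map;
import java.util.TreeMap;

// TreeMap iterates keys in sorted order, so all nodes acquire locks in the
// same order and the warning (and the potential deadlock) goes away.
Map<Integer, String> batch = new TreeMap<>();
batch.put(2, "two");
batch.put(1, "one");
cache.putAll(batch); // 'cache' is an IgniteCache<Integer, String>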

Best Regards,
Evgenii

чт, 11 июн. 2020 г. в 21:38, Raymond Wilson :

> We are using Ignite v2.8.0 and the C# client. Some of our operations use
> PutAll() to save a collection of items in a single operation. This
> operation is emitting the following warning into the log:
>
> 2020-06-10 15:04:14,199 [77] WRN [ImmutableClientServer]
>  Unordered map java.util.HashMap is
> used for putAll operation on cache Spatial-SubGridDirectory-Immutable. This
> can lead to a distributed deadlock. Switch to a sorted map like TreeMap
> instead.
>
> Does this require a Jira ticket?
>
> Thanks,
> Raymond.
>
>
> --
> 
> Raymond Wilson
> Solution Architect, Civil Construction Software Systems (CCSS)
> 11 Birmingham Drive | Christchurch, New Zealand
> +64-21-2013317 Mobile
> raymond_wil...@trimble.com
>
>
> 
>


Re: CountDownLatch issue in Ignite 2.6 version

2020-06-10 Thread Evgenii Zhuravlev
Prasad,

Please don't use the dev list for questions regarding product usage; the
dev list is used for development-related activities.

To see how this configuration is used for countDownLatch, you can take a look
at these 2 methods:
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/DataStructuresProcessor.java#L1187

https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/DataStructuresProcessor.java#L495

Evgenii

пн, 8 июн. 2020 г. в 20:43, Prasad Bhalerao :

> I just checked the Ignite doc for atomic configuration.
> But it doesn't say that it is applicable to distributed data structures.
>
> Is it really applicable to distributed data structures like the countdown
> latch?
>
> On Tue 9 Jun, 2020, 7:26 AM Prasad Bhalerao  wrote:
>
>> Hi,
>> I was under the impression that the countdown latch is implemented in a
>> replicated cache, so when any number of nodes go down it does not lose
>> its state.
>>
>> Can you please explain why atomic data structures use 1 backup when
>> their state is very important?
>>
>> Can we force atomic data structures to use a replicated cache?
>>
>> Which cache does ignite use to store atomic data structures?
>>
>> Thanks
>> Prasad
>>
>> On Mon 8 Jun, 2020, 11:58 PM Evgenii Zhuravlev > wrote:
>>
>>> Hi,
>>>
>>> By default, the cache that stores all atomic structures has only 1 backup,
>>> so, after losing all data for a certain latch, Ignite recreates it. To
>>> change the default atomic configuration use
>>> IgniteConfiguration.setAtomicConfiguration.
>>>
>>> Evgenii
>>>
>>> сб, 6 июн. 2020 г. в 06:20, Akash Shinde :
>>>
>>>> *Issue:* The countdown latch gets reinitialized to its original value (4)
>>>> when one or more (but not all) nodes go down. *(Partition loss happened)*
>>>>
>>>> We are using Ignite's distributed countdown latch to make sure that
>>>> cache loading is completed on all server nodes. We do this to make sure
>>>> that our Kafka consumers start only after cache loading is complete on all
>>>> server nodes. This is the basic criterion that needs to be fulfilled before
>>>> actual processing starts.
>>>>
>>>>
>>>>  We have 4 server nodes and the countdown latch is initialized to 4. We use
>>>> the cache.loadCache method to start the cache loading. When each server
>>>> completes cache loading, it reduces the count by 1 using the countDown method.
>>>> So when all the nodes complete cache loading, the count reaches zero.
>>>> When the count reaches zero, we start Kafka consumers on all server
>>>> nodes.
>>>>
>>>>  But we saw weird behavior in the prod env. 3 server nodes were shut
>>>> down at the same time, but 1 node was still alive. When this happened, the
>>>> countdown was reinitialized to the original value, i.e. 4. But I am not able to
>>>> reproduce this in the dev env.
>>>>
>>>>  Is it a bug that when one or more (but not all) nodes go down, the
>>>> count reinitializes back to the original value?
>>>>
>>>> Thanks,
>>>> Akash
>>>>
>>>


Re: ignite web agent issue

2020-06-10 Thread Evgenii Zhuravlev
Hi,

You should always upgrade the web console and web agent to newer versions
when you upgrade the cluster itself.

Evgenii

ср, 10 июн. 2020 г. в 03:19, itsmeravikiran.c :

> Hi Team,
>
> Currently my application is using Ignite version 2.6.0, and the Ignite web
> agent version is ignite-web-agent-2.4.4.
> Everything is working fine.
>
> Now we are migrating from Ignite 2.6.0 to 2.8.0.
>
> Will Ignite 2.8.0 support ignite-web-agent-2.4.4?
>
> While running queries I am getting the below error in ignite-web-agent:
>
> Error: Cannot read property 'length' of null
>
> Could you please help me with this?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite visor use

2020-06-08 Thread Evgenii Zhuravlev
Ignite Visor starts a daemon node internally to connect to the cluster; it
doesn't matter how the cluster was started. All you need is to configure the
addresses of the cluster nodes in the config file, the same way as you do for
other nodes in the cluster.

Evgenii

пн, 8 июн. 2020 г. в 19:01, Prasad Bhalerao :

> Hi,
> I am starting Ignite inside my Java Spring Boot app. I do not use the ignite
> shell script to start it.
> Can I use Ignite Visor in such a case to connect to my Ignite cluster?
>
> What are the minimum required scripts I need to use Ignite Visor?
>
>
> Thanks,
> Prasad
>


Re: CountDownLatch issue in Ignite 2.6 version

2020-06-08 Thread Evgenii Zhuravlev
Hi,

By default, the cache that stores all atomic structures has only 1 backup, so,
after losing all data for a certain latch, Ignite recreates it. To change
the default atomic configuration, use
IgniteConfiguration.setAtomicConfiguration.
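
A minimal sketch (the backup count is illustrative; choose it based on how
many simultaneous node failures the latch must survive):

import org.apache.ignite.configuration.AtomicConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

AtomicConfiguration atomicCfg = new AtomicConfiguration();
atomicCfg.setBackups(3); // e.g. keep the latch alive while 3 of 4 nodes are down

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setAtomicConfiguration(atomicCfg);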

Evgenii

сб, 6 июн. 2020 г. в 06:20, Akash Shinde :

> *Issue:* The countdown latch gets reinitialized to its original value (4)
> when one or more (but not all) nodes go down. *(Partition loss happened)*
>
> We are using Ignite's distributed countdown latch to make sure that cache
> loading is completed on all server nodes. We do this to make sure that our
> Kafka consumers start only after cache loading is complete on all server
> nodes. This is the basic criterion that needs to be fulfilled before actual
> processing starts.
>
>
>  We have 4 server nodes and the countdown latch is initialized to 4. We use
> the cache.loadCache method to start the cache loading. When each server
> completes cache loading, it reduces the count by 1 using the countDown method.
> So when all the nodes complete cache loading, the count reaches zero.
> When the count reaches zero, we start Kafka consumers on all server
> nodes.
>
>  But we saw weird behavior in the prod env. 3 server nodes were shut down
> at the same time, but 1 node was still alive. When this happened, the count
> was reinitialized to the original value, i.e. 4. But I am not able to
> reproduce this in the dev env.
>
>  Is it a bug that when one or more (but not all) nodes go down, the count
> reinitializes back to the original value?
>
> Thanks,
> Akash
>


Re: UriDeployment Question

2020-06-04 Thread Evgenii Zhuravlev
Hi,

What version of Ignite do you use? It started to work with jar files only
since version 2.8: https://issues.apache.org/jira/browse/IGNITE-11380. When
it reads the file, it prints a message to the log:

Found new or updated deployment unit

Do you have it in your logs?

Yes, you don't need to restart the cluster in this case, just undeploy the
service, update the jar file and deploy the service again.

Best Regards,
Evgenii

чт, 4 июн. 2020 г. в 05:49, marble.zh...@coinflex.com <
marble.zh...@coinflex.com>:

> Hi Expert,
>
> I am trying the UriDeployment; the XML setting is:
>
> <property name="deploymentSpi">
>     <bean class="org.apache.ignite.spi.deployment.uri.UriDeploymentSpi">
>         <property name="uriList">
>             <list>
>                 <value>file:///home/myProjects/price-processor-ignite/target/</value>
>             </list>
>         </property>
>         <property name="temporaryDirectoryPath" value="/home/myProjects/deployment/"/>
>     </bean>
> </property>
>
> When Ignite starts up, I can see something generated in the deployment
> folder, but when I launch the client to execute methods in my jar,
> it shows class not found. I am not sure if I missed something.
>
> And btw, if we redeploy the jar into ./target and cancel/start the
> service in the jar, Ignite will reload the new jar, right? If so, I don't
> need to restart the Ignite instance.
>
> Thanks again.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Can Apache Superset connect with Apache Ignite ?

2020-06-01 Thread Evgenii Zhuravlev
Hi,

Superset uses SqlAlchemy, which supports the JDBC driver, so technically it
should be possible to connect to Ignite. Here is the information on the JDBC
driver: https://apacheignite-sql.readme.io/docs/jdbc-driver and on some
tools: https://apacheignite-sql.readme.io/docs/sql-tooling
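
For reference, a minimal sketch of a thin JDBC connection (host and port are
illustrative; ignite-core must be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT 1")) {
    rs.next(); // simply verifies the connection works
}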

Please let us know if you're able to use Superset with Ignite so we can add
this to documentation and/or fix issues with the integration.

Evgenii

пн, 1 июн. 2020 г. в 01:36, lan :

> As the topic says, can Apache Superset connect to a database from Ignite?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Change schema on runtime Apache Ignite

2020-05-27 Thread Evgenii Zhuravlev
Hi Denis,

I planned to address this issue at first, but then, after analysis, I
decided to change other parts of CacheStore implementation only.

As for this case: usually, in production, model changes are not so frequent,
and they also require new code or config changes, which also require a node
restart. When you restart a node without native persistence, it recreates the
CacheStore from the factory again, and if columns were added to the XML
configuration, they will be added to the CacheStore too.

So, in your case, you can just restart all nodes one by one, and it will
recreate the CacheStore with the new configuration.

Best Regards,
Evgenii

ср, 27 мая 2020 г. в 08:00, Denis Magda :

> Evgeniy,
>
> I remember you were planning to address this particular limitation in the
> new version of CacheStore. Could you please join the thread and confirm
> this? Hopefully, there is already a ticket for this task.
>
> -
> Denis
>
>
> On Tue, May 26, 2020 at 11:14 PM manueltg89 <
> manuel.trinidad.gar...@hotmail.com> wrote:
>
>> Thanks for your response. I update my RDBMS manually also, but the
>> problem is that if, for example, I add a field called "telephone", Apache
>> Ignite with write-through activated is not capable of writing this field
>> to the RDBMS. How can I do this without rebooting?
>>
>> Thanks in advance.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: How to close/cancel a running tasks?

2020-05-26 Thread Evgenii Zhuravlev
Hi,

To do this, you can execute tasks asynchronously (for example, with runAsync
and others). Async operations return an IgniteFuture, which can be canceled.
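
A minimal sketch (the job body is illustrative; a cancellable job should
check its thread's interrupted flag so that cancel() can take effect):

import org.apache.ignite.lang.IgniteFuture;

IgniteFuture<Void> fut = ignite.compute().runAsync(() -> {
    while (!Thread.currentThread().isInterrupted()) {
        // poll and handle MQ messages here (illustrative)
    }
});

// later, when the logic needs to be replaced:
fut.cancel(); // interrupts the job so the loop above can exit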

Also, I think that services fit better for this use case - there you can
run your code in the background and deploy/undeploy them when a code change
is needed. For an upgrade, UriDeploymentSpi can be used:
https://apacheignite.readme.io/docs/service-grid#service-updates-redeployment

Evgenii

вс, 24 мая 2020 г. в 21:02, marble.zh...@coinflex.com <
marble.zh...@coinflex.com>:

> Hi Experts,
>
> Say I have a while(true) task which helps monitor/handle the MQ
> messages.
> Once I need to change the logic, how do I redeploy the task? How do I cancel
> this task and let the cluster use the new logic jar?
>
> Do I need to restart all cluster nodes? My cluster nodes have other tasks
> running.
>
> Thanks a lot.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: JDBC Connection and stream receiver

2020-05-22 Thread Evgenii Zhuravlev
Hi,

No, as far as I know there is no way to set a stream receiver for streaming
with the JDBC driver.

Evgenii

чт, 21 мая 2020 г. в 18:33, narges saleh :

> Hi All,
>
> Is it possible to use stream receivers with JDBC connections (with
> streaming set to on)? If yes, can you point me to an example?
>
> thanks.
>


Re: Ignite Node Metrics using JMX

2020-05-21 Thread Evgenii Zhuravlev
Yes, it is node information.

Heap = on-heap, so just use "a" as the on-heap memory used.
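
For reference, a rough sketch of reading the same numbers through the Java
API (assumes the DataRegionMetrics off-heap getter mentioned in this thread
is available in your Ignite version):

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.cluster.ClusterMetrics;

ClusterMetrics m = ignite.cluster().localNode().metrics();
long onHeapUsed = m.getHeapMemoryUsed(); // "a": on-heap bytes in use

for (DataRegionMetrics drm : ignite.dataRegionMetrics())
    System.out.println(drm.getName() + " off-heap used: " + drm.getOffheapUsedSize()); // "b"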

ср, 20 мая 2020 г. в 22:08, kay :

> Thank you.
> I found the ClusterLocalNodeMetricsMXBeanImpl bean and its attributes.
> Is it node information, or OS/processor information?
> e.g. CurrentCpuLoad, CurrentGcCpuLoad, CurrentThreadCount?
>
> And if I want to get the on-heap/off-heap used size:
> ClusterLocalNodeMetricsMXBeanImpl - HeapMemoryUsed -> a
> DataRegionMetrics - offHeapUsedSize -> b
>
> b is the off-heap memory used; is calculating the on-heap memory used as
> a-b correct?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Unable to deploy Ignite Web Console in local

2020-05-21 Thread Evgenii Zhuravlev
I suggest checking the logs from the cluster nodes - there should be
messages showing that the node is using this IP and that it was bound to
port 8080.

Also, it makes sense to check the connection between these machines; there
is a chance that you have a firewall or that the ports are blocked.

Evgenii

чт, 21 мая 2020 г. в 01:57, BEELA GAYATRI :

> Hi Evgenii,
>
>
>
>   Thanks for the update. When I run it on my local machine, everything works
> fine. However, when I run the same on a CentOS Linux server,
> web-console-agent.sh is not able to communicate with the node URI. Below is
> the log. My queries are:
>
>
>
>1. What should the node URI be? Here I have given
>http://serverIp:8080
>2. The *ignite-rest-http* module is also in the Ignite_home libs
>folder. Please suggest what the issue could be.
>
>
>
> WARNING: All illegal access operations will be denied in a future release
>
> [2020-05-21T03:58:32,604][INFO ][main][AgentLauncher]
>
> [2020-05-21T03:58:32,604][INFO ][main][AgentLauncher] Web Console Agent
> configuration :
>
> [2020-05-21T03:58:32,950][INFO ][main][AgentLauncher] User's security
> tokens : 7066
>
> [2020-05-21T03:58:32,951][INFO ][main][AgentLauncher] URI to Ignite node
> REST server : http://xx.xxx.xxx.xxx:8080
>
> [2020-05-21T03:58:32,952][INFO ][main][AgentLauncher] Login to Ignite node
> REST server   : ignite
>
> [2020-05-21T03:58:32,952][INFO ][main][AgentLauncher] Password to Ignite
> node REST server: **
>
> *[2020-05-21T03:58:32,953][INFO ][main][AgentLauncher] URI to GridGain Web
> Console: http://xx.xxx.xxx.xxx:8008 <http://xx.xxx.xxx.xxx:8008>*
>
> [2020-05-21T03:58:32,953][INFO ][main][AgentLauncher] Path to properties
> file: default.properties
>
> [2020-05-21T03:58:32,954][INFO ][main][AgentLauncher] Path to JDBC drivers
> folder:
> /***/*/gridgain-web-console-agent-2020.03.01/./jdbc-drivers
>
> [2020-05-21T03:58:32,954][INFO ][main][AgentLauncher] Demo
> mode  : enabled
>
> [2020-05-21T03:58:32,955][INFO ][main][AgentLauncher]
>
> [2020-05-21T03:58:33,117][INFO ][main][WebSocketRouter] Starting Web
> Console Agent...
>
> [2020-05-21T03:58:33,148][INFO ][Connect thread][WebSocketRouter]
> Connecting to server: ws://xx.xxx.xxx.xxx:8008
>
> [2020-05-21T03:58:34,381][INFO ][http-client-16][WebSocketRouter]
> Successfully completes handshake with server
>
> *[2020-05-21T03:58:34,440][WARN ][pool-2-thread-1][ClusterHandler] Failed
> to connect to cluster.*
>
> *[2020-05-21T03:58:34,440][WARN ][pool-2-thread-1][ClusterHandler] Check
> that '--node-uri' configured correctly.*
>
> *[2020-05-21T03:58:34,441][WARN ][pool-2-thread-1][ClusterHandler] Ensure
> that cluster nodes have [ignite-rest-http] module in classpath (was copied
> from libs/optional to libs folder).*
>
> *[2020-05-21T03:58:34,446][INFO ][pool-2-thread-1][ClustersWatcher] Failed
> to establish connection to node*
>
> ^C[2020-05-21T04:00:03,722][INFO ][Thread-9][WebSocketRouter] Websocket
> connection closed with code: 1006
>
> [2020-05-21T04:00:03,724][INFO ][Connect thread][ClustersWatcher] Topology
> watch process was suspended
>
> [2020-05-21T04:00:03,729][INFO ][main][WebSocketRouter] Stopping Web
> Console Agent...
>
>
>
> Sent from Mail <https://go.microsoft.com/fwlink/?LinkId=550986> for
> Windows 10
>
>
>
> *From: *Evgenii Zhuravlev 
> *Sent: *Wednesday, May 20, 2020 11:43 PM
> *To: *user 
> *Subject: *Re: Unable to deploy Ignite Web Console in local
>
>
> "External email. Open with Caution"
>
> Hi,
>
>
>
> The community is stopping maintenance for Ignite WebConsole:
> http://apache-ignite-developers.2346864.n4.nabble.com/RESULT-VOTE-Stop-Maintenance-of-Ignite-Web-Console-td47548.html.
> I recommend using GridGain Webconsole instead:
> https://www.gridgain.com/resources/download#webconsole. There is no need
> to build it, binaries are available.
>
>
>
> Best Regards,
>
> Evgenii
>
>
>
> ср, 20 мая 2020 г. в 02:32, BEELA GAYATRI :
>
> Hi,
>
>
>
>
>
>I am trying to deploy Ignite Web Console on my local machine and have
> installed MongoDB version 4.2 and NodeJS 14.0; the Ignite version used
> is apache-ignite-2.7.0.
>
> However, I am unable to find the paths ($IGNITE_HOME/modules/web-console/backend
> and $IGNITE_HOME/modules/web-console/frontend) in Ignite_home and am
> unable to create ignite-web-agent-x.x.x.zip.
>
>
>
> I am getting the below error while running the below command. Please suggest
> how to proceed further.
>
> mvn clean package -pl :ignite-web-agent -am -P web-con

Re: Unable to deploy Ignite Web Console in local

2020-05-20 Thread Evgenii Zhuravlev
Hi,

The community is stopping maintenance for Ignite WebConsole:
http://apache-ignite-developers.2346864.n4.nabble.com/RESULT-VOTE-Stop-Maintenance-of-Ignite-Web-Console-td47548.html.
I recommend using GridGain Webconsole instead:
https://www.gridgain.com/resources/download#webconsole. There is no need to
build it, binaries are available.

Best Regards,
Evgenii

ср, 20 мая 2020 г. в 02:32, BEELA GAYATRI :

> Hi,
>
>
>
>
>
>I am trying to deploy Ignite Web Console on my local machine and have
> installed MongoDB version 4.2 and NodeJS 14.0; the Ignite version used
> is apache-ignite-2.7.0.
>
> However, I am unable to find the paths ($IGNITE_HOME/modules/web-console/backend
> and $IGNITE_HOME/modules/web-console/frontend) in Ignite_home and am
> unable to create ignite-web-agent-x.x.x.zip.
>
>
>
> I am getting the below error while running the below command. Please suggest
> how to proceed further.
>
> mvn clean package -pl :ignite-web-agent -am -P web-console -DskipTests=true
>
>
>
>
>
>
>
> https://apacheignite-tools.readme.io/docs/build-and-deploy
>
>
>
>
>
> In order to deploy Ignite Web Console locally, you should install:
>
> · MongoDB (version >=3.2.x <=3.4.15) follow instructions from
> site http://docs.mongodb.org/manual/installation
>
> · NodeJS (version >=8.0.0) using installer from site
> https://nodejs.org/en/download/current for your OS.
>
> Download the following dependencies:
>
> · For backend:
> cd $IGNITE_HOME/modules/web-console/backend
> npm install --no-optional
>
> · For frontend:
> cd $IGNITE_HOME/modules/web-console/frontend
> npm install --no-optional
> Building Ignite Web Agent
>
> To build Ignite Web Agent from sources, run the following command from the
> $IGNITE_HOME folder:
> mvn clean package -pl :ignite-web-agent -am -P web-console -DskipTests=true
>
> Once the build process is over, you can find ignite-web-agent-x.x.x.zip in:
> $IGNITE_HOME/modules/web-console/web-agent/target
>
>
>
>
>
> Sent from Mail  for
> Windows 10
>
>
>
>
>


Re: Ignite Node Metrics using JMX

2020-05-19 Thread Evgenii Zhuravlev
Hi,

Some node information, like CPU & heap memory, can be found in
ClusterLocalNodeMetricsMXBeanImpl.

As for the memory metrics, they are described here:
https://apacheignite.readme.io/docs/memory-metrics#getting-metrics

Evgenii

вт, 19 мая 2020 г. в 01:52, kay :

> Hello, I have 4 remote nodes in server mode.
>
> I'd like to monitor those nodes using JMX.
> I want to get CPU, thread, heap, off-heap, GC metrics and so on.
>
> Can I get those things (CPU, thread, heap, off-heap, GC) for each node?
>
> I figured out there are MXBeans (CacheMetrics, CacheGroup, DataRegion,
> DataStorage),
> but I don't know which attributes I should use, like
> (PhysicalMemorySize, TotalAllocatedSize).
>
> Please help me.
> Thank you
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Getting Baseline Topology Error

2020-05-14 Thread Evgenii Zhuravlev
Hi,

When do you usually change the baseline topology? It looks like you changed
the topology for part of the nodes and then changed it for another node
separately, so now they have different baseline topologies.

Do you have logs from runs before this issue happened?

Thanks,
Evgenii

чт, 14 мая 2020 г. в 05:01, BEELA GAYATRI :

> Hi,
>
>
>
>Sometimes I am getting the below error while starting the server
> nodes. After clearing the cached data in the persistence store and
> restarting the nodes, they come up and run fine.
>
>
>
> I am using below code for setting BaseLineTopology
>
>
>
> ignite.cluster().active(true);
>
> ignite.cluster().setBaselineTopology(nodes);
>
>
>
>   Please suggest why this is happening and what needs to be done to get rid
> of the below error:
>
>
>
> Failed to start manager: GridManagerAdapter [enabled=true,
> name=o.a.i.i.managers.discovery.GridDiscoveryManager]
>
> class org.apache.ignite.IgniteCheckedException: Failed to start SPI:
> TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000,
> marsh=JdkMarshaller
> [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1@2f4d01b6],
> reconCnt=10, reconDelay=2000, maxAckTimeout=60, forceSrvMode=false,
> clientReconnectDisabled=false, internalLsnr=null]
>
> at
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300)
>
> at
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:939)
>
> at
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1682)
>
> at
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1066)
>
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
>
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
>
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
>
> at
> org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1076)
>
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:962)
>
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:861)
>
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:731)
>
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:700)
>
> at org.apache.ignite.Ignition.start(Ignition.java:348)
>
> at
> com.cmc.facts.grid.WorkerGridInvoker.loadNode(WorkerGridInvoker.java:140)
>
> at
> com.cmc.facts.grid.WorkerGridInvoker.main(WorkerGridInvoker.java:61)
>
> Caused by: class org.apache.ignite.spi.IgniteSpiException:
> BaselineTopology of joining node (bf9321c9-bc4f-49a7-ae07-b6895cdd6645) is
> not compatible with BaselineTopology in the cluster. Branching history of
> cluster BlT ([5934945015, 1007705614, 662484928]) doesn't contain branching
> point hash of joining node BlT (-251217758). Consider cleaning persistent
> storage of the node and adding it to the cluster again.
>
> at
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1946)
>
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:969)
>
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:391)
>
> at
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2020)
>
> at
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
>
> ... 14 more
>
> [14:44:55,242][SEVERE][main][IgniteKernal%MATCHERWORKER] Got exception
> while starting (will rollback startup routine).
>
> class org.apache.ignite.IgniteCheckedException: Failed to start manager:
> GridManagerAdapter [enabled=true,
> name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
>
> at
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1687)
>
> at
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1066)
>
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
>
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
>
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
>
> at
> org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1076)
>
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:962)
>
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:861)
>
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:731)
>
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:700)
>
> at org.apache.ignite.Ignition.start(Ignition.java:348)
>
> at
> com.cmc.facts.grid.WorkerGridInvoker.loadNode(WorkerGridInvoker.java:140)
>
> at
> 

Re: Question on the log of public pool thread [pub-#14505].

2020-05-14 Thread Evgenii Zhuravlev
James,

> But whether could I know the other threads are stopped and destroy?
That's how it works. Here in the code:
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/IgnitionEx.java#L1763
you can see that Ignite creates a ThreadPoolExecutor with a max size equal to
your configured size.

ср, 13 мая 2020 г. в 19:01, James Yuan :

> Thank you for the reply.
>
> It seems I should create a custom thread pool for the business logic rather
> than use the public pool, and reduce its size.
>
> So [pub-#14505] means the 14505th newly created thread in the public pool? But
> how can I know whether the other threads are stopped and destroyed? I am afraid
> that having so many threads will cause long GCs and hang the whole system.
>
> Thanks,
> James.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Binary recovery for a very long time

2020-05-13 Thread Evgenii Zhuravlev
Can you share full logs from all nodes?

вт, 12 мая 2020 г. в 18:24, 38797715 <38797...@qq.com>:

> Hi Evgenii,
>
> The storage used is not SSD.
>
> We will use different versions of Ignite for further testing, such as
> Ignite 2.8.
> Ignite is configured as follows:
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>        http://www.springframework.org/schema/beans/spring-beans.xsd">
>     <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>         ...
>         <bean class="org.apache.ignite.configuration.CacheConfiguration">
>             ...
>         </bean>
>         <bean class="org.apache.ignite.configuration.CacheConfiguration">
>             ...
>         </bean>
>         ...
>     </bean>
> </beans>
> 在 2020/5/13 上午4:45, Evgenii Zhuravlev 写道:
>
> Hi,
>
> Can you share full logs and configuration? What disk do you use?
>
> Evgenii
>
> вт, 12 мая 2020 г. в 06:49, 38797715 <38797...@qq.com>:
>
>> Among them:
>> CO_CO_NEW: ~ 48 minutes(partitioned,backup=1,33M)
>>
>> Ignite sys cache: ~ 27 minutes
>>
>> PLM_ITEM:~3 minutes(repicated,1.9K)
>>
>>
>> 在 2020/5/12 下午9:08, 38797715 写道:
>>
>> Hi community,
>>
>> We have 5 servers, 16 cores, 256g memory, and 200g off-heap memory.
>> We have 7 tables to test, and the data volume is
>> respectively:31.8M,495.2M,552.3M,33M,873.3K,28M,1.9K(replicated),others are
>> partitioned(backup = 1)
>>
>> VM args:-server -Xms20g -Xmx20g -XX:+AlwaysPreTouch -XX:+UseG1GC
>> -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+PrintGCDetails
>> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation
>> -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M
>> -Xloggc:/data/gc/logs/gclog.txt -Djava.net.preferIPv4Stack=true
>> -XX:MaxDirectMemorySize=256M -XX:+PrintAdaptiveSizePolicy
>>
>> Today, one of the servers was restarted(kill and then start ignite.sh)
>> for some reason, but the node took 1.5 hours to start, which was much
>> longer than expected.
>>
>> After analyzing the log, the following information is found:
>> [2020-05-12T17:00:05,138][INFO ][main][GridCacheDatabaseSharedManager]
>> Found last checkpoint marker [cpId=7a0564f2-43e5-400b-9439-746fc68a6ccb,
>> pos=FileWALPointer [idx=10511, fileOff=5134, len=61193]]
>> [2020-05-12T17:00:05,151][INFO ][main][GridCacheDatabaseSharedManager]
>> Binary memory state restored at node startup [restoredPtr=FileWALPointer
>> [idx=10511, fileOff=51410110, len=0]]
>> [2020-05-12T17:00:05,152][INFO ][main][FileWriteAheadLogManager]
>> Resuming logging to WAL segment [file=/appdata/ignite/db/wal/24/
>> 0001.wal, offset=51410110, ver=2]
>> [2020-05-12T17:00:06,448][INFO ][main][PageMemoryImpl] Started page
>> memory [memoryAllocated=200.0 GiB, pages=50821088, tableSize=3.9 GiB,
>> checkpointBuffer=2.0 GiB]
>> [2020-05-12T17:02:08,528][INFO ][main][GridCacheProcessor] Started cache
>> in recovery mode [name=CO_CO_NEW, id=-189779360, dataRegionName=default,
>> mode=PARTITIONED, atomicity=ATOMIC, backups=1, mvcc=false]
>> [2020-05-12T17:50:44,341][INFO ][main][GridCacheProcessor] Started cache
>> in recovery mode [name=CO_CO_LINE, id=-1588248812,
>> dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1,
>> mvcc=false]
>> [2020-05-12T17:50:44,366][INFO ][main][GridCacheProcessor] Started cache
>> in recovery mode [name=ignite-sys-cache, id=-2100569601,
>> dataRegionName=sysMemPlc, mode=REPLICATED, atomicity=TRANSACTIONAL, backups=
>> 2147483647, mvcc=false]
>> [2020-05-12T18:17:57,071][INFO ][main][GridCacheProcessor] Started cache
>> in recovery mode [name=CO_CO_LINE_NEW, id=1742991829,
>> dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1,
>> mvcc=false]
>> [2020-05-12T18:19:54,910][INFO ][main][GridCacheProcessor] Started cache
>> in recovery mode [name=PI_COM_DAY, id=-1904194728,
>> dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1,
>> mvcc=false]
>> [2020-05-12T18:19:54,949][INFO ][main][GridCacheProcessor] Started cache
>> in recovery mode [name=PLM_ITEM, id=-1283854143, dataRegionName=default,
>> mode=REPLICATED, atomicity=ATOMIC, backups=2147483647, mvcc=false]
>> [2020-05-12T18:22:53,662][INFO ][main][GridCacheProcessor] Start

Re: Question on the log of public pool thread [pub-#14505].

2020-05-13 Thread Evgenii Zhuravlev
James,

This is a thread number. The public thread pool size can be reduced after an
idle timeout. After this, if needed, a new thread will be created with a
new id.

By the way, 640 is a pretty big size - how many cores do you have? Such a big
thread pool can lead to performance degradation due to a lot of context
switching.

Evgenii

вт, 12 мая 2020 г. в 21:16, James Yuan :

> Hi,
>
> I have the following log.
>
> [2020-05-13 09:33:49] [pub-#14505] INFO
>  c.n.b.e.l.ActiveSyncServiceEventListener - Event [id=CommandEvent] has
> been handled on ContextKey{key='0021fc3b-b293-4f8a-b62d-25c51ec52586'}.
>
> What does [pub-#14505] mean? The number of the thread in the public pool, or
> the number of the computed task?
>
> If the pub-# number increases all the time, does it mean the public threads
> are not returned to the pool due to an issue in my program? I have configured
> the public pool thread number to only 640.
>
> Thanks,
> James.
>


Re: ignite node ports

2020-05-13 Thread Evgenii Zhuravlev
Hi,

Ports are described here:
https://dzone.com/articles/a-simple-checklist-for-apache-ignite-beginners

Basically, Discovery (47500 by default) and Communication (47100) should
always be open, since without them the cluster won't be functional. The
discovery port is used for clustering and for checking the state of all
nodes in the cluster.

The communication port (
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html#setLocalPort-int-)
is used for all other communication between nodes, for example, cache
operation requests, compute jobs, etc.

The REST port (8080) is used for REST calls (
https://apacheignite.readme.io/docs/rest-api) and connections from the
WebConsole (management tool).

The client connector port (10800) is used for JDBC (
https://apacheignite-sql.readme.io/docs/jdbc-driver), ODBC (
https://apacheignite-sql.readme.io/docs/odbc-driver) and other thin client (
https://apacheignite.readme.io/docs/java-thin-client) connections.

11211 - port for the thick JDBC driver and the old REST protocol.

Note that all ports also have a port range, which means that if the default
port is already in use, the node will try the next one.
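
For reference, a minimal sketch of pinning these ports via the Java API
(values shown are the defaults; the Jetty REST port is configured
separately, e.g. through the IGNITE_JETTY_PORT system property):

import org.apache.ignite.configuration.ClientConnectorConfiguration;
import org.apache.ignite.configuration.ConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(new TcpDiscoverySpi().setLocalPort(47500));             // discovery
cfg.setCommunicationSpi(new TcpCommunicationSpi().setLocalPort(47100));     // communication
cfg.setClientConnectorConfiguration(
    new ClientConnectorConfiguration().setPort(10800));                     // JDBC/ODBC/thin clients
cfg.setConnectorConfiguration(new ConnectorConfiguration().setPort(11211)); // TCP binary REST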

Evgenii





вт, 12 мая 2020 г. в 22:56, kay :

> Hello, I started an Ignite node and checked the log file.
>
> I found these TCP ports in the logs:
>
> >>> Local ports : TCP:8080 TCP:11213 TCP:47102 TCP:49100 TCP:49200
>
> I set ports 49100 and 49200 in the configuration file for the Ignite node
> port and client connector port,
> but I don't know the other ports exactly.
>
> I found a summary in the log.
>
> [Node 1]
> TCP binary : 8080
> Jetty REST  : 11213
> Communication spi : 47102
>
> [Node 2]
> TCP binary : 8081
> Jetty REST  : 11214
> Communication spi : 47103
>
> Could you tell me where each port is used?
>
> Are these ports necessary?
> Do I need 5 ports, all different, each time I add a new node?
> If so, how can I set the TCP binary port (8080) & Jetty REST port (11213)
> in the configuration file?
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Binary recovery for a very long time

2020-05-12 Thread Evgenii Zhuravlev
Hi,

Can you share full logs and configuration? What disk do you use?

Evgenii

вт, 12 мая 2020 г. в 06:49, 38797715 <38797...@qq.com>:

> Among them:
> CO_CO_NEW: ~ 48 minutes(partitioned,backup=1,33M)
>
> Ignite sys cache: ~ 27 minutes
>
> PLM_ITEM:~3 minutes(repicated,1.9K)
>
>
> 在 2020/5/12 下午9:08, 38797715 写道:
>
> Hi community,
>
> We have 5 servers, 16 cores, 256g memory, and 200g off-heap memory.
> We have 7 tables to test, and the data volume is
> respectively:31.8M,495.2M,552.3M,33M,873.3K,28M,1.9K(replicated),others are
> partitioned(backup = 1)
>
> VM args:-server -Xms20g -Xmx20g -XX:+AlwaysPreTouch -XX:+UseG1GC
> -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+PrintGCDetails
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation
> -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M
> -Xloggc:/data/gc/logs/gclog.txt -Djava.net.preferIPv4Stack=true
> -XX:MaxDirectMemorySize=256M -XX:+PrintAdaptiveSizePolicy
>
> Today, one of the servers was restarted(kill and then start ignite.sh) for
> some reason, but the node took 1.5 hours to start, which was much longer
> than expected.
>
> After analyzing the log, the following information is found:
> [2020-05-12T17:00:05,138][INFO ][main][GridCacheDatabaseSharedManager]
> Found last checkpoint marker [cpId=7a0564f2-43e5-400b-9439-746fc68a6ccb,
> pos=FileWALPointer [idx=10511, fileOff=5134, len=61193]]
> [2020-05-12T17:00:05,151][INFO ][main][GridCacheDatabaseSharedManager]
> Binary memory state restored at node startup [restoredPtr=FileWALPointer
> [idx=10511, fileOff=51410110, len=0]]
> [2020-05-12T17:00:05,152][INFO ][main][FileWriteAheadLogManager] Resuming
> logging to WAL segment [file=/appdata/ignite/db/wal/24/0001.wal,
> offset=51410110, ver=2]
> [2020-05-12T17:00:06,448][INFO ][main][PageMemoryImpl] Started page
> memory [memoryAllocated=200.0 GiB, pages=50821088, tableSize=3.9 GiB,
> checkpointBuffer=2.0 GiB]
> [2020-05-12T17:02:08,528][INFO ][main][GridCacheProcessor] Started cache
> in recovery mode [name=CO_CO_NEW, id=-189779360, dataRegionName=default,
> mode=PARTITIONED, atomicity=ATOMIC, backups=1, mvcc=false]
> [2020-05-12T17:50:44,341][INFO ][main][GridCacheProcessor] Started cache
> in recovery mode [name=CO_CO_LINE, id=-1588248812,
> dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1,
> mvcc=false]
> [2020-05-12T17:50:44,366][INFO ][main][GridCacheProcessor] Started cache
> in recovery mode [name=ignite-sys-cache, id=-2100569601,
> dataRegionName=sysMemPlc, mode=REPLICATED, atomicity=TRANSACTIONAL, backups=
> 2147483647, mvcc=false]
> [2020-05-12T18:17:57,071][INFO ][main][GridCacheProcessor] Started cache
> in recovery mode [name=CO_CO_LINE_NEW, id=1742991829,
> dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1,
> mvcc=false]
> [2020-05-12T18:19:54,910][INFO ][main][GridCacheProcessor] Started cache
> in recovery mode [name=PI_COM_DAY, id=-1904194728,
> dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1,
> mvcc=false]
> [2020-05-12T18:19:54,949][INFO ][main][GridCacheProcessor] Started cache
> in recovery mode [name=PLM_ITEM, id=-1283854143, dataRegionName=default,
> mode=REPLICATED, atomicity=ATOMIC, backups=2147483647, mvcc=false]
> [2020-05-12T18:22:53,662][INFO ][main][GridCacheProcessor] Started cache
> in recovery mode [name=CO_CO, id=64322847, dataRegionName=default,
> mode=PARTITIONED, atomicity=ATOMIC, backups=1, mvcc=false]
> [2020-05-12T18:22:54,876][INFO ][main][GridCacheProcessor] Started cache
> in recovery mode [name=CO_CUST, id=1684722246, dataRegionName=default,
> mode=PARTITIONED, atomicity=ATOMIC, backups=1, mvcc=false]
> [2020-05-12T18:22:54,892][INFO ][main][GridCacheDatabaseSharedManager]
> Binary recovery performed in 4970233 ms.
>
> Among them, binary recovery took 4970 seconds.
>
> Our questions are:
>
> 1. Why is the start time so long?
>
> 2. In the current state of Ignite, will the restart time get longer and
> longer as the single-node data volume grows?
>
> 3. Do you have any suggestions for optimizing the restart time?
>
>


Re: Schema Questions

2020-05-12 Thread Evgenii Zhuravlev
There is no way to define a nested collection of addresses as SQL fields. The
problem is that there are no such types in JDBC, so it just won't work. So,
if you want to use SQL, just have separate tables for these objects.
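
For reference, a rough sketch of the separate-tables approach with addresses
collocated by person id (all class and field names are illustrative):

import java.util.Collections;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.affinity.AffinityKeyMapped;
import org.apache.ignite.configuration.CacheConfiguration;

class AddressKey {
    long addressId;
    @AffinityKeyMapped
    long personId; // keeps each person's addresses on the same node as the person
}

CacheConfiguration<AddressKey, Address> addrCfg = new CacheConfiguration<>("Address");
addrCfg.setQueryEntities(Collections.singletonList(
    new QueryEntity(AddressKey.class, Address.class))); // Address is an illustrative value class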




вт, 12 мая 2020 г. в 06:07, narges saleh :

> Thanks Evgenii.
> My next two questions are, assuming I go with option 1.1:
> 1) How do I define these nested addresses via query entities, assuming
> I'd use binary objects when inserting? There can be multiple primary
> addresses and secondary addresses, e.g., {john,{primary-address:[addr1,
> addr2], secondary-address:[addr3, addr4, addr5]}}
> 2) Can I use SQL if I am filtering by person and then I want certain
> information from the addresses? Say I want all the primary addresses for
> John, or I want the cities of the primary addresses for John.
>
> thanks.
>
> On Mon, May 11, 2020 at 4:56 PM Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> Hi,
>>
>> The main question here is how you want to use this data. Do you use SQL?
>>
>> 1) It depends on the use case. If you plan to access only a person object
>> without any filtering by addresses and you will always need the entire
>> object, it makes sense to have one big object. But in this case, you won't
>> be able to filter persons by addresses, since SQL doesn't work with
>> collections. So, if you want to use SQL, it definitely makes sense to use
>> the second approach.
>>
>> 2) Of course, if you already have a unique ID for the object, it makes sense
>> to use it as a key; there is no need to generate an additional field for this.
>>
>> Evgenii
>>
>> пн, 11 мая 2020 г. в 09:20, narges saleh :
>>
>>> Hi All,
>>>
>>> I would appreciate your feedback, for the following, in terms of
>>> performance for both inserts and queries.
>>>
>>> 1) Which one of these patterns is preferable for the table design?
>>> A- Have a fat table/cache with nested objects, e.g. person table with a
>>> hashmap of addresses.
>>> B- Have person and address tables separate and just link them via
>>> foreign keys.
>>>
>>> 2) Which one of these patterns is preferable for primary keys?
>>> A- Have a UUID + affinity key as the primary key
>>> B- Have the keys spelled out + affinity key. For example, assume person
>>> table, combination of age and name uniquely identifies a person, so the key
>>> will be person-name, person-age, and org-id.
>>> If I have an associative table joining persons and addresses (if address
>>> is a separate object), then in case B, I will have to include three fields
>>> from person and the id from the address table, as opposed to case A, where
>>> I will have UUID + orgid + address id. Would having one less field buy me
>>> much, as opposed to having the overhead of creating UUIDs?
>>>
>>> thanks
>>>
>>>


Re: Schema Questions

2020-05-11 Thread Evgenii Zhuravlev
Hi,

The main question here is how you want to use this data. Do you use SQL?

1) It depends on the use case. If you plan to access only a person object
without any filtering by addresses and you will always need the entire
object, it makes sense to have one big object. But in this case, you won't
be able to filter persons by addresses, since SQL doesn't work with
collections. So, if you want to use SQL, it definitely makes sense to use
the second approach.

2) Of course, if you already have a unique ID for the object, it makes sense
to use it as a key; there is no need to generate an additional field for this.

Evgenii

пн, 11 мая 2020 г. в 09:20, narges saleh :

> Hi All,
>
> I would appreciate your feedback, for the following, in terms of
> performance for both inserts and queries.
>
> 1) Which one of these patterns is preferable for the table design?
> A- Have a fat table/cache with nested objects, e.g. person table with a
> hashmap of addresses.
> B- Have person and address tables separate and just link them via foreign
> keys.
>
> 2) Which one of these patterns is preferable for primary keys?
> A- Have a UUID + affinity key as the primary key
> B- Have the keys spelled out + affinity key. For example, assume person
> table, combination of age and name uniquely identifies a person, so the key
> will be person-name, person-age, and org-id.
> If I have an associative table joining persons and addresses (if address is
> a separate object), then in case B, I will have to include three fields
> from person and the id from the address table, as opposed to case A, where
> I will have UUID + orgid + address id. Would having one less field buy me
> much, as opposed to having the overhead of creating UUIDs?
>
> thanks
>
>


Re: Cache was inconsistent state

2020-05-11 Thread Evgenii Zhuravlev
John,

Yes, client nodes should have this parameter too.

Evgenii

пн, 11 мая 2020 г. в 07:54, John Smith :

> I mean, should both the prefer-IPv4 flag and the Zookeeper discovery be on
> the "central" cluster as well as on all nodes specifically marked as client = true?
>
> On Mon, 11 May 2020 at 09:59, John Smith  wrote:
>
>> Should it be on client nodes as well, the ones specifically set client = true?
>>
>> On Fri, 8 May 2020 at 22:26, Evgenii Zhuravlev 
>> wrote:
>>
>>> John,
>>>
>>> It looks like a split-brain. They were in one cluster at first. I'm not
>>> sure what was the reason for this, it could be a network problem or
>>> something else.
>>>
>>> I saw in logs that you use both ipv4 and ipv6, I would recommend using
>>> only one of them to avoid problems - just add 
>>> -Djava.net.preferIPv4Stack=true
>>> to all nodes in the cluster.
>>>
>>> Also, to avoid split-brain situations, you can use Zookeeper Discovery:
>>> https://apacheignite.readme.io/docs/zookeeper-discovery#failures-and-split-brain-handling
>>>  or
>>> implement Segmentation resolver. More information about the second can be
>>> found on the forum, for example, here:
>>> http://apache-ignite-users.70518.x6.nabble.com/split-brain-problem-and-GridSegmentationProcessor-td14590.html
>>>
>>> Evgenii
>>>
>>> пт, 8 мая 2020 г. в 14:30, John Smith :
>>>
>>>> How though? It's the same cluster! We haven't changed anything;
>>>> this happened on its own...
>>>>
>>>> All I did was reboot the node and the cluster fixed itself.
>>>>
>>>> On Fri, 8 May 2020 at 15:32, Evgenii Zhuravlev <
>>>> e.zhuravlev...@gmail.com> wrote:
>>>>
>>>>> Hi John,
>>>>>
>>>>> *Yes, it looks like they are in different clusters:*
>>>>> *Metrics from the node with a problem:*
>>>>> [15:17:28,668][INFO][grid-timeout-worker-#23%xx%][IgniteKernal%xx]
>>>>>
>>>>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>>>> ^-- Node [id=5bbf262e, name=xx, uptime=93 days, 19:36:10.921]
>>>>> ^-- H/N/C [hosts=3, nodes=4, CPUs=10]
>>>>>
>>>>> *Metrics from another node:*
>>>>> [15:17:05,635][INFO][grid-timeout-worker-#23%xx%][IgniteKernal%xx]
>>>>>
>>>>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>>>> ^-- Node [id=dddefdcd, name=xx, uptime=19 days, 16:49:48.381]
>>>>> ^-- H/N/C [hosts=6, nodes=7, CPUs=21]
>>>>>
>>>>> *The same topology versions for 2 nodes has different nodes:*
>>>>> [03:56:17,643][INFO][disco-event-worker-#42%xx%][GridDiscoveryManager]
>>>>> Topology snapshot [ver=1036, locNode=5bbf262e, servers=1, clients=3,
>>>>> state=ACTIVE, CPUs=10, offheap=10.0GB, heap=13.0GB]
>>>>> [03:56:17,643][INFO][disco-event-worker-#42%xx%][GridDiscoveryManager]
>>>>>   ^-- Baseline [id=0, size=3, online=1, offline=2]
>>>>>
>>>>> *And*
>>>>>
>>>>> [03:56:43,388][INFO][disco-event-worker-#42%xx%][GridDiscoveryManager]
>>>>> Topology snapshot [ver=1036, locNode=4394fdd4, servers=2, clients=2,
>>>>> state=ACTIVE, CPUs=15, offheap=20.0GB, heap=19.0GB]
>>>>> [03:56:43,389][INFO][disco-event-worker-#42%xx%][GridDiscoveryManager]
>>>>>   ^-- Baseline [id=0, size=3, online=2, offline=1]
>>>>>
>>>>> So, it's just 2 different clusters.
>>>>>
>>>>> Best Regards,
>>>>> Evgenii
>>>>>
>>>>> пт, 8 мая 2020 г. в 08:50, John Smith :
>>>>>
>>>>>> Hi Evgenii, here the logs.
>>>>>>
>>>>>> https://www.dropbox.com/s/ke71qsoqg588kc8/ignite-logs.zip?dl=0
>>>>>>
>>>>>> On Fri, 8 May 2020 at 09:21, John Smith 
>>>>>> wrote:
>>>>>>
>>>>>>> Ok let me try get them...
>>>>>>>
>>>>>>> On Thu., May 7, 2020, 1:14 p.m. Evgenii Zhuravlev, <
>>>>>>> e.zhuravlev...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> It looks like the third server node was not a part of this cluster
>>>>>

Re: How to list caches and the number of key-values inside them

2020-05-11 Thread Evgenii Zhuravlev
Hi,

Ignite binaries contain visor cmd tool, it can do what you want:
https://apacheignite-tools.readme.io/docs/command-line-interface

Evgenii

пн, 11 мая 2020 г. в 00:05, scriptnull :

> I would like to know if there is a command line tool that will help us list
> down the caches and the respective number of keys present in each cache.
>
> The closest I have got is using `control.sh` to list down the names of the
> caches.
>
> Also since we are currently only using the redis layer to create the dbs, I
> wrote this little shell script to do it.
>
> ```
> dbsize() {
> /usr/share/apache-ignite/bin/control.sh --cache list redis-* | grep -o
> "redis-ignite-internal-cache-\w*" | sed -r
> 's/redis-ignite-internal-cache-//g' | xargs -n 1 --replace="{}" sh -c
> 'printf "Database Number = {} | DBSIZE = "; redis-cli -p 11211 -n {}
> dbsize'
> }
> ```
>
> But I really wonder, if there is a clean of doing this!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cache was inconsistent state

2020-05-08 Thread Evgenii Zhuravlev
John,

It looks like a split-brain. They were in one cluster at first. I'm not
sure what was the reason for this, it could be a network problem or
something else.

I saw in the logs that you use both IPv4 and IPv6; I would recommend using
only one of them to avoid problems - just add -Djava.net.preferIPv4Stack=true
to all nodes in the cluster.

Also, to avoid split-brain situations, you can use Zookeeper Discovery:
https://apacheignite.readme.io/docs/zookeeper-discovery#failures-and-split-brain-handling
or implement a segmentation resolver. More information about the latter can
be found on the forum, for example, here:
http://apache-ignite-users.70518.x6.nabble.com/split-brain-problem-and-GridSegmentationProcessor-td14590.html
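
For reference, a minimal sketch of enabling Zookeeper Discovery in code
(the connection string and timeout are illustrative, and the
ignite-zookeeper module must be on the classpath):

```
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi;

ZookeeperDiscoverySpi zkSpi = new ZookeeperDiscoverySpi();
zkSpi.setZkConnectionString("zk1:2181,zk2:2181,zk3:2181"); // illustrative hosts
zkSpi.setSessionTimeout(30_000);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(zkSpi); // must be set on every node in the cluster
```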

Evgenii

Fri, May 8, 2020 at 14:30, John Smith :

> How though? It's the same cluster! We haven't changed anything;
> this happened on its own...
>
> All I did was reboot the node and the cluster fixed itself.
>
> On Fri, 8 May 2020 at 15:32, Evgenii Zhuravlev 
> wrote:
>
>> Hi John,
>>
>> *Yes, it looks like they are in different clusters:*
>> *Metrics from the node with a problem:*
>> [15:17:28,668][INFO][grid-timeout-worker-#23%xx%][IgniteKernal%xx]
>>
>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>> ^-- Node [id=5bbf262e, name=xx, uptime=93 days, 19:36:10.921]
>> ^-- H/N/C [hosts=3, nodes=4, CPUs=10]
>>
>> *Metrics from another node:*
>> [15:17:05,635][INFO][grid-timeout-worker-#23%xx%][IgniteKernal%xx]
>>
>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>> ^-- Node [id=dddefdcd, name=xx, uptime=19 days, 16:49:48.381]
>> ^-- H/N/C [hosts=6, nodes=7, CPUs=21]
>>
>> *The same topology version on 2 nodes shows different nodes:*
>> [03:56:17,643][INFO][disco-event-worker-#42%xx%][GridDiscoveryManager]
>> Topology snapshot [ver=1036, locNode=5bbf262e, servers=1, clients=3,
>> state=ACTIVE, CPUs=10, offheap=10.0GB, heap=13.0GB]
>> [03:56:17,643][INFO][disco-event-worker-#42%xx%][GridDiscoveryManager]
>>   ^-- Baseline [id=0, size=3, online=1, offline=2]
>>
>> *And*
>>
>> [03:56:43,388][INFO][disco-event-worker-#42%xx%][GridDiscoveryManager]
>> Topology snapshot [ver=1036, locNode=4394fdd4, servers=2, clients=2,
>> state=ACTIVE, CPUs=15, offheap=20.0GB, heap=19.0GB]
>> [03:56:43,389][INFO][disco-event-worker-#42%xx%][GridDiscoveryManager]
>>   ^-- Baseline [id=0, size=3, online=2, offline=1]
>>
>> So, it's just 2 different clusters.
>>
>> Best Regards,
>> Evgenii
>>
>> Fri, May 8, 2020 at 08:50, John Smith :
>>
>>> Hi Evgenii, here are the logs.
>>>
>>> https://www.dropbox.com/s/ke71qsoqg588kc8/ignite-logs.zip?dl=0
>>>
>>> On Fri, 8 May 2020 at 09:21, John Smith  wrote:
>>>
>>>> Ok, let me try to get them...
>>>>
>>>> On Thu., May 7, 2020, 1:14 p.m. Evgenii Zhuravlev, <
>>>> e.zhuravlev...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> It looks like the third server node was not a part of this cluster
>>>>> before restart. Can you share full logs from all server nodes?
>>>>>
>>>>> Evgenii
>>>>>
>>>>> Thu, May 7, 2020 at 09:11, John Smith :
>>>>>
>>>>>> Hi, running 2.7.0 on 3 nodes deployed on VMs running Ubuntu.
>>>>>>
>>>>>> I checked the state of the cluster by going
>>>>>> to: /ignite?cmd=currentState
>>>>>> And the response was: 
>>>>>> {"successStatus":0,"error":null,"sessionToken":null,"response":true}
>>>>>> I also checked: /ignite?cmd=size=
>>>>>>
>>>>>> 2 nodes were reporting 3 million records
>>>>>> 1 node was reporting 2 million records.
>>>>>>
>>>>>> When I connected to visor and ran the node command... The details
>>>>>> were wrong, as it only showed 2 server nodes and only 1 client, but 3
>>>>>> server nodes actually exist and more clients are connected.
>>>>>>
>>>>>> So I rebooted the node that was claiming 2 million records instead of
>>>>>> 3, and when I re-ran the node command it displayed all the proper nodes.
>>>>>> Also, after the reboot all the nodes started reporting 2 million
>>>>>> records instead of 3 million, so was there some sort of rebalancing or
>>>>>> correction (the cache has a 90-day TTL)?

Re: Cache was inconsistent state

2020-05-08 Thread Evgenii Zhuravlev
Hi John,

*Yes, it looks like they are in different clusters:*
*Metrics from the node with a problem:*
[15:17:28,668][INFO][grid-timeout-worker-#23%xx%][IgniteKernal%xx]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=5bbf262e, name=xx, uptime=93 days, 19:36:10.921]
^-- H/N/C [hosts=3, nodes=4, CPUs=10]

*Metrics from another node:*
[15:17:05,635][INFO][grid-timeout-worker-#23%xx%][IgniteKernal%xx]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=dddefdcd, name=xx, uptime=19 days, 16:49:48.381]
^-- H/N/C [hosts=6, nodes=7, CPUs=21]

*The same topology version on 2 nodes shows different nodes:*
[03:56:17,643][INFO][disco-event-worker-#42%xx%][GridDiscoveryManager]
Topology snapshot [ver=1036, locNode=5bbf262e, servers=1, clients=3,
state=ACTIVE, CPUs=10, offheap=10.0GB, heap=13.0GB]
[03:56:17,643][INFO][disco-event-worker-#42%xx%][GridDiscoveryManager]
  ^-- Baseline [id=0, size=3, online=1, offline=2]

*And*

[03:56:43,388][INFO][disco-event-worker-#42%xx%][GridDiscoveryManager]
Topology snapshot [ver=1036, locNode=4394fdd4, servers=2, clients=2,
state=ACTIVE, CPUs=15, offheap=20.0GB, heap=19.0GB]
[03:56:43,389][INFO][disco-event-worker-#42%xx%][GridDiscoveryManager]
  ^-- Baseline [id=0, size=3, online=2, offline=1]

So, it's just 2 different clusters.

Best Regards,
Evgenii

Fri, May 8, 2020 at 08:50, John Smith :

> Hi Evgenii, here are the logs.
>
> https://www.dropbox.com/s/ke71qsoqg588kc8/ignite-logs.zip?dl=0
>
> On Fri, 8 May 2020 at 09:21, John Smith  wrote:
>
>> Ok, let me try to get them...
>>
>> On Thu., May 7, 2020, 1:14 p.m. Evgenii Zhuravlev, <
>> e.zhuravlev...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> It looks like the third server node was not a part of this cluster
>>> before restart. Can you share full logs from all server nodes?
>>>
>>> Evgenii
>>>
>>> Thu, May 7, 2020 at 09:11, John Smith :
>>>
>>>> Hi, running 2.7.0 on 3 nodes deployed on VMs running Ubuntu.
>>>>
>>>> I checked the state of the cluster by going to: /ignite?cmd=currentState
>>>> And the response was: 
>>>> {"successStatus":0,"error":null,"sessionToken":null,"response":true}
>>>> I also checked: /ignite?cmd=size=
>>>>
>>>> 2 nodes were reporting 3 million records
>>>> 1 node was reporting 2 million records.
>>>>
>>>> When I connected to visor and ran the node command... The details were
>>>> wrong, as it only showed 2 server nodes and only 1 client, but 3 server
>>>> nodes actually exist and more clients are connected.
>>>>
>>>> So I rebooted the node that was claiming 2 million records instead of 3,
>>>> and when I re-ran the node command it displayed all the proper nodes.
>>>> Also, after the reboot all the nodes started reporting 2 million records
>>>> instead of 3 million, so was there some sort of rebalancing or correction
>>>> (the cache has a 90-day TTL)?
>>>>
>>>>
>>>>
>>>> Before reboot
>>>>
>>>> +---+-----------------+---------------+-----------+----------+------+----------+-----------+
>>>> | # | Node ID8(@), IP | Consistent ID | Node Type | Up Time  | CPUs | CPU Load | Free Heap |
>>>> +---+-----------------+---------------+-----------+----------+------+----------+-----------+
>>>> | 0 | xx(@n0), xx.69  | xx            | Server    | 20:25:30 | 4    | 1.27 %   | 84.00 %   |
>>>> | 1 | xx(@n1), xx.1   | xx            | Client    | 13:12:01 | 3    | 0.67 %   | 74.00 %   |
>>>> | 2 | xx(@n2), xx.63  | xx            | Server    | 16:55:05 | 4    | 6.57 %   | 84.00 %   |
>>>> +---+-----------------+---------------+-----------+----------+------+----------+-----------+
>>>>
>>>> After reboot
>>>>
>>>> +---+-----------------+---------------+-----------+----------+------+----------+-----------+
>>>> | # | Node ID8(@), IP | Consistent ID | Node Type | Up Time  | CPUs | CPU Load | Free Heap |
>>>> +---+-----------------+---------------+-----------+----------+------+----------+-----------+
>>>> | 0 | xx(@n0), xx.69  | xx            | Server    | 21:13:45 | 4    | 0.77 %   | 56.00 %   |
>>>> | 1 | xx(@n1), xx.1   | xx            | Client    | 14:00:17 | 3    | 0.77 %   | 56.00 %   |
>>>> | 2 | xx(@n2), xx.63  | xx            | Server    | 17:43:20 | 4    | 1.00 %   | 60.00 %   |
>>>> | 3 | xx(@n3), xx.65  | xx            | Client    | 01:42:45 | 4    | 4.10 %   | 56.00 %   |
>>>> | 4 | xx(@n4), xx.65  | xx            | Client    | 01:42:45 | 4    | 3.93 %   | 56.00 %   |
>>>> | 5 | xx(@n5), xx.1   | xx            | Client    | 16:59:53 | 2    | 0.67 %   | 91.00 %   |
>>>> | 6 | xx(@n6), xx.79  | xx            | Server    | 00:41:31 | 4    | 1.00 %   | 97.00 %   |
>>>> +---+-----------------+---------------+-----------+----------+------+----------+-----------+
>>>>
>>>


Re: Deploying the Ignite Maven Project in LINUX

2020-05-07 Thread Evgenii Zhuravlev
Hi,

As for the fat jar: https://www.baeldung.com/executable-jar-with-maven

But it's not necessary to create a fat jar; you can just put all the needed
libraries in a lib folder and add them to the classpath when you
start the jar:
https://docs.oracle.com/javase/7/docs/technotes/tools/windows/classpath.html

Evgenii

Thu, May 7, 2020 at 12:47, nithin91 <
nithinbharadwaj.govindar...@franklintempleton.com>:

> Hi
>
> I am new to Java. If possible, can you please share a link on how to
> create a fat jar (i.e. a jar with all dependencies) using Eclipse.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Can you change ExpiryPolicy of existing cache?

2020-05-07 Thread Evgenii Zhuravlev
It's not changing the default expiry policy; you will have a new expiry
policy as long as you use this "cache" object. withExpiryPolicy returns a
proxy with the new policy, so all objects that are inserted using this
"cache" object will have the new policy.

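A minimal sketch of the difference (the cache name, key type and TTL are
illustrative, not from your setup):

```
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.Duration;
import javax.cache.expiry.ModifiedExpiryPolicy;
import org.apache.ignite.IgniteCache;

IgniteCache<Integer, String> cache = ignite.cache("myCache");

// withExpiryPolicy does not alter the cache itself - it returns a proxy.
IgniteCache<Integer, String> oneHour =
    cache.withExpiryPolicy(new ModifiedExpiryPolicy(new Duration(TimeUnit.HOURS, 1)));

oneHour.put(1, "expires one hour after the last modification");
cache.put(2, "still uses the cache's default expiry policy");
```
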
Evgenii

Thu, May 7, 2020 at 10:39, John Smith :

> Ok cool. I create my cache using a template and the REST API, but when I
> start my application I do...
>
> cache = this.ignite.cache(this.cacheName)
>     .withExpiryPolicy(new ModifiedExpiryPolicy(new Duration(timeUnit, this.cacheTtlDuration)));
>
> Can it be changed then, at startup, as I am doing above?
>
> Or at this point I can only do cache.withExpiryPolicy(...).put(key, value);
>
>
> On Thu, 7 May 2020 at 13:31, Evgenii Zhuravlev 
> wrote:
>
>> Hi,
>>
>> There is no way to change the default policy for the already created
>> cache. The expiry policy can be changed for all operations on one cache
>> proxy object using withExpiryPolicy.
>>
>> Evgenii
>>
>> Thu, May 7, 2020 at 09:46, John Smith :
>>
>>> Hi running 2.7.0
>>>
>>> I created a cache with ModifiedExpiryPolicy
>>>
>>> Can we change the policy of the created cache? I know we can do per
>>> write but can we change the default of the existing cache to another policy?
>>>
>>


Re: Can you change ExpiryPolicy of existing cache?

2020-05-07 Thread Evgenii Zhuravlev
Hi,

There is no way to change the default policy for the already created cache.
The expiry policy can be changed for all operations on one cache proxy
object using withExpiryPolicy.

Evgenii

Thu, May 7, 2020 at 09:46, John Smith :

> Hi running 2.7.0
>
> I created a cache with ModifiedExpiryPolicy
>
> Can we change the policy of the created cache? I know we can do per write
> but can we change the default of the existing cache to another policy?
>


Re: Cache was inconsistent state

2020-05-07 Thread Evgenii Zhuravlev
Hi,

It looks like the third server node was not a part of this cluster before
restart. Can you share full logs from all server nodes?

Evgenii

Thu, May 7, 2020 at 09:11, John Smith :

> Hi, running 2.7.0 on 3 nodes deployed on VMs running Ubuntu.
>
> I checked the state of the cluster by going to: /ignite?cmd=currentState
> And the response was: 
> {"successStatus":0,"error":null,"sessionToken":null,"response":true}
> I also checked: /ignite?cmd=size=
>
> 2 nodes were reporting 3 million records
> 1 node was reporting 2 million records.
>
> When I connected to visor and ran the node command... The details were
> wrong, as it only showed 2 server nodes and only 1 client, but 3 server
> nodes actually exist and more clients are connected.
>
> So I rebooted the node that was claiming 2 million records instead of 3,
> and when I re-ran the node command it displayed all the proper nodes.
> Also, after the reboot all the nodes started reporting 2 million records
> instead of 3 million, so was there some sort of rebalancing or correction
> (the cache has a 90-day TTL)?
>
>
>
> Before reboot
>
> +---+-----------------+---------------+-----------+----------+------+----------+-----------+
> | # | Node ID8(@), IP | Consistent ID | Node Type | Up Time  | CPUs | CPU Load | Free Heap |
> +---+-----------------+---------------+-----------+----------+------+----------+-----------+
> | 0 | xx(@n0), xx.69  | xx            | Server    | 20:25:30 | 4    | 1.27 %   | 84.00 %   |
> | 1 | xx(@n1), xx.1   | xx            | Client    | 13:12:01 | 3    | 0.67 %   | 74.00 %   |
> | 2 | xx(@n2), xx.63  | xx            | Server    | 16:55:05 | 4    | 6.57 %   | 84.00 %   |
> +---+-----------------+---------------+-----------+----------+------+----------+-----------+
>
> After reboot
>
> +---+-----------------+---------------+-----------+----------+------+----------+-----------+
> | # | Node ID8(@), IP | Consistent ID | Node Type | Up Time  | CPUs | CPU Load | Free Heap |
> +---+-----------------+---------------+-----------+----------+------+----------+-----------+
> | 0 | xx(@n0), xx.69  | xx            | Server    | 21:13:45 | 4    | 0.77 %   | 56.00 %   |
> | 1 | xx(@n1), xx.1   | xx            | Client    | 14:00:17 | 3    | 0.77 %   | 56.00 %   |
> | 2 | xx(@n2), xx.63  | xx            | Server    | 17:43:20 | 4    | 1.00 %   | 60.00 %   |
> | 3 | xx(@n3), xx.65  | xx            | Client    | 01:42:45 | 4    | 4.10 %   | 56.00 %   |
> | 4 | xx(@n4), xx.65  | xx            | Client    | 01:42:45 | 4    | 3.93 %   | 56.00 %   |
> | 5 | xx(@n5), xx.1   | xx            | Client    | 16:59:53 | 2    | 0.67 %   | 91.00 %   |
> | 6 | xx(@n6), xx.79  | xx            | Server    | 00:41:31 | 4    | 1.00 %   | 97.00 %   |
> +---+-----------------+---------------+-----------+----------+------+----------+-----------+
>


Re: Deploying the Ignite Maven Project in LINUX

2020-05-06 Thread Evgenii Zhuravlev
Hi,

You can just compile a jar file with all the dependencies and run it from
any machine.

Evgenii

Tue, May 5, 2020 at 03:14, nithin91 <
nithinbharadwaj.govindar...@franklintempleton.com>:

> Hi
>
> We have a couple of Ignite nodes running on Linux; the cache configuration
> details are specified in the bean file and deployed on both nodes. Now I
> am connecting to one of the nodes as a client and loading the cache from
> my local system.
>
> Is there a way I can deploy the Java code with its Maven dependencies on
> Linux and run the program there?
>
> I am using Eclipse as my IDE.
>
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache ignite evolvable object

2020-05-01 Thread Evgenii Zhuravlev
Hi,

BinaryObjects allow doing this:
https://apacheignite.readme.io/docs/binary-marshaller
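
A minimal sketch of the builder-based approach (the type, cache and field
names are illustrative):

```
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

// Build a value carrying a field the "old" class version does not declare.
BinaryObjectBuilder builder = ignite.binary().builder("com.example.Trade");
builder.setField("price", 101.5);
builder.setField("addedInV2", "kept even for readers of the old model");
BinaryObject trade = builder.build();

IgniteCache<Integer, BinaryObject> cache =
    ignite.<Integer, BinaryObject>getOrCreateCache("trades").withKeepBinary();
cache.put(1, trade);

// Readers can probe for fields instead of failing on unknown ones.
BinaryObject read = cache.get(1);
if (read.hasField("addedInV2")) {
    String extra = read.field("addedInV2"); // no model class needed on the classpath
}
```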

Evgenii

Fri, May 1, 2020 at 09:03, Hemambara :

> I am using Apache Ignite 2.8.0. Looking for an option to have an evolvable
> data model similar to Coherence (com.tangosol.io.Evolvable). Do we have any?
> The idea is to preserve future data if the domain model version is backward
> compatible, so that when the same model is transferred to the new version,
> we can retrieve fields from the future data without any data loss.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Data loss

2020-04-29 Thread Evgenii Zhuravlev
Hi,

Please attach full logs from all nodes for investigation.

Wed, Apr 29, 2020 at 06:48, krkumar24061...@gmail.com <
krkumar24061...@gmail.com>:

> Checked the logs and nothing like that happened; nothing went down.
>
> Thanx and Regards,
> KR Kumar
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Multicast Address

2020-04-24 Thread Evgenii Zhuravlev
Hi,

Server nodes should be able to communicate with each other directly, for
example, for rebalancing purposes or for sending the current partition
distribution. So, if they're able to communicate with each other directly,
you can either configure all their static IPs (not necessary for the
Multicast IP finder) or use one of the other IP finder implementations:
https://apacheignite.readme.io/docs/tcpip-discovery
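
A minimal sketch of the static-IP variant (the addresses are illustrative -
list each of your Pis there):

```
import java.util.Arrays;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
// 47500..47509 is the default discovery port range.
ipFinder.setAddresses(Arrays.asList("192.168.0.1:47500..47509",
                                    "192.168.0.2:47500..47509"));

TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setIpFinder(ipFinder);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(discoSpi);
```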

Evgenii

Fri, Apr 24, 2020 at 01:07, bribridnl :

> Hey guys, I have a very specific use case where Raspberry Pis are connected
> in a ring topology and I have a Docker service running Apache Ignite. Now I
> want them to be running the same distributed database, but I don't know what
> multicast address to use in the config.xml file.
> For now each RPi has a wifi dongle plugged in, and the dongle is connected
> to the wifi interface of the next Pi, etc., forming a ring. So they're not
> on the same subnetwork, but kind of, and they also act like routers between
> each other...
> Is someone able to help me?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Getting Remote node has peer class loading enabled flag different from local

2020-04-23 Thread Evgenii Zhuravlev
Right, only the nodes where this EntryProcessor should be executed. As for
peerClassLoading, it's easier to add it to the config template and use it
for all clients.

Evgenii

Thu, Apr 23, 2020 at 07:03, John Smith :

> Ah ok. So the other option is to copy my jar to the lib folder of each
> server node, correct?
>
> Like if one application needs a specific EntryProcessor, then that
> application and the server node need it on the classpath, but not the
> other clients? Just to be sure.
>
> On Thu, 23 Apr 2020 at 00:14, Evgenii Zhuravlev 
> wrote:
>
>> yes
>>
>> Wed, Apr 22, 2020 at 18:31, John Smith :
>>
>>> So client-enabled nodes need to set it also?
>>>
>>> On Wed, 22 Apr 2020 at 19:52, Evgenii Zhuravlev <
>>> e.zhuravlev...@gmail.com> wrote:
>>>
>>>> Hi John,
>>>>
>>>> Yes, you're right, this flag should be the same on all nodes in the
>>>> cluster. This message prints which node has a different value for this
>>>> flag, so you can find it.
>>>>
>>>> Evgenii
>>>>
>>>> Wed, Apr 22, 2020 at 16:22, John Smith :
>>>>
>>>>> Hi, getting the message in the subject line
>>>>>
>>>>> I'm pretty sure I have all my nodes enabled with
>>>>>
>>>>>   <property name="peerClassLoadingEnabled" value="true"/>
>>>>>
>>>>> I'm guessing this cannot work with client enabled nodes only?
>>>>>
>>>>> igniteConfig.setClientMode(true);
>>>>>
>>>>>
>>>>>


Re: Getting Remote node has peer class loading enabled flag different from local

2020-04-22 Thread Evgenii Zhuravlev
yes

Wed, Apr 22, 2020 at 18:31, John Smith :

> So client-enabled nodes need to set it also?
>
> On Wed, 22 Apr 2020 at 19:52, Evgenii Zhuravlev 
> wrote:
>
>> Hi John,
>>
>> Yes, you're right, this flag should be the same on all nodes in the
>> cluster. This message prints which node has a different value for this
>> flag, so you can find it.
>>
>> Evgenii
>>
>> Wed, Apr 22, 2020 at 16:22, John Smith :
>>
>>> Hi, getting the message in the subject line
>>>
>>> I'm pretty sure I have all my nodes enabled with
>>>
>>>   <property name="peerClassLoadingEnabled" value="true"/>
>>>
>>> I'm guessing this cannot work with client enabled nodes only?
>>>
>>> igniteConfig.setClientMode(true);
>>>
>>>
>>>


Re: Getting Remote node has peer class loading enabled flag different from local

2020-04-22 Thread Evgenii Zhuravlev
Hi John,

Yes, you're right, this flag should be the same on all nodes in the
cluster. This message prints which node has a different value for this
flag, so you can find it.
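
For completeness, a sketch of keeping the flag consistent in code (the
client-mode line applies only to client nodes; servers set just the first
property):

```
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration igniteConfig = new IgniteConfiguration();
igniteConfig.setPeerClassLoadingEnabled(true); // must match on every node
igniteConfig.setClientMode(true);              // only on client nodes
```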

Evgenii

Wed, Apr 22, 2020 at 16:22, John Smith :

> Hi, getting the message in the subject line
>
> I'm pretty sure I have all my nodes enabled with
>
>   <property name="peerClassLoadingEnabled" value="true"/>
>
> I'm guessing this cannot work with client enabled nodes only?
>
> igniteConfig.setClientMode(true);
>
>
>


Re: Subquery or Joins query not returning correct result Ignite V2.7.6

2020-04-22 Thread Evgenii Zhuravlev
Hi,

Can you share a small reproducer project?

Evgenii

Mon, Apr 20, 2020 at 23:58, siva :

> Hi All,
> I am using Apache Ignite v2.7.6, .NET client and server.
> Tables (*Company*, *CompanyTypes*) are created with QueryEntities at
> cache configuration time.
>
> I have two model classes, and in both classes all the properties are SQL
> query fields with private and public modifiers.
>
> Company (EntityId is the pk) and CompanyTypes (CompanyId and CapabilityId
> are both pk fields).
>
> *here is the sql query:*
>
>
> So I am facing an issue where the number of records returned from Ignite
> is wrong.
>
> The above query works fine on SQL Server, but the result returned from
> Ignite is wrong.
>
> For example: expected selected rows: 100 records;
> SQL Server result rows: 100 records;
> Ignite result rows: sometimes fewer, sometimes more.
>
> Please let me know if any other information needed.
>
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite -Option to close the open files

2020-04-21 Thread Evgenii Zhuravlev
Hi,

I don't think that it's possible just to close files. How many caches do
you have?

Evgenii

Tue, Apr 21, 2020 at 12:14, Sriveena Mattaparthi <
sriveena.mattapar...@ekaplus.com>:

> Hi,
>
>
>
> The Ignite servers in preproduction and production are going down with a
> "too many open files" error.
>
>
>
> *Caused by: java.nio.file.FileSystemException:
> /opt/apache-ignite-fabric-2.5.0-bin/work/db/node00-94e4310a-f450-4bbc-acfd-f84ab29a158c/cache-SQL_PUBLIC_x/part-185.bin:
> Too many open files*
>
>
>
> Based on the suggestion given in
> https://issues.apache.org/jira/browse/IGNITE-11783, we have increased the
> limit.
>
>
>
> But recently, even after increasing the limit to 30, the server has
> crashed again.
>
>
>
> Is there a way to programmatically close the open files, so that the
> threshold limit is not exceeded?
>
>
>
> Please advise, as we are facing the same issue on the production servers
> too, and it is very critical for us.
>
>
>
> Thanks,
>
> Sriveena
> “Confidentiality Notice: The contents of this email message and any
> attachments are intended solely for the addressee(s) and may contain
> confidential and/or privileged information and may be legally protected
> from disclosure. If you are not the intended recipient of this message or
> their agent, or if this message has been addressed to you in error, please
> immediately alert the sender by reply email and then delete this message
> and any attachments. If you are not the intended recipient, you are hereby
> notified that any use, dissemination, copying, or storage of this message
> or its attachments is strictly prohibited.”
>


Re: Unable to run several ContinuousQuery due to: Failed to unmarshal discovery data for component: CONTINUOUS_PROC

2020-04-21 Thread Evgenii Zhuravlev
Why does the client need to be serializable? Have you tried the suggestion
from this answer:
https://stackoverflow.com/questions/61293343/failed-to-unmarshal-discovery-data-for-component-continuous-proc-with-more-than/61318360#61318360
?
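
The usual shape of that suggestion, as a sketch (the class name is
illustrative): move the remote filter into its own top-level class, so the
anonymous inner class no longer captures - and therefore serializes - the
enclosing IgniteCacheClient:

```
import javax.cache.configuration.FactoryBuilder;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.cache.CacheEntryEventSerializableFilter;

// Standalone filter: it holds no hidden reference to the client instance.
public class KeyAboveTenFilter implements CacheEntryEventSerializableFilter<Integer, String> {
    @Override
    public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e) {
        return e.getKey() > 10;
    }
}

// Inside run():
// qry.setRemoteFilterFactory(FactoryBuilder.factoryOf(new KeyAboveTenFilter()));
```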

Evgenii

Tue, Apr 21, 2020 at 00:36, AlexBor :

> Hi Denis,
>
> Both clients are looking at the same server.
> Here are code samples:
>
> Server:
>
> public class IgniteServerCacheBootstrap {
>
>     final static Logger logger = LoggerFactory.getLogger(IgniteCacheClient.class);
>
>     public static void main(String[] args) throws IgniteCheckedException, InterruptedException {
>
>         IgniteConfiguration serverConfig = new IgniteConfiguration()
>             .setGridLogger(new Log4J2Logger("log4j2.xml"));
>
>         Ignite server = Ignition.start(serverConfig);
>         Thread.currentThread().join();
>     }
> }
>
>
> Client (I run two such clients in parallel). The code is mostly taken from
> Ignite samples:
>
> public class IgniteCacheClient implements Serializable {
>
>     Logger logger = LoggerFactory.getLogger(IgniteCacheClient.class);
>
>     private IgniteCache<Integer, String> igniteCache;
>
>     public IgniteCacheClient() throws IgniteCheckedException {
>         IgniteConfiguration clientConfig = new IgniteConfiguration()
>             .setGridLogger(new Log4J2Logger("log4j2.xml"))
>             .setClientMode(true);
>
>         Ignite client = Ignition.getOrStart(clientConfig);
>         igniteCache = client.getOrCreateCache("MY_CACHE");
>     }
>
>     public void run() throws InterruptedException {
>
>         // Create new continuous query.
>         ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
>
>         qry.setInitialQuery(new ScanQuery<>(new IgniteBiPredicate<Integer, String>() {
>             @Override
>             public boolean apply(Integer key, String val) {
>                 return key > 10;
>             }
>         }));
>
>         // Callback that is called locally when update notifications are received.
>         qry.setLocalListener(new CacheEntryUpdatedListener<Integer, String>() {
>             @Override
>             public void onUpdated(Iterable<CacheEntryEvent<? extends Integer, ? extends String>> evts) {
>                 for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
>                     logger.info("Updated entry [key=" + e.getKey() + ", val=" + e.getValue() + ']');
>             }
>         });
>
>         // This filter will be evaluated remotely on all nodes.
>         // Entries that pass this filter will be sent to the caller.
>         qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Integer, String>>() {
>             @Override
>             public CacheEntryEventFilter<Integer, String> create() {
>                 return new CacheEntryEventFilter<Integer, String>() {
>                     @Override
>                     public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e) {
>                         return e.getKey() > 10;
>                     }
>                 };
>             }
>         });
>
>         // Execute query.
>         QueryCursor<Cache.Entry<Integer, String>> cur = igniteCache.query(qry);
>
>         // Iterate through existing data.
>         for (Cache.Entry<Integer, String> e : cur)
>             logger.info("Queried existing entry [key=" + e.getKey() + ", val=" + e.getValue() + ']');
>
>         Thread.currentThread().join();
>     }
> }
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>

