Re: [MTCGA]: new failures in builds [5479103] need to be handled
I'll take a look.

Wed, Jul 22, 2020 at 06:29, :
> Hi Igniters,
>
> I've detected a new issue on TeamCity that needs to be handled. You are more
> than welcome to help.
>
> If your changes can lead to this failure(s): we're grateful that you
> volunteered to contribute to this project, but things change and you may no
> longer be able to finalize your contribution. Could you respond to this
> email and indicate whether you wish to continue and fix the test failures,
> or step down so that a committer may revert your commit.
>
> *New test failure in master
> CacheSerializableTransactionsTest.testConflictResolution
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=6728566098547435193=%3Cdefault%3E=testDetails
>
> Changes that may have led to the failure were made by
> - alexey scherbakov
> https://ci.ignite.apache.org/viewModification.html?modId=904804
>
> - Here's a reminder of what contributors agreed to do:
> https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute
> - Should you have any questions, please contact dev@ignite.apache.org
>
> Best Regards,
> Apache Ignite TeamCity Bot
> https://github.com/apache/ignite-teamcity-bot
> Notification generated at 06:29:19 22-07-2020

--
Best regards,
Alexei Scherbakov
[jira] [Created] (IGNITE-13282) Fix TcpDiscoveryCoordinatorFailureTest.testClusterFailedNewCoordinatorInitialized()
Vladimir Steshin created IGNITE-13282:
-

Summary: Fix TcpDiscoveryCoordinatorFailureTest.testClusterFailedNewCoordinatorInitialized()
Key: IGNITE-13282
URL: https://issues.apache.org/jira/browse/IGNITE-13282
Project: Ignite
Issue Type: Bug
Reporter: Vladimir Steshin
Assignee: Vladimir Steshin

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[MTCGA]: new failures in builds [5479103] need to be handled
Hi Igniters,

I've detected a new issue on TeamCity that needs to be handled. You are more
than welcome to help.

If your changes can lead to this failure(s): we're grateful that you
volunteered to contribute to this project, but things change and you may no
longer be able to finalize your contribution. Could you respond to this email
and indicate whether you wish to continue and fix the test failures, or step
down so that a committer may revert your commit.

*New test failure in master
CacheSerializableTransactionsTest.testConflictResolution
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=6728566098547435193=%3Cdefault%3E=testDetails

Changes that may have led to the failure were made by
- alexey scherbakov
https://ci.ignite.apache.org/viewModification.html?modId=904804

- Here's a reminder of what contributors agreed to do:
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute
- Should you have any questions, please contact dev@ignite.apache.org

Best Regards,
Apache Ignite TeamCity Bot
https://github.com/apache/ignite-teamcity-bot
Notification generated at 06:29:19 22-07-2020
Re: Removal of "default" cache from REST APIs
Hi Evgeniy,

Can you please confirm that your configuration includes this example config file:
https://github.com/apache/ignite/blob/master/examples/config/example-cache.xml#L26-L32

It contains the default cache configuration. My understanding is that when a
node starts up with it, the default cache should already be available.

Regards,
Saikat

On Sun, Jul 19, 2020 at 11:21 PM Evgeniy Rudenko wrote:
> Hi Saikat,
>
> I understand this, but as I wrote, this is pointless. It is not a safe
> fallback, because there is no "default" cache. If you try to use any API
> without a name, you will receive the following error:
>
> *org.apache.ignite.IgniteCheckedException: Failed to find cache for given
> cache name: default*
>
> It is better to replace it with a properly formed error message that
> points to the actual problem with the request. The API documents will be
> updated accordingly when the PR is merged.
>
> On Sun, Jul 19, 2020 at 12:56 AM Saikat Maitra wrote:
> > Hi Evgeniy,
> >
> > The default cacheName is a safe fallback for when the cache name is not
> > provided in the request. It is part of the REST documentation:
> > https://apacheignite.readme.io/docs/rest-api#put
> >
> > When a request does not include cacheName (it is an optional parameter),
> > DFLT_CACHE_NAME is used:
> > https://github.com/apache/ignite/pull/8041/files#diff-a3477d5e0cfdfcceed3371fc899a9d15L30
> >
> > Regards,
> > Saikat
> >
> > On Wed, Jul 15, 2020 at 10:43 PM Evgeniy Rudenko wrote:
> > > Hi guys,
> > >
> > > Most of the cache APIs try to use the "default" cache when cacheName
> > > is not provided. This is pointless, because we don't have such a cache
> > > by default. I would like to change that and just return a "Failed to
> > > find mandatory parameter in request" error if the name is absent.
> > >
> > > Please tell me if you have any concerns. The update can be found at
> > > https://github.com/apache/ignite/pull/8041
> > >
> > > --
> > > Best regards,
> > > Evgeniy
>
> --
> Best regards,
> Evgeniy
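[For reference, the example-cache.xml section linked above defines a cache that is "default" only by name; nothing creates it implicitly on a node that does not load this file. A sketch of that kind of definition in Spring XML (the exact properties in the repository file may differ):]

```xml
<property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <!-- "default" is just a name here; Ignite does not create it on its own. -->
        <property name="name" value="default"/>
        <property name="atomicityMode" value="ATOMIC"/>
        <property name="backups" value="1"/>
    </bean>
</property>
```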
[MTCGA]: new failures in builds [5479110] need to be handled
Hi Igniters,

I've detected a new issue on TeamCity that needs to be handled. You are more
than welcome to help.

If your changes can lead to this failure(s): we're grateful that you
volunteered to contribute to this project, but things change and you may no
longer be able to finalize your contribution. Could you respond to this email
and indicate whether you wish to continue and fix the test failures, or step
down so that a committer may revert your commit.

*New test failure in master
ContinuousQueryMarshallerTest.testRemoteFilterFactoryServer
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=4649996366513987762=%3Cdefault%3E=testDetails

Changes that may have led to the failure were made by
- alexey scherbakov
https://ci.ignite.apache.org/viewModification.html?modId=904804

- Here's a reminder of what contributors agreed to do:
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute
- Should you have any questions, please contact dev@ignite.apache.org

Best Regards,
Apache Ignite TeamCity Bot
https://github.com/apache/ignite-teamcity-bot
Notification generated at 02:14:19 22-07-2020
[jira] [Created] (IGNITE-13281) Test failed: GridIoManagerFileTransmissionSelfTest#testChunkHandlerInitSizeFail
Stanilovsky Evgeny created IGNITE-13281:
---

Summary: Test failed: GridIoManagerFileTransmissionSelfTest#testChunkHandlerInitSizeFail
Key: IGNITE-13281
URL: https://issues.apache.org/jira/browse/IGNITE-13281
Project: Ignite
Issue Type: Bug
Components: general
Affects Versions: 2.8.1
Reporter: Stanilovsky Evgeny
Attachments: Ignite_Tests_2.4_Java_8_9_10_11_Basic_1_24053.log.zip

I found this problem on TC (current master):

{code:java}
[14:33:27]W: [org.apache.ignite:ignite-core] class org.apache.ignite.IgniteException: Test exception. Initialization failed
[14:33:27]W: [org.apache.ignite:ignite-core]    at org.apache.ignite.internal.managers.communication.GridIoManagerFileTransmissionSelfTest$16.chunkHandler(GridIoManagerFileTransmissionSelfTest.java:777)
[14:33:27]W: [org.apache.ignite:ignite-core]    at org.apache.ignite.internal.managers.communication.GridIoManager.createReceiver(GridIoManager.java:3062)
[14:33:27]W: [org.apache.ignite:ignite-core]    at org.apache.ignite.internal.managers.communication.GridIoManager.receiveFromChannel(GridIoManager.java:2949)
[14:33:27]W: [org.apache.ignite:ignite-core]    at org.apache.ignite.internal.managers.communication.GridIoManager.processOpenedChannel(GridIoManager.java:2892)
[14:33:27]W: [org.apache.ignite:ignite-core]    at org.apache.ignite.internal.managers.communication.GridIoManager.access$4800(GridIoManager.java:243)
[14:33:27]W: [org.apache.ignite:ignite-core]    at org.apache.ignite.internal.managers.communication.GridIoManager$7.run(GridIoManager.java:1234)
[14:33:27]W: [org.apache.ignite:ignite-core]    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[14:33:27]W: [org.apache.ignite:ignite-core]    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
{code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
Re: Listening cluster activation events by default
Alex,

I think it makes sense to enable distribution of these events by default:
they won't introduce any performance impact, and they may be quite useful
for application lifecycle management.

The following events also make sense to enable by default:
- EVT_BASELINE_CHANGED
- EVT_CLUSTER_STATE_CHANGED

Events that won't have any impact but don't have great practical use either:
- EVT_BASELINE_AUTO_ADJUST_ENABLED_CHANGED
- EVT_BASELINE_AUTO_ADJUST_AWAITING_TIME_CHANGED

Denis

Tue, Jul 21, 2020 at 11:39, Alex Kozhenkov:
> Igniters,
>
> There are 2 events in Ignite (EVT_CLUSTER_ACTIVATED and
> EVT_CLUSTER_DEACTIVATED) that are only listened to by the coordinator. For
> other nodes to listen to them, the events must be included in
> IgniteConfiguration.
>
> There are also discovery events that are listened to by all nodes.
>
> Both activation and discovery events are rare, system-level, and
> cluster-wide, so I suggest enabling activation events by default on all
> nodes.
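[Until such a change lands, recording these events on every node requires listing them in IgniteConfiguration explicitly. A sketch in Spring XML, assuming the `util` namespace is declared in the surrounding context:]

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Activation events are not recorded unless explicitly included. -->
    <property name="includeEventTypes">
        <list>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_CLUSTER_ACTIVATED"/>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_CLUSTER_DEACTIVATED"/>
        </list>
    </property>
</bean>
```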
Re: [DISCUSSION] Ignite integration testing framework.
Discussed privately with Max. Discussion results are provided in the Slack
channel [1].

[1] https://the-asf.slack.com/archives/C016F4PS8KV/p1595336751234500

On Wed, Jul 15, 2020 at 3:59 PM Max Shonichev wrote:
> Anton, Nikolay,
>
> I want to share some more findings about ducktests that I stumbled upon
> while porting them to Tiden.
>
> The first problem is that GridGain Tiden-based tests by default use a
> realistic, production-like configuration for Ignite nodes, notably:
>
> - persistence enabled
> - ~120 caches in ~40 groups
> - data set size around 1M keys per cache
> - primitive and POJO cache values
> - extensive use of query entities (indices)
>
> When I tried to run 4 nodes with such a configuration in Docker, my
> notebook nearly burned. Nevertheless, the grid started and worked OK,
> except for one little "but": each successive version under test started
> slower and slower.
>
> 2.7.6 was the fastest, 2.8.0 and 2.8.1 were a little slower, and your
> fork (2.9.0-SNAPSHOT) failed to start 4 persistence-enabled nodes within
> the default 120-second timeout. In order to mimic the behavior of your
> tests, I had to turn off persistence and use only 1 cache as well.
>
> It's a pity that you completely ignore persistence and indices in your
> ducktests; otherwise you would quickly have run into the same limitation.
>
> I hope to adopt the Tiden Docker PoC to our TeamCity soon, and we'll try
> to git-bisect to find where this slowdown comes from. After that, I'll
> file a bug in the Ignite Jira.
>
> Another problem with your rebalance benchmark is its low accuracy due to
> the granularity of measurements. You don't actually measure rebalance
> time; you measure the time it takes to find a specific string in the
> logs, which is misleading.
>
> The scenario of your test is as follows:
>
> 1. start 3 server nodes
> 2. start 1 data-loading client, preload the data, stop the client
> 3. start 1 more server node
> 4. wait till the server joins the topology
> 5. wait till this server node completes the exchange and writes the
>    'rebalanced=true, wasRebalanced=false' message to the log
> 6. report the time taken by step 5 as 'Rebalance time'
>
> The confusing thing here is the 'wait till' implementation: you
> continuously re-scan the logs, sleeping one second between scans, until
> the message appears. That means the measured rebalance time has at least
> one-second granularity, or worse, even though it is reported with
> nanosecond precision.
>
> For such a lightweight configuration (a single in-memory cache) and such
> a small data set (only 1M keys), rebalancing is very fast and usually
> completes in under a second, or just slightly slower.
>
> Before waiting for the rebalance message, you first wait for the topology
> message, and that wait also takes time to execute. So by the time the
> Python part of the test performs the first scan of the logs, rebalancing
> is in most cases already done, and the time you report as
> '0.0760810375213623' is actually the time it takes to execute the
> log-scanning code.
>
> However, if rebalancing runs just a little slower after the topology
> update, the first scan of the logs fails, you sleep for a whole second,
> rescan the logs, find the message, and report it as '1.02205491065979'.
>
> Under different conditions, a dockerized application may run a little
> slower or a little faster, depending on overall system load, free memory,
> etc. I tried to increase the load on my laptop by running a browser or a
> Maven build, and the time to scan the logs fluctuated from 0.02 to 0.09
> or even 1.02 seconds. Note that in a CI environment, high system load
> from other tenants is quite ordinary.
>
> Suppose we adopted the rebalance improvements and all versions after
> 2.9.0 performed within 1 second, just like 2.9.0 itself. Then your
> benchmark could report a false negative (e.g. 0.02 for master and 0.03
> for the PR), while on the next re-run it would pass (e.g. 0.07 for master
> and 0.03 for the PR). That's not quite the 'stable and non-flaky' test
> the Ignite community wants.
>
> What suggestions do you have to improve the benchmark's measurement
> accuracy?
>
> A third question is about the PME-free switch benchmark. Under some
> conditions, LongTxStreamerApplication actually hangs up PME. This needs
> to be investigated further, but it was due either to persistence being
> enabled or to the missing -DIGNITE_ALLOW_ATOMIC_OPS_IN_TX=false.
>
> Can you share some details about the IGNITE_ALLOW_ATOMIC_OPS_IN_TX
> option? Also, have you tested the PME-free switch with
> persistence-enabled caches?
>
> On 09.07.2020 10:11, Max Shonichev wrote:
> > Anton,
> >
> > well, strange thing, but clean up and rerun helped.
> >
> > Ubuntu 18.04
> >
> > SESSION REPORT (ALL TESTS)
> > ducktape version: 0.7.7
> > session_id: 2020-07-06--003
> > run time: 4 minutes 44.835 seconds
> > tests run: 5
> > passed:
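[The granularity effect Max describes can be modeled in a few lines. This is a hypothetical model, not the actual ducktests code: the reported time can only land on "scan ticks", so a sub-second rebalance is reported either as roughly the cost of one log scan or as just over the poll interval, with nothing in between.]

```java
// Hypothetical model of a poll-and-rescan wait loop (not the ducktests code itself).
public class PollGranularity {
    /**
     * Time the benchmark would report for a rebalance that truly took
     * trueTime seconds, given the poll interval and the cost of one log
     * scan. The returned value can only land on scan-tick boundaries.
     */
    static double measured(double trueTime, double poll, double scanCost) {
        double t = 0.0;
        while (true) {
            t += scanCost;      // scanning the logs takes time itself
            if (trueTime <= t)
                return t;       // message found on this scan
            t += poll;          // sleep a full poll interval, then rescan
        }
    }

    public static void main(String[] args) {
        // Rebalance finished before the first scan: report is roughly the scan cost.
        System.out.println(measured(0.01, 1.0, 0.05));
        // Rebalance finished 0.5 s in: report jumps past the 1 s poll interval.
        System.out.println(measured(0.5, 1.0, 0.05));
    }
}
```

With a 1-second poll and a 0.05-second scan, a 0.01-second rebalance is reported as 0.05 and a 0.5-second one as about 1.10, which matches the 0.07-vs-1.02 jump observed above.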
Re: Getting rid of NONE cache rebalance mode
Alexey,

Thank you for the explanation. I feel I am missing a couple of bits to
understand the picture fully.

I am thinking about a case which I tend to call the Memcached use case:
a cache over an underlying storage with read-through and expiration, and
without any rebalancing at all. When new nodes enter, they take ownership
of some partitions from the already running nodes and serve client
requests. Entries for partitions that are no longer owned expire according
to the configuration.

Actually, I have an idea. My guess is that rebalancing is a smarter and
better approach than waiting for expiration. Am I right?

2020-07-21 15:31 GMT+03:00, Alexey Goncharuk:
> Ivan,
>
> In my understanding this mode does not work at all, even in the presence
> of ForceKeysRequest, which is now supposed to fetch values from peers in
> case of a miss. In this mode we 1) move partitions to the OWNING state
> unconditionally, and 2) choose an arbitrary OWNING node for the force
> keys request. Therefore, after a user starts two additional nodes in a
> cluster, the request may be mapped to a node which does not hold any
> data. We will do a read-through in this case, but it will result in a
> significant load increase on the third-party storage right after a node
> starts, which means that adding a node will increase, not decrease, the
> load on the database being cached. All these issues go away when the
> (A)SYNC mode is used.
>
> Val,
> The idea makes sense to me - a user can use the rebalance future to wait
> for the rebalance to finish. This will simplify the configuration even
> further.
>
> Mon, Jul 20, 2020 at 21:27, Valentin Kulichenko <valentin.kuliche...@gmail.com>:
>> +1 for deprecating/removing the NONE mode.
>>
>> Alexey, what do you think about the SYNC mode? In my experience, it does
>> not add much value either. I would go as far as removing the
>> rebalanceMode parameter altogether (probably in 3.0).
>>
>> -Val
>>
>> On Mon, Jul 20, 2020 at 11:09 AM Ivan Pavlukhin wrote:
>>> Alexey, Igniters,
>>>
>>> Could you please outline the motivation by answering the following
>>> questions?
>>> 1. Does this mode generally work correctly today?
>>> 2. Can this mode be useful at all?
>>>
>>> I can imagine that it might be useful in a transparent caching use
>>> case (if I did not misunderstand).
>>>
>>> 2020-07-20 20:39 GMT+03:00, Pavel Tupitsyn:
>>>> +1
>>>>
>>>> More evidence:
>>>> https://stackoverflow.com/questions/62902640/apache-ignite-cacherebalancemode-is-not-respected-by-nodes
>>>>
>>>> On Mon, Jul 20, 2020 at 8:26 PM Alexey Goncharuk wrote:
>>>>> Igniters,
>>>>>
>>>>> I would like to run the idea of deprecating and probably ignoring
>>>>> the NONE rebalance mode by the community. It's in the removal list
>>>>> for Ignite 3.0 [1], but it looks like it still confuses and creates
>>>>> issues for users [2].
>>>>>
>>>>> What about deprecating it in one of the next releases and even
>>>>> ignoring this constant in further releases, interpreting it as
>>>>> ASYNC, before Ignite 3.0? I find it hard to believe that any Ignite
>>>>> user actually has RebalanceMode.NONE set in their configuration,
>>>>> given its absolutely unpredictable behavior.
>>>>>
>>>>> Thanks for your thoughts,
>>>>> --AG
>>>>>
>>>>> [1] https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+3.0+Wishlist
>>>>> [2] http://apache-ignite-developers.2346864.n4.nabble.com/About-Rebalance-Mode-SYNC-amp-NONE-td47279.html

--
Best regards,
Ivan Pavlukhin
Re: Getting rid of NONE cache rebalance mode
Ivan,

In my understanding this mode does not work at all, even in the presence of
ForceKeysRequest, which is now supposed to fetch values from peers in case
of a miss. In this mode we 1) move partitions to the OWNING state
unconditionally, and 2) choose an arbitrary OWNING node for the force keys
request. Therefore, after a user starts two additional nodes in a cluster,
the request may be mapped to a node which does not hold any data. We will do
a read-through in this case, but it will result in a significant load
increase on the third-party storage right after a node starts, which means
that adding a node will increase, not decrease, the load on the database
being cached. All these issues go away when the (A)SYNC mode is used.

Val,
The idea makes sense to me - a user can use the rebalance future to wait for
the rebalance to finish. This will simplify the configuration even further.

Mon, Jul 20, 2020 at 21:27, Valentin Kulichenko <valentin.kuliche...@gmail.com>:
> +1 for deprecating/removing the NONE mode.
>
> Alexey, what do you think about the SYNC mode? In my experience, it does
> not add much value either. I would go as far as removing the
> rebalanceMode parameter altogether (probably in 3.0).
>
> -Val
>
> On Mon, Jul 20, 2020 at 11:09 AM Ivan Pavlukhin wrote:
>> Alexey, Igniters,
>>
>> Could you please outline the motivation by answering the following
>> questions?
>> 1. Does this mode generally work correctly today?
>> 2. Can this mode be useful at all?
>>
>> I can imagine that it might be useful in a transparent caching use case
>> (if I did not misunderstand).
>>
>> 2020-07-20 20:39 GMT+03:00, Pavel Tupitsyn:
>>> +1
>>>
>>> More evidence:
>>> https://stackoverflow.com/questions/62902640/apache-ignite-cacherebalancemode-is-not-respected-by-nodes
>>>
>>> On Mon, Jul 20, 2020 at 8:26 PM Alexey Goncharuk wrote:
>>>> Igniters,
>>>>
>>>> I would like to run the idea of deprecating and probably ignoring the
>>>> NONE rebalance mode by the community. It's in the removal list for
>>>> Ignite 3.0 [1], but it looks like it still confuses and creates
>>>> issues for users [2].
>>>>
>>>> What about deprecating it in one of the next releases and even
>>>> ignoring this constant in further releases, interpreting it as ASYNC,
>>>> before Ignite 3.0? I find it hard to believe that any Ignite user
>>>> actually has RebalanceMode.NONE set in their configuration, given its
>>>> absolutely unpredictable behavior.
>>>>
>>>> Thanks for your thoughts,
>>>> --AG
>>>>
>>>> [1] https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+3.0+Wishlist
>>>> [2] http://apache-ignite-developers.2346864.n4.nabble.com/About-Rebalance-Mode-SYNC-amp-NONE-td47279.html
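[For context, the mode under discussion is set per cache. Anyone who does have NONE configured would migrate by switching to ASYNC, e.g. in Spring XML (a sketch; the cache name "myCache" is made up):]

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- ASYNC instead of NONE, which would leave partitions unloaded. -->
    <property name="rebalanceMode" value="ASYNC"/>
</bean>
```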
[jira] [Created] (IGNITE-13280) Improper index usage, fields enumeration not used with pk index creation.
Stanilovsky Evgeny created IGNITE-13280:
---

Summary: Improper index usage, fields enumeration not used with pk index creation.
Key: IGNITE-13280
URL: https://issues.apache.org/jira/browse/IGNITE-13280
Project: Ignite
Issue Type: Bug
Components: sql
Affects Versions: 2.8.1
Reporter: Stanilovsky Evgeny
Assignee: Stanilovsky Evgeny

For example:

{code:java}
CREATE TABLE PUBLIC.TEST_TABLE (FIRST_NAME VARCHAR, LAST_NAME VARCHAR, ADDRESS VARCHAR, LANG VARCHAR,
  CONSTRAINT PK_PERSON PRIMARY KEY (FIRST_NAME, LAST_NAME));
CREATE INDEX "idx2" ON PUBLIC.TEST_TABLE (LANG, ADDRESS);
{code}

and the resulting explain:

{code:java}
SELECT
  "__Z0"."FIRST_NAME" AS "__C0_0",
  "__Z0"."LAST_NAME" AS "__C0_1",
  "__Z0"."ADDRESS" AS "__C0_2",
  "__Z0"."LANG" AS "__C0_3"
FROM "PUBLIC"."TEST_TABLE" "__Z0"
  /* PUBLIC.IDX2: ADDRESS > 0 */
WHERE "__Z0"."ADDRESS" > 0
{code}

It is erroneous to use "idx2" here, because the first index field, LANG, does
not match the predicate field, ADDRESS.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (IGNITE-13279) Ignition.start failing with error java.lang.IllegalStateException: Failed to parse version: -1595322995-
Keshava Munegowda created IGNITE-13279:
--

Summary: Ignition.start failing with error java.lang.IllegalStateException: Failed to parse version: -1595322995-
Key: IGNITE-13279
URL: https://issues.apache.org/jira/browse/IGNITE-13279
Project: Ignite
Issue Type: Bug
Components: examples, general
Affects Versions: 2.8.1
Reporter: Keshava Munegowda

I am using Apache Ignite: apache-ignite-2.8.1-bin.

I started the Apache Ignite node using:
./bin/ignite.sh ./examples/config/example-ignite.xml

The node started successfully with this message:

```
[root@mdw apache-ignite-2.8.1-bin]# ./bin/ignite.sh ./examples/config/example-ignite.xml
[02:19:43]    __
[02:19:43]   / _/ ___/ |/ / _/_ __/ __/
[02:19:43]  _/ // (7 7    // /  / / / _/
[02:19:43] /___/\___/_/|_/___/ /_/ /___/
[02:19:43]
[02:19:43] ver. 2.8.1#20200521-sha1:86422096
[02:19:43] 2020 Copyright(C) Apache Software Foundation
[02:19:43]
[02:19:43] Ignite documentation: http://ignite.apache.org
[02:19:43]
[02:19:43] Quiet mode.
[02:19:43]   ^-- Logging to file '/data/kmg/apache-ignite-2.8.1-bin/work/log/ignite-4135cf96.0.log'
[02:19:43]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[02:19:43]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[02:19:43]
[02:19:43] OS: Linux 3.10.0-1127.el7.x86_64 amd64
[02:19:43] VM information: OpenJDK Runtime Environment 1.8.0_252-b09 Oracle Corporation OpenJDK 64-Bit Server VM 25.252-b09
[02:19:43] Please set system property '-Djava.net.preferIPv4Stack=true' to avoid possible problems in mixed environments.
[02:19:43] Configured plugins:
[02:19:43]   ^-- None
[02:19:43]
[02:19:43] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT
[02:19:46] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[02:19:46] Security status [authentication=off, tls/ssl=off]
[02:19:48] Performance suggestions for grid (fix if possible)
[02:19:48] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[02:19:48]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
[02:19:48]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[02:19:48]   ^-- Specify JVM heap max size (add '-Xmx[g|G|m|M|k|K]' to JVM options)
[02:19:48]   ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[02:19:48]   ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
[02:19:48]   ^-- Speed up flushing of dirty pages by OS (alter vm.dirty_expire_centisecs parameter by setting to 500)
[02:19:48] Refer to this page for more performance suggestions: https://apacheignite.readme.io/docs/jvm-and-system-tuning
[02:19:48]
[02:19:48] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[02:19:48] Data Regions Configured:
[02:19:48]   ^-- default [initSize=256.0 MiB, maxSize=75.5 GiB, persistence=false, lazyMemoryAllocation=true]
[02:19:48]
[02:19:48] Ignite node started OK (id=4135cf96)
[02:19:48] Topology snapshot [ver=1, locNode=4135cf96, servers=1, clients=0, state=ACTIVE, CPUs=32, offheap=76.0GB, heap=27.0GB]
[02:19:48]   ^-- Baseline [id=0, size=1, online=1, offline=0]
```

Now, I have a benchmarking application which starts the Apache Ignite node
using the Java API:

Ignition.start("examples/config/example-ignite.xml");

This method fails with the error log below:

```
0 [main] DEBUG org.springframework.core.env.StandardEnvironment - Adding PropertySource 'systemProperties' with lowest search precedence
2 [main] DEBUG org.springframework.core.env.StandardEnvironment - Adding PropertySource 'systemEnvironment' with lowest search precedence
3 [main] DEBUG org.springframework.core.env.StandardEnvironment - Initialized StandardEnvironment with PropertySources [MapPropertySource@1928301845 {name='systemProperties', properties={java.runtime.name=OpenJDK Runtime Environment, sun.boot.library.path=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre/lib/amd64, java.vm.version=25.252-b09, java.vm.vendor=Oracle Corporation, java.vendor.url=http://java.oracle.com/, path.separator=:, java.vm.name=OpenJDK 64-Bit Server VM, file.encoding.pkg=sun.io, user.country=US, sun.java.launcher=SUN_STANDARD, sun.os.patch.level=unknown, java.vm.specification.name=Java Virtual Machine Specification, user.dir=/data/kmg/SBK, java.runtime.version=1.8.0_252-b09, java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment,
[jira] [Created] (IGNITE-13278) Forgotten logger isInfoEnabled check.
Stanilovsky Evgeny created IGNITE-13278:
---

Summary: Forgotten logger isInfoEnabled check.
Key: IGNITE-13278
URL: https://issues.apache.org/jira/browse/IGNITE-13278
Project: Ignite
Issue Type: Improvement
Components: general
Affects Versions: 2.8.1
Reporter: Stanilovsky Evgeny
Assignee: Stanilovsky Evgeny

In RO tests with -ea enabled we can get an assertion like this:

{code:java}
java.lang.AssertionError: Logging at INFO level without checking if INFO level is enabled: Cluster state was changed from ACTIVE to ACTIVE
	at org.apache.ignite.testframework.junits.logger.GridTestLog4jLogger.info(GridTestLog4jLogger.java:481) ~[ignite-core-tests.jar]
{code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
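[The fix the ticket asks for is the standard guard pattern: check the level before building and emitting the message. A minimal self-contained illustration with java.util.logging (Ignite's IgniteLogger exposes the analogous isInfoEnabled() check; the class and message below are made up for the demo):]

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedLogging {
    static final Logger LOG = Logger.getLogger("demo");

    static int messageBuilds = 0;

    // Building the message may be costly; count how often it actually happens.
    static String buildMessage() {
        messageBuilds++;
        return "Cluster state was changed from ACTIVE to ACTIVE";
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.WARNING); // INFO is disabled

        // Unguarded: buildMessage() still runs, even though nothing is logged.
        LOG.info(buildMessage());

        // Guarded: neither the message construction nor the call happens.
        if (LOG.isLoggable(Level.INFO))
            LOG.info(buildMessage());

        System.out.println(messageBuilds); // the message was built once, not twice
    }
}
```

The guard matters because the argument to an unguarded log call is always evaluated, whether or not the level is enabled.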
Listening cluster activation events by default
Igniters,

There are 2 events in Ignite (EVT_CLUSTER_ACTIVATED and
EVT_CLUSTER_DEACTIVATED) that are only listened to by the coordinator. For
other nodes to listen to them, the events must be included in
IgniteConfiguration.

There are also discovery events that are listened to by all nodes.

Both activation and discovery events are rare, system-level, and
cluster-wide, so I suggest enabling activation events by default on all
nodes.
Re: IEP-50 Thin Client Continuous Queries
Igniters,

Igor raised an interesting point in the PR: should we limit the number of
Continuous Queries together with other queries, according to
ClientConnectorConfiguration.maxOpenCursorsPerConn, or should we have a
separate limit? Technically, Ignite returns a QueryCursor, but it is very
different from other cursors.

On Fri, Jul 17, 2020 at 11:25 AM Pavel Tupitsyn wrote:
> The pull request is ready for review.
>
> On Fri, Jul 17, 2020 at 4:11 AM Igor Sapego wrote:
>> I've reviewed the changes made to the IEP and they look good to me.
>>
>> Best Regards,
>> Igor
>>
>> On Wed, Jul 15, 2020 at 1:03 PM Pavel Tupitsyn wrote:
>>> Alex,
>>>
>>> You are correct, OP_RESOURCE_CLOSE is enough. Removed the extra op.
>>>
>>> > If client closes CQ it doesn't want to receive any new events. Why
>>> > can't we just ignore events for this CQ after that moment?
>>> I don't think that our protocol should involve ignoring messages. If
>>> the client stops the query, the server should guarantee that no events
>>> will be sent to the client after the OP_RESOURCE_CLOSE response.
>>>
>>> I had some concerns about this guarantee, but after reviewing the
>>> GridNioServer logic, the current implementation with OP_RESOURCE_CLOSE
>>> seems to be fine.
>>>
>>> On Wed, Jul 15, 2020 at 10:09 AM Alex Plehanov wrote:
>>>> Pavel,
>>>>
>>>> > OP_QUERY_CONTINUOUS_END_NOTIFICATION is another client -> server
>>>> > message
>>>> I think you mean "server -> client" here.
>>>>
>>>> But I still don't get why we need it. I've briefly looked at the POC
>>>> implementation and, as far as I understand,
>>>> OP_QUERY_CONTINUOUS_END_NOTIFICATION can be sent only when
>>>> OP_RESOURCE_CLOSE is received by the server (the client closes the CQ
>>>> explicitly). If the client closes the CQ, it doesn't want to receive
>>>> any new events. Why can't we just ignore events for this CQ after
>>>> that moment?
>>>>
>>>> Also, in the current implementation
>>>> OP_QUERY_CONTINUOUS_END_NOTIFICATION is sent before the
>>>> OP_RESOURCE_CLOSE response, so the OP_RESOURCE_CLOSE response can be
>>>> used the same way as OP_QUERY_CONTINUOUS_END_NOTIFICATION.
>>>>
>>>> Such a notification (or something more generalized, like
>>>> OP_RESOURCE_CLOSED) can be helpful if the CQ is closed by someone
>>>> else (for example, if an administrator calls
>>>> QueryMXBean.cancelContinuous), but AFAIK we don't currently have
>>>> callbacks for this action on the user side.
>>>>
>>>> Wed, Jul 15, 2020 at 01:26, Pavel Tupitsyn:
>>>>> Igniters,
>>>>>
>>>>> I've made an important change to the IEP (and the POC):
>>>>> OP_QUERY_CONTINUOUS_END_NOTIFICATION is another client -> server
>>>>> message that notifies the client that the query has stopped and no
>>>>> more events should be expected.
>>>>>
>>>>> This is important because the client can't immediately stop
>>>>> listening for OP_QUERY_CONTINUOUS_EVENT_NOTIFICATION after sending
>>>>> OP_RESOURCE_CLOSE - some more events can be present in one of the
>>>>> buffers/queues of the server and/or the OS.
>>>>>
>>>>> Let me know if this makes sense.
>>>>>
>>>>> On Tue, Jul 14, 2020 at 3:25 PM Pavel Tupitsyn wrote:
>>>>>> I've removed the Initial Query from the POC and IEP (and left a
>>>>>> note there about the decision).
>>>>>>
>>>>>> Since there are no other comments or concerns, I'll move on with
>>>>>> the final implementation.
>>>>>>
>>>>>> On Fri, Jul 10, 2020 at 4:14 PM Pavel Tupitsyn wrote:
>>>>>>> Igor, Alex,
>>>>>>>
>>>>>>> I was aware of the duplicates issue with the initial query, but I
>>>>>>> did not give it a second thought.
>>>>>>>
>>>>>>> Now I see that Vladimir was right - the Initial Query seems to be
>>>>>>> pointless, since users can achieve the same by simply invoking a
>>>>>>> regular query.
>>>>>>>
>>>>>>> I will remove the Initial Query from the IEP and POC next week if
>>>>>>> there are no objections by then.
>>>>>>>
>>>>>>> On Fri, Jul 10, 2020 at 3:58 PM Alex Plehanov wrote:
>>>>>>>> Igor, Pavel,
>>>>>>>>
>>>>>>>> Here is the discussion about the removal: [1]
>>>>>>>>
>>>>>>>> [1]: http://apache-ignite-developers.2346864.n4.nabble.com/ContinuousQueryWithTransformer-implementation-questions-2-td21418i20.html#a22041
>>>>>>>>
>>>>>>>> Fri, Jul 10, 2020 at 17:44, Igor Sapego:
>>>>>>>>> Cannot find the proposal to remove them, so maybe it was not on
>>>>>>>>> the devlist, but here is a discussion about the problem: [1]
>>>>>>>>>
>>>>>>>>> [1] -
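[For reference, the existing cursor limit Pavel mentions is configured per client connector; in Spring XML it looks roughly like this (a sketch - 128 is the documented default, and whether continuous queries should count against it is exactly the open question):]

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="clientConnectorConfiguration">
        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
            <!-- Caps open query cursors per client connection (default 128). -->
            <property name="maxOpenCursorsPerConnection" value="128"/>
        </bean>
    </property>
</bean>
```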