[jira] [Created] (IGNITE-13432) Continuous Query deploys remote filter even on client nodes
Mikhail Cherkasov created IGNITE-13432: -- Summary: Continuous Query deploys remote filter even on client nodes Key: IGNITE-13432 URL: https://issues.apache.org/jira/browse/IGNITE-13432 Project: Ignite Issue Type: Improvement Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov CQ deployment can fail due to the absence of the remote filter class on client nodes. This doesn't make sense: we don't need the filter on nodes that don't store data. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (IGNITE-12808) Allow creating tables for existing caches
Mikhail Cherkasov created IGNITE-12808: -- Summary: Allow creating tables for existing caches Key: IGNITE-12808 URL: https://issues.apache.org/jira/browse/IGNITE-12808 Project: Ignite Issue Type: New Feature Components: sql Reporter: Mikhail Cherkasov If you have a big cache with a lot of data and you need to index it, right now you have to destroy the cache and create a new one to index your data, or create a new cache with a table and reload the data into it, which is definitely time-consuming and super inconvenient. I believe we can allow users to create tables for existing caches.
[jira] [Created] (IGNITE-12758) zookeeper discovery does not work due to missing dependency
Mikhail Cherkasov created IGNITE-12758: -- Summary: zookeeper discovery does not work due to missing dependency Key: IGNITE-12758 URL: https://issues.apache.org/jira/browse/IGNITE-12758 Project: Ignite Issue Type: Bug Affects Versions: 2.8 Reporter: Mikhail Cherkasov In the new ZooKeeper 3.5.5 version, some classes were moved to a new jar: [https://mvnrepository.com/artifact/org.apache.zookeeper/zookeeper-jute/3.5.5] which is missing from the Apache Ignite release. The server fails to start with the following exception:
[15:16:14,514][SEVERE][main][IgniteKernal] Got exception while starting (will rollback startup routine).
java.lang.NoClassDefFoundError: org/apache/zookeeper/data/Id
at org.apache.zookeeper.ZooDefs$Ids.(ZooDefs.java:111)
at org.apache.ignite.spi.discovery.zk.internal.ZookeeperClient.(ZookeeperClient.java:68)
at org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.joinTopology(ZookeeperDiscoveryImpl.java:783)
at org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.startJoinAndWait(ZookeeperDiscoveryImpl.java:714)
at org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi.spiStart(ZookeeperDiscoverySpi.java:483)
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:943)
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1960)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1276)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1703)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1117)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1035)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:921)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:820)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:659)
at org.apache.ignite.Ignition.start(Ignition.java:346)
at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300)
Caused by: java.lang.ClassNotFoundException: org.apache.zookeeper.data.Id
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 19 more
[jira] [Created] (IGNITE-12431) Allow to set inline size for implicit indexes per table
Mikhail Cherkasov created IGNITE-12431: -- Summary: Allow to set inline size for implicit indexes per table Key: IGNITE-12431 URL: https://issues.apache.org/jira/browse/IGNITE-12431 Project: Ignite Issue Type: Bug Components: sql Reporter: Mikhail Cherkasov Right now you can specify the inline size only for explicit indexes; there's no way to set it for implicit indexes, except the global flag IGNITE_MAX_INDEX_PAYLOAD_SIZE or per cache: [https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setSqlIndexMaxInlineSize-int-] However, if only one table requires a big inline size for its implicit indexes, specifying it for the whole cache is too much overhead. Let's introduce some way to do this per table.
[jira] [Created] (IGNITE-12276) Thin client uses Optimized marshaller for TreeSet and TreeMap
Mikhail Cherkasov created IGNITE-12276: -- Summary: Thin client uses Optimized marshaller for TreeSet and TreeMap Key: IGNITE-12276 URL: https://issues.apache.org/jira/browse/IGNITE-12276 Project: Ignite Issue Type: Bug Components: thin client Reporter: Mikhail Cherkasov The thin client uses the Optimized marshaller for TreeSet and TreeMap, while the thick client replaces them with BinaryTreeMap/BinaryTreeSet. As a result, it blocks schema changes for stored objects.
[jira] [Created] (IGNITE-11819) Add query timeouts support for JDBC statement
Mikhail Cherkasov created IGNITE-11819: -- Summary: Add query timeouts support for JDBC statement Key: IGNITE-11819 URL: https://issues.apache.org/jira/browse/IGNITE-11819 Project: Ignite Issue Type: Improvement Components: sql Reporter: Mikhail Cherkasov statement.setQueryTimeout(5_000); - this timeout doesn't have any effect for Ignite; it is simply ignored. Even if network delays last for minutes, the statement will wait all that time.
[jira] [Created] (IGNITE-10960) Thin client cannot retrieve data that was inserted with the Thick Ignite client when using a composite key
Mikhail Cherkasov created IGNITE-10960: -- Summary: Thin client cannot retrieve data that was inserted with the Thick Ignite client when using a composite key Key: IGNITE-10960 URL: https://issues.apache.org/jira/browse/IGNITE-10960 Project: Ignite Issue Type: Bug Affects Versions: 2.7 Reporter: Mikhail Cherkasov Attachments: ThinClientGets.java The thin client cannot retrieve data that was inserted with the thick Ignite client when using a composite key. See the attached reproducer, ThinClientGets.java:
{code:java}
thickCache.put(new TestKey("a", "0"), 1);
thickCache.get(new TestKey("a", "0")); // returns 1
thinCache.get(new TestKey("a", "0")); // returns null
{code}
[jira] [Created] (IGNITE-10886) JVM_OPTS and -J-PARAMS don't allow spaces
Mikhail Cherkasov created IGNITE-10886: -- Summary: JVM_OPTS and -J-PARAMS don't allow spaces Key: IGNITE-10886 URL: https://issues.apache.org/jira/browse/IGNITE-10886 Project: Ignite Issue Type: Improvement Components: general Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov JVM_OPTS and -J-PARAMS don't allow spaces, so you cannot pass, for example, -DIGNITE_CLUSTER_NAME="dev dev dev" to set a name for your cluster in WebConsole.
[jira] [Created] (IGNITE-10244) Peer classloading creates a new class on each call for nested compute tasks
Mikhail Cherkasov created IGNITE-10244: -- Summary: Peer classloading creates a new class on each call for nested compute tasks Key: IGNITE-10244 URL: https://issues.apache.org/jira/browse/IGNITE-10244 Project: Ignite Issue Type: Bug Affects Versions: 2.7 Reporter: Mikhail Cherkasov Attachments: JustServer.java, MyCall.java, NestedCall.java, Test.java If a compute task has embedded compute tasks, the embedded task will be loaded by peer class loading as a new class on each call, which leads to metaspace OOM. The reproducer is attached. Make sure that you run the Ignite nodes with -XX:MaxMetaspaceSize=64m; by default the JVM doesn't limit metaspace size. What happens:
# the client sends the compute task MyCall to server_1
# server_1 executes MyCall, and MyCall sends the NestedCall task to server_2
# server_2 loads NestedCall as a new class and executes it
# on the next iteration server_2 loads NestedCall as a new class again; after a few iterations this leads to OOM
[jira] [Created] (IGNITE-9966) Dump configuration to a distinct file, including all dynamic changes to the cluster
Mikhail Cherkasov created IGNITE-9966: - Summary: Dump configuration to a distinct file, including all dynamic changes to the cluster Key: IGNITE-9966 URL: https://issues.apache.org/jira/browse/IGNITE-9966 Project: Ignite Issue Type: Improvement Reporter: Mikhail Cherkasov Fix For: 2.8 Sometimes it's difficult to analyze issues: very often we can't restore node/cache configurations due to log rotation. I think we can dump configurations to the work dir on start. It would also be convenient to dump the configurations of all dynamically created caches to that file.
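The dump-on-start idea could look roughly like the sketch below; the `config-dumps` directory, file naming, and the `ConfigDumper` API are illustrative assumptions for this ticket, not an agreed design or existing Ignite code:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the proposal: write the effective configuration to a distinct
// file under the work directory on startup, so it survives log rotation.
// Directory layout, file name, and method signature are assumptions.
final class ConfigDumper {
    static Path dump(Path workDir, String name, String configText) throws IOException {
        Path dir = Files.createDirectories(workDir.resolve("config-dumps"));
        // One file per node or cache; dynamically created caches would
        // each get their own dump file here as well.
        Path file = dir.resolve(name + ".cfg");
        return Files.write(file, configText.getBytes(StandardCharsets.UTF_8));
    }
}
```

The same helper could be invoked both on node start and from the dynamic-cache creation path.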
[jira] [Created] (IGNITE-9965) Ignite.sh must enable GC logs by default
Mikhail Cherkasov created IGNITE-9965: - Summary: Ignite.sh must enable GC logs by default Key: IGNITE-9965 URL: https://issues.apache.org/jira/browse/IGNITE-9965 Project: Ignite Issue Type: Improvement Reporter: Mikhail Cherkasov Fix For: 2.8 Almost always you need to monitor GC, and almost always people forget to enable GC logs, so why not enable them by default? Let's check whether the JVM is Oracle/OpenJDK and no GC log configuration is specified, and in that case collect GC logs; we may also add a script variable to specify a path for the logs.
[jira] [Created] (IGNITE-9925) Validate QueryEntities on cache creation
Mikhail Cherkasov created IGNITE-9925: - Summary: Validate QueryEntities on cache creation Key: IGNITE-9925 URL: https://issues.apache.org/jira/browse/IGNITE-9925 Project: Ignite Issue Type: Improvement Components: cache Affects Versions: 2.6 Reporter: Mikhail Cherkasov Fix For: 2.8 It's possible to create a cache with an index for a non-existing field and kill the whole cluster. I think this should be prevented by configuration validation; see the reproducer in https://issues.apache.org/jira/browse/IGNITE-9907
[jira] [Created] (IGNITE-9924) Rename "allow non-collocated joins" to "allow distributed joins"
Mikhail Cherkasov created IGNITE-9924: - Summary: Rename "allow non-collocated joins" to "allow distributed joins" Key: IGNITE-9924 URL: https://issues.apache.org/jira/browse/IGNITE-9924 Project: Ignite Issue Type: Improvement Components: visor Affects Versions: 2.6 Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Attachments: Screen Shot 2018-10-17 at 15.33.33.png If you google "allow non-collocated joins", the results point to documentation about distributed joins, and our API uses the is/setDistributedJoins methods. So why do we have "non-collocated" in WebConsole? Let's rename it to "allow distributed joins" to make it more consistent with our API and docs.
[jira] [Created] (IGNITE-9907) Wrong index field name makes the whole cluster fail
Mikhail Cherkasov created IGNITE-9907: - Summary: Wrong index field name makes the whole cluster fail Key: IGNITE-9907 URL: https://issues.apache.org/jira/browse/IGNITE-9907 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Attachments: WrongFields.java A wrong index field name makes the whole cluster fail, and there's no reliable way to recover from this state; exchange fails with the exception: 2018-10-16 14:42:56,842][ERROR][exchange-worker-#42%server_0%][GridCachePartitionExchangeManager] Failed to wait for completion of partition map exchange (preloading will not start): GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryCustomEvent [customMsg=null, affTopVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], super=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=6859ef9c-cceb-4d8a-8d5b-c1cd2cd192b7, addrs=[0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.75.84], sockAddrs=[/192.168.75.84:0, /0:0:0:0:0:0:0:1%lo0:0, /127.0.0.1:0], discPort=0, order=2, intOrder=2, lastExchangeTime=1539726176458, loc=false, ver=2.4.3#19691231-sha1:, isClient=true], topVer=2, nodeId8=0d8b289d, msg=null, type=DISCOVERY_CUSTOM_EVT, tstamp=1539726176684]], crd=TcpDiscoveryNode [id=0d8b289d-32aa-402e-8e71-137977559979, addrs=[0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.75.84], sockAddrs=[/192.168.75.84:47500, /0:0:0:0:0:0:0:1%lo0:47500, /127.0.0.1:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1539726176493, loc=true, ver=2.4.3#19691231-sha1:, isClient=false], exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], discoEvt=DiscoveryCustomEvent [customMsg=null, affTopVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], super=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=6859ef9c-cceb-4d8a-8d5b-c1cd2cd192b7, addrs=[0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.75.84], sockAddrs=[/192.168.75.84:0, /0:0:0:0:0:0:0:1%lo0:0, /127.0.0.1:0], discPort=0, order=2, intOrder=2, lastExchangeTime=1539726176458, loc=false, 
ver=2.4.3#19691231-sha1:, isClient=true], topVer=2, nodeId8=0d8b289d, msg=null, type=DISCOVERY_CUSTOM_EVT, tstamp=1539726176684]], nodeId=6859ef9c, evt=DISCOVERY_CUSTOM_EVT], added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=false, hash=1240595188], init=false, lastVer=null, partReleaseFut=PartitionReleaseFuture [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], futures=[ExplicitLockReleaseFuture [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], futures=[]], TxReleaseFuture [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], futures=[]], AtomicUpdateReleaseFuture [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], futures=[]], DataStreamerReleaseFuture [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], futures=[, exchActions=null, affChangeMsg=null, initTs=1539726176695, centralizedAff=false, forceAffReassignment=false, changeGlobalStateE=null, done=true, state=CRD, evtLatch=0, remaining=[], super=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=java.lang.IndexOutOfBoundsException: Index: 0, Size: 0, hash=1559339235]] class org.apache.ignite.IgniteCheckedException: Index: 0, Size: 0 at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7332) at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:259) at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:207) at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:159) at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2374) at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 at java.util.ArrayList.rangeCheck(ArrayList.java:657) at java.util.ArrayList.get(ArrayList.java:433) at 
org.apache.ignite.internal.processors.cache.CacheGroupContext.singleCacheContext(CacheGroupContext.java:374) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.(GridDhtLocalPartition.java:194) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.getOrCreatePartition(GridDhtPartitionTopologyImpl.java:816) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.initPartitions(GridDhtPartitionTopologyImpl.java:381) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.beforeExchange(GridDhtPartitionTopologyImpl.java:554) at
[jira] [Created] (IGNITE-9906) Add new method to get or wait for cache
Mikhail Cherkasov created IGNITE-9906: - Summary: Add new method to get or wait for cache Key: IGNITE-9906 URL: https://issues.apache.org/jira/browse/IGNITE-9906 Project: Ignite Issue Type: Improvement Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Attachments: Client.java, Server.java Due to the async nature of Ignite, an Ignite client might get a cache creation event later than the rest of the cluster, so if a server node creates a cache and passes its name to a client, the client might fail to get this cache and client.cache(name) will return null:
# the server creates a cache with server.getOrCreateCache() and returns from the getOrCreateCache method
# the server sends the cache name to the client
# the client calls client.cache(cacheName) and gets null
This can be worked around by adding retries, but that's boilerplate code that we could add to our API: we can overload the existing ignite.cache() method and add a timeout for waiting.
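The retry workaround described above can be sketched as a small helper. `CacheAwait` and its parameters are hypothetical names for illustration, not part of the Ignite API; the lookup lambda stands in for `client.cache(cacheName)`:

```java
import java.util.function.Supplier;

// Hypothetical helper sketching the retry workaround: poll a lookup
// (e.g. () -> client.cache(cacheName)) until it returns non-null or a
// timeout elapses. Names and defaults are illustrative, not Ignite API.
final class CacheAwait {
    static <T> T getWithTimeout(Supplier<T> lookup, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            T val = lookup.get();      // e.g. client.cache(cacheName)
            if (val != null)
                return val;            // the cache has appeared on this node
            if (System.currentTimeMillis() >= deadline)
                return null;           // caller decides how to fail
            Thread.sleep(pollMs);      // back off before the next lookup
        }
    }
}
```

A built-in overload such as the proposed `ignite.cache(name, timeout)` would hide exactly this loop from user code.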
[jira] [Created] (IGNITE-9844) Replace action in pessimistic transaction makes value unwrap and causes ClassNotFoundException
Mikhail Cherkasov created IGNITE-9844: - Summary: Replace action in pessimistic transaction makes value unwrap and causes ClassNotFoundException Key: IGNITE-9844 URL: https://issues.apache.org/jira/browse/IGNITE-9844 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 2.6 Reporter: Mikhail Cherkasov Attachments: SimpleTest.java The problem can be reproduced only if you replace the existing value in a cache inside pessimistic transaction and server node doesn't have the class for the value which the node already has in the cache. The reproducer is attached, please make sure that you run server node without model class in class path. Stack trace: {code:java} [2018-10-10 10:16:31,828][ERROR][pub-#52%grid_0%][GridJobWorker] Failed to execute job [jobId=07acafe5661-c681a6d3-e7ab-4516-9931-e817e77cac5b, ses=GridJobSessionImpl [ses=GridTaskSessionImpl [taskName=class_not_found.SimpleTest$Task, dep=GridDeployment [ts=1539191791633, depMode=SHARED, clsLdr=GridDeploymentClassLoader [id=db2aafe5661-f793ea41-94c4-4ae0-8276-8cc771e48fa9, singleNode=false, nodeLdrMap=HashMap {c681a6d3-e7ab-4516-9931-e817e77cac5b=96acafe5661-c681a6d3-e7ab-4516-9931-e817e77cac5b}, p2pTimeout=5000, usrVer=0, depMode=SHARED, quiet=false], clsLdrId=db2aafe5661-f793ea41-94c4-4ae0-8276-8cc771e48fa9, userVer=0, loc=false, sampleClsName=class_not_found.SimpleTest$Task, pendingUndeploy=false, undeployed=false, usage=1]SharedDeployment [rmv=false, super=], taskClsName=class_not_found.SimpleTest$Task, sesId=f6acafe5661-c681a6d3-e7ab-4516-9931-e817e77cac5b, startTime=1539191791465, endTime=9223372036854775807, taskNodeId=c681a6d3-e7ab-4516-9931-e817e77cac5b, clsLdr=GridDeploymentClassLoader [id=db2aafe5661-f793ea41-94c4-4ae0-8276-8cc771e48fa9, singleNode=false, nodeLdrMap=HashMap {c681a6d3-e7ab-4516-9931-e817e77cac5b=96acafe5661-c681a6d3-e7ab-4516-9931-e817e77cac5b}, p2pTimeout=5000, usrVer=0, depMode=SHARED, quiet=false], closed=false, cpSpi=null, failSpi=null, loadSpi=null, usage=1, 
fullSup=false, internal=false, topPred=null, subjId=c681a6d3-e7ab-4516-9931-e817e77cac5b, mapFut=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=550280323]IgniteFuture [orig=], execName=null], jobId=07acafe5661-c681a6d3-e7ab-4516-9931-e817e77cac5b]] class org.apache.ignite.IgniteException: class_not_found.SimpleTest$MyDomainObject at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1858) at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:568) at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6797) at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:562) at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:491) at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) at org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1191) at org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1923) at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569) at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197) at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127) at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException: class_not_found.SimpleTest$MyDomainObject at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:707) at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1757) at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716) at org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:798) at org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:143) at org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinary(CacheObjectUtils.java:177) at org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinaryIfNeeded(CacheObjectUtils.java:67) at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:125) at
[jira] [Created] (IGNITE-9702) Make IGNITE_BINARY_SORT_OBJECT_FIELDS enabled by default
Mikhail Cherkasov created IGNITE-9702: - Summary: Make IGNITE_BINARY_SORT_OBJECT_FIELDS enabled by default Key: IGNITE-9702 URL: https://issues.apache.org/jira/browse/IGNITE-9702 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov Fix For: 3.0 Right now BinObjectImpl(a=1,b=2) != BinObjectImpl(a=2,b=1): the hash code of a binary object depends on the field order, so for Ignite these are two different objects. This is unclear and counter-intuitive for a user. However, this cannot be changed until 3.0, because it breaks compatibility with existing storage and requires a migration with downtime, plus a utility that will migrate binary objects to the new internal field order.
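The order dependence can be illustrated with a plain-Java analogy (this shows the principle only, not Ignite's actual binary hashing code): folding fields in write order gives different hashes for the same logical object, while sorting field names first makes the hash stable regardless of write order.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

// Analogy for binary-object hashing; not Ignite's real algorithm.
final class FieldOrderHash {
    // Folds fields in iteration (i.e. write) order: order-sensitive.
    static int writeOrderHash(Map<String, Integer> fields) {
        int h = 0;
        for (Map.Entry<String, Integer> e : fields.entrySet()) {
            h = 31 * h + e.getKey().hashCode();
            h = 31 * h + e.getValue();
        }
        return h;
    }

    // Sorting field names first, which is conceptually what
    // IGNITE_BINARY_SORT_OBJECT_FIELDS does, makes the hash
    // independent of the order fields were written in.
    static int sortedHash(Map<String, Integer> fields) {
        return writeOrderHash(new TreeMap<>(fields));
    }
}
```

With sorting enabled by default, the two BinObjectImpl instances above would hash identically.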
[jira] [Created] (IGNITE-9600) SQL update fails if the SQL updates more than one field
Mikhail Cherkasov created IGNITE-9600: - Summary: SQL update fails if the sql updates more than one field Key: IGNITE-9600 URL: https://issues.apache.org/jira/browse/IGNITE-9600 Project: Ignite Issue Type: Bug Components: sql Affects Versions: 2.6 Reporter: Mikhail Cherkasov SQL update fails if the sql updates more than one field: {noformat} Exception in thread "main" javax.cache.CacheException: Failed to execute map query on remote node [nodeId=9c46dc49-30d2-46ca-a2ec-8b1be7f19c91, errMsg=Failed to execute SQL query. Data conversion error converting "Entity2"; SQL statement: SELECT __Z0._KEY __C0_0, __Z0._VAL __C0_1, ((?1 AND (__Z0.ENTITY_NAME2 = ?2)) AND (__Z0.CONTINENT = ?3)) __C0_2 FROM PUBLIC.ENTITY_TABLE __Z0 WHERE __Z0.ENTITY_ID = ?4 [22018-195]] at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.fail(GridReduceQueryExecutor.java:288) at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onFail(GridReduceQueryExecutor.java:278) at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onMessage(GridReduceQueryExecutor.java:257) at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor$2.onMessage(GridReduceQueryExecutor.java:202) at org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage(GridIoManager.java:2349) at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569) at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197) at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127) at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at 
java.lang.Thread.run(Thread.java:748){noformat} It looks like Ignite or the underlying H2 engine generates the query incorrectly: in ((?1 AND (__Z0.ENTITY_NAME2 = ?2)) AND (__Z0.CONTINENT = ?3)) __C0_2 the name of the first field is missing. The reproducer is attached.
[jira] [Created] (IGNITE-9527) NPE in CacheLateAffinityAssignmentTest#testNoForceKeysRequests test
Mikhail Cherkasov created IGNITE-9527: - Summary: NPE in CacheLateAffinityAssignmentTest#testNoForceKeysRequests test Key: IGNITE-9527 URL: https://issues.apache.org/jira/browse/IGNITE-9527 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov The wrong assertion was removed by the following ticket: https://issues.apache.org/jira/browse/IGNITE-5510; however, an NPE can still be observed in the logs. Since [~sboikov] said that this method can be called concurrently and it's valid to have null here, I think we should remove the NPE from the logs.
[jira] [Created] (IGNITE-9488) GridSpringCacheManagerMultiJvmSelfTest#testSyncCache test hangs
Mikhail Cherkasov created IGNITE-9488: - Summary: GridSpringCacheManagerMultiJvmSelfTest#testSyncCache test hangs Key: IGNITE-9488 URL: https://issues.apache.org/jira/browse/IGNITE-9488 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov GridSpringCacheManagerMultiJvmSelfTest#testSyncCache test hangs
[jira] [Created] (IGNITE-9184) Cluster hangs during concurrent node restart and continuous query registration
Mikhail Cherkasov created IGNITE-9184: - Summary: Cluster hangs during concurrent node restart and continuous query registration Key: IGNITE-9184 URL: https://issues.apache.org/jira/browse/IGNITE-9184 Project: Ignite Issue Type: Bug Components: general Affects Versions: 2.6 Reporter: Mikhail Cherkasov Fix For: 2.7 Attachments: StressTest.java, stacktrace Please check the attached test case and stack trace. I can see the "Failed to wait for initial partition map exchange" message.
[jira] [Created] (IGNITE-9099) IgniteCache java doc does not cover all possible exceptions
Mikhail Cherkasov created IGNITE-9099: - Summary: IgniteCache java doc does not cover all possible exceptions Key: IGNITE-9099 URL: https://issues.apache.org/jira/browse/IGNITE-9099 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov The IgniteCache javadoc does not cover all possible exceptions. For example, if you try to close a cache after the node has stopped, you get the following exception: org.apache.ignite.IgniteException: Failed to execute dynamic cache change request, node is stopping. at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:986) at org.apache.ignite.internal.util.future.IgniteFutureImpl.convertException(IgniteFutureImpl.java:168) at org.apache.ignite.internal.util.future.IgniteFutureImpl.get(IgniteFutureImpl.java:137) at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.close(GatewayProtectedCacheProxy.java:1346) However, the javadoc of the close method doesn't mention any exception at all.
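Until the javadoc lists the possible exceptions, callers can wrap close() defensively. The `closeQuietly` helper below is a generic sketch, not part of the Ignite API; it works for any AutoCloseable, including a cache proxy whose close() may throw the unchecked IgniteException shown above:

```java
// Generic sketch: close a resource (such as a cache obtained from a node
// that may already be stopping) without letting a late "node is stopping"
// exception propagate out of shutdown code. Not part of the Ignite API.
final class Resources {
    static void closeQuietly(AutoCloseable resource) {
        if (resource == null)
            return;
        try {
            resource.close();
        } catch (Exception e) {
            // IgniteException is unchecked, so it is caught here too;
            // log and continue shutting down instead of failing.
            System.err.println("Ignoring close failure: " + e);
        }
    }
}
```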
[jira] [Created] (IGNITE-8985) Node segmented itself after connRecoveryTimeout
Mikhail Cherkasov created IGNITE-8985: - Summary: Node segmented itself after connRecoveryTimeout Key: IGNITE-8985 URL: https://issues.apache.org/jira/browse/IGNITE-8985 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov Attachments: Archive.zip I can see the following message in the logs: [2018-07-10 16:27:13,111][WARN ][tcp-disco-msg-worker-#2] Unable to connect to next nodes in a ring, it seems local node is experiencing connectivity issues. Segmenting local node to avoid case when one node fails a big part of cluster. To disable that behavior set TcpDiscoverySpi.setConnectionRecoveryTimeout() to 0. [connRecoveryTimeout=1, effectiveConnRecoveryTimeout=1] [2018-07-10 16:27:13,112][WARN ][disco-event-worker-#61] Local node SEGMENTED: TcpDiscoveryNode [id=e1a19d8e-2253-458c-9757-e3372de3bef9, addrs=[127.0.0.1, 172.17.0.1, 172.25.1.17], sockAddrs=[/172.17.0.1:47500, lab17.gridgain.local/172.25.1.17:47500, /127.0.0.1:47500], discPort=47500, order=2, intOrder=2, lastExchangeTime=1531229233103, loc=true, ver=2.4.7#20180710-sha1:a48ae923, isClient=false] I have a failure detection timeout of 60_000, and during the test GC pauses were under 25 seconds, so I don't expect the node to be segmented. Logs are attached.
[jira] [Created] (IGNITE-8941) BinaryInvalidTypeException is thrown on invoke
Mikhail Cherkasov created IGNITE-8941: - Summary: BinaryInvalidTypeException is thrown on invoke Key: IGNITE-8941 URL: https://issues.apache.org/jira/browse/IGNITE-8941 Project: Ignite Issue Type: Bug Components: general Reporter: Mikhail Cherkasov Attachments: MyPocTest.java Reproducer is attached. The following exception is thrown: [2018-07-05 16:31:44,554][ERROR][Thread-6][GridDhtAtomicCache] Unexpected exception during cache update class org.apache.ignite.binary.BinaryInvalidTypeException: invoke0 at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:707) at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1757) at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716) at org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:798) at org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:143) at org.apache.ignite.internal.processors.cache.GridCacheUtils.value(GridCacheUtils.java:1312) at org.apache.ignite.internal.processors.cache.GridCacheReturn.addEntryProcessResult(GridCacheReturn.java:253) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2553) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1898) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1740) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1630) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299) at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:483) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:443) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1119) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.invoke0(GridDhtAtomicCache.java:827) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.invoke(GridDhtAtomicCache.java:787) at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1417) at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1461) at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.invoke(GatewayProtectedCacheProxy.java:1220) at my_poc_test.MyPocTest$InvokeTask.run(MyPocTest.java:172) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.ClassNotFoundException: invoke0 at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:8640) at org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:349) at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:698) ... 22 more -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-8926) Deadlock in meta data registration
Mikhail Cherkasov created IGNITE-8926: - Summary: Deadlock in meta data registration Key: IGNITE-8926 URL: https://issues.apache.org/jira/browse/IGNITE-8926 Project: Ignite Issue Type: Bug Components: general Reporter: Mikhail Cherkasov Assignee: Ilya Lantukh Attachments: 11948_WorkdayFabricManager.jstack Please find the attached jstack file with a deadlock. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-8908) NPE in BinaryMetadataTransport
Mikhail Cherkasov created IGNITE-8908: - Summary: NPE in BinaryMetadataTransport Key: IGNITE-8908 URL: https://issues.apache.org/jira/browse/IGNITE-8908 Project: Ignite Issue Type: Bug Components: general Reporter: Mikhail Cherkasov -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-8847) Node doesn't stop on node verification failure
Mikhail Cherkasov created IGNITE-8847: - Summary: Node doesn't stop on node verification failure Key: IGNITE-8847 URL: https://issues.apache.org/jira/browse/IGNITE-8847 Project: Ignite Issue Type: Bug Components: general Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Node doesn't stop on verification failure -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-8845) GridUnsafe.allocateMemory throws OutOfMemoryError which isn't handled
Mikhail Cherkasov created IGNITE-8845: - Summary: GridUnsafe.allocateMemory throws OutOfMemoryError which isn't handled Key: IGNITE-8845 URL: https://issues.apache.org/jira/browse/IGNITE-8845 Project: Ignite Issue Type: Bug Components: general Affects Versions: 2.5 Reporter: Mikhail Cherkasov Fix For: 2.6 Attachments: Main.java If there's no more native memory, Unsafe.allocateMemory throws java.lang.OutOfMemoryError. An Error is a type of throwable after which the application cannot be recovered; it has to be closed and restarted. I think in this case we can handle it and throw IgniteOOM instead. A reproducer is attached; it throws the following exception: Exception in thread "main" java.lang.OutOfMemoryError at sun.misc.Unsafe.allocateMemory(Native Method) at org.apache.ignite.internal.util.GridUnsafe.allocateMemory(GridUnsafe.java:1068) at org.apache.ignite.internal.mem.unsafe.UnsafeMemoryProvider.nextRegion(UnsafeMemoryProvider.java:80) at org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.addSegment(PageMemoryNoStoreImpl.java:612) at org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.allocatePage(PageMemoryNoStoreImpl.java:287) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
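The fix this ticket asks for can be illustrated with a small sketch: wrap the native allocation so the JVM-level OutOfMemoryError is translated into an exception the caller can handle. All names here (SafeAllocator, Allocator, NativeOomException, allocateOrFail) are hypothetical, not Ignite's actual API, and the allocation backend is abstracted away so the idea can be shown without real off-heap allocation.

```java
// Hypothetical sketch: translate the JVM-level OutOfMemoryError thrown by a
// native allocation into an exception the caller can recover from, instead
// of letting the Error propagate. All names are illustrative, not Ignite API.
class SafeAllocator {
    /** Illustrative stand-in for an IgniteOutOfMemoryException-style exception. */
    static class NativeOomException extends RuntimeException {
        NativeOomException(String msg, Throwable cause) {
            super(msg, cause);
        }
    }

    /** Allocation backend abstracted out so the sketch needs no real off-heap memory. */
    interface Allocator {
        long allocate(long bytes);
    }

    static long allocateOrFail(Allocator backend, long bytes) {
        try {
            return backend.allocate(bytes);
        }
        catch (OutOfMemoryError e) {
            // Translate the unrecoverable Error into an exception callers can handle.
            throw new NativeOomException("Failed to allocate " + bytes + " bytes of off-heap memory", e);
        }
    }
}
```

A caller can then catch NativeOomException and, for example, refuse the cache operation instead of losing the whole node.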
[jira] [Created] (IGNITE-8799) Web agent hides connection exceptions
Mikhail Cherkasov created IGNITE-8799: - Summary: Web agent hides connection exceptions Key: IGNITE-8799 URL: https://issues.apache.org/jira/browse/IGNITE-8799 Project: Ignite Issue Type: Bug Components: visor Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Web agent hides connection exceptions, and this complicates the analysis of connection problems. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-8778) Cache tests fail due to a short timeout
Mikhail Cherkasov created IGNITE-8778: - Summary: Cache tests fail due to a short timeout Key: IGNITE-8778 URL: https://issues.apache.org/jira/browse/IGNITE-8778 Project: Ignite Issue Type: Bug Components: general Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Cache tests can fail due to a timeout: [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-6515019727174930828=testDetails] Usually they pass; the tests take ~50 seconds, which is close to the timeout. If TC is overloaded, the tests can take >60 seconds, which leads to false failures. We need to increase the timeout to avoid this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-8683) Test fails after IGNITE-6639
Mikhail Cherkasov created IGNITE-8683: - Summary: Test fails after IGNITE-6639 Key: IGNITE-8683 URL: https://issues.apache.org/jira/browse/IGNITE-8683 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov Example of failed tests: org.apache.ignite.internal.processors.service.GridServiceProcessorMultiNodeSelfTest#testDeployOnEachNodeButClientUpdateTopology Instead of checking the address for loopback, we should compare addr with locHost, because nodes can use the same port but different local addresses, and both addresses can be loopback. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-8660) Under some circumstances a server node can re-join the cluster with the same id
Mikhail Cherkasov created IGNITE-8660: - Summary: Under some circumstances a server node can re-join the cluster with the same id Key: IGNITE-8660 URL: https://issues.apache.org/jira/browse/IGNITE-8660 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Under some circumstances a server node can re-join the cluster with the same id; we need validation for this. The simplest approach is to check the node id against the topology history and drop the new node if the same id already exists in topHist. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
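The suggested validation can be sketched roughly as follows. JoinValidator and tryJoin are illustrative names, not Ignite internals; a real implementation would consult the discovery topology history (topHist) rather than a plain in-memory set.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

// Hypothetical sketch of the proposed validation: remember the node ids seen
// in the topology history and reject a joining node whose id is already
// present. Illustrative names only, not Ignite internals.
class JoinValidator {
    private final Set<UUID> topHist = new HashSet<>();

    /** Returns true if the node may join; false if its id was already seen. */
    boolean tryJoin(UUID nodeId) {
        // Set.add() returns false when the id already exists, which is
        // exactly the "re-join with the same id" case to reject.
        return topHist.add(nodeId);
    }
}
```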
[jira] [Created] (IGNITE-8658) Add info message for complete partition exchange
Mikhail Cherkasov created IGNITE-8658: - Summary: Add info message for complete partition exchange Key: IGNITE-8658 URL: https://issues.apache.org/jira/browse/IGNITE-8658 Project: Ignite Issue Type: Bug Components: general Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov It's very difficult to debug PME problems without a message about its completion. This message is only available at debug level, yet very often only info-level logs are available for analysis. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-8656) GridServiceProcessor does re-assignment even if no assignment is changed
Mikhail Cherkasov created IGNITE-8656: - Summary: GridServiceProcessor does re-assignment even if no assignment is changed Key: IGNITE-8656 URL: https://issues.apache.org/jira/browse/IGNITE-8656 Project: Ignite Issue Type: Bug Components: managed services Reporter: Mikhail Cherkasov GridServiceProcessor does re-assignment even if no assignment has changed, and this causes excessive transactions on the system replicated cache. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-8530) Exchange hangs during start/restart stress test
Mikhail Cherkasov created IGNITE-8530: - Summary: Exchange hangs during start/restart stress test Key: IGNITE-8530 URL: https://issues.apache.org/jira/browse/IGNITE-8530 Project: Ignite Issue Type: Bug Components: general Affects Versions: 2.4 Reporter: Mikhail Cherkasov Attachments: LocalRunner.java, Main2.java Please see the attached test: it first starts N_CORES*2+2 nodes and then starts N_CORES*2 threads with a while(true) loop that closes and starts nodes with a small random pause. After a couple of minutes it hangs with "Failed to wait for partition map exchange". -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-8502) Ignite client can hang during a rejoin
Mikhail Cherkasov created IGNITE-8502: - Summary: Ignite client can hang during a rejoin Key: IGNITE-8502 URL: https://issues.apache.org/jira/browse/IGNITE-8502 Project: Ignite Issue Type: Bug Components: general Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov If the server node doesn't respond to the client with TcpDiscoveryNodeAddFinishedMessage, the client can wait forever if joinTimeout == 0: [https://github.com/apache/ignite/blob/b6f1ab7a4cc3be5a09d14e4775a0f45ac09c87a5/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/ClientImpl.java#L1866] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-8153) Nodes fail to connect to each other when SSL is enabled
Mikhail Cherkasov created IGNITE-8153: - Summary: Nodes fail to connect to each other when SSL is enabled Key: IGNITE-8153 URL: https://issues.apache.org/jira/browse/IGNITE-8153 Project: Ignite Issue Type: Bug Components: general Affects Versions: 2.4 Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Fix For: 2.5 Nodes can fail to connect to each other when SSL is enabled under some circumstances. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7883) Cluster can have inconsistent affinity configuration
Mikhail Cherkasov created IGNITE-7883: - Summary: Cluster can have inconsistent affinity configuration Key: IGNITE-7883 URL: https://issues.apache.org/jira/browse/IGNITE-7883 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 2.3 Reporter: Mikhail Cherkasov Fix For: 2.5 A cluster can have an inconsistent affinity configuration if you create two nodes, one with an affinity key configuration and the other without it (in IgniteCfg or CacheCfg); both nodes will work fine with no exceptions, but at the same time they will apply different affinity rules to keys:
{code:java}
public class Test {
    private static int id = 0;

    public static void main(String[] args) {
        Ignite ignite = Ignition.start(getConfiguration(true, false));
        Ignite ignite2 = Ignition.start(getConfiguration(false, false));

        Affinity affinity = ignite.affinity("TEST");
        Affinity affinity2 = ignite2.affinity("TEST");

        for (int i = 0; i < 1_000_000; i++) {
            AKey key = new AKey(i);

            if (affinity.partition(key) != affinity2.partition(key))
                System.out.println("FAILED for: " + key);
        }
    }

    @NotNull
    private static IgniteConfiguration getConfiguration(boolean withAffinityCfg, boolean client) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        TcpDiscoveryVmIpFinder finder = new TcpDiscoveryVmIpFinder(true);
        finder.setAddresses(Arrays.asList("localhost:47500..47600"));

        cfg.setClientMode(client);
        cfg.setIgniteInstanceName("test" + id++);

        if (withAffinityCfg) {
            CacheConfiguration cacheCfg = new CacheConfiguration("TEST");
            cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            cacheCfg.setCacheMode(CacheMode.PARTITIONED);
            cacheCfg.setKeyConfiguration(new CacheKeyConfiguration("multiplan.AKey", "a"));
            cfg.setCacheConfiguration(cacheCfg);
        }

        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(finder));

        return cfg;
    }
}

class AKey {
    int a;

    public AKey(int a) {
        this.a = a;
    }

    @Override
    public String toString() {
        return "AKey{" + "b=" + a + '}';
    }
}
{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7880) Enum values not shown correctly in Webconsole
Mikhail Cherkasov created IGNITE-7880: - Summary: Enum values not shown correctly in Webconsole Key: IGNITE-7880 URL: https://issues.apache.org/jira/browse/IGNITE-7880 Project: Ignite Issue Type: Bug Components: visor Affects Versions: 2.3 Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Fix For: 2.5 Attachments: image-2018-03-05-13-34-48-992.png Enum values not shown correctly in Webconsole: !image-2018-03-05-13-34-48-992.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7793) SQL does not work if value has an indexed field whose name equals the affinity key name
Mikhail Cherkasov created IGNITE-7793: - Summary: SQL does not work if value has an indexed field whose name equals the affinity key name Key: IGNITE-7793 URL: https://issues.apache.org/jira/browse/IGNITE-7793 Project: Ignite Issue Type: Bug Components: sql Affects Versions: 2.3 Reporter: Mikhail Cherkasov Fix For: 2.5 SQL does not work if the value has an indexed field whose name equals the affinity key name:
{code:java}
public class AKey {
    @AffinityKeyMapped
    int a;

    public AKey(int a) {
        this.a = a;
    }
}

public class AVal {
    @QuerySqlField
    int a;

    public AVal(int a) {
        this.a = a;
    }
}

AKey aKey = new AKey(1);
AVal aVal = new AVal(0);
IgniteCache
[jira] [Created] (IGNITE-7707) Read lock for key/keys
Mikhail Cherkasov created IGNITE-7707: - Summary: Read lock for key/keys Key: IGNITE-7707 URL: https://issues.apache.org/jira/browse/IGNITE-7707 Project: Ignite Issue Type: Improvement Components: cache Reporter: Mikhail Cherkasov Right now there's no way to take a read lock for a key or keys, so if you need to process several keys you can only lock them in a transaction or via lockAll; but in that case you cannot process them in parallel for reads. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
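Until such an API exists, a caller-side workaround along these lines is possible: keep one ReentrantReadWriteLock per key so multiple readers can proceed in parallel while a writer still gets exclusive access. This is a hypothetical sketch (KeyReadWriteLocks is not an Ignite class) and it only coordinates threads within a single JVM, unlike Ignite's distributed locks.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

// Hypothetical caller-side sketch of the requested behavior, valid only
// within one JVM (unlike Ignite's distributed locks): one read/write lock
// per key, so several readers can process the same key in parallel.
class KeyReadWriteLocks<K> {
    private final ConcurrentHashMap<K, ReadWriteLock> locks = new ConcurrentHashMap<>();

    private ReadWriteLock lockFor(K key) {
        return locks.computeIfAbsent(key, k -> new ReentrantReadWriteLock());
    }

    /** Runs the action under the shared (read) lock for the key. */
    <T> T withReadLock(K key, Supplier<T> action) {
        ReadWriteLock l = lockFor(key);
        l.readLock().lock();
        try {
            return action.get();
        }
        finally {
            l.readLock().unlock();
        }
    }

    /** Runs the action under the exclusive (write) lock for the key. */
    <T> T withWriteLock(K key, Supplier<T> action) {
        ReadWriteLock l = lockFor(key);
        l.writeLock().lock();
        try {
            return action.get();
        }
        finally {
            l.writeLock().unlock();
        }
    }
}
```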
[jira] [Created] (IGNITE-7666) "Failed to parse query exception" has no description to find error in query
Mikhail Cherkasov created IGNITE-7666: - Summary: "Failed to parse query exception" has no description to find error in query Key: IGNITE-7666 URL: https://issues.apache.org/jira/browse/IGNITE-7666 Project: Ignite Issue Type: Bug Components: sql Affects Versions: 2.3 Reporter: Mikhail Cherkasov Fix For: 2.5 As an example, in the query below there are wrong quote characters around the alias name (it requires either no quotes or double quotes), but the exception gives no clue for finding the error in the query. This query is simple and the error is easy to find, but it becomes almost impossible to find the error in real-life queries: {noformat} 0: jdbc:ignite:thin://127.0.0.1/> SELECT Name as 'super_name' from person p where p.name = 'test'; Error: Failed to parse query: SELECT Name as 'super_name' from person p where p.name = 'test' (state=42000,code=0) java.sql.SQLException: Failed to parse query: SELECT Name as 'super_name' from person p where p.name = 'test' at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671) at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130) at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299) at sqlline.Commands.execute(Commands.java:823) at sqlline.Commands.sql(Commands.java:733) at sqlline.SqlLine.dispatch(SqlLine.java:795) at sqlline.SqlLine.begin(SqlLine.java:668) at sqlline.SqlLine.start(SqlLine.java:373) at sqlline.SqlLine.main(SqlLine.java:265){noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7663) AssertionError/NPE on "CREATE SCHEMA"
Mikhail Cherkasov created IGNITE-7663: - Summary: AssertionError/NPE on "CREATE SCHEMA" Key: IGNITE-7663 URL: https://issues.apache.org/jira/browse/IGNITE-7663 Project: Ignite Issue Type: Bug Components: sql Affects Versions: 2.3 Reporter: Mikhail Cherkasov Fix For: 2.5 Instead of an UnsupportedOperationException, we get an AssertionError: [https://stackoverflow.com/questions/48708238/ignite-database-create-schema-assertionerror] An Error means that we can't continue working and should terminate the process, because it is now in an unknown state and its behavior is unpredictable; but I don't think that's the case here, is it? With assertions disabled, we get an NPE there instead; either way, I expect to see an UnsupportedOperationException if we try to run SQL that is not supported yet. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7654) Geospatial queries does not work for JDBC/ODBC
Mikhail Cherkasov created IGNITE-7654: - Summary: Geospatial queries does not work for JDBC/ODBC Key: IGNITE-7654 URL: https://issues.apache.org/jira/browse/IGNITE-7654 Project: Ignite Issue Type: Bug Components: jdbc, odbc, sql, thin client Affects Versions: 2.3 Reporter: Mikhail Cherkasov Fix For: 2.5 Geospatial queries do not work for JDBC/ODBC. I can create a table with GEOMETRY from sqlline, like this: {code:java} CREATE TABLE GEO_TABLE(GID INTEGER PRIMARY KEY, THE_GEOM GEOMETRY);{code} table creation works fine, I can add rows: {code:java} INSERT INTO GEO_TABLE(GID, THE_GEOM) VALUES (2, 'POINT(500 505)');{code} but there's no way to select GEOMETRY objects: {code:java} SELECT THE_GEOM FROM GEO_TABLE;{code} {noformat} Error: class org.apache.ignite.binary.BinaryObjectException: Custom objects are not supported (state=5,code=0) java.sql.SQLException: class org.apache.ignite.binary.BinaryObjectException: Custom objects are not supported at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671) at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130) at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299) at sqlline.Commands.execute(Commands.java:823) at sqlline.Commands.sql(Commands.java:733) at sqlline.SqlLine.dispatch(SqlLine.java:795) at sqlline.SqlLine.begin(SqlLine.java:668) at sqlline.SqlLine.start(SqlLine.java:373) at sqlline.SqlLine.main(SqlLine.java:265){noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7642) Ignite fails with OOM if query has "NULLS LAST"
Mikhail Cherkasov created IGNITE-7642: - Summary: Ignite fails with OOM if query has "NULLS LAST" Key: IGNITE-7642 URL: https://issues.apache.org/jira/browse/IGNITE-7642 Project: Ignite Issue Type: Bug Components: sql Affects Versions: 2.3 Reporter: Mikhail Cherkasov Fix For: 2.5 Attachments: OrderByNullsLastTest.java I have an index for the "a" field of the "A" type, and the following SQL works fine: SELECT * FROM A ORDER BY a LIMIT 0 + 50 but as soon as I add "NULLS LAST" it starts to fail with an OOM error: SELECT * FROM A WHERE a is not null ORDER BY a LIMIT 0 + 50 However, for both queries EXPLAIN says that the index is used, so I don't see why it should fail; it looks like Ignite tries to load all the data into the heap and sort it there, which leads to OOM. A reproducer is attached. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7607) FieldsQueryCursor should expose data types too
Mikhail Cherkasov created IGNITE-7607: - Summary: FieldsQueryCursor should expose data types too Key: IGNITE-7607 URL: https://issues.apache.org/jira/browse/IGNITE-7607 Project: Ignite Issue Type: Improvement Components: sql Reporter: Mikhail Cherkasov FieldsQueryCursor should expose data types too; this will simplify users' lives in some cases. This feature was requested on the user list: http://apache-ignite-users.70518.x6.nabble.com/Optional-meta-schema-in-cache-level-tt19635.html -- This message was sent by Atlassian JIRA (v7.6.3#76005)
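What the requested extension could look like can be sketched as a minimal cursor that exposes a Java type per column next to the field name. TypedFieldsCursor and getFieldType are hypothetical names for illustration, not Ignite's actual API.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of the requested extension: a fields cursor that
// exposes a column type alongside the field name. Illustrative names only,
// not Ignite's actual API; a real cursor would get both from query metadata.
class TypedFieldsCursor implements Iterable<List<Object>> {
    private final String[] names;
    private final Class<?>[] types;
    private final List<List<Object>> rows;

    TypedFieldsCursor(String[] names, Class<?>[] types, List<List<Object>> rows) {
        this.names = names;
        this.types = types;
        this.rows = rows;
    }

    String getFieldName(int idx) {
        return names[idx];
    }

    /** The proposed addition: the Java type of a column. */
    Class<?> getFieldType(int idx) {
        return types[idx];
    }

    @Override public Iterator<List<Object>> iterator() {
        return rows.iterator();
    }
}
```

With per-column types available, callers could map result sets to typed structures without probing the first row's values.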
[jira] [Created] (IGNITE-7572) Local cache fails to start on local node
Mikhail Cherkasov created IGNITE-7572: - Summary: Local cache fails to start on local node Key: IGNITE-7572 URL: https://issues.apache.org/jira/browse/IGNITE-7572 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 2.3 Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Fix For: 2.5 Reproducer:

import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
import org.jetbrains.annotations.NotNull;

public class LocalCache {
    private static int id;

    public static void main(String[] args) throws InterruptedException {
        Ignition.setClientMode(false);
        Ignite server = Ignition.start(getConfiguration());
        System.out.println("Server is up");

        Ignition.setClientMode(true);
        Ignite client = Ignition.start(getConfiguration());
        System.out.println("Client is up");
    }

    @NotNull
    private static IgniteConfiguration getConfiguration() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        TcpDiscoveryVmIpFinder finder = new TcpDiscoveryVmIpFinder(true);
        finder.setAddresses(Arrays.asList("localhost:47500..47600"));

        cfg.setIgniteInstanceName("test" + id++);

        CacheConfiguration cacheConfiguration = new CacheConfiguration("TEST");
        cacheConfiguration.setCacheMode(CacheMode.LOCAL);
        cfg.setCacheConfiguration(cacheConfiguration);

        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(finder));

        return cfg;
    }
}

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7523) Exception on data expiration after sharedRDD.saveValues call
Mikhail Cherkasov created IGNITE-7523: - Summary: Exception on data expiration after sharedRDD.saveValues call Key: IGNITE-7523 URL: https://issues.apache.org/jira/browse/IGNITE-7523 Project: Ignite Issue Type: Bug Components: spark Affects Versions: 2.3 Reporter: Mikhail Cherkasov Fix For: 2.5 Reproducer: package rdd_expiration; import java.util.ArrayList; import java.util.Arrays; import java.util.List; import java.util.UUID; import java.util.concurrent.atomic.AtomicLong; import javax.cache.Cache; import javax.cache.expiry.CreatedExpiryPolicy; import javax.cache.expiry.Duration; import org.apache.ignite.Ignite; import org.apache.ignite.IgniteCache; import org.apache.ignite.Ignition; import org.apache.ignite.configuration.CacheConfiguration; import org.apache.ignite.configuration.DataRegionConfiguration; import org.apache.ignite.configuration.DataStorageConfiguration; import org.apache.ignite.configuration.IgniteConfiguration; import org.apache.ignite.lang.IgniteOutClosure; import org.apache.ignite.spark.JavaIgniteContext; import org.apache.ignite.spark.JavaIgniteRDD; import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi; import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder; import org.apache.log4j.Level; import org.apache.log4j.Logger; import org.apache.spark.SparkConf; import org.apache.spark.api.java.JavaRDD; import org.apache.spark.api.java.JavaSparkContext; import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC; import static org.apache.ignite.cache.CacheMode.PARTITIONED; import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC; /** * This example demonstrates how to create an JavaIgnitedRDD and share it with multiple spark workers. The goal of this * particular example is to provide the simplest code example of this logic. * * This example will start Ignite in the embedded mode and will start an JavaIgniteContext on each Spark worker node. 
* * The example can work in the standalone mode as well that can be enabled by setting JavaIgniteContext's * \{@code standalone} property to \{@code true} and running an Ignite node separately with * `examples/config/spark/example-shared-rdd.xml` config. */ public class RddExpiration { /** * Executes the example. * @param args Command line arguments, none required. */ public static void main(String args[]) throws InterruptedException { Ignite server = null; for (int i = 0; i < 4; i++) { IgniteConfiguration serverCfg = createIgniteCfg(); serverCfg.setClientMode(false); serverCfg.setIgniteInstanceName("Server" + i); server = Ignition.start(serverCfg); } server.active(true); // Spark Configuration. SparkConf sparkConf = new SparkConf() .setAppName("JavaIgniteRDDExample") .setMaster("local") .set("spark.executor.instances", "2"); // Spark context. JavaSparkContext sparkContext = new JavaSparkContext(sparkConf); // Adjust the logger to exclude the logs of no interest. Logger.getRootLogger().setLevel(Level.ERROR); Logger.getLogger("org.apache.ignite").setLevel(Level.INFO); // Creates Ignite context with specific configuration and runs Ignite in the embedded mode. JavaIgniteContextigniteContext = new JavaIgniteContext ( sparkContext, new IgniteOutClosure() { @Override public IgniteConfiguration apply() { return createIgniteCfg(); } }, true); // Create a Java Ignite RDD of Type (Int,Int) Integer Pair. JavaIgniteRDD sharedRDD = igniteContext. fromCache("sharedRDD"); long start = System.currentTimeMillis(); long totalLoaded = 0; while(System.currentTimeMillis() - start < 55_000) { // Define data to be stored in the Ignite RDD (cache). List data = new ArrayList<>(20_000); for (int i = 0; i < 20_000; i++) data.add(i); // Preparing a Java RDD. JavaRDD javaRDD = sparkContext.parallelize(data); sharedRDD.saveValues(javaRDD); totalLoaded += 20_000; } System.out.println("Loaded " + totalLoaded); for (;;) { System.out.println(">>> Iterating over Ignite Shared RDD..."); IgniteCache
[jira] [Created] (IGNITE-7458) NPE after node restart with native persistence enabled.
Mikhail Cherkasov created IGNITE-7458: - Summary: NPE after node restart with native persistence enabled. Key: IGNITE-7458 URL: https://issues.apache.org/jira/browse/IGNITE-7458 Project: Ignite Issue Type: Bug Components: persistence Affects Versions: 2.3 Environment: We have a report from a user: [http://apache-ignite-users.70518.x6.nabble.com/NPE-from-the-native-persistence-enable-node-td19555.html] it says that NPE occurred after node restart with native persistence enabled. Reporter: Mikhail Cherkasov Fix For: 2.5 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7442) Data load hangs with SQL on-heap cache enabled
Mikhail Cherkasov created IGNITE-7442: - Summary: Data load hangs with SQL on-heap cache enabled Key: IGNITE-7442 URL: https://issues.apache.org/jira/browse/IGNITE-7442 Project: Ignite Issue Type: Bug Components: sql Affects Versions: 2.4 Reporter: Mikhail Cherkasov Assignee: Vladimir Ozerov Fix For: 2.5 The user uses putAll to load data into a cache, it loads data to Atomic cache and all keys have unique values, so there can not be a deadlock due to key order, but to be 200% sure about this, the user also uses TreeMap. In logs I can see 68 messages about pool starvation for the same thread: at o.a.i.i.processors.query.h2.database.H2Tree.compare(H2Tree.java:206) at o.a.i.i.processors.query.h2.database.H2Tree.compare(H2Tree.java:44) at o.a.i.i.processors.cache.persistence.tree.BPlusTree.compare(BPlusTree.java:4359) at o.a.i.i.processors.cache.persistence.tree.BPlusTree.findInsertionPoint(BPlusTree.java:4279) at o.a.i.i.processors.cache.persistence.tree.BPlusTree.access$1500(BPlusTree.java:81) at o.a.i.i.processors.cache.persistence.tree.BPlusTree$Search.run0(BPlusTree.java:261) at o.a.i.i.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4697) at o.a.i.i.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4682) at o.a.i.i.processors.cache.persistence.tree.util.PageHandler.readPage(PageHandler.java:158) at o.a.i.i.processors.cache.persistence.DataStructure.read(DataStructure.java:319) at o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2254) at o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2266) at o.a.i.i.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2006) at o.a.i.i.processors.cache.persistence.tree.BPlusTree.put(BPlusTree.java:1977) at o.a.i.i.processors.query.h2.database.H2TreeIndex.put(H2TreeIndex.java:220) at o.a.i.i.processors.query.h2.opt.GridH2Table.addToIndex(GridH2Table.java:568) at 
o.a.i.i.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:516) at o.a.i.i.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:425) at o.a.i.i.processors.query.h2.IgniteH2Indexing.store(IgniteH2Indexing.java:566) at o.a.i.i.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:1731) at o.a.i.i.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:418) at o.a.i.i.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:1363) at o.a.i.i.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1218) at o.a.i.i.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:352) at o.a.i.i.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1693) at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processDhtAtomicUpdateRequest(GridDhtAtomicCache.java:3222) The completed count is always the same (Completed: 1826527) and, furthermore, the thread is always in a runnable state; it stays runnable for 30 minutes. So it looks like the thread was looping somewhere inside the o.a.i.i.processors.cache.persistence.tree.BPlusTree.putDown method. The issue can be reproduced only with the SQL on-heap cache enabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7319) Memory leak during creating/destroying local cache
Mikhail Cherkasov created IGNITE-7319: - Summary: Memory leak during creating/destroying local cache Key: IGNITE-7319 URL: https://issues.apache.org/jira/browse/IGNITE-7319 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 2.3 Reporter: Mikhail Cherkasov Fix For: 2.4 The following code creates local caches:

private IgniteCache createLocalCache(String name) {
    CacheConfiguration cCfg = new CacheConfiguration<>();

    cCfg.setName(name);
    cCfg.setGroupName("localCaches"); // without group leak is much bigger!
    cCfg.setStoreKeepBinary(true);
    cCfg.setCacheMode(CacheMode.LOCAL);
    cCfg.setOnheapCacheEnabled(false);
    cCfg.setCopyOnRead(false);
    cCfg.setBackups(0);
    cCfg.setWriteBehindEnabled(false);
    cCfg.setReadThrough(false);
    cCfg.setReadFromBackup(false);
    cCfg.setQueryEntities();

    return ignite.createCache(cCfg).withKeepBinary();
}

The caches are placed in a queue and are picked up by a worker thread, which simply destroys them after removing them from the queue. This setup seems to generate a memory leak of about 1GB per day. When looking at a heap dump, I see all the space is occupied by instances of java.util.concurrent.ConcurrentSkipListMap$Node. User list: http://apache-ignite-users.70518.x6.nabble.com/Memory-leak-in-GridCachePartitionExchangeManager-tt18995.html -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-7231) Cassandra-sessions-pool is running after Ignition.stop
Mikhail Cherkasov created IGNITE-7231: - Summary: Cassandra-sessions-pool is running after Ignition.stop Key: IGNITE-7231 URL: https://issues.apache.org/jira/browse/IGNITE-7231 Project: Ignite Issue Type: Bug Affects Versions: 2.3 Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Fix For: 2.4 Cassandra-sessions-pool is running after Ignition.stop. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-7196) Exchange can get stuck waiting while a new node restores state from disk and starts caches
Mikhail Cherkasov created IGNITE-7196: - Summary: Exchange can get stuck waiting while a new node restores state from disk and starts caches Key: IGNITE-7196 URL: https://issues.apache.org/jira/browse/IGNITE-7196 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 2.3 Reporter: Mikhail Cherkasov Priority: Critical Fix For: 2.4 Exchange can get stuck waiting while a new node restores state from disk and starts caches; here is a log snippet from a just-joined new node that shows the issue: [21:36:13,023][INFO][exchange-worker-#62%statement_grid%][time] Started exchange init [topVer=AffinityTopologyVersion [topVer=57, minorTopVer=0], crd=false, evt=NODE_JOINED, evtNode=3ac1160e-0de4-41bc-a366-59292c9f03c1, customEvt=null, allowMerge=true] [21:36:13,023][INFO][exchange-worker-#62%statement_grid%][FilePageStoreManager] Resolved page store work directory: /mnt/store/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463 [21:36:13,024][INFO][exchange-worker-#62%statement_grid%][FileWriteAheadLogManager] Resolved write ahead log work directory: /mnt/wal/WAL/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463 [21:36:13,024][INFO][exchange-worker-#62%statement_grid%][FileWriteAheadLogManager] Resolved write ahead log archive directory: /mnt/wal/WAL_archive/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463 [21:36:13,046][INFO][exchange-worker-#62%statement_grid%][FileWriteAheadLogManager] Started write-ahead log manager [mode=DEFAULT] [21:36:13,065][INFO][exchange-worker-#62%statement_grid%][PageMemoryImpl] Started page memory [memoryAllocated=100.0 MiB, pages=6352, tableSize=373.4 KiB, checkpointBuffer=100.0 MiB] [21:36:13,105][INFO][exchange-worker-#62%statement_grid%][PageMemoryImpl] Started page memory [memoryAllocated=32.0 GiB, pages=2083376, tableSize=119.6 MiB, checkpointBuffer=896.0 MiB] [21:36:13,428][INFO][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager] Read checkpoint status
[startMarker=/mnt/store/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463/cp/1512930965253-306c0895-1f5f-4237-bebf-8bf2b49682af-START.bin, endMarker=/mnt/store/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463/cp/1512930869357-1c24b6dc-d64c-4b83-8166-11edf1bfdad3-END.bin] [21:36:13,429][INFO][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager] Checking memory state [lastValidPos=FileWALPointer [idx=3582, fileOffset=59186076, len=9229, forceFlush=false], lastMarked=FileWALPointer [idx=3629, fileOffset=50829700, len=9229, forceFlush=false], lastCheckpointId=306c0895-1f5f-4237-bebf-8bf2b49682af] [21:36:13,429][WARNING][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager] Ignite node stopped in the middle of checkpoint. Will restore memory state and finish checkpoint on node start. [21:36:18,312][INFO][grid-nio-worker-tcp-comm-0-#41%statement_grid%][TcpCommunicationSpi] Accepted incoming communication connection [locAddr=/172.31.20.209:48100, rmtAddr=/172.31.17.115:57148] [21:36:21,619][INFO][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager] Found last checkpoint marker [cpId=306c0895-1f5f-4237-bebf-8bf2b49682af, pos=FileWALPointer [idx=3629, fileOffset=50829700, len=9229, forceFlush=false]] [21:36:21,620][INFO][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager] Finished applying memory changes [changesApplied=165103, time=8189ms] [21:36:22,403][INFO][grid-nio-worker-tcp-comm-1-#42%statement_grid%][TcpCommunicationSpi] Accepted incoming communication connection [locAddr=/172.31.20.209:48100, rmtAddr=/172.31.28.10:47964] [21:36:23,414][INFO][grid-nio-worker-tcp-comm-2-#43%statement_grid%][TcpCommunicationSpi] Accepted incoming communication connection [locAddr=/172.31.20.209:48100, rmtAddr=/172.31.27.101:46000] [21:36:33,019][WARNING][main][GridCachePartitionExchangeManager] Failed to wait for initial partition map exchange. Possible reasons are: ^-- Transactions in deadlock. 
^-- Long running transactions (ignore if this is the case). ^-- Unreleased explicit locks. [21:36:53,021][WARNING][main][GridCachePartitionExchangeManager] Still waiting for initial partition map exchange [fut=GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=3ac1160e-0de4-41bc-a366-59292c9f03c1, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.31.20.209], sockAddrs=[/0:0:0:0:0:0:0:1%lo:48500, /127.0.0.1:48500, ip-172-31-20-209.eu-central-1.compute.internal/172.31.20.209:48500], discPort=48500, order=57, intOrder=36, lastExchangeTime=1512931012268, loc=true, ver=2.3.1#20171129-sha1:4b1ec0fe, isClient=false], topVer=57, nodeId8=3ac1160e, msg=null, type=NODE_JOINED, tstamp=1512930972992], crd=TcpDiscoveryNode [id=56c97317-26cf-43d2-bf76-0cab59c6fa5f, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.31.27.101], sockAddrs=[/0:0:0:0:0:0:0:1%lo:48500, /127.0.0.1:48500,
[jira] [Created] (IGNITE-7165) Re-balancing is cancelled if client node joins
Mikhail Cherkasov created IGNITE-7165: - Summary: Re-balancing is cancelled if client node joins Key: IGNITE-7165 URL: https://issues.apache.org/jira/browse/IGNITE-7165 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov Assignee: Anton Vinogradov Priority: Critical Re-balancing is cancelled if a client node joins. Re-balancing can take hours, and each time a client node joins it starts over: [15:10:05,700][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager] Added new node to topology: TcpDiscoveryNode [id=979cf868-1c37-424a-9ad1-12db501f32ef, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 172.31.16.213], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, /172.31.16.213:0], discPort=0, order=36, intOrder=24, lastExchangeTime=1512907805688, loc=false, ver=2.3.1#20171129-sha1:4b1ec0fe, isClient=true] [15:10:05,701][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager] Topology snapshot [ver=36, servers=7, clients=5, CPUs=128, heap=160.0GB] [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Started exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], crd=false, evt=NODE_JOINED, evtNode=979cf868-1c37-424a-9ad1-12db501f32ef, customEvt=null, allowMerge=true] [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionsExchangeFuture] Finish exchange future [startVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], resVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], err=null] [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Finished exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], crd=false] [15:10:05,703][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager] Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=36, minorTopVer=0], evt=NODE_JOINED, node=979cf868-1c37-424a-9ad1-12db501f32ef] [15:10:08,706][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] Cancelled rebalancing from all nodes 
[topology=AffinityTopologyVersion [topVer=35, minorTopVer=0]] [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager] Rebalancing scheduled [order=[statementp]] [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager] Rebalancing started [top=null, evt=NODE_JOINED, node=a8be3c14-9add-48c3-b099-3fd304cfdbf4] [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] Starting rebalancing [mode=ASYNC, fromNode=2f6bde48-ffb5-4815-bd32-df4e57dc13e0, partitionsCount=18, topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], updateSeq=-1754630006] [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] Starting rebalancing [mode=ASYNC, fromNode=35d01141-4dce-47dd-adf6-a4f3b2bb9da9, partitionsCount=15, topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], updateSeq=-1754630006] [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] Starting rebalancing [mode=ASYNC, fromNode=b3a8be53-e61f-4023-a906-a265923837ba, partitionsCount=15, topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], updateSeq=-1754630006] [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] Starting rebalancing [mode=ASYNC, fromNode=f825cb4e-7dcc-405f-a40d-c1dc1a3ade5a, partitionsCount=12, topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], updateSeq=-1754630006] [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] Starting rebalancing [mode=ASYNC, fromNode=4ae1db91-8b88-4180-a84b-127a303959e9, partitionsCount=11, topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], updateSeq=-1754630006] [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] Starting rebalancing [mode=ASYNC, fromNode=7c286481-7638-49e4-8c68-fa6aa65d8b76, partitionsCount=18, topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], updateSeq=-1754630006] so 
in clusters with a large amount of data and frequent client leave/join events, this means that a new server will never receive its partitions. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-7050) Add support for spring3
Mikhail Cherkasov created IGNITE-7050: - Summary: Add support for spring3 Key: IGNITE-7050 URL: https://issues.apache.org/jira/browse/IGNITE-7050 Project: Ignite Issue Type: Improvement Affects Versions: 2.3 Environment: there are still users who use spring3 and hence can't use Ignite, which depends on spring4. I think we can create separate modules which Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Fix For: 2.4 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-7028) Memcached does not set type flags for response
Mikhail Cherkasov created IGNITE-7028: - Summary: Memcached does not set type flags for response Key: IGNITE-7028 URL: https://issues.apache.org/jira/browse/IGNITE-7028 Project: Ignite Issue Type: Bug Components: rest Affects Versions: 2.3 Reporter: Mikhail Cherkasov Fix For: 2.4 Memcached does not set type flags for response: http://apache-ignite-users.70518.x6.nabble.com/Memcached-doesn-t-store-flags-td18403.html -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-7021) IgniteOOM is not propagated to client in case of implicit transaction
Mikhail Cherkasov created IGNITE-7021: - Summary: IgniteOOM is not propagated to client in case of implicit transaction Key: IGNITE-7021 URL: https://issues.apache.org/jira/browse/IGNITE-7021 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 2.3 Reporter: Mikhail Cherkasov Priority: Critical Fix For: 2.4 This is related to https://issues.apache.org/jira/browse/IGNITE-7019: when a transaction fails due to IgniteOOM, Ignite tries to roll back the transaction, and the rollback fails too because free pages cannot be added to the free list due to a new IgniteOOM: [2017-11-27 12:47:37,539][ERROR][sys-stripe-2-#4%cache.IgniteOutOfMemoryPropagationTest0%][GridNearTxLocal] Heuristic transaction failure. at org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:835) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter.localFinish(GridDhtTxLocalAdapter.java:774) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.localFinish(GridDhtTxLocal.java:555) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.finishTx(GridDhtTxLocal.java:441) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.commitDhtLocalAsync(GridDhtTxLocal.java:489) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.commitAsync(GridDhtTxLocal.java:498) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onDone(GridDhtTxPrepareFuture.java:727) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onDone(GridDhtTxPrepareFuture.java:104) at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:451) at org.apache.ignite.internal.util.future.GridCompoundFuture.checkComplete(GridCompoundFuture.java:285) at org.apache.ignite.internal.util.future.GridCompoundFuture.markInitialized(GridCompoundFuture.java:276) at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1246) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:666) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1040) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.prepareAsync(GridDhtTxLocal.java:398) at org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:519) at org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest0(IgniteTxHandler.java:150) at org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest(IgniteTxHandler.java:135) at org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.access$000(IgniteTxHandler.java:97) at org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$1.apply(IgniteTxHandler.java:177) at org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$1.apply(IgniteTxHandler.java:175) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99) at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293) at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555) at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183) at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126) at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090) at org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:499) at java.lang.Thread.run(Thread.java:748) Caused by: class org.apache.ignite.IgniteException: Runtime failure on search row: org.apache.ignite.internal.processors.cache.tree.SearchRow@2b17e5c8
[jira] [Created] (IGNITE-7019) Cluster cannot survive after IgniteOOM
Mikhail Cherkasov created IGNITE-7019: - Summary: Cluster cannot survive after IgniteOOM Key: IGNITE-7019 URL: https://issues.apache.org/jira/browse/IGNITE-7019 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 2.3 Reporter: Mikhail Cherkasov Priority: Critical Fix For: 2.4 Even with full sync mode and a transactional cache, we can't add new nodes if there was an IgniteOOM: after adding new nodes and re-balancing, old nodes can't evict partitions: [2017-11-17 20:02:24,588][ERROR][sys-#65%DR1%][GridDhtPreloader] Partition eviction failed, this can cause grid hang. class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Not enough memory allocated [policyName=100MB_Region_Eviction, size=104.9 MB] Consider increasing memory policy size, enabling evictions, adding more nodes to the cluster, reducing number of backups or reducing model size. at org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.allocatePage(PageMemoryNoStoreImpl.java:294) at org.apache.ignite.internal.processors.cache.persistence.DataStructure.allocatePageNoReuse(DataStructure.java:117) at org.apache.ignite.internal.processors.cache.persistence.DataStructure.allocatePage(DataStructure.java:105) at org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.addStripe(PagesList.java:413) at org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.getPageForPut(PagesList.java:528) at org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.put(PagesList.java:617) at org.apache.ignite.internal.processors.cache.persistence.freelist.FreeListImpl.addForRecycle(FreeListImpl.java:582) at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.reuseFreePages(BPlusTree.java:3847) at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.releaseAll(BPlusTree.java:4106) at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.access$6900(BPlusTree.java:3166) 
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1782) at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1567) at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1387) at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:374) at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3233) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:892) at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:750) at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593) at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580) at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6639) at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967) at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:748) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-6942) Auto re-connect to another node in case of failure of the current one
Mikhail Cherkasov created IGNITE-6942: - Summary: Auto re-connect to another node in case of failure of the current one Key: IGNITE-6942 URL: https://issues.apache.org/jira/browse/IGNITE-6942 Project: Ignite Issue Type: Improvement Security Level: Public (Viewable by anyone) Components: sql Reporter: Mikhail Cherkasov Fix For: 2.4 It would be great to have a re-connect feature for the thin driver: in case of a server failure, it should choose another server node from the list of server nodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-6853) Cassandra cache store does not clean the prepared statements cache when removing an old Cassandra session
Mikhail Cherkasov created IGNITE-6853: - Summary: Cassandra cache store does not clean the prepared statements cache when removing an old Cassandra session Key: IGNITE-6853 URL: https://issues.apache.org/jira/browse/IGNITE-6853 Project: Ignite Issue Type: Bug Security Level: Public (Viewable by anyone) Components: cassandra Affects Versions: 2.3 Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Fix For: 2.4 The Cassandra cache store does not clean the prepared statements cache when removing an old Cassandra session, which can lead to: Prepared statement cluster error detected, refreshing Cassandra session com.datastax.driver.core.exceptions.InvalidQueryException: Tried to execute unknown prepared query : 0xcad5832309a512feeb602eec67408130. You may have used a PreparedStatement that was created with another Cluster instance. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-6753) Allow pluggable page memory for testing purposes
Mikhail Cherkasov created IGNITE-6753: - Summary: Allow pluggable page memory for testing purposes Key: IGNITE-6753 URL: https://issues.apache.org/jira/browse/IGNITE-6753 Project: Ignite Issue Type: Improvement Security Level: Public (Viewable by anyone) Components: general Environment: Allow pluggable page memory for testing purposes. We need this ability to force a fast IgniteOOM in tests. Reporter: Mikhail Cherkasov Priority: Minor Fix For: 2.4 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-6665) Client node re-joins only to the list from disco configuration and ignores the rest of the nodes
Mikhail Cherkasov created IGNITE-6665: - Summary: Client node re-joins only to the list from disco configuration and ignores the rest of the nodes Key: IGNITE-6665 URL: https://issues.apache.org/jira/browse/IGNITE-6665 Project: Ignite Issue Type: Bug Security Level: Public (Viewable by anyone) Components: general Affects Versions: 2.2 Reporter: Mikhail Cherkasov Fix For: 2.4 A client node re-joins only to the address list from the discovery configuration and ignores the rest of the nodes. If we have a cluster with 3 server nodes and only 1 of them is mentioned in the client discovery configuration, then when this server node leaves the cluster, the client node will try to re-join only through this one node and will ignore the other 2 server nodes. A reproducer is attached. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
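Until the re-join logic is fixed, the usual mitigation is to list every server address in the client's IP finder, so the client is not left with a single re-join candidate. A minimal sketch of such a discovery configuration (the addresses are hypothetical; the port range is the default discovery range):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="clientMode" value="true"/>
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                    <property name="addresses">
                        <list>
                            <!-- List all three server nodes, not just one,
                                 so the client can re-join via any of them. -->
                            <value>10.0.0.1:47500..47509</value>
                            <value>10.0.0.2:47500..47509</value>
                            <value>10.0.0.3:47500..47509</value>
                        </list>
                    </property>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```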
[jira] [Created] (IGNITE-6654) Ignite client can hang in case of IgniteOOM on server
Mikhail Cherkasov created IGNITE-6654: - Summary: Ignite client can hang in case of IgniteOOM on server Key: IGNITE-6654 URL: https://issues.apache.org/jira/browse/IGNITE-6654 Project: Ignite Issue Type: Bug Security Level: Public (Viewable by anyone) Components: cache, general Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov The Ignite client can hang in case of IgniteOOM on the server. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-6639) Ignite node can try to join to itself
Mikhail Cherkasov created IGNITE-6639: - Summary: Ignite node can try to join to itself Key: IGNITE-6639 URL: https://issues.apache.org/jira/browse/IGNITE-6639 Project: Ignite Issue Type: Bug Security Level: Public (Viewable by anyone) Components: general Affects Versions: 2.3 Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Fix For: 2.4 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-6580) Cluster can fail during concurrent re-balancing and cache destruction
Mikhail Cherkasov created IGNITE-6580: - Summary: Cluster can fail during concurrent re-balancing and cache destruction Key: IGNITE-6580 URL: https://issues.apache.org/jira/browse/IGNITE-6580 Project: Ignite Issue Type: Bug Components: cache Reporter: Mikhail Cherkasov Priority: Critical The following exceptions can be observed during concurrent re-balancing and cache destruction: 1. {noformat} [00:01:27,135][ERROR][sys-#4375%null%][GridDhtPreloader] Partition eviction failed, this can cause grid hang. org.apache.ignite.IgniteException: Runtime failure on search row: Row@6be51c3d[ **REMOVED SENSITIVE INFORMATION** ] at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1787) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1578) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.remove(H2TreeIndex.java:226) ~[ignite-indexing-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:523) ~[ignite-indexing-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:416) ~[ignite-indexing-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.remove(IgniteH2Indexing.java:574) ~[ignite-indexing-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.query.GridQueryProcessor.remove(GridQueryProcessor.java:2172) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.remove(GridCacheQueryManager.java:451) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishRemove(IgniteCacheOffheapManagerImpl.java:1462) ~[ignite-core-2.1.4.jar:2.1.4] at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1425) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:383) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3224) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:951) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:809) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593) [ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580) [ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6629) [ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967) [ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) [ignite-core-2.1.4.jar:2.1.4] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131] Caused by: java.lang.IllegalStateException: Item not found: 1 at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.findIndirectItemIndex(DataPageIO.java:346) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.getDataOffset(DataPageIO.java:446) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.readPayload(DataPageIO.java:488) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:149) ~[ignite-core-2.1.4.jar:2.1.4] at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:101) ~[ignite-core-2.1.4.jar:2.1.4] at
[jira] [Created] (IGNITE-6528) Warning if no table for BinaryObject
Mikhail Cherkasov created IGNITE-6528: - Summary: Warning if no table for BinaryObject Key: IGNITE-6528 URL: https://issues.apache.org/jira/browse/IGNITE-6528 Project: Ignite Issue Type: Improvement Components: binary, cache, sql Reporter: Mikhail Cherkasov I've seen several times that due to a wrong cache configuration people can't find data in a cache and blame Ignite for being buggy and not working. And it's very difficult to find the error in the code, especially if you don't have rich experience with Ignite. The problem is that we don't have strong typing when defining a QueryEntity: a user can use an arbitrary string id to define a type, but he should use the same string id to obtain a binary object builder; however, people sometimes confuse this. So the user can define a QueryEntity with value type: queryEntity.setValueType("MyCoolName") and later put the following binary object to the cache: ignite.binary.toBinary(value), but this object won't be indexed, because ignite.binary.toBinary uses the class name as the string id while indexing expects to find "MyCoolName" as the id. The example is simple and the error is obvious when you see these two lines close to each other; however, in real life, cache definition and data ingestion are separated by tons of code. We can save a lot of man-hours for our users if Ignite prints a warning when a cache has a configured QueryEntity and the user puts a BinaryObject with a typeName which doesn't correspond to any QueryEntity. The warning should be printed only once, something like: [WARN] No table is found for %typeName% binary object. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
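The mismatch described above can be sketched without Ignite itself: indexing looks up the table by the type name configured in the QueryEntity, while ignite.binary().toBinary(value) derives the type name from the class name. Below is a minimal self-contained illustration of that string-keyed lookup; the `MyValue` class, the table map, and `hasTableFor` are hypothetical stand-ins for the real machinery, not Ignite API:

```java
import java.util.HashMap;
import java.util.Map;

public class TypeNameMismatch {
    // Hypothetical stand-in for the user's domain object.
    static class MyValue { int x; }

    // Tables registered via QueryEntity, keyed by the user-chosen value type name,
    // i.e. the result of queryEntity.setValueType("MyCoolName").
    static final Map<String, String> TABLES = new HashMap<>();
    static { TABLES.put("MyCoolName", "MYCOOLNAME_TABLE"); }

    // Mimics the lookup indexing performs for an incoming binary object.
    static boolean hasTableFor(String binaryTypeName) {
        return TABLES.containsKey(binaryTypeName);
    }

    public static void main(String[] args) {
        // toBinary(value) effectively uses the class name as the type id...
        String typeNameFromClass = MyValue.class.getSimpleName(); // "MyValue"

        // ...so no table is found and the object is silently left unindexed.
        System.out.println(hasTableFor(typeNameFromClass)); // prints "false"
        System.out.println(hasTableFor("MyCoolName"));      // prints "true"
    }
}
```

The proposed warning would fire exactly at the point where `hasTableFor` returns false for a put.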
[jira] [Created] (IGNITE-6437) DataStructure cannot be obtained on a client node if it was created on a server node.
Mikhail Cherkasov created IGNITE-6437: - Summary: DataStructure cannot be obtained on a client node if it was created on a server node. Key: IGNITE-6437 URL: https://issues.apache.org/jira/browse/IGNITE-6437 Project: Ignite Issue Type: Bug Components: data structures Affects Versions: 2.1 Reporter: Mikhail Cherkasov Priority: Critical Fix For: 2.3 A DataStructure cannot be obtained on a client node if it was created on a server node. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-6360) NPE occurs if object with null indexed field is added
Mikhail Cherkasov created IGNITE-6360: - Summary: NPE occurs if object with null indexed field is added Key: IGNITE-6360 URL: https://issues.apache.org/jira/browse/IGNITE-6360 Project: Ignite Issue Type: Bug Environment: NPE occurs if object with null indexed field is added Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Fix For: 2.3 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-6352) ignite-indexing is not compatible with OSGi
Mikhail Cherkasov created IGNITE-6352: - Summary: ignite-indexing is not compatible with OSGi Key: IGNITE-6352 URL: https://issues.apache.org/jira/browse/IGNITE-6352 Project: Ignite Issue Type: Bug Components: sql Affects Versions: 2.1 Reporter: Mikhail Cherkasov Fix For: 2.3 The issue was reported by a user; here is his message: When trying to start Ignite in an OSGi context I get the following exception: Caused by: java.lang.NoClassDefFoundError: org/h2/server/Service at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.ignite.internal.IgniteComponentType.inClassPath(IgniteComponentType.java:153) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1832) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1648) at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1076) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:506) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:482) at org.apache.ignite.Ignition.start(Ignition.java:304) That is because the h2 bundle (jar) is properly osgified, but does NOT export the package org.h2.server, so it isn't visible to my code's classloader -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-6323) Ignite node not stopping after segmentation
Mikhail Cherkasov created IGNITE-6323: - Summary: Ignite node not stopping after segmentation Key: IGNITE-6323 URL: https://issues.apache.org/jira/browse/IGNITE-6323 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov The problem was found by a user and described in user list: http://apache-ignite-users.70518.x6.nabble.com/Ignite-node-not-stopping-after-segmentation-td16773.html copy of the message: """ I have follow up question on segmentation from my previous post. The issue I am trying to resolve is that ignite node does not stop on the segmented node. Here is brief information on my application. I have embedded Ignite into my application and using it for distributed caches. I am running Ignite cluster in my lab environment. I have two nodes in the cluster. In current setup, the application receives about 1 million data points every minute. I am putting the data into ignite distributed cache using data streamer. This way data gets distributed among members and each member further processes the data. The application also uses other distributed caches while processing the data. When a member node gets segmented, it does not stop. I get BEFORE_NODE_STOP event but nothing happens after that. Node hangs in some unstable state. I am suspecting that when node is trying to stop there are data in buffers of streamer which needs sent to other members. Because the node is segmented, it is not able to flush/drop the data. The application is also trying to access caches while node is stopping, that also causes deadlock situation. I have tried few things to make it work, Letting node stop after segmentation which is the default behavior. But the node gets stuck. Setting segmentation policy to NOOP. Plan was to stop the node manually after some clean up. This way when I get segmented event, I first try to close data streamer instance and cache instance. But when I trying to close data streamer, the close() call gets stuck. 
I was calling close with true to drop everything is streamer. But that did not help. On receiving segmentation event, restrict the application from accessing any caches. Then stop the node. Even then the node gets stuck. I have attached few thread dumps here. In each of them one thread is trying to stop the node, but gets into waiting state. """ -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-6044) SQL insert waits for transaction commit, but it must be executed right away
Mikhail Cherkasov created IGNITE-6044: - Summary: SQL insert waits for transaction commit, but it must be executed right away Key: IGNITE-6044 URL: https://issues.apache.org/jira/browse/IGNITE-6044 Project: Ignite Issue Type: Bug Components: sql Affects Versions: 2.1 Environment: The doc says: ""Presently, DML supports the atomic mode only meaning that if there is a DML query that is executed as a part of an Ignite transaction then it will not be enlisted in the transaction's writing queue and will be executed right away."" https://apacheignite.readme.io/docs/dml#section-transactional-support However, the data is added to the cache only after the transaction commits. Reporter: Mikhail Cherkasov -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-5944) Ignite 1.9 can't be started with configured IGFS and Hadoop secondary system
Mikhail Cherkasov created IGNITE-5944: - Summary: Ignite 1.9 can't be started with configured IGFS and Hadoop secondary system Key: IGNITE-5944 URL: https://issues.apache.org/jira/browse/IGNITE-5944 Project: Ignite Issue Type: Bug Affects Versions: 1.9 Reporter: Mikhail Cherkasov -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-5942) Python3 pylibmc does not work with Ignite memcache mode
Mikhail Cherkasov created IGNITE-5942: - Summary: Python3 pylibmc does not work with Ignite memcache mode Key: IGNITE-5942 URL: https://issues.apache.org/jira/browse/IGNITE-5942 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov The example from: https://apacheignite.readme.io/v2.0/docs/memcached-support#python doesn't work for Python 3.6. There's an exception on the following call: client.set("key", "val") It was tested with another Python library, which works, so it looks like the problem is in the pylibmc/libmemcached integration with Ignite. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-5940) DataStreamer throws exception as it's closed if OOM occurs on server node.
Mikhail Cherkasov created IGNITE-5940: - Summary: DataStreamer throws exception as it's closed if OOM occurs on server node. Key: IGNITE-5940 URL: https://issues.apache.org/jira/browse/IGNITE-5940 Project: Ignite Issue Type: Bug Affects Versions: 2.1 Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-5921) Reduce contention for free list access
Mikhail Cherkasov created IGNITE-5921: - Summary: Reduce contention for free list access Key: IGNITE-5921 URL: https://issues.apache.org/jira/browse/IGNITE-5921 Project: Ignite Issue Type: Improvement Affects Versions: 2.1 Reporter: Mikhail Cherkasov Assignee: Igor Seliverstov Reduce contention for free list access. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-5918) Adding and searching objects in index tree produce a lot of garbage
Mikhail Cherkasov created IGNITE-5918: - Summary: Adding and searching objects in index tree produce a lot of garbage Key: IGNITE-5918 URL: https://issues.apache.org/jira/browse/IGNITE-5918 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-5790) Xml config cannot be used in jdbc and user code simultaneously
Mikhail Cherkasov created IGNITE-5790: - Summary: Xml config cannot be used in jdbc and user code simultaneously Key: IGNITE-5790 URL: https://issues.apache.org/jira/browse/IGNITE-5790 Project: Ignite Issue Type: Bug Components: jdbc Affects Versions: 2.1 Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Fix For: 2.1 When a user uses the same xml config for the JDBC driver and for his own Ignite instance, the following can occur: java.sql.SQLException: Failed to start Ignite node. Caused by: class org.apache.ignite.IgniteCheckedException: Ignite instance with this name has already been started: CustomeIgniteName This happens because JDBC creates a separate Ignite instance while the user already has one with the same name. Of course this can be easily worked around: the user can maintain two configs, or create the JDBC connection first and then use Ignition.getOrStart(). However, it's inconvenient for the user and should be treated as a usability issue. I see two solutions: 1) the JDBC driver should use Ignition.getOrStart(); 2) the JDBC driver should use the connection string as the Ignite instance name. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-5773) Scheduler throwing NullPointerException
Mikhail Cherkasov created IGNITE-5773: - Summary: Scheduler throwing NullPointerException Key: IGNITE-5773 URL: https://issues.apache.org/jira/browse/IGNITE-5773 Project: Ignite Issue Type: Bug Affects Versions: 2.0 Environment: Oracle Hotspot 1.8_121 Ignite 2.0.0 Springmix 4.3.7 Reporter: Mikhail Cherkasov Assignee: Alexey Goncharuk Priority: Critical Fix For: 2.1 An NPE occurs while deploying a service as a cluster singleton. The Ignite scheduler is used as a cron for this purpose; the NPE occurs with Ignite version 2.0.0. Below is the log information for the exception: 2017-06-06 13:21:08 ERROR GridServiceProcessor:495 - Failed to initialize service (service will not be deployed): AVxezSbWNphcxa1CYjfP java.lang.NullPointerException at org.apache.ignite.internal.processors.schedule.ScheduleFutureImpl.schedule(ScheduleFutureImpl.java:299) at org.apache.ignite.internal.processors.schedule.IgniteScheduleProcessor.schedule(IgniteScheduleProcessor.java:56) at org.apache.ignite.internal.IgniteSchedulerImpl.scheduleLocal(IgniteSchedulerImpl.java:109) at com.mypackage.state.services.MyService.startScheduler(MyService.scala:172) at com.mypackage.state.services.MyService.init(MyService.scala:149) at org.apache.ignite.internal.processors.service.GridServiceProcessor.redeploy(GridServiceProcessor.java:1097) at org.apache.ignite.internal.processors.service.GridServiceProcessor.processAssignment(GridServiceProcessor.java:1698) at org.apache.ignite.internal.processors.service.GridServiceProcessor.onSystemCacheUpdated(GridServiceProcessor.java:1372) at org.apache.ignite.internal.processors.service.GridServiceProcessor.access$300(GridServiceProcessor.java:117) at org.apache.ignite.internal.processors.service.GridServiceProcessor$ServiceEntriesListener$1.run0(GridServiceProcessor.java:1339) at org.apache.ignite.internal.processors.service.GridServiceProcessor$DepRunnable.run(GridServiceProcessor.java:1753) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 2017-06-06 13:21:08:868 ERROR application - Unable to initialise GRID: class org.apache.ignite.IgniteException: null at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:949) at org.apache.ignite.internal.IgniteServicesImpl.deployClusterSingleton(IgniteServicesImpl.java:122) at com.mypackage.state.mypackage1.InitialiseGrid$$anonfun$apply$1.apply(InitialiseGrid.scala:22) at com.mypackage.state.mypackage1.InitialiseGrid$$anonfun$apply$1.apply(InitialiseGrid.scala:19) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at com.mypackage.state.mypackage1.InitialiseGrid$.apply(InitialiseGrid.scala:19) at com.mypackage.state.Application$.main(Application.scala:54) at com.mypackage.state.Application.main(Application.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at sbt.Run.invokeMain(Run.scala:67) at sbt.Run.run0(Run.scala:61) at sbt.Run.sbt$Run$$execute$1(Run.scala:51) at sbt.Run$$anonfun$run$1.apply$mcV$sp(Run.scala:55) at sbt.Run$$anonfun$run$1.apply(Run.scala:55) at sbt.Run$$anonfun$run$1.apply(Run.scala:55) at sbt.Logger$$anon$4.apply(Logger.scala:85) at sbt.TrapExit$App.run(TrapExit.scala:248) at java.lang.Thread.run(Thread.java:745) Caused by: class org.apache.ignite.IgniteCheckedException: null at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7242) at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:258) at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:189) at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:139) at org.apache.ignite.internal.AsyncSupportAdapter.saveOrGet(AsyncSupportAdapter.java:112) at org.apache.ignite.internal.IgniteServicesImpl.deployClusterSingleton(IgniteServicesImpl.java:119) ... 20 more Caused by: java.lang.NullPointerException at org.apache.ignite.internal.processors.schedule.ScheduleFutureImpl.schedule(ScheduleFutureImpl.java:299) at
[jira] [Created] (IGNITE-5644) Metrics collection must be removed from discovery thread.
Mikhail Cherkasov created IGNITE-5644: - Summary: Metrics collection must be removed from discovery thread. Key: IGNITE-5644 URL: https://issues.apache.org/jira/browse/IGNITE-5644 Project: Ignite Issue Type: Bug Components: cache Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Fix For: 2.1 Cache metrics are copied in discovery worker threads. This looks risky because metrics collection may stall the whole cluster. We need to make sure that when the heartbeat message is processed, we already have a metrics snapshot prepared. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
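The fix proposed above, i.e. having a metrics snapshot ready before the heartbeat message is processed, follows a generic snapshot-publishing pattern. The following is a minimal Python sketch of that pattern only (a hypothetical illustration, not Ignite's actual Java code): a background worker periodically builds a fresh snapshot and publishes it by swapping a single reference, so the latency-sensitive thread only reads the latest snapshot and never pays the cost of collection.

```python
import threading
import time

class MetricsSnapshotter:
    """Collects metrics in a background thread and publishes immutable
    snapshots; readers never block on the collection work itself."""

    def __init__(self, collect, interval=0.05):
        self._collect = collect          # the (potentially expensive) collection callback
        self._interval = interval
        self._snapshot = collect()       # have a snapshot ready from the start
        self._stop = threading.Event()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        # Event.wait doubles as a sleep that can be interrupted by stop().
        while not self._stop.wait(self._interval):
            # Build the new snapshot outside any reader's critical path,
            # then publish it with a single reference assignment.
            self._snapshot = self._collect()

    def latest(self):
        # Called from the latency-sensitive ("discovery") thread:
        # O(1), returns whatever snapshot was last published.
        return self._snapshot

    def stop(self):
        self._stop.set()
        self._worker.join()

# Demo: a trivial collector that versions each snapshot it produces.
counter = {"n": 0}

def collect():
    counter["n"] += 1
    return {"version": counter["n"]}

snap = MetricsSnapshotter(collect, interval=0.01)
time.sleep(0.1)            # let the worker publish a few snapshots
latest = snap.latest()     # cheap read, no collection on this thread
snap.stop()
```

The key property is that `latest()` does no work proportional to the metrics themselves; in CPython the single attribute assignment in `_run` makes the swap safe for this demo.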
[jira] [Created] (IGNITE-5575) Ignite returns wrong CacheMetrics for cluster group
Mikhail Cherkasov created IGNITE-5575: - Summary: Ignite returns wrong CacheMetrics for cluster group Key: IGNITE-5575 URL: https://issues.apache.org/jira/browse/IGNITE-5575 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 2.1 Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-5484) DataStructuresCacheKey and DataStructureInfoKey should have GridCacheInternal marker
Mikhail Cherkasov created IGNITE-5484: - Summary: DataStructuresCacheKey and DataStructureInfoKey should have GridCacheInternal marker Key: IGNITE-5484 URL: https://issues.apache.org/jira/browse/IGNITE-5484 Project: Ignite Issue Type: Bug Components: cache Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov DataStructuresCacheKey and DataStructureInfoKey should have GridCacheInternal marker -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (IGNITE-5461) Visor shows wrong statistics for off heap memory
Mikhail Cherkasov created IGNITE-5461: - Summary: Visor shows wrong statistics for off heap memory Key: IGNITE-5461 URL: https://issues.apache.org/jira/browse/IGNITE-5461 Project: Ignite Issue Type: Bug Reporter: Mikhail Cherkasov Assignee: Alexey Kuznetsov Visor shows that data is stored in heap, while the data is actually in off-heap: Total: 1 Heap: 1 Off-Heap: 0 Off-Heap Memory: 0 while: cache.localPeek("Key1", ONHEAP) == null cache.localPeek("Key1", OFFHEAP) == Value A reproducer is attached. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (IGNITE-5364) Remove contention on DS creation or removing
Mikhail Cherkasov created IGNITE-5364: - Summary: Remove contention on DS creation or removing Key: IGNITE-5364 URL: https://issues.apache.org/jira/browse/IGNITE-5364 Project: Ignite Issue Type: Improvement Reporter: Mikhail Cherkasov Assignee: Mikhail Cherkasov Fix For: 2.1 All DSs are stored in one Map which itself is stored in utilityCache. This causes high contention on DS creation or removal: every operation requires taking a lock on that key and mutating the Map under the lock, so all threads in the cluster must wait for this one lock to create or remove a DS. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
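The contention described above, one shared map guarded by a single lock, is classically reduced with lock striping: partition entries across several locks so operations on unrelated keys rarely serialize. The following is a hypothetical Python sketch of that general technique (not Ignite's actual data-structures implementation):

```python
import threading

class StripedMap:
    """Map partitioned into N stripes, each with its own lock.
    Operations on keys in different stripes proceed in parallel
    instead of serializing on one global lock."""

    def __init__(self, stripes=16):
        self._stripes = stripes
        self._locks = [threading.Lock() for _ in range(stripes)]
        self._maps = [{} for _ in range(stripes)]

    def _stripe(self, key):
        # Deterministically route each key to one stripe.
        return hash(key) % self._stripes

    def create(self, key, value):
        i = self._stripe(key)
        with self._locks[i]:            # only this stripe is locked
            self._maps[i].setdefault(key, value)

    def remove(self, key):
        i = self._stripe(key)
        with self._locks[i]:
            self._maps[i].pop(key, None)

    def get(self, key):
        i = self._stripe(key)
        with self._locks[i]:
            return self._maps[i].get(key)

# Demo: create and remove entries; contention is per-stripe, not global.
m = StripedMap()
m.create("queue-1", object())
m.create("latch-7", object())
m.remove("queue-1")
```

With 16 stripes, two threads creating unrelated data structures contend only if their keys hash to the same stripe, roughly a 1-in-16 chance, rather than always, as with a single map key.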