[jira] [Commented] (IGNITE-10926) ZookeeperDiscoverySpi: client does not survive after several cluster restarts
[ https://issues.apache.org/jira/browse/IGNITE-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766993#comment-16766993 ] Ivan Rakov commented on IGNITE-10926:

Merged to master.

> ZookeeperDiscoverySpi: client does not survive after several cluster restarts
>
> Key: IGNITE-10926
> URL: https://issues.apache.org/jira/browse/IGNITE-10926
> Project: Ignite
> Issue Type: Bug
> Components: zookeeper
> Reporter: Amelchev Nikita
> Assignee: Amelchev Nikita
> Priority: Major
> Fix For: 2.8
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> {{ZookeeperDiscoveryImpl#cleanupPreviousClusterData}} can delete the alive node of a client if it has a low internal order.
> Steps to reproduce:
> 1. Start a server and a client.
> 2. Stop the server and wait for the client to disconnect.
> 3. Start and stop the server. The server doesn't have time to process the client's join request.
> 4. Start the server. It will delete the alive client node because the client has a low internal order. The client will never connect.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
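One way to picture the fix implied by the steps above: cleanup of previous-cluster data must never touch znodes belonging to nodes that are still alive, regardless of their internal order. The sketch below is purely illustrative; `znodesToDelete` and its arguments are hypothetical names, not the actual ZookeeperDiscoveryImpl API.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CleanupSketch {
    /**
     * Hypothetical guard: returns only the znodes that are safe to delete,
     * i.e. those that do not belong to a currently alive node.
     */
    static List<String> znodesToDelete(List<String> previousClusterZnodes, Set<String> aliveNodeIds) {
        List<String> doomed = new ArrayList<>();
        for (String znode : previousClusterZnodes) {
            // A low internal order alone is not a reason to delete: the node
            // may be an alive client that reconnected across the restart.
            if (!aliveNodeIds.contains(znode))
                doomed.add(znode);
        }
        return doomed;
    }

    public static void main(String[] args) {
        Set<String> alive = new HashSet<>(List.of("client-1"));
        List<String> previous = List.of("client-1", "server-old");
        System.out.println(znodesToDelete(previous, alive)); // [server-old]
    }
}
```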
[jira] [Commented] (IGNITE-11304) SQL: Common caching of both local and distributed query metadata
[ https://issues.apache.org/jira/browse/IGNITE-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767025#comment-16767025 ] Vladimir Ozerov commented on IGNITE-11304: -- Test run: https://ci.ignite.apache.org/viewQueued.html?itemId=3076357 > SQL: Common caching of both local and distributed query metadata > > > Key: IGNITE-11304 > URL: https://issues.apache.org/jira/browse/IGNITE-11304 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Currently query metadata is only cached for distributed queries. For local > queries it is calculated on every request over and over again. Need to cache > it always in {{QueryParserResultSelect}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11304) SQL: Common caching of both local and distributed query metadata
[ https://issues.apache.org/jira/browse/IGNITE-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-11304: - Fix Version/s: 2.8 > SQL: Common caching of both local and distributed query metadata > > > Key: IGNITE-11304 > URL: https://issues.apache.org/jira/browse/IGNITE-11304 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Major > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > Currently query metadata is only cached for distributed queries. For local > queries it is calculated on every request over and over again. Need to cache > it always in {{QueryParserResultSelect}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
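The caching the ticket asks for can be pictured as a single memoized map consulted by both the local and the distributed query path. This is an illustrative sketch, assuming a cache keyed by SQL text; the names below are hypothetical, not the actual QueryParserResultSelect API.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MetadataCacheSketch {
    record ColumnMeta(String name, String type) {}

    private final Map<String, List<ColumnMeta>> metaCache = new ConcurrentHashMap<>();

    /** Computes column metadata once per distinct SQL text, local or distributed. */
    List<ColumnMeta> metadata(String sql) {
        return metaCache.computeIfAbsent(sql, this::extractMetadata);
    }

    private List<ColumnMeta> extractMetadata(String sql) {
        // Placeholder for real parsing; a constant result stands in for it here.
        return List.of(new ColumnMeta("ID", "LONG"), new ColumnMeta("VAL", "LONG"));
    }

    public static void main(String[] args) {
        MetadataCacheSketch cache = new MetadataCacheSketch();
        List<ColumnMeta> m1 = cache.metadata("SELECT id, val FROM test_long");
        List<ColumnMeta> m2 = cache.metadata("SELECT id, val FROM test_long");
        // Second lookup hits the cache and returns the same instance.
        System.out.println(m1.size() + " " + (m1 == m2)); // 2 true
    }
}
```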
[jira] [Assigned] (IGNITE-10214) Web console: dependency to open source JDBC driver is not generated in the project's pom file
[ https://issues.apache.org/jira/browse/IGNITE-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vasiliy Sisko reassigned IGNITE-10214:

Assignee: Pavel Konstantinov (was: Vasiliy Sisko)

> Web console: dependency to open source JDBC driver is not generated in the project's pom file
>
> Key: IGNITE-10214
> URL: https://issues.apache.org/jira/browse/IGNITE-10214
> Project: Ignite
> Issue Type: Bug
> Reporter: Pavel Konstantinov
> Assignee: Pavel Konstantinov
> Priority: Major
>
> Steps to reproduce:
> # import caches from, for example, a MySQL DB
> # check the generated pom file

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10627) Support custom preferences like date format and other similar features
[ https://issues.apache.org/jira/browse/IGNITE-10627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766904#comment-16766904 ] Alexander Kalinin commented on IGNITE-10627:

[~ezhuravl] Could you please specify which preferences should be present and how the web console should use them? Date format: ok. Language: maybe a preference. Time zone: maybe. Could you please provide a list of such preferences and describe how they should be used in the application.

> Support custom preferences like date format and other similar features
>
> Key: IGNITE-10627
> URL: https://issues.apache.org/jira/browse/IGNITE-10627
> Project: Ignite
> Issue Type: Improvement
> Reporter: Evgenii Zhuravlev
> Assignee: Alexander Kalinin
> Priority: Major
> Fix For: 2.8

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-10546) [ML] GMM with adding and removal of components
[ https://issues.apache.org/jira/browse/IGNITE-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Platonov reassigned IGNITE-10546:

Assignee: Alexey Platonov

> [ML] GMM with adding and removal of components
>
> Key: IGNITE-10546
> URL: https://issues.apache.org/jira/browse/IGNITE-10546
> Project: Ignite
> Issue Type: New Feature
> Components: ml
> Reporter: Yury Babak
> Assignee: Alexey Platonov
> Priority: Major
>
> Improve the fixed GMM by adding the ability to change the number of components.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11288) TcpDiscovery locks forever on SSLSocket.close().
[ https://issues.apache.org/jira/browse/IGNITE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766998#comment-16766998 ] Eduard Shangareev commented on IGNITE-11288:

Definitely, [~ivandasch] is right: we need to update PlatformConfigurationUtils with platform code, maybe in a separate ticket. I am OK with the change, but we need to create a new ticket for updating the platform code in version 2.8.

> TcpDiscovery locks forever on SSLSocket.close().
>
> Key: IGNITE-11288
> URL: https://issues.apache.org/jira/browse/IGNITE-11288
> Project: Ignite
> Issue Type: Bug
> Reporter: Pavel Voronkin
> Assignee: Pavel Voronkin
> Priority: Critical
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Root cause is a Java bug: SSLSocketImpl.close() blocks on the write lock.
> // We create the socket with soTimeout(0) here, but setting it here won't help anyway.
> RingMessageWorker: 3152 sock = spi.openSocket(addr, timeoutHelper);
> // After a timeout, grid-timeout-worker blocks forever on SSLSocketImpl.close().
> According to the Java 8 SSLSocketImpl:
> {code:java}
> if (var1.isAlert((byte)0) && this.getSoLinger() >= 0) {
>     boolean var3 = Thread.interrupted();
>     try {
>         if (this.writeLock.tryLock((long)this.getSoLinger(), TimeUnit.SECONDS)) {
>             try {
>                 this.writeRecordInternal(var1, var2);
>             } finally {
>                 this.writeLock.unlock();
>             }
>         } else {
>             SSLException var4 = new SSLException("SO_LINGER timeout, close_notify message cannot be sent.");
>             if (this.isLayered() && !this.autoClose) {
>                 this.fatal((byte)-1, (Throwable)var4);
>             } else if (debug != null && Debug.isOn("ssl")) {
>                 System.out.println(Thread.currentThread().getName() + ", received Exception: " + var4);
>             }
>             this.sess.invalidate();
>         }
>     } catch (InterruptedException var14) {
>         var3 = true;
>     }
>     if (var3) {
>         Thread.currentThread().interrupt();
>     }
> } else {
>     this.writeLock.lock();
>     try {
>         this.writeRecordInternal(var1, var2);
>     } finally {
>         this.writeLock.unlock();
>     }
> }
> {code}
> If SO_LINGER is not set, we fall back to this.writeLock.lock(), which waits forever, because RingMessageWorker is writing a message with SO_TIMEOUT zero.
> Solution:
> 1) Set a proper SO_TIMEOUT // that didn't help on Linux when we drop packets using iptables.
> 2) Set SO_LINGER to some reasonable positive value.
> Similar JDK bug: [https://bugs.openjdk.java.net/browse/JDK-6668261]. They ended up setting SO_LINGER.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
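Solution (2) above, setting SO_LINGER to a positive value, can be shown with plain java.net sockets: with a positive linger, SSLSocketImpl.close() takes the tryLock(soLinger) branch instead of blocking forever on writeLock.lock(). A minimal sketch; the value 5 is an arbitrary example, not the value chosen in the patch.

```java
import java.net.Socket;

public class SoLingerExample {
    public static void main(String[] args) throws Exception {
        try (Socket sock = new Socket()) {
            // Linger up to 5 seconds on close() instead of waiting indefinitely.
            sock.setSoLinger(true, 5);
            System.out.println("SO_LINGER=" + sock.getSoLinger()); // SO_LINGER=5
        }
    }
}
```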
[jira] [Updated] (IGNITE-8613) Web console: investigate E2E tests on Node.js 10
[ https://issues.apache.org/jira/browse/IGNITE-8613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Novikov updated IGNITE-8613:

Fix Version/s: 2.8

> Web console: investigate E2E tests on Node.js 10
>
> Key: IGNITE-8613
> URL: https://issues.apache.org/jira/browse/IGNITE-8613
> Project: Ignite
> Issue Type: Improvement
> Components: wizards
> Reporter: Ilya Borisov
> Assignee: Andrey Novikov
> Priority: Minor
> Fix For: 2.8
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Web console E2E tests fail spontaneously when run under Node.js 10. We should investigate what causes it: a Testcafe incompatibility or something in the web console code. If a new, compatible version of Testcafe becomes available, let's update to it as part of this issue.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10937) Support data page scan for JDBC
[ https://issues.apache.org/jira/browse/IGNITE-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766975#comment-16766975 ] Pavel Kuznetsov commented on IGNITE-10937:

Created a separate issue for ODBC: IGNITE-11305

> Support data page scan for JDBC
>
> Key: IGNITE-10937
> URL: https://issues.apache.org/jira/browse/IGNITE-10937
> Project: Ignite
> Issue Type: Improvement
> Components: sql
> Reporter: Sergi Vladykin
> Assignee: Pavel Kuznetsov
> Priority: Major
> Labels: performance

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-11307) SqlNative benchmarks failed with more than 1 client
Ilya Suntsov created IGNITE-11307:

Summary: SqlNative benchmarks failed with more than 1 client
Key: IGNITE-11307
URL: https://issues.apache.org/jira/browse/IGNITE-11307
Project: Ignite
Issue Type: Task
Components: yardstick
Affects Versions: 2.7
Reporter: Ilya Suntsov

I saw the exception below when I tried to run NativeSqlInsertDeleteBenchmark, NativeSqlQueryRangeBenchmark, and NativeSqlUpdateRangeBenchmark with 4 servers and 8 clients. Looks like we need to use "create table if not exists" instead of "create table".

{noformat}
<19:55:12> Create table...
<19:55:12> Creating table with schema: CREATE TABLE test_long (id LONG PRIMARY KEY, val LONG) WITH "wrap_value=true";
[2019-02-12 19:55:12,876][INFO ][exchange-worker-#58][GridDhtPartitionsExchangeFuture] Finish exchange future [startVer=AffinityTopologyVersion [topVer=12, minorTopVer=0], resVer=AffinityTopologyVersion [topVer=12, minorTopVer=0], err=null]
[2019-02-12 19:55:12,881][INFO ][exchange-worker-#58][GridDhtPartitionsExchangeFuture] Completed partition exchange [localNode=f5594085-054c-492f-9112-301b196ff8b3, exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion [topVer=12, minorTopVer=0], evt=NODE_JOINED, evtNode=TcpDiscoveryNode [id=0a475055-ad4c-46e4-88e9-ebeba9c846ce, addrs=ArrayList [127.0.0.1, 172.17.0.1, 172.25.1.26], sockAddrs=HashSet [/172.17.0.1:0, /127.0.0.1:0, lab26.gridgain.local/172.25.1.26:0], discPort=0, order=12, intOrder=12, lastExchangeTime=1549990512845, loc=false, ver=2.8.0#20190211-sha1:e59aa879, isClient=true], done=true], topVer=AffinityTopologyVersion [topVer=12, minorTopVer=0]]
[2019-02-12 19:55:12,881][INFO ][exchange-worker-#58][GridDhtPartitionsExchangeFuture] Exchange timings [startVer=AffinityTopologyVersion [topVer=12, minorTopVer=0], resVer=AffinityTopologyVersion [topVer=12, minorTopVer=0], stage="Waiting in exchange queue" (0 ms), stage="Exchange parameters initialization" (0 ms), stage="Determine exchange type" (5 ms), stage="Exchange done" (4 ms), stage="Total time" (9 ms)]
[2019-02-12 19:55:12,881][INFO ][exchange-worker-#58][GridDhtPartitionsExchangeFuture] Exchange longest local stages [startVer=AffinityTopologyVersion [topVer=12, minorTopVer=0], resVer=AffinityTopologyVersion [topVer=12, minorTopVer=0]]
[2019-02-12 19:55:12,881][INFO ][exchange-worker-#58][time] Finished exchange init [topVer=AffinityTopologyVersion [topVer=12, minorTopVer=0], crd=false]
[2019-02-12 19:55:12,882][INFO ][exchange-worker-#58][GridCachePartitionExchangeManager] Skipping rebalancing (no affinity changes) [top=AffinityTopologyVersion [topVer=12, minorTopVer=0], rebTopVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0], evt=NODE_JOINED, evtNode=0a475055-ad4c-46e4-88e9-ebeba9c846ce, client=true]
[2019-02-12 19:55:12,972][INFO ][exchange-worker-#58][time] Started exchange init [topVer=AffinityTopologyVersion [topVer=12, minorTopVer=1], crd=false, evt=DISCOVERY_CUSTOM_EVT, evtNode=da386c99-7b45-4c5b-913b-95eb5b70118f, customEvt=DynamicCacheChangeBatch [id=5ac22a2e861-fb1a5341-6a6e-4ef8-8a74-072c95cb2e08, reqs=ArrayList [DynamicCacheChangeRequest [cacheName=SQL_PUBLIC_TEST_LONG, hasCfg=true, nodeId=da386c99-7b45-4c5b-913b-95eb5b70118f, clientStartOnly=false, stop=false, destroy=false, disabledAfterStart=false]], exchangeActions=ExchangeActions [startCaches=[SQL_PUBLIC_TEST_LONG], stopCaches=null, startGrps=[SQL_PUBLIC_TEST_LONG], stopGrps=[], resetParts=null, stateChangeRequest=null], startCaches=false], allowMerge=false]
[2019-02-12 19:55:12,986][INFO ][exchange-worker-#58][time] Finished exchange init [topVer=AffinityTopologyVersion [topVer=12, minorTopVer=1], crd=false]
[2019-02-12 19:55:13,063][INFO ][sys-#66][GridDhtPartitionsExchangeFuture] Received full message, will finish exchange [node=e301e555-2a31-4bdd-a574-9ec412f4c435, resVer=AffinityTopologyVersion [topVer=12, minorTopVer=1]]
[2019-02-12 19:55:13,064][INFO ][sys-#66][GridDhtPartitionsExchangeFuture] Finish exchange future [startVer=AffinityTopologyVersion [topVer=12, minorTopVer=1], resVer=AffinityTopologyVersion [topVer=12, minorTopVer=1], err=null]
[2019-02-12 19:55:13,075][INFO ][sys-#66][GridDhtPartitionsExchangeFuture] Completed partition exchange [localNode=f5594085-054c-492f-9112-301b196ff8b3, exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion [topVer=12, minorTopVer=1], evt=DISCOVERY_CUSTOM_EVT, evtNode=TcpDiscoveryNode [id=da386c99-7b45-4c5b-913b-95eb5b70118f, addrs=ArrayList [127.0.0.1, 172.17.0.1, 172.25.1.13], sockAddrs=HashSet [/172.17.0.1:0, lab13.gridgain.local/172.25.1.13:0, /127.0.0.1:0], discPort=0, order=7, intOrder=7, lastExchangeTime=1549990507545, loc=false, ver=2.8.0#20190211-sha1:e59aa879, isClient=true], done=true], topVer=AffinityTopologyVersion [topVer=12, minorTopVer=1]]
[2019-02-12
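The suggested fix, idempotent DDL, means every client can issue the statement without failing when another client has already created the table. A minimal sketch of the rewritten statement; the benchmark itself builds its DDL differently, this only shows the shape of the change:

```java
public class CreateIfNotExists {
    /** The idempotent form of the benchmark's CREATE TABLE statement. */
    static String ddl() {
        return "CREATE TABLE IF NOT EXISTS test_long "
            + "(id LONG PRIMARY KEY, val LONG) WITH \"wrap_value=true\"";
    }

    public static void main(String[] args) {
        // Safe to execute from every client: later clients become a no-op
        // instead of failing with "table already exists".
        System.out.println(ddl());
    }
}
```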
[jira] [Updated] (IGNITE-11308) Add soLinger parameter support in TcpDiscoverySpi .NET configuration.
[ https://issues.apache.org/jira/browse/IGNITE-11308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Voronkin updated IGNITE-11308:

Description: The .NET client should support the TcpDiscovery.soLinger parameter.

> Add soLinger parameter support in TcpDiscoverySpi .NET configuration.
>
> Key: IGNITE-11308
> URL: https://issues.apache.org/jira/browse/IGNITE-11308
> Project: Ignite
> Issue Type: Improvement
> Reporter: Pavel Voronkin
> Priority: Major
>
> The .NET client should support the TcpDiscovery.soLinger parameter.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11288) TcpDiscovery locks forever on SSLSocket.close().
[ https://issues.apache.org/jira/browse/IGNITE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766927#comment-16766927 ] Ivan Daschinskiy commented on IGNITE-11288:

The patch looks good to me; however, it would be a good idea to create a separate ticket for passing this option to the .NET Apache.Ignite.Core.Discovery.Tcp.TcpDiscoverySpi.

> TcpDiscovery locks forever on SSLSocket.close().
>
> Key: IGNITE-11288
> URL: https://issues.apache.org/jira/browse/IGNITE-11288
> Project: Ignite
> Issue Type: Bug
> Reporter: Pavel Voronkin
> Assignee: Pavel Voronkin
> Priority: Critical
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Root cause is a Java bug: SSLSocketImpl.close() blocks on the write lock.
> // We create the socket with soTimeout(0) here, but setting it here won't help anyway.
> RingMessageWorker: 3152 sock = spi.openSocket(addr, timeoutHelper);
> // After a timeout, grid-timeout-worker blocks forever on SSLSocketImpl.close().
> According to the Java 8 SSLSocketImpl:
> {code:java}
> if (var1.isAlert((byte)0) && this.getSoLinger() >= 0) {
>     boolean var3 = Thread.interrupted();
>     try {
>         if (this.writeLock.tryLock((long)this.getSoLinger(), TimeUnit.SECONDS)) {
>             try {
>                 this.writeRecordInternal(var1, var2);
>             } finally {
>                 this.writeLock.unlock();
>             }
>         } else {
>             SSLException var4 = new SSLException("SO_LINGER timeout, close_notify message cannot be sent.");
>             if (this.isLayered() && !this.autoClose) {
>                 this.fatal((byte)-1, (Throwable)var4);
>             } else if (debug != null && Debug.isOn("ssl")) {
>                 System.out.println(Thread.currentThread().getName() + ", received Exception: " + var4);
>             }
>             this.sess.invalidate();
>         }
>     } catch (InterruptedException var14) {
>         var3 = true;
>     }
>     if (var3) {
>         Thread.currentThread().interrupt();
>     }
> } else {
>     this.writeLock.lock();
>     try {
>         this.writeRecordInternal(var1, var2);
>     } finally {
>         this.writeLock.unlock();
>     }
> }
> {code}
> If SO_LINGER is not set, we fall back to this.writeLock.lock(), which waits forever, because RingMessageWorker is writing a message with SO_TIMEOUT zero.
> Solution:
> 1) Set a proper SO_TIMEOUT // that didn't help on Linux when we drop packets using iptables.
> 2) Set SO_LINGER to some reasonable positive value.
> Similar JDK bug: [https://bugs.openjdk.java.net/browse/JDK-6668261]. They ended up setting SO_LINGER.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11280) SQL: Cache all queries, not only two-step
[ https://issues.apache.org/jira/browse/IGNITE-11280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766940#comment-16766940 ] Ignite TC Bot commented on IGNITE-11280: {panel:title=-- Run :: All: Possible Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1} {color:#d04437}ZooKeeper (Discovery) 4{color} [[tests 0 TIMEOUT , Exit Code |https://ci.ignite.apache.org/viewLog.html?buildId=3070251]] * IgniteCachePutRetryAtomicSelfTest.testInvokeAll (last started) {color:#d04437}JDBC Driver{color} [[tests 1|https://ci.ignite.apache.org/viewLog.html?buildId=3070177]] * IgniteJdbcDriverTestSuite: JdbcThinLocalQueriesSelfTest.testLocalThinJdbcQuery - 0,0% fails in last 419 master runs. {panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=3070270buildTypeId=IgniteTests24Java8_RunAll] > SQL: Cache all queries, not only two-step > - > > Key: IGNITE-11280 > URL: https://issues.apache.org/jira/browse/IGNITE-11280 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Major > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11288) TcpDiscovery locks forever on SSLSocket.close().
[ https://issues.apache.org/jira/browse/IGNITE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-11288:

Fix Version/s: 2.8

> TcpDiscovery locks forever on SSLSocket.close().
>
> Key: IGNITE-11288
> URL: https://issues.apache.org/jira/browse/IGNITE-11288
> Project: Ignite
> Issue Type: Bug
> Reporter: Pavel Voronkin
> Assignee: Pavel Voronkin
> Priority: Critical
> Fix For: 2.8
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Root cause is a Java bug: SSLSocketImpl.close() blocks on the write lock.
> // We create the socket with soTimeout(0) here, but setting it here won't help anyway.
> RingMessageWorker: 3152 sock = spi.openSocket(addr, timeoutHelper);
> // After a timeout, grid-timeout-worker blocks forever on SSLSocketImpl.close().
> According to the Java 8 SSLSocketImpl:
> {code:java}
> if (var1.isAlert((byte)0) && this.getSoLinger() >= 0) {
>     boolean var3 = Thread.interrupted();
>     try {
>         if (this.writeLock.tryLock((long)this.getSoLinger(), TimeUnit.SECONDS)) {
>             try {
>                 this.writeRecordInternal(var1, var2);
>             } finally {
>                 this.writeLock.unlock();
>             }
>         } else {
>             SSLException var4 = new SSLException("SO_LINGER timeout, close_notify message cannot be sent.");
>             if (this.isLayered() && !this.autoClose) {
>                 this.fatal((byte)-1, (Throwable)var4);
>             } else if (debug != null && Debug.isOn("ssl")) {
>                 System.out.println(Thread.currentThread().getName() + ", received Exception: " + var4);
>             }
>             this.sess.invalidate();
>         }
>     } catch (InterruptedException var14) {
>         var3 = true;
>     }
>     if (var3) {
>         Thread.currentThread().interrupt();
>     }
> } else {
>     this.writeLock.lock();
>     try {
>         this.writeRecordInternal(var1, var2);
>     } finally {
>         this.writeLock.unlock();
>     }
> }
> {code}
> If SO_LINGER is not set, we fall back to this.writeLock.lock(), which waits forever, because RingMessageWorker is writing a message with SO_TIMEOUT zero.
> Solution:
> 1) Set a proper SO_TIMEOUT // that didn't help on Linux when we drop packets using iptables.
> 2) Set SO_LINGER to some reasonable positive value.
> Similar JDK bug: [https://bugs.openjdk.java.net/browse/JDK-6668261]. They ended up setting SO_LINGER.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11306) NativeSqlJoinQueryRangeBenchmark doesn't work
[ https://issues.apache.org/jira/browse/IGNITE-11306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Suntsov updated IGNITE-11306:

Description:
Config:
{noformat}
now0=`date +'%H%M%S'`
# JVM options.
JVM_OPTS=${JVM_OPTS}" -DIGNITE_QUIET=false"
# Uncomment to enable concurrent garbage collection (GC) if you encounter long GC pauses.
JVM_OPTS=${JVM_OPTS}" \
-Xms8g \
-Xmx8g \
-Xloggc:./gc${now0}.log \
-XX:+PrintGCDetails \
-verbose:gc \
-XX:+UseParNewGC \
-XX:+UseConcMarkSweepGC \
-XX:+PrintGCDateStamps \
"
#Ignite version
ver=ver-2.8.0-SNAPSHOT-rev-816f435d-
# List of default probes.
# Add DStatProbe or VmStatProbe if your OS supports it (e.g. if running on Linux).
BENCHMARK_DEFAULT_PROBES=ThroughputLatencyProbe,PercentileProbe,DStatProbe
# Packages where the specified benchmark is searched by reflection mechanism.
BENCHMARK_PACKAGES=org.yardstickframework,org.apache.ignite.yardstick
# Flag which indicates to restart the servers before every benchmark execution.
RESTART_SERVERS=true
# Probe point writer class name.
# BENCHMARK_WRITER=
# Comma-separated list of the hosts to run BenchmarkServers on.
SERVER_HOSTS=172.25.1.30,172.25.1.27,172.25.1.28,172.25.1.29
# Comma-separated list of the hosts to run BenchmarkDrivers on.
DRIVER_HOSTS=172.25.1.11
# Remote username.
# REMOTE_USER=
# Number of nodes, used to wait for the specified number of nodes to start.
nodesNum=$((`echo ${SERVER_HOSTS} | tr ',' '\n' | wc -l` + `echo ${DRIVER_HOSTS} | tr ',' '\n' | wc -l`))
# Backups count.
b=1
# Warmup.
w=60
# Duration.
d=180
# Threads count.
t=64
# Sync mode.
sm=PRIMARY_SYNC
# Jobs.
j=10
# Run configuration which contains all benchmarks.
# Note that each benchmark is set to run for 300 seconds (5 min) with warm-up set to 60 seconds (1 minute).
CONFIGS="\
-cfg ${SCRIPT_DIR}/../config/ignite-config.xml -nn ${nodesNum} -b ${b} -w ${w} -d ${d} -t ${t} -sm ${sm} -pc 2 -r 10 --sqlRange 1 --client -dn NativeSqlJoinQueryRangeBenchmark -sn IgniteNode -ds ${ver}sql-select-native-join-r1-${b}-backup,\
"
{noformat}
Exception:
{noformat}
<12:50:49> Populate 9
<12:50:50> Populate 10
<12:50:50> Probe writer is not configured (using default CSV writer)
<12:50:50> ThroughputLatencyProbe is started.
<12:50:50> PercentileProbe is started.
<12:50:50> DStatProbe is started. Command: 'dstat -m --all --noheaders --noupdate 1'
<12:50:50> Starting warmup.
Finishing main test [ts=1550051451762, date=Wed Feb 13 12:50:51 MSK 2019]
ERROR: Shutting down benchmark driver to unexpected exception. Type '--help' for usage.
java.lang.Exception: Invalid result set size [actual=0, expected=1]
    at org.apache.ignite.yardstick.jdbc.NativeSqlJoinQueryRangeBenchmark.test(NativeSqlJoinQueryRangeBenchmark.java:84)
    at org.yardstickframework.impl.BenchmarkRunner$2.run(BenchmarkRunner.java:178)
    at java.lang.Thread.run(Thread.java:748)
[2019-02-13 12:50:51,810][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=compute]
[2019-02-13 12:50:51,812][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=query]
[2019-02-13 12:50:51,813][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=atomic-index-with-eviction]
[2019-02-13 12:50:51,813][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=atomic-index]
[2019-02-13 12:50:51,813][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=tx]
[2019-02-13 12:50:51,815][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=atomic]
[2019-02-13 12:50:51,815][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=ignite-sys-cache]
[2019-02-13 12:50:51,815][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=SQL_PUBLIC_PERSON]
[2019-02-13 12:50:51,816][INFO ][Thread-8][GridCacheProcessor] Stopped cache
[cacheName=SQL_PUBLIC_ORGANIZATION]
[2019-02-13 12:50:51,825][INFO ][Thread-8][IgniteKernal]
{noformat}
[jira] [Commented] (IGNITE-8613) Web console: investigate E2E tests on Node.js 10
[ https://issues.apache.org/jira/browse/IGNITE-8613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766972#comment-16766972 ] Andrey Novikov commented on IGNITE-8613:

Merged to master.

> Web console: investigate E2E tests on Node.js 10
>
> Key: IGNITE-8613
> URL: https://issues.apache.org/jira/browse/IGNITE-8613
> Project: Ignite
> Issue Type: Improvement
> Components: wizards
> Reporter: Ilya Borisov
> Assignee: Andrey Novikov
> Priority: Minor
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Web console E2E tests fail spontaneously when run under Node.js 10. We should investigate what causes it: a Testcafe incompatibility or something in the web console code. If a new, compatible version of Testcafe becomes available, let's update to it as part of this issue.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-11308) Add soLinger parameter support in TcpDiscoverySpi .NET configuration.
Pavel Voronkin created IGNITE-11308: --- Summary: Add soLinger parameter support in TcpDiscoverySpi .NET configuration. Key: IGNITE-11308 URL: https://issues.apache.org/jira/browse/IGNITE-11308 Project: Ignite Issue Type: Improvement Reporter: Pavel Voronkin -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-11304) SQL: Common caching of both local and distributed query metadata
Vladimir Ozerov created IGNITE-11304: Summary: SQL: Common caching of both local and distributed query metadata Key: IGNITE-11304 URL: https://issues.apache.org/jira/browse/IGNITE-11304 Project: Ignite Issue Type: Task Components: sql Reporter: Vladimir Ozerov Assignee: Vladimir Ozerov Currently query metadata is only cached for distributed queries. For local queries it is calculated on every request over and over again. Need to cache it always in {{QueryParserResultSelect}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11297) Improving read of hot variables in WAL
[ https://issues.apache.org/jira/browse/IGNITE-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767007#comment-16767007 ] Ignite TC Bot commented on IGNITE-11297:

{panel:title=-- Run :: All: No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=3069516buildTypeId=IgniteTests24Java8_RunAll]

> Improving read of hot variables in WAL
>
> Key: IGNITE-11297
> URL: https://issues.apache.org/jira/browse/IGNITE-11297
> Project: Ignite
> Issue Type: Improvement
> Reporter: Anton Kalashnikov
> Assignee: Anton Kalashnikov
> Priority: Major
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Looks like it is not necessary to mark some variables in FileWriteAheadLogManager as volatile, because they are initialized only once on start but are read many times.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
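The ticket's observation can be illustrated with a write-once field: if a field is assigned exactly once during startup and safely published, it can be final instead of volatile, sparing a memory barrier on every hot-path read. An illustrative sketch, not the actual FileWriteAheadLogManager code:

```java
public class WalHotFieldSketch {
    // Before: volatile long segmentSize;  // barrier paid on every read
    // After: write-once, safely published via final-field semantics.
    private final long segmentSize;

    WalHotFieldSketch(long segmentSize) {
        this.segmentSize = segmentSize;
    }

    long maxOffset(long segmentIdx) {
        // Hot-path read: the JIT is free to keep segmentSize in a register.
        return segmentIdx * segmentSize;
    }

    public static void main(String[] args) {
        WalHotFieldSketch wal = new WalHotFieldSketch(64);
        System.out.println(wal.maxOffset(3)); // 192
    }
}
```

The final-field guarantee only applies when the value is fixed before the object is published; a field written lazily after start() would still need volatile or another safe-publication mechanism.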
[jira] [Created] (IGNITE-11306) NativeSqlJoinQueryRangeBenchmark doesn't work
Ilya Suntsov created IGNITE-11306:

Summary: NativeSqlJoinQueryRangeBenchmark doesn't work
Key: IGNITE-11306
URL: https://issues.apache.org/jira/browse/IGNITE-11306
Project: Ignite
Issue Type: Task
Components: yardstick
Affects Versions: 2.7
Reporter: Ilya Suntsov

Config:
{noformat}
now0=`date +'%H%M%S'`
# JVM options.
JVM_OPTS=${JVM_OPTS}" -DIGNITE_QUIET=false"
# Uncomment to enable concurrent garbage collection (GC) if you encounter long GC pauses.
JVM_OPTS=${JVM_OPTS}" \
-Xms8g \
-Xmx8g \
-Xloggc:./gc${now0}.log \
-XX:+PrintGCDetails \
-verbose:gc \
-XX:+UseParNewGC \
-XX:+UseConcMarkSweepGC \
-XX:+PrintGCDateStamps \
"
#Ignite version
ver=ver-2.8.0-SNAPSHOT-rev-816f435d-
# List of default probes.
# Add DStatProbe or VmStatProbe if your OS supports it (e.g. if running on Linux).
BENCHMARK_DEFAULT_PROBES=ThroughputLatencyProbe,PercentileProbe,DStatProbe
# Packages where the specified benchmark is searched by reflection mechanism.
BENCHMARK_PACKAGES=org.yardstickframework,org.apache.ignite.yardstick
# Flag which indicates to restart the servers before every benchmark execution.
RESTART_SERVERS=true
# Probe point writer class name.
# BENCHMARK_WRITER=
# Comma-separated list of the hosts to run BenchmarkServers on.
SERVER_HOSTS=172.25.1.30,172.25.1.27,172.25.1.28,172.25.1.29
# Comma-separated list of the hosts to run BenchmarkDrivers on.
DRIVER_HOSTS=172.25.1.11
# Remote username.
# REMOTE_USER=
# Number of nodes, used to wait for the specified number of nodes to start.
nodesNum=$((`echo ${SERVER_HOSTS} | tr ',' '\n' | wc -l` + `echo ${DRIVER_HOSTS} | tr ',' '\n' | wc -l`))
# Backups count.
b=1
# Warmup.
w=60
# Duration.
d=180
# Threads count.
t=64
# Sync mode.
sm=PRIMARY_SYNC
# Jobs.
j=10
# Run configuration which contains all benchmarks.
# Note that each benchmark is set to run for 300 seconds (5 min) with warm-up set to 60 seconds (1 minute).
CONFIGS="\
-cfg ${SCRIPT_DIR}/../config/ignite-config.xml -nn ${nodesNum} -b ${b} -w ${w} -d ${d} -t ${t} -sm ${sm} -pc 2 -r 10 --sqlRange 1 --client -dn NativeSqlJoinQueryRangeBenchmark -sn IgniteNode -ds ${ver}sql-select-native-join-r1-${b}-backup,\
"
{noformat}
Exception:
{noformat}
<12:50:49> Populate 9
<12:50:50> Populate 10
<12:50:50> Probe writer is not configured (using default CSV writer)
<12:50:50> ThroughputLatencyProbe is started.
<12:50:50> PercentileProbe is started.
<12:50:50> DStatProbe is started. Command: 'dstat -m --all --noheaders --noupdate 1'
<12:50:50> Starting warmup.
Finishing main test [ts=1550051451762, date=Wed Feb 13 12:50:51 MSK 2019]
ERROR: Shutting down benchmark driver to unexpected exception. Type '--help' for usage.
java.lang.Exception: Invalid result set size [actual=0, expected=1]
    at org.apache.ignite.yardstick.jdbc.NativeSqlJoinQueryRangeBenchmark.test(NativeSqlJoinQueryRangeBenchmark.java:84)
    at org.yardstickframework.impl.BenchmarkRunner$2.run(BenchmarkRunner.java:178)
    at java.lang.Thread.run(Thread.java:748)
[2019-02-13 12:50:51,810][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=compute]
[2019-02-13 12:50:51,812][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=query]
[2019-02-13 12:50:51,813][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=atomic-index-with-eviction]
[2019-02-13 12:50:51,813][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=atomic-index]
[2019-02-13 12:50:51,813][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=tx]
[2019-02-13 12:50:51,815][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=atomic]
[2019-02-13 12:50:51,815][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=ignite-sys-cache]
[2019-02-13 12:50:51,815][INFO ][Thread-8][GridCacheProcessor]
Stopped cache [cacheName=SQL_PUBLIC_PERSON] [2019-02-13 12:50:51,816][INFO ][Thread-8][GridCacheProcessor] Stopped cache [cacheName=SQL_PUBLIC_ORGANIZATION] [2019-02-13 12:50:51,825][INFO ][Thread-8][IgniteKernal]*.* {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11280) SQL: Cache all queries, not only two-step
[ https://issues.apache.org/jira/browse/IGNITE-11280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766942#comment-16766942 ] Vladimir Ozerov commented on IGNITE-11280: -- ZooKeeper - unrelated. JDBC - it was a problem in the test, fixed. > SQL: Cache all queries, not only two-step > - > > Key: IGNITE-11280 > URL: https://issues.apache.org/jira/browse/IGNITE-11280 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Major > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-11305) Support data page scan for ODBC
Pavel Kuznetsov created IGNITE-11305: Summary: Support data page scan for ODBC Key: IGNITE-11305 URL: https://issues.apache.org/jira/browse/IGNITE-11305 Project: Ignite Issue Type: Improvement Components: sql Reporter: Pavel Kuznetsov Just like IGNITE-10937, we need the same for ODBC. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8732) SQL: REPLICATED cache cannot be left-joined to PARTITIONED
[ https://issues.apache.org/jira/browse/IGNITE-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766974#comment-16766974 ] Andrew Mashenkov commented on IGNITE-8732: -- This ticket has a closed duplicate (IGNITE-5016) and I see a number of tests muted with IGNITE-5016. We should unmute these tests within this ticket. > SQL: REPLICATED cache cannot be left-joined to PARTITIONED > -- > > Key: IGNITE-8732 > URL: https://issues.apache.org/jira/browse/IGNITE-8732 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 2.5 >Reporter: Vladimir Ozerov >Priority: Major > Labels: sql-engine > > *Steps to reproduce* > # Run > {{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}} > # Observe that we have 2x results on 2-node cluster > *Root Cause* > {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left > expression. Currently we perform this scan on every node and then simply > merge results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x > results. > *Potential Solutions* > We may consider several solutions. Deeper analysis is required to understand > which is the right one. > # Perform deduplication on the reducer - this is the most promising and general > technique, described in more detail below > # Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to > pass a proper backup filter. But what if the {{REPLICATED}} cache spans more nodes > than the {{PARTITIONED}} one? We cannot rely on primary/backup in this case > # Implement additional execution phase as follows: > {code} > SELECT left.cols, right.cols FROM left INNER JOIN right ON cond; > // Get "inner join" part > UNION > UNICAST SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids > from the first phase]) // Get "outer join" part > {code} > *Reducer Deduplication* > The idea is to get all data locally and then perform final deduplication.
> This may incur high network overhead, because a lot of duplicated left parts > would be transferred. However, this could be optimized greatly with the > following techniques applied one after another > # Semi-joins: {{left}} is joined on the mapper node, but instead of sending > the {{(left, right)}} relation, we send {{(left) + (right)}} > # In case the {{left}} part is known to be idempotent (i.e. it produces the same > result set on all nodes), only one node will send {{(left) + (right)}}, other > nodes will send {{(right)}} only > # Merge {{left}} results if needed (i.e. if the idempotence-related optimization was > not applicable) > # Join {{left}} and {{right}} parts on the reducer -- This message was sent by Atlassian JIRA (v7.6.3#76005)
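The reducer-deduplication idea above can be sketched outside Ignite: every node scans the full REPLICATED left side, so the reducer receives each joined row once per node and must keep only one copy. The row encoding and helper names below are hypothetical illustrations, not Ignite's actual merge code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ReducerDedup {
    // Reducer-side deduplication sketch: each element is a joined row encoded
    // as "leftKey:rightValue" (a stand-in for a real row); keep one copy per
    // distinct row regardless of how many mapper nodes reported it.
    static List<String> dedup(List<List<String>> perNodeResults) {
        Set<String> seen = new LinkedHashSet<>();
        for (List<String> nodeRows : perNodeResults)
            seen.addAll(nodeRows); // Set membership drops the duplicates
        return new ArrayList<>(seen);
    }

    public static void main(String[] args) {
        // Two nodes each scanned the full REPLICATED left side -> 2x rows;
        // "2:null" models the outer part of the LEFT JOIN (no right match).
        List<String> node1 = Arrays.asList("1:a", "2:null");
        List<String> node2 = Arrays.asList("1:a", "2:null");
        System.out.println(dedup(Arrays.asList(node1, node2)));
    }
}
```

The same idea underlies the "idempotent left" optimization in the list above: if every node is known to produce an identical left set, only one node needs to send it at all.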
[jira] [Updated] (IGNITE-10937) Support data page scan for JDBC
[ https://issues.apache.org/jira/browse/IGNITE-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Kuznetsov updated IGNITE-10937: - Summary: Support data page scan for JDBC (was: Support data page scan for JDBC/ODBC) > Support data page scan for JDBC > --- > > Key: IGNITE-10937 > URL: https://issues.apache.org/jira/browse/IGNITE-10937 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Sergi Vladykin >Assignee: Pavel Kuznetsov >Priority: Major > Labels: performance > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11177) IGNITE.NODE_METRICS view fails with "Cannot parse "TIME" constant" > 24h
[ https://issues.apache.org/jira/browse/IGNITE-11177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767009#comment-16767009 ] Pavel Kuznetsov commented on IGNITE-11177: -- [~alex_pl] thank you for the suggested solution. In your approach we cannot test some metrics ("UPTIME", for example), because those are not job metrics but VM (i.e. local) metrics. Maybe it's not a big deal to test every metric. I think it's better to keep all the cases, but if this test becomes fragile, I'll switch to the suggested solution. > IGNITE.NODE_METRICS view fails with "Cannot parse "TIME" constant" > 24h > > > Key: IGNITE-11177 > URL: https://issues.apache.org/jira/browse/IGNITE-11177 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 2.7 >Reporter: Ilya Kasnacheev >Assignee: Pavel Kuznetsov >Priority: Major > Labels: metrics > Fix For: 2.8 > > > This is because we are using the TIME type for several additive measurements: > {quote}SqlSystemViewNodeMetrics.class: > > valueTimeFromMillis(metrics.getTotalJobsExecutionTime()), > valueTimeFromMillis(metrics.getTotalBusyTime()), > > valueTimeFromMillis(metrics.getTotalIdleTime()),{quote} > which will be hundreds of hours on a long-running cluster, but the {{TIME}} type is > limited to 24 hours and will fail to be converted otherwise, as in: > {quote}0: jdbc:ignite:thin://localhost> SELECT CAST('40:52:26.548' AS TIME); > Error: Failed to parse query. Cannot convert string "40:52:26.548" > to type "TIME" > Cannot parse "TIME" constant "40:52:26.548"; SQL statement: > SELECT CAST('40:52:26.548' AS TIME) [22007-197] (state=42000,code=1001){quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
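The 24-hour overflow described in the ticket is easy to reproduce arithmetically: 40:52:26.548 is roughly 147 million milliseconds, which no SQL TIME value can represent. A plain formatter over the raw millisecond counter sidesteps the limit; this is a hypothetical helper for illustration, not the actual SqlSystemViewNodeMetrics fix.

```java
public class UptimeFormat {
    // Format an additive duration in milliseconds as H:MM:SS.mmm without the
    // 24-hour ceiling of the SQL TIME type. Hypothetical helper, names are
    // illustrative only.
    static String formatMillis(long ms) {
        long h = ms / 3_600_000L;            // hours may exceed 24
        long m = ms % 3_600_000L / 60_000L;  // minutes within the hour
        long s = ms % 60_000L / 1_000L;      // seconds within the minute
        return String.format("%d:%02d:%02d.%03d", h, m, s, ms % 1_000L);
    }

    public static void main(String[] args) {
        // The value from the ticket: 40h 52m 26.548s of total busy time.
        System.out.println(formatMillis(147_146_548L)); // prints 40:52:26.548
    }
}
```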
[jira] [Updated] (IGNITE-11306) NativeSqlJoinQueryRangeBenchmark Invalid result set size
[ https://issues.apache.org/jira/browse/IGNITE-11306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Suntsov updated IGNITE-11306: -- Summary: NativeSqlJoinQueryRangeBenchmark Invalid result set size (was: NativeSqlJoinQueryRangeBenchmark doesn't work) > NativeSqlJoinQueryRangeBenchmark Invalid result set size > > > Key: IGNITE-11306 > URL: https://issues.apache.org/jira/browse/IGNITE-11306 > Project: Ignite > Issue Type: Task > Components: yardstick >Affects Versions: 2.7 >Reporter: Ilya Suntsov >Priority: Major > > Config: > {noformat} > now0=`date +'%H%M%S'` > # JVM options. > JVM_OPTS=${JVM_OPTS}" -DIGNITE_QUIET=false" > # Uncomment to enable concurrent garbage collection (GC) if you encounter > long GC pauses. > JVM_OPTS=${JVM_OPTS}" \ > -Xms8g \ > -Xmx8g \ > -Xloggc:./gc${now0}.log \ > -XX:+PrintGCDetails \ > -verbose:gc \ > -XX:+UseParNewGC \ > -XX:+UseConcMarkSweepGC \ > -XX:+PrintGCDateStamps \ > " > #Ignite version > ver=ver-2.8.0-SNAPSHOT-rev-816f435d- > # List of default probes. > # Add DStatProbe or VmStatProbe if your OS supports it (e.g. if running on > Linux). > BENCHMARK_DEFAULT_PROBES=ThroughputLatencyProbe,PercentileProbe,DStatProbe > # Packages where the specified benchmark is searched by reflection mechanism. > BENCHMARK_PACKAGES=org.yardstickframework,org.apache.ignite.yardstick > # Flag which indicates to restart the servers before every benchmark > execution. > RESTART_SERVERS=true > # Probe point writer class name. > # BENCHMARK_WRITER= > # Comma-separated list of the hosts to run BenchmarkServers on. > SERVER_HOSTS=172.25.1.30,172.25.1.27,172.25.1.28,172.25.1.29 > # Comma-separated list of the hosts to run BenchmarkDrivers on. > DRIVER_HOSTS=172.25.1.11 > # Remote username. > # REMOTE_USER= > # Number of nodes, used to wait for the specified number of nodes to start. > nodesNum=$((`echo ${SERVER_HOSTS} | tr ',' '\n' | wc -l` + `echo > ${DRIVER_HOSTS} | tr ',' '\n' | wc -l`)) > # Backups count. > b=1 > # Warmup. 
> w=60 > # Duration. > d=180 > # Threads count. > t=64 > # Sync mode. > sm=PRIMARY_SYNC > # Jobs. > j=10 > # Run configuration which contains all benchmarks. > # Note that each benchmark is set to run for 300 seconds (5 min) with warm-up > set to 60 seconds (1 minute). > CONFIGS="\ > -cfg ${SCRIPT_DIR}/../config/ignite-config.xml -nn ${nodesNum} -b ${b} -w > ${w} -d ${d} -t ${t} -sm ${sm} -pc 2 -r 10 --sqlRange 1 --client > -dn NativeSqlJoinQueryRangeBenchmark -sn IgniteNode -ds > ${ver}sql-select-native-join-r1-${b}-backup,\ > " > {noformat} > Exception: > {noformat} > <12:50:49> Populate 9 > <12:50:50> Populate 10 > <12:50:50> Probe writer is not configured (using default CSV > writer) > <12:50:50> ThroughputLatencyProbe is started. > <12:50:50> PercentileProbe is started. > <12:50:50> DStatProbe is started. Command: 'dstat -m --all > --noheaders --noupdate 1' > <12:50:50> Starting warmup. > Finishing main test [ts=1550051451762, date=Wed Feb 13 12:50:51 MSK 2019] > ERROR: Shutting down benchmark driver to unexpected exception. > Type '--help' for usage. 
> java.lang.Exception: Invalid result set size [actual=0, expected=1] > at > org.apache.ignite.yardstick.jdbc.NativeSqlJoinQueryRangeBenchmark.test(NativeSqlJoinQueryRangeBenchmark.java:84) > at > org.yardstickframework.impl.BenchmarkRunner$2.run(BenchmarkRunner.java:178) > at java.lang.Thread.run(Thread.java:748) > [2019-02-13 12:50:51,810][INFO ][Thread-8][GridCacheProcessor] Stopped cache > [cacheName=compute] > [2019-02-13 12:50:51,812][INFO ][Thread-8][GridCacheProcessor] Stopped cache > [cacheName=query] > [2019-02-13 12:50:51,813][INFO ][Thread-8][GridCacheProcessor] Stopped cache > [cacheName=atomic-index-with-eviction] > [2019-02-13 12:50:51,813][INFO ][Thread-8][GridCacheProcessor] Stopped cache > [cacheName=atomic-index] > [2019-02-13 12:50:51,813][INFO ][Thread-8][GridCacheProcessor] Stopped cache > [cacheName=tx] > [2019-02-13 12:50:51,815][INFO ][Thread-8][GridCacheProcessor] Stopped cache > [cacheName=atomic] > [2019-02-13 12:50:51,815][INFO ][Thread-8][GridCacheProcessor] Stopped cache > [cacheName=ignite-sys-cache] > [2019-02-13 12:50:51,815][INFO ][Thread-8][GridCacheProcessor] Stopped cache > [cacheName=SQL_PUBLIC_PERSON] > [2019-02-13 12:50:51,816][INFO ][Thread-8][GridCacheProcessor] Stopped cache > [cacheName=SQL_PUBLIC_ORGANIZATION] > [2019-02-13 12:50:51,825][INFO ][Thread-8][IgniteKernal] > > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10214) Web console: dependency to open source JDBC driver is not generated in the project's pom file
[ https://issues.apache.org/jira/browse/IGNITE-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767019#comment-16767019 ] Vasiliy Sisko commented on IGNITE-10214: Fixed the generated version of MysqlDataSource. > Web console: dependency to open source JDBC driver is not generated in the > project's pom file > - > > Key: IGNITE-10214 > URL: https://issues.apache.org/jira/browse/IGNITE-10214 > Project: Ignite > Issue Type: Bug >Reporter: Pavel Konstantinov >Assignee: Vasiliy Sisko >Priority: Major > > Steps to reproduce: > # import caches from, for example, a MySQL DB > # check the generated pom file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11304) SQL: Common caching of both local and distributed query metadata
[ https://issues.apache.org/jira/browse/IGNITE-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-11304: - Ignite Flags: (was: Docs Required) > SQL: Common caching of both local and distributed query metadata > > > Key: IGNITE-11304 > URL: https://issues.apache.org/jira/browse/IGNITE-11304 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Major > > Currently query metadata is only cached for distributed queries. For local > queries it is calculated on every request over and over again. Need to cache > it always in {{QueryParserResultSelect}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11288) TcpDiscovery locks forever on SSLSocket.close().
[ https://issues.apache.org/jira/browse/IGNITE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-11288: Ignite Flags: (was: Docs Required) > TcpDiscovery locks forever on SSLSocket.close(). > > > Key: IGNITE-11288 > URL: https://issues.apache.org/jira/browse/IGNITE-11288 > Project: Ignite > Issue Type: Bug >Reporter: Pavel Voronkin >Assignee: Pavel Voronkin >Priority: Critical > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > Root cause is a Java bug: locking on SSLSocketImpl.close() on the write lock. > // we create the socket with soTimeout(0) here, but setting it here won't help > anyway. > RingMessageWorker: 3152 sock = spi.openSocket(addr, timeoutHelper); > // After the timeout, grid-timeout-worker blocks forever on SSLSocketImpl.close(). > According to the Java 8 SSLSocketImpl: > {code:java} > if (var1.isAlert((byte)0) && this.getSoLinger() >= 0) { > boolean var3 = Thread.interrupted(); > try { > if (this.writeLock.tryLock((long)this.getSoLinger(), TimeUnit.SECONDS)) { > try > { this.writeRecordInternal(var1, var2); } > finally > { this.writeLock.unlock(); } > } else > { SSLException var4 = new SSLException("SO_LINGER timeout, close_notify > message cannot be sent."); if (this.isLayered() && !this.autoClose) { > this.fatal((byte)-1, (Throwable)var4); } > else if (debug != null && Debug.isOn("ssl")) > { System.out.println(Thread.currentThread().getName() + ", received > Exception: " + var4); } > this.sess.invalidate(); > } > } catch (InterruptedException var14) > { var3 = true; } > if (var3) > { Thread.currentThread().interrupt(); } > } else > { this.writeLock.lock(); try { this.writeRecordInternal(var1, var2); } > finally > { this.writeLock.unlock(); } > }{code} > In case soLinger is not set we fall back to this.writeLock.lock(), which > waits forever, because RingMessageWorker writes messages with SO_TIMEOUT zero.
> Solution: > 1) Set a proper SO_TIMEOUT // that didn't help on Linux when we drop packets > using iptables. > 2) Set SO_LINGER to some reasonable positive value. > Similar JDK bug: [https://bugs.openjdk.java.net/browse/JDK-6668261]. > They ended up setting SO_LINGER there. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
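The proposed mitigation can be sketched with the plain java.net API: a positive SO_LINGER bounds how long close() may block, and a nonzero read timeout replaces the default infinite SO_TIMEOUT. The values below are illustrative, not Ignite's actual defaults.

```java
import java.net.Socket;
import java.net.SocketException;

public class SocketTuning {
    // Apply the two mitigations from the ticket to a (not yet connected)
    // socket: a positive SO_LINGER so close() cannot block forever on the
    // SSL write lock, and a read timeout instead of the default 0 (infinite).
    // Both values are illustrative.
    static Socket tune(Socket sock) throws SocketException {
        sock.setSoLinger(true, 5);   // close() gives up after 5 seconds
        sock.setSoTimeout(10_000);   // reads fail after 10 s instead of hanging
        return sock;
    }

    public static void main(String[] args) throws Exception {
        Socket s = tune(new Socket());
        // getSoLinger() returns the configured linger when enabled.
        System.out.println(s.getSoLinger() + " " + s.getSoTimeout());
    }
}
```

With a positive linger, the Java 8 SSLSocketImpl code quoted above takes the tryLock branch instead of the unbounded writeLock.lock() path.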
[jira] [Resolved] (IGNITE-10759) Disable implicit distributed joins when queryParallelizm>1.
[ https://issues.apache.org/jira/browse/IGNITE-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov resolved IGNITE-10759. -- Resolution: Duplicate Duplicate of IGNITE-11310. > Disable implicit distributed joins when queryParallelizm>1. > --- > > Key: IGNITE-10759 > URL: https://issues.apache.org/jira/browse/IGNITE-10759 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 1.9 >Reporter: Andrew Mashenkov >Priority: Major > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > For now, a local query with queryParallelizm>1 enables joins between partitions > on the same node even if the distributedJoins flag is false. > This behaviour is unexpected and can't be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11310) SQL: remove special interaction between query parallelism and distributed joins
[ https://issues.apache.org/jira/browse/IGNITE-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767099#comment-16767099 ] Vladimir Ozerov commented on IGNITE-11310: -- A number of new failures appeared. Some of them are due to incorrect tests (e.g. missing collocation). But some of them reveal the actual cause of the "distributedJoins" flag magic. When the "local" flag is set we need to query local partitions. If parallelism is enabled, then we have to query each stripe separately and then merge the results using the standard two-phase flow. For some reason the "distributedJoins" flag was used to force query split and execution in two-step mode. This is wrong. What we need to do instead is to add a special "split needed" flag to the parsing result. This flag should then be used to decide how the query is executed - with or without a split. Previous PRs are removed since their implementation was incorrect. > SQL: remove special interaction between query parallelism and distributed > joins > --- > > Key: IGNITE-11310 > URL: https://issues.apache.org/jira/browse/IGNITE-11310 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Major > Fix For: 2.8 > > Time Spent: 20m > Remaining Estimate: 0h > > Currently we enable so-called "local distributed joins" when query is > executed locally with enabled parallelism. This behavior is not needed and > needs to be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11258) JDBC Thin: update connection setup logic.
[ https://issues.apache.org/jira/browse/IGNITE-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767124#comment-16767124 ] Alexander Lapin commented on IGNITE-11258: -- [~vozerov] Ready for preliminary review. > JDBC Thin: update connection setup logic. > - > > Key: IGNITE-11258 > URL: https://issues.apache.org/jira/browse/IGNITE-11258 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Alexander Lapin >Priority: Major > Labels: iep-23 > Time Spent: 10m > Remaining Estimate: 0h > > # On thin client startup it connects to *all* *nodes* provided by the user in the > client configuration. > # Upon handshake, the server returns its UUID to the client. > # By the end of the startup procedure, the client has open connections to all > available server nodes and the following mapping (*nodeMap*): [UUID => > Connection]. > Connecting to all nodes helps to identify available nodes, but can lead to a > significant delay when the thin client is used on a large cluster with a long IP > list provided by the user. To lower this delay, asynchronous establishment of > connections can be used. > For more information see [IEP-23: Best Effort > Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
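The asynchronous connection establishment the ticket proposes can be sketched with CompletableFuture: fire one connect per configured address in parallel, then collect the UUID => connection mapping as handshakes complete, so the total startup delay is roughly the slowest connect rather than the sum. The handshake function here is a stand-in for the real client code, not Ignite's implementation.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class AsyncConnect {
    // Stand-in for "open connection + handshake": returns the node UUID.
    // The real client would block on TCP connect and the wire handshake here.
    static UUID handshake(String host) {
        return UUID.nameUUIDFromBytes(host.getBytes());
    }

    // Connect to all configured addresses in parallel and build the
    // nodeMap [UUID -> address] the ticket describes.
    static Map<UUID, String> connectAll(List<String> hosts) {
        Map<UUID, String> nodeMap = new ConcurrentHashMap<>();
        CompletableFuture<?>[] futs = hosts.stream()
            .map(h -> CompletableFuture.runAsync(() -> nodeMap.put(handshake(h), h)))
            .toArray(CompletableFuture[]::new);
        CompletableFuture.allOf(futs).join(); // wait ~ slowest connect, not the sum
        return nodeMap;
    }

    public static void main(String[] args) {
        System.out.println(connectAll(Arrays.asList("10.0.0.1", "10.0.0.2")).size());
    }
}
```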
[jira] [Commented] (IGNITE-11310) SQL: remove special interaction between query parallelism and distributed joins
[ https://issues.apache.org/jira/browse/IGNITE-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767048#comment-16767048 ] Vladimir Ozerov commented on IGNITE-11310: -- Implemented. Limited test run for SQL only: https://ci.ignite.apache.org/viewQueued.html?itemId=3076387 > SQL: remove special interaction between query parallelism and distributed > joins > --- > > Key: IGNITE-11310 > URL: https://issues.apache.org/jira/browse/IGNITE-11310 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Major > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > Currently we enable so-called "local distributed joins" when query is > executed locally with enabled parallelism. This behavior is not needed and > needs to be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11310) SQL: remove special interaction between query parallelism and distributed joins
[ https://issues.apache.org/jira/browse/IGNITE-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-11310: - Ignite Flags: (was: Docs Required) > SQL: remove special interaction between query parallelism and distributed > joins > --- > > Key: IGNITE-11310 > URL: https://issues.apache.org/jira/browse/IGNITE-11310 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Major > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > Currently we enable so-called "local distributed joins" when query is > executed locally with enabled parallelism. This behavior is not needed and > needs to be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11288) TcpDiscovery locks forever on SSLSocket.close().
[ https://issues.apache.org/jira/browse/IGNITE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767054#comment-16767054 ] Pavel Voronkin commented on IGNITE-11288: - Thanks > TcpDiscovery locks forever on SSLSocket.close(). > > > Key: IGNITE-11288 > URL: https://issues.apache.org/jira/browse/IGNITE-11288 > Project: Ignite > Issue Type: Bug >Reporter: Pavel Voronkin >Assignee: Pavel Voronkin >Priority: Critical > Fix For: 2.8 > > Time Spent: 20m > Remaining Estimate: 0h > > Root cause is a Java bug: locking on SSLSocketImpl.close() on the write lock. > // we create the socket with soTimeout(0) here, but setting it here won't help > anyway. > RingMessageWorker: 3152 sock = spi.openSocket(addr, timeoutHelper); > // After the timeout, grid-timeout-worker blocks forever on SSLSocketImpl.close(). > According to the Java 8 SSLSocketImpl: > {code:java} > if (var1.isAlert((byte)0) && this.getSoLinger() >= 0) { > boolean var3 = Thread.interrupted(); > try { > if (this.writeLock.tryLock((long)this.getSoLinger(), TimeUnit.SECONDS)) { > try > { this.writeRecordInternal(var1, var2); } > finally > { this.writeLock.unlock(); } > } else > { SSLException var4 = new SSLException("SO_LINGER timeout, close_notify > message cannot be sent."); if (this.isLayered() && !this.autoClose) { > this.fatal((byte)-1, (Throwable)var4); } > else if (debug != null && Debug.isOn("ssl")) > { System.out.println(Thread.currentThread().getName() + ", received > Exception: " + var4); } > this.sess.invalidate(); > } > } catch (InterruptedException var14) > { var3 = true; } > if (var3) > { Thread.currentThread().interrupt(); } > } else > { this.writeLock.lock(); try { this.writeRecordInternal(var1, var2); } > finally > { this.writeLock.unlock(); } > }{code} > In case soLinger is not set we fall back to this.writeLock.lock(), which > waits forever, because RingMessageWorker writes messages with SO_TIMEOUT zero.
> Solution: > 1) Set a proper SO_TIMEOUT // that didn't help on Linux when we drop packets > using iptables. > 2) Set SO_LINGER to some reasonable positive value. > Similar JDK bug: [https://bugs.openjdk.java.net/browse/JDK-6668261]. > They ended up setting SO_LINGER there. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11247) MVCC: Tests has been forgotten to unmute.
[ https://issues.apache.org/jira/browse/IGNITE-11247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767058#comment-16767058 ] Ignite TC Bot commented on IGNITE-11247: {panel:title=-- Run :: All: Possible Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1} {color:#d04437}MVCC Queries{color} [[tests 5|https://ci.ignite.apache.org/viewLog.html?buildId=3039187]] * IgniteCacheMvccSqlTestSuite: CacheMvccPartitionedSqlCoordinatorFailoverTest.testAccountsTxSql_SingleNode_CoordinatorFails_Persistence - 0,0% fails in last 0 master runs. * IgniteCacheMvccSqlTestSuite: CacheMvccReplicatedSqlCoordinatorFailoverTest.testAccountsTxSql_SingleNode_CoordinatorFails_Persistence - 0,0% fails in last 0 master runs. {color:#d04437}PDS (Indexing){color} [[tests 2|https://ci.ignite.apache.org/viewLog.html?buildId=3060610]] * IgnitePdsWithIndexingTestSuite: IgniteTwoRegionsRebuildIndexTest.testRebuildIndexes - 3,1% fails in last 417 master runs. {panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=3037821buildTypeId=IgniteTests24Java8_RunAll] > MVCC: Tests has been forgotten to unmute. > - > > Key: IGNITE-11247 > URL: https://issues.apache.org/jira/browse/IGNITE-11247 > Project: Ignite > Issue Type: Bug > Components: mvcc >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Major > Labels: MakeTeamcityGreenAgain, mvcc_stabilization_stage_1 > Time Spent: 10m > Remaining Estimate: 0h > > There are muted/ignored tests that are not being run on TC, but the tickets for > fixing them look already resolved. > Let's recheck those tests and either unmute them or create new tickets to > fix them later if needed.
> IgniteBasicWithPersistenceTestSuite > * testIoomErrorMvccPdsHandling - IGNITE-10185 > IgniteCacheMvccSqlTestSuite > * testSqlReadInsideTxInProgressCoordinatorFails - IGNITE-8841 > * testSqlReadInsideTxInProgressCoordinatorFails_ReadDelay - IGNITE-8841 > * > testPutAllGetAll_ClientServer_Backups1_SinglePartition_RestartRandomSrv_SqlDml > - IGNITE-10752 > * testAccountsTxSql_SingleNode_CoordinatorFails_Persistence - IGNITE-10753 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11311) MVCC: SQL full table scan query can return duplicates.
[ https://issues.apache.org/jira/browse/IGNITE-11311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov updated IGNITE-11311: -- Ignite Flags: (was: Docs Required) > MVCC: SQL full table scan query can return duplicates. > -- > > Key: IGNITE-11311 > URL: https://issues.apache.org/jira/browse/IGNITE-11311 > Project: Ignite > Issue Type: Bug > Components: mvcc, sql >Reporter: Andrew Mashenkov >Priority: Major > > A SQL query like "select * from table" can return duplicate rows. > Possible reasons: > * the SQL query iterates over data pages directly and can see an > inconsistent state (IGNITE-10561) > * same as IGNITE-10767, the query sees stale pages > * something is wrong with MVCC version visibility. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11267) Print warning when keystore password arguments are used in control.sh (bat)
[ https://issues.apache.org/jira/browse/IGNITE-11267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767044#comment-16767044 ] Ignite TC Bot commented on IGNITE-11267: {panel:title=-- Run :: All: No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=3060237buildTypeId=IgniteTests24Java8_RunAll] > Print warning when keystore password arguments are used in control.sh (bat) > --- > > Key: IGNITE-11267 > URL: https://issues.apache.org/jira/browse/IGNITE-11267 > Project: Ignite > Issue Type: Task >Affects Versions: 2.7 >Reporter: Andrey Kuznetsov >Assignee: Andrey Kuznetsov >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Control utility gets keystore/truststore password either as command line > argument or as console input. Former way is insecure, and user should be > warned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-11312) JDBC: Thin driver reports incorrect property names
Stanislav Lukyanov created IGNITE-11312: --- Summary: JDBC: Thin driver reports incorrect property names Key: IGNITE-11312 URL: https://issues.apache.org/jira/browse/IGNITE-11312 Project: Ignite Issue Type: Improvement Components: jdbc Reporter: Stanislav Lukyanov The JDBC driver reports the properties it supports via the getPropertyInfo method. It currently reports the property names as simple strings, like "enforceJoinOrder". However, when the properties are processed on connect they are looked up with the prefix "ignite.jdbc", e.g. "ignite.jdbc.enforceJoinOrder". Because of this, UI tools like DBeaver can't properly pass the properties to Ignite. For example, when "enforceJoinOrder" is set to true in the "Connection settings" -> "Driver properties" menu of DBeaver, it has no effect. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
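The mismatch described in the ticket can be sketched with plain java.util.Properties: a tool sets the property under the name the driver advertised, but connect-time lookup reads the prefixed name, so the value is never found. The prefix constant and helper below are illustrative, not the driver's actual code.

```java
import java.util.Properties;

public class PropPrefix {
    // Prefix used at connect time, per the ticket description.
    static final String PREFIX = "ignite.jdbc.";

    // What a UI tool does: set the property under the bare name the driver
    // advertised via getPropertyInfo().
    static Properties toolProps() {
        Properties p = new Properties();
        p.setProperty("enforceJoinOrder", "true");
        return p;
    }

    // What connect-time processing does (per the ticket): read the prefixed
    // name, so the bare-name value set above is invisible.
    static String lookup(Properties p, String name) {
        return p.getProperty(PREFIX + name);
    }

    public static void main(String[] args) {
        System.out.println(lookup(toolProps(), "enforceJoinOrder")); // prints null
    }
}
```

Reporting the already-prefixed names from getPropertyInfo (or stripping the prefix during lookup) would make the two sides agree.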
[jira] [Updated] (IGNITE-11310) SQL: Remove special interaction between query parallelism and distributed joins
[ https://issues.apache.org/jira/browse/IGNITE-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-11310: - Summary: SQL: Remove special interaction between query parallelism and distributed joins (was: SQL: remove special interaction between query parallelism and distributed joins) > SQL: Remove special interaction between query parallelism and distributed > joins > --- > > Key: IGNITE-11310 > URL: https://issues.apache.org/jira/browse/IGNITE-11310 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Major > Fix For: 2.8 > > Time Spent: 20m > Remaining Estimate: 0h > > Currently we enable so-called "local distributed joins" when query is > executed locally with enabled parallelism. This behavior is not needed and > needs to be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (IGNITE-11258) JDBC Thin: update connection setup logic.
[ https://issues.apache.org/jira/browse/IGNITE-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767124#comment-16767124 ] Alexander Lapin edited comment on IGNITE-11258 at 2/13/19 12:35 PM: [~vozerov] Ready for preliminary review. Please note that this ticket also contains changes from IGNITE-11257. was (Author: alapin): [~vozerov] Ready for preliminary review. > JDBC Thin: update connection setup logic. > - > > Key: IGNITE-11258 > URL: https://issues.apache.org/jira/browse/IGNITE-11258 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Alexander Lapin >Priority: Major > Labels: iep-23 > Time Spent: 10m > Remaining Estimate: 0h > > # On thin client startup it connects to *all* *nodes* provided by the user in the > client configuration. > # Upon handshake, the server returns its UUID to the client. > # By the end of the startup procedure, the client has open connections to all > available server nodes and the following mapping (*nodeMap*): [UUID => > Connection]. > Connecting to all nodes helps to identify available nodes, but can lead to a > significant delay when the thin client is used on a large cluster with a long IP > list provided by the user. To lower this delay, asynchronous establishment of > connections can be used. > For more information see [IEP-23: Best Effort > Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11216) Ignite.sh fails on Mac OS and Linux - Java 11
[ https://issues.apache.org/jira/browse/IGNITE-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767145#comment-16767145 ] Dmitriy Pavlov commented on IGNITE-11216: - [~ustas], could you please review this fix https://github.com/apache/ignite/pull/6095 ? IMO, LGTM, but it would be better if you can take a look, as well. > Ignite.sh fails on Mac OS and Linux - Java 11 > - > > Key: IGNITE-11216 > URL: https://issues.apache.org/jira/browse/IGNITE-11216 > Project: Ignite > Issue Type: Task >Affects Versions: 2.7 >Reporter: Denis Magda >Assignee: Peter Ivanov >Priority: Blocker > Fix For: 2.8 > > Time Spent: 50m > Remaining Estimate: 0h > > Ignite.sh fails on Mac OS Mojave with the following JDK version: > java version "11.0.2" 2019-01-15 LTS > Java(TM) SE Runtime Environment 18.9 (build 11.0.2+9-LTS) > Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.2+9-LTS, mixed mode) > The same issue is reproduced on Linux and the workaround is discussed here: > https://issues.apache.org/jira/browse/IGNITE-3 > The exception is as follows: > {noformat} > /Users/dmagda/Downloads/apache-ignite-2.7.0-bin/bin/include/functions.sh: > line 40: [: -eq: unary operator expected > ./ignite.sh: line 152: [: -eq: unary operator expected > ./ignite.sh: line 157: [: -gt: unary operator expected > ./ignite.sh: line 170: [: -eq: unary operator expected > WARNING: An illegal reflective access operation has occurred > WARNING: Illegal reflective access by > org.apache.ignite.internal.util.GridUnsafe$2 > (file:/Users/dmagda/Downloads/apache-ignite-2.7.0-bin/libs/ignite-core-2.7.0.jar) > to field java.nio.Buffer.address > WARNING: Please consider reporting this to the maintainers of > org.apache.ignite.internal.util.GridUnsafe$2 > WARNING: Use --illegal-access=warn to enable warnings of further illegal > reflective access operations > WARNING: All illegal access operations will be denied in a future release > Exception in thread "main" java.lang.ExceptionInInitializerError 
> at > org.apache.ignite.internal.util.IgniteUtils.(IgniteUtils.java:795) > at > org.apache.ignite.lang.IgniteProductVersion.fromString(IgniteProductVersion.java:305) > at > org.apache.ignite.internal.IgniteVersionUtils.(IgniteVersionUtils.java:71) > at > org.apache.ignite.startup.cmdline.CommandLineStartup.(CommandLineStartup.java:99) > Caused by: java.lang.RuntimeException: jdk.internal.misc.JavaNioAccess class > is unavailable. > at > org.apache.ignite.internal.util.GridUnsafe.javaNioAccessObject(GridUnsafe.java:1453) > at > org.apache.ignite.internal.util.GridUnsafe.(GridUnsafe.java:112) > ... 4 more > Caused by: java.lang.IllegalAccessException: class > org.apache.ignite.internal.util.GridUnsafe cannot access class > jdk.internal.misc.SharedSecrets (in module java.base) because module > java.base does not export jdk.internal.misc to unnamed module @4f83df68 > at > java.base/jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361) > at > java.base/java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) > at java.base/java.lang.reflect.Method.invoke(Method.java:558) > at > org.apache.ignite.internal.util.GridUnsafe.javaNioAccessObject(GridUnsafe.java:1450) > ... 5 more > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
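The `[: -eq: unary operator expected` failures above come from version checks written for the pre-Java 9 `1.x` version scheme, which no longer matches strings like `11.0.2`. A minimal sketch of that parsing assumption, in Java for illustration (the `parseMajor` helper is hypothetical, not Ignite's actual code):

```java
// Hypothetical sketch: version parsing written for the legacy "1.x" scheme
// must special-case Java 9+ strings, where the major version moves to the
// first component ("1.8.0_201" vs "11.0.2").
public class VersionParseSketch {
    static int parseMajor(String version) {
        String[] parts = version.split("\\.");
        int first = Integer.parseInt(parts[0]);
        // Pre-Java 9: "1.8.0_201" -> major is the SECOND component (8).
        // Java 9+:    "11.0.2"    -> major is the FIRST component (11).
        return first == 1 ? Integer.parseInt(parts[1]) : first;
    }

    public static void main(String[] args) {
        System.out.println(parseMajor("1.8.0_201")); // 8
        System.out.println(parseMajor("11.0.2"));    // 11
    }
}
```

A script that only ever expects the `1.x` form ends up comparing an empty or unexpected token, which is exactly what the unary-operator errors from `functions.sh` indicate.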
[jira] [Created] (IGNITE-11310) SQL: remove special interaction between query parallelism and distributed joins
Vladimir Ozerov created IGNITE-11310: Summary: SQL: remove special interaction between query parallelism and distributed joins Key: IGNITE-11310 URL: https://issues.apache.org/jira/browse/IGNITE-11310 Project: Ignite Issue Type: Task Components: sql Reporter: Vladimir Ozerov Assignee: Vladimir Ozerov Fix For: 2.8 Currently we enable so-called "local distributed joins" when a query is executed locally with parallelism enabled. This behavior is unnecessary and should be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-11309) JDBC Thin: add flag or property to disable best effort affinity
Alexander Lapin created IGNITE-11309: Summary: JDBC Thin: add flag or property to disable best effort affinity Key: IGNITE-11309 URL: https://issues.apache.org/jira/browse/IGNITE-11309 Project: Ignite Issue Type: Task Components: sql Affects Versions: 2.8 Reporter: Alexander Lapin We need the ability to disable best effort affinity in thin clients, including the thin JDBC client. It is not obvious whether this should be a flag in the connection string, an application property, or something else, so research is required. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11312) JDBC: Thin driver reports incorrect property names
[ https://issues.apache.org/jira/browse/IGNITE-11312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stanislav Lukyanov updated IGNITE-11312: Labels: newbie (was: ) > JDBC: Thin driver reports incorrect property names > -- > > Key: IGNITE-11312 > URL: https://issues.apache.org/jira/browse/IGNITE-11312 > Project: Ignite > Issue Type: Improvement > Components: jdbc >Reporter: Stanislav Lukyanov >Priority: Major > Labels: newbie > > The JDBC driver reports the properties it supports via the getPropertyInfo method. It > currently reports the property names as simple strings, like > "enforceJoinOrder". However, when the properties are processed on connect > they are looked up with the prefix "ignite.jdbc", e.g. > "ignite.jdbc.enforceJoinOrder". > Because of this, UI tools like DBeaver can't properly pass the properties to > Ignite. For example, when "enforceJoinOrder" is set to true in the "Connection > settings" -> "Driver properties" menu of DBeaver, it has no effect. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
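The prefix mismatch described in IGNITE-11312 can be sketched with plain `java.util.Properties`; the `lookup` helper below is a hypothetical stand-in for the driver's connect-time processing, while the `ignite.jdbc.` prefix and the `enforceJoinOrder` name come from the ticket itself:

```java
import java.util.Properties;

// Illustrative sketch of the mismatch: the driver reports bare names via
// getPropertyInfo, but resolves them with the "ignite.jdbc." prefix.
public class JdbcPrefixSketch {
    static final String PREFIX = "ignite.jdbc.";

    // Hypothetical stand-in for the connect-time lookup, which expects
    // the prefixed form of each property name.
    static String lookup(Properties props, String reportedName) {
        return props.getProperty(PREFIX + reportedName);
    }

    public static void main(String[] args) {
        Properties fromUiTool = new Properties();
        // A UI tool sets the property under the name the driver reported...
        fromUiTool.setProperty("enforceJoinOrder", "true");
        // ...but the prefixed lookup misses it, so the setting has no effect.
        System.out.println(lookup(fromUiTool, "enforceJoinOrder")); // null
    }
}
```

Reporting the prefixed names from `getPropertyInfo` (or accepting both forms on connect) would make tools like DBeaver work without special handling.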
[jira] [Reopened] (IGNITE-11247) MVCC: Muted tests were never unmuted.
[ https://issues.apache.org/jira/browse/IGNITE-11247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov reopened IGNITE-11247: --- > MVCC: Muted tests were never unmuted. > - > > Key: IGNITE-11247 > URL: https://issues.apache.org/jira/browse/IGNITE-11247 > Project: Ignite > Issue Type: Bug > Components: mvcc >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Major > Labels: MakeTeamcityGreenAgain, mvcc_stabilization_stage_1 > Time Spent: 10m > Remaining Estimate: 0h > > There are muted/ignored tests that are not being run on TC, but the tickets for > fixing them look resolved already. > Let's recheck those tests and either unmute them or create new tickets > if needed. > IgniteBasicWithPersistenceTestSuite > * testIoomErrorMvccPdsHandling - IGNITE-10185 > IgniteCacheMvccSqlTestSuite > * testSqlReadInsideTxInProgressCoordinatorFails - IGNITE-8841 > * testSqlReadInsideTxInProgressCoordinatorFails_ReadDelay - IGNITE-8841 > * > testPutAllGetAll_ClientServer_Backups1_SinglePartition_RestartRandomSrv_SqlDml > - IGNITE-10752 > * testAccountsTxSql_SingleNode_CoordinatorFails_Persistence - IGNITE-10753 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (IGNITE-11247) MVCC: Muted tests were never unmuted.
[ https://issues.apache.org/jira/browse/IGNITE-11247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov resolved IGNITE-11247. --- Resolution: Fixed Muted failing tests under IGNITE-11311 > MVCC: Muted tests were never unmuted. > - > > Key: IGNITE-11247 > URL: https://issues.apache.org/jira/browse/IGNITE-11247 > Project: Ignite > Issue Type: Bug > Components: mvcc >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Major > Labels: MakeTeamcityGreenAgain, mvcc_stabilization_stage_1 > Time Spent: 10m > Remaining Estimate: 0h > > There are muted/ignored tests that are not being run on TC, but the tickets for > fixing them look resolved already. > Let's recheck those tests and either unmute them or create new tickets > if needed. > IgniteBasicWithPersistenceTestSuite > * testIoomErrorMvccPdsHandling - IGNITE-10185 > IgniteCacheMvccSqlTestSuite > * testSqlReadInsideTxInProgressCoordinatorFails - IGNITE-8841 > * testSqlReadInsideTxInProgressCoordinatorFails_ReadDelay - IGNITE-8841 > * > testPutAllGetAll_ClientServer_Backups1_SinglePartition_RestartRandomSrv_SqlDml > - IGNITE-10752 > * testAccountsTxSql_SingleNode_CoordinatorFails_Persistence - IGNITE-10753 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11298) TcpCommunicationSpi does not support TLSv1.3
[ https://issues.apache.org/jira/browse/IGNITE-11298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Kasnacheev updated IGNITE-11298: - Description: When started on Java 11 we cannot form a secure cluster - Discovery will happily use the default TLSv1.3 but Communication will fail with its custom SSLEngine-using code. Need to fix that. Until that, nodes may be salvaged by setProtocol("TLSv1.2") on SslContextFactory, or by system property -Djdk.tls.client.protocols="TLSv1.2" was: When started on Java 11 we cannot form a secure cluster - Discovery will happily use the default TLSv1.2 but Communication will fail with its custom SSLEngine-using code. Need to fix that. Until that, nodes may be salvaged by setProtocol("TLSv1.2") on SslContextFactory, or by system property -Djdk.tls.client.protocols="TLSv1.2" > TcpCommunicationSpi does not support TLSv1.3 > > > Key: IGNITE-11298 > URL: https://issues.apache.org/jira/browse/IGNITE-11298 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 2.7 >Reporter: Ilya Kasnacheev >Priority: Major > Labels: Java11 > > When started on Java 11 we cannot form a secure cluster - Discovery will > happily use the default TLSv1.3 but Communication will fail with its custom > SSLEngine-using code. > Need to fix that. > Until that, nodes may be salvaged by setProtocol("TLSv1.2") on > SslContextFactory, or by system property -Djdk.tls.client.protocols="TLSv1.2" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
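Of the two workarounds mentioned in IGNITE-11298, only the JDK-level system property is shown below, since `SslContextFactory#setProtocol("TLSv1.2")` needs `ignite-core` on the classpath. This is a minimal sketch of pinning the client protocol version programmatically instead of via the `-D` flag:

```java
// Minimal sketch of the workaround from the ticket: pin TLSv1.2 so that
// Discovery and Communication negotiate the same protocol on Java 11.
public class TlsPinSketch {
    // Programmatic equivalent of -Djdk.tls.client.protocols="TLSv1.2";
    // must run before any JSSE classes are initialized to take effect.
    static String pin() {
        System.setProperty("jdk.tls.client.protocols", "TLSv1.2");
        return System.getProperty("jdk.tls.client.protocols");
    }

    public static void main(String[] args) {
        System.out.println(pin()); // TLSv1.2
    }
}
```

The cleaner per-node alternative is to call `setProtocol("TLSv1.2")` on the `SslContextFactory` supplied to the Ignite configuration, as the ticket describes.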
[jira] [Commented] (IGNITE-10926) ZookeeperDiscoverySpi: client does not survive after several cluster restarts
[ https://issues.apache.org/jira/browse/IGNITE-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767129#comment-16767129 ] Dmitriy Pavlov commented on IGNITE-10926: - [~ivan.glukos] thank you for picking up this merge. > ZookeeperDiscoverySpi: client does not survive after several cluster restarts > - > > Key: IGNITE-10926 > URL: https://issues.apache.org/jira/browse/IGNITE-10926 > Project: Ignite > Issue Type: Bug > Components: zookeeper >Reporter: Amelchev Nikita >Assignee: Amelchev Nikita >Priority: Major > Fix For: 2.8 > > Time Spent: 20m > Remaining Estimate: 0h > > {{ZookeeperDiscoveryImpl#cleanupPreviousClusterData}} can delete alive node > of a client in case of low internal order. > Steps to reproduce: > 1. Start server and client. > 2. Stop the server and wait for the client disconnected. > 3. Start and stop the server. The server hasn't time to process client join > request. > 4. Start server. It will delete alive client node because the client has low > internal order. The client will never connect. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11265) JVM Crashes on TeamCity
[ https://issues.apache.org/jira/browse/IGNITE-11265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767148#comment-16767148 ] Ignite TC Bot commented on IGNITE-11265: {panel:title=-- Run :: All: Possible Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1} {color:#d04437}Queries 1{color} [[tests 1|https://ci.ignite.apache.org/viewLog.html?buildId=3070375]] * IgniteBinaryCacheQueryTestSuite: IndexingCachePartitionLossPolicySelfTest.testReadWriteSafe - 0,0% fails in last 413 master runs. {color:#d04437}Cache (Failover) 1{color} [[tests 6|https://ci.ignite.apache.org/viewLog.html?buildId=3070339]] * IgniteCacheFailoverTestSuite: IgniteChangingBaselineUpCacheRemoveFailoverTest.testPutAndRemoveOptimisticSerializableTx - 0,0% fails in last 406 master runs. * IgniteCacheFailoverTestSuite: IgniteChangingBaselineDownCacheRemoveFailoverTest.testPutAndRemovePessimisticTx - 0,0% fails in last 406 master runs. * IgniteCacheFailoverTestSuite: IgniteChangingBaselineUpCacheRemoveFailoverTest.testPutAndRemove - 0,0% fails in last 406 master runs. * IgniteCacheFailoverTestSuite: IgniteChangingBaselineUpCacheRemoveFailoverTest.testPutAndRemovePessimisticTx - 0,0% fails in last 406 master runs. * IgniteCacheFailoverTestSuite: IgniteChangingBaselineDownCacheRemoveFailoverTest.testPutAndRemoveOptimisticSerializableTx - 0,0% fails in last 406 master runs. * IgniteCacheFailoverTestSuite: IgniteChangingBaselineDownCacheRemoveFailoverTest.testPutAndRemove - 0,0% fails in last 406 master runs. 
{color:#d04437}ZooKeeper (Discovery) 2{color} [[tests 0 TIMEOUT , Exit Code |https://ci.ignite.apache.org/viewLog.html?buildId=3070322]] * GridCachePartitionedNodeRestartTest.testRestartWithTxTenNodesTwoBackups (last started) {color:#d04437}Cache 2{color} [[tests 0 TIMEOUT , Exit Code |https://ci.ignite.apache.org/viewLog.html?buildId=3070348]] * GridCacheDhtPreloadMultiThreadedSelfTest.testConcurrentNodesStartStop (last started) {panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=3070399buildTypeId=IgniteTests24Java8_RunAll] > JVM Crashes on TeamCity > --- > > Key: IGNITE-11265 > URL: https://issues.apache.org/jira/browse/IGNITE-11265 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Pavlov >Assignee: Eduard Shangareev >Priority: Critical > Attachments: hs_err_pid2431080.log.txt, hs_err_pid2458635.log.txt, > hs_err_pid2674225.log.txt, hs_err_pid3473289.log.txt > > Time Spent: 10m > Remaining Estimate: 0h > > All crash dumps complain about the same method > org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writeLock > Data Structures (https://ci.ignite.apache.org/viewLog.html?buildId=3007882) > https://ci.ignite.apache.org/repository/download/IgniteTests24Java8_DataStructures/3007882:id/hs_err_pid2674225.log > Other recent examples > Queries 1 > https://ci.ignite.apache.org/repository/download/IgniteTests24Java8_Queries1/3027655:id/hs_err_pid2458635.log > Client Nodes > https://ci.ignite.apache.org/repository/download/IgniteTests24Java8_ClientNodes/3027569:id/hs_err_pid2431080.log > Zookeeper Discovery > https://ci.ignite.apache.org/repository/download/IgniteTests24Java8_ZooKeeperDiscovery1/3027601:id/hs_err_pid3473289.log -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10937) Support data page scan for JDBC
[ https://issues.apache.org/jira/browse/IGNITE-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Kuznetsov updated IGNITE-10937: - Description: In scope of the JDBC v2 and thin drivers we need to add a connection property that reflects the state of the data page scan feature: enabled/disabled/not defined. > Support data page scan for JDBC > --- > > Key: IGNITE-10937 > URL: https://issues.apache.org/jira/browse/IGNITE-10937 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Sergi Vladykin >Assignee: Pavel Kuznetsov >Priority: Major > Labels: performance > > In scope of the JDBC v2 and thin drivers > we need to add a connection property that reflects the state of the data page scan > feature: enabled/disabled/not defined. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11288) TcpDiscovery locks forever on SSLSocket.close().
[ https://issues.apache.org/jira/browse/IGNITE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767035#comment-16767035 ] Dmitriy Govorukhin commented on IGNITE-11288: - Merged to master. > TcpDiscovery locks forever on SSLSocket.close(). > > > Key: IGNITE-11288 > URL: https://issues.apache.org/jira/browse/IGNITE-11288 > Project: Ignite > Issue Type: Bug >Reporter: Pavel Voronkin >Assignee: Pavel Voronkin >Priority: Critical > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > Root cause is a Java bug: locking in SSLSocketImpl.close() on the write lock. > // We create the socket with soTimeout(0) here, but setting it here won't help anyway. > RingMessageWorker:3152 sock = spi.openSocket(addr, timeoutHelper); > // After the timeout, grid-timeout-worker blocks forever on SSLSocketImpl.close(). > According to the Java 8 SSLSocketImpl:
> {code:java}
> if (var1.isAlert((byte)0) && this.getSoLinger() >= 0) {
>     boolean var3 = Thread.interrupted();
>     try {
>         if (this.writeLock.tryLock((long)this.getSoLinger(), TimeUnit.SECONDS)) {
>             try {
>                 this.writeRecordInternal(var1, var2);
>             } finally {
>                 this.writeLock.unlock();
>             }
>         } else {
>             SSLException var4 = new SSLException("SO_LINGER timeout, close_notify message cannot be sent.");
>             if (this.isLayered() && !this.autoClose) {
>                 this.fatal((byte)-1, (Throwable)var4);
>             } else if (debug != null && Debug.isOn("ssl")) {
>                 System.out.println(Thread.currentThread().getName() + ", received Exception: " + var4);
>             }
>             this.sess.invalidate();
>         }
>     } catch (InterruptedException var14) {
>         var3 = true;
>     }
>     if (var3) {
>         Thread.currentThread().interrupt();
>     }
> } else {
>     this.writeLock.lock();
>     try {
>         this.writeRecordInternal(var1, var2);
>     } finally {
>         this.writeLock.unlock();
>     }
> }
> {code}
> In case SO_LINGER is not set, we fall back to this.writeLock.lock(), which waits forever, because RingMessageWorker writes messages with SO_TIMEOUT zero.
> Solution: > 1) Set a proper SO_TIMEOUT // that didn't help on Linux in case we drop packets using iptables. > 2) Set SO_LINGER to some reasonable positive value. > A similar JDK bug is [https://bugs.openjdk.java.net/browse/JDK-6668261]. > They ended up setting SO_LINGER there as well. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
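Remedy (2) from the comment above can be sketched with plain `java.net` sockets; this is only an illustration of the socket options involved, not Ignite's actual discovery code, and the `lingerAfterConfig` helper is hypothetical:

```java
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of the proposed remedies: a bounded SO_TIMEOUT for reads, and a
// positive SO_LINGER so that close() on the SSL path cannot block forever
// waiting to send close_notify.
public class LingerSketch {
    // Configures a loopback socket pair and returns the effective SO_LINGER.
    static int lingerAfterConfig() throws Exception {
        try (ServerSocket srv = new ServerSocket(0);
             Socket sock = new Socket("127.0.0.1", srv.getLocalPort())) {
            sock.setSoTimeout(5_000);  // remedy 1: bounded blocking reads
            sock.setSoLinger(true, 5); // remedy 2: close() waits at most 5 s
            return sock.getSoLinger();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(lingerAfterConfig()); // 5
    }
}
```

With SO_LINGER enabled, the JDK's `SSLSocketImpl.close()` takes the `tryLock(soLinger, SECONDS)` branch shown in the quoted code instead of the unbounded `writeLock.lock()` fallback.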
[jira] [Closed] (IGNITE-10759) Disable implicit distributed joins when queryParallelizm>1.
[ https://issues.apache.org/jira/browse/IGNITE-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov closed IGNITE-10759. > Disable implicit distributed joins when queryParallelizm>1. > --- > > Key: IGNITE-10759 > URL: https://issues.apache.org/jira/browse/IGNITE-10759 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 1.9 >Reporter: Andrew Mashenkov >Priority: Major > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > For now, a local query with queryParallelizm>1 enables joins between partitions > on the same node even if the distributedJoins flag is false. > This behaviour is unexpected and can't be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-11311) MVCC: SQL full table scan query can return duplicates.
Andrew Mashenkov created IGNITE-11311: - Summary: MVCC: SQL full table scan query can return duplicates. Key: IGNITE-11311 URL: https://issues.apache.org/jira/browse/IGNITE-11311 Project: Ignite Issue Type: Bug Components: mvcc, sql Reporter: Andrew Mashenkov An SQL query like "select * from table" can return duplicate rows. Possible reasons: * the SQL query iterates over data pages directly and can see an inconsistent state (IGNITE-10561); * same as IGNITE-10767, the query sees stale pages; * something is wrong with MVCC version visibility. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11304) SQL: Common caching of both local and distributed query metadata
[ https://issues.apache.org/jira/browse/IGNITE-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767127#comment-16767127 ] Ignite TC Bot commented on IGNITE-11304: {panel:title=-- Run :: All: Possible Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1} {color:#d04437}Cache 7{color} [[tests 1|https://ci.ignite.apache.org/viewLog.html?buildId=3076311]] * IgniteCacheTestSuite7: TxRollbackAsyncWithPersistenceTest.testSynchronousRollback - 0,0% fails in last 416 master runs. {color:#d04437}MVCC Cache{color} [[tests 1|https://ci.ignite.apache.org/viewLog.html?buildId=3076288]] * IgniteCacheMvccTestSuite: CacheMvccVacuumTest.testVacuumNotStartedOnNonAffinityNode - 0,0% fails in last 182 master runs. {panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=3076357buildTypeId=IgniteTests24Java8_RunAll] > SQL: Common caching of both local and distributed query metadata > > > Key: IGNITE-11304 > URL: https://issues.apache.org/jira/browse/IGNITE-11304 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Major > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > Currently query metadata is only cached for distributed queries. For local > queries it is calculated on every request over and over again. Need to cache > it always in {{QueryParserResultSelect}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-11258) JDBC Thin: update connection setup logic.
[ https://issues.apache.org/jira/browse/IGNITE-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov reassigned IGNITE-11258: Assignee: Alexander Lapin > JDBC Thin: update connection setup logic. > - > > Key: IGNITE-11258 > URL: https://issues.apache.org/jira/browse/IGNITE-11258 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Alexander Lapin >Assignee: Alexander Lapin >Priority: Major > Labels: iep-23 > Time Spent: 10m > Remaining Estimate: 0h > > # On thin client startup it connects to *all* *nodes* provided by the user in the > client configuration. > # Upon handshake the server returns its UUID to the client. > # By the end of the startup procedure, the client has open connections to all > available server nodes and the following mapping (*nodeMap*): [UUID => > Connection]. > Connecting to all nodes helps identify available nodes, but can lead to > a significant delay when the thin client is used on a large cluster with a long IP > list provided by the user. To lower this delay, asynchronous establishment of > connections can be used. > For more information see [IEP-23: Best Effort > Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11257) JDBC Thin: update handshake protocol so that the node returns its UUID.
[ https://issues.apache.org/jira/browse/IGNITE-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767185#comment-16767185 ] Vladimir Ozerov commented on IGNITE-11257: -- [~alapin], looks good, but please note that we can re-use 2_8_0 protocol version as it is not released yet. > JDBC Thin: update handshake protocol so that the node returns its UUID. > --- > > Key: IGNITE-11257 > URL: https://issues.apache.org/jira/browse/IGNITE-11257 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Alexander Lapin >Assignee: Alexander Lapin >Priority: Major > Labels: iep-23 > Fix For: 2.8 > > Time Spent: 20m > Remaining Estimate: 0h > > Add node UUID to successful handshake response. > For more information see [IEP-23: Best Effort > Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11300) MVCC: forbid using DataStreamer with allowOverwrite=true
[ https://issues.apache.org/jira/browse/IGNITE-11300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767187#comment-16767187 ] Andrew Mashenkov commented on IGNITE-11300: --- [~Pavlukhin], with allowOverwrite=true each entry will be updated within a separate implicit transaction, and the data streamer provides snapshot isolation guarantees only on a per-entry basis. So both policies look safe: either skip the operation on conflict or retry it. The first policy looks simplest and most performant, while the second one may require resolving issues on unstable topologies (such as retrying on a recently updated topology version) to prevent deadlocks. With allowOverwrite=false we just put the INITIAL entry version into the entry version list, and it may be impossible to overwrite an already invisible entry. Assume you have put an entry into the cache, then removed it, and it wasn't cleaned by Vacuum for some reason (some active Tx is in flight), and now you start a DataStreamer with allowOverwrite=false that will try to insert the INITIAL version. Will this change be visible? Do we support this case correctly? > MVCC: forbid using DataStreamer with allowOverwrite=true > > > Key: IGNITE-11300 > URL: https://issues.apache.org/jira/browse/IGNITE-11300 > Project: Ignite > Issue Type: Task > Components: mvcc >Affects Versions: 2.7 >Reporter: Ivan Pavlukhin >Priority: Major > Fix For: 2.8 > > > Calling {{IgniteDataStreamer.allowOverwrite(true)}} configures a streamer to > use single-key cache put/remove operations for data modification. But > put/remove operations on MVCC caches can be aborted due to write conflicts. > So, some development effort is needed to support that mode properly. Let's > throw an exception in such a case for MVCC caches. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11258) JDBC Thin: update connection setup logic.
[ https://issues.apache.org/jira/browse/IGNITE-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767209#comment-16767209 ] Vladimir Ozerov commented on IGNITE-11258: -- [~alapin], my recommendations: # Make {{JdbcThinTcpIo}} so that once initialized it never changes (no reconnect). Once disconnected, it is disconnected forever. This way we will avoid a lot of bugs. # We need to properly handle the "sticky" case, when a group of requests must be routed through the same IO. Examples: "next page" requests, transactions, streaming, copy. One way to achieve this is to pass the desired IO to the "send" method, and return the IO on which the request was executed back together with {{JdbcResult}}. # I would suggest keeping the {{connected}} logic as simple as possible for now - {{true}} if and only if we successfully established connections to all nodes. # The old logic should work fine with multiple established connections, including proper "stickiness". > JDBC Thin: update connection setup logic. > - > > Key: IGNITE-11258 > URL: https://issues.apache.org/jira/browse/IGNITE-11258 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Alexander Lapin >Assignee: Alexander Lapin >Priority: Major > Labels: iep-23 > Time Spent: 10m > Remaining Estimate: 0h > > # On thin client startup it connects to *all* *nodes* provided by the user in the > client configuration. > # Upon handshake the server returns its UUID to the client. > # By the end of the startup procedure, the client has open connections to all > available server nodes and the following mapping (*nodeMap*): [UUID => > Connection]. > Connecting to all nodes helps identify available nodes, but can lead to > a significant delay when the thin client is used on a large cluster with a long IP > list provided by the user. To lower this delay, asynchronous establishment of > connections can be used. 
> For more information see [IEP-23: Best Effort > Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
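The asynchronous connection establishment suggested in the IGNITE-11258 description can be sketched as follows; the `NodeConnection` type and `connect` stub are hypothetical stand-ins for the driver's real IO layer, with only the nodeMap shape ([UUID => Connection]) taken from the ticket:

```java
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the startup procedure: connect to every configured address in
// parallel instead of sequentially, so the delay is bounded by the slowest
// handshake rather than the sum of all of them.
public class AsyncConnectSketch {
    // Hypothetical stand-in for the driver's per-node IO object.
    static final class NodeConnection {
        final UUID nodeId;
        final String addr;
        NodeConnection(UUID nodeId, String addr) { this.nodeId = nodeId; this.addr = addr; }
    }

    // Stand-in for the real handshake, in which the server returns its UUID.
    static NodeConnection connect(String addr) {
        return new NodeConnection(UUID.randomUUID(), addr);
    }

    // Builds nodeMap: [UUID => Connection] from parallel handshakes.
    static Map<UUID, NodeConnection> buildNodeMap(List<String> addrs) {
        Map<UUID, NodeConnection> nodeMap = new ConcurrentHashMap<>();
        CompletableFuture<?>[] futs = addrs.stream()
            .map(addr -> CompletableFuture.runAsync(() -> {
                NodeConnection conn = connect(addr);
                nodeMap.put(conn.nodeId, conn);
            }))
            .toArray(CompletableFuture[]::new);
        CompletableFuture.allOf(futs).join(); // wait for all handshakes
        return nodeMap;
    }

    public static void main(String[] args) {
        System.out.println(buildNodeMap(List.of("10.0.0.1:10800", "10.0.0.2:10800")).size());
    }
}
```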
[jira] [Assigned] (IGNITE-11226) SQL: Remove GridQueryIndexing.prepareNativeStatement
[ https://issues.apache.org/jira/browse/IGNITE-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov reassigned IGNITE-11226: Assignee: Pavel Kuznetsov > SQL: Remove GridQueryIndexing.prepareNativeStatement > > > Key: IGNITE-11226 > URL: https://issues.apache.org/jira/browse/IGNITE-11226 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Vladimir Ozerov >Assignee: Pavel Kuznetsov >Priority: Major > > This method is the only leak of H2 internals to the outer code. Close > analysis of code reveals that the only reason we have it is *JDBC metadata*. > Need to create a method which will prepare metadata for a statement and > return it as a detached object. Most probably we already have all necessary > mechanics. This is more about refactoring. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11299) During SSL Handshake GridNioServer.processWrite is invoked constantly
[ https://issues.apache.org/jira/browse/IGNITE-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767225#comment-16767225 ] Ilya Kasnacheev commented on IGNITE-11299: -- [~yzhdanov] [~sboikov] [~dpavlov] please review the NIO SSL patch. I have run some suites on Java 11 - SSL works, ditto SPI on Windows: https://ci.ignite.apache.org/viewLog.html?buildId=3077134=IgniteTests24Java8_SpiWindows https://ci.ignite.apache.org/viewLog.html?buildId=3077015=queuedBuildOverviewTab https://ci.ignite.apache.org/viewLog.html?buildId=3076776=IgniteTests24Java8_CacheFailoverSsl=testsInfo https://ci.ignite.apache.org/viewLog.html?buildId=3076778=queuedBuildOverviewTab https://ci.ignite.apache.org/viewLog.html?buildId=3076518; This is in addition to the regular Visa shown above. > During SSL Handshake GridNioServer.processWrite is invoked constantly > - > > Key: IGNITE-11299 > URL: https://issues.apache.org/jira/browse/IGNITE-11299 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 2.7 >Reporter: Ilya Kasnacheev >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-11299) During SSL Handshake GridNioServer.processWrite is invoked constantly
[ https://issues.apache.org/jira/browse/IGNITE-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Kasnacheev reassigned IGNITE-11299: Assignee: Ilya Kasnacheev > During SSL Handshake GridNioServer.processWrite is invoked constantly > - > > Key: IGNITE-11299 > URL: https://issues.apache.org/jira/browse/IGNITE-11299 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 2.7 >Reporter: Ilya Kasnacheev >Assignee: Ilya Kasnacheev >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11299) During SSL Handshake GridNioServer.processWrite is invoked constantly
[ https://issues.apache.org/jira/browse/IGNITE-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Kasnacheev updated IGNITE-11299: - Description: Causes busy looping in processSelectionKeyOptimized() This also causes problems on Windows/Java 11 since if key is always ready for writing it will never be shown as ready for reading. > During SSL Handshake GridNioServer.processWrite is invoked constantly > - > > Key: IGNITE-11299 > URL: https://issues.apache.org/jira/browse/IGNITE-11299 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 2.7 >Reporter: Ilya Kasnacheev >Assignee: Ilya Kasnacheev >Priority: Major > Labels: ssl > Time Spent: 10m > Remaining Estimate: 0h > > Causes busy looping in processSelectionKeyOptimized() > This also causes problems on Windows/Java 11 since if key is always ready for > writing it will never be shown as ready for reading. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
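The busy loop described in IGNITE-11299 is a classic NIO pattern failure: a channel registered for OP_WRITE is almost always writable, so the selector fires on every pass. The generic remedy is to clear OP_WRITE interest when there is nothing left to write. The sketch below illustrates that pattern with a `Pipe`; it is not Ignite's actual `GridNioServer` change:

```java
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Generic NIO sketch: an OP_WRITE key on an idle channel keeps the selector
// spinning; dropping the interest bit when the write queue is empty stops it.
public class OpWriteSketch {
    // Returns selectNow() results before and after clearing OP_WRITE interest.
    static int[] demo() throws Exception {
        try (Selector sel = Selector.open()) {
            Pipe pipe = Pipe.open();
            pipe.sink().configureBlocking(false);
            SelectionKey key = pipe.sink().register(sel, SelectionKey.OP_WRITE);

            // An idle sink is always writable, so the key is selected on
            // every pass -- this is the busy loop.
            int before = sel.selectNow();

            // Nothing left to write: clear OP_WRITE interest and the
            // selector stops reporting the key (and can notice OP_READ).
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
            sel.selectedKeys().clear();
            int after = sel.selectNow();

            return new int[] { before, after };
        }
    }

    public static void main(String[] args) throws Exception {
        int[] r = demo();
        System.out.println(r[0] + " " + r[1]); // 1 0
    }
}
```

This also matches the Windows/Java 11 symptom in the description: a key that is permanently write-ready never surfaces as read-ready.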
[jira] [Closed] (IGNITE-11257) JDBC Thin: update handshake protocol so that the node returns its UUID.
[ https://issues.apache.org/jira/browse/IGNITE-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin closed IGNITE-11257. > JDBC Thin: update handshake protocol so that the node returns its UUID. > --- > > Key: IGNITE-11257 > URL: https://issues.apache.org/jira/browse/IGNITE-11257 > Project: Ignite > Issue Type: Task > Components: jdbc >Reporter: Alexander Lapin >Assignee: Alexander Lapin >Priority: Major > Labels: iep-23 > Fix For: 2.8 > > Time Spent: 20m > Remaining Estimate: 0h > > Add node UUID to successful handshake response. > For more information see [IEP-23: Best Effort > Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11258) JDBC Thin: update connection setup logic.
[ https://issues.apache.org/jira/browse/IGNITE-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-11258: - Component/s: (was: sql) jdbc > JDBC Thin: update connection setup logic. > - > > Key: IGNITE-11258 > URL: https://issues.apache.org/jira/browse/IGNITE-11258 > Project: Ignite > Issue Type: Task > Components: jdbc >Reporter: Alexander Lapin >Assignee: Alexander Lapin >Priority: Major > Labels: iep-23 > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > # On thin client startup it connects to *all* *nodes* provided by the user in the > client configuration. > # Upon handshake the server returns its UUID to the client. > # By the end of the startup procedure, the client has open connections to all > available server nodes and the following mapping (*nodeMap*): [UUID => > Connection]. > Connecting to all nodes helps identify available nodes, but can lead to > a significant delay when the thin client is used on a large cluster with a long IP > list provided by the user. To lower this delay, asynchronous establishment of > connections can be used. > For more information see [IEP-23: Best Effort > Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-11300) MVCC: forbid using DataStreamer with allowOverwrite=true
[ https://issues.apache.org/jira/browse/IGNITE-11300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767211#comment-16767211 ] Ivan Pavlukhin commented on IGNITE-11300: - [~amashenkov], {{allowOverwrite=true}} has not been explicitly tested. Moreover, a streamer implementation using single {{cache.put}} operations will most likely have very poor performance. {{allowOverwrite=false}} should be addressed in the scope of IGNITE-9314. Currently, the streamer in such mode will not insert a tuple if there is anything (e.g. aborted versions) for a given key in the BPlusTree. > MVCC: forbid using DataStreamer with allowOverwrite=true > > > Key: IGNITE-11300 > URL: https://issues.apache.org/jira/browse/IGNITE-11300 > Project: Ignite > Issue Type: Task > Components: mvcc >Affects Versions: 2.7 >Reporter: Ivan Pavlukhin >Priority: Major > Fix For: 2.8 > > > Calling {{IgniteDataStreamer.allowOverwrite(true)}} configures a streamer to > use single-key cache put/remove operations for data modification. But > put/remove operations on MVCC caches can be aborted due to write conflicts. > So, some development effort is needed to support that mode properly. Let's > throw an exception in such a case for MVCC caches. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11258) JDBC Thin: update connection setup logic.
[ https://issues.apache.org/jira/browse/IGNITE-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-11258: - Fix Version/s: 2.8 > JDBC Thin: update connection setup logic. > - > > Key: IGNITE-11258 > URL: https://issues.apache.org/jira/browse/IGNITE-11258 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Alexander Lapin >Assignee: Alexander Lapin >Priority: Major > Labels: iep-23 > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > # On thin client startup, the client connects to *all* *nodes* provided by the user in the > client configuration. > # Upon handshake, the server returns its UUID to the client. > # By the end of the startup procedure, the client has open connections to all > available server nodes and the following mapping (*nodeMap*): [UUID => > Connection]. > Connecting to all nodes helps to identify available nodes, but can lead to a > significant delay when the thin client is used on a large cluster with a long IP > list provided by the user. To lower this delay, asynchronous establishment of > connections can be used. > For more information see [IEP-23: Best Effort > Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11258) JDBC Thin: update connection setup logic.
[ https://issues.apache.org/jira/browse/IGNITE-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-11258: - Ignite Flags: (was: Docs Required) > JDBC Thin: update connection setup logic. > - > > Key: IGNITE-11258 > URL: https://issues.apache.org/jira/browse/IGNITE-11258 > Project: Ignite > Issue Type: Task > Components: jdbc >Reporter: Alexander Lapin >Assignee: Alexander Lapin >Priority: Major > Labels: iep-23 > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > # On thin client startup, the client connects to *all* *nodes* provided by the user in the > client configuration. > # Upon handshake, the server returns its UUID to the client. > # By the end of the startup procedure, the client has open connections to all > available server nodes and the following mapping (*nodeMap*): [UUID => > Connection]. > Connecting to all nodes helps to identify available nodes, but can lead to a > significant delay when the thin client is used on a large cluster with a long IP > list provided by the user. To lower this delay, asynchronous establishment of > connections can be used. > For more information see [IEP-23: Best Effort > Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
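The asynchronous connection establishment described in the ticket can be sketched as follows. All names are illustrative, not the actual driver API, and the handshake is simulated: the point is that the per-address handshakes run concurrently and the resulting [UUID => Connection] map is assembled as they complete, instead of paying the connect latency once per configured address.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of the proposed startup change: submit all handshakes to a pool
// and collect the resulting nodeMap [UUID => Connection] as they complete.
public class AsyncConnectSketch {
    /** Stand-in for a live connection whose handshake returned the node UUID. */
    static final class Conn {
        final String addr;
        Conn(String addr) { this.addr = addr; }
    }

    /** Simulated handshake: the real driver opens a socket and reads the node UUID. */
    static UUID handshake(String addr) {
        return UUID.nameUUIDFromBytes(addr.getBytes());
    }

    static Map<UUID, Conn> connectAll(List<String> addrs) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            Map<UUID, Conn> nodeMap = new ConcurrentHashMap<>();
            List<Future<?>> futs = new ArrayList<>();

            // Handshakes proceed in parallel instead of one address at a time.
            for (String addr : addrs)
                futs.add(pool.submit(() -> nodeMap.put(handshake(addr), new Conn(addr))));

            for (Future<?> f : futs) {
                try {
                    f.get(); // the real driver would tolerate individual node failures
                } catch (InterruptedException | ExecutionException e) {
                    throw new RuntimeException(e);
                }
            }
            return nodeMap;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        Map<UUID, Conn> nodeMap =
            connectAll(Arrays.asList("host1:10800", "host2:10800", "host3:10800"));
        System.out.println("connected to " + nodeMap.size() + " nodes");
    }
}
```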
[jira] [Updated] (IGNITE-11299) During SSL Handshake GridNioServer.processWrite is invoked constantly
[ https://issues.apache.org/jira/browse/IGNITE-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Kasnacheev updated IGNITE-11299: - Ignite Flags: (was: Docs Required) > During SSL Handshake GridNioServer.processWrite is invoked constantly > - > > Key: IGNITE-11299 > URL: https://issues.apache.org/jira/browse/IGNITE-11299 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 2.7 >Reporter: Ilya Kasnacheev >Assignee: Ilya Kasnacheev >Priority: Major > Labels: ssl > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-6580) Cluster can fail during concurrent re-balancing and cache destruction
[ https://issues.apache.org/jira/browse/IGNITE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767237#comment-16767237 ] Ilya Kasnacheev commented on IGNITE-6580: - Possibly fixed by IGNITE-9303 > Cluster can fail during concurrent re-balancing and cache destruction > - > > Key: IGNITE-6580 > URL: https://issues.apache.org/jira/browse/IGNITE-6580 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 2.2 >Reporter: Mikhail Cherkasov >Assignee: Alexey Goncharuk >Priority: Major > Fix For: 2.8 > > > The following exceptions can be observed during concurrent re-balancing and > cache destruction: > 1. > {noformat} > [00:01:27,135][ERROR][sys-#4375%null%][GridDhtPreloader] Partition eviction > failed, this can cause grid hang. > org.apache.ignite.IgniteException: Runtime failure on search row: > Row@6be51c3d[ **REMOVED SENSITIVE INFORMATION** ] > at > org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1787) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1578) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.remove(H2TreeIndex.java:226) > ~[ignite-indexing-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:523) > ~[ignite-indexing-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:416) > ~[ignite-indexing-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.remove(IgniteH2Indexing.java:574) > ~[ignite-indexing-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.remove(GridQueryProcessor.java:2172) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.remove(GridCacheQueryManager.java:451) > 
~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishRemove(IgniteCacheOffheapManagerImpl.java:1462) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1425) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:383) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3224) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:951) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:809) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593) > [ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580) > [ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6629) > [ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967) > [ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > [ignite-core-2.1.4.jar:2.1.4] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [?:1.8.0_131] > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [?:1.8.0_131] > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131] > Caused by: java.lang.IllegalStateException: Item not found: 1 > at > org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.findIndirectItemIndex(DataPageIO.java:346) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.getDataOffset(DataPageIO.java:446) > ~[ignite-core-2.1.4.jar:2.1.4] > at >
[jira] [Commented] (IGNITE-6580) Cluster can fail during concurrent re-balancing and cache destruction
[ https://issues.apache.org/jira/browse/IGNITE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767244#comment-16767244 ] Ilya Kasnacheev commented on IGNITE-6580: - [~mcherkasov] please check if it can still be reproduced on master. > Cluster can fail during concurrent re-balancing and cache destruction > - > > Key: IGNITE-6580 > URL: https://issues.apache.org/jira/browse/IGNITE-6580 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 2.2 >Reporter: Mikhail Cherkasov >Assignee: Alexey Goncharuk >Priority: Major > Fix For: 2.8 > > > The following exceptions can be observed during concurrent re-balancing and > cache destruction: > 1. > {noformat} > [00:01:27,135][ERROR][sys-#4375%null%][GridDhtPreloader] Partition eviction > failed, this can cause grid hang. > org.apache.ignite.IgniteException: Runtime failure on search row: > Row@6be51c3d[ **REMOVED SENSITIVE INFORMATION** ] > at > org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1787) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1578) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.remove(H2TreeIndex.java:226) > ~[ignite-indexing-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:523) > ~[ignite-indexing-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:416) > ~[ignite-indexing-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.remove(IgniteH2Indexing.java:574) > ~[ignite-indexing-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.remove(GridQueryProcessor.java:2172) > ~[ignite-core-2.1.4.jar:2.1.4] > at > 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.remove(GridCacheQueryManager.java:451) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishRemove(IgniteCacheOffheapManagerImpl.java:1462) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1425) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:383) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3224) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:951) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:809) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593) > [ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580) > [ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6629) > [ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967) > [ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > 
[ignite-core-2.1.4.jar:2.1.4] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [?:1.8.0_131] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [?:1.8.0_131] > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131] > Caused by: java.lang.IllegalStateException: Item not found: 1 > at > org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.findIndirectItemIndex(DataPageIO.java:346) > ~[ignite-core-2.1.4.jar:2.1.4] > at > org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.getDataOffset(DataPageIO.java:446) > ~[ignite-core-2.1.4.jar:2.1.4] >
[jira] [Commented] (IGNITE-11257) JDBC Thin: update handshake protocol so that the node returns its UUID.
[ https://issues.apache.org/jira/browse/IGNITE-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767253#comment-16767253 ] Alexander Lapin commented on IGNITE-11257: -- Merged to IGNITE-11287. > JDBC Thin: update handshake protocol so that the node returns its UUID. > --- > > Key: IGNITE-11257 > URL: https://issues.apache.org/jira/browse/IGNITE-11257 > Project: Ignite > Issue Type: Task > Components: jdbc >Reporter: Alexander Lapin >Assignee: Alexander Lapin >Priority: Major > Labels: iep-23 > Fix For: 2.8 > > Time Spent: 20m > Remaining Estimate: 0h > > Add node UUID to successful handshake response. > For more information see [IEP-23: Best Effort > Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11287) JDBC Thin: best effort affinity
[ https://issues.apache.org/jira/browse/IGNITE-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-11287: - Fix Version/s: 2.8 > JDBC Thin: best effort affinity > --- > > Key: IGNITE-11287 > URL: https://issues.apache.org/jira/browse/IGNITE-11287 > Project: Ignite > Issue Type: Task >Reporter: Alexander Lapin >Priority: Major > Labels: iep-23, sql > Fix For: 2.8 > > > It's an umbrella ticket for implementing > [IEP-23|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] > within the scope of JDBC Thin driver. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11257) JDBC Thin: update handshake protocol so that the node returns its UUID.
[ https://issues.apache.org/jira/browse/IGNITE-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-11257: - Component/s: (was: sql) jdbc > JDBC Thin: update handshake protocol so that the node returns its UUID. > --- > > Key: IGNITE-11257 > URL: https://issues.apache.org/jira/browse/IGNITE-11257 > Project: Ignite > Issue Type: Task > Components: jdbc >Reporter: Alexander Lapin >Assignee: Alexander Lapin >Priority: Major > Labels: iep-23 > Fix For: 2.8 > > Time Spent: 20m > Remaining Estimate: 0h > > Add node UUID to successful handshake response. > For more information see [IEP-23: Best Effort > Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11287) JDBC Thin: best effort affinity
[ https://issues.apache.org/jira/browse/IGNITE-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-11287: - Labels: iep-23 (was: iep-23 sql) > JDBC Thin: best effort affinity > --- > > Key: IGNITE-11287 > URL: https://issues.apache.org/jira/browse/IGNITE-11287 > Project: Ignite > Issue Type: Task > Components: jdbc >Reporter: Alexander Lapin >Priority: Major > Labels: iep-23 > Fix For: 2.8 > > > It's an umbrella ticket for implementing > [IEP-23|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] > within the scope of JDBC Thin driver. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11257) JDBC Thin: update handshake protocol so that the node returns its UUID.
[ https://issues.apache.org/jira/browse/IGNITE-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-11257: - Ignite Flags: (was: Docs Required) > JDBC Thin: update handshake protocol so that the node returns its UUID. > --- > > Key: IGNITE-11257 > URL: https://issues.apache.org/jira/browse/IGNITE-11257 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Alexander Lapin >Assignee: Alexander Lapin >Priority: Major > Labels: iep-23 > Fix For: 2.8 > > Time Spent: 20m > Remaining Estimate: 0h > > Add node UUID to successful handshake response. > For more information see [IEP-23: Best Effort > Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11287) JDBC Thin: best effort affinity
[ https://issues.apache.org/jira/browse/IGNITE-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-11287: - Issue Type: New Feature (was: Task) > JDBC Thin: best effort affinity > --- > > Key: IGNITE-11287 > URL: https://issues.apache.org/jira/browse/IGNITE-11287 > Project: Ignite > Issue Type: New Feature > Components: jdbc >Reporter: Alexander Lapin >Priority: Major > Labels: iep-23 > Fix For: 2.8 > > > It's an umbrella ticket for implementing > [IEP-23|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] > within the scope of JDBC Thin driver. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11287) JDBC Thin: best effort affinity
[ https://issues.apache.org/jira/browse/IGNITE-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-11287: - Component/s: jdbc > JDBC Thin: best effort affinity > --- > > Key: IGNITE-11287 > URL: https://issues.apache.org/jira/browse/IGNITE-11287 > Project: Ignite > Issue Type: Task > Components: jdbc >Reporter: Alexander Lapin >Priority: Major > Labels: iep-23, sql > Fix For: 2.8 > > > It's an umbrella ticket for implementing > [IEP-23|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] > within the scope of JDBC Thin driver. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-11313) Cluster hangs on cache invoke with binary objects creation
Ivan Bessonov created IGNITE-11313: -- Summary: Cluster hangs on cache invoke with binary objects creation Key: IGNITE-11313 URL: https://issues.apache.org/jira/browse/IGNITE-11313 Project: Ignite Issue Type: Bug Reporter: Ivan Bessonov Assignee: Ivan Bessonov Creating binary objects in entry processors in parallel with continuous queries may lead to a deadlock: {code:java} [2019-02-11 18:52:50,129][WARN ][grid-timeout-worker-#39] >>> Possible starvation in striped pool. Thread name: sys-stripe-13-#14 Queue: [] Deadlock: false Completed: 1 Thread [name="sys-stripe-13-#14", id=33, state=WAITING, blockCnt=3, waitCnt=3] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304) at o.a.i.i.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178) at o.a.i.i.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141) at o.a.i.i.MarshallerContextImpl.registerClassName(MarshallerContextImpl.java:284) at o.a.i.i.binary.BinaryContext.registerUserClassName(BinaryContext.java:1202) at o.a.i.i.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:366) at o.a.i.i.binary.builder.BinaryObjectBuilderImpl.build(BinaryObjectBuilderImpl.java:189) at o.a.i.scenario.InvokeTask$MyEntryProcessor.process(InvokeTask.java:106) at o.a.i.i.processors.cache.EntryProcessorResourceInjectorProxy.process(EntryProcessorResourceInjectorProxy.java:68) at o.a.i.i.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onEntriesLocked(GridDhtTxPrepareFuture.java:446) at o.a.i.i.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1302) at o.a.i.i.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:713) at o.a.i.i.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1103) at o.a.i.i.processors.cache.distributed.dht.GridDhtTxLocal.prepareAsync(GridDhtTxLocal.java:405) at 
o.a.i.i.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:569) at o.a.i.i.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:367) at o.a.i.i.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest0(IgniteTxHandler.java:171) at o.a.i.i.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest(IgniteTxHandler.java:156) at o.a.i.i.processors.cache.transactions.IgniteTxHandler.access$000(IgniteTxHandler.java:118) at o.a.i.i.processors.cache.transactions.IgniteTxHandler$1.apply(IgniteTxHandler.java:198) at o.a.i.i.processors.cache.transactions.IgniteTxHandler$1.apply(IgniteTxHandler.java:196) at o.a.i.i.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1129) at o.a.i.i.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:594) at o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:393) at o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:319) at o.a.i.i.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:109) at o.a.i.i.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:308) at o.a.i.i.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569) at o.a.i.i.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197) at o.a.i.i.managers.communication.GridIoManager.access$4200(GridIoManager.java:127) at o.a.i.i.managers.communication.GridIoManager$9.run(GridIoManager.java:1093) at o.a.i.i.util.StripedExecutor$Stripe.body(StripedExecutor.java:505) at o.a.i.i.util.worker.GridWorker.run(GridWorker.java:120) at java.lang.Thread.run(Thread.java:748) {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
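The starvation pattern in the trace above can be modeled with a plain single-threaded executor standing in for a stripe. This is a simplified illustration, not Ignite code: a task running on the stripe (the entry processor inside {{registerClassName}}) blocks on a result that, in this model, can only be produced by a later task queued on the same stripe, so the stripe stalls.

```java
import java.util.concurrent.*;

// Simplified model of striped-pool starvation: a "stripe" is a
// single-threaded executor. A task on the stripe blocks on a future
// whose completion requires the same stripe, which is busy running it.
public class StripeStarvationDemo {
    public static boolean starves() {
        ExecutorService stripe = Executors.newSingleThreadExecutor();
        try {
            Future<Boolean> outer = stripe.submit(() -> {
                // Models the blocking registerClassName() call: waits for a
                // result that only a later task on this stripe would produce.
                Future<String> inner = stripe.submit(() -> "registered");
                try {
                    inner.get(200, TimeUnit.MILLISECONDS); // cannot complete in time
                    return false;
                } catch (TimeoutException e) {
                    return true; // starvation observed
                }
            });
            return outer.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            stripe.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println(starves() ? "stripe starved" : "completed");
    }
}
```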
[jira] [Comment Edited] (IGNITE-10693) MVCC TX: Random server restart tests became failed
[ https://issues.apache.org/jira/browse/IGNITE-10693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767218#comment-16767218 ] Igor Seliverstov edited comment on IGNITE-10693 at 2/13/19 2:07 PM: It seems we have incorrect mapping logic in the case of unstable topology. The patch implements reworked mapping. The main flow is: 1) find all nodes that have all partitions for the involved replicated caches (optional step) 2) find nodes that have each partition of each involved partitioned cache, filtering these nodes using the previously collected nodes for replicated caches if needed 3) reduce the node count for execution - use as few nodes as possible that together hold all needed partitions, without intersecting partitions between nodes. was (Author: gvvinblade): Seems we have wrong mapping logic in case of unstable topology. In the patch reworked mapping implemented. the main flow looks so: 1) find all nodes that have all partitions for involved replicated caches (optional step) 2) find nodes that have each partition of each involved partitioned cache. Filter these nodes using previously collected nodes for replicated caches if needed 3) reduce the nodes count for execution - use as little as possible nodes having all needed partitions but not intersecting partitions between nodes. 
> MVCC TX: Random server restart tests became failed > -- > > Key: IGNITE-10693 > URL: https://issues.apache.org/jira/browse/IGNITE-10693 > Project: Ignite > Issue Type: Bug > Components: mvcc, sql >Reporter: Igor Seliverstov >Assignee: Igor Seliverstov >Priority: Major > Labels: failover, mvcc_stabilization_stage_1 > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > [one|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=7945125576049372508=%3Cdefault%3E=testDetails], > > [two|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=8412476034648229784=%3Cdefault%3E=testDetails], > > [three|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=254244004184327163=%3Cdefault%3E=testDetails], > all these tests started failing after IGNITE-9630 was merged to master. > It seems there is an issue in the partition calculation mechanism on unstable > topology. I suppose the algorithm uses partitions on nodes that just became > primary while the partitions are in the moving state. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
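The node-count reduction described in step 3 of the comment above can be sketched as a greedy cover. This is illustrative, not the actual patch: given a partition-to-owners map, repeatedly pick the node covering the most still-unassigned partitions, so few nodes are used and no partition is assigned to two nodes.

```java
import java.util.*;

// Greedy sketch of "use as few nodes as possible, no overlapping partitions":
// pick the node covering the most unassigned partitions until all are covered.
public class MappingSketch {
    static Map<String, Set<Integer>> reduceNodes(Map<Integer, Set<String>> partToNodes) {
        Map<String, Set<Integer>> result = new LinkedHashMap<>();
        Set<Integer> unassigned = new TreeSet<>(partToNodes.keySet());

        while (!unassigned.isEmpty()) {
            // Count how many unassigned partitions each candidate node can serve.
            Map<String, Set<Integer>> coverage = new TreeMap<>();
            for (Integer p : unassigned)
                for (String node : partToNodes.get(p))
                    coverage.computeIfAbsent(node, k -> new TreeSet<>()).add(p);

            // Pick the node covering the most partitions; assign them to it.
            Map.Entry<String, Set<Integer>> best = Collections.max(coverage.entrySet(),
                Comparator.comparingInt((Map.Entry<String, Set<Integer>> e) -> e.getValue().size()));
            result.put(best.getKey(), best.getValue());
            unassigned.removeAll(best.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<Integer, Set<String>> owners = new HashMap<>();
        owners.put(0, new HashSet<>(Arrays.asList("A", "B")));
        owners.put(1, new HashSet<>(Arrays.asList("A", "C")));
        owners.put(2, new HashSet<>(Arrays.asList("C")));
        System.out.println(reduceNodes(owners));
    }
}
```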
[jira] [Commented] (IGNITE-11299) During SSL Handshake GridNioServer.processWrite is invoked constantly
[ https://issues.apache.org/jira/browse/IGNITE-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767220#comment-16767220 ] Ignite TC Bot commented on IGNITE-11299: {panel:title=-- Run :: All: Possible Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1} {color:#d04437}SPI{color} [[tests 4|https://ci.ignite.apache.org/viewLog.html?buildId=3077015]] * IgniteSpiTestSuite: TcpDiscoverySslParametersTest.testNonExistentCipherSuite - 0,0% fails in last 418 master runs. {color:#d04437}Basic 1{color} [[tests 0 TIMEOUT , Exit Code |https://ci.ignite.apache.org/viewLog.html?buildId=3071192]] * SetTxTimeoutOnPartitionMapExchangeTest.testSetTxTimeoutOnClientDuringPartitionMapExchange (last started) {panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=3071265buildTypeId=IgniteTests24Java8_RunAll] > During SSL Handshake GridNioServer.processWrite is invoked constantly > - > > Key: IGNITE-11299 > URL: https://issues.apache.org/jira/browse/IGNITE-11299 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 2.7 >Reporter: Ilya Kasnacheev >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11299) During SSL Handshake GridNioServer.processWrite is invoked constantly
[ https://issues.apache.org/jira/browse/IGNITE-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Kasnacheev updated IGNITE-11299: - Description: Causes busy looping in processSelectionKeyOptimized() This also causes problems on Windows/Java 11 since if the key is always ready for writing it will never be shown as ready for reading. The reason for this behavior is that during handshake we never un-listen OP_WRITE was: Causes busy looping in processSelectionKeyOptimized() This also causes problems on Windows/Java 11 since if key is always ready for writing it will never be shown as ready for reading. > During SSL Handshake GridNioServer.processWrite is invoked constantly > - > > Key: IGNITE-11299 > URL: https://issues.apache.org/jira/browse/IGNITE-11299 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 2.7 >Reporter: Ilya Kasnacheev >Assignee: Ilya Kasnacheev >Priority: Major > Labels: ssl > Time Spent: 10m > Remaining Estimate: 0h > > Causes busy looping in processSelectionKeyOptimized() > This also causes problems on Windows/Java 11 since if the key is always ready for > writing it will never be shown as ready for reading. > The reason for this behavior is that during handshake we never un-listen OP_WRITE -- This message was sent by Atlassian JIRA (v7.6.3#76005)
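The fix direction implied by the description can be shown with a small interest-set helper (an illustrative helper, not the actual GridNioServer code): once the outbound handshake data is drained, the selection key must stop listening for OP_WRITE, otherwise every select() reports the key writable and processWrite() spins in a busy loop.

```java
import java.nio.channels.SelectionKey;

// Illustrative helper: compute the interest set after a write attempt.
// A socket is almost always writable, so keeping OP_WRITE registered with
// nothing left to flush makes the selector fire on every loop iteration.
public class InterestOpsSketch {
    /** Returns the interest set that should be installed after a write attempt. */
    public static int interestAfterWrite(int interestOps, boolean dataPending) {
        return dataPending
            ? interestOps | SelectionKey.OP_WRITE   // more to flush: keep listening
            : interestOps & ~SelectionKey.OP_WRITE; // drained: un-listen OP_WRITE
    }

    public static void main(String[] args) {
        int ops = SelectionKey.OP_READ | SelectionKey.OP_WRITE;
        // Buffer drained: OP_WRITE is cleared while OP_READ is preserved.
        int drained = interestAfterWrite(ops, false);
        System.out.println("OP_WRITE cleared: " + ((drained & SelectionKey.OP_WRITE) == 0));
    }
}
```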
[jira] [Updated] (IGNITE-11299) During SSL Handshake GridNioServer.processWrite is invoked constantly
[ https://issues.apache.org/jira/browse/IGNITE-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Kasnacheev updated IGNITE-11299: - Labels: ssl (was: ) > During SSL Handshake GridNioServer.processWrite is invoked constantly > - > > Key: IGNITE-11299 > URL: https://issues.apache.org/jira/browse/IGNITE-11299 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 2.7 >Reporter: Ilya Kasnacheev >Assignee: Ilya Kasnacheev >Priority: Major > Labels: ssl > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10644) CorruptedTreeException might occur after force node kill during transaction
[ https://issues.apache.org/jira/browse/IGNITE-10644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767242#comment-16767242 ] Ilya Kasnacheev commented on IGNITE-10644: -- I cannot reproduce it after IGNITE-9303 any more! [~pvinokurov] please check if you can still observe it. > CorruptedTreeException might occur after force node kill during transaction > --- > > Key: IGNITE-10644 > URL: https://issues.apache.org/jira/browse/IGNITE-10644 > Project: Ignite > Issue Type: Bug >Reporter: Pavel Voronkin >Priority: Major > > Partition eviction process on the other hand: > > 2018-12-10 20:59:24.426 > [ERROR]sys-#204%_GRID%GridNodeName%[o.a.i.i.p.c.d.d.t.PartitionsEvictManager] > Partition eviction failed, this can cause grid hang. > org.h2.message.DbException: General error: "class > org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException: > Runtime failure on search row: Row@3580787f[ key: 4071535538120363041, val: > X.common.dpl.model.backstream.DBackStreamMessage_DPL_PROXY > [idHash=1961442513, hash=529139710, colocationKey=14465, entityType=I, > lastChangeDate=1544464745135, errorMessage=No api > [X.scripts.ucp.retail.propagate.publicapi.ClientPropagateService] services > available for route: [*][*][kbt] (zone-node-module).IP: [*]. > List of services violations: > NODE MODULE FILTER VIOLATIONS > No services or violations were found for routing, partition_X_id=5, > messageId=1211871172446406939, entityId=1211871174131851324, ownerId=ucp, > responseDate=null, entityVersion=1, isDeleted=false, requestDate=Mon Dec 10 > 20:59:05 MSK 2018, id=4071535538120363041], ver: GridCacheVersion > [topVer=155940834, order=1544596983071, nodeOrder=114] ][ I, null, > 1211871172446406939, 1211871174131851324, null, 1, 2018-12-10 20:59:05.115, > No api [X.scripts.ucp.retail.propagate.publicapi.ClientPropagateService] > services available for route: [*][*][kbt] (zone-node-module).IP: [*]. 
> List of services violations: > NODE MODULE FILTER VIOLATIONS > No services or violations were found for routing, 4071535538120363041, FALSE, > 5 ]" [5-195] > at org.h2.message.DbException.get(DbException.java:168) > at org.h2.message.DbException.convert(DbException.java:295) > at > org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.removex(H2TreeIndex.java:293) > at > org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.remove(GridH2Table.java:515) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.remove(IgniteH2Indexing.java:738) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.remove(GridQueryProcessor.java:2487) > at > org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.remove(GridCacheQueryManager.java:433) > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishRemove(IgniteCacheOffheapManagerImpl.java:1465) > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1435) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.remove(GridCacheOffheapManager.java:1633) > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:383) > at > org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3706) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:652) > at > org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:1079) > at > org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition.tryClear(GridDhtLocalPartition.java:915) > at > 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.PartitionsEvictManager$PartitionEvictionTask.run(PartitionsEvictManager.java:423) > at > org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6782) > at > org.apache.ignite.internal.processors.closure.GridClosureProcessor$1.body(GridClosureProcessor.java:827) > at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.h2.jdbc.JdbcSQLException: General error: "class > org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException: > Runtime failure on search row: Row@3580787f[ key: 4071535538120363041, val: >
[jira] [Created] (IGNITE-11314) JDBC Thin: add transaction-scoped flag to JdbcHandler's responses.
Alexander Lapin created IGNITE-11314: Summary: JDBC Thin: add transaction-scoped flag to JdbcHandler's responses. Key: IGNITE-11314 URL: https://issues.apache.org/jira/browse/IGNITE-11314 Project: Ignite Issue Type: Task Components: jdbc Reporter: Alexander Lapin Within the context of best effort affinity and, in particular, multiple connections, it's necessary to use "sticky" connections in case of "next page" requests, transactions, streaming and copy. In order to implement the transaction-based sticky use case we need to know whether we are in a transactional scope or not. So JdbcRequestHandler ought to retrieve the query execution plan, analyse whether a transaction exists and propagate the corresponding flag to the client. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
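The idea in this ticket can be sketched in a few lines of plain Java. This is a hypothetical illustration, not Ignite's actual internals: the names TxScopedResponse, QueryPlan and handle are invented for the sketch, and the real JdbcRequestHandler logic is not shown in the ticket.

```java
// Hypothetical sketch: the server-side handler inspects the parsed query and
// tags its response with a flag telling the thin client whether the statement
// runs in a transactional scope, so the client keeps the connection "sticky".
public class TxScopedResponseDemo {

    /** Stand-in for the parsed query execution plan the handler would consult. */
    static final class QueryPlan {
        final boolean mvccEnabled;
        QueryPlan(boolean mvccEnabled) { this.mvccEnabled = mvccEnabled; }
    }

    /** Response enriched with the transaction-scoped flag. */
    static final class TxScopedResponse {
        final String payload;
        final boolean inTxScope; // client must route follow-up requests to the same connection
        TxScopedResponse(String payload, boolean inTxScope) {
            this.payload = payload;
            this.inTxScope = inTxScope;
        }
    }

    /** The handler derives the flag from the plan and an active-transaction check. */
    static TxScopedResponse handle(String result, QueryPlan plan, boolean txActive) {
        boolean inTxScope = plan.mvccEnabled && txActive;
        return new TxScopedResponse(result, inTxScope);
    }

    public static void main(String[] args) {
        TxScopedResponse r = handle("rows", new QueryPlan(true), true);
        System.out.println(r.inTxScope); // true
    }
}
```

The point of the flag is purely client-side routing: once a response arrives with the flag set, the client pins subsequent requests of that logical operation to one physical connection.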
[jira] [Resolved] (IGNITE-11257) JDBC Thin: update handshake protocol so that the node returns its UUID.
[ https://issues.apache.org/jira/browse/IGNITE-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin resolved IGNITE-11257. -- > JDBC Thin: update handshake protocol so that the node returns its UUID. > --- > > Key: IGNITE-11257 > URL: https://issues.apache.org/jira/browse/IGNITE-11257 > Project: Ignite > Issue Type: Task > Components: jdbc >Reporter: Alexander Lapin >Assignee: Alexander Lapin >Priority: Major > Labels: iep-23 > Fix For: 2.8 > > Time Spent: 20m > Remaining Estimate: 0h > > Add node UUID to successful handshake response. > For more information see [IEP-23: Best Effort > Affinity|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
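The change resolved above, adding the node UUID to a successful handshake response, can be modeled with a small self-contained sketch. The HandshakeResponse type and handshake method below are illustrative assumptions, not Ignite's real wire format; the real protocol is described in the linked IEP-23 page.

```java
import java.util.UUID;

// Hypothetical model of the handshake change: on success the server appends
// its node UUID to the response, so the thin client can later associate
// connections with cluster nodes (best effort affinity, IEP-23).
public class HandshakeUuidDemo {

    static final class HandshakeResponse {
        final boolean accepted;
        final UUID nodeId; // new field: non-null only on a successful handshake

        HandshakeResponse(boolean accepted, UUID nodeId) {
            this.accepted = accepted;
            this.nodeId = nodeId;
        }
    }

    /** Server side: include the local node id only when the handshake succeeds. */
    static HandshakeResponse handshake(boolean versionSupported, UUID localNodeId) {
        return versionSupported
            ? new HandshakeResponse(true, localNodeId)
            : new HandshakeResponse(false, null);
    }

    public static void main(String[] args) {
        UUID nodeId = UUID.randomUUID();
        HandshakeResponse ok = handshake(true, nodeId);
        System.out.println(ok.accepted && nodeId.equals(ok.nodeId)); // true
    }
}
```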
[jira] [Updated] (IGNITE-6135) java.sql.Date is serialized using OptimizedMarshaller
[ https://issues.apache.org/jira/browse/IGNITE-6135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-6135: Priority: Blocker (was: Major) > java.sql.Date is serialized using OptimizedMarshaller > - > > Key: IGNITE-6135 > URL: https://issues.apache.org/jira/browse/IGNITE-6135 > Project: Ignite > Issue Type: Bug > Components: binary >Affects Versions: 2.1 >Reporter: Valentin Kulichenko >Assignee: Amelchev Nikita >Priority: Blocker > > For some reason, if an object has a field of {{java.sql.Date}}, it's > serialized with {{OptimizedMarshaller}}. It should be a first class citizen, > similar to {{java.util.Date}}. > In addition, it's possible to write a field using builder like this: > {code} > builder.setField(name, val, java.util.Date.class) > {code} > where {{val}} is instance of {{java.sql.Date}}. This leads to an exception > during deserialization, because {{java.util.Date}} would be expected. > More context and code reproducing the issue can be found here: > http://apache-ignite-users.70518.x6.nabble.com/JDBC-store-Date-deserialization-problem-td16276.html -- This message was sent by Atlassian JIRA (v7.6.3#76005)
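The type relationship behind the bug above can be demonstrated with plain JDK classes, no Ignite dependency needed: {{java.sql.Date}} is a subclass of {{java.util.Date}}, so it passes any {{java.util.Date}}-typed check at compile time, but a marshaller that dispatches on the exact runtime class sees a different class and can take a different (de)serialization path.

```java
import java.util.Date;

public class SqlDateClassDemo {
    public static void main(String[] args) {
        java.sql.Date sqlDate = new java.sql.Date(System.currentTimeMillis());

        // The declared-type check succeeds: java.sql.Date IS-A java.util.Date,
        // which is why builder.setField(name, val, java.util.Date.class) compiles
        // and runs even when val is a java.sql.Date...
        System.out.println(sqlDate instanceof Date);          // true

        // ...but dispatch keyed on the exact runtime class sees a mismatch,
        // so a java.util.Date-expecting deserialization path breaks.
        System.out.println(sqlDate.getClass() == Date.class); // false
    }
}
```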
[jira] [Created] (IGNITE-11315) [ML] Nonlinear SVM
Artem Malykh created IGNITE-11315: - Summary: [ML] Nonlinear SVM Key: IGNITE-11315 URL: https://issues.apache.org/jira/browse/IGNITE-11315 Project: Ignite Issue Type: Improvement Components: ml Reporter: Artem Malykh Assignee: Artem Malykh -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10693) MVCC TX: Random server restart tests became failed
[ https://issues.apache.org/jira/browse/IGNITE-10693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767356#comment-16767356 ] Ignite TC Bot commented on IGNITE-10693: {panel:title=-- Run :: All: Possible Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1} {color:#d04437}PDS 2{color} [[tests 1|https://ci.ignite.apache.org/viewLog.html?buildId=3077502]] * IgnitePdsTestSuite2: IgniteWalHistoryReservationsTest.testNodeLeftDuringExchange - 0,0% fails in last 415 master runs. {color:#d04437}Queries 2{color} [[tests 5|https://ci.ignite.apache.org/viewLog.html?buildId=3077445]] * IgniteBinaryCacheQueryTestSuite2: IgniteCacheQueryNodeRestartSelfTest.testRestarts - 0,0% fails in last 416 master runs. * IgniteBinaryCacheQueryTestSuite2: IgniteChangingBaselineCacheQueryNodeRestartSelfTest.testRestarts - 0,0% fails in last 416 master runs. * IgniteBinaryCacheQueryTestSuite2: DynamicIndexPartitionedTransactionalConcurrentSelfTest.testCoordinatorChange - 0,0% fails in last 416 master runs. * IgniteBinaryCacheQueryTestSuite2: DynamicIndexPartitionedAtomicConcurrentSelfTest.testCoordinatorChange - 0,0% fails in last 416 master runs. {color:#d04437}Cache 5{color} [[tests 4|https://ci.ignite.apache.org/viewLog.html?buildId=3077487]] * IgniteCacheWithIndexingTestSuite: CacheTtlAtomicPartitionedSelfTest.testDefaultTimeToLivePreload - 0,0% fails in last 417 master runs. * IgniteCacheWithIndexingTestSuite: IgniteClientReconnectQueriesTest.testReconnectQueryInProgress - 0,0% fails in last 417 master runs. * IgniteCacheWithIndexingTestSuite: IgniteClientReconnectQueriesTest.testQueryReconnect - 0,0% fails in last 417 master runs. {color:#d04437}JDBC Driver{color} [[tests 6|https://ci.ignite.apache.org/viewLog.html?buildId=3077442]] * IgniteJdbcDriverTestSuite: JdbcComplexQuerySelfTest.testBetween - 0,0% fails in last 418 master runs. * IgniteJdbcDriverTestSuite: JdbcComplexQuerySelfTest.testJoinWithoutAlias - 0,0% fails in last 418 master runs. 
* IgniteJdbcDriverTestSuite: JdbcComplexQuerySelfTest.testJoin - 0,0% fails in last 418 master runs. * IgniteJdbcDriverTestSuite: JdbcComplexQuerySelfTest.testIn - 0,0% fails in last 418 master runs. * IgniteJdbcDriverTestSuite: JdbcDistributedJoinsQueryTest.testJoin - 0,0% fails in last 418 master runs. * IgniteJdbcDriverTestSuite: JdbcComplexQuerySelfTest.testCalculatedValue - 0,0% fails in last 418 master runs. {color:#d04437}Spring{color} [[tests 0 Exit Code |https://ci.ignite.apache.org/viewLog.html?buildId=3077446]] {color:#d04437}MVCC PDS 4{color} [[tests 0 TIMEOUT , Exit Code , BUILD_RUNNER_ERROR |https://ci.ignite.apache.org/viewLog.html?buildId=3077533]] {panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=3077535buildTypeId=IgniteTests24Java8_RunAll] > MVCC TX: Random server restart tests became failed > -- > > Key: IGNITE-10693 > URL: https://issues.apache.org/jira/browse/IGNITE-10693 > Project: Ignite > Issue Type: Bug > Components: mvcc, sql >Reporter: Igor Seliverstov >Assignee: Igor Seliverstov >Priority: Major > Labels: failover, mvcc_stabilization_stage_1 > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > [one|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=7945125576049372508=%3Cdefault%3E=testDetails], > > [two|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=8412476034648229784=%3Cdefault%3E=testDetails], > > [three|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=254244004184327163=%3Cdefault%3E=testDetails], > all these tests started failing after IGNITE-9630 was merged to master. > It seems there is an issue in the partition calculation mechanism on unstable > topology. I suppose the algorithm uses partitions on nodes that have just become > primary while the partitions are still in the MOVING state. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11296) 3rd-party persistence: Backup and primary partitions data differ after a single IgniteCache.get that loaded data from the persistent store which breaks skipStore and Ig
[ https://issues.apache.org/jira/browse/IGNITE-11296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kamyshnikov updated IGNITE-11296: -- Attachment: PrimaryBackupTest.java > 3rd-party persistence: Backup and primary partitions data differ after a > single IgniteCache.get that loaded data from the persistent store which > breaks skipStore and Ignite JDBC behavior > -- > > Key: IGNITE-11296 > URL: https://issues.apache.org/jira/browse/IGNITE-11296 > Project: Ignite > Issue Type: Bug > Components: cache, cassandra >Affects Versions: 2.5, 2.7 >Reporter: Igor Kamyshnikov >Priority: Major > Attachments: PrimaryBackupTest.java > > > 1) run 2 ignite servers on different machines > (this is important because of > org.apache.ignite.internal.processors.cache.GridCacheContext#selectAffinityNodeBalanced > - it takes into account MACs) > the cache under test should be partitioned with backups = 1. > 2) run cassandra and insert some records into cassandra > 3) connect to the ignite cluster as Ignite client node and invoke > IgniteCache.get(pk); > for the existing pk. This will load data into caches. > 4) execute IgniteCache.withSkipStore().get(pk) several times > The values returned will be randomly NULLs or non-NULLs. > 5) depending on a chance, the data loaded in 3) can appear in primary > partition or backup partition. If they are in backup partition, then they are > not visible to Ignite JDBC. > Various techniques with ignite.affinity.mapKeyToPrimaryAndBackups and > {noformat}ignite.compute.call(() -> { cache.localPeek }){noformat} prove that > either backup partition or primary partition does not contain data after p. > 3). > However, cache.loadCache(null) loads all the data in both primary and backup > partitions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11296) 3rd-party persistence: Backup and primary partitions data differ after a single IgniteCache.get that loaded data from the persistent store which breaks skipStore and Ig
[ https://issues.apache.org/jira/browse/IGNITE-11296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kamyshnikov updated IGNITE-11296: -- Description: 1) run 2 ignite servers on different machines (this is important because of org.apache.ignite.internal.processors.cache.GridCacheContext#selectAffinityNodeBalanced - it takes into account MACs) the cache under test should be partitioned with backups = 1. 2) run cassandra and insert some records into cassandra 3) connect to the ignite cluster as Ignite client node and invoke IgniteCache.get(pk); for the existing pk. This will load data into caches. 4) execute IgniteCache.withSkipStore().get(pk) several times The values returned will be randomly NULLs or non-NULLs. 5) depending on chance, the data loaded in 3) can appear in primary partition or backup partition. If they are in backup partition, then they are not visible to Ignite JDBC. Various techniques with ignite.affinity.mapKeyToPrimaryAndBackups and {noformat}ignite.compute.call(() -> { cache.localPeek }){noformat} prove that either backup partition or primary partition does not contain data after p. 3). However, cache.loadCache(null) loads all the data in both primary and backup partitions. Self-describing demo code added: [^PrimaryBackupTest.java] : 1) create nodes with different MACs 2) create a test cache with a number of backup partitions 3) implicitly load several keys, making sure we found all the keys that have at least one empty partition after implicit load. was: 1) run 2 ignite servers on different machines (this is important because of org.apache.ignite.internal.processors.cache.GridCacheContext#selectAffinityNodeBalanced - it takes into account MACs) the cache under test should be partitioned with backups = 1. 2) run cassandra and insert some records into cassandra 3) connect to the ignite cluster as Ignite client node and invoke IgniteCache.get(pk); for the existing pk. This will load data into caches. 
4) execute IgniteCache.withSkipStore().get(pk) several times The values returned will be randomly NULLs or non-NULLs. 5) depending on a chance, the data loaded in 3) can appear in primary partition or backup partition. If they are in backup partition, then they are not visible to Ignite JDBC. Various techniques with ignite.affinity.mapKeyToPrimaryAndBackups and {noformat}ignite.compute.call(() -> { cache.localPeek }){noformat} prove that either backup partition or primary partition does not contain data after p. 3). However, cache.loadCache(null) loads all the data in both primary and backup partitions. > 3rd-party persistence: Backup and primary partitions data differ after a > single IgniteCache.get that loaded data from the persistent store which > breaks skipStore and Ignite JDBC behavior > -- > > Key: IGNITE-11296 > URL: https://issues.apache.org/jira/browse/IGNITE-11296 > Project: Ignite > Issue Type: Bug > Components: cache, cassandra >Affects Versions: 2.5, 2.7 >Reporter: Igor Kamyshnikov >Priority: Major > Attachments: PrimaryBackupTest.java > > > 1) run 2 ignite servers on different machines > (this is important because of > org.apache.ignite.internal.processors.cache.GridCacheContext#selectAffinityNodeBalanced > - it takes into account MACs) > the cache under test should be partitioned with backups = 1. > 2) run cassandra and insert some records into cassandra > 3) connect to the ignite cluster as Ignite client node and invoke > IgniteCache.get(pk); > for the existing pk. This will load data into caches. > 4) execute IgniteCache.withSkipStore().get(pk) several times > The values returned will be randomly NULLs or non-NULLs. > 5) depending on a chance, the data loaded in 3) can appear in primary > partition or backup partition. If they are in backup partition, then they are > not visible to Ignite JDBC. 
> Various techniques with ignite.affinity.mapKeyToPrimaryAndBackups and > {noformat}ignite.compute.call(() -> { cache.localPeek }){noformat} prove that > either backup partition or primary partition does not contain data after p. > 3). > However, cache.loadCache(null) loads all the data in both primary and backup > partitions. > Self-describing demo code added: [^PrimaryBackupTest.java] : > 1) create nodes with different MACs > 2) create a test cache with a number of backup partitions > 3) implicitly load several keys, making sure we found all the keys that have > at least one empty partition after implicit load. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
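The failure mode described in the repro steps above can be simulated without any Ignite dependency. The sketch below is a toy model, not Ignite code: two maps stand in for the primary and backup copies of a partition, a read-through get() populates only one copy (the reported bug), and subsequent skip-store reads alternate between copies, returning a value or null at random from the caller's point of view.

```java
import java.util.HashMap;
import java.util.Map;

// Dependency-free simulation of the divergence: after a single read-through
// get(), one of the two partition copies stays empty, so balanced reads that
// skip the store flip between a hit and a null.
public class SkipStoreDivergenceDemo {
    final Map<Integer, String> primary = new HashMap<>();
    final Map<Integer, String> backup = new HashMap<>();
    private boolean readFromBackup; // balanced reads alternate between copies

    /** Read-through get: the buggy load path writes the value into only one copy. */
    String getWithStore(int key) {
        String v = "value-" + key; // loaded from the external store
        backup.put(key, v);        // bug: the primary copy is never populated
        return v;
    }

    /** Skip-store get: returns whatever the chosen copy holds, null included. */
    String getSkipStore(int key) {
        readFromBackup = !readFromBackup;
        return readFromBackup ? backup.get(key) : primary.get(key);
    }

    public static void main(String[] args) {
        SkipStoreDivergenceDemo cache = new SkipStoreDivergenceDemo();
        cache.getWithStore(42);
        // Repeated skip-store reads of the same key flip between a value and null:
        System.out.println(cache.getSkipStore(42)); // value-42 (backup copy)
        System.out.println(cache.getSkipStore(42)); // null (primary copy is empty)
    }
}
```

This also explains the SQL symptom in the report: a query that scans only primary partitions never sees data that landed solely in a backup copy.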
[jira] [Updated] (IGNITE-11296) 3rd-party persistence: Backup and primary partitions data differ after a single IgniteCache.get that loaded data from the persistent store which breaks skipStore and Ig
[ https://issues.apache.org/jira/browse/IGNITE-11296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kamyshnikov updated IGNITE-11296: -- Attachment: (was: PrimaryBackupTest.java) > 3rd-party persistence: Backup and primary partitions data differ after a > single IgniteCache.get that loaded data from the persistent store which > breaks skipStore and Ignite JDBC behavior > -- > > Key: IGNITE-11296 > URL: https://issues.apache.org/jira/browse/IGNITE-11296 > Project: Ignite > Issue Type: Bug > Components: cache, cassandra >Affects Versions: 2.5, 2.7 >Reporter: Igor Kamyshnikov >Priority: Major > Attachments: PrimaryBackupTest.java > > > 1) run 2 ignite servers on different machines > (this is important because of > org.apache.ignite.internal.processors.cache.GridCacheContext#selectAffinityNodeBalanced > - it takes into account MACs) > the cache under test should be partitioned with backups = 1. > 2) run cassandra and insert some records into cassandra > 3) connect to the ignite cluster as Ignite client node and invoke > IgniteCache.get(pk); > for the existing pk. This will load data into caches. > 4) execute IgniteCache.withSkipStore().get(pk) several times > The values returned will be randomly NULLs or non-NULLs. > 5) depending on a chance, the data loaded in 3) can appear in primary > partition or backup partition. If they are in backup partition, then they are > not visible to Ignite JDBC. > Various techniques with ignite.affinity.mapKeyToPrimaryAndBackups and > {noformat}ignite.compute.call(() -> { cache.localPeek }){noformat} prove that > either backup partition or primary partition does not contain data after p. > 3). > However, cache.loadCache(null) loads all the data in both primary and backup > partitions. 
> Self-describing demo code added: [^PrimaryBackupTest.java] : > 1) create nodes with different MACs > 2) create a test cache with a number of backup partitions > 3) implicitly load several keys, making sure we found all the keys that have > at least one empty partition after implicit load. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11296) 3rd-party persistence: Backup and primary partitions data differ after a single IgniteCache.get that loaded data from the persistent store which breaks skipStore and Ig
[ https://issues.apache.org/jira/browse/IGNITE-11296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kamyshnikov updated IGNITE-11296: -- Description: 1) run 2 ignite servers on different machines (this is important because of org.apache.ignite.internal.processors.cache.GridCacheContext#selectAffinityNodeBalanced - it takes into account MACs) the cache under test should be partitioned with backups = 1. 2) run cassandra and insert some records into cassandra 3) connect to the ignite cluster as Ignite client node and invoke IgniteCache.get(pk); for the existing pk. This will load data into caches. 4) execute IgniteCache.withSkipStore().get(pk) several times The values returned will be randomly NULLs or non-NULLs. 5) depending on chance, the data loaded in 3) can appear in primary partition or backup partition. If they are in backup partition, then they are not visible to Ignite JDBC. Various techniques with ignite.affinity.mapKeyToPrimaryAndBackups and {noformat}ignite.compute.call(() -> { cache.localPeek }){noformat} prove that either backup partition or primary partition does not contain data after p. 3). However, cache.loadCache(null) loads all the data in both primary and backup partitions. Self-describing demo code added: [^PrimaryBackupTest.java] : 1) create nodes with different MACs 2) create a test cache with a number of backup partitions 3) implicitly load several keys, making sure we found all the keys that have at least one empty partition after implicit load. 4) test other patterns of putting keys 5) test SQL and gets with skip store for implicitly loaded keys, making sure that both nulls and non-nulls are returned. was: 1) run 2 ignite servers on different machines (this is important because of org.apache.ignite.internal.processors.cache.GridCacheContext#selectAffinityNodeBalanced - it takes into account MACs) the cache under test should be partitioned with backups = 1. 
2) run cassandra and insert some records into cassandra 3) connect to the ignite cluster as Ignite client node and invoke IgniteCache.get(pk); for the existing pk. This will load data into caches. 4) execute IgniteCache.withSkipStore().get(pk) several times The values returned will be randomly NULLs or non-NULLs. 5) depending on chance, the data loaded in 3) can appear in primary partition or backup partition. If they are in backup partition, then they are not visible to Ignite JDBC. Various techniques with ignite.affinity.mapKeyToPrimaryAndBackups and {noformat}ignite.compute.call(() -> { cache.localPeek }){noformat} prove that either backup partition or primary partition does not contain data after p. 3). However, cache.loadCache(null) loads all the data in both primary and backup partitions. Self-describing demo code added: [^PrimaryBackupTest.java] : 1) create nodes with different MACs 2) create a test cache with a number of backup partitions 3) implicitly load several keys, making sure we found all the keys that have at least one empty partition after implicit load. > 3rd-party persistence: Backup and primary partitions data differ after a > single IgniteCache.get that loaded data from the persistent store which > breaks skipStore and Ignite JDBC behavior > -- > > Key: IGNITE-11296 > URL: https://issues.apache.org/jira/browse/IGNITE-11296 > Project: Ignite > Issue Type: Bug > Components: cache, cassandra >Affects Versions: 2.5, 2.7 >Reporter: Igor Kamyshnikov >Priority: Major > Attachments: PrimaryBackupTest.java > > > 1) run 2 ignite servers on different machines > (this is important because of > org.apache.ignite.internal.processors.cache.GridCacheContext#selectAffinityNodeBalanced > - it takes into account MACs) > the cache under test should be partitioned with backups = 1. > 2) run cassandra and insert some records into cassandra > 3) connect to the ignite cluster as Ignite client node and invoke > IgniteCache.get(pk); > for the existing pk. 
This will load data into caches. > 4) execute IgniteCache.withSkipStore().get(pk) several times > The values returned will be randomly NULLs or non-NULLs. > 5) depending on a chance, the data loaded in 3) can appear in primary > partition or backup partition. If they are in backup partition, then they are > not visible to Ignite JDBC. > Various techniques with ignite.affinity.mapKeyToPrimaryAndBackups and > {noformat}ignite.compute.call(() -> { cache.localPeek }){noformat} prove that > either backup partition or primary partition does not contain data after p. > 3). > However, cache.loadCache(null) loads all the data in both primary and backup > partitions. > Self
[jira] [Updated] (IGNITE-11296) 3rd-party persistence: Backup and primary partitions data differ after a single IgniteCache.get that loaded data from the persistent store which breaks skipStore and Ig
[ https://issues.apache.org/jira/browse/IGNITE-11296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kamyshnikov updated IGNITE-11296: -- Attachment: PrimaryBackupTest.java > 3rd-party persistence: Backup and primary partitions data differ after a > single IgniteCache.get that loaded data from the persistent store which > breaks skipStore and Ignite JDBC behavior > -- > > Key: IGNITE-11296 > URL: https://issues.apache.org/jira/browse/IGNITE-11296 > Project: Ignite > Issue Type: Bug > Components: cache, cassandra >Affects Versions: 2.5, 2.7 >Reporter: Igor Kamyshnikov >Priority: Major > Attachments: PrimaryBackupTest.java > > > 1) run 2 ignite servers on different machines > (this is important because of > org.apache.ignite.internal.processors.cache.GridCacheContext#selectAffinityNodeBalanced > - it takes into account MACs) > the cache under test should be partitioned with backups = 1. > 2) run cassandra and insert some records into cassandra > 3) connect to the ignite cluster as Ignite client node and invoke > IgniteCache.get(pk); > for the existing pk. This will load data into caches. > 4) execute IgniteCache.withSkipStore().get(pk) several times > The values returned will be randomly NULLs or non-NULLs. > 5) depending on a chance, the data loaded in 3) can appear in primary > partition or backup partition. If they are in backup partition, then they are > not visible to Ignite JDBC. > Various techniques with ignite.affinity.mapKeyToPrimaryAndBackups and > {noformat}ignite.compute.call(() -> { cache.localPeek }){noformat} prove that > either backup partition or primary partition does not contain data after p. > 3). > However, cache.loadCache(null) loads all the data in both primary and backup > partitions. 
> Self-describing demo code added: [^PrimaryBackupTest.java] : > 1) create nodes with different MACs > 2) create a test cache with a number of backup partitions > 3) implicitly load several keys, making sure we found all the keys that have > at least one empty partition after implicit load. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10214) Web console: dependency to open source JDBC driver is not generated in the project's pom file
[ https://issues.apache.org/jira/browse/IGNITE-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767794#comment-16767794 ] Pavel Konstantinov commented on IGNITE-10214: - My test failed: after I imported some cache from the MySQL DB using mysql-connector-java-8.0.13.jar I got Generic JDBC dialect in the cache settings !screenshot-1.png! > Web console: dependency to open source JDBC driver is not generated in the > project's pom file > - > > Key: IGNITE-10214 > URL: https://issues.apache.org/jira/browse/IGNITE-10214 > Project: Ignite > Issue Type: Bug >Reporter: Pavel Konstantinov >Assignee: Pavel Konstantinov >Priority: Major > Attachments: screenshot-1.png > > > Steps to reproduce: > # import caches from for example MySql DB > # check generated pom file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (IGNITE-10214) Web console: dependency to open source JDBC driver is not generated in the project's pom file
[ https://issues.apache.org/jira/browse/IGNITE-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767794#comment-16767794 ] Pavel Konstantinov edited comment on IGNITE-10214 at 2/14/19 1:57 AM: -- My test failed: after I imported some cache from the MySQL DB using mysql-connector-java-8.0.13.jar (attached) I got Generic JDBC dialect in the cache settings !screenshot-1.png! was (Author: pkonstantinov): My test failed: after I imported some cache from the MySQL DB using mysql-connector-java-8.0.13.jar I got Generic JDBC dialect in the cache settings !screenshot-1.png! > Web console: dependency to open source JDBC driver is not generated in the > project's pom file > - > > Key: IGNITE-10214 > URL: https://issues.apache.org/jira/browse/IGNITE-10214 > Project: Ignite > Issue Type: Bug >Reporter: Pavel Konstantinov >Assignee: Pavel Konstantinov >Priority: Major > Attachments: mysql-connector-java-8.0.13.jar, screenshot-1.png > > > Steps to reproduce: > # import caches from for example MySql DB > # check generated pom file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10214) Web console: dependency to open source JDBC driver is not generated in the project's pom file
[ https://issues.apache.org/jira/browse/IGNITE-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Konstantinov updated IGNITE-10214: Attachment: screenshot-1.png > Web console: dependency to open source JDBC driver is not generated in the > project's pom file > - > > Key: IGNITE-10214 > URL: https://issues.apache.org/jira/browse/IGNITE-10214 > Project: Ignite > Issue Type: Bug >Reporter: Pavel Konstantinov >Assignee: Pavel Konstantinov >Priority: Major > Attachments: mysql-connector-java-8.0.13.jar, screenshot-1.png > > > Steps to reproduce: > # import caches from for example MySql DB > # check generated pom file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-10214) Web console: dependency to open source JDBC driver is not generated in the project's pom file
[ https://issues.apache.org/jira/browse/IGNITE-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vasiliy Sisko reassigned IGNITE-10214: -- Assignee: Pavel Konstantinov (was: Vasiliy Sisko) > Web console: dependency to open source JDBC driver is not generated in the > project's pom file > - > > Key: IGNITE-10214 > URL: https://issues.apache.org/jira/browse/IGNITE-10214 > Project: Ignite > Issue Type: Bug >Reporter: Pavel Konstantinov >Assignee: Pavel Konstantinov >Priority: Major > Attachments: mysql-connector-java-8.0.13.jar, screenshot-1.png > > > Steps to reproduce: > # import caches from for example MySql DB > # check generated pom file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-11165) Add note to the documentation that cache name will be used as folder name in case of using persistence
[ https://issues.apache.org/jira/browse/IGNITE-11165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgenii Zhuravlev updated IGNITE-11165: --- Fix Version/s: 2.7 > Add note to the documentation that cache name will be used as folder name in > case of using persistence > -- > > Key: IGNITE-11165 > URL: https://issues.apache.org/jira/browse/IGNITE-11165 > Project: Ignite > Issue Type: Improvement > Components: documentation >Reporter: Evgenii Zhuravlev >Assignee: Artem Budnikov >Priority: Major > Fix For: 2.7 > > > We should add a note that it's not recommended to use symbols that are not > allowed in file system names when persistence is used. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-10214) Web console: dependency to open source JDBC driver is not generated in the project's pom file
[ https://issues.apache.org/jira/browse/IGNITE-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Konstantinov reassigned IGNITE-10214: --- Assignee: Vasiliy Sisko (was: Pavel Konstantinov) > Web console: dependency to open source JDBC driver is not generated in the > project's pom file > - > > Key: IGNITE-10214 > URL: https://issues.apache.org/jira/browse/IGNITE-10214 > Project: Ignite > Issue Type: Bug >Reporter: Pavel Konstantinov >Assignee: Vasiliy Sisko >Priority: Major > Attachments: mysql-connector-java-8.0.13.jar, screenshot-1.png > > > Steps to reproduce: > # import caches from for example MySql DB > # check generated pom file -- This message was sent by Atlassian JIRA (v7.6.3#76005)