[jira] [Commented] (IGNITE-7578) Web console: Actualize configuration of ClientConnectorConfiguration
[ https://issues.apache.org/jira/browse/IGNITE-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356488#comment-16356488 ] Pavel Konstantinov commented on IGNITE-7578: Need to fix the following: 'SSL factory' must become mandatory if 'Use Ignite SSL' is OFF. > Web console: Actualize configuration of ClientConnectorConfiguration > > > Key: IGNITE-7578 > URL: https://issues.apache.org/jira/browse/IGNITE-7578 > Project: Ignite > Issue Type: Bug >Reporter: Vasiliy Sisko >Assignee: Pavel Konstantinov >Priority: Major > Fix For: 2.5 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
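For context, the dependency between the Web Console's 'Use Ignite SSL' toggle and the 'SSL factory' field maps onto Ignite's Java configuration roughly as below. This is a hedged sketch based on my reading of the 2.4-era `ClientConnectorConfiguration` API; the exact setter names may differ by version.

```java
import org.apache.ignite.configuration.ClientConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.ssl.SslContextFactory;

// Sketch: SSL on the client connector when the node-wide SSL context
// factory is NOT reused ('Use Ignite SSL' OFF in the Web Console).
public class ClientConnectorSslSketch {
    public static IgniteConfiguration configure() {
        ClientConnectorConfiguration clientCfg = new ClientConnectorConfiguration();

        clientCfg.setSslEnabled(true);
        // 'Use Ignite SSL' OFF: do not fall back to the node-wide factory...
        clientCfg.setUseIgniteSslContextFactory(false);
        // ...so a dedicated SSL factory becomes mandatory, which is the
        // validation rule requested in the comment above.
        clientCfg.setSslContextFactory(new SslContextFactory());

        return new IgniteConfiguration().setClientConnectorConfiguration(clientCfg);
    }
}
```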
[jira] [Assigned] (IGNITE-7578) Web console: Actualize configuration of ClientConnectorConfiguration
[ https://issues.apache.org/jira/browse/IGNITE-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Konstantinov reassigned IGNITE-7578: -- Assignee: Vasiliy Sisko (was: Pavel Konstantinov) > Web console: Actualize configuration of ClientConnectorConfiguration > > > Key: IGNITE-7578 > URL: https://issues.apache.org/jira/browse/IGNITE-7578 > Project: Ignite > Issue Type: Bug >Reporter: Vasiliy Sisko >Assignee: Vasiliy Sisko >Priority: Major > Fix For: 2.5 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-7059) Improve Collocated Processing page on the site
[ https://issues.apache.org/jira/browse/IGNITE-7059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denis Magda reassigned IGNITE-7059: --- Assignee: Denis Magda > Improve Collocated Processing page on the site > -- > > Key: IGNITE-7059 > URL: https://issues.apache.org/jira/browse/IGNITE-7059 > Project: Ignite > Issue Type: Sub-task > Components: site >Reporter: Denis Magda >Assignee: Denis Magda >Priority: Major > Fix For: 2.4 > > > Presently the collocated processing page [1] covers general aspects of this > paradigm. Elaborate more on the following: > * How it's related to SQL > * How it's related to compute grid and ML > * As for compute grid, mention that it also allows broadcasting computations > or running them on specific nodes. > [1] https://ignite.apache.org/collocatedprocessing.html -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Closed] (IGNITE-7058) Make out a site page for ACID Transactions
[ https://issues.apache.org/jira/browse/IGNITE-7058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denis Magda closed IGNITE-7058. --- > Make out a site page for ACID Transactions > -- > > Key: IGNITE-7058 > URL: https://issues.apache.org/jira/browse/IGNITE-7058 > Project: Ignite > Issue Type: Sub-task > Components: site >Reporter: Denis Magda >Assignee: Denis Magda >Priority: Major > Fix For: 2.4 > > > ACID transactions are a major feature of Ignite and have to be exposed under > the Features menu on the site. > Make out the page covering the following: > * 2Phase Commit Protocol > * Pessimistic and Optimistic Modes > * Deadlock detection -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7058) Make out a site page for ACID Transactions
[ https://issues.apache.org/jira/browse/IGNITE-7058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356363#comment-16356363 ] Denis Magda commented on IGNITE-7058: - The page is ready and the diagram is fine. > Make out a site page for ACID Transactions > -- > > Key: IGNITE-7058 > URL: https://issues.apache.org/jira/browse/IGNITE-7058 > Project: Ignite > Issue Type: Sub-task > Components: site >Reporter: Denis Magda >Assignee: Denis Magda >Priority: Major > Fix For: 2.4 > > > ACID transactions are a major feature of Ignite and have to be exposed under > the Features menu on the site. > Make out the page covering the following: > * 2Phase Commit Protocol > * Pessimistic and Optimistic Modes > * Deadlock detection -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-7503) MLP documentation
[ https://issues.apache.org/jira/browse/IGNITE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prachi Garg reassigned IGNITE-7503: --- Assignee: Denis Magda (was: Prachi Garg) > MLP documentation > - > > Key: IGNITE-7503 > URL: https://issues.apache.org/jira/browse/IGNITE-7503 > Project: Ignite > Issue Type: Sub-task > Components: documentation, ml >Reporter: Yury Babak >Assignee: Denis Magda >Priority: Major > Labels: documentation > Fix For: 2.4 > > > We need to add documentation about MLP. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7503) MLP documentation
[ https://issues.apache.org/jira/browse/IGNITE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356300#comment-16356300 ] Prachi Garg commented on IGNITE-7503: - Made a few corrections. [~dmagda], please review. > MLP documentation > - > > Key: IGNITE-7503 > URL: https://issues.apache.org/jira/browse/IGNITE-7503 > Project: Ignite > Issue Type: Sub-task > Components: documentation, ml >Reporter: Yury Babak >Assignee: Prachi Garg >Priority: Major > Labels: documentation > Fix For: 2.4 > > > We need to add documentation about MLP. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-7351) Create a page for continuous queries
[ https://issues.apache.org/jira/browse/IGNITE-7351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denis Magda updated IGNITE-7351: Fix Version/s: (was: 2.5) 2.4 > Create a page for continuous queries > > > Key: IGNITE-7351 > URL: https://issues.apache.org/jira/browse/IGNITE-7351 > Project: Ignite > Issue Type: Sub-task > Components: site >Reporter: Denis Magda >Priority: Major > Fix For: 2.4 > > > Create a page for continuous queries, putting it under the Features menu on the > site. This capability is a strong differentiator from relational databases, > which do not support continuous notifications or the processing built around them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-7062) Ignite page with video resources and recording
[ https://issues.apache.org/jira/browse/IGNITE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denis Magda updated IGNITE-7062: Fix Version/s: (was: 2.4) 2.5 > Ignite page with video resources and recordings > -- > > Key: IGNITE-7062 > URL: https://issues.apache.org/jira/browse/IGNITE-7062 > Project: Ignite > Issue Type: Task > Components: site >Reporter: Denis Magda >Assignee: Prachi Garg >Priority: Major > Fix For: 2.5 > > > There are plenty of recordings of Ignite meetups, webinars and conference > talks available on the Internet. Some of them introduce basic components and > capabilities, some share best practices and pitfalls, while others share > use cases. > Generally, it's beneficial for both the Ignite community and users to gather and > expose the most useful ones in a special video recordings section. For > instance, we might consider adding these talks right away: > * Ignite use case: https://youtu.be/1D8hyLWMtfM > * Ignite essentials: https://www.youtube.com/watch?v=G22L2KW9gEQ > * Kubernetes: https://www.youtube.com/watch?v=igDB0wyodr8 > Instead of creating a new page for this purpose, I would rework the > screencasts page, combining all the media content there: > https://ignite.apache.org/screencasts.html -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-7587) SQL COPY: document the command
[ https://issues.apache.org/jira/browse/IGNITE-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denis Magda updated IGNITE-7587: Fix Version/s: (was: 2.4) 2.5 > SQL COPY: document the command > -- > > Key: IGNITE-7587 > URL: https://issues.apache.org/jira/browse/IGNITE-7587 > Project: Ignite > Issue Type: Improvement > Components: documentation, sql >Affects Versions: 2.4 >Reporter: Kirill Shirokov >Priority: Major > Fix For: 2.5 > > > SQL COPY command needs to be documented at readme.io. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7645) Clarify eviction policy documentation
Denis Magda created IGNITE-7645: --- Summary: Clarify eviction policy documentation Key: IGNITE-7645 URL: https://issues.apache.org/jira/browse/IGNITE-7645 Project: Ignite Issue Type: Task Reporter: Denis Magda Assignee: Denis Magda Fix For: 2.4 Eviction policies work differently depending on the configuration, which might be one of the following: * Just off-heap memory w/o Ignite persistence * off-heap memory + on-heap cache * off-heap memory + Ignite persistence * off-heap memory + swap or cache store Cover all these scenarios on the main eviction doc page: https://apacheignite.readme.io/docs/evictions More details: http://apache-ignite-developers.2346864.n4.nabble.com/Eviction-policies-with-persistence-td26588.html -- This message was sent by Atlassian JIRA (v7.6.3#76005)
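Two of the scenarios listed in the ticket can be sketched in configuration code. This is a hedged sketch assuming the 2.4-era API (`DataRegionConfiguration` with page eviction for the pure off-heap case, and an on-heap cache with an eviction policy for the second case); region and cache names are illustrative.

```java
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch of two eviction scenarios from the ticket (illustrative names).
public class EvictionScenariosSketch {
    public static IgniteConfiguration configure() {
        // Scenario 1: off-heap memory only, no persistence ->
        // data-page eviction kicks in when the region fills up.
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("offheapRegion")
            .setMaxSize(100L * 1024 * 1024)
            .setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);

        // Scenario 2: off-heap memory + on-heap cache ->
        // an eviction policy governs the on-heap copies only.
        CacheConfiguration<Integer, String> cacheCfg =
            new CacheConfiguration<>("myCache");
        cacheCfg.setOnheapCacheEnabled(true);
        cacheCfg.setEvictionPolicy(new LruEvictionPolicy<>(100_000));

        return new IgniteConfiguration()
            .setDataStorageConfiguration(
                new DataStorageConfiguration().setDefaultDataRegionConfiguration(region))
            .setCacheConfiguration(cacheCfg);
    }
}
```

The point the documentation should make is that these two mechanisms are independent: the page eviction mode frees off-heap pages, while the cache-level policy only trims the on-heap layer.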
[jira] [Updated] (IGNITE-7337) Spark Data Frames: support saving a data frame in Ignite
[ https://issues.apache.org/jira/browse/IGNITE-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Valentin Kulichenko updated IGNITE-7337: Fix Version/s: (was: 2.5) 2.4 > Spark Data Frames: support saving a data frame in Ignite > > > Key: IGNITE-7337 > URL: https://issues.apache.org/jira/browse/IGNITE-7337 > Project: Ignite > Issue Type: New Feature > Components: spark >Affects Versions: 2.3 >Reporter: Valentin Kulichenko >Assignee: Nikolay Izhikov >Priority: Major > Fix For: 2.4 > > > Once the Ignite data source for Spark is implemented, we need to add the ability > to store a data frame in Ignite. Most likely it should be enough to provide > implementations for the following traits: > * {{InsertableRelation}} > * {{CreatableRelationProvider}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7337) Spark Data Frames: support saving a data frame in Ignite
[ https://issues.apache.org/jira/browse/IGNITE-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355930#comment-16355930 ] Valentin Kulichenko commented on IGNITE-7337: - [~NIzhikov], I think {{allowOverwrite}} should depend on {{SaveMode}}. I.e., if the mode is {{Overwrite}}, then {{allowOverwrite}} should be {{false}} (ironic :)). If {{Append}}, then {{true}}. Agree? Let's add the other parameters to {{IgniteDataFrameSettings}}. Once this is done, check the tests and merge to master if everything is green. > Spark Data Frames: support saving a data frame in Ignite > > > Key: IGNITE-7337 > URL: https://issues.apache.org/jira/browse/IGNITE-7337 > Project: Ignite > Issue Type: New Feature > Components: spark >Affects Versions: 2.3 >Reporter: Valentin Kulichenko >Assignee: Nikolay Izhikov >Priority: Major > Fix For: 2.5 > > > Once the Ignite data source for Spark is implemented, we need to add the ability > to store a data frame in Ignite. Most likely it should be enough to provide > implementations for the following traits: > * {{InsertableRelation}} > * {{CreatableRelationProvider}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
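The mode-to-flag mapping proposed in the comment above can be captured in a tiny helper. This is an illustrative sketch only: `allowOverwriteFor` is a hypothetical name, and Spark's `SaveMode` values are modeled as plain strings rather than the real enum.

```java
// Illustrative sketch of the proposed mapping between Spark's save mode
// and the IgniteDataStreamer allowOverwrite flag (hypothetical helper).
class SaveModeMapping {
    static boolean allowOverwriteFor(String saveMode) {
        switch (saveMode) {
            case "Overwrite":
                // Counter-intuitively, Overwrite maps to allowOverwrite = false,
                // per the comment above.
                return false;
            case "Append":
                return true;
            default:
                throw new IllegalArgumentException("Unsupported save mode: " + saveMode);
        }
    }

    public static void main(String[] args) {
        System.out.println(allowOverwriteFor("Append"));    // true
        System.out.println(allowOverwriteFor("Overwrite")); // false
    }
}
```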
[jira] [Commented] (IGNITE-7337) Spark Data Frames: support saving a data frame in Ignite
[ https://issues.apache.org/jira/browse/IGNITE-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355813#comment-16355813 ] Nikolay Izhikov commented on IGNITE-7337: - > BTW, as a general note, I see that there are still many methods throwing > UnsupportedOperationException, at least in the IgniteExternalCatalog > Do we know which features we still do not support? Is it possible > to create a list of them and then the tickets to address later? Yes, I will create tickets for the features to implement in {{IgniteExternalCatalog}} in a few days. > Spark Data Frames: support saving a data frame in Ignite > > > Key: IGNITE-7337 > URL: https://issues.apache.org/jira/browse/IGNITE-7337 > Project: Ignite > Issue Type: New Feature > Components: spark >Affects Versions: 2.3 >Reporter: Valentin Kulichenko >Assignee: Nikolay Izhikov >Priority: Major > Fix For: 2.5 > > > Once the Ignite data source for Spark is implemented, we need to add the ability > to store a data frame in Ignite. Most likely it should be enough to provide > implementations for the following traits: > * {{InsertableRelation}} > * {{CreatableRelationProvider}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-7048) Cache get fails on node not in BaselineTopology.
[ https://issues.apache.org/jira/browse/IGNITE-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk reassigned IGNITE-7048: Assignee: Alexey Goncharuk > Cache get fails on node not in BaselineTopology. > > > Key: IGNITE-7048 > URL: https://issues.apache.org/jira/browse/IGNITE-7048 > Project: Ignite > Issue Type: Bug > Components: persistence >Reporter: Sergey Chugunov >Assignee: Alexey Goncharuk >Priority: Major > Fix For: 2.4 > > > As an example take a look at > IgnitePdsBinaryMetadataOnClusterRestartTest::testMixedMetadataIsRestoredOnRestart. > When reading data for check from node not in BaselineTopology it fails with > the following assertion: > {noformat}java.lang.AssertionError: result = true, persistenceEnabled = true, > partitionState = EVICTED > at > org.apache.ignite.internal.processors.cache.GridCacheContext.allowFastLocalRead(GridCacheContext.java:2044) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.mapKeyToNode(GridPartitionedSingleGetFuture.java:321) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.map(GridPartitionedSingleGetFuture.java:211) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.init(GridPartitionedSingleGetFuture.java:203) > at > org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.getAsync0(GridDhtAtomicCache.java:1392) > at > org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$1600(GridDhtAtomicCache.java:131) > at > org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$16.apply(GridDhtAtomicCache.java:470) > at > org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$16.apply(GridDhtAtomicCache.java:468) > at > 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:757) > at > org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.getAsync(GridDhtAtomicCache.java:468) > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get0(GridCacheAdapter.java:4545) > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4526) > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1343) > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:828) > at > org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:662) > at > org.apache.ignite.internal.processors.cache.persistence.IgnitePdsBinaryMetadataOnClusterRestartTest.examineStaticMetadata(IgnitePdsBinaryMetadataOnClusterRestartTest.java:145) > at > org.apache.ignite.internal.processors.cache.persistence.IgnitePdsBinaryMetadataOnClusterRestartTest.testMixedMetadataIsRestoredOnRestart(IgnitePdsBinaryMetadataOnClusterRestartTest.java:334) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at junit.framework.TestCase.runTest(TestCase.java:176) > at > org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2000) > at > org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:132) > at > org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1915) > at java.lang.Thread.run(Thread.java:745) > {noformat} > The problem with the test is that in method > *GridCacheProcessor::prepareCacheStart* flag *affNode* is calculated 
ignoring > information about BaselineTopology distribution. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (IGNITE-7048) Cache get fails on node not in BaselineTopology.
[ https://issues.apache.org/jira/browse/IGNITE-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk resolved IGNITE-7048. -- Resolution: Fixed Fix Version/s: (was: 2.5) 2.4 Fixed in IGNITE-7505 > Cache get fails on node not in BaselineTopology. > > > Key: IGNITE-7048 > URL: https://issues.apache.org/jira/browse/IGNITE-7048 > Project: Ignite > Issue Type: Bug > Components: persistence >Reporter: Sergey Chugunov >Priority: Major > Fix For: 2.4 > > > As an example take a look at > IgnitePdsBinaryMetadataOnClusterRestartTest::testMixedMetadataIsRestoredOnRestart. > When reading data for check from node not in BaselineTopology it fails with > the following assertion: > {noformat}java.lang.AssertionError: result = true, persistenceEnabled = true, > partitionState = EVICTED > at > org.apache.ignite.internal.processors.cache.GridCacheContext.allowFastLocalRead(GridCacheContext.java:2044) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.mapKeyToNode(GridPartitionedSingleGetFuture.java:321) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.map(GridPartitionedSingleGetFuture.java:211) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.init(GridPartitionedSingleGetFuture.java:203) > at > org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.getAsync0(GridDhtAtomicCache.java:1392) > at > org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$1600(GridDhtAtomicCache.java:131) > at > org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$16.apply(GridDhtAtomicCache.java:470) > at > org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$16.apply(GridDhtAtomicCache.java:468) > at > 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:757) > at > org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.getAsync(GridDhtAtomicCache.java:468) > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get0(GridCacheAdapter.java:4545) > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4526) > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1343) > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:828) > at > org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:662) > at > org.apache.ignite.internal.processors.cache.persistence.IgnitePdsBinaryMetadataOnClusterRestartTest.examineStaticMetadata(IgnitePdsBinaryMetadataOnClusterRestartTest.java:145) > at > org.apache.ignite.internal.processors.cache.persistence.IgnitePdsBinaryMetadataOnClusterRestartTest.testMixedMetadataIsRestoredOnRestart(IgnitePdsBinaryMetadataOnClusterRestartTest.java:334) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at junit.framework.TestCase.runTest(TestCase.java:176) > at > org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2000) > at > org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:132) > at > org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1915) > at java.lang.Thread.run(Thread.java:745) > {noformat} > The problem with the test is that in method > *GridCacheProcessor::prepareCacheStart* flag *affNode* is calculated 
ignoring > information about BaselineTopology distribution. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-5759) IgniteCache5 suite timed out by GridCachePartitionEvictionDuringReadThroughSelfTest.testPartitionRent
[ https://issues.apache.org/jira/browse/IGNITE-5759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk updated IGNITE-5759: - Fix Version/s: 2.5 > IgniteCache5 suite timed out by > GridCachePartitionEvictionDuringReadThroughSelfTest.testPartitionRent > - > > Key: IGNITE-5759 > URL: https://issues.apache.org/jira/browse/IGNITE-5759 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Pavlov >Assignee: Dmitriy Pavlov >Priority: Critical > Labels: MakeTeamcityGreenAgain, test-fail > Fix For: 2.5 > > Attachments: threadDumpFromLogs.log > > > http://ci.ignite.apache.org/viewLog.html?buildId=727951=Ignite20Tests_IgniteCache5 > There is no 'Test has been timed out' message in logs. > Last 'Starting test:' message was > GridCachePartitionEvictionDuringReadThroughSelfTest#testPartitionRent > Latest exception from working test was as follows; > {noformat} > [23:19:11]W: [org.apache.ignite:ignite-core] [2017-07-14 > 20:19:11,392][ERROR][tcp-comm-worker-#8980%distributed.GridCachePartitionEvictionDuringReadThroughSelfTest4%][TcpCommunicationSpi] > TcpCommunicationSpi failed to establish connection to node, node will be > dropped from cluster [rmtNode=TcpDiscoveryNode > [id=a93fce57-6b2d-4947-8c23-8a677b93, addrs=[127.0.0.1], > sockAddrs=[/127.0.0.1:47503], discPort=47503, order=4, intOrder=4, > lastExchangeTime=1500063443391, loc=false, ver=2.1.0#19700101-sha1:, > isClient=false]] > [23:19:11]W: [org.apache.ignite:ignite-core] class > org.apache.ignite.IgniteCheckedException: Failed to connect to node (is node > still alive?). 
Make sure that each ComputeTask and cache Transaction has a > timeout set in order to prevent parties from waiting forever in case of > network issues [nodeId=a93fce57-6b2d-4947-8c23-8a677b93, > addrs=[/127.0.0.1:45273]] > [23:19:11]W: [org.apache.ignite:ignite-core]at > org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3173) > [23:19:11]W: [org.apache.ignite:ignite-core]at > org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2757) > [23:19:11]W: [org.apache.ignite:ignite-core]at > org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2649) > [23:19:11]W: [org.apache.ignite:ignite-core]at > org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.access$5900(TcpCommunicationSpi.java:245) > [23:19:11]W: [org.apache.ignite:ignite-core]at > org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$CommunicationWorker.processDisconnect(TcpCommunicationSpi.java:4065) > [23:19:11]W: [org.apache.ignite:ignite-core]at > org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$CommunicationWorker.body(TcpCommunicationSpi.java:3891) > [23:19:11]W: [org.apache.ignite:ignite-core]at > org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) > [23:19:11]W: [org.apache.ignite:ignite-core]Suppressed: > class org.apache.ignite.IgniteCheckedException: Failed to connect to address > [addr=/127.0.0.1:45273, err=Connection refused] > [23:19:11]W: [org.apache.ignite:ignite-core]at > org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3178) > [23:19:11]W: [org.apache.ignite:ignite-core]... 
6 > more > [23:19:11]W: [org.apache.ignite:ignite-core]Caused by: > java.net.ConnectException: Connection refused > [23:19:11]W: [org.apache.ignite:ignite-core]at > sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > [23:19:11]W: [org.apache.ignite:ignite-core]at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) > [23:19:11]W: [org.apache.ignite:ignite-core]at > sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:117) > [23:19:11]W: [org.apache.ignite:ignite-core]at > org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3024) > [23:19:11]W: [org.apache.ignite:ignite-core]... 6 > more > {noformat} > and then > {noformat} > [23:19:11]W: [org.apache.ignite:ignite-core] [2017-07-14 > 20:19:11,895][WARN ][main][root] Interrupting threads started so far: 5 > [23:19:11] : [Step 4/5] [2017-07-14 20:19:11,895][INFO ][main][root] >>> > Stopping test class: GridCachePartitionEvictionDuringReadThroughSelfTest <<< > [23:19:11]W:
[jira] [Commented] (IGNITE-7019) Cluster can not survive after IgniteOOM
[ https://issues.apache.org/jira/browse/IGNITE-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355742#comment-16355742 ] Igor Seliverstov commented on IGNITE-7019: -- [~cyberdemon], I've left a couple of comments on GitHub > Cluster can not survive after IgniteOOM > --- > > Key: IGNITE-7019 > URL: https://issues.apache.org/jira/browse/IGNITE-7019 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 2.3 >Reporter: Mikhail Cherkasov >Assignee: Dmitriy Sorokin >Priority: Critical > Labels: iep-7 > Fix For: 2.5 > > > Even with full sync mode and a transactional cache, we can't add new nodes > if there was an IgniteOOM; after adding new nodes and rebalancing, old nodes > can't evict partitions: > {code} > [2017-11-17 20:02:24,588][ERROR][sys-#65%DR1%][GridDhtPreloader] Partition > eviction failed, this can cause grid hang. > class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Not enough > memory allocated [policyName=100MB_Region_Eviction, size=104.9 MB] > Consider increasing memory policy size, enabling evictions, adding more nodes > to the cluster, reducing number of backups or reducing model size.
> at > org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.allocatePage(PageMemoryNoStoreImpl.java:294) > at > org.apache.ignite.internal.processors.cache.persistence.DataStructure.allocatePageNoReuse(DataStructure.java:117) > at > org.apache.ignite.internal.processors.cache.persistence.DataStructure.allocatePage(DataStructure.java:105) > at > org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.addStripe(PagesList.java:413) > at > org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.getPageForPut(PagesList.java:528) > at > org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.put(PagesList.java:617) > at > org.apache.ignite.internal.processors.cache.persistence.freelist.FreeListImpl.addForRecycle(FreeListImpl.java:582) > at > org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.reuseFreePages(BPlusTree.java:3847) > at > org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.releaseAll(BPlusTree.java:4106) > at > org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.access$6900(BPlusTree.java:3166) > at > org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1782) > at > org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1567) > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1387) > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:374) > at > org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3233) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588) > at > 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:892) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:750) > at > org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593) > at > org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580) > at > org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6639) > at > org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967) > at > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:748) > {code} > Discussion on the dev list: > http://apache-ignite-developers.2346864.n4.nabble.com/How-properly-handle-IgniteOOM-td25288.html -- This message was sent by Atlassian JIRA (v7.6.3#76005)
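The error text above suggests its own mitigations (grow the region, enable eviction). A sketch of what that looks like in configuration, assuming the 2.3-era memory-policy API that matches the affected version (2.4+ renames this to `DataRegionConfiguration`); the policy name is taken from the log, the sizes are illustrative:

```java
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.MemoryPolicyConfiguration;

// Sketch of the mitigations named in the IgniteOutOfMemoryException
// message: a larger region and page eviction (2.3-era API, assumed).
public class OomMitigationSketch {
    public static IgniteConfiguration configure() {
        MemoryPolicyConfiguration plc = new MemoryPolicyConfiguration()
            .setName("100MB_Region_Eviction")
            .setMaxSize(512L * 1024 * 1024)                        // grow the region
            .setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU); // enable eviction

        return new IgniteConfiguration()
            .setMemoryConfiguration(new MemoryConfiguration().setMemoryPolicies(plc));
    }
}
```

Note this only avoids hitting the limit; the bug itself is that partition eviction during rebalance allocates pages and therefore cannot proceed once the region is exhausted.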
[jira] [Resolved] (IGNITE-7177) Custom discovery messages from plugins are handled incorrectly
[ https://issues.apache.org/jira/browse/IGNITE-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk resolved IGNITE-7177. -- Resolution: Fixed Fix Version/s: (was: 2.5) 2.4 Fixed as a part of IEP-4 > Custom discovery messages from plugins are handled incorrectly > -- > > Key: IGNITE-7177 > URL: https://issues.apache.org/jira/browse/IGNITE-7177 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.1 >Reporter: Alexey Goncharuk >Assignee: Alexey Goncharuk >Priority: Major > Fix For: 2.4 > > > We call onServerNodeJoin for custom messages which do not change affinity -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7644) Add a utility to export all key-value data from a persisted partition
Alexey Goncharuk created IGNITE-7644: Summary: Add a utility to export all key-value data from a persisted partition Key: IGNITE-7644 URL: https://issues.apache.org/jira/browse/IGNITE-7644 Project: Ignite Issue Type: Improvement Components: persistence Affects Versions: 2.1 Reporter: Alexey Goncharuk Fix For: 2.5 We need an emergency utility, analogous to pg_dump, that will be able to full-scan all PDS partition pages and extract all surviving data in a form that can later be uploaded back to an Ignite cluster -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7535) SQL COPY command: implement encoding option
[ https://issues.apache.org/jira/browse/IGNITE-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355644#comment-16355644 ] Kirill Shirokov commented on IGNITE-7535: - Please note that I consciously include import order fixes in this patch. > SQL COPY command: implement encoding option > --- > > Key: IGNITE-7535 > URL: https://issues.apache.org/jira/browse/IGNITE-7535 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Kirill Shirokov >Assignee: Kirill Shirokov >Priority: Major > > The syntax can be something like: > {noformat} > COPY > ... > FORMAT CSV > ... > [CHARSET ""] > {noformat} > CHARSET is optional. By default the encoding is UTF-8. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
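Under the grammar quoted in the ticket, the command could be issued through Ignite's thin JDBC driver roughly as below. This is a sketch of the *proposed* syntax only: the table, column list, file path, and the exact CHARSET quoting are assumptions, not confirmed behavior.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: running the proposed COPY ... FORMAT CSV [CHARSET ...] command
// over the thin JDBC driver. Table, path and quoting are illustrative.
public class CopyCharsetSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(
                "COPY FROM '/data/cities.csv' " +
                "INTO city (id, name) " +
                "FORMAT CSV " +
                "CHARSET 'windows-1251'"); // optional; UTF-8 by default per the ticket
        }
    }
}
```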
[jira] [Updated] (IGNITE-6552) The ability to set WAL history size in time units
[ https://issues.apache.org/jira/browse/IGNITE-6552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk updated IGNITE-6552: - Fix Version/s: 2.5 > The ability to set WAL history size in time units > - > > Key: IGNITE-6552 > URL: https://issues.apache.org/jira/browse/IGNITE-6552 > Project: Ignite > Issue Type: Improvement > Components: persistence >Affects Versions: 2.2 >Reporter: Vladislav Pyatkov >Priority: Major > Fix For: 2.5 > > > We can set the WAL history size as a number of checkpoints: > {code} > org.apache.ignite.configuration.PersistentStoreConfiguration#setWalHistorySize > {code} > But it is not convenient for the end user: nobody can say how many checkpoints will > occur over several minutes. > I think it would be better if we had the ability to set the WAL history size in > time units (milliseconds, for example). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-6552) The ability to set WAL history size in time units
[ https://issues.apache.org/jira/browse/IGNITE-6552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355640#comment-16355640 ] Alexey Goncharuk commented on IGNITE-6552: -- Alternatively, we could add the ability to set the WAL history size in gigabytes. When we get too close to the WAL limit, we should trigger another checkpoint. > The ability to set WAL history size in time units > - > > Key: IGNITE-6552 > URL: https://issues.apache.org/jira/browse/IGNITE-6552 > Project: Ignite > Issue Type: Improvement > Components: persistence >Affects Versions: 2.2 >Reporter: Vladislav Pyatkov >Priority: Major > > > Currently we can set the size of the WAL history as a number of checkpoints. > {code} > org.apache.ignite.configuration.PersistentStoreConfiguration#setWalHistorySize > {code} > But this is not convenient for the end user: nobody can say how many checkpoints occur over several minutes. > It would be better to have the ability to set the WAL history size in time units (milliseconds, for example). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
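The two retention policies under discussion — by checkpoint count versus by age — can be sketched side by side. This is an illustrative model only; WalRetentionSketch and its method names are hypothetical and do not exist in Ignite:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class WalRetentionSketch {
    /** Count-based policy (what setWalHistorySize expresses today):
     *  keep only the last maxCheckpoints checkpoint timestamps. */
    public static List<Long> keepByCount(List<Long> checkpointTs, int maxCheckpoints) {
        int from = Math.max(0, checkpointTs.size() - maxCheckpoints);
        return new ArrayList<>(checkpointTs.subList(from, checkpointTs.size()));
    }

    /** Time-based policy proposed in the ticket: keep every checkpoint
     *  no older than retentionMs relative to 'now'. */
    public static List<Long> keepByAge(List<Long> checkpointTs, long now, long retentionMs) {
        List<Long> kept = new ArrayList<>();
        for (long ts : checkpointTs)
            if (now - ts <= retentionMs)
                kept.add(ts);
        return kept;
    }

    public static void main(String[] args) {
        List<Long> ts = Arrays.asList(0L, 1000L, 2000L, 3000L);
        System.out.println(keepByCount(ts, 2));          // keep the last two checkpoints
        System.out.println(keepByAge(ts, 3000L, 1500L)); // keep checkpoints at most 1.5s old
    }
}
```

The time-based variant answers the complaint directly: the user reasons in minutes of history, not in checkpoint counts that depend on load.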
[jira] [Commented] (IGNITE-7605) SQL COPY: add more SQL parser tests for positive scenarios
[ https://issues.apache.org/jira/browse/IGNITE-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355635#comment-16355635 ] ASF GitHub Bot commented on IGNITE-7605: GitHub user gg-shq opened a pull request: https://github.com/apache/ignite/pull/3488 IGNITE-7605: Add more SQL parser tests for COPY command You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-7605 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/3488.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3488 commit 3175214a8f12fe9a945b3020dabb5b19dee5edd3 Author: gg-shq Date: 2018-02-07T16:00:12Z IGNITE-7606: Added BATCH_SIZE tests > SQL COPY: add more SQL parser tests for positive scenarios > -- > > Key: IGNITE-7605 > URL: https://issues.apache.org/jira/browse/IGNITE-7605 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 2.4 >Reporter: Kirill Shirokov >Assignee: Kirill Shirokov >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-7605) SQL COPY: add more SQL parser tests for positive scenarios
[ https://issues.apache.org/jira/browse/IGNITE-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Shirokov reassigned IGNITE-7605: --- Assignee: Kirill Shirokov > SQL COPY: add more SQL parser tests for positive scenarios > -- > > Key: IGNITE-7605 > URL: https://issues.apache.org/jira/browse/IGNITE-7605 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 2.4 >Reporter: Kirill Shirokov >Assignee: Kirill Shirokov >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7535) SQL COPY command: implement encoding option
[ https://issues.apache.org/jira/browse/IGNITE-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355624#comment-16355624 ] ASF GitHub Bot commented on IGNITE-7535: GitHub user gg-shq opened a pull request: https://github.com/apache/ignite/pull/3487 IGNITE-7535: Implement CHARSET option in COPY ... FORMAT CSV command You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-7535 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/3487.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3487 commit deb7994f0fd4233e3e0b699794b9066af87195c7 Author: gg-shq Date: 2018-01-19T12:13:37Z IGNITE-6917: Intermediate commit commit e7747a58c2cdacc6987d625a46d1f79a81863cd3 Author: gg-shq Date: 2018-01-19T17:21:50Z IGNITE-6917: Intermediate commit commit 6f37e6751285a96bdf757b392e1d4113bb47ee48 Author: gg-shq Date: 2018-01-19T17:30:22Z IGNITE-6917: Intermediate commit commit 49f0324c77d0bb3b4ec87317b1ecbde1bd6f34b1 Author: gg-shq Date: 2018-01-22T10:27:34Z IGNITE-6917: Intermediate commit commit a5bec61d41d8dc242cfbf11a7cf03c23bbbcd7c3 Author: gg-shq Date: 2018-01-22T12:25:04Z IGNITE-6917: Intermediate commit commit e18e18696fc92b93b17decf087721c693625ac36 Author: gg-shq Date: 2018-01-22T12:35:56Z IGNITE-6917: Intermediate commit commit 990c04919e181535e57290ee2516a9603657c160 Author: gg-shq Date: 2018-01-22T16:18:18Z IGNITE-6917: Intermediate commit commit 8b163410845a6e6233fa8a2746402651ccea3f69 Author: gg-shq Date: 2018-01-22T17:16:20Z IGNITE-6917: Intermediate commit commit faf762815ef58865c560d6de722be446e429c61d Author: gg-shq Date: 2018-01-23T12:37:56Z IGNITE-6917: Intermediate commit commit f79343f04360b913b32403d5aa0defaf5d04b357 Author: gg-shq Date: 2018-01-23T12:40:34Z IGNITE-6917: Intermediate commit commit 
efc5d7ab9bad52aaad0872977495a158b0e47770 Author: gg-shq Date: 2018-01-23T13:50:04Z IGNITE-6917: Added BATCH_SIZE parameter to COPY SQL command for internal testing. Adding tests. commit b61db5de48d9a91a658f6133a2ad2544f358ebbf Author: gg-shq Date: 2018-01-24T11:18:55Z IGNITE-6917: Adding tests. Clarifying default columns set. commit aa31488b74c74f881c247339a4b2bd31bf45b849 Author: gg-shq Date: 2018-01-24T19:00:17Z IGNITE-6917: More tests, more logging, cleanups, streaming CSV decoder commit 01125f4bb68bc4ae958cae1d2f8f7dee493fa55e Author: gg-shq Date: 2018-01-25T11:54:29Z IGNITE-6917: Javadoc, added BulkLoadCacheWriter. commit 1a21cd91b3571a23d21bab7cb653478312178bb0 Author: gg-shq Date: 2018-01-25T12:29:03Z IGNITE-6917: Javadoc, javadoc, javadoc. commit a34060392bd5d86b0118fbd26127460d54f918c3 Author: gg-shq Date: 2018-01-25T12:29:57Z IGNITE-6917: Fixed a syntax error added involuntarily in the previous commit. commit ccaef2e349a728145c77c50d96d24e8a38ac35e1 Author: gg-shq Date: 2018-01-25T13:31:40Z IGNITE-6917: Fixed charset decoder bugs, tests, handling of empty lines commit b4cf0a4a4fb6cf3c6c35f09fe99ac5954541a679 Author: gg-shq Date: 2018-01-26T10:54:01Z Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/ignite into ignite-6917-1 # Conflicts: # modules/core/src/main/java/org/apache/ignite/internal/sql/SqlParser.java # modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IgniteH2Indexing.java commit 26694bc00b76895d4f22e8416af29000197230ec Author: gg-shq Date: 2018-01-26T12:45:32Z IGNITE-6917: Moved syntax tests to a separate file, moved truncated rows handling from UpdatePlan.processRow() to a different place, minor changes commit 1d7a9f8818dff4c4bc6a8f9d509a23be394b3e59 Author: gg-shq Date: 2018-01-26T12:55:47Z IGNITE-6917: Find input files by using IgniteUtils.resolveIgnitePath(), test fixes. 
commit 58c9b2dc7190c149e9c9fa377a2581849bb41420 Author: gg-shq Date: 2018-01-26T12:58:58Z Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/ignite into ignite-6917-1 commit 47eace53c82770b076b26bbba87944c872a941ad Author: gg-shq Date: 2018-01-26T13:06:11Z IGNITE-6917: Javadoc, tidying up. commit ba0f9c822a0873662a5505892956ca4e68d87e56 Author: gg-shq Date: 2018-01-26T14:53:10Z IGNITE-6917: Added error reporting and tests for batch mode and into jdbc2 driver.
[jira] [Resolved] (IGNITE-6891) Proper behavior on Persistence errors
[ https://issues.apache.org/jira/browse/IGNITE-6891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Sorokin resolved IGNITE-6891. - Resolution: Duplicate > Proper behavior on Persistence errors > -- > > Key: IGNITE-6891 > URL: https://issues.apache.org/jira/browse/IGNITE-6891 > Project: Ignite > Issue Type: Improvement >Reporter: Anton Vinogradov >Assignee: Dmitriy Sorokin >Priority: Major > Labels: iep-7 > Fix For: 2.5 > > > The node should be stopped anyway; what we can provide is a user callback, > something like 'beforeNodeStop'. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-6890) General way for handling Ignite failures
[ https://issues.apache.org/jira/browse/IGNITE-6890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Sorokin updated IGNITE-6890: Summary: General way for handling Ignite failures (was: Proper behavior on ExchangeWorker exits with error ) > General way for handling Ignite failures > > > Key: IGNITE-6890 > URL: https://issues.apache.org/jira/browse/IGNITE-6890 > Project: Ignite > Issue Type: Improvement >Reporter: Anton Vinogradov >Assignee: Dmitriy Sorokin >Priority: Major > Labels: iep-7 > Fix For: 2.5 > > > Ignite failures which should be handled are: > # Topology segmentation; > # Exchange worker stop; > # Persistence errors. > The proper behavior should be selected according to the result of calling an > IgniteFailureHandler instance, a custom implementation of which can be provided > in IgniteConfiguration. The behavior can be node stop, restart, or nothing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-6890) Proper behavior on ExchangeWorker exits with error
[ https://issues.apache.org/jira/browse/IGNITE-6890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Sorokin updated IGNITE-6890: Description: Ignite failures which should be handled are: # Topology segmentation; # Exchange worker stop; # Persistence errors. The proper behavior should be selected according to the result of calling an IgniteFailureHandler instance, a custom implementation of which can be provided in IgniteConfiguration. The behavior can be node stop, restart, or nothing. was: The node should be stopped anyway; what we can provide is a user callback, something like 'beforeNodeStop'. > Proper behavior on ExchangeWorker exits with error > --- > > Key: IGNITE-6890 > URL: https://issues.apache.org/jira/browse/IGNITE-6890 > Project: Ignite > Issue Type: Improvement >Reporter: Anton Vinogradov >Assignee: Dmitriy Sorokin >Priority: Major > Labels: iep-7 > Fix For: 2.5 > > > Ignite failures which should be handled are: > # Topology segmentation; > # Exchange worker stop; > # Persistence errors. > The proper behavior should be selected according to the result of calling an > IgniteFailureHandler instance, a custom implementation of which can be provided > in IgniteConfiguration. The behavior can be node stop, restart, or nothing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
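The dispatch described in the ticket — a pluggable handler that maps a failure type to a node action, with a sensible default — can be sketched as follows. All names here (FailureHandlingSketch, FailureAction, the handler signature, and the stop-by-default policy) are illustrative assumptions; the real IgniteFailureHandler API was still being designed in this ticket:

```java
public class FailureHandlingSketch {
    /** Failure kinds listed in the ticket. */
    public enum FailureType { SEGMENTATION, EXCHANGE_WORKER_STOP, PERSISTENCE_ERROR }

    /** Reactions named in the ticket: node stop, restart, or nothing. */
    public enum FailureAction { STOP, RESTART, NOOP }

    /** Hypothetical stand-in for the IgniteFailureHandler described above. */
    public interface FailureHandler {
        FailureAction onFailure(FailureType type, Throwable cause);
    }

    /** An assumed default: stop the node on any failure. */
    public static final FailureHandler DFLT_HANDLER = (type, cause) -> FailureAction.STOP;

    /** Uses the custom handler from the configuration when present, else the default. */
    public static FailureAction handle(FailureHandler custom, FailureType type, Throwable cause) {
        return (custom != null ? custom : DFLT_HANDLER).onFailure(type, cause);
    }

    public static void main(String[] args) {
        // No custom handler configured: the default policy stops the node.
        System.out.println(handle(null, FailureType.SEGMENTATION, null));

        // A custom handler may choose to restart on persistence errors instead.
        FailureHandler custom = (type, cause) ->
            type == FailureType.PERSISTENCE_ERROR ? FailureAction.RESTART : FailureAction.STOP;
        System.out.println(handle(custom, FailureType.PERSISTENCE_ERROR, null));
    }
}
```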
[jira] [Commented] (IGNITE-7586) SQL COPY: add code examples
[ https://issues.apache.org/jira/browse/IGNITE-7586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355604#comment-16355604 ] ASF GitHub Bot commented on IGNITE-7586: Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/3485 > SQL COPY: add code examples > --- > > Key: IGNITE-7586 > URL: https://issues.apache.org/jira/browse/IGNITE-7586 > Project: Ignite > Issue Type: Improvement > Components: sql >Affects Versions: 2.4 >Reporter: Kirill Shirokov >Assignee: Kirill Shirokov >Priority: Major > Labels: iep-1 > Fix For: 2.4 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (IGNITE-6842) Stop all nodes after test by default.
[ https://issues.apache.org/jira/browse/IGNITE-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355588#comment-16355588 ] Maxim Muzafarov edited comment on IGNITE-6842 at 2/7/18 3:27 PM: - Discussed in dev-list {quote}*Anton Vinogradov* We discussed with Dima privately, and decided 1) We have to assert that there are no alive nodes at GridAbstractTest's beforeTestsStarted 2) We have to kill all alive nodes (without force) at GridAbstractTest's afterTestsStopped 3) In case of any exceptions at #2 we have to see a test error 4) We can get rid of all useless stopAllGrids at GridAbstractTest's subclasses. {quote} {quote}*Dmitry Pavlov* {quote} {quote}Yes, this solution covers both cases: a) a node not stopped by a previous test, and b) removing the useless code that stops Ignite nodes in each test. {quote} was (Author: mmuzaf): Discussed in dev-list {quote}*Anton Vinogradov* We discussed with Dima privately, and decided 1) We have to assert that there are no alive nodes at GridAbstractTest's beforeTestsStarted 2) We have to kill all alive nodes (without force) at GridAbstractTest's afterTestsStopped 3) In case of any exceptions at #2 we have to see a test error 4) We can get rid of all useless stopAllGrids at GridAbstractTest's subclasses.{quote} {quote}*Dmitry Pavlov*{quote} {quote}Yes, this solution covers both cases: a) a node not stopped by a previous test, and b) removing the useless code that stops Ignite nodes in each test.{quote} > Stop all nodes after test by default. > - > > Key: IGNITE-6842 > URL: https://issues.apache.org/jira/browse/IGNITE-6842 > Project: Ignite > Issue Type: Improvement >Reporter: Alexei Scherbakov >Assignee: Maxim Muzafarov >Priority: Major > Labels: newbie > Fix For: 2.5 > > > Currently it's required to manually call stopAllGrids() after test completion. > This leads to errors in subsequent tests if someone forgets to call it and to > additional boilerplate code in tests. 
> The right choice is to make this the default behavior. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-6842) Stop all nodes after test by default.
[ https://issues.apache.org/jira/browse/IGNITE-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355588#comment-16355588 ] Maxim Muzafarov commented on IGNITE-6842: - Discussed in dev-list {quote}*Anton Vinogradov* We discussed with Dima privately, and decided 1) We have to assert that there are no alive nodes at GridAbstractTest's beforeTestsStarted 2) We have to kill all alive nodes (without force) at GridAbstractTest's afterTestsStopped 3) In case of any exceptions at #2 we have to see a test error 4) We can get rid of all useless stopAllGrids at GridAbstractTest's subclasses.{quote} {quote}*Dmitry Pavlov*{quote} {quote}Yes, this solution covers both cases: a) a node not stopped by a previous test, and b) removing the useless code that stops Ignite nodes in each test.{quote} > Stop all nodes after test by default. > - > > Key: IGNITE-6842 > URL: https://issues.apache.org/jira/browse/IGNITE-6842 > Project: Ignite > Issue Type: Improvement >Reporter: Alexei Scherbakov >Assignee: Maxim Muzafarov >Priority: Major > Labels: newbie > Fix For: 2.5 > > > Currently it's required to manually call stopAllGrids() after test completion. > This leads to errors in subsequent tests if someone forgets to call it and to > additional boilerplate code in tests. > The right choice is to make this the default behavior. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
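The four steps agreed in the dev-list thread can be modeled with a simulated registry of alive nodes. GridLifecycleSketch below is a stand-in sketch, not GridAbstractTest itself:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class GridLifecycleSketch {
    /** Simulated registry of alive nodes (stand-in for the real topology). */
    private final Set<String> aliveNodes = new LinkedHashSet<>();

    public void startGrid(String name) { aliveNodes.add(name); }

    public int aliveCount() { return aliveNodes.size(); }

    /** Step 1: fail fast if a previous test leaked nodes. */
    public void beforeTestsStarted() {
        if (!aliveNodes.isEmpty())
            throw new IllegalStateException("Alive nodes leaked from a previous test: " + aliveNodes);
    }

    /** Steps 2 and 3: stop every alive node (without force) and surface any
     *  stop failure as a test error. */
    public void afterTestsStopped() {
        try {
            aliveNodes.clear(); // stands in for stopAllGrids()
        } catch (RuntimeException e) {
            throw new AssertionError("Failed to stop nodes after tests", e);
        }
    }

    public static void main(String[] args) {
        GridLifecycleSketch test = new GridLifecycleSketch();
        test.beforeTestsStarted();    // no leftovers from a previous test
        test.startGrid("node0");
        test.startGrid("node1");
        test.afterTestsStopped();     // step 4: no per-test stopAllGrids() boilerplate needed
        System.out.println("alive after tests: " + test.aliveCount());
    }
}
```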
[jira] [Commented] (IGNITE-7508) GridKernalContextImpl::isDaemon creates contention on system properties access
[ https://issues.apache.org/jira/browse/IGNITE-7508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355591#comment-16355591 ] ASF GitHub Bot commented on IGNITE-7508: Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/3468 > GridKernalContextImpl::isDaemon creates contention on system properties access > -- > > Key: IGNITE-7508 > URL: https://issues.apache.org/jira/browse/IGNITE-7508 > Project: Ignite > Issue Type: Bug > Components: general >Reporter: Stanislav Lukyanov >Assignee: Andrew Mashenkov >Priority: Major > > GridKernalContextImpl::isDaemon reads system property IGNITE_DAEMON on every > call, leading to contention on the system properties lock. The lock is shown > as contended in the Java Mission Control analysis of a JFR recording of the > IgnitePutGetBenchmark. > The fix would be to cache IGNITE_DAEMON value (e.g. in IgniteUtils) since it > isn't supposed to be changed during the JVM's lifetime anyway. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-7540) Sequential checkpoints cause overwrite of already cleaned & freed offheap page
[ https://issues.apache.org/jira/browse/IGNITE-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Kovalenko reassigned IGNITE-7540: --- Assignee: Pavel Kovalenko (was: Alexey Goncharuk) > Sequential checkpoints cause overwrite of already cleaned & freed offheap page > -- > > Key: IGNITE-7540 > URL: https://issues.apache.org/jira/browse/IGNITE-7540 > Project: Ignite > Issue Type: Bug > Components: persistence >Affects Versions: 2.4 >Reporter: Ilya Kasnacheev >Assignee: Pavel Kovalenko >Priority: Major > Attachments: IgnitePdsDestroyCacheTest.java > > > The sequence of events is as follows: > in GridCacheProcessor.onExchangeDone(), > sharedCtx.database().waitForCheckpoint("caches > stop") is performed and then the cache is destroyed and all its pages are > freed and cleared asynchronously. > However, it is entirely possible that after waitForCheckpoint(), the next > checkpoint will start immediately. This is typical when a lot of data is being > loaded into Ignite, leading to rapid checkpoint buffer depletion, as well as > with artificially increased checkpoint frequency, as used in the reproducer. 
> Then, checkpointer will save (overwrite) metadata page: > {code:java} > at > org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.writeUnlockPage(PageMemoryImpl.java:1330) > at > org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.writeUnlock(PageMemoryImpl.java:428) > at > org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.writeUnlock(PageMemoryImpl.java:422) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.saveStoreMetadata(GridCacheOffheapManager.java:375) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.onCheckpointBegin(GridCacheOffheapManager.java:163) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.markCheckpointBegin(GridCacheDatabaseSharedManager.java:2309) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.doCheckpoint(GridCacheDatabaseSharedManager.java:2088) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.body(GridCacheDatabaseSharedManager.java:2013) > at > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > at java.lang.Thread.run(Thread.java:748){code} > This will happen after cache is already destroyed and even after the page is > already zeroed by PageMemoryImpl$ClearSegmentRunnable.run(). > Then, some new cache is being created, and in > GridCacheOffheapManager$GridCacheDataStore.getOrAllocatePartitionMetas(), > pageMem.acquirePage() will return this page, expected zeroed, but actually > containing metadata for old cache's partition. 
Then, the type == > PageIO.T_PART_META check will return true and the following exception is > issued, leading to cache state inconsistency and data loss: > {code:java} > Caused by: java.lang.IllegalStateException: Failed to get page IO instance > (page content is corrupted) > at > org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forVersion(IOVersions.java:83) > at > org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forPage(IOVersions.java:95) > at > org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.init(PagesList.java:175) > at > org.apache.ignite.internal.processors.cache.persistence.freelist.FreeListImpl.<init>(FreeListImpl.java:370) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore$1.<init>(GridCacheOffheapManager.java:932) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.init0(GridCacheOffheapManager.java:929) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:1295) > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:344) > at > org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:3191) > at > org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:2571) > at > org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$IsolatedUpdater.receive(DataStreamerImpl.java:2096) > at > org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140) >
[jira] [Assigned] (IGNITE-640) Implement IgniteMultimap data structures
[ https://issues.apache.org/jira/browse/IGNITE-640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk reassigned IGNITE-640: --- Assignee: Alexey Goncharuk > Implement IgniteMultimap data structures > > > Key: IGNITE-640 > URL: https://issues.apache.org/jira/browse/IGNITE-640 > Project: Ignite > Issue Type: Sub-task > Components: data structures >Reporter: Dmitriy Setrakyan >Assignee: Alexey Goncharuk >Priority: Major > > We need to add the {{IgniteMultimap}} data structure in addition to the other data > structures provided by Ignite. {{IgniteMultiMap}} should have a similar API to the > {{java.util.Map}} class in the JDK, but support the semantics of multiple values > per key, similar to [Guava > Multimap|http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/Multimap.html]. > > However, unlike in Guava, our multi-map should work with Lists, not > Collections. Lists should make it possible to support the following methods: > {code} > // Gets value at a certain index for a key. > V get(K, index); > // Gets all values for a collection of keys at a certain index. > Map getAll(Collection, index); > // Gets values for specified indexes for a key. > List get(K, Iterable indexes); > // Gets all values for a collection of keys at specified indexes. > Map getAll(Collection, Iterable indexes); > // Gets values for specified range of indexes, between min and max. > List get(K, int min, int max); > // Gets all values for a collection of keys for a specified index range, > between min and max. > Map getAll(Collection, int min, int max); > // Gets all values for a specific key. > List get(K); > // Gets all values for a collection of keys. > Map getAll(Collection); > // Iterate through all elements with a certain index. > Iterator > iterate(int idx); > // Do we need this? 
> Collection get(K, IgniteBiPredicate ) > {code} > Multimap should also support colocated and non-colocated modes, similar to > [IgniteQueue|https://github.com/apache/incubator-ignite/blob/master/modules/core/src/main/java/org/apache/ignite/IgniteQueue.java] > and its implementation, > [GridAtomicCacheQueueImpl|https://github.com/apache/incubator-ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/GridAtomicCacheQueueImpl.java]. > h2. Design Details > The most natural way to implement such a map would be to store every value > under a separate key in an Ignite cache. For example, let's say that we have > a key {{K}} with multiple values: {{V0, V1, V2, ...}}. Then the cache should > end up with the following values {{K0, V0}}, {{K1, V1}}, {{K2, V2}}, etc. > This means that we need to wrap the user key into our own internal key, which > will also have an {{index}} field. > Also note that we need to collocate all the values for the same key on the > same node, which means that we need to define the user key K as the affinity key, > like so: > {code} > class MultiKey { > @CacheAffinityMapped > private K key; > int index; > } > {code} > Lookups of values at specific indexes become very simple. Just attach a > specific index to a key and do a cache lookup. Lookups of all values for a > key should work as follows: > {code} > V v = null; > int index = 0; > List res = new LinkedList<>(); > do { > v = cache.get(new MultiKey(K, index)); > if (v != null) > res.add(v); > index++; > } > while (v != null); > return res; > {code} > We could also use batching for performance reasons. In this case the batch > size should be configurable. > {code} > int index = 0; > List res = new LinkedList<>(); > while (true) { > List batch = new ArrayList<>(batchSize); > // Populate the batch. 
> for (int i = 0; i < batchSize; i++, index++) > batch.add(new MultiKey(K, index)); > Map batchRes = cache.getAll(batch); > // Potentially need to properly sort values, based on the key order, > // if the returned map does not do it automatically. > res.addAll(batchRes.values()); > if (batchRes.size() < batch.size()) > break; > } > return res; > {code} > h2. Evictions > Evictions in the {{IgniteMultiMap}} should have 2 levels: maximum number of > keys, and maximum number of values for a key. The maximum number of keys > should be controlled by the standard Ignite eviction policy. The maximum number > of values for a key should be controlled by the implementation of the > multi-map. Both eviction parameters should be configurable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
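The per-index lookup loop from the design section above can be exercised with a plain HashMap standing in for the Ignite cache. MultimapLookupSketch below is an illustrative simulation, not the proposed IgniteMultimap API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class MultimapLookupSketch {
    /** Internal key wrapping the user key plus a value index, as in the design above. */
    public static final class MultiKey<K> {
        final K key;
        final int index;

        public MultiKey(K key, int index) { this.key = key; this.index = index; }

        @Override public boolean equals(Object o) {
            if (!(o instanceof MultiKey)) return false;
            MultiKey<?> other = (MultiKey<?>)o;
            return Objects.equals(key, other.key) && index == other.index;
        }

        @Override public int hashCode() { return Objects.hash(key, index); }
    }

    /** The "probe index 0, 1, 2, ... until a miss" loop from the design section. */
    public static <K, V> List<V> getAllValues(Map<MultiKey<K>, V> cache, K key) {
        List<V> res = new ArrayList<>();
        for (int index = 0; ; index++) {
            V v = cache.get(new MultiKey<>(key, index));
            if (v == null)
                break;
            res.add(v);
        }
        return res;
    }

    public static void main(String[] args) {
        Map<MultiKey<String>, String> cache = new HashMap<>();
        cache.put(new MultiKey<>("k", 0), "v0");
        cache.put(new MultiKey<>("k", 1), "v1");
        System.out.println(getAllValues(cache, "k"));
    }
}
```

Note that the probe-until-miss loop assumes value indexes are dense (no holes), which is the same assumption the design's pseudocode makes.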
[jira] [Assigned] (IGNITE-640) Implement IgniteMultimap data structures
[ https://issues.apache.org/jira/browse/IGNITE-640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk reassigned IGNITE-640: --- Assignee: (was: Alexey Goncharuk) > Implement IgniteMultimap data structures > > > Key: IGNITE-640 > URL: https://issues.apache.org/jira/browse/IGNITE-640 > Project: Ignite > Issue Type: Sub-task > Components: data structures >Reporter: Dmitriy Setrakyan >Priority: Major > > We need to add the {{IgniteMultimap}} data structure in addition to the other data > structures provided by Ignite. {{IgniteMultiMap}} should have a similar API to the > {{java.util.Map}} class in the JDK, but support the semantics of multiple values > per key, similar to [Guava > Multimap|http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/Multimap.html]. > > However, unlike in Guava, our multi-map should work with Lists, not > Collections. Lists should make it possible to support the following methods: > {code} > // Gets value at a certain index for a key. > V get(K, index); > // Gets all values for a collection of keys at a certain index. > Map getAll(Collection, index); > // Gets values for specified indexes for a key. > List get(K, Iterable indexes); > // Gets all values for a collection of keys at specified indexes. > Map getAll(Collection, Iterable indexes); > // Gets values for specified range of indexes, between min and max. > List get(K, int min, int max); > // Gets all values for a collection of keys for a specified index range, > between min and max. > Map getAll(Collection, int min, int max); > // Gets all values for a specific key. > List get(K); > // Gets all values for a collection of keys. > Map getAll(Collection); > // Iterate through all elements with a certain index. > Iterator > iterate(int idx); > // Do we need this? 
> Collection get(K, IgniteBiPredicate ) > {code} > Multimap should also support colocated and non-colocated modes, similar to > [IgniteQueue|https://github.com/apache/incubator-ignite/blob/master/modules/core/src/main/java/org/apache/ignite/IgniteQueue.java] > and its implementation, > [GridAtomicCacheQueueImpl|https://github.com/apache/incubator-ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/GridAtomicCacheQueueImpl.java]. > h2. Design Details > The most natural way to implement such a map would be to store every value > under a separate key in an Ignite cache. For example, let's say that we have > a key {{K}} with multiple values: {{V0, V1, V2, ...}}. Then the cache should > end up with the following values {{K0, V0}}, {{K1, V1}}, {{K2, V2}}, etc. > This means that we need to wrap the user key into our own internal key, which > will also have an {{index}} field. > Also note that we need to collocate all the values for the same key on the > same node, which means that we need to define the user key K as the affinity key, > like so: > {code} > class MultiKey { > @CacheAffinityMapped > private K key; > int index; > } > {code} > Lookups of values at specific indexes become very simple. Just attach a > specific index to a key and do a cache lookup. Lookups of all values for a > key should work as follows: > {code} > V v = null; > int index = 0; > List res = new LinkedList<>(); > do { > v = cache.get(new MultiKey(K, index)); > if (v != null) > res.add(v); > index++; > } > while (v != null); > return res; > {code} > We could also use batching for performance reasons. In this case the batch > size should be configurable. > {code} > int index = 0; > List res = new LinkedList<>(); > while (true) { > List batch = new ArrayList<>(batchSize); > // Populate the batch. 
> for (int i = 0; i < batchSize; i++, index++) > batch.add(new MultiKey(K, index)); > Map batchRes = cache.getAll(batch); > // Potentially need to properly sort values, based on the key order, > // if the returned map does not do it automatically. > res.addAll(batchRes.values()); > if (batchRes.size() < batch.size()) > break; > } > return res; > {code} > h2. Evictions > Evictions in the {{IgniteMultiMap}} should have 2 levels: maximum number of > keys, and maximum number of values for a key. The maximum number of keys > should be controlled by the standard Ignite eviction policy. The maximum number > of values for a key should be controlled by the implementation of the > multi-map. Both eviction parameters should be configurable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7586) SQL COPY: add code examples
[ https://issues.apache.org/jira/browse/IGNITE-7586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355509#comment-16355509 ] Alexander Paschenko commented on IGNITE-7586: - [~kirill.shirokov], Thanks, my comments: # Please refactor imports - we don't use wildcards. Also the order of imports does matter, please have a look at the guidelines. (First java.*, then everything else.) IDEA can easily be tuned to lay imports out just the way we need. # Please remove "create index", it is not necessary to the scope of this example. # Please change "JDBC example finished." to "JDBC COPY command example finished." > SQL COPY: add code examples > --- > > Key: IGNITE-7586 > URL: https://issues.apache.org/jira/browse/IGNITE-7586 > Project: Ignite > Issue Type: Improvement > Components: sql >Affects Versions: 2.4 >Reporter: Kirill Shirokov >Assignee: Kirill Shirokov >Priority: Major > Labels: iep-1 > Fix For: 2.4 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7638) Page replacement process in PDS mode affect segment loaded pages table performance
[ https://issues.apache.org/jira/browse/IGNITE-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355480#comment-16355480 ] Dmitriy Pavlov commented on IGNITE-7638: A fast fix that migrates continuous runs of removed segments to empty, if the last one is empty, gives a considerable effect. Segment statistics in my test are as follows: {noformat} Avg steps in loaded pages table, get: 348 steps, 4358 ops/sec, put: 1037 steps, 1380 ops/sec, capacity: 20242 size: 7978 removedCnt: 12023 emptyCnt: 241 rmv->empty cells: 20624 Avg steps in loaded pages table, get: 183 steps, 7715 ops/sec, put: 788 steps, 1624 ops/sec, capacity: 20242 size: 7978 removedCnt: 12060 emptyCnt: 204 rmv->empty cells: 18963 Avg steps in loaded pages table, get: 235 steps, 7874 ops/sec, put: 1050 steps, 1721 ops/sec, capacity: 20242 size: 7978 removedCnt: 12066 emptyCnt: 198 rmv->empty cells: 19910 Avg steps in loaded pages table, get: 421 steps, 8345 ops/sec, put: 2048 steps, 1643 ops/sec, capacity: 20242 size: 7978 removedCnt: 12233 emptyCnt: 31 rmv->empty cells: 17944 Avg steps in loaded pages table, get: 349 steps, 7883 ops/sec, put: 1683 steps, 1559 ops/sec, capacity: 20242 size: 7978 removedCnt: 12078 emptyCnt: 186 rmv->empty cells: 18935 Avg steps in loaded pages table, get: 264 steps, 8060 ops/sec, put: 1283 steps, 1568 ops/sec, capacity: 20242 size: 7978 removedCnt: 12105 emptyCnt: 159 rmv->empty cells: 18854 Avg steps in loaded pages table, get: 405 steps, 8289 ops/sec, put: 1781 steps, 1856 ops/sec, capacity: 20242 size: 7978 removedCnt: 12142 emptyCnt: 122 rmv->empty cells: 18529 Avg steps in loaded pages table, get: 347 steps, 8059 ops/sec, put: 1494 steps, 1873 ops/sec, capacity: 20242 size: 7978 removedCnt: 12097 emptyCnt: 167 rmv->empty cells: 19652 {noformat} > Page replacement process in PDS mode affect segment loaded pages table > performance > -- > > Key: IGNITE-7638 > URL: https://issues.apache.org/jira/browse/IGNITE-7638 > Project: Ignite > Issue Type: Bug > 
Components: persistence >Reporter: Dmitriy Pavlov >Assignee: Dmitriy Pavlov >Priority: Critical > Fix For: 2.5 > > > When there is > - a durable memory segment in a PDS-enabled data region > - page replacement has started > then an Ignite performance slowdown occurs; JFR & VisualVM show that a get from > FullPageIdTable requires significant time to execute (~5-10% of sample > count and up to 40% CPU usage) > {noformat} > Avg steps in loaded pages table, get: 9456 steps, put: 20243 steps, > capacity: 20242 size: 7978 > {noformat} > The effect of the FullPageIdTable is cumulative: once the table starts to > degrade, the number of steps required keeps growing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
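The "fast fix" Dmitriy describes is a classic open-addressing optimization: a probe sequence only walks across "removed" (tombstone) cells because it has not yet hit an "empty" cell, so any contiguous run of tombstones terminated by an empty cell can safely be collapsed back to empty. A minimal sketch of that idea for a generic linear-probing table (the names here are illustrative, not Ignite's actual `FullPageIdTable` API):

```java
/** Sketch of collapsing a run of REMOVED (tombstone) cells in a
 *  linear-probing table. Long tombstone runs make every miss walk
 *  the whole run, which is the degradation reported in the ticket. */
public class TombstoneCleanup {
    public static final int EMPTY = 0, FULL = 1, REMOVED = 2;

    /** Converts the contiguous run of REMOVED cells that immediately
     *  precedes an EMPTY cell back to EMPTY. Safe because any probe
     *  crossing those cells would have terminated at the EMPTY cell
     *  anyway. Returns the number of cells converted. */
    public static int collapseRemovedRun(int[] state, int emptyIdx) {
        int n = state.length, converted = 0;
        int i = (emptyIdx - 1 + n) % n;          // table is circular
        while (state[i] == REMOVED && converted < n - 1) {
            state[i] = EMPTY;
            converted++;
            i = (i - 1 + n) % n;
        }
        return converted;
    }

    public static void main(String[] args) {
        int[] state = {FULL, REMOVED, REMOVED, REMOVED, EMPTY, FULL};
        // Collapses the three tombstones preceding the EMPTY cell at index 4.
        System.out.println(collapseRemovedRun(state, 4)); // prints 3
    }
}
```

This matches the `rmv->empty cells` counters in the statistics above: each conversion shortens future probe chains without rehashing the whole segment.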
[jira] [Updated] (IGNITE-7586) SQL COPY: add code examples
[ https://issues.apache.org/jira/browse/IGNITE-7586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Shirokov updated IGNITE-7586: Labels: iep-1 (was: ) > SQL COPY: add code examples > --- > > Key: IGNITE-7586 > URL: https://issues.apache.org/jira/browse/IGNITE-7586 > Project: Ignite > Issue Type: Improvement > Components: sql >Affects Versions: 2.4 >Reporter: Kirill Shirokov >Assignee: Kirill Shirokov >Priority: Major > Labels: iep-1 > Fix For: 2.4 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-7586) SQL COPY: add code examples
[ https://issues.apache.org/jira/browse/IGNITE-7586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Shirokov updated IGNITE-7586: Labels: (was: sql) > SQL COPY: add code examples > --- > > Key: IGNITE-7586 > URL: https://issues.apache.org/jira/browse/IGNITE-7586 > Project: Ignite > Issue Type: Improvement > Components: sql >Affects Versions: 2.4 >Reporter: Kirill Shirokov >Assignee: Kirill Shirokov >Priority: Major > Labels: iep-1 > Fix For: 2.4 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7586) SQL COPY: add code examples
[ https://issues.apache.org/jira/browse/IGNITE-7586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355447#comment-16355447 ] ASF GitHub Bot commented on IGNITE-7586: GitHub user gg-shq opened a pull request: https://github.com/apache/ignite/pull/3485 IGNITE-7586: Added COPY command into the JDBC example. You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-7586 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/3485.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3485 commit 530756813411d51e731ffd5f145c7c9f57adbb31 Author: gg-shqDate: 2018-02-07T13:38:59Z IGNITE-7586: Added COPY command into the JDBC example. > SQL COPY: add code examples > --- > > Key: IGNITE-7586 > URL: https://issues.apache.org/jira/browse/IGNITE-7586 > Project: Ignite > Issue Type: Improvement > Components: sql >Affects Versions: 2.4 >Reporter: Kirill Shirokov >Assignee: Kirill Shirokov >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-6630) Incorrect time units of average transaction commit/rollback duration cache metrics.
[ https://issues.apache.org/jira/browse/IGNITE-6630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355422#comment-16355422 ] Pavel Pereslegin commented on IGNITE-6630: -- [~NIzhikov], thank you, I updated the link. > Incorrect time units of average transaction commit/rollback duration cache > metrics. > --- > > Key: IGNITE-6630 > URL: https://issues.apache.org/jira/browse/IGNITE-6630 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.2 >Reporter: Pavel Pereslegin >Assignee: Pavel Pereslegin >Priority: Minor > Labels: metrics, newbie > Fix For: 2.5 > > > AverageTxCommitTime and AverageTxRollbackTime metrics in CacheMetrics counts > in milliseconds instead of microseconds as pointed in javadoc. > Simple junit reproducer: > {code:java} > public class CacheMetricsTxAvgTimeTest extends GridCommonAbstractTest { > /** */ > privateCacheConfiguration cacheConfiguration(String name) { > CacheConfiguration cacheConfiguration = new > CacheConfiguration<>(name); > cacheConfiguration.setCacheMode(CacheMode.PARTITIONED); > cacheConfiguration.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); > cacheConfiguration.setStatisticsEnabled(true); > return cacheConfiguration; > } > /** */ > public void testTxCommitDuration() throws Exception { > try ( Ignite node = startGrid(0)) { > IgniteCache
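The unit mismatch in IGNITE-6630 is simple arithmetic: if commit durations are accumulated in nanoseconds, the microsecond average promised by the javadoc requires dividing by 1 000, while dividing by 1 000 000 silently yields milliseconds. A minimal illustration (the accumulator names are hypothetical, not Ignite's actual `CacheMetricsImpl` fields):

```java
/** Illustrates the ns -> us conversion the metrics should perform. */
public class TxAvgTime {
    /** Average commit time in microseconds from a nanosecond total.
     *  Using 1_000_000 here instead would produce milliseconds,
     *  which is the reported bug. */
    public static float avgCommitTimeMicros(long totalCommitNanos, long commits) {
        if (commits == 0)
            return 0f;
        return (float) totalCommitNanos / commits / 1_000f; // ns -> us
    }

    public static void main(String[] args) {
        // 5 commits taking 10 ms in total => average 2 ms = 2000 us.
        System.out.println(avgCommitTimeMicros(10_000_000L, 5)); // prints 2000.0
    }
}
```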
[jira] [Commented] (IGNITE-7192) JDBC: support FQDN to multiple IPs during connection establishment
[ https://issues.apache.org/jira/browse/IGNITE-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355421#comment-16355421 ] Sergey Kalashnikov commented on IGNITE-7192: [~guseinov], looks good to me now. Thanks > JDBC: support FQDN to multiple IPs during connection establishment > -- > > Key: IGNITE-7192 > URL: https://issues.apache.org/jira/browse/IGNITE-7192 > Project: Ignite > Issue Type: Improvement > Components: jdbc >Affects Versions: 2.1 >Reporter: Alexey Popov >Assignee: Roman Guseinov >Priority: Major > Labels: pull-request-available > > Thin JDBC driver may have FQDN (host name) at a connection string. > Currently, it resolves this FQDN to one IP and tries to connect to this IP > only. > It is better to try to connect to multiple IPs one-by-one if DNS returns > multiple A-records (FQDN can be resolved to several IPs) until successful > connection. It could give a simple fallback option for the JDBC thin driver > users. > A similar functionality is already implemented in ODBC driver. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
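The fallback IGNITE-7192 asks for boils down to resolving all A-records for the FQDN and attempting each address until one connects. A generic sketch (the `tryConnect` predicate stands in for the driver's actual socket/handshake step, which is not shown in this thread):

```java
import java.util.List;
import java.util.function.Predicate;

/** One-by-one connection fallback over all addresses a host name
 *  resolves to, as proposed for the thin JDBC driver. */
public class MultiIpConnect {
    /** Tries each resolved address in order; returns the first one the
     *  connect attempt accepts, or null if every address fails. */
    public static <T> T connectAny(List<T> addrs, Predicate<T> tryConnect) {
        for (T addr : addrs)
            if (tryConnect.test(addr))
                return addr;
        return null;
    }

    public static void main(String[] args) throws Exception {
        // getAllByName returns every A-record, not just the first one.
        java.net.InetAddress[] all = java.net.InetAddress.getAllByName("localhost");
        // Simulate a cluster where only the last resolved IP is reachable.
        java.net.InetAddress ok = connectAny(List.of(all), a -> a.equals(all[all.length - 1]));
        System.out.println(ok != null); // prints true
    }
}
```

The ODBC driver already behaves this way, so the sketch mirrors that contract: success on the first reachable address, failure only after the whole list is exhausted.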
[jira] [Commented] (IGNITE-6625) JDBC thin: support SSL connection to Ignite node
[ https://issues.apache.org/jira/browse/IGNITE-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355412#comment-16355412 ] Sergey Kalashnikov commented on IGNITE-6625: [~tledkov-gridgain], the changes look good to me. Thank you. > JDBC thin: support SSL connection to Ignite node > > > Key: IGNITE-6625 > URL: https://issues.apache.org/jira/browse/IGNITE-6625 > Project: Ignite > Issue Type: Improvement > Components: jdbc >Affects Versions: 2.2 >Reporter: Taras Ledkov >Assignee: Taras Ledkov >Priority: Major > Fix For: 2.5 > > > SSL connection must be supported for JDBC thin driver. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
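From the user's side, the SSL support reviewed in IGNITE-6625 is expected to surface as thin-driver connection-string properties. The parameter names below reflect the patch under review and are an assumption here; they may differ in the released driver:

```
jdbc:ignite:thin://node1.example.com:10800?sslMode=require
    &sslClientCertificateKeyStoreUrl=client.jks
    &sslClientCertificateKeyStorePassword=123456
    &sslTrustCertificateKeyStoreUrl=trust.jks
    &sslTrustCertificateKeyStorePassword=123456
```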
[jira] [Assigned] (IGNITE-7586) SQL COPY: add code examples
[ https://issues.apache.org/jira/browse/IGNITE-7586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Shirokov reassigned IGNITE-7586: --- Assignee: Kirill Shirokov > SQL COPY: add code examples > --- > > Key: IGNITE-7586 > URL: https://issues.apache.org/jira/browse/IGNITE-7586 > Project: Ignite > Issue Type: Improvement > Components: sql >Affects Versions: 2.4 >Reporter: Kirill Shirokov >Assignee: Kirill Shirokov >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-6217) Add benchmark to compare JDBC drivers and native SQL execution
[ https://issues.apache.org/jira/browse/IGNITE-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355393#comment-16355393 ] ASF GitHub Bot commented on IGNITE-6217: Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/2558 > Add benchmark to compare JDBC drivers and native SQL execution > -- > > Key: IGNITE-6217 > URL: https://issues.apache.org/jira/browse/IGNITE-6217 > Project: Ignite > Issue Type: Task > Components: jdbc, sql, yardstick >Affects Versions: 2.1 >Reporter: Taras Ledkov >Assignee: Pavel Kuznetsov >Priority: Major > > We have to compare performance of the native SQL execution (via Ignite SQL > API), JDBC v2 driver, that uses Ignite client to connect to grid and JDBC > thin client. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7436) Username/password authentication for thin clients
[ https://issues.apache.org/jira/browse/IGNITE-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355370#comment-16355370 ] Taras Ledkov commented on IGNITE-7436: -- [~al.psc], [~skalashnikov], [~gvvinblade], [~vozerov], guys, I really need a *primary* review. Please focus on the {{IgniteAuthenticationProcessor}}, {{SqlParser}} and {{DdlStatementsProcessor}} changes, and disregard the debug print that has not yet been removed from the authentication processor. > Username/password authentication for thin clients > - > > Key: IGNITE-7436 > URL: https://issues.apache.org/jira/browse/IGNITE-7436 > Project: Ignite > Issue Type: Improvement > Components: jdbc, odbc, thin client >Affects Versions: 2.3 >Reporter: Taras Ledkov >Assignee: Taras Ledkov >Priority: Major > Fix For: 2.5 > > > This is an umbrella ticket to track all tasks related to thin client > authentication. > [Devlist > discussion|http://apache-ignite-developers.2346864.n4.nabble.com/Username-password-authentication-for-thin-clients-td26058.html]
[jira] [Commented] (IGNITE-7451) Make Linear SVM for multi-classification
[ https://issues.apache.org/jira/browse/IGNITE-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355369#comment-16355369 ] ASF GitHub Bot commented on IGNITE-7451: GitHub user zaleslaw opened a pull request: https://github.com/apache/ignite/pull/3484 IGNITE-7451: Make Linear SVM for multi-classification Added model, trainer and tests for them You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-7451 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/3484.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3484 commit bb2450fe2a69233bbff557710ca87ed09a5d95bd Author: Zinoviev AlexeyDate: 2018-02-02T14:22:06Z First version of model and trainer commit a2cc1938d2bc8c78341f67d02a01099e0f6a8293 Author: zaleslaw Date: 2018-02-06T14:23:57Z Merge branch 'master' into ignite-7451 commit 83d7488766db3afe8edd8f969d98c02131a79c24 Author: zaleslaw Date: 2018-02-06T15:53:17Z Added tests > Make Linear SVM for multi-classification > > > Key: IGNITE-7451 > URL: https://issues.apache.org/jira/browse/IGNITE-7451 > Project: Ignite > Issue Type: Sub-task > Components: ml >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Minor > > Compare and choose one of approaches _one-against-one or one-against-the rest_ > Read the paper [https://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.pdf] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
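Of the two approaches compared in the referenced paper, one-against-the-rest is the simpler to sketch: train one binary SVM per class and predict the class whose model reports the largest decision value. A toy illustration with plain linear decision functions, not Ignite ML's actual trainer types:

```java
/** One-vs-rest multi-classification: one linear decision function per
 *  class, prediction is the argmax of the per-class margins. */
public class OneVsRest {
    /** weights[c] and biases[c] are the c-th binary model; x is one sample. */
    public static int predict(double[][] weights, double[] biases, double[] x) {
        int best = 0;
        double bestVal = Double.NEGATIVE_INFINITY;
        for (int c = 0; c < weights.length; c++) {
            double v = biases[c];
            for (int j = 0; j < x.length; j++)
                v += weights[c][j] * x[j];     // linear margin w . x + b
            if (v > bestVal) {
                bestVal = v;
                best = c;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] w = {{1, 0}, {0, 1}, {-1, -1}};
        double[] b = {0, 0, 0};
        // Margins for x = (2, 1): class 0 -> 2, class 1 -> 1, class 2 -> -3.
        System.out.println(predict(w, b, new double[]{2, 1})); // prints 0
    }
}
```

The one-against-one alternative trains k(k-1)/2 pairwise models and predicts by majority vote, at the cost of many more trainers for large k; the paper compares the trade-offs empirically.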
[jira] [Commented] (IGNITE-7436) Username/password authentication for thin clients
[ https://issues.apache.org/jira/browse/IGNITE-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355358#comment-16355358 ] ASF GitHub Bot commented on IGNITE-7436: GitHub user tledkov-gridgain opened a pull request: https://github.com/apache/ignite/pull/3483 IGNITE-7436 Username/password authentication for thin clients You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-7436 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/3483.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3483 commit 48d383be2c86f632f4394f21681e6e20233c67fb Author: tledkov-gridgainDate: 2018-01-16T15:46:18Z IGNITE-7439: save the progress commit a6394d0dbe3be38e65025e3de1f5dbe0b0f0b3f9 Author: tledkov-gridgain Date: 2018-01-17T09:02:24Z IGNITE-7439: save the progress commit 5f816e0eeec35e3b714887c4342a4d6159e8ae30 Author: tledkov-gridgain Date: 2018-01-17T13:16:45Z Merge branch '_master' into ignite-7439 commit 2c78e7d1a9be9ad858c6369f5033a9236781c99f Author: tledkov-gridgain Date: 2018-01-18T13:21:13Z IGNITE-7439: save the progress commit b484b50865acf1b996e3d85713c2c7e6d246bbb5 Author: tledkov-gridgain Date: 2018-01-18T13:45:19Z Merge branch '_master' into ignite-7439 commit 4bcb6e4ea4ccba9eea2a06111302015ecaab7846 Author: tledkov-gridgain Date: 2018-01-19T12:08:29Z IGNITE-7439: save the progress commit 7957e19a95d8a3ab6046d38bd117f65834009985 Author: tledkov-gridgain Date: 2018-01-19T12:16:12Z Merge branch 'master' into ignite-7439 # Conflicts: # modules/core/src/main/java/org/apache/ignite/configuration/ClientConnectorConfiguration.java commit 1006b666bc5e570a20afd54d6ecf455183ce1dbb Author: tledkov-gridgain Date: 2018-01-19T16:52:22Z IGNITE-7439: save the progress commit 9e12b872177da6f9ea1d21b4464b22ae822b9c9a Author: tledkov-gridgain Date: 
2018-01-23T10:25:32Z IGNITE-7439: save the progress commit 7edc494c9899266143496e57ab9a4cdee6ef804f Author: tledkov-gridgain Date: 2018-01-23T10:57:03Z Merge branch 'master' into ignite-7439 # Conflicts: # modules/core/src/main/java/org/apache/ignite/internal/GridTopic.java # modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoMessageFactory.java commit bba2f71aa5c916c65bdb9ff315afeb3cfe0b8e57 Author: tledkov-gridgain Date: 2018-01-24T10:45:25Z IGNITE-7439: save the progress commit af83ecd0a2c44ac5d9d394b7b099d46eefd9b813 Author: tledkov-gridgain Date: 2018-01-25T14:21:27Z IGNITE-7439: save the progress commit 83809e9473f832fdc472bdceab8233a9824fbaac Author: tledkov-gridgain Date: 2018-01-25T14:21:36Z Merge branch '_master' into ignite-7439 commit 27f428b7a2d72d298745c828815fee3c1882968d Author: tledkov-gridgain Date: 2018-01-26T10:49:53Z Merge branch '_master' into ignite-7439 commit f15b405fc3bf9c62999661c627bd690519c78394 Author: tledkov-gridgain Date: 2018-01-29T09:09:27Z IGNITE-7439: save the progress commit 0087b2ce8c5887a301000bd1558bfdd7493930ff Author: tledkov-gridgain Date: 2018-01-29T15:24:20Z IGNITE-7439: save the progress commit c9a7ac9a23ab32b40ed380920304cbce9390e84d Author: tledkov-gridgain Date: 2018-01-30T09:07:33Z IGNITE-7557: save the progress commit d2e2bf96f50a75ee43d7c02a70f9363a95e4373b Author: tledkov-gridgain Date: 2018-01-30T09:42:55Z Merge branch '_master' into ignite-7439 commit ea897bc5756bcada73be703a538ce09901782230 Author: tledkov-gridgain Date: 2018-02-01T15:06:06Z IGNITE-7439: save the progress commit 664eb5aeb5525072b353ffc0dc2427ec517747a7 Author: tledkov-gridgain Date: 2018-02-01T15:08:53Z IGNITE-7439: save the progress commit 41c6cfef7227aa1b4092efc23bf888c3016638d2 Author: tledkov-gridgain Date: 2018-02-01T15:13:19Z Merge branch 'master' into ignite-7439 # Conflicts: # modules/core/src/main/java/org/apache/ignite/configuration/ClientConnectorConfiguration.java commit 
a52e46cca6a8d506206a76377f43f649737ea73d Author: tledkov-gridgain Date: 2018-02-01T15:17:43Z Merge branch 'ignite-7439' into ignite-7557 commit 24e0f3cda82d1386d3bdcecc72338a365f862102 Author: tledkov-gridgain Date: 2018-02-02T09:29:29Z IGNITE-7557: save the progress commit 0a233d05b8859674a17f4398e57e412fbb303aeb Author: tledkov-gridgain Date:
[jira] [Commented] (IGNITE-7538) Introduce new test project for Ignite 2.0+ with Java 8 / Java 9 compatibility
[ https://issues.apache.org/jira/browse/IGNITE-7538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355351#comment-16355351 ] ASF GitHub Bot commented on IGNITE-7538: GitHub user vveider opened a pull request: https://github.com/apache/ignite/pull/3482 IGNITE-7538 Introduce new test project for Ignite 2.0+ with Java 8 / Java 9 compatibility You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-7538 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/3482.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3482 commit f3b4e8bf9e65350d6c35b787e173f21d08794d79 Author: Ivanov PetrDate: 2018-02-06T08:22:23Z IGNITE-7538 Introduce new test project for Ignite 2.0+ with Java 8 / Java 9 compatibility * updated versions of flatten-maven-plugin and apache-rat-plugin for RAT licenses check task commit d72815bf86bbaf4bef53e8bba40d3beaf18611bb Author: Ivanov Petr Date: 2018-02-07T11:44:20Z IGNITE-7538 Introduce new test project for Ignite 2.0+ with Java 8 / Java 9 compatibility * update maven-javadoc-plugin version > Introduce new test project for Ignite 2.0+ with Java 8 / Java 9 compatibility > - > > Key: IGNITE-7538 > URL: https://issues.apache.org/jira/browse/IGNITE-7538 > Project: Ignite > Issue Type: Sub-task >Reporter: Peter Ivanov >Assignee: Peter Ivanov >Priority: Critical > > After IGNITE-7203 and IGNITE-6730 there are two separate test project at CI > which meet the problem of test configuration synchronization. It is necessary > to devise a solution for overcoming this issue. > Possible approaches: > # Single project for tests with run with different parameters. > Problems: > #* Test History will show history for all builds with any parameters > combination. > #* Mute test will mute test for all parameters configuration. 
> # Several projects (differentiated by parameter) with automated build configuration > synchronisation. > Problems: > #* Maintainability: how to seamlessly support build configuration > synchronisation. > Research into both approaches must be conducted.
[jira] [Commented] (IGNITE-7638) Page replacement process in PDS mode affect segment loaded pages table performance
[ https://issues.apache.org/jira/browse/IGNITE-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355349#comment-16355349 ] ASF GitHub Bot commented on IGNITE-7638: GitHub user dspavlov opened a pull request: https://github.com/apache/ignite/pull/3481 IGNITE-7638: Test created to reproduce FullPageIdTable You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-7638 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/3481.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3481 commit 90befa9d3bc89833eeed42a8814e8a7f51a11448 Author: dpavlovDate: 2018-01-25T18:11:07Z IGNITE-7533: Throttle writing threads according fsync progress and checkpoint write speed commit 74411e7aea9744df7f7656006807aa0403ae921f Author: dpavlov Date: 2018-01-29T15:51:23Z IGNITE-7533: Throttle writing threads according fsync progress and checkpoint write speed commit 4be02ec3444596ee0bc95bd55ece9ba741e729a1 Author: dpavlov Date: 2018-01-29T15:55:48Z Merge branch 'master' into ignite-7533 commit f5d383ddef1b2a66470ec110f781a98aa78c5d03 Author: dpavlov Date: 2018-01-29T17:08:35Z IGNITE-7533: Throttle writing threads according fsync progress and checkpoint write speed commit 7c0afa374907202e45d6dcbfae88af1c3a27687f Author: dpavlov Date: 2018-01-30T14:04:52Z IGNITE-7533: Option to enable old implementation of throttling commit 9f9c1e7955d894bbfd8a8572362d2c13177a60c6 Author: dpavlov Date: 2018-01-30T14:13:42Z IGNITE-7380: Flaky test reproduction commit 8d8aecd55a4d94c091e970ccf3bbbd72272cf325 Author: dpavlov Date: 2018-01-30T14:20:50Z IGNITE-7380: Flaky test reproduction commit cf9d42ba77133c4c6e37b1d8a7ac6d54054d1bc1 Author: dpavlov Date: 2018-01-30T14:40:26Z IGNITE-7533: Preserve order of writing in fsync commit b37f27275446a8cccafbd231c35e3605d9fd7089 
Author: dpavlov Date: 2018-01-30T14:43:10Z IGNITE-7380: Increase of timeout of checkpoint commit b05ef5dae3d4e5deef6482844989077aba6f1bf2 Author: dpavlov Date: 2018-01-30T16:23:51Z IGNITE-7533: Too much pages written case, no throttling in case too long wait. Added more delay in case low space left commit 62685bcb363add269930af99e59b74d396870b55 Author: dpavlov Date: 2018-01-30T16:37:35Z IGNITE-7380: Flaky test reproduction commit c7ba24580199a238506ea00176aaf7ae229aa135 Author: dpavlov Date: 2018-01-30T17:57:15Z IGNITE-7533: Sandbox test with progress gaps detection was added. commit d32654d902ca2fe0ec4e3fa5327afe84f949cdaf Author: dpavlov Date: 2018-01-31T15:52:12Z IGNITE-7175: fix compatible with speed based throttling commit 018ed3c0de21cbe1ddd9f7558a417369740bd2cc Author: dpavlov Date: 2018-01-31T17:28:23Z IGNITE-7533: recurrent warning of throttling if significant pressure. commit 687d1ffd3d7e6ccc9a0c609ffda6762e7707d788 Author: dpavlov Date: 2018-01-31T17:38:43Z IGNITE-7533: recurrent warning of throttling if significant pressure: cp pages added commit 94dac70231c52a915717ca444e7aba8e4b816003 Author: dpavlov Date: 2018-01-31T18:15:03Z IGNITE-7533: Test suite added commit 25f4774c1990cb956d2d01eb1c261da762a8798e Author: dpavlov Date: 2018-01-31T18:15:46Z IGNITE-7533: Test suite added commit c0a078d45caf08e5f4d9b28f901176155b26c8a4 Author: dpavlov Date: 2018-02-01T11:59:21Z Merge branch 'master' into ignite-7533 commit eab2b06c776e6e4a5f7ee6d1b8a9aa7596831660 Author: dpavlov Date: 2018-02-01T12:27:31Z IGNITE-7533: Message updated to be more clear commit 45845535cf42b2d86b7f2ff967713baf3c4c430b Author: dpavlov Date: 2018-02-01T17:34:33Z IGNITE-7533: data streamer test commit 8c78c888e565f90d60ed2cd7ba426128b22a9c8a Author: dpavlov Date: 2018-02-06T17:52:07Z IGNITE-7638: Problem demonstrated with FullPageIdTable commit aff846cd60fc8e3f4d901ed8e865709bcc81518b Author: dpavlov Date: 2018-02-07T10:27:19Z Merge branch 'master' into ignite-7638 # Conflicts: # 
modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/GridCacheDatabaseSharedManager.java # modules/core/src/test/java/org/apache/ignite/testsuites/IgnitePdsUnitTestSuite.java > Page replacement process in PDS mode affect segment loaded pages table > performance > -- > >
[jira] [Commented] (IGNITE-6917) SQL: implement COPY command for efficient data loading
[ https://issues.apache.org/jira/browse/IGNITE-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355336#comment-16355336 ] ASF GitHub Bot commented on IGNITE-6917: Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/3419 > SQL: implement COPY command for efficient data loading > -- > > Key: IGNITE-6917 > URL: https://issues.apache.org/jira/browse/IGNITE-6917 > Project: Ignite > Issue Type: New Feature > Components: sql >Affects Versions: 2.4 >Reporter: Vladimir Ozerov >Assignee: Kirill Shirokov >Priority: Major > Labels: iep-1 > Fix For: 2.4 > > > Inspired by Postgres [1] > Common use case - bulk data load through JDBC/ODBC interface. Currently it is > only possible to execute single commands one by one. We already can batch > them to improve performance, but there is still big room for improvement. > We should think of a completely new command - {{COPY}}. It will accept a file > (or input stream in general case) on the client side, then transfer data to > the cluster, and then execute update inside the cluster, e.g. through > streamer. > First of all we need to create quick and dirty prototype to assess potential > performance improvement. It speedup is confirmed, we should build base > implementation which will accept only files. But at the same time we should > understand how it will evolve in future: multiple file formats (probably > including Hadoop formarts, e.g. Parquet), escape characters, input streams, > etc.. > [1] [https://www.postgresql.org/docs/9.6/static/sql-copy.html] > h1. Proposed syntax > Curent implementation: > {noformat} > COPY > FROM "file.name" > INTO . > [(col-name, ...)] > FORMAT -- Only CSV format is supported in the current > release > [BATCH_SIZE ] > {noformat} > We may want to gradually add features to this command in future to have > something like this: > {noformat} > COPY > FROM "file.name"[CHARSET ""] > INTO . 
[CREATE [IF NOT EXISTS]] > [(col-name [] [NULLABLE] [ESCAPES], ...) [MATCH HEADER]] > FORMAT (csv|tsv|...) > -- CSV format options: > [FIELDSEP='column-separators-regexp'] > [LINESEP='row-separators-regexp'] > [QUOTE='quote-chars'] > [ESCAPE='escape-char'] > [NULL='null-sequence'] > [COMMENT='single-line-comment-start-char'] > [TRIM_LINES] > [IMPORT_EMPTY_LINES] > [CHARSET ""] > [ROWS -] > --or-- > [SKIP ROWS ] [MAX ROWS ] > [COLS -] > --or-- > [SKIP COLS ] [MAX COLS ] > [(MATCH | SKIP) HEADER] > [(REPLACE|IGNORE|ABORT ON [])) DUPLICATE KEYS] > [BATCH SIZE ( ROWS | [K|M|G|T|P])] > [COMPRESS "codec-name" [codec options]] > [LOCK (TABLE|ROWS)] > [NOLOGGING] > [BACKEND (DIRECT | STREAMER)] > {noformat} > h1. Implementation decisions and notes > h2. Parsing > * We support CSV format described in RFC 4180. > * Custom row and column separators, quoting characters are currently hardcoded > * Escape sequences, line comment characters are currently not supported > * We may want to support fixed-length formats (via format descriptors) in > future > * We may want to strip comments from lines (for example, starting with '#') > * We may want to allow user to either ignore empty lines or treat them as a > special case of record having all default values > * We may allow user to enable whitespace trimming from beginning and end of a > line > * We may want to allow user to specify error handling strategy: e.g., only > one quote character is present or escape sequence is invalid. > h2. File handling > * File character set to be supported in future > * Skipped/imported row number (or first/last line or skip header option), > skipped/imported column number (or first/last column): to be supported in > future > * Line start pattern (as in MySQL): no support planned > * We currently support only client-side import. No server-side file import. > * We may want to support client-side stdin import in future. 
> * We do not handle importing multiple files from single command > * We don't benefit from any kind of pre-sorting pre-partitioning data on > client side. > * We don't include any any metadata, such as line number from client side. > h3. Transferring data > * We send file data via batches. In future we will support batch size > (specified with rows per batch or data block size > per batch). > * We may want to implement data compression in future. > * We connect to single node in JDBC driver (no multi-node connections). > h3. Cache/tables/column handling > * We don't create table in the bulk load command > * We may want to have and option for reading header row, which contains > column names to match columns > * In future we may wish to support COLUMNS
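Matching the "current implementation" grammar quoted above, a minimal statement would look like this (table, column, and file names are made up for illustration):

```sql
COPY FROM "people.csv"
INTO PUBLIC.PERSON (id, name, city)
FORMAT CSV
```

Per the notes above, the file is read on the client side and shipped to the cluster in batches, so this runs over the thin JDBC connection against a pre-created table.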
[jira] [Commented] (IGNITE-7337) Spark Data Frames: support saving a data frame in Ignite
[ https://issues.apache.org/jira/browse/IGNITE-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355324#comment-16355324 ] Nikolay Izhikov commented on IGNITE-7337: - > Is there a way to use IgniteDataStreamer instead? Actually, there is a way! Thank you, Valentin, good catch! I've updated PR with IgniteDataStreamer usage. Also, jdbc driver has several parameters to configure internal streamer: {{streamingAllowOverwrite}}, {{streamingFlushFrequency}}, {{streamingPerNodeBufferSize}}, {{streamingPerNodeParallelOperations}}. Should we extend configuration of Data Frame Write in similar manner? https://apacheignite.readme.io/docs/jdbc-driver#streaming-mode > Spark Data Frames: support saving a data frame in Ignite > > > Key: IGNITE-7337 > URL: https://issues.apache.org/jira/browse/IGNITE-7337 > Project: Ignite > Issue Type: New Feature > Components: spark >Affects Versions: 2.3 >Reporter: Valentin Kulichenko >Assignee: Nikolay Izhikov >Priority: Major > Fix For: 2.5 > > > Once Ignite data source for Spark is implemented, we need to add an ability > to store a data frame in Ignite. Most likely if should be enough to provide > implementation for the following traits: > * {{InsertableRelation}} > * {{CreatableRelationProvider}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
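For reference, the four streamer parameters Nikolay lists are tuned through the JDBC client driver's connection string per the linked streaming-mode docs; the exact URL shape below is an assumption based on those docs:

```
jdbc:ignite:cfg://cache=person:streaming=true
    :streamingAllowOverwrite=true
    :streamingFlushFrequency=1000
    :streamingPerNodeBufferSize=1024
    :streamingPerNodeParallelOperations=4
    @file:///etc/ignite-jdbc.xml
```

Mirroring these as options on the Data Frame writer would give Spark users the same control over the underlying `IgniteDataStreamer`.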
[jira] [Commented] (IGNITE-6917) SQL: implement COPY command for efficient data loading
[ https://issues.apache.org/jira/browse/IGNITE-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355283#comment-16355283 ] Kirill Shirokov commented on IGNITE-6917: - [~al.psc], so, the bottom line is that you don't have any other review comments, so I've moved this issue to 'Patch Available' state. Please revert it back if you haven't finished with reviewing. > SQL: implement COPY command for efficient data loading > -- > > Key: IGNITE-6917 > URL: https://issues.apache.org/jira/browse/IGNITE-6917 > Project: Ignite > Issue Type: New Feature > Components: sql >Affects Versions: 2.4 >Reporter: Vladimir Ozerov >Assignee: Kirill Shirokov >Priority: Major > Labels: iep-1 > Fix For: 2.4 > > > Inspired by Postgres [1] > Common use case - bulk data load through JDBC/ODBC interface. Currently it is > only possible to execute single commands one by one. We already can batch > them to improve performance, but there is still big room for improvement. > We should think of a completely new command - {{COPY}}. It will accept a file > (or input stream in general case) on the client side, then transfer data to > the cluster, and then execute update inside the cluster, e.g. through > streamer. > First of all we need to create quick and dirty prototype to assess potential > performance improvement. It speedup is confirmed, we should build base > implementation which will accept only files. But at the same time we should > understand how it will evolve in future: multiple file formats (probably > including Hadoop formarts, e.g. Parquet), escape characters, input streams, > etc.. > [1] [https://www.postgresql.org/docs/9.6/static/sql-copy.html] > h1. Proposed syntax > Curent implementation: > {noformat} > COPY > FROM "file.name" > INTO . 
> [(col-name, ...)] > FORMAT -- Only CSV format is supported in the current > release > [BATCH_SIZE ] > {noformat} > We may want to gradually add features to this command in future to have > something like this: > {noformat} > COPY > FROM "file.name"[CHARSET ""] > INTO . [CREATE [IF NOT EXISTS]] > [(col-name [] [NULLABLE] [ESCAPES], ...) [MATCH HEADER]] > FORMAT (csv|tsv|...) > -- CSV format options: > [FIELDSEP='column-separators-regexp'] > [LINESEP='row-separators-regexp'] > [QUOTE='quote-chars'] > [ESCAPE='escape-char'] > [NULL='null-sequence'] > [COMMENT='single-line-comment-start-char'] > [TRIM_LINES] > [IMPORT_EMPTY_LINES] > [CHARSET ""] > [ROWS -] > --or-- > [SKIP ROWS ] [MAX ROWS ] > [COLS -] > --or-- > [SKIP COLS ] [MAX COLS ] > [(MATCH | SKIP) HEADER] > [(REPLACE|IGNORE|ABORT ON [])) DUPLICATE KEYS] > [BATCH SIZE ( ROWS | [K|M|G|T|P])] > [COMPRESS "codec-name" [codec options]] > [LOCK (TABLE|ROWS)] > [NOLOGGING] > [BACKEND (DIRECT | STREAMER)] > {noformat} > h1. Implementation decisions and notes > h2. Parsing > * We support CSV format described in RFC 4180. > * Custom row and column separators, quoting characters are currently hardcoded > * Escape sequences, line comment characters are currently not supported > * We may want to support fixed-length formats (via format descriptors) in > future > * We may want to strip comments from lines (for example, starting with '#') > * We may want to allow user to either ignore empty lines or treat them as a > special case of record having all default values > * We may allow user to enable whitespace trimming from beginning and end of a > line > * We may want to allow user to specify error handling strategy: e.g., only > one quote character is present or escape sequence is invalid. > h2. 
File handling > * File character set to be supported in future > * Skipped/imported row number (or first/last line or skip header option), > skipped/imported column number (or first/last column): to be supported in > future > * Line start pattern (as in MySQL): no support planned > * We currently support only client-side import. No server-side file import. > * We may want to support client-side stdin import in future. > * We do not handle importing multiple files from single command > * We don't benefit from any kind of pre-sorting pre-partitioning data on > client side. > * We don't include any any metadata, such as line number from client side. > h3. Transferring data > * We send file data via batches. In future we will support batch size > (specified with rows per batch or data block size > per batch). > * We may want to implement data compression in future. > * We connect to single node in JDBC driver (no multi-node connections). > h3. Cache/tables/column handling > * We don't create table in the bulk load command > * We may want to have and option for reading
[jira] [Comment Edited] (IGNITE-6917) SQL: implement COPY command for efficient data loading
[ https://issues.apache.org/jira/browse/IGNITE-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355283#comment-16355283 ] Kirill Shirokov edited comment on IGNITE-6917 at 2/7/18 10:29 AM: -- [~al.psc], so, the bottom line is that you don't have any other review comments, so I've moved this issue to the 'Patch Available' state. Please revert it back if you haven't finished reviewing. Relevant tests have passed (the failures in the build aren't related to this issue). https://ci.ignite.apache.org/viewLog.html?buildId=1075531=buildResultsDiv=IgniteTests24Java8_RunAllSql was (Author: kirill.shirokov): [~al.psc], so, the bottom line is that you don't have any other review comments, so I've moved this issue to the 'Patch Available' state. Please revert it back if you haven't finished reviewing. > SQL: implement COPY command for efficient data loading > -- > > Key: IGNITE-6917 > URL: https://issues.apache.org/jira/browse/IGNITE-6917 > Project: Ignite > Issue Type: New Feature > Components: sql >Affects Versions: 2.4 >Reporter: Vladimir Ozerov >Assignee: Kirill Shirokov >Priority: Major > Labels: iep-1 > Fix For: 2.4 > > > Inspired by Postgres [1] > Common use case - bulk data load through JDBC/ODBC interface. Currently it is > only possible to execute single commands one by one. We can already batch > them to improve performance, but there is still big room for improvement. > We should think of a completely new command - {{COPY}}. It will accept a file > (or an input stream in the general case) on the client side, then transfer data to > the cluster, and then execute the update inside the cluster, e.g. through > streamer. > First of all we need to create a quick and dirty prototype to assess the potential > performance improvement. If speedup is confirmed, we should build a base > implementation which will accept only files. 
But at the same time we should > understand how it will evolve in future: multiple file formats (probably > including Hadoop formats, e.g. Parquet), escape characters, input streams, > etc. > [1] [https://www.postgresql.org/docs/9.6/static/sql-copy.html] > h1. Proposed syntax > Current implementation: > {noformat} > COPY > FROM "file.name" > INTO <schema-name>.<table-name> > [(col-name, ...)] > FORMAT <format> -- Only CSV format is supported in the current > release > [BATCH_SIZE <batch-size>] > {noformat} > We may want to gradually add features to this command in future to have > something like this: > {noformat} > COPY > FROM "file.name" [CHARSET "<charset-name>"] > INTO <schema-name>.<table-name> [CREATE [IF NOT EXISTS]] > [(col-name [<data-type>] [NULLABLE] [ESCAPES], ...) [MATCH HEADER]] > FORMAT (csv|tsv|...) > -- CSV format options: > [FIELDSEP='column-separators-regexp'] > [LINESEP='row-separators-regexp'] > [QUOTE='quote-chars'] > [ESCAPE='escape-char'] > [NULL='null-sequence'] > [COMMENT='single-line-comment-start-char'] > [TRIM_LINES] > [IMPORT_EMPTY_LINES] > [CHARSET "<charset-name>"] > [ROWS <first>-<last>] > --or-- > [SKIP ROWS <n>] [MAX ROWS <n>] > [COLS <first>-<last>] > --or-- > [SKIP COLS <n>] [MAX COLS <n>] > [(MATCH | SKIP) HEADER] > [(REPLACE|IGNORE|ABORT ON [<max-errors>]) DUPLICATE KEYS] > [BATCH SIZE (<n> ROWS | <n>[K|M|G|T|P])] > [COMPRESS "codec-name" [codec options]] > [LOCK (TABLE|ROWS)] > [NOLOGGING] > [BACKEND (DIRECT | STREAMER)] > {noformat} > h1. Implementation decisions and notes > h2. Parsing > * We support the CSV format described in RFC 4180. 
> * Custom row and column separators and quoting characters are currently hardcoded > * Escape sequences and line comment characters are currently not supported > * We may want to support fixed-length formats (via format descriptors) in > future > * We may want to strip comments from lines (for example, starting with '#') > * We may want to allow the user to either ignore empty lines or treat them as a > special case of record having all default values > * We may allow the user to enable whitespace trimming from the beginning and end of a > line > * We may want to allow the user to specify an error handling strategy: e.g., only > one quote character is present or an escape sequence is invalid. > h2. File handling > * File character set to be supported in future > * Skipped/imported row number (or first/last line or skip header option), > skipped/imported column number (or first/last column): to be supported in > future > * Line start pattern (as in MySQL): no support planned > * We currently support only client-side import. No server-side file import. > * We may want to support client-side stdin import in future. > * We do not handle importing multiple files from a single command > * We don't benefit from any kind of pre-sorting or pre-partitioning of data on the > client side. > * We don't include any metadata, such as line number, from the client side.
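The parsing notes above describe RFC 4180 CSV with hardcoded separators and quote characters, and no escape sequences or line comments yet. Purely as an illustration of those simplified rules (this is not the actual Ignite parser; all names are hypothetical), a single-record field splitter could look like:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Simplified sketch of RFC 4180 field splitting for one record.
 * The separator (',') and quote ('"') are hardcoded, matching the
 * current-limitations list above; escapes and comments are not handled.
 */
public class CsvLineParser {
    public static List<String> split(String line) {
        List<String> fields = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean quoted = false;

        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);

            if (quoted) {
                if (c == '"') {
                    // RFC 4180: a doubled quote inside a quoted field is a literal quote.
                    if (i + 1 < line.length() && line.charAt(i + 1) == '"') {
                        cur.append('"');
                        i++;
                    }
                    else
                        quoted = false;
                }
                else
                    cur.append(c);
            }
            else if (c == '"')
                quoted = true;
            else if (c == ',') {
                fields.add(cur.toString());
                cur.setLength(0);
            }
            else
                cur.append(c);
        }

        fields.add(cur.toString());

        return fields;
    }
}
```

A real implementation would also need the error-handling strategy mentioned above (e.g. rejecting a record with an unterminated quote instead of silently accepting it).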
[jira] [Comment Edited] (IGNITE-7415) Ability to disable WAL (Documentation)
[ https://issues.apache.org/jira/browse/IGNITE-7415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355235#comment-16355235 ] Anton Vinogradov edited comment on IGNITE-7415 at 2/7/18 9:57 AM: -- SQL: Turning on {{ALTER TABLE tableName LOGGING}} Turning off {{ALTER TABLE tableName NOLOGGING}} Java: Current state {{ignite.cluster().isWalEnabled(cacheName);}} Turning on {{ignite.cluster().enableWal(cacheName);}} Turning off {{ignite.cluster().disableWal(cacheName);}} was (Author: avinogradov): SQL: Turning on {{ALTER TABLE cacheName LOGGING}} Turning off {{ALTER TABLE cacheName NOLOGGING}} Java: Current state {{ignite.cluster().isWalEnabled(cacheName);}} Turning on {{ignite.cluster().enableWal(cacheName);}} Turning off {{ignite.cluster().disableWal(cacheName);}} > Ability to disable WAL (Documentation) > -- > > Key: IGNITE-7415 > URL: https://issues.apache.org/jira/browse/IGNITE-7415 > Project: Ignite > Issue Type: Task > Components: documentation >Reporter: Anton Vinogradov >Assignee: Vladimir Ozerov >Priority: Major > Fix For: 2.4 > > > Need to update > [https://apacheignite.readme.io/docs/write-ahead-log#section-wal-modes] > [https://apacheignite.readme.io/docs/data-loading] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-7428) Thin client Java API - cache open/getName/getConfiguration/size
[ https://issues.apache.org/jira/browse/IGNITE-7428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kukushkin updated IGNITE-7428: - Summary: Thin client Java API - cache open/getName/getConfiguration/size (was: Thin client Java API - cache open/getName/getConfiguration/size/close) > Thin client Java API - cache open/getName/getConfiguration/size > --- > > Key: IGNITE-7428 > URL: https://issues.apache.org/jira/browse/IGNITE-7428 > Project: Ignite > Issue Type: Sub-task >Reporter: Alexey Kukushkin >Assignee: Alexey Kukushkin >Priority: Major > Labels: data, java, thin > > Implement cache open/getName/getConfiguration/close/size thin client Java API > including unit and system tests and samples. > Cache > getName(): String > getConfiguration(clazz): Configuration > close() > size(modes: CachePeekMode...) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-7428) Thin client Java API - cache open/getName/getConfiguration/size
[ https://issues.apache.org/jira/browse/IGNITE-7428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kukushkin updated IGNITE-7428: - Description: Implement cache open/getName/getConfiguration/close/size thin client Java API including unit and system tests and samples. Cache getName(): String getConfiguration(clazz): Configuration size(modes: CachePeekMode...) was: Implement cache open/getName/getConfiguration/close/size thin client Java API including unit and system tests and samples. Cache getName(): String getConfiguration(clazz): Configuration close() size(modes: CachePeekMode...) > Thin client Java API - cache open/getName/getConfiguration/size > --- > > Key: IGNITE-7428 > URL: https://issues.apache.org/jira/browse/IGNITE-7428 > Project: Ignite > Issue Type: Sub-task >Reporter: Alexey Kukushkin >Assignee: Alexey Kukushkin >Priority: Major > Labels: data, java, thin > > Implement cache open/getName/getConfiguration/close/size thin client Java API > including unit and system tests and samples. > Cache > getName(): String > getConfiguration(clazz): Configuration > size(modes: CachePeekMode...) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7415) Ability to disable WAL (Documentation)
[ https://issues.apache.org/jira/browse/IGNITE-7415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355235#comment-16355235 ] Anton Vinogradov commented on IGNITE-7415: -- SQL: Turning on {{ALTER TABLE cacheName LOGGING}} Turning off {{ALTER TABLE cacheName NOLOGGING}} Java: Current state {{ignite.cluster().isWalEnabled(cacheName);}} Turning on {{ignite.cluster().enableWal(cacheName);}} Turning off {{ignite.cluster().disableWal(cacheName);}} > Ability to disable WAL (Documentation) > -- > > Key: IGNITE-7415 > URL: https://issues.apache.org/jira/browse/IGNITE-7415 > Project: Ignite > Issue Type: Task > Components: documentation >Reporter: Anton Vinogradov >Assignee: Vladimir Ozerov >Priority: Major > Fix For: 2.4 > > > Need to update > [https://apacheignite.readme.io/docs/write-ahead-log#section-wal-modes] > [https://apacheignite.readme.io/docs/data-loading] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Issue Comment Deleted] (IGNITE-7337) Spark Data Frames: support saving a data frame in Ignite
[ https://issues.apache.org/jira/browse/IGNITE-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikolay Izhikov updated IGNITE-7337: Comment: was deleted (was: [~vkulichenko] > you're doing single inserts one by one when saving the data, this will not > perform well. Agreed. Is there any way to do some kind of batch data loads? For example, JdbcPreparedStatement doesn't support batches, https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/jdbc/JdbcPreparedStatement.java#L196 > Is there a way to use IgniteDataStreamer instead? I looked into the Streamer API and can't see how it can be used for batch loading. Can you give me some ideas or examples of how to use it to boost the performance of SQL inserts?) > Spark Data Frames: support saving a data frame in Ignite > > > Key: IGNITE-7337 > URL: https://issues.apache.org/jira/browse/IGNITE-7337 > Project: Ignite > Issue Type: New Feature > Components: spark >Affects Versions: 2.3 >Reporter: Valentin Kulichenko >Assignee: Nikolay Izhikov >Priority: Major > Fix For: 2.5 > > > Once Ignite data source for Spark is implemented, we need to add an ability > to store a data frame in Ignite. Most likely it should be enough to provide > an implementation for the following traits: > * {{InsertableRelation}} > * {{CreatableRelationProvider}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7643) Broken javadoc in partitioned dataset
[ https://issues.apache.org/jira/browse/IGNITE-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355228#comment-16355228 ] ASF GitHub Bot commented on IGNITE-7643: Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/3480 > Broken javadoc in partitioned dataset > -- > > Key: IGNITE-7643 > URL: https://issues.apache.org/jira/browse/IGNITE-7643 > Project: Ignite > Issue Type: Task > Components: ml >Affects Versions: 2.5 >Reporter: Yury Babak >Assignee: Yury Babak >Priority: Major > Labels: javadoc > Fix For: 2.5 > > > [22:25:12][Step 7/7] [WARNING] Javadoc Warnings > [22:25:12][Step 7/7] [WARNING] > /data/teamcity/work/bd85361428dcdb1/examples/src/main/java/org/apache/ignite/examples/ml/dataset/AlgorithmSpecificDatasetExample.java:51: > warning - Tag @link: reference not found: AlgorithmSpecificDataset > [22:25:12][Step 7/7] [WARNING] > /data/teamcity/work/bd85361428dcdb1/examples/src/main/java/org/apache/ignite/examples/ml/dataset/AlgorithmSpecificDatasetExample.java:51: > warning - Tag @link: reference not found: AlgorithmSpecificPartitionContext -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-7428) Thin client Java API - cache open/getName/getConfiguration/size/close
[ https://issues.apache.org/jira/browse/IGNITE-7428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kukushkin updated IGNITE-7428: - Environment: (was: Implement cache open/getName/getConfiguration/close/size thin client Java API including unit and system tests and samples. Cache getName(): String getConfiguration(clazz): Configuration close() size(modes: CachePeekMode...)) > Thin client Java API - cache open/getName/getConfiguration/size/close > - > > Key: IGNITE-7428 > URL: https://issues.apache.org/jira/browse/IGNITE-7428 > Project: Ignite > Issue Type: Sub-task >Reporter: Alexey Kukushkin >Assignee: Alexey Kukushkin >Priority: Major > Labels: data, java, thin > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-7428) Thin client Java API - cache open/getName/getConfiguration/size/close
[ https://issues.apache.org/jira/browse/IGNITE-7428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kukushkin updated IGNITE-7428: - Description: Implement cache open/getName/getConfiguration/close/size thin client Java API including unit and system tests and samples. Cache getName(): String getConfiguration(clazz): Configuration close() size(modes: CachePeekMode...) > Thin client Java API - cache open/getName/getConfiguration/size/close > - > > Key: IGNITE-7428 > URL: https://issues.apache.org/jira/browse/IGNITE-7428 > Project: Ignite > Issue Type: Sub-task >Reporter: Alexey Kukushkin >Assignee: Alexey Kukushkin >Priority: Major > Labels: data, java, thin > > Implement cache open/getName/getConfiguration/close/size thin client Java API > including unit and system tests and samples. > Cache > getName(): String > getConfiguration(clazz): Configuration > close() > size(modes: CachePeekMode...) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-7428) Thin client Java API - cache open/getName/getConfiguration/size/close
[ https://issues.apache.org/jira/browse/IGNITE-7428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kukushkin reassigned IGNITE-7428: Assignee: Alexey Kukushkin > Thin client Java API - cache open/getName/getConfiguration/size/close > - > > Key: IGNITE-7428 > URL: https://issues.apache.org/jira/browse/IGNITE-7428 > Project: Ignite > Issue Type: Sub-task > Environment: Implement cache open/getName/getConfiguration/close/size > thin client Java API including unit and system tests and samples. > Cache > getName(): String > getConfiguration(clazz): Configuration > close() > size(modes: CachePeekMode...) >Reporter: Alexey Kukushkin >Assignee: Alexey Kukushkin >Priority: Major > Labels: data, java, thin > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7337) Spark Data Frames: support saving a data frame in Ignite
[ https://issues.apache.org/jira/browse/IGNITE-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355213#comment-16355213 ] Nikolay Izhikov commented on IGNITE-7337: - [~vkulichenko] > you're doing single inserts one by one when saving the data, this will not > perform well. Agreed. Is there any way to do some kind of batch data loads? For example, JdbcPreparedStatement doesn't support batches, https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/jdbc/JdbcPreparedStatement.java#L196 > Is there a way to use IgniteDataStreamer instead? I looked into the Streamer API and can't see how it can be used for batch loading. Can you give me some ideas or examples of how to use it to boost the performance of SQL inserts? > Spark Data Frames: support saving a data frame in Ignite > > > Key: IGNITE-7337 > URL: https://issues.apache.org/jira/browse/IGNITE-7337 > Project: Ignite > Issue Type: New Feature > Components: spark >Affects Versions: 2.3 >Reporter: Valentin Kulichenko >Assignee: Nikolay Izhikov >Priority: Major > Fix For: 2.5 > > > Once Ignite data source for Spark is implemented, we need to add an ability > to store a data frame in Ignite. Most likely it should be enough to provide > an implementation for the following traits: > * {{InsertableRelation}} > * {{CreatableRelationProvider}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
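On the batch-loading question in the comment above: when a JDBC prepared statement offers no addBatch support, one common workaround is to build a single multi-row INSERT and bind all parameters in one go. The sketch below is a hypothetical illustration only (not Ignite API; it assumes the target SQL dialect accepts multi-row VALUES lists):

```java
/** Hypothetical sketch: build a multi-row INSERT so many rows travel in one statement. */
public class MultiRowInsert {
    /**
     * Returns e.g. "INSERT INTO person (id, name) VALUES (?, ?), (?, ?)"
     * for table "person", columns {id, name} and rowCnt = 2.
     */
    public static String build(String table, String[] cols, int rowCnt) {
        StringBuilder sql = new StringBuilder("INSERT INTO ").append(table)
            .append(" (").append(String.join(", ", cols)).append(") VALUES ");

        // One "(?, ..., ?)" group per row, with one placeholder per column.
        String row = "(" + "?, ".repeat(cols.length - 1) + "?)";

        for (int i = 0; i < rowCnt; i++) {
            if (i > 0)
                sql.append(", ");
            sql.append(row);
        }

        return sql.toString();
    }
}
```

The caller would then set rowCnt * cols.length parameters on the prepared statement before executing it once, amortizing the per-statement round trip across many rows.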
[jira] [Resolved] (IGNITE-7429) Thin client Java API - cache put/get/contains
[ https://issues.apache.org/jira/browse/IGNITE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kukushkin resolved IGNITE-7429. -- Resolution: Fixed > Thin client Java API - cache put/get/contains > - > > Key: IGNITE-7429 > URL: https://issues.apache.org/jira/browse/IGNITE-7429 > Project: Ignite > Issue Type: Sub-task > Environment: Implement cache put/get/contains thin client Java API > including unit and system tests and samples. > Cache > put(key, val) > get(key): V > containsKey(key): boolean > >Reporter: Alexey Kukushkin >Assignee: Alexey Kukushkin >Priority: Major > Labels: data, java, thin > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-6113) Partition eviction prevents exchange from completion
[ https://issues.apache.org/jira/browse/IGNITE-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Kovalenko reassigned IGNITE-6113: --- Assignee: Alexey Goncharuk (was: Pavel Kovalenko) Ready to review (coupled with [~ilantukh] rebalance changes): TC: https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8_IgniteTests24Java8=pull%2F3445%2Fhead PR: https://github.com/apache/ignite/pull/3445 > Partition eviction prevents exchange from completion > > > Key: IGNITE-6113 > URL: https://issues.apache.org/jira/browse/IGNITE-6113 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.1 >Reporter: Vladislav Pyatkov >Assignee: Alexey Goncharuk >Priority: Major > > I have waited for 3 hours for completion without any success. > exchange-worker is blocked. > {noformat} > "exchange-worker-#92%DPL_GRID%grid554.ca.sbrf.ru%" #173 prio=5 os_prio=0 > tid=0x7f0835c2e000 nid=0xb907 runnable [0x7e74ab1d] >java.lang.Thread.State: TIMED_WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x7efee630a7c0> (a > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition$1) > at > java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:189) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:139) > at > org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.assign(GridDhtPreloader.java:340) > at > org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1801) > at > 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > at java.lang.Thread.run(Thread.java:748) >Locked ownable synchronizers: > - None > {noformat} > {noformat} > "sys-#124%DPL_GRID%grid554.ca.sbrf.ru%" #278 prio=5 os_prio=0 > tid=0x7e731c02d000 nid=0xbf4d runnable [0x7e734e7f7000] >java.lang.Thread.State: RUNNABLE > at sun.nio.ch.FileDispatcherImpl.write0(Native Method) > at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60) > at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) > at sun.nio.ch.IOUtil.write(IOUtil.java:51) > at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211) > - locked <0x7f056161bf88> (a java.lang.Object) > at > org.gridgain.grid.cache.db.wal.FileWriteAheadLogManager$FileWriteHandle.writeBuffer(FileWriteAheadLogManager.java:1829) > at > org.gridgain.grid.cache.db.wal.FileWriteAheadLogManager$FileWriteHandle.flush(FileWriteAheadLogManager.java:1572) > at > org.gridgain.grid.cache.db.wal.FileWriteAheadLogManager$FileWriteHandle.addRecord(FileWriteAheadLogManager.java:1421) > at > org.gridgain.grid.cache.db.wal.FileWriteAheadLogManager$FileWriteHandle.access$800(FileWriteAheadLogManager.java:1331) > at > org.gridgain.grid.cache.db.wal.FileWriteAheadLogManager.log(FileWriteAheadLogManager.java:339) > at > org.gridgain.grid.internal.processors.cache.database.pagemem.PageMemoryImpl.beforeReleaseWrite(PageMemoryImpl.java:1287) > at > org.gridgain.grid.internal.processors.cache.database.pagemem.PageMemoryImpl.writeUnlockPage(PageMemoryImpl.java:1142) > at > org.gridgain.grid.internal.processors.cache.database.pagemem.PageImpl.releaseWrite(PageImpl.java:167) > at > org.apache.ignite.internal.processors.cache.database.tree.util.PageHandler.writeUnlock(PageHandler.java:193) > at > org.apache.ignite.internal.processors.cache.database.tree.util.PageHandler.writePage(PageHandler.java:242) > at > org.apache.ignite.internal.processors.cache.database.tree.util.PageHandler.writePage(PageHandler.java:119) > at > 
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree$Remove.doRemoveFromLeaf(BPlusTree.java:2886) > at > org.apache.ignite.internal.processors.cache.database.tree.BPlusTree$Remove.removeFromLeaf(BPlusTree.java:2865) > at > org.apache.ignite.internal.processors.cache.database.tree.BPlusTree$Remove.access$6900(BPlusTree.java:2515) > at > org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.removeDown(BPlusTree.java:1607) > at >
[jira] [Commented] (IGNITE-7643) Broken javadoc in partitioned dataset
[ https://issues.apache.org/jira/browse/IGNITE-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355205#comment-16355205 ] Oleg Ignatenko commented on IGNITE-7643: changes made in PR #3480 look good to me, recommend merge > Broken javadoc in partitioned dataset > -- > > Key: IGNITE-7643 > URL: https://issues.apache.org/jira/browse/IGNITE-7643 > Project: Ignite > Issue Type: Task > Components: ml >Affects Versions: 2.5 >Reporter: Yury Babak >Assignee: Yury Babak >Priority: Major > Labels: javadoc > Fix For: 2.5 > > > [22:25:12][Step 7/7] [WARNING] Javadoc Warnings > [22:25:12][Step 7/7] [WARNING] > /data/teamcity/work/bd85361428dcdb1/examples/src/main/java/org/apache/ignite/examples/ml/dataset/AlgorithmSpecificDatasetExample.java:51: > warning - Tag @link: reference not found: AlgorithmSpecificDataset > [22:25:12][Step 7/7] [WARNING] > /data/teamcity/work/bd85361428dcdb1/examples/src/main/java/org/apache/ignite/examples/ml/dataset/AlgorithmSpecificDatasetExample.java:51: > warning - Tag @link: reference not found: AlgorithmSpecificPartitionContext -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (IGNITE-7578) Web console: Actualize configuration of ClientConnectorConfiguration
[ https://issues.apache.org/jira/browse/IGNITE-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355201#comment-16355201 ] Vasiliy Sisko edited comment on IGNITE-7578 at 2/7/18 9:28 AM: --- Implemented the possibility to configure missing properties of the client connector configuration. Implemented showing of missing properties in Visor CMD. was (Author: vsisko): Implemented the possibility to configure missing properties of the client connector configuration. > Web console: Actualize configuration of ClientConnectorConfiguration > > > Key: IGNITE-7578 > URL: https://issues.apache.org/jira/browse/IGNITE-7578 > Project: Ignite > Issue Type: Bug >Reporter: Vasiliy Sisko >Assignee: Pavel Konstantinov >Priority: Major > Fix For: 2.5 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-7578) Web console: Actualize configuration of ClientConnectorConfiguration
[ https://issues.apache.org/jira/browse/IGNITE-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vasiliy Sisko reassigned IGNITE-7578: - Assignee: Pavel Konstantinov (was: Vasiliy Sisko) > Web console: Actualize configuration of ClientConnectorConfiguration > > > Key: IGNITE-7578 > URL: https://issues.apache.org/jira/browse/IGNITE-7578 > Project: Ignite > Issue Type: Bug >Reporter: Vasiliy Sisko >Assignee: Pavel Konstantinov >Priority: Major > Fix For: 2.5 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7578) Web console: Actualize configuration of ClientConnectorConfiguration
[ https://issues.apache.org/jira/browse/IGNITE-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355201#comment-16355201 ] Vasiliy Sisko commented on IGNITE-7578: --- Implemented the possibility to configure missing properties of the client connector configuration. > Web console: Actualize configuration of ClientConnectorConfiguration > > > Key: IGNITE-7578 > URL: https://issues.apache.org/jira/browse/IGNITE-7578 > Project: Ignite > Issue Type: Bug >Reporter: Vasiliy Sisko >Assignee: Vasiliy Sisko >Priority: Major > Fix For: 2.5 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-7643) Broken javadoc in partitioned dataset
[ https://issues.apache.org/jira/browse/IGNITE-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yury Babak updated IGNITE-7643: --- Labels: javadoc (was: ) > Broken javadoc in partitioned dataset > -- > > Key: IGNITE-7643 > URL: https://issues.apache.org/jira/browse/IGNITE-7643 > Project: Ignite > Issue Type: Task > Components: ml >Affects Versions: 2.5 >Reporter: Yury Babak >Assignee: Yury Babak >Priority: Major > Labels: javadoc > Fix For: 2.5 > > > [22:25:12][Step 7/7] [WARNING] Javadoc Warnings > [22:25:12][Step 7/7] [WARNING] > /data/teamcity/work/bd85361428dcdb1/examples/src/main/java/org/apache/ignite/examples/ml/dataset/AlgorithmSpecificDatasetExample.java:51: > warning - Tag @link: reference not found: AlgorithmSpecificDataset > [22:25:12][Step 7/7] [WARNING] > /data/teamcity/work/bd85361428dcdb1/examples/src/main/java/org/apache/ignite/examples/ml/dataset/AlgorithmSpecificDatasetExample.java:51: > warning - Tag @link: reference not found: AlgorithmSpecificPartitionContext -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-6625) JDBC thin: support SSL connection to Ignite node
[ https://issues.apache.org/jira/browse/IGNITE-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355181#comment-16355181 ] Taras Ledkov commented on IGNITE-6625: -- [~skalashnikov], [~gvvinblade], the comments are fixed. Please review the changes. > JDBC thin: support SSL connection to Ignite node > > > Key: IGNITE-6625 > URL: https://issues.apache.org/jira/browse/IGNITE-6625 > Project: Ignite > Issue Type: Improvement > Components: jdbc >Affects Versions: 2.2 >Reporter: Taras Ledkov >Assignee: Taras Ledkov >Priority: Major > Fix For: 2.5 > > > SSL connection must be supported for JDBC thin driver. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7643) Broken javadoc in partitioned dataset
[ https://issues.apache.org/jira/browse/IGNITE-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355131#comment-16355131 ] ASF GitHub Bot commented on IGNITE-7643: GitHub user ybabak opened a pull request: https://github.com/apache/ignite/pull/3480 IGNITE-7643: broken javadoc. fixed You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-7643 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/3480.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3480 commit 075c51a98ca2bcdacd81d4055e4ccb23d4f52f76 Author: YuriBabakDate: 2018-02-07T08:30:33Z IGNITE-7643: broken javadoc. fixed > Broken javadoc in partitioned dataset > -- > > Key: IGNITE-7643 > URL: https://issues.apache.org/jira/browse/IGNITE-7643 > Project: Ignite > Issue Type: Task > Components: ml >Affects Versions: 2.5 >Reporter: Yury Babak >Assignee: Yury Babak >Priority: Major > Fix For: 2.5 > > > [22:25:12][Step 7/7] [WARNING] Javadoc Warnings > [22:25:12][Step 7/7] [WARNING] > /data/teamcity/work/bd85361428dcdb1/examples/src/main/java/org/apache/ignite/examples/ml/dataset/AlgorithmSpecificDatasetExample.java:51: > warning - Tag @link: reference not found: AlgorithmSpecificDataset > [22:25:12][Step 7/7] [WARNING] > /data/teamcity/work/bd85361428dcdb1/examples/src/main/java/org/apache/ignite/examples/ml/dataset/AlgorithmSpecificDatasetExample.java:51: > warning - Tag @link: reference not found: AlgorithmSpecificPartitionContext -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-7263) Daemon-mode Ignite node should not open client port (10800)
[ https://issues.apache.org/jira/browse/IGNITE-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Vinokurov reassigned IGNITE-7263: --- Assignee: Pavel Vinokurov > Daemon-mode Ignite node should not open client port (10800) > -- > > Key: IGNITE-7263 > URL: https://issues.apache.org/jira/browse/IGNITE-7263 > Project: Ignite > Issue Type: Bug > Components: visor >Affects Versions: 2.1 >Reporter: Alexey Popov >Assignee: Pavel Vinokurov >Priority: Minor > Labels: core > > When I run a Visor console with the default configuration file, it opens the default > port (10800) for ODBC driver connections (and for thin JDBC, and for the new > "thin" client). > Then I run several Ignite nodes. > So after that, the ODBC driver with default settings goes directly to the Visor > (daemon-mode Ignite) and is not able to get any data (a daemon-mode Ignite node > does not keep any data). > It is better to avoid such a situation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-6994) Need to document PartitionLossPolicy
[ https://issues.apache.org/jira/browse/IGNITE-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355102#comment-16355102 ] Sergey Puchnin commented on IGNITE-6994: The documentation is updated; please proofread it: https://apacheignite.readme.io/v2.3/docs/cache-modes-24#partition-loss-policies > Need to document PartitionLossPolicy > > > Key: IGNITE-6994 > URL: https://issues.apache.org/jira/browse/IGNITE-6994 > Project: Ignite > Issue Type: Task > Components: documentation >Affects Versions: 2.3 >Reporter: Valentin Kulichenko >Assignee: Sergey Puchnin >Priority: Critical > Labels: documentation > Fix For: 2.4 > > > Since 2.0 we have a feature that makes cache(s) unavailable in case of data > loss; exact behavior is controlled by {{PartitionLossPolicy}}: > [https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/PartitionLossPolicy.html] > However, there is no mention of this in the documentation. Need to > provide an explanation of how and when it should be used and provide > configuration examples. > The documentation has to address questions and misunderstandings asked in > these discussions: > * > [http://apache-ignite-developers.2346864.n4.nabble.com/Partition-loss-policy-how-to-use-td25341.html] > * > [http://apache-ignite-developers.2346864.n4.nabble.com/Partition-loss-policy-to-disable-cache-completely-td26212.html] > Improve the JavaDoc too wherever possible. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
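For the configuration examples the issue above asks for, a minimal Spring XML sketch could look like the following (the cache name is arbitrary; it assumes the standard {{partitionLossPolicy}} property of {{CacheConfiguration}} and one of the documented enum values):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- Fail writes to lost partitions but keep safe reads until the lost state is reset. -->
    <property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
</bean>
```

The same setting is available programmatically via {{CacheConfiguration#setPartitionLossPolicy}}.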
[jira] [Assigned] (IGNITE-6994) Need to document PartitionLossPolicy
[ https://issues.apache.org/jira/browse/IGNITE-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Puchnin reassigned IGNITE-6994: -- Assignee: Denis Magda (was: Sergey Puchnin) > Need to document PartitionLossPolicy > > > Key: IGNITE-6994 > URL: https://issues.apache.org/jira/browse/IGNITE-6994 > Project: Ignite > Issue Type: Task > Components: documentation >Affects Versions: 2.3 >Reporter: Valentin Kulichenko >Assignee: Denis Magda >Priority: Critical > Labels: documentation > Fix For: 2.4 > > > Since 2.0 we have a feature that makes cache(s) unavailable in case of data > loss; exact behavior is controlled by {{PartitionLossPolicy}}: > [https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/PartitionLossPolicy.html] > However, there is no mention of this in the documentation. Need to > provide an explanation of how and when it should be used and provide > configuration examples. > The documentation has to address questions and misunderstandings asked in > these discussions: > * > [http://apache-ignite-developers.2346864.n4.nabble.com/Partition-loss-policy-how-to-use-td25341.html] > * > [http://apache-ignite-developers.2346864.n4.nabble.com/Partition-loss-policy-to-disable-cache-completely-td26212.html] > Improve the JavaDoc too wherever possible. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7643) Broken javadoc in partitioned dataset
Yury Babak created IGNITE-7643:
----------------------------------

             Summary: Broken javadoc in partitioned dataset
                 Key: IGNITE-7643
                 URL: https://issues.apache.org/jira/browse/IGNITE-7643
             Project: Ignite
          Issue Type: Task
          Components: ml
    Affects Versions: 2.5
            Reporter: Yury Babak
            Assignee: Yury Babak
             Fix For: 2.5

[22:25:12][Step 7/7] [WARNING] Javadoc Warnings
[22:25:12][Step 7/7] [WARNING] /data/teamcity/work/bd85361428dcdb1/examples/src/main/java/org/apache/ignite/examples/ml/dataset/AlgorithmSpecificDatasetExample.java:51: warning - Tag @link: reference not found: AlgorithmSpecificDataset
[22:25:12][Step 7/7] [WARNING] /data/teamcity/work/bd85361428dcdb1/examples/src/main/java/org/apache/ignite/examples/ml/dataset/AlgorithmSpecificDatasetExample.java:51: warning - Tag @link: reference not found: AlgorithmSpecificPartitionContext
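A "Tag @link: reference not found" warning usually means the linked class is a nested or out-of-scope type referenced by its bare name. A hedged sketch of the usual fix, qualifying the nested classes against their enclosing class (the class names mirror the warnings, but the exact nesting in the real example file is an assumption):

```java
/**
 * Example of building an algorithm-specific dataset; see
 * {@link AlgorithmSpecificDatasetExample.AlgorithmSpecificDataset} and
 * {@link AlgorithmSpecificDatasetExample.AlgorithmSpecificPartitionContext}.
 * (A bare "@link AlgorithmSpecificDataset" would not resolve from here,
 * which is exactly the warning javadoc emits.)
 */
class AlgorithmSpecificDatasetExample {
    /** Hypothetical nested dataset wrapper the javadoc can now resolve. */
    static class AlgorithmSpecificDataset { }

    /** Hypothetical nested partition context the javadoc can now resolve. */
    static class AlgorithmSpecificPartitionContext { }
}
```

Alternatively, the nested class can be imported directly so the bare name resolves; either change silences the warning without altering behavior.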
[jira] [Commented] (IGNITE-7639) NullPointerException in publicApiActiveState
[ https://issues.apache.org/jira/browse/IGNITE-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355101#comment-16355101 ]

Alexey Goncharuk commented on IGNITE-7639:
------------------------------------------

Branch: ignite-7639
TC run: https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8_IgniteTests24Java8=ignite-7639

> NullPointerException in publicApiActiveState
> --------------------------------------------
>
>                 Key: IGNITE-7639
>                 URL: https://issues.apache.org/jira/browse/IGNITE-7639
>             Project: Ignite
>          Issue Type: Bug
>          Components: cache
>    Affects Versions: 2.4
>            Reporter: Alexey Goncharuk
>            Assignee: Alexey Goncharuk
>            Priority: Major
>             Fix For: 2.5
>
> This exception is observed in the test:
> {code}
> [2018-02-05 19:50:38,330][ERROR][main][root] Test failed.
> java.lang.NullPointerException
>     at org.apache.ignite.internal.processors.cluster.DiscoveryDataClusterState.activeStateChanging(DiscoveryDataClusterState.java:159)
>     at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.publicApiActiveState(GridClusterStateProcessor.java:182)
>     at org.apache.ignite.internal.processors.cache.IgniteClusterActivateDeactivateTest.clientReconnectClusterDeactivated(IgniteClusterActivateDeactivateTest.java:831)
>     at org.apache.ignite.internal.processors.cache.IgniteClusterActivateDeactivateTest.testClientReconnectClusterDeactivateInProgress(IgniteClusterActivateDeactivateTest.java:772)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at junit.framework.TestCase.runTest(TestCase.java:176)
>     at org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2001)
>     at org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:133)
>     at org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1916)
>     at java.lang.Thread.run(Thread.java:745)
> {code}
> The reason is that cluster state is transferred to a joining node and
> prevState is null in this case.
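Given the stated cause (prevState is null when the state is transferred to a joining node), the shape of the fix is presumably a null guard in activeStateChanging. A hypothetical sketch of that pattern, not the actual DiscoveryDataClusterState code; the field names and the enum are assumptions modeled on the stack trace:

```java
// Hypothetical model of DiscoveryDataClusterState: on a joining node the
// cluster state arrives without a previous state, so prevState may be null.
final class ClusterStateSketch {
    enum State { ACTIVE, INACTIVE }

    private final State prevState;     // null on a node that joined mid-transition
    private final boolean transition;  // true while an activate/deactivate is in flight

    ClusterStateSketch(State prevState, boolean transition) {
        this.prevState = prevState;
        this.transition = transition;
    }

    /**
     * True only when a deactivation is in progress. The null check is the
     * essence of the fix: without it, dereferencing prevState on a joining
     * node reproduces the NPE from the ticket.
     */
    boolean activeStateChanging() {
        return transition && prevState != null && prevState == State.ACTIVE;
    }
}
```

With the guard, a joining node that never observed a previous state simply reports "no active-state change in progress" instead of throwing.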