[jira] [Commented] (IGNITE-16897) Java thin: Implement IgniteSet
Ignite TC Bot commented on IGNITE-16897 Re: Java thin: Implement IgniteSet Branch: [pull/10112/head] Base: [master] : Possible Blockers (107), all listed suites were cancelled with 0 tests run: Control Utility, Control Utility (Zookeeper), Cache 6, PDS 5, Cache 13, Cache 2, Cassandra Store, SPI (Discovery), PDS 6, PDS (Compatibility), PDS (Indexing), Continuous Query 2, Basic 1, Snapshots, Cache 5, Platform .NET (Windows), Continuous Query 4, ZooKeeper (Discovery) 1, Compute (Affinity Run), Binary Objects, Cache 7, JDBC Driver, Calcite SQL, Snapshots With Indexes, Cache 12, Platform .NET (Core Linux), Queries 3, Data Structures, PDS 2, Queries 3 (lazy=true), Cache 1, Continuous Query 3
[jira] [Created] (IGNITE-17245) AbstractContinuousQuery.setIncludeExpired does not return this
Pavel Tupitsyn created IGNITE-17245: --- Summary: AbstractContinuousQuery.setIncludeExpired does not return this Key: IGNITE-17245 URL: https://issues.apache.org/jira/browse/IGNITE-17245 Project: Ignite Issue Type: Improvement Components: cache Reporter: Pavel Tupitsyn All AbstractContinuousQuery setters return "this" for chaining, but *setIncludeExpired* does not. -- This message was sent by Atlassian Jira (v8.20.7#820007)
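The fix is mechanical: make the setter return the concrete type, as the other setters already do. A minimal sketch of the fluent-setter pattern the issue asks for (the class and the second setter are hypothetical stand-ins, not Ignite's actual AbstractContinuousQuery source):

```java
// Sketch of a fluent-setter fix: every setter returns 'this' so that
// calls can be chained. Class and field names are illustrative only.
public class ContinuousQuerySketch {
    private boolean includeExpired;
    private int pageSize = 1024;

    // Before the fix this setter returned void, which broke chaining;
    // after the fix it returns 'this' like its siblings.
    public ContinuousQuerySketch setIncludeExpired(boolean includeExpired) {
        this.includeExpired = includeExpired;
        return this;
    }

    public ContinuousQuerySketch setPageSize(int pageSize) {
        this.pageSize = pageSize;
        return this;
    }

    public boolean isIncludeExpired() { return includeExpired; }
    public int getPageSize() { return pageSize; }

    public static void main(String[] args) {
        // Chaining now works across all setters in one expression.
        ContinuousQuerySketch q = new ContinuousQuerySketch()
            .setPageSize(512)
            .setIncludeExpired(true);
        System.out.println(q.isIncludeExpired());
    }
}
```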
[jira] [Updated] (IGNITE-17237) Implement a logging subsystem
[ https://issues.apache.org/jira/browse/IGNITE-17237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-17237: -- Description: h2. Motivation One of the most important parts of any running application is its logs. The operations team uses them to make sure the application runs smoothly. Developers use the logs for troubleshooting. So we need to provide a uniform way to log any important event related to the system. h2. Requirements * The implementation should not rely on any particular 3rd-party logging framework * The implementation should support 5 base logging severities: TRACE, DEBUG, INFO, WARN, ERROR * The implementation should provide a uniform API for server-side use as well as for clients * Clients should be able to specify a logger programmatically through the client builder * The implementation should provide seamless integration with the majority of popular logging frameworks * The implementation should support parameter substitution to avoid wrapping with {{ifEnabled}} for very simple cases h2. Proposed solution We could take advantage of the {{System.Logger}} framework. This implies a two-level architecture with a uniform frontend used throughout our system and interchangeable backends. Besides, the {{System.Logger}} framework already integrates with such 3rd-party frameworks as {{SLF4j}} and {{Log4j}}. h2. Proposed guidelines h3. Message layout Nowadays many deployments have automated log preprocessing, so it is important to make the logs not only human-readable but also machine-friendly. With that said, we need to make sure that all arguments are easy to locate and parse. To achieve this, the proposed log format is as follows: {code:java} [argKey1=argVal1, argKey2=argVal2] {code} For example: {code:java} Table has been created [id=0xaabbccdd, tName=my_table, sName=my_schema] {code} Perhaps structured logging would fit better, but this is currently out of scope. h3. 
Arguments inlining We need to avoid string concatenation when inlining arguments into the message, because the logging subsystem should provide argument substitution. h3. On choosing the level We must come from an understanding that both the WARN and ERROR levels require the attention of the operations team, so those levels should be used only when the cluster is in (or about to move to) an invalid state. INFO is a normal level that is used to log regular _infrequent_ events. Avoid using this level for frequent events like TABLE INSERT. was: h2. Motivation One of the most important parts of any running application is its logs. The operations team uses them to make sure the application runs smoothly. Developers use the log for troubleshooting. So we need to provide a uniform way to log any important event related to the system. h2. Requirements * Implementation should not rely on any particular 3rd-party logging framework * Implementation should support 5 base logging severities: TRACE, DEBUG, INFO, WARN, ERROR * Implementation should provide a uniform API for server-side use as well as for clients * For clients there should be an ability to specify logger programmatically through the client builder * Implementation should provide seamless integration with majority of popular logging frameworks * Implementation should support parameters' substitution to avoid wrapping with {{ifEnabled}} for very simple cases h2. Proposed solution We could take an advantage of {{System.Logger}} frameworks. This implies a two level architecture with uniform frontend which should be used throughout our system, and interchangeable backends. Besides, {{System.Logger}} framework have already integrated with such 3rd-party frameworks as {{SLF4j}} and {{Log4j}}. h2. Proposed guidelines h3. Message layout Nowadays so many deployments have an automated logging preprocessing, that it's important not only make the logs human readable, but make them machine friendly. With that said, we need to get sure that all arguments are easy to locate and parse. To achieve this, the proposed log format is follow: {code:java} [argKey1=argVal1, argKey2=argVal2] {code} For example: {code:java} Table has been created [id=0xaabbccdd, tName=my_table, sName=my_schema] {code} Perhaps, the structured logs are better fits this, but this is currently out of scope. h3. Arguments inlining We need to avoid string concatenation to inline arguments into the message because logging subsystem should provide arguments' substitution h3. On choosing the level We must come from an understanding that both levels WARN and ERROR requires an attention of an operation team, so those levels should be used only when the cluster is in (or about to move to) invalids state. INFO is a normal level that is used to log regular _unfrequent_ events. Avoid to use this level for frequent events like TABLE INSERT > Implement a logging subsystem
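As an illustration of the proposed two-level approach, here is a minimal sketch of logging through the JDK's {{System.Logger}} frontend with parameter substitution; the logger name and message are illustrative, and the backend (java.util.logging by default, or SLF4J/Log4j via their {{System.LoggerFinder}} bindings) stays interchangeable:

```java
import java.lang.System.Logger;
import java.lang.System.Logger.Level;

// Sketch: code depends only on the JDK's System.Logger frontend.
// The backend is chosen at runtime via System.LoggerFinder.
public class LoggingSketch {
    public static void main(String[] args) {
        Logger log = System.getLogger("LoggingSketch");

        // MessageFormat-style placeholders give parameter substitution:
        // no string concatenation and no explicit isLoggable() guard
        // is needed for simple cases.
        log.log(Level.INFO, "Table has been created [id={0}, tName={1}, sName={2}]",
                "0xaabbccdd", "my_table", "my_schema");
    }
}
```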
[jira] [Updated] (IGNITE-17220) YCSB benchmark run for ignite2 vs ignite3
[ https://issues.apache.org/jira/browse/IGNITE-17220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Gusakov updated IGNITE-17220: Description: For further investigation of ignite3 performance issues, we need to run the following benchmarks to compare ignite2 vs ignite3 performance: * Usual ycsb benchmark with mixed load patterns * Insert-only ycsb benchmark For ignite2 and ignite3 in the following configurations: * 3 ignite nodes setup (so, table must have 1 partition and 3 replicas) * 1 ignite node setup (so, table must have 1 partition and 1 replica) Also, please provide: * Hardware configuration of the environment, where benchmark was executed * JFRs for every node in every run. was: For further investigation of ignite3 performance issues, we need to run the following benchmarks to compare ignite2 vs ignite3 performance: * Usual ycsb benchmark with mixed load patterns * Insert-only ycsb benchmark For ignite2 and ignite3 in the following configurations: * 3 ignite nodes setup * 1 ignite node setup Also, please provide: * Hardware configuration of the environment, where benchmark was executed * JFRs for every node in every run. > YCSB benchmark run for ignite2 vs ignite3 > - > > Key: IGNITE-17220 > URL: https://issues.apache.org/jira/browse/IGNITE-17220 > Project: Ignite > Issue Type: Task >Reporter: Kirill Gusakov >Priority: Major > Labels: ignite-3 > > For further investigation of ignite3 performance issues, we need to run the > following benchmarks to compare ignite2 vs ignite3 performance: > * Usual ycsb benchmark with mixed load patterns > * Insert-only ycsb benchmark > For ignite2 and ignite3 in the following configurations: > * 3 ignite nodes setup (so, table must have 1 partition and 3 replicas) > * 1 ignite node setup (so, table must have 1 partition and 1 replica) > Also, please provide: > * Hardware configuration of the environment, where benchmark was executed > * JFRs for every node in every run. 
-- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17230) Support split-file page store
[ https://issues.apache.org/jira/browse/IGNITE-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-17230: - Description: *Notes* Description may not be complete. *Goal* To implement a new checkpoint (described in IGNITE-15818), we will introduce a new entity *DeltaFilePageStore*, which will be created for each partition at each checkpoint and removed after merging with the *FilePageStore* (the main partition file) using the compacter. *DeltaFilePageStore* will consist of: * Header (may be updated in the course of implementation): ** Allocation *pageIdx* - *pageIdx* of the last created page; * Sorted list of *pageIdx* - allows a binary search to find the file offset for a *pageId -> pageIdx* mapping; * Page content - sorted by *pageIdx*. What will change for *FilePageStore*: * List of class *DeltaFilePageStore* will be added (from the newest to the oldest by the time of creation); * Allocation index (pageIdx of the last created page) - it will be logical and contained in the header of *FilePageStore*. At node start, it will be read from the header of *FilePageStore* or obtained from the first *DeltaFilePageStore* (the newest one). How pages will be read by *pageId -> pageIdx*: * Interrogate the *DeltaFilePageStore* list in order from the newest to the oldest; * If not found, then we read the page from the *FilePageStore* itself. 
*Some implementation notes* * Format of the file name for the *DeltaFilePageStore* is *part-%d-delta-%d.bin* for example *part-1-delta-3.bin* where the first digit is the partition identifier, and the second is the serial number of the delta file for this partition; * Before creating *part-1-delta-3.bin*, a temporary file *part-1-delta-3.bin.tmp* will be created at the checkpoint first, then filled, then renamed to *part-1-delta-3.bin*; * Since the indexes will be stored in partitions, we can get rid of the code associated with the index partition file; * Fix flaky [FilePageStoreManagerTest#testStopAllGroupFilePageStores|https://ci.ignite.apache.org/test/6999203413272911470?currentProjectId=ignite3_Test&branch=%3Cdefault%3E]. was: *Notes* Description may not be complete. *Goal* To implement a new checkpoint (described in IGNITE-15818), we will introduce a new entity *DelataFilePageStore*, which will be created for each partition at each checkpoint and removed after merging with the *FilePageStore* (the main partition file) using the compacter. *DelataFilePageStore* will consist of: * Header (maybe updated in the course of implementation): ** Allocation *pageIdx* - *pageIdx* of the last created page; * Sorted list of *pageIdx* - allows a binary search to find the file offset for an *pageId -> pageIdx*; * Page content - sorted by *pageIdx*. What will change for *FilePageStore*: * List of class *DelataFilePageStore* will be added (from the newest to the oldest by the time of creation); * Allocation index (pageIdx of the last created page) - it will be logical and contained in the header of *FilePageStore*. At node start, it will be read from the header of *FilePageStore* or obtained from the first *DelataFilePageStore* (the newest one). How pages will be read by *pageId -> pageIdx*: * Interrogates the class *DelataFilePageStore* in order from the newest to the oldest; * If not found, then we read page from the *FilePageStore* itself. 
*Some implementation notes* * Format of the file name for the *DelataFilePageStore* is *part-%d-delta-%d.bin* for example *part-1-delta-3.bin* where the first digit is the partition identifier, and the second is the serial number of the delta file for this partition; * Before creating *part-1-delta-3.bin*, a temporary file *part-1-delta-3.bin.tmp* will be created at the checkpoint first, then filled, then renamed to *part-1-delta-3.bin*; * Since the indexes will be stored in partitions, we can get rid of the code associated with the index partition file; * Fix flaky [FilePageStoreManagerTest#testStopAllGroupFilePageStores|https://ci.ignite.apache.org/test/6999203413272911470?currentProjectId=ignite3_Test&branch=%3Cdefault%3E]. > Support splt-file page store > > > Key: IGNITE-17230 > URL: https://issues.apache.org/jira/browse/IGNITE-17230 > Project: Ignite > Issue Type: Task >Reporter: Kirill Tkalenko >Assignee: Kirill Tkalenko >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > > *Notes* > Description may not be complete. > *Goal* > To implement a new checkpoint (described in IGNITE-15818), we will introduce > a new entity *DeltaFilePageStore*, which will be created for each partition > at each checkpoint and removed after merging with the *FilePageStore* (the > main partition file) using the compacter. > *DeltaFilePageStore* will consist of: > * Header (maybe
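The per-delta-file lookup described above can be sketched as a binary search over the sorted *pageIdx* list; the header size and page size constants below are assumptions for illustration, not Ignite's actual on-disk layout:

```java
import java.util.Arrays;

// Hypothetical sketch of the delta-file lookup: each delta file keeps a
// sorted array of pageIdx values, and a binary search over that array
// yields the position of the page's content within the file.
public class DeltaLookupSketch {
    static final int PAGE_SIZE = 4096;        // assumption, not Ignite's constant
    static final int HEADER_SIZE = PAGE_SIZE; // assumption: one header page

    /** Returns the byte offset of the page inside the delta file, or -1 if absent. */
    static long pageOffset(int[] sortedPageIdxs, int pageIdx) {
        int pos = Arrays.binarySearch(sortedPageIdxs, pageIdx);
        if (pos < 0)
            return -1; // not in this delta file; try the next older delta, then FilePageStore
        return HEADER_SIZE + (long) pos * PAGE_SIZE;
    }
}
```

A read would call this on each *DeltaFilePageStore* from newest to oldest and fall back to the main *FilePageStore* on -1.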
[jira] [Updated] (IGNITE-17244) .NET: Thin 3.0: Optimize async request handling
[ https://issues.apache.org/jira/browse/IGNITE-17244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-17244: Description: Reduce allocations when handling requests in ClientSocket. Combine the following functionality into a single object that can be pooled: * IBufferWriter - to write the request * IValueTaskSource - to represent the task completion * IThreadPoolWorkItem - to handle response on thread pool efficiently * IObjectPoolNode - to use in ObjectPool efficiently > .NET: Thin 3.0: Optimize async request handling > --- > > Key: IGNITE-17244 > URL: https://issues.apache.org/jira/browse/IGNITE-17244 > Project: Ignite > Issue Type: Improvement > Components: platforms, thin client >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Minor > Labels: .NET, ignite-3 > > Reduce allocations when handling requests in ClientSocket. > Combine the following functionality into a single object that can be pooled: > * IBufferWriter - to write the request > * IValueTaskSource - to represent the task completion > * IThreadPoolWorkItem - to handle response on thread pool efficiently > * IObjectPoolNode - to use in ObjectPool efficiently -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (IGNITE-17244) .NET: Thin 3.0: Optimize async request handling
Pavel Tupitsyn created IGNITE-17244: --- Summary: .NET: Thin 3.0: Optimize async request handling Key: IGNITE-17244 URL: https://issues.apache.org/jira/browse/IGNITE-17244 Project: Ignite Issue Type: Improvement Reporter: Pavel Tupitsyn Assignee: Pavel Tupitsyn -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17244) .NET: Thin 3.0: Optimize async request handling
[ https://issues.apache.org/jira/browse/IGNITE-17244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-17244: Priority: Minor (was: Major) > .NET: Thin 3.0: Optimize async request handling > --- > > Key: IGNITE-17244 > URL: https://issues.apache.org/jira/browse/IGNITE-17244 > Project: Ignite > Issue Type: Improvement >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Minor > Labels: .NET, ignite-3 > -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17244) .NET: Thin 3.0: Optimize async request handling
[ https://issues.apache.org/jira/browse/IGNITE-17244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-17244: Component/s: platforms thin client > .NET: Thin 3.0: Optimize async request handling > --- > > Key: IGNITE-17244 > URL: https://issues.apache.org/jira/browse/IGNITE-17244 > Project: Ignite > Issue Type: Improvement > Components: platforms, thin client >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Minor > Labels: .NET, ignite-3 > -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-4859) .NET: Remove ICacheStore.SessionEnd
[ https://issues.apache.org/jira/browse/IGNITE-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-4859: --- Labels: .NET breaking-api (was: .NET breaking-api ignite-3) > .NET: Remove ICacheStore.SessionEnd > --- > > Key: IGNITE-4859 > URL: https://issues.apache.org/jira/browse/IGNITE-4859 > Project: Ignite > Issue Type: Improvement > Components: platforms >Reporter: Pavel Tupitsyn >Priority: Major > Labels: .NET, breaking-api > > {{CacheStore.sessionEnd}} is deprecated in Java, remove it from .NET API. > This required implementing {{CacheStoreSessionListener}} API instead. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17222) Need to propagate HLC with transaction protocol events
[ https://issues.apache.org/jira/browse/IGNITE-17222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-17222: - Epic Link: IGNITE-15081 > Need to propagate HLC with transaction protocol events > -- > > Key: IGNITE-17222 > URL: https://issues.apache.org/jira/browse/IGNITE-17222 > Project: Ignite > Issue Type: Improvement >Reporter: Sergey Uttsel >Priority: Major > Labels: ignite-3 > > One of source of HLC sync is a transaction protocol. Each message involved in > the execution of the transaction carries the sender’s HLC and updates > receiver HLC according to rules. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17222) Need to propagate HLC with transaction protocol events
[ https://issues.apache.org/jira/browse/IGNITE-17222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-17222: - Ignite Flags: (was: Docs Required,Release Notes Required) > Need to propagate HLC with transaction protocol events > -- > > Key: IGNITE-17222 > URL: https://issues.apache.org/jira/browse/IGNITE-17222 > Project: Ignite > Issue Type: Improvement >Reporter: Sergey Uttsel >Priority: Major > Labels: ignite-3 > > One of source of HLC sync is a transaction protocol. Each message involved in > the execution of the transaction carries the sender’s HLC and updates > receiver HLC according to rules. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17221) Need to propagate HLC with RAFT events
[ https://issues.apache.org/jira/browse/IGNITE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-17221: - Ignite Flags: (was: Docs Required,Release Notes Required) > Need to propagate HLC with RAFT events > -- > > Key: IGNITE-17221 > URL: https://issues.apache.org/jira/browse/IGNITE-17221 > Project: Ignite > Issue Type: Improvement >Reporter: Sergey Uttsel >Assignee: Sergey Uttsel >Priority: Major > Labels: ignite-3 > > RAFT events help to synchronize HLC between RAFT replicas. All RAFT > communications are initiated by a leader, and only one leader can exist at a > time. This enforces monotonic growth of HLC on raft group replicas. > RequestVote and AppendEntries RPC calls are enriched with sender’s HLC. The > HLC update rules are applied on receiving messages. RAFT lease intervals are > bound to the HLC range. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17221) Need to propagate HLC with RAFT events
[ https://issues.apache.org/jira/browse/IGNITE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-17221: - Epic Link: IGNITE-15081 > Need to propagate HLC with RAFT events > -- > > Key: IGNITE-17221 > URL: https://issues.apache.org/jira/browse/IGNITE-17221 > Project: Ignite > Issue Type: Improvement >Reporter: Sergey Uttsel >Assignee: Sergey Uttsel >Priority: Major > Labels: ignite-3 > > RAFT events help to synchronize HLC between RAFT replicas. All RAFT > communications are initiated by a leader, and only one leader can exist at a > time. This enforces monotonic growth of HLC on raft group replicas. > RequestVote and AppendEntries RPC calls are enriched with sender’s HLC. The > HLC update rules are applied on receiving messages. RAFT lease intervals are > bound to the HLC range. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17214) Implement HLC
[ https://issues.apache.org/jira/browse/IGNITE-17214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-17214: - Epic Link: IGNITE-15081 > Implement HLC > - > > Key: IGNITE-17214 > URL: https://issues.apache.org/jira/browse/IGNITE-17214 > Project: Ignite > Issue Type: Improvement >Reporter: Sergey Uttsel >Assignee: Sergey Uttsel >Priority: Major > Labels: ignite-3 > Time Spent: 0.5h > Remaining Estimate: 0h > > Need to implement Hybrid Logical Clocks, that combines logical clocks and > physical clocks. > For now it's enough to implement a hybrid clock without any protection from > time errors on different clock instances. > [https://cse.buffalo.edu/tech-reports/2014-04.pdf] > [https://www.cs.cornell.edu/courses/cs5414/2010fa/publications/BM93.pdf] > -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17214) Implement HLC
[ https://issues.apache.org/jira/browse/IGNITE-17214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-17214: - Ignite Flags: (was: Docs Required,Release Notes Required) > Implement HLC > - > > Key: IGNITE-17214 > URL: https://issues.apache.org/jira/browse/IGNITE-17214 > Project: Ignite > Issue Type: Improvement >Reporter: Sergey Uttsel >Assignee: Sergey Uttsel >Priority: Major > Labels: ignite-3 > Time Spent: 0.5h > Remaining Estimate: 0h > > Need to implement Hybrid Logical Clocks, that combines logical clocks and > physical clocks. > For now it's enough to implement a hybrid clock without any protection from > time errors on different clock instances. > [https://cse.buffalo.edu/tech-reports/2014-04.pdf] > [https://www.cs.cornell.edu/courses/cs5414/2010fa/publications/BM93.pdf] > -- This message was sent by Atlassian Jira (v8.20.7#820007)
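A minimal Hybrid Logical Clock along the lines of the cited paper can be sketched as follows; the class and method names are hypothetical and Ignite's actual implementation may differ:

```java
// Sketch of an HLC: the clock carries (physicalTime, logicalCounter)
// and never moves backwards, even when the wall clock does.
public class HybridClockSketch {
    private long physical;
    private int logical;

    /** Local or send event: advance past both the wall clock and the current HLC. */
    public synchronized long[] now(long wallClockMillis) {
        if (wallClockMillis > physical) {
            physical = wallClockMillis;
            logical = 0;
        } else {
            logical++; // wall clock did not advance; tick the logical part
        }
        return new long[] {physical, logical};
    }

    /** Receive event: also advance past the sender's HLC carried in the message. */
    public synchronized long[] update(long wallClockMillis, long remotePhysical, int remoteLogical) {
        long maxPhysical = Math.max(wallClockMillis, Math.max(physical, remotePhysical));
        if (maxPhysical == physical && maxPhysical == remotePhysical)
            logical = Math.max(logical, remoteLogical) + 1;
        else if (maxPhysical == physical)
            logical++;
        else if (maxPhysical == remotePhysical)
            logical = remoteLogical + 1;
        else
            logical = 0; // wall clock is strictly ahead of both HLCs
        physical = maxPhysical;
        return new long[] {physical, logical};
    }
}
```

This matches the ticket's scope: no protection against clock skew between instances, just monotonic (physical, logical) pairs.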
[jira] [Created] (IGNITE-17243) Durable background task is not started when BLT node joined cluster after activation
Aleksey Plekhanov created IGNITE-17243: -- Summary: Durable background task is not started when BLT node joined cluster after activation Key: IGNITE-17243 URL: https://issues.apache.org/jira/browse/IGNITE-17243 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov When a BLT node joins the active cluster, the {{DurableBackgroundTasksProcessor#prohibitionExecTasks}} variable is not cleared correctly and durable background tasks are not started on this node. Reproducer: {code:java}
public void testNodeJoinAfterActivation() throws Exception {
    IgniteEx n = startGrid(0);
    startGrid(1);

    n.cluster().state(ACTIVE);

    stopGrid(1);
    n = startGrid(1);

    SimpleTask t = new SimpleTask("t");
    n.context().durableBackgroundTask().executeAsync(t, true);

    assertEquals(STARTED, tasks(n).get(t.name()).state());
}
{code} -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17243) Durable background task is not started when BLT node joined cluster after activation
[ https://issues.apache.org/jira/browse/IGNITE-17243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-17243: --- Fix Version/s: 2.14 > Durable background task is not started when BLT node joined cluster after > activation > > > Key: IGNITE-17243 > URL: https://issues.apache.org/jira/browse/IGNITE-17243 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Fix For: 2.14 > > > When BLT node joined to the active cluster, variable > {{DurableBackgroundTasksProcessor#prohibitionExecTasks}} is not cleared > correctly and durable background tasks are not started on this node. > Reproducer: > {code:java} > public void testNodeJoinAfterActivation() throws Exception { > IgniteEx n = startGrid(0); > startGrid(1); > n.cluster().state(ACTIVE); > stopGrid(1); > n = startGrid(1); > SimpleTask t = new SimpleTask("t"); > n.context().durableBackgroundTask().executeAsync(t, true); > assertEquals(STARTED, tasks(n).get(t.name()).state()); > } > {code} -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17242) Revise codebase to align logs with proposed guidelines
[ https://issues.apache.org/jira/browse/IGNITE-17242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-17242: -- Description: See proposed guidelines in the attached epic. (was: Subj.) > Revise codebase to align logs with proposed guidelines > -- > > Key: IGNITE-17242 > URL: https://issues.apache.org/jira/browse/IGNITE-17242 > Project: Ignite > Issue Type: Improvement > Components: general >Reporter: Konstantin Orlov >Priority: Major > > See proposed guidelines in the attached epic. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (IGNITE-17242) Revise codebase to align logs with proposed guidelines
Konstantin Orlov created IGNITE-17242: - Summary: Revise codebase to align logs with proposed guidelines Key: IGNITE-17242 URL: https://issues.apache.org/jira/browse/IGNITE-17242 Project: Ignite Issue Type: Improvement Components: general Reporter: Konstantin Orlov Subj. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17237) Implement a logging subsystem
[ https://issues.apache.org/jira/browse/IGNITE-17237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-17237: -- Ignite Flags: (was: Docs Required,Release Notes Required) > Implement a logging subsystem > - > > Key: IGNITE-17237 > URL: https://issues.apache.org/jira/browse/IGNITE-17237 > Project: Ignite > Issue Type: Epic > Components: general >Reporter: Konstantin Orlov >Priority: Major > > h2. Motivation > One of the most important parts of any running application is its logs. The > operations team uses them to make sure the application runs smoothly. > Developers use the log for troubleshooting. So we need to provide a uniform > way to log any important event related to the system. > h2. Requirements > * Implementation should not rely on any particular 3rd-party logging > framework > * Implementation should support 5 base logging severities: TRACE, DEBUG, > INFO, WARN, ERROR > * Implementation should provide a uniform API for server-side use as well as > for clients > * For clients there should be an ability to specify logger programmatically > through the client builder > * Implementation should provide seamless integration with majority of > popular logging frameworks > * Implementation should support parameters' substitution to avoid wrapping > with {{ifEnabled}} for very simple cases > h2. Proposed solution > We could take an advantage of {{System.Logger}} frameworks. This implies a > two level architecture with uniform frontend which should be used throughout > our system, and interchangeable backends. Besides, {{System.Logger}} > framework have already integrated with such 3rd-party frameworks as {{SLF4j}} > and {{Log4j}}. > h2. Proposed guidelines > h3. Message layout > Nowadays so many deployments have an automated logging preprocessing, that > it's important not only make the logs human readable, but make them machine > friendly. 
With that said, we need to get sure that all arguments are easy to > locate and parse. To achieve this, the proposed log format is follow: > {code:java} > [argKey1=argVal1, argKey2=argVal2] > {code} > For example: > {code:java} > Table has been created [id=0xaabbccdd, tName=my_table, sName=my_schema] > {code} > Perhaps, the structured logs are better fits this, but this is currently out > of scope. > h3. Arguments inlining > We need to avoid string concatenation to inline arguments into the message > because logging subsystem should provide arguments' substitution > h3. On choosing the level > We must come from an understanding that both levels WARN and ERROR requires > an attention of an operation team, so those levels should be used only when > the cluster is in (or about to move to) invalids state. > INFO is a normal level that is used to log regular _unfrequent_ events. Avoid > to use this level for frequent events like TABLE INSERT -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17237) Implement a logging subsystem
[ https://issues.apache.org/jira/browse/IGNITE-17237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-17237: -- Description: h2. Motivation One of the most important parts of any running application is its logs. The operations team uses them to make sure the application runs smoothly. Developers use the logs for troubleshooting. So we need to provide a uniform way to log any important event related to the system. h2. Requirements * The implementation should not rely on any particular 3rd-party logging framework * The implementation should support 5 base logging severities: TRACE, DEBUG, INFO, WARN, ERROR * The implementation should provide a uniform API for server-side use as well as for clients * Clients should be able to specify a logger programmatically through the client builder * The implementation should provide seamless integration with the majority of popular logging frameworks * The implementation should support parameter substitution to avoid wrapping with {{ifEnabled}} for very simple cases h2. Proposed solution We could take advantage of the {{System.Logger}} framework. This implies a two-level architecture with a uniform frontend used throughout our system and interchangeable backends. Besides, the {{System.Logger}} framework already integrates with such 3rd-party frameworks as {{SLF4j}} and {{Log4j}}. h2. Proposed guidelines h3. Message layout Nowadays many deployments have automated log preprocessing, so it is important to make the logs not only human-readable but also machine-friendly. With that said, we need to make sure that all arguments are easy to locate and parse. To achieve this, the proposed log format is as follows: {code:java} [argKey1=argVal1, argKey2=argVal2] {code} For example: {code:java} Table has been created [id=0xaabbccdd, tName=my_table, sName=my_schema] {code} Perhaps structured logging would fit better, but this is currently out of scope. h3. Arguments inlining We need to avoid string concatenation when inlining arguments into the message, because the logging subsystem should provide argument substitution. h3. On choosing the level We must come from an understanding that both the WARN and ERROR levels require the attention of the operations team, so those levels should be used only when the cluster is in (or about to move to) an invalid state. INFO is a normal level that is used to log regular _infrequent_ events. Avoid using this level for frequent events like TABLE INSERT. was: h2. Motivation One of the most important parts of any running application is its logs. The operations team uses them to make sure the application runs smoothly. Developers use the log for troubleshooting. So we need to provide a uniform way to log any important event related to the system. h2. Requirements * Implementation should not rely on any particular 3rd-party logging framework * Implementation should support 5 base logging severities: TRACE, DEBUG, INFO, WARN, ERROR * Implementation should provide a uniform API for server-side use as well as for clients * For clients there should be an ability to specify logger programmatically through the client builder * Implementation should provide seamless integration with majority of popular logging frameworks * Implementation should support parameters' substitution to avoid wrapping with {{ifEnabled}} for very simple cases h2. Proposed solution We could take an advantage of {{System.Logger}} frameworks. This implies a two level architecture with uniform frontend which should be used throughout our system, and interchangeable backends. Besides, {{System.Logger}} framework have already integrated with such 3rd-party frameworks as {{SLF4j}} and {{Log4j}}. h2. Proposed guidelines Nowadays so many deployments have an automated logging preprocessing, that it's important not only make the logs human readable, but make them machine friendly. 
With that said, we need to get sure that all arguments are easy to locate and parse. To achieve this, the proposed log format is follow: {code:java} [argKey1=argVal1, argKey2=argVal2] {code} For example: {code:java} Table has been created [id=0xaabbccdd, tName=my_table, sName=my_schema] {code} Perhaps, the structured logs are better fits this, but this is currently out of scope. > Implement a logging subsystem > - > > Key: IGNITE-17237 > URL: https://issues.apache.org/jira/browse/IGNITE-17237 > Project: Ignite > Issue Type: Epic > Components: general >Reporter: Konstantin Orlov >Priority: Major > > h2. Motivation > One of the most important parts of any running application is its logs. The > operations team uses them to make sure the application runs smoothly. > Developers use the log for troubleshooting. So we need to
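The proposed layout and the parameter-substitution requirement can be sketched against the JDK's {{System.Logger}} directly; the {{args}} helper and the class name below are illustrative additions, not part of the proposal:

```java
import java.lang.System.Logger;
import java.lang.System.Logger.Level;

public class LogLayoutSketch {
    // Hypothetical helper: renders the proposed "[k1=v1, k2=v2]" argument block
    // from alternating key/value pairs.
    static String args(String... kv) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < kv.length; i += 2) {
            if (i > 0) {
                sb.append(", ");
            }
            sb.append(kv[i]).append('=').append(kv[i + 1]);
        }
        return sb.append(']').toString();
    }

    public static void main(String[] unused) {
        Logger log = System.getLogger("LogLayoutSketch");

        // System.Logger substitutes {0}, {1}, ... itself (MessageFormat style),
        // so no string concatenation and no explicit isLoggable() guard are
        // needed for this simple case.
        log.log(Level.INFO, "Table has been created [id={0}, tName={1}, sName={2}]",
                "0xaabbccdd", "my_table", "my_schema");

        System.out.println("Table has been created "
                + args("id", "0xaabbccdd", "tName", "my_table", "sName", "my_schema"));
    }
}
```

Both calls produce the same bracketed layout, which keeps the arguments trivially machine-parseable while the event text stays human-readable.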
[jira] [Assigned] (IGNITE-17238) Create an initial template for error codes
[ https://issues.apache.org/jira/browse/IGNITE-17238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Gusev reassigned IGNITE-17238: --- Assignee: Igor Gusev > Create an initial template for error codes > -- > > Key: IGNITE-17238 > URL: https://issues.apache.org/jira/browse/IGNITE-17238 > Project: Ignite > Issue Type: Task > Components: documentation >Reporter: Igor Gusev >Assignee: Igor Gusev >Priority: Major > Labels: ignite-3 > > We will soon have error codes defined. To make it easier to add them to > product documentation, we should prepare a template page in advance. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Assigned] (IGNITE-16973) Add advanced completions to SQL REPL
[ https://issues.apache.org/jira/browse/IGNITE-16973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Pochatkin reassigned IGNITE-16973: -- Assignee: Mikhail Pochatkin > Add advanced completions to SQL REPL > > > Key: IGNITE-16973 > URL: https://issues.apache.org/jira/browse/IGNITE-16973 > Project: Ignite > Issue Type: Task >Reporter: Aleksandr >Assignee: Mikhail Pochatkin >Priority: Major > Labels: ignite-3, ignite-3-cli-tool > > In order to improve the developer experience in SQL REPL mode, dynamic > autocompletion can be added. For example, a user types {{select * from ta}} > and gets a suggestion with the list of tables fetched via JDBC. > Also, the current list of SQL keywords for autocompletion is taken from the > default Calcite parser. Use an actual list of Ignite SQL keywords for > auto-complete.
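As a rough illustration of the suggested behavior, prefix-filtering table names fetched over JDBC might look like the sketch below. The class and method names are made up; a real REPL would obtain the names via {{DatabaseMetaData.getTables}} and wire the result into its line reader's completion API rather than calling this directly.

```java
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

public class TableNameCompleter {
    // Returns the table names matching what the user has typed so far,
    // case-insensitively, sorted for stable presentation.
    static List<String> complete(String typedPrefix, List<String> tables) {
        String p = typedPrefix.toLowerCase(Locale.ROOT);
        return tables.stream()
                .filter(t -> t.toLowerCase(Locale.ROOT).startsWith(p))
                .sorted()
                .collect(Collectors.toList());
    }
}
```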
[jira] [Updated] (IGNITE-17241) Introduce utility class Loggers
[ https://issues.apache.org/jira/browse/IGNITE-17241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-17241: -- Description: Need to introduce a single access point to the factory methods that create a logger. This class should provide an ability to create a logger for a provided class or name with 1) the default backend, 2) a specified backend, or 3) by delegating backend creation to a {{LoggerFactory}} (which is described in IGNITE-17240) (was: Need to introduce a single access point to the factory methods that create a logger. This class should provide an ability to create a logger for a provided class or name with 1) the default backend, 2) a specified backend, or 3) by delegating backend creation to a LoggerFactory) > Introduce utility class Loggers > --- > > Key: IGNITE-17241 > URL: https://issues.apache.org/jira/browse/IGNITE-17241 > Project: Ignite > Issue Type: Improvement > Components: general >Reporter: Konstantin Orlov >Priority: Major > > Need to introduce a single access point to the factory methods that create a > logger. This class should provide an ability to create a logger for a provided > class or name with 1) the default backend, 2) a specified backend, or 3) by > delegating backend creation to a {{LoggerFactory}} (which is described in > IGNITE-17240)
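A minimal sketch of such an access point, assuming the JDK's {{System.Logger}} as the default backend; the names mirror the ticket's wording but are hypothetical, not a final Ignite API:

```java
import java.lang.System.Logger;

// Single access point for logger creation, as described in IGNITE-17241.
public final class Loggers {
    private Loggers() {
    }

    // 1) Default backend: delegate to the JDK's System.getLogger().
    public static Logger forClass(Class<?> clazz) {
        return System.getLogger(clazz.getName());
    }

    public static Logger forName(String name) {
        return System.getLogger(name);
    }

    // 3) Delegate backend creation to a LoggerFactory (see IGNITE-17240).
    public static Logger forClass(Class<?> clazz, LoggerFactory factory) {
        return factory.forClass(clazz);
    }

    // Minimal stand-in for the interface defined in IGNITE-17240.
    public interface LoggerFactory {
        default Logger forClass(Class<?> clazz) {
            return forName(clazz.getName());
        }

        Logger forName(String name);
    }
}
```

Option 2) (a caller-specified backend) would add overloads taking a concrete {{Logger}} or backend handle; it is omitted here because the ticket does not pin down its shape.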
[jira] [Updated] (IGNITE-17241) Introduce utility class Loggers
[ https://issues.apache.org/jira/browse/IGNITE-17241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-17241: -- Component/s: general > Introduce utility class Loggers > --- > > Key: IGNITE-17241 > URL: https://issues.apache.org/jira/browse/IGNITE-17241 > Project: Ignite > Issue Type: Improvement > Components: general >Reporter: Konstantin Orlov >Priority: Major > > Need to introduce a single access point to the factory methods that create a > logger. This class should provide an ability to create a logger for a provided > class or name with 1) the default backend, 2) a specified backend, or 3) by > delegating backend creation to a LoggerFactory
[jira] [Updated] (IGNITE-17241) Introduce utility class Loggers
[ https://issues.apache.org/jira/browse/IGNITE-17241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-17241: -- Ignite Flags: (was: Docs Required,Release Notes Required) > Introduce utility class Loggers > --- > > Key: IGNITE-17241 > URL: https://issues.apache.org/jira/browse/IGNITE-17241 > Project: Ignite > Issue Type: Improvement >Reporter: Konstantin Orlov >Priority: Major > > Need to introduce a single access point to the factory methods that create a > logger. This class should provide an ability to create a logger for a provided > class or name with 1) the default backend, 2) a specified backend, or 3) by > delegating backend creation to a LoggerFactory
[jira] [Created] (IGNITE-17241) Introduce utility class Loggers
Konstantin Orlov created IGNITE-17241: - Summary: Introduce utility class Loggers Key: IGNITE-17241 URL: https://issues.apache.org/jira/browse/IGNITE-17241 Project: Ignite Issue Type: Improvement Reporter: Konstantin Orlov Need to introduce a single access point to the factory methods that create a logger. This class should provide an ability to create a logger for a provided class or name with 1) the default backend, 2) a specified backend, or 3) by delegating backend creation to a LoggerFactory
[jira] [Created] (IGNITE-17240) Provide an ability to configure the logging backend through IgniteClient.Builder
Konstantin Orlov created IGNITE-17240: - Summary: Provide an ability to configure the logging backend through IgniteClient.Builder Key: IGNITE-17240 URL: https://issues.apache.org/jira/browse/IGNITE-17240 Project: Ignite Issue Type: Improvement Components: clients Reporter: Konstantin Orlov Need to extend {{org.apache.ignite.client.IgniteClient.Builder}} in order to provide an ability to specify a {{LoggerFactory}}, where {{LoggerFactory}} is the following interface:
{code:java}
public interface LoggerFactory {
    default System.Logger forClass(Class<?> clazz) {
        return forName(Objects.requireNonNull(clazz).getName());
    }

    System.Logger forName(String name);
}
{code}
The configured backend should be stored within {{org.apache.ignite.client.IgniteClientConfiguration}}.
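For illustration, the quoted interface compiles as-is once generified; the {{jdk(...)}} factory method below is an assumed sample backend, and the builder method that would eventually accept it (something like {{loggerFactory(...)}}) is not named in the ticket, so its exact shape is an assumption:

```java
import java.lang.System.Logger;
import java.util.Objects;

// The LoggerFactory interface from the ticket, plus a sample backend that
// routes every client logger to the JDK under a common name prefix.
// The static jdk(...) helper is an illustrative addition, not ticket API.
public interface LoggerFactory {
    default Logger forClass(Class<?> clazz) {
        return forName(Objects.requireNonNull(clazz).getName());
    }

    Logger forName(String name);

    static LoggerFactory jdk(String prefix) {
        // forName(String) is the sole abstract method, so a lambda suffices.
        return name -> System.getLogger(prefix + "." + name);
    }
}
```

Because {{forName}} is the only abstract method, callers can also pass a one-line lambda that adapts any concrete logging framework.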
[jira] [Created] (IGNITE-17239) Divide [Module] Page Memory into 2
Kirill Tkalenko created IGNITE-17239: Summary: Divide [Module] Page Memory into 2 Key: IGNITE-17239 URL: https://issues.apache.org/jira/browse/IGNITE-17239 Project: Ignite Issue Type: Task Reporter: Kirill Tkalenko Assignee: Petr Ivanov Fix For: 3.0.0-alpha6 To speed up the tests, it would be good to divide [[Module] Page Memory|https://ci.ignite.apache.org/viewType.html?buildTypeId=ignite3_Test_IntegrationTests_ModulePageMemory&branch_ignite3_Test_IntegrationTests=%3Cdefault%3E&tab=buildTypeStatusDiv] by 2:
* [Module] Page Memory Volatile:
** ItBplusTreeReplaceRemoveRaceTest.*
** ItBplusTreeFakeReuseVolatilePageMemoryTest.*
** ItBplusTreeReuseVolatilePageMemoryTest.*
** ItBplusTreeVolatilePageMemoryTest.*
* [Module] Page Memory Persistent:
** ItBplusTreePersistentPageMemoryTest.*
** ItBplusTreeReuseListPersistentPageMemoryTest.*
[jira] [Created] (IGNITE-17238) Create an initial template for error codes
Igor Gusev created IGNITE-17238: --- Summary: Create an initial template for error codes Key: IGNITE-17238 URL: https://issues.apache.org/jira/browse/IGNITE-17238 Project: Ignite Issue Type: Task Components: documentation Reporter: Igor Gusev We will soon have error codes defined. To make it easier to add them to product documentation, we should prepare a template page in advance.
[jira] [Updated] (IGNITE-17237) Implement a logging subsystem
[ https://issues.apache.org/jira/browse/IGNITE-17237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-17237: -- Description: h2. Motivation One of the most important parts of any running application is its logs. The operations team uses them to make sure the application runs smoothly. Developers use the log for troubleshooting. So we need to provide a uniform way to log any important event related to the system. h2. Requirements * Implementation should not rely on any particular 3rd-party logging framework * Implementation should support 5 base logging severities: TRACE, DEBUG, INFO, WARN, ERROR * Implementation should provide a uniform API for server-side use as well as for clients * For clients there should be an ability to specify logger programmatically through the client builder * Implementation should provide seamless integration with majority of popular logging frameworks * Implementation should support parameters' substitution to avoid wrapping with {{ifEnabled}} for very simple cases h2. Proposed solution We could take an advantage of {{System.Logger}} frameworks. This implies a two level architecture with uniform frontend which should be used throughout our system, and interchangeable backends. Besides, {{System.Logger}} framework have already integrated with such 3rd-party frameworks as {{SLF4j}} and {{{}Log4j{}}}. was: h2. Motivation One of the most important parts of any running application is its logs. The operations team uses them to make sure the application runs smoothly. Developers use the log for troubleshooting. So we need to provide a uniform way to log any important event related to the system. h2. 
Requirements * Implementation should not rely on any particular 3rd-party logging framework * Implementation should support 5 base logging severities: TRACE, DEBUG, INFO, WARN, ERROR * Implementation should provide a uniform API for server-side use as well as for clients * For clients there should be an ability to specify logging programmatically through the client builder * Implementation should provide seamless integration with majority of popular logging frameworks (for embedded use) * Implementation should support parameters' substitution to avoid wrapping with {{ifEnabled}} for very simple cases h2. Proposed solution We could take an advantage of System.Logger frameworks. This implies a two level architecture with uniform frontend which should be used throughout our system, and interchangeable backends. Besides, System.Logger framework have already integrated with such 3rd-party frameworks as SLF4j and Log4j. > Implement a logging subsystem > - > > Key: IGNITE-17237 > URL: https://issues.apache.org/jira/browse/IGNITE-17237 > Project: Ignite > Issue Type: Epic > Components: general >Reporter: Konstantin Orlov >Priority: Major > > h2. Motivation > One of the most important parts of any running application is its logs. The > operations team uses them to make sure the application runs smoothly. > Developers use the log for troubleshooting. So we need to provide a uniform > way to log any important event related to the system. > h2. 
Requirements > * Implementation should not rely on any particular 3rd-party logging > framework > * Implementation should support 5 base logging severities: TRACE, DEBUG, > INFO, WARN, ERROR > * Implementation should provide a uniform API for server-side use as well as > for clients > * For clients there should be an ability to specify logger programmatically > through the client builder > * Implementation should provide seamless integration with majority of > popular logging frameworks > * Implementation should support parameters' substitution to avoid wrapping > with {{ifEnabled}} for very simple cases > h2. Proposed solution > We could take an advantage of {{System.Logger}} frameworks. This implies a > two level architecture with uniform frontend which should be used throughout > our system, and interchangeable backends. Besides, {{System.Logger}} > framework have already integrated with such 3rd-party frameworks as {{SLF4j}} > and {{{}Log4j{}}}. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (IGNITE-17237) Implement a logging subsystem
Konstantin Orlov created IGNITE-17237: - Summary: Implement a logging subsystem Key: IGNITE-17237 URL: https://issues.apache.org/jira/browse/IGNITE-17237 Project: Ignite Issue Type: Epic Components: general Reporter: Konstantin Orlov -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17134) Thin 3.0: Implement client SQL session management
[ https://issues.apache.org/jira/browse/IGNITE-17134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Sapego updated IGNITE-17134: - Priority: Minor (was: Major) > Thin 3.0: Implement client SQL session management > - > > Key: IGNITE-17134 > URL: https://issues.apache.org/jira/browse/IGNITE-17134 > Project: Ignite > Issue Type: Improvement > Components: sql, thin client >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Minor > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > > Close all active cursors and cancel queries when client SQL session is closed -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-16907) Add ability to use Raft log as storage WAL within the scope of local recovery
[ https://issues.apache.org/jira/browse/IGNITE-16907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denis Chudov updated IGNITE-16907: -- Summary: Add ability to use Raft log as storage WAL within the scope of local recovery (was: Add ability to use Raft log as storage WAL wihtin the scope of local recovery) > Add ability to use Raft log as storage WAL within the scope of local recovery > - > > Key: IGNITE-16907 > URL: https://issues.apache.org/jira/browse/IGNITE-16907 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3 > > h4. Problem > From a bird's-eye view, the raft-to-storage flow looks like this: > # > {code:java} > RaftGroupService#run(writeCommand());{code} > # Inner raft replication logic: when replicated on a majority, adjust raft.commitedIndex. > # Propagate the command to RaftGroupListener (the raft state machine): > {code:java} > RaftGroupListener#onWrite(closure(writeCommand()));{code} > # Within the state machine, insert data from the writeCommand into the underlying storage: > {code:java} > var insertRes = storage.insert(cmd.getRow(), cmd.getTimestamp());{code} > # Ack that the data was applied successfully: > {code:java} > clo.result(insertRes);{code} > # Move raft.appliedIndex to the corresponding value, meaning that the data for this index has been applied to the state machine. > The most interesting part, especially for this ticket, relates to step 4. > In the real world, the storage doesn't flush every mutator to disk; instead, it buffers some amount of such mutators and flushes them all together as part of some checkpointing process. Thus, if the node fails before mutatorsBuffer.flush(), it might lose some data, because raft will apply data starting from appliedIndex + 1 on recovery. > h4. Possible solutions: > There are several possibilities to solve this issue: > # In-storage WAL. A bad solution, because there's already a raft log that can be used as a WAL. Such duplication is redundant. 
> # Local recovery starting from appliedIndex - mutatorsBuffer.size. A bad solution: it won't work for non-idempotent operations and exposes inner storage details such as mutatorsBuffer.size. > # proposedIndex propagation + checkpointIndex synchronization. Seems fine. More details below: > * First of all, in order to coordinate the raft replicator and the storage, the proposedIndex should be propagated to the raftGroupListener and the storage. > * On every checkpoint, the storage will persist the corresponding proposed index as checkpointIndex. > ** In case of the storage's inner checkpoints, the storage won't notify the raft replicator about the new checkpointIndex. This kind of notification is an optimization that does not affect the correctness of the protocol. > ** In case of an outer checkpoint intention, e.g. raft snapshotting for the purposes of raft log truncation, the corresponding checkpointIndex will be propagated to the raft replicator within the "onSnapshotDone" callback. > * During local recovery, raft will apply raft log entries from the very beginning. If the checkpointIndex turns out to be bigger than the proposedIndex of another raft log entry, the storage fails the proposed closure with IndexMismatchException(checkpointIndex), which leads to a proposedIndex shift and optional async raft log truncation. > Let's consider the following example (checkpointBuffer = 3; [P] - persisted entities, [!P] - not persisted/in-memory ones). 
> # raft.put(k1,v1) > ## -> raftlog[cmd(k1,v1, index:1)] > ## -> storage[(k1,v1), index:1] > ## -> appliedIndex:1 > # raft.put(k2,v2) > ## -> raftlog[cmd(k1,v1, index:1), *cmd(k2,v2, index:2)*] > ## -> storage[(k1,v1), *(k2,v2)*, index:*2*] > ## -> appliedIndex:*2* > # raft.put(k3,v3) > ## -> raftlog[cmd(k1,v1, index:1), cmd(k2,v2, index:2), *cmd(k3,v3, index:3)*] > ## -> storage[(k1,v1), (k2,v2), *(k3,v3)*, index:*3*] > ## -> appliedIndex:*3* > ## *inner storage checkpoint* > ### raftlog[cmd(k1,v1, index:1), cmd(k2,v2, index:2), cmd(k3,v3, index:3)] > ### storage[(k1,v1, proposedIndex:1), (k2,v2, proposedIndex:2), (k3,v3, proposedIndex:3)] > ### *checkpointedData[(k1,v1), (k2,v2), (k3,v3), checkpointIndex:3]* > # raft.put(k4,v4) > ## -> raftlog[cmd(k1,v1, index:1), cmd(k2,v2, index:2), cmd(k3,v3, index:3), *cmd(k4,v4, index:4)*] > ## -> storage[(k1,v1), (k2,v2), (k3,v3), *(k4,v4)*, index:*4*] > ## -> checkpointedData[(k1,v1), (k2,v2), (k3,v3), checkpointIndex:3] > ## -> appliedIndex:*4* > # Node failure > # Node restart > ## StorageRecovery: storage.apply(checkpointedData) > ## raft-to-storage data application starting from index: 1 // raft doesn't know the checkpointedIndex at this point. >
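A hypothetical sketch (not Ignite's actual classes) of the recovery rule illustrated above: replay the raft log from the very beginning, letting the storage reject entries already covered by its persisted checkpointIndex, so only the data lost from the mutator buffer is re-applied:

```java
import java.util.List;

public class RecoverySketch {
    record LogEntry(long index, String key, String value) {}

    /** Storage whose last checkpoint persisted everything up to checkpointIndex. */
    static final class Storage {
        final long checkpointIndex;
        long appliedIndex;

        Storage(long checkpointIndex) {
            this.checkpointIndex = checkpointIndex;
            this.appliedIndex = checkpointIndex; // checkpointed data is already applied
        }

        /** Returns false (an "index mismatch") for entries the checkpoint already covers. */
        boolean apply(LogEntry e) {
            if (e.index() <= checkpointIndex)
                return false; // raft shifts its proposed index instead of re-applying
            appliedIndex = e.index();
            return true;
        }
    }

    public static void main(String[] args) {
        // The raft log survives the crash; the storage checkpointed through index 3,
        // so the write at index 4 was lost from the in-memory mutator buffer.
        List<LogEntry> raftLog = List.of(
            new LogEntry(1, "k1", "v1"), new LogEntry(2, "k2", "v2"),
            new LogEntry(3, "k3", "v3"), new LogEntry(4, "k4", "v4"));
        Storage storage = new Storage(3);

        // Replay from the beginning; only the entry at index 4 is actually re-applied.
        long reapplied = 0;
        for (LogEntry e : raftLog)
            if (storage.apply(e))
                reapplied++;

        // prints reapplied=1, appliedIndex=4
        System.out.println("reapplied=" + reapplied + ", appliedIndex=" + storage.appliedIndex);
    }
}
```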
[jira] [Assigned] (IGNITE-17236) inline size usage of index-reader
[ https://issues.apache.org/jira/browse/IGNITE-17236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikolay Izhikov reassigned IGNITE-17236: Assignee: Nikolay Izhikov > inline size usage of index-reader > - > > Key: IGNITE-17236 > URL: https://issues.apache.org/jira/browse/IGNITE-17236 > Project: Ignite > Issue Type: Improvement >Reporter: Nikolay Izhikov >Assignee: Nikolay Izhikov >Priority: Minor > > It will be useful to analyze and output information about the actual usage of > inline space in an index. > This information can hint at suboptimal usage of space in index entries. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Assigned] (IGNITE-17230) Support split-file page store
[ https://issues.apache.org/jira/browse/IGNITE-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko reassigned IGNITE-17230: Assignee: Kirill Tkalenko > Support split-file page store > > > Key: IGNITE-17230 > URL: https://issues.apache.org/jira/browse/IGNITE-17230 > Project: Ignite > Issue Type: Task >Reporter: Kirill Tkalenko >Assignee: Kirill Tkalenko >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > > *Notes* > Description may not be complete. > *Goal* > To implement a new checkpoint (described in IGNITE-15818), we will introduce > a new entity *DeltaFilePageStore*, which will be created for each partition > at each checkpoint and removed after merging with the *FilePageStore* (the > main partition file) using the compacter. > *DeltaFilePageStore* will consist of: > * Header (may be updated in the course of implementation): > ** Allocation *pageIdx* - the *pageIdx* of the last created page; > * Sorted list of *pageIdx* - allows a binary search to find the file offset > for a *pageId -> pageIdx*; > * Page content - sorted by *pageIdx*. > What will change for *FilePageStore*: > * A list of *DeltaFilePageStore* instances will be added (from the newest to the > oldest by time of creation); > * Allocation index (pageIdx of the last created page) - it will be logical > and contained in the header of *FilePageStore*. At node start, it will be > read from the header of *FilePageStore* or obtained from the first > *DeltaFilePageStore* (the newest one). > How pages will be read by *pageId -> pageIdx*: > * Interrogate the *DeltaFilePageStore* instances in order from the newest to > the oldest; > * If not found, then we read the page from the *FilePageStore* itself. 
> *Some implementation notes* > * The format of the file name for the *DeltaFilePageStore* is > *part-%d-delta-%d.bin*, for example *part-1-delta-3.bin*, where the first number > is the partition identifier and the second is the serial number of the delta > file for this partition; > * Before creating *part-1-delta-3.bin*, a temporary file > *part-1-delta-3.bin.tmp* will be created at the checkpoint first, then > filled, then renamed to *part-1-delta-3.bin*; > * Since the indexes will be stored in partitions, we can get rid of the code > associated with the index partition file; > * Fix flaky > [FilePageStoreManagerTest#testStopAllGroupFilePageStores|https://ci.ignite.apache.org/test/6999203413272911470?currentProjectId=ignite3_Test&branch=%3Cdefault%3E]. -- This message was sent by Atlassian Jira (v8.20.7#820007)
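A sketch of the read path described above (class and method names are illustrative, not the actual Ignite 3 code): interrogate the delta stores from newest to oldest, using a binary search over each store's sorted pageIdx list, and fall back to the main FilePageStore on a miss:

```java
import java.util.Arrays;
import java.util.List;

public class DeltaReadSketch {
    /** A delta file: sorted page indexes plus their page payloads, as described above. */
    record DeltaFile(long[] sortedPageIdxs, byte[][] pages) {
        byte[] read(long pageIdx) {
            // Sorted list of pageIdx allows a binary search for the file offset.
            int pos = Arrays.binarySearch(sortedPageIdxs, pageIdx);
            return pos >= 0 ? pages[pos] : null;
        }
    }

    /** Interrogate deltas from newest to oldest; fall back to the main partition file. */
    static byte[] readPage(List<DeltaFile> deltasNewestFirst, byte[][] mainStore, long pageIdx) {
        for (DeltaFile delta : deltasNewestFirst) {
            byte[] page = delta.read(pageIdx);
            if (page != null)
                return page; // the newest delta containing the page wins
        }
        return mainStore[(int) pageIdx]; // not in any delta: read the FilePageStore itself
    }

    public static void main(String[] args) {
        byte[][] main = { {0}, {1}, {2} };
        DeltaFile older = new DeltaFile(new long[] {1}, new byte[][] { {10} });
        DeltaFile newer = new DeltaFile(new long[] {1, 2}, new byte[][] { {11}, {12} });

        System.out.println(readPage(List.of(newer, older), main, 1)[0]); // 11: newest delta wins
        System.out.println(readPage(List.of(newer, older), main, 0)[0]); // 0: falls back to main store
    }
}
```

The newest-first order is what makes a page overwritten at several checkpoints resolve to its latest version before the compacter merges deltas back into the main file.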
[jira] [Updated] (IGNITE-17198) Prototype of pure in-memory storage (with in-memory RAFT)
[ https://issues.apache.org/jira/browse/IGNITE-17198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-17198: --- Description: This is about the simplest case possible: stable topology, nodes never leave, corner cases are not handled. What has to be done: # Volatile RAFT meta storage # Create volatile RAFT storages for a RAFT group (meta and logs) when the RAFT group serves a volatile storage; create persistent RAFT storages otherwise What does not need to be done: # Currently, RAFT snapshots use files. We should not change this behavior in this task was: This is about the simplest case possible: stable topology, nodes never leave, corner cases are not handled. -- This message was sent by Atlassian Jira (v8.20.7#820007)
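A minimal illustration (all names are hypothetical, not Ignite's API) of the selection rule above: create volatile RAFT storages when the group serves a volatile data storage, persistent ones otherwise:

```java
public class RaftStorageSketch {
    interface RaftMetaStorage {}
    static final class VolatileMetaStorage implements RaftMetaStorage {}   // heap-only, lost on restart
    static final class PersistentMetaStorage implements RaftMetaStorage {} // file-backed

    /** Match the RAFT meta/log storage kind to the kind of data storage the group serves. */
    static RaftMetaStorage createMetaStorage(boolean volatileDataStorage) {
        return volatileDataStorage ? new VolatileMetaStorage() : new PersistentMetaStorage();
    }

    public static void main(String[] args) {
        System.out.println(createMetaStorage(true).getClass().getSimpleName());  // VolatileMetaStorage
        System.out.println(createMetaStorage(false).getClass().getSimpleName()); // PersistentMetaStorage
    }
}
```

The point of the pairing is that persisting RAFT metadata for a group whose data vanishes on restart (or vice versa) would leave the two sides inconsistent after a node restart.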
[jira] [Created] (IGNITE-17236) inline size usage of index-reader
Nikolay Izhikov created IGNITE-17236: Summary: inline size usage of index-reader Key: IGNITE-17236 URL: https://issues.apache.org/jira/browse/IGNITE-17236 Project: Ignite Issue Type: Improvement Reporter: Nikolay Izhikov It will be useful to analyze and output information about the actual usage of inline space in an index. This information can hint at suboptimal usage of space in index entries. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (IGNITE-17149) Separation of the PageMemoryStorageEngineConfigurationSchema into in-memory and persistent
[ https://issues.apache.org/jira/browse/IGNITE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17559103#comment-17559103 ] Petr Ivanov commented on IGNITE-17149: -- Merged to main, thank you for contribution. > Separation of the PageMemoryStorageEngineConfigurationSchema into in-memory > and persistent > -- > > Key: IGNITE-17149 > URL: https://issues.apache.org/jira/browse/IGNITE-17149 > Project: Ignite > Issue Type: Task >Reporter: Kirill Tkalenko >Assignee: Kirill Tkalenko >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > Time Spent: 9h > Remaining Estimate: 0h > > *Problem* > At the moment, the > *org.apache.ignite.internal.storage.pagememory.configuration.schema.PageMemoryStorageEngineConfigurationSchema* > contains configuration for in-memory and persistent > *org.apache.ignite.internal.pagememory.configuration.schema.PageMemoryDataRegionConfigurationSchema*, > which can be inconvenient for the user for several reasons: > * *PageMemoryDataRegionConfigurationSchema* contains the configuration for > in-memory and the persistent case, which can be confusing because it's not > obvious which properties to set for each; > * User does not have the ability to set a different size > *PageMemoryStorageEngineConfigurationSchema#pageSize* for in-memory and the > persistent case; > * When creating a table through SQL, it would be more convenient for the > user to simply specify the engine and use the default region than specify the > data region, let's look at the examples. 
> {code:java} > CREATE TABLE user (id INT PRIMARY KEY, name VARCHAR(255)) ENGINE pagememory > dataRegion='in-memory' > CREATE TABLE user (id INT PRIMARY KEY, name VARCHAR(255)) ENGINE pagememory > dataRegion='persistent'{code} > {code:java} > CREATE TABLE user (id INT PRIMARY KEY, name VARCHAR(255)) ENGINE > in-memory-pagememory > CREATE TABLE user (id INT PRIMARY KEY, name VARCHAR(255)) ENGINE > persistent-pagememory > {code} > *Implementation proposal* > Divide each of the following in two (in-memory and persistent): > * > *org.apache.ignite.internal.pagememory.configuration.schema.PageMemoryDataRegionConfigurationSchema* > * > *org.apache.ignite.internal.storage.pagememory.configuration.schema.PageMemoryStorageEngineConfigurationSchema* > * *org.apache.ignite.internal.storage.pagememory.PageMemoryStorageEngine* -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Commented] (IGNITE-17149) Separation of the PageMemoryStorageEngineConfigurationSchema into in-memory and persistent
[ https://issues.apache.org/jira/browse/IGNITE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17559094#comment-17559094 ] Aleksandr Polovtcev commented on IGNITE-17149: -- LGTM! -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (IGNITE-17235) Fix flaky ItBplusTreePageMemoryImplTest#testPutSizeLivelock
Kirill Tkalenko created IGNITE-17235: Summary: Fix flaky ItBplusTreePageMemoryImplTest#testPutSizeLivelock Key: IGNITE-17235 URL: https://issues.apache.org/jira/browse/IGNITE-17235 Project: Ignite Issue Type: Bug Reporter: Kirill Tkalenko Fix For: 3.0.0-alpha6 It is necessary to investigate and fix the failing flaky [ItBplusTreePageMemoryImplTest#testPutSizeLivelock|https://ci.ignite.apache.org/buildConfiguration/ignite3_Test_RunAllTests/6647751?logFilter=debug] test. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (IGNITE-17230) Support splt-file page store
[ https://issues.apache.org/jira/browse/IGNITE-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-17230: - Description: *Notes* Description may not be complete.
*Goal* To implement a new checkpoint (described in IGNITE-15818), we will introduce a new entity *DelataFilePageStore*, which will be created for each partition at each checkpoint and removed after being merged into the *FilePageStore* (the main partition file) by the compacter.
*DelataFilePageStore* will consist of:
* Header (may be updated in the course of implementation):
** Allocation *pageIdx* - the *pageIdx* of the last created page;
* Sorted list of *pageIdx* values - allows a binary search to find the file offset for a *pageId -> pageIdx* mapping;
* Page content - sorted by *pageIdx*.
What will change for *FilePageStore*:
* A list of *DelataFilePageStore* instances will be added (from the newest to the oldest by creation time);
* The allocation index (the *pageIdx* of the last created page) will become logical and be contained in the header of *FilePageStore*. At node start, it will be read from the header of *FilePageStore* or obtained from the first (newest) *DelataFilePageStore*.
How pages will be read by *pageId -> pageIdx*:
* Query each *DelataFilePageStore* in order from the newest to the oldest;
* If the page is not found, read it from the *FilePageStore* itself.
*Some implementation notes*
* The file name format for a *DelataFilePageStore* is *part-%d-delta-%d.bin*, for example *part-1-delta-3.bin*, where the first number is the partition identifier and the second is the serial number of the delta file for this partition;
* Before *part-1-delta-3.bin* is created, a temporary file *part-1-delta-3.bin.tmp* will be created at the checkpoint first, then filled, then renamed to *part-1-delta-3.bin*;
* Since the indexes will be stored in partitions, we can get rid of the code associated with the index partition file;
* Fix flaky [FilePageStoreManagerTest#testStopAllGroupFilePageStores|https://ci.ignite.apache.org/test/6999203413272911470?currentProjectId=ignite3_Test&branch=%3Cdefault%3E].
> Support split-file page store
> Key: IGNITE-17230
> URL: https://issues.apache.org/jira/browse/IGNITE-17230
> Project: Ignite
> Issue Type: Task
> Reporter: Kirill Tkalenko
> Priority: Major
> Labels: ignite-3
> Fix For: 3.0.0-alpha6
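The read path described above (query each delta store from newest to oldest, then fall back to the main partition file) can be sketched as follows. This is an illustrative sketch, not the actual Ignite 3 API: the class and method names are hypothetical, and the sorted *pageIdx* array stands in for the on-disk delta-file header that enables the binary search mentioned in the description.

```java
import java.util.Arrays;
import java.util.List;

/** Illustrative sketch of the delta-store read path; all names are hypothetical. */
final class DeltaLookup {
    /** One delta file, modeled as the sorted array of pageIdx values from its header. */
    record DeltaStore(int[] sortedPageIdx) {
        /** Binary search; a non-negative result is the page's position (hence file offset) in this delta file. */
        int find(int pageIdx) {
            return Arrays.binarySearch(sortedPageIdx, pageIdx);
        }
    }

    /** Walk delta stores from newest to oldest; fall back to the main FilePageStore on a miss. */
    static String resolve(List<DeltaStore> newestFirst, int pageIdx) {
        for (int i = 0; i < newestFirst.size(); i++) {
            int pos = newestFirst.get(i).find(pageIdx);
            if (pos >= 0)
                return "delta[" + i + "]@" + pos; // found in the i-th newest delta file
        }
        return "main-store"; // not in any delta file: read from FilePageStore itself
    }
}
```

Because the walk stops at the first (newest) delta file containing the page, a page rewritten at a later checkpoint shadows its older copies without any merging until the compacter runs.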
[jira] [Created] (IGNITE-17234) "version" and "probe" REST commands should not require authentication
Dmitriy Borunov created IGNITE-17234: Summary: "version" and "probe" REST commands should not require authentication Key: IGNITE-17234 URL: https://issues.apache.org/jira/browse/IGNITE-17234 Project: Ignite Issue Type: Improvement Components: rest Reporter: Dmitriy Borunov Assignee: Dmitriy Borunov *Actual:* /ignite?cmd=version and /ignite?cmd=probe both return: {code:java} {"successStatus":2,"error":"Failed to authenticate remote client (secure session SPI not set?): GridRestRequest [destId=null, clientId=3fbf0a38-4d80-42f3-9f77-a0ba7e2da396, addr=/127.0.0.1:54649, cmd=, authCtx=null]","sessionToken":null,"response":null} {code} *Expected:* {code:java} {"successStatus":0,"error":null,"sessionToken":null,"response":"grid has started"} {code} These two commands should not require authentication: authentication involves the system cache (transactions + PME), so the commands may be blocked for some time or time out, which can be misinterpreted as a cluster failure.
[jira] [Commented] (IGNITE-17002) Indexes rebuild in Maintenance Mode
[ https://issues.apache.org/jira/browse/IGNITE-17002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17559079#comment-17559079 ] Ignite TC Bot commented on IGNITE-17002: {panel:title=Branch: [pull/10042/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} {panel:title=Branch: [pull/10042/head] Base: [master] : New Tests (13)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1} {color:#8b}Control Utility{color} [[tests 12|https://ci.ignite.apache.org/viewLog.html?buildId=6647402]] * {color:#013220}IgniteControlUtilityTestSuite: CommandHandlerParsingTest.testScheduleIndexRebuildArgs - PASSED{color} * {color:#013220}IgniteControlUtilityTestSuite: CommandHandlerParsingTest.testScheduleIndexRebuildWrongArgs - PASSED{color} * {color:#013220}IgniteControlUtilityTestSuite: GridCommandHandlerScheduleIndexRebuildTest.testConsecutiveCommandInvocations - PASSED{color} * {color:#013220}IgniteControlUtilityTestSuite: GridCommandHandlerScheduleIndexRebuildTest.testErrors - PASSED{color} * {color:#013220}IgniteControlUtilityTestSuite: GridCommandHandlerScheduleIndexRebuildTest.testCorruptedIndexRebuildCacheWithGroup - PASSED{color} * {color:#013220}IgniteControlUtilityTestSuite: GridCommandHandlerScheduleIndexRebuildTest.testSpecificIndexes - PASSED{color} * {color:#013220}IgniteControlUtilityTestSuite: GridCommandHandlerScheduleIndexRebuildTest.testCorruptedIndexRebuildCacheOnAllNodes - PASSED{color} * {color:#013220}IgniteControlUtilityTestSuite: GridCommandHandlerScheduleIndexRebuildTest.testCacheGroupParameter - PASSED{color} * {color:#013220}IgniteControlUtilityTestSuite: GridCommandHandlerScheduleIndexRebuildTest.testCorruptedIndexRebuildCacheWithGroupOnAllNodes - PASSED{color} * {color:#013220}IgniteControlUtilityTestSuite: GridCommandHandlerScheduleIndexRebuildTest.testRebuild - PASSED{color} * {color:#013220}IgniteControlUtilityTestSuite: 
GridCommandHandlerScheduleIndexRebuildTest.testCacheGroupParameterWithCacheNames - PASSED{color} ... and 1 new tests {color:#8b}PDS (Indexing){color} [[tests 1|https://ci.ignite.apache.org/viewLog.html?buildId=6646137]] * {color:#013220}IgnitePdsWithIndexingTestSuite: MaintenanceRebuildIndexUtilsSelfTest.testConstructFromMap - PASSED{color} {panel} [TeamCity *--> Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=6645056&buildTypeId=IgniteTests24Java8_RunAll] > Indexes rebuild in Maintenance Mode > Key: IGNITE-17002 > URL: https://issues.apache.org/jira/browse/IGNITE-17002 > Project: Ignite > Issue Type: Improvement > Components: control.sh, persistence > Reporter: Sergey Chugunov > Assignee: Semyon Danilov > Priority: Major > Fix For: 2.14 > Time Spent: 3.5h > Remaining Estimate: 0h > Ignite now supports entering Maintenance Mode automatically after index corruption - this was implemented in the linked issue. > But there are use cases where a user needs to request rebuilding of specific indexes in MM, so we need to provide a control.sh API for such requests. > Also, for better integration with monitoring tools, it would be nice to provide an API to check the status of the rebuild task, and to print a message to the logs when each task finishes and when all tasks are finished.
[jira] [Updated] (IGNITE-17162) Fix init cluster command options
[ https://issues.apache.org/jira/browse/IGNITE-17162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-17162: - Issue Type: Improvement (was: Task) > Fix init cluster command options > Key: IGNITE-17162 > URL: https://issues.apache.org/jira/browse/IGNITE-17162 > Project: Ignite > Issue Type: Improvement > Reporter: Vadim Pakhnushev > Assignee: Vadim Pakhnushev > Priority: Major > Labels: ignite-3, ignite-3-cli-tool > Time Spent: 0.5h > Remaining Estimate: 0h > Currently the "cluster init" command uses the --node-endpoint option, which requires passing the endpoint without the URL scheme. > It should use the --cluster-url option, as stated in the IEP.
[jira] [Updated] (IGNITE-17162) Fix init cluster command options
[ https://issues.apache.org/jira/browse/IGNITE-17162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-17162: - Ignite Flags: (was: Docs Required,Release Notes Required) > Fix init cluster command options > Key: IGNITE-17162 > URL: https://issues.apache.org/jira/browse/IGNITE-17162 > Project: Ignite > Issue Type: Task > Reporter: Vadim Pakhnushev > Assignee: Vadim Pakhnushev > Priority: Major > Labels: ignite-3, ignite-3-cli-tool > Time Spent: 0.5h > Remaining Estimate: 0h > Currently the "cluster init" command uses the --node-endpoint option, which requires passing the endpoint without the URL scheme. > It should use the --cluster-url option, as stated in the IEP.
[jira] [Updated] (IGNITE-17233) Clarify node and cluster URL parameter names
[ https://issues.apache.org/jira/browse/IGNITE-17233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vadim Pakhnushev updated IGNITE-17233: -- Epic Link: IGNITE-16970 > Clarify node and cluster URL parameter names > Key: IGNITE-17233 > URL: https://issues.apache.org/jira/browse/IGNITE-17233 > Project: Ignite > Issue Type: Task > Reporter: Vadim Pakhnushev > Assignee: Vadim Pakhnushev > Priority: Major > Labels: ignite-3, ignite-3-cli-tool > [IEP-88|https://cwiki.apache.org/confluence/display/IGNITE/IEP-88%3A+CLI+Tool] states that commands for cluster management use the --cluster-url parameter for the endpoint, while commands specific to a node use --node-url; in fact, --cluster-url is a URL to any node in the cluster. > We could use --node-url in cluster commands as well.
[jira] [Created] (IGNITE-17233) Clarify node and cluster URL parameter names
Vadim Pakhnushev created IGNITE-17233: - Summary: Clarify node and cluster URL parameter names Key: IGNITE-17233 URL: https://issues.apache.org/jira/browse/IGNITE-17233 Project: Ignite Issue Type: Task Reporter: Vadim Pakhnushev Assignee: Vadim Pakhnushev [IEP-88|https://cwiki.apache.org/confluence/display/IGNITE/IEP-88%3A+CLI+Tool] states that commands for cluster management use the --cluster-url parameter for the endpoint, while commands specific to a node use --node-url; in fact, --cluster-url is a URL to any node in the cluster. We could use --node-url in cluster commands as well.
[jira] [Commented] (IGNITE-17181) index-reader add size of data
[ https://issues.apache.org/jira/browse/IGNITE-17181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17559076#comment-17559076 ] Ignite TC Bot commented on IGNITE-17181: {panel:title=Branch: [pull/10102/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} {panel:title=Branch: [pull/10102/head] Base: [master] : New Tests (24)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1} {color:#8b}PDS 5{color} [[tests 24|https://ci2.ignite.apache.org/viewLog.html?buildId=6496626]] * {color:#013220}IgnitePdsTestSuite5: PageIOFreeSizeTest.testPageMetaIO[pageSz=4,096] - PASSED{color} * {color:#013220}IgnitePdsTestSuite5: PageIOFreeSizeTest.testPartitionCountersIO[pageSz=4,096] - PASSED{color} * {color:#013220}IgnitePdsTestSuite5: PageIOFreeSizeTest.testBPlusIO[pageSz=4,096] - PASSED{color} * {color:#013220}IgnitePdsTestSuite5: PageIOFreeSizeTest.testBPlusMetaIO[pageSz=4,096] - PASSED{color} * {color:#013220}IgnitePdsTestSuite5: PageIOFreeSizeTest.testPagesListNodeIO[pageSz=4,096] - PASSED{color} * {color:#013220}IgnitePdsTestSuite5: PageIOFreeSizeTest.testPagesListMetaIO[pageSz=4,096] - PASSED{color} * {color:#013220}IgnitePdsTestSuite5: PageIOFreeSizeTest.testTrackingPageIO[pageSz=4,096] - PASSED{color} * {color:#013220}IgnitePdsTestSuite5: PageIOFreeSizeTest.testDataPageIO[pageSz=4,096] - PASSED{color} * {color:#013220}IgnitePdsTestSuite5: PageIOFreeSizeTest.testPageMetaIO[pageSz=8,192] - PASSED{color} * {color:#013220}IgnitePdsTestSuite5: PageIOFreeSizeTest.testPartitionCountersIO[pageSz=8,192] - PASSED{color} * {color:#013220}IgnitePdsTestSuite5: PageIOFreeSizeTest.testBPlusIO[pageSz=8,192] - PASSED{color} ... 
and 13 new tests {panel} [TeamCity *--> Run :: All* Results|https://ci2.ignite.apache.org/viewLog.html?buildId=6496668&buildTypeId=IgniteTests24Java8_RunAll] > index-reader add size of data > Key: IGNITE-17181 > URL: https://issues.apache.org/jira/browse/IGNITE-17181 > Project: Ignite > Issue Type: Improvement > Reporter: Nikolay Izhikov > Assignee: Nikolay Izhikov > Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > It will be useful to calculate and output the size of index pages vs. the size of the data stored inside the pages.
[jira] [Updated] (IGNITE-17230) Support split-file page store
[ https://issues.apache.org/jira/browse/IGNITE-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Bessonov updated IGNITE-17230: --- Description: *Notes* Description may not be complete.
*Goal* To implement a new checkpoint (described in IGNITE-15818), we will introduce a new entity *DelataFilePageStore*, which will be created for each partition at each checkpoint and removed after being merged into the *FilePageStore* (the main partition file) by the compacter.
*DelataFilePageStore* will consist of:
* Header (may be updated in the course of implementation):
** Allocation *pageIdx* - the *pageIdx* of the last created page;
* Sorted list of *pageIdx* values - allows a binary search to find the file offset for a *pageId -> pageIdx* mapping;
* Page content - sorted by *pageIdx*.
What will change for *FilePageStore*:
* A list of *DelataFilePageStore* instances will be added (from the newest to the oldest by creation time);
* The allocation index (the *pageIdx* of the last created page) will become logical and be contained in the header of *FilePageStore*. At node start, it will be read from the header of *FilePageStore* or obtained from the first (newest) *DelataFilePageStore*.
How pages will be read by *pageId -> pageIdx*:
* Query each *DelataFilePageStore* in order from the newest to the oldest;
* If the page is not found, read it from the *FilePageStore* itself.
*Some implementation notes*
* The file name format for a *DelataFilePageStore* is *part-%d-delta-%d.bin*, for example *part-1-delta-3.bin*, where the first number is the partition identifier and the second is the serial number of the delta file for this partition;
* Before *part-1-delta-3.bin* is created, a temporary file *part-1-delta-3.bin.tmp* will be created at the checkpoint first, then filled, then renamed to *part-1-delta-3.bin*.
> Support split-file page store
> Key: IGNITE-17230
> URL: https://issues.apache.org/jira/browse/IGNITE-17230
> Project: Ignite
> Issue Type: Task
> Reporter: Kirill Tkalenko
> Priority: Major
> Labels: ignite-3
> Fix For: 3.0.0-alpha6
[jira] [Updated] (IGNITE-17230) Support split-file page store
[ https://issues.apache.org/jira/browse/IGNITE-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-17230: - Description: *Notes* Description may not be complete.
*Goal* To implement a new checkpoint (described in IGNITE-15818), we will introduce a new entity *DelataFilePageStore*, which will be created for each partition at each checkpoint and removed after being merged into the *FilePageStore* (the main partition file) by the compacter.
*DelataFilePageStore* will consist of:
* Header (may be updated in the course of implementation):
** Allocation *pageIdx* - the *pageIdx* of the last created page;
* Sorted list of *pageIdx* values - allows a binary search to find the file offset for a *pageId -> pageIdx* mapping;
* Page content - sorted by *pageIdx*.
What will change for *FilePageStore*:
* A list of *DelataFilePageStore* instances will be added (from the newest to the oldest by creation time);
* The allocation index (the *pageIdx* of the last created page) will become logical and be contained in the header of *FilePageStore*. At node start, it will be read from the header of *FilePageStore* or obtained from the first (newest) *DelataFilePageStore*.
How pages will be read by *pageId -> pageIdx*:
* Query each *DelataFilePageStore* in order from the newest to the oldest;
* If the page is not found, read it from the *FilePageStore* itself.
*Some implementation notes*
* The file name format for a *DelataFilePageStore* is *part-%d-delta-%d.bin*, for example *part-1-delta-3.bin*, where the first number is the partition identifier and the second is the serial number of the delta file for this partition;
* Before *part-1-delta-3.bin* is created, a temporary file *part-1-delta-3.bin.tmp* will be created at the checkpoint first, then filled, then renamed to *part-1-delta-3.bin*;
* Since the indexes will be stored in partitions, we can get rid of the code associated with the index partition file.
> Support split-file page store
> Key: IGNITE-17230
> URL: https://issues.apache.org/jira/browse/IGNITE-17230
> Project: Ignite
> Issue Type: Task
> Reporter: Kirill Tkalenko
> Priority: Major
> Labels: ignite-3
> Fix For: 3.0.0-alpha6
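The temp-file-then-rename step from the implementation notes can be sketched like this. The helper below is illustrative, not Ignite code: filling *part-N-delta-M.bin.tmp* first and only then renaming it to the final name means a crash mid-checkpoint can never leave a partially written *.bin* file visible to readers.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/** Illustrative sketch of crash-safe delta-file creation via temp file + rename. */
final class DeltaFileWriter {
    static Path writeDeltaFile(Path dir, int partId, int deltaSeq, byte[] content) throws IOException {
        // File name format from the ticket: part-%d-delta-%d.bin
        String name = String.format("part-%d-delta-%d.bin", partId, deltaSeq);
        Path tmp = dir.resolve(name + ".tmp");
        Path target = dir.resolve(name);

        Files.write(tmp, content); // fill the temporary file first

        // Atomic rename: readers either see the complete delta file or no file at all.
        return Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
    }
}
```

On startup, any leftover *.tmp* files would simply be deleted, since their contents were never published under the final name.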
[jira] [Resolved] (IGNITE-17218) Add isRequired flag for SpringResource annotation
[ https://issues.apache.org/jira/browse/IGNITE-17218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amelchev Nikita resolved IGNITE-17218. -- Release Note: Added support of optional beans for SpringResource annotation injection Resolution: Fixed Merged into the master. [~PetrovMikhail], thank you for the contribution. [~xtern], thanks for the review. > Add isRequired flag for SpringResource annotation > Key: IGNITE-17218 > URL: https://issues.apache.org/jira/browse/IGNITE-17218 > Project: Ignite > Issue Type: Improvement > Reporter: Mikhail Petrov > Assignee: Mikhail Petrov > Priority: Major > Fix For: 2.14 > Time Spent: 1h > Remaining Estimate: 0h > We need to add an isRequired flag for the SpringResource annotation that allows making some injected beans optional. Currently the user faces an exception if no bean with the specified name or type is defined in the Spring context.
[jira] [Updated] (IGNITE-17230) Support split-file page store
[ https://issues.apache.org/jira/browse/IGNITE-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Bessonov updated IGNITE-17230: --- Description: *Notes* Description may not be complete.
*Goal* To implement a new checkpoint (described in IGNITE-15818), we will introduce a new entity *DelataFilePageStore*, which will be created for each partition at each checkpoint and removed after being merged into the *FilePageStore* (the main partition file) by the compacter.
*DelataFilePageStore* will consist of:
* Header (may be updated in the course of implementation):
** Allocation *pageIdx* - the *pageIdx* of the last created page;
* Sorted list of *pageIds* - allows a binary search to find the file offset for a *pageId -> pageIdx* mapping;
* Page content - sorted by *pageIdx*.
What will change for *FilePageStore*:
* A list of *DelataFilePageStore* instances will be added (from the newest to the oldest by creation time);
* The allocation index (the *pageIdx* of the last created page) will become logical and be contained in the header of *FilePageStore*. At node start, it will be read from the header of *FilePageStore* or obtained from the first (newest) *DelataFilePageStore*.
How pages will be read by *pageId -> pageIdx*:
* Query each *DelataFilePageStore* in order from the newest to the oldest;
* If the page is not found, read it from the *FilePageStore* itself.
*Some implementation notes*
* The file name format for a *DelataFilePageStore* is *part-%d-delta-%d.bin*, for example *part-1-delta-3.bin*, where the first number is the partition identifier and the second is the serial number of the delta file for this partition;
* Before *part-1-delta-3.bin* is created, a temporary file *part-1-delta-3.bin.tmp* will be created at the checkpoint first, then filled, then renamed to *part-1-delta-3.bin*.
> Support split-file page store
> Key: IGNITE-17230
> URL: https://issues.apache.org/jira/browse/IGNITE-17230
> Project: Ignite
> Issue Type: Task
> Reporter: Kirill Tkalenko
> Priority: Major
> Labels: ignite-3
> Fix For: 3.0.0-alpha6
[jira] [Updated] (IGNITE-17218) Add isRequired flag for SpringResource annotation
[ https://issues.apache.org/jira/browse/IGNITE-17218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amelchev Nikita updated IGNITE-17218: - Fix Version/s: 2.14 > Add isRequired flag for SpringResource annotation > Key: IGNITE-17218 > URL: https://issues.apache.org/jira/browse/IGNITE-17218 > Project: Ignite > Issue Type: Improvement > Reporter: Mikhail Petrov > Assignee: Mikhail Petrov > Priority: Major > Fix For: 2.14 > Time Spent: 1h > Remaining Estimate: 0h > We need to add an isRequired flag for the SpringResource annotation that allows making some injected beans optional. Currently the user faces an exception if no bean with the specified name or type is defined in the Spring context.
[jira] [Updated] (IGNITE-17230) Support split-file page store
[ https://issues.apache.org/jira/browse/IGNITE-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-17230:

Description:

*Notes* Description may not be complete.

*Goal* To implement a new checkpoint (described in IGNITE-15818), we will introduce a new entity, *DeltaFilePageStore*, which will be created for each partition at each checkpoint and removed after being merged into the *FilePageStore* (the main partition file) by the compacter.

*DeltaFilePageStore* will consist of:
* Header (may be updated in the course of implementation):
** Allocation *pageIdx* - the *pageIdx* of the last created page;
* Sorted list of *pageIdx* - allows a binary search to find the file offset for a *pageId -> pageIdx*;
* Page content - sorted by *pageIdx*.

What will change for *FilePageStore*:
* A list of *DeltaFilePageStore* instances will be added (ordered from newest to oldest by creation time);
* The allocation index (the *pageIdx* of the last created page) becomes logical and is stored in the header of *FilePageStore*. At node start, it will be read from the header of *FilePageStore* or obtained from the first (newest) *DeltaFilePageStore*.

How pages will be read by *pageId -> pageIdx*:
* Query each *DeltaFilePageStore* in order from newest to oldest;
* If the page is not found there, read it from the *FilePageStore* itself.

*Some implementation notes*
* The file name format for a *DeltaFilePageStore* is *part-%d-delta-%d.bin*, for example *part-1-delta-3.bin*, where the first number is the partition identifier and the second is the sequence number of the delta file for that partition;
* Before *part-1-delta-3.bin* is created, a temporary file *part-1-delta-3.bin.tmp* is created at the checkpoint, filled, and then renamed to *part-1-delta-3.bin*.

> Support split-file page store
>
> Key: IGNITE-17230
> URL: https://issues.apache.org/jira/browse/IGNITE-17230
> Project: Ignite
> Issue Type: Task
> Reporter: Kirill Tkalenko
> Priority: Major
> Labels: ignite-3
> Fix For: 3.0.0-alpha6
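The sorted-*pageIdx* lookup described above can be sketched as follows. This is a minimal illustration, not the actual Ignite 3 code: the class name, the 4-byte index entries, and the exact header/index/pages file layout are assumptions.

```java
import java.util.Arrays;

/** Sketch of a delta file's pageIdx -> file offset lookup (hypothetical layout). */
public class DeltaIndex {
    private final int[] sortedPageIdx; // sorted pageIdx values contained in this delta file
    private final long headerSize;
    private final long pageSize;

    public DeltaIndex(int[] sortedPageIdx, long headerSize, long pageSize) {
        this.sortedPageIdx = sortedPageIdx;
        this.headerSize = headerSize;
        this.pageSize = pageSize;
    }

    /** Returns the file offset of the page, or -1 if this delta file does not contain it. */
    public long offset(int pageIdx) {
        int pos = Arrays.binarySearch(sortedPageIdx, pageIdx);
        if (pos < 0)
            return -1; // not here: fall through to an older delta file or the FilePageStore

        // Assumed layout: header, then one 4-byte index entry per page,
        // then the page bodies in the same sorted order.
        long indexSize = (long) sortedPageIdx.length * Integer.BYTES;
        return headerSize + indexSize + (long) pos * pageSize;
    }
}
```

Because the page bodies are written in the same order as the sorted index, the position returned by the binary search directly yields the page's slot in the data section.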
[jira] [Resolved] (IGNITE-17109) Error handling for invalid url passed to any command
[ https://issues.apache.org/jira/browse/IGNITE-17109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Pochatkin resolved IGNITE-17109. Resolution: Won't Fix

> Error handling for invalid url passed to any command
>
> Key: IGNITE-17109
> URL: https://issues.apache.org/jira/browse/IGNITE-17109
> Project: Ignite
> Issue Type: Task
> Reporter: Aleksandr
> Priority: Major
> Labels: ignite-3
>
> h2. Description
> Different commands, given the wrong URL, display different messages. For example:
>
> {code:java}
> [disconnected]> connect lkhjasdflkjhhasdf
> 2022-06-06 10:41:04:665 +0100 [ERROR][main][ExceptionHandler] Unhandled exception
> java.lang.IllegalArgumentException: Expected URL scheme 'http' or 'https' but no colon was found
>     at okhttp3.HttpUrl$Builder.parse$okhttp(HttpUrl.kt:1260)
>     at okhttp3.HttpUrl$Companion.get(HttpUrl.kt:1633)
>     at okhttp3.Request$Builder.url(Request.kt:184)
>     ...
> Internal error! {code}
>
> {code:java}
> [disconnected]> connect http://kjhasdflkjhhasdf:10300/
> Api error: null
> {code}
>
> {code:java}
> [disconnected]> sql -u=hdbkljghhgasdflkjhasdf
> Connection failed. {code}
>
> h2. To-Do
> * Test all possible variations of incorrect URLs with integration/interface tests
> * Define a single error handler for the wrong URL, port, etc.
> As a result, all variations of wrong data passed should be handled in a consistent way. The user has to see the same messages for the same mistakes regardless of the command used.

-- This message was sent by Atlassian Jira (v8.20.7#820007)
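The "single error handler" idea in the To-Do could look like the following sketch for the malformed-URL case. The class name and message wording are assumptions for illustration; this is not the actual ignite-3 CLI code, and it only covers syntactically invalid URLs, not unreachable hosts.

```java
import java.net.URI;
import java.net.URISyntaxException;

/** Hypothetical single validation point for user-supplied node URLs. */
public final class NodeUrlValidator {
    private NodeUrlValidator() {
    }

    /** Returns the URL unchanged if well-formed, otherwise throws one uniform error. */
    public static String validate(String rawUrl) {
        try {
            URI uri = new URI(rawUrl);
            String scheme = uri.getScheme();

            // Require an explicit http/https scheme and a host part.
            if (scheme == null || !(scheme.equals("http") || scheme.equals("https")))
                throw new IllegalArgumentException();
            if (uri.getHost() == null)
                throw new IllegalArgumentException();

            return uri.toString();
        } catch (URISyntaxException | IllegalArgumentException e) {
            // One message for every malformed URL, regardless of which command received it.
            throw new IllegalArgumentException("Invalid node URL: " + rawUrl);
        }
    }
}
```

Every command would call this validator before building a request, so `connect`, `sql`, and friends produce the same message for the same mistake instead of leaking an okhttp stack trace.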
[jira] [Updated] (IGNITE-17230) Support split-file page store
[ https://issues.apache.org/jira/browse/IGNITE-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-17230:

Description:

*Notes* Description may not be complete.

*Goal* To implement a new checkpoint (described in IGNITE-15818), we will introduce a new entity, *DeltaFilePageStore*, which will be created for each partition at each checkpoint and removed after being merged into the *FilePageStore* (the main partition file) by the compacter.

*DeltaFilePageStore* will consist of:
* Header (may be updated in the course of implementation):
** Allocation *pageIdx* - the *pageIdx* of the last created page;
* Sorted list of *pageIdx* - allows a binary search to find the file offset for a *pageId -> pageIdx*;
* Page content - sorted by *pageIdx*.

What will change for *FilePageStore*:
* A list of *DeltaFilePageStore* instances will be added (ordered from newest to oldest by creation time);
* The allocation index (the *pageIdx* of the last created page) becomes logical and is stored in the header of *FilePageStore*. At node start, it will be read from the header of *FilePageStore* or obtained from the first (newest) *DeltaFilePageStore*.

How pages will be read by *pageId -> pageIdx*:
* Query each *DeltaFilePageStore* in order from newest to oldest;
* If the page is not found there, read it from the *FilePageStore* itself.
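The read order in the issue (newest delta file first, then older ones, then the main partition file) can be sketched as below. The `PageStore` interface and null-for-absent convention are assumptions for illustration, not the actual Ignite 3 API.

```java
import java.util.List;

/** Sketch of the pageId -> pageIdx read path across delta files and the main file. */
public class PageReader {
    /** Hypothetical store interface; returns null if the page is absent. */
    interface PageStore {
        byte[] read(int pageIdx);
    }

    private final List<PageStore> deltas; // ordered newest -> oldest
    private final PageStore filePageStore;

    public PageReader(List<PageStore> deltas, PageStore filePageStore) {
        this.deltas = deltas;
        this.filePageStore = filePageStore;
    }

    public byte[] read(int pageIdx) {
        for (PageStore delta : deltas) {
            byte[] page = delta.read(pageIdx);
            if (page != null)
                return page; // the newest delta wins: it holds the latest version of the page
        }
        // Not rewritten since the last merge: fall back to the main partition file.
        return filePageStore.read(pageIdx);
    }
}
```

Iterating newest-to-oldest is what makes the scheme correct: a page rewritten at several checkpoints exists in several delta files, and only the most recent copy may be returned.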
[jira] [Updated] (IGNITE-17230) Support split-file page store
[ https://issues.apache.org/jira/browse/IGNITE-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-17230:

Description:

*Notes* Description may not be complete.

*Goal* To implement a new checkpoint (described in IGNITE-15818), we will introduce a new entity, *DeltaFilePageStore*, which will be created for each partition at each checkpoint and removed after being merged into the *FilePageStore* (the main partition file) by the compacter.

*DeltaFilePageStore* will consist of:
* Header (may be updated in the course of implementation):
** Allocation *pageIdx* - the *pageIdx* of the last created page;
* Sorted list of *pageIdx* - allows a binary search to find the file offset for a *pageId -> pageIdx*;
* Page content - sorted by *pageIdx*.

What will change for *FilePageStore*:
* A list of *DeltaFilePageStore* instances will be added (ordered from newest to oldest by creation time);
* The allocation index (the *pageIdx* of the last created page) becomes logical and is stored in the header of *FilePageStore*. At node start, it will be read from the header of *FilePageStore* or obtained from the first (newest) *DeltaFilePageStore*.

To read a page by *pageId -> pageIdx*, we will first try to find it among the *DeltaFilePageStore* instances (from the newest to the oldest); if we do not find it among them, we will read it from the *FilePageStore*.
[jira] [Updated] (IGNITE-17230) Support split-file page store
[ https://issues.apache.org/jira/browse/IGNITE-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-17230:

Description:

*Notes* Description may not be complete.

*Goal* To implement a new checkpoint (described in IGNITE-15818), we will introduce a new entity, *DeltaFilePageStore*, which will be created for each partition at each checkpoint and removed after being merged into the *FilePageStore* (the main partition file) by the compacter.

*DeltaFilePageStore* will consist of:
* Header (may be updated in the course of implementation):
** Allocation *pageIdx* - the *pageIdx* of the last created page;
* Sorted list of *pageIdx* - allows a binary search to find the file offset for a *pageId -> pageIdx*;
* Page content - sorted by *pageIdx*.

What will change for *FilePageStore*:
* The allocation index (the *pageIdx* of the last created page) becomes logical and is stored in the header of *FilePageStore*. At node start, it will be read from the header of *FilePageStore* or obtained from the first (newest) *DeltaFilePageStore*.

To implement the new checkpoint (described in IGNITE-15818), we need to modify *FilePageStore*. A list of *DeltaFilePageStore* (from new to old) will be added to its structure; it will grow after each checkpoint (at its completion) and shrink after each merge into the *FilePageStore* by the compacter.

*DeltaFilePageStore* will contain:
* Sorted list of *pageIdx*: allows a binary search to find the offset in this file for the requested *pageId -> pageIdx*;
* Allocation index: the *pageIdx* of the last allocated page;
* The pages themselves, sorted by *pageIdx*.

To read a page by *pageId -> pageIdx*, we will first try to find it among the *DeltaFilePageStore* instances (from the newest to the oldest); if we do not find it among them, we will read it from the *FilePageStore*.
[jira] [Updated] (IGNITE-17230) Support split-file page store
[ https://issues.apache.org/jira/browse/IGNITE-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-17230:

Description:

*Notes* Description may not be complete.

*Goal* To implement a new checkpoint (described in IGNITE-15818), we will introduce a new entity, *DeltaFilePageStore*, which will be created for each partition at each checkpoint. It will consist of:
* Header (may be updated in the course of implementation):
** Allocation *pageIdx* - the *pageIdx* of the last created page;
* Sorted list of *pageIdx* - allows a binary search to find the file offset for a *pageId -> pageIdx*;
* Page content - sorted by *pageIdx*.

To implement the new checkpoint (described in IGNITE-15818), we need to modify *FilePageStore*. A list of *DeltaFilePageStore* (from new to old) will be added to its structure; it will grow after each checkpoint (at its completion) and shrink after each merge into the *FilePageStore* by the compacter.

*DeltaFilePageStore* will contain:
* Sorted list of *pageIdx*: allows a binary search to find the offset in this file for the requested *pageId -> pageIdx*;
* Allocation index: the *pageIdx* of the last allocated page;
* The pages themselves, sorted by *pageIdx*.

To read a page by *pageId -> pageIdx*, we will first try to find it among the *DeltaFilePageStore* instances (from the newest to the oldest); if we do not find it among them, we will read it from the *FilePageStore*.
[jira] [Updated] (IGNITE-17232) Optimization of DeltaFilePageStore: write new pages directly to FilePageStore
[ https://issues.apache.org/jira/browse/IGNITE-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-17232:

Description:

When creating a *DeltaFilePageStore* at a checkpoint, we sort the list of all dirty pages of the partition by *pageIdx* and write to disk the sorted list of *pageIdx* (for *pageId -> pageIdx* binary lookup), the contents of the dirty pages, and the current *pageIdx* of the page allocations.

I propose to optimize this a bit. In the *DeltaFilePageStore*, store only changes to existing pages, and write all new pages immediately to the *FilePageStore*. This reduces the work for the compacter (it will need to write less to the main partition file), and the sorted list of *pageIdx* will be smaller.

Since the allocation index becomes logical (it is stored in the *FilePageStore*) and depends on the first (newest) *DeltaFilePageStore*, if a checkpoint does not complete we will not lose or break anything in the *FilePageStore*, and at the next checkpoint we will simply overwrite the new pages written by the unfinished previous checkpoint.

> Optimization of DeltaFilePageStore: write new pages directly to FilePageStore
>
> Key: IGNITE-17232
> URL: https://issues.apache.org/jira/browse/IGNITE-17232
> Project: Ignite
> Issue Type: Improvement
> Reporter: Kirill Tkalenko
> Priority: Major
> Labels: ignite-3
> Fix For: 3.0.0-alpha6
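The proposed routing can be sketched as a simple predicate over the dirty page's index. Comparing against the base allocation index of the main file is an assumption about how "existing" vs. "new" pages would be distinguished; the class and method names are illustrative only.

```java
/** Sketch of checkpoint write routing: changed existing pages go to the delta
 *  file, newly allocated pages go straight to the main partition file. */
public class CheckpointWriteRouter {
    private final int baseAllocationIdx; // highest pageIdx already present in FilePageStore

    public CheckpointWriteRouter(int baseAllocationIdx) {
        this.baseAllocationIdx = baseAllocationIdx;
    }

    /** true -> write to DeltaFilePageStore; false -> write directly to FilePageStore. */
    public boolean goesToDelta(int dirtyPageIdx) {
        // Pages at or below the base index already exist in the main file,
        // so only their changed content needs to be captured in the delta.
        return dirtyPageIdx <= baseAllocationIdx;
    }
}
```

Under this split, the delta file's sorted index covers only rewritten pages, which is exactly why the compacter has less to merge back.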
[jira] [Updated] (IGNITE-17232) Optimization of DeltaFilePageStore: write new pages directly to FilePageStore
[ https://issues.apache.org/jira/browse/IGNITE-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-17232:

Description:

When creating a *DeltaFilePageStore* at a checkpoint, we sort the list of all dirty pages of the partition by *pageIdx* and write to disk the sorted list of *pageIdx* (for *pageId -> pageIdx* binary lookup), the contents of the dirty pages, and the current *pageIdx* of the page allocations.

I propose to optimize this a bit. In the *DeltaFilePageStore*, store only changes to existing pages, and write all new pages immediately to the *FilePageStore*. This reduces the work for the compacter (it will need to write less to the main partition file), and the sorted list of *pageIdx* will be smaller.
[jira] [Updated] (IGNITE-17232) Optimization of DeltaFilePageStore: write new pages directly to FilePageStore
[ https://issues.apache.org/jira/browse/IGNITE-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-17232:

Description:

When creating a *DeltaFilePageStore* at a checkpoint, we sort the list of all dirty pages of the partition by *pageIdx* and write to disk the sorted list of *pageIdx* (for *pageId -> pageIdx* binary lookup), the contents of the dirty pages, and the current *pageIdx* of the page allocations.

I propose to optimize this a bit. In the *DeltaFilePageStore*, store only changes to existing pages, and write all new pages immediately to the *FilePageStore*. This reduces the work for the compacter (it will need to write less to the main partition file), and the sorted list of *pageIdx* will be smaller.
[jira] [Created] (IGNITE-17232) Optimization of DeltaFilePageStore: write new pages directly to FilePageStore
Kirill Tkalenko created IGNITE-17232: Summary: Optimization of DeltaFilePageStore: write new pages directly to FilePageStore Key: IGNITE-17232 URL: https://issues.apache.org/jira/browse/IGNITE-17232 Project: Ignite Issue Type: Improvement Reporter: Kirill Tkalenko Fix For: 3.0.0-alpha6

When creating a *DeltaFilePageStore* at a checkpoint, we sort the list of all dirty pages in the partition by *pageIdx* and write to disk the sorted list of *pageIdx* (for binary lookup), the contents of the dirty pages, and the current *pageIdx* of the page allocations.

I propose to optimize this a bit. In the *DeltaFilePageStore*, store only changes to existing pages, and write all new pages immediately to the *FilePageStore*; this reduces the work for the compacter (it will need to write less to the main partition file), and the sorted list of pages will be smaller.
[jira] [Commented] (IGNITE-17217) Java thin: addresses are not reloaded from ClientAddressFinder on connection loss
[ https://issues.apache.org/jira/browse/IGNITE-17217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17559029#comment-17559029 ] wkhappy commented on IGNITE-17217: OK, thank you very much. I will fix it.

> Java thin: addresses are not reloaded from ClientAddressFinder on connection loss
>
> Key: IGNITE-17217
> URL: https://issues.apache.org/jira/browse/IGNITE-17217
> Project: Ignite
> Issue Type: Bug
> Components: thin client
> Reporter: Pavel Tupitsyn
> Assignee: wkhappy
> Priority: Major
>
> When all node connections are lost, *ClientAddressFinder.getAddresses* is not called to refresh the list of known endpoints.
> For example, when a Kubernetes pod restarts, it may have a different address.
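The direction of the fix can be sketched as follows: when every channel is down, ask the address finder again instead of reusing the cached endpoint list. The `AddressFinder` interface here only mirrors the shape of Ignite's `ClientAddressFinder` (a `getAddresses()` supplier); the surrounding reconnect logic is illustrative, not the actual thin-client code.

```java
import java.util.Arrays;

/** Sketch of refreshing endpoints from an address finder on total connection loss. */
public class ReconnectSketch {
    /** Stand-in for ClientAddressFinder: supplies the current node addresses. */
    interface AddressFinder {
        String[] getAddresses();
    }

    private final AddressFinder finder;
    private String[] knownEndpoints;

    public ReconnectSketch(AddressFinder finder) {
        this.finder = finder;
        this.knownEndpoints = finder.getAddresses();
    }

    /** Called when all channels are down: re-query the finder before retrying. */
    public String[] onAllConnectionsLost() {
        // A restarted Kubernetes pod may come back under a different address,
        // so the cached list must not be reused here.
        knownEndpoints = finder.getAddresses();
        return Arrays.copyOf(knownEndpoints, knownEndpoints.length);
    }
}
```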
[jira] [Updated] (IGNITE-15128) Take own control of SQL functions
[ https://issues.apache.org/jira/browse/IGNITE-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-15128: Labels: (was: calcite3-required)

> Take own control of SQL functions
>
> Key: IGNITE-15128
> URL: https://issues.apache.org/jira/browse/IGNITE-15128
> Project: Ignite
> Issue Type: Improvement
> Components: sql
> Reporter: Yury Gerzhedovich
> Assignee: Aleksey Plekhanov
> Priority: Major
> Time Spent: 1h
> Remaining Estimate: 0h
>
> As of now, we use a set of four database function dialects:
> SqlLibrary.STANDARD,
> SqlLibrary.POSTGRESQL,
> SqlLibrary.ORACLE,
> SqlLibrary.MYSQL
> It seems we should have our own dialect with a subset of the aforementioned functions, plus the ability to modify already existing functions and add new ones.
> During implementation we need to sort out similar functions and choose just one of them to avoid duplication.
> See:
> org.apache.calcite.util.BuiltInMethod
> org.apache.calcite.sql.fun.SqlLibraryOperators
> org.apache.calcite.runtime.SqlFunctions
> org.apache.ignite.internal.processors.query.calcite.exec.exp.RexImpTable
[jira] [Commented] (IGNITE-17217) Java thin: addresses are not reloaded from ClientAddressFinder on connection loss
[ https://issues.apache.org/jira/browse/IGNITE-17217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17559028#comment-17559028 ] Pavel Tupitsyn commented on IGNITE-17217: [~wkhapy123123] thank you for submitting this PR. Yes, it is the correct way. Please see a few comments on GitHub.
[jira] [Created] (IGNITE-17231) Optimization of DeltaFilePageStore: improve mapping of pageIdx to file offset
Kirill Tkalenko created IGNITE-17231: Summary: Optimization of DeltaFilePageStore: improve mapping of pageIdx to file offset Key: IGNITE-17231 URL: https://issues.apache.org/jira/browse/IGNITE-17231 Project: Ignite Issue Type: Improvement Reporter: Kirill Tkalenko Fix For: 3.0.0-alpha6

For ease of implementation, a sorted list of *pageIdx* has been added to the *DeltaFilePageStore*, allowing a binary search to find a *pageId -> pageIdx*. Perhaps this is not quite optimal, and it can be improved. It is important to find a balance between memory usage and *pageId* lookup speed, since there can be many (very many) *DeltaFilePageStore* instances: their number depends on the checkpoint, the compacter, the number of partitions, and the number of groups.

Before implementation, we need to study the options in more depth and perhaps try a few of them. What we can consider:
* Roaring map - this needs to be studied carefully;
* List of containers (idea) - there are three container types: the first is a bitmask, the second is value intervals (provided that the values are greater than 64 (two integers)), the third is a sorted list (or hash map). Then, by binary search, we find the container (by the first *pageIdx* in this container) and then query the container.
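The two-level "list of containers" idea can be sketched as below, using only the bitmask container variant. The container semantics (fixed base index plus a bit set) and all names are assumptions for illustration; the interval and sorted-list variants would plug into the same outer binary search.

```java
import java.util.Arrays;

/** Sketch of a container-based pageIdx membership index: binary-search the
 *  container by its first pageIdx, then query inside the container. */
public class ContainerIndex {
    /** Bitmask container: bit i set means page (firstPageIdx + i) is present. */
    static class BitmaskContainer {
        final int firstPageIdx;
        final long[] bits;

        BitmaskContainer(int firstPageIdx, long[] bits) {
            this.firstPageIdx = firstPageIdx;
            this.bits = bits;
        }

        boolean contains(int pageIdx) {
            int off = pageIdx - firstPageIdx;
            return off >= 0 && off < bits.length * 64 && (bits[off >> 6] & (1L << (off & 63))) != 0;
        }
    }

    private final int[] firstIdxs;               // firstPageIdx of each container, sorted
    private final BitmaskContainer[] containers; // parallel to firstIdxs

    public ContainerIndex(BitmaskContainer[] containers) {
        this.containers = containers;
        this.firstIdxs = new int[containers.length];
        for (int i = 0; i < containers.length; i++)
            firstIdxs[i] = containers[i].firstPageIdx;
    }

    public boolean contains(int pageIdx) {
        int pos = Arrays.binarySearch(firstIdxs, pageIdx);
        if (pos < 0)
            pos = -pos - 2; // container with the largest firstPageIdx <= pageIdx
        return pos >= 0 && containers[pos].contains(pageIdx);
    }
}
```

Compared with one flat sorted list, the bitmask container trades a small fixed memory footprint per run of nearby pages for constant-time membership checks inside the run, which is the balance the issue asks to explore.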