This is an automated email from the ASF dual-hosted git repository.

sk0x50 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/ignite-3.git


The following commit(s) were added to refs/heads/main by this push:
     new f1720d672d IGNITE-21563 Replace ENGINE based approach by STORAGE_PROFILES in docs (#4619)
f1720d672d is described below

commit f1720d672d6a1a7aaffbd4fdef1fc46a6c935aea
Author: Kirill Gusakov <[email protected]>
AuthorDate: Thu Oct 24 13:49:24 2024 +0300

    IGNITE-21563 Replace ENGINE based approach by STORAGE_PROFILES in docs (#4619)
---
 docs/_docs/developers-guide/java-to-tables.adoc  |  7 ++--
 docs/_docs/sql-reference/distribution-zones.adoc | 16 +++------
 docs/_docs/sql-reference/grammar-reference.adoc  |  6 ++--
 modules/storage-api/README.md                    | 43 ++++++++++--------------
 4 files changed, 30 insertions(+), 42 deletions(-)

diff --git a/docs/_docs/developers-guide/java-to-tables.adoc 
b/docs/_docs/developers-guide/java-to-tables.adoc
index 0fdffb5e7c..7e1c0f22fe 100644
--- a/docs/_docs/developers-guide/java-to-tables.adoc
+++ b/docs/_docs/developers-guide/java-to-tables.adoc
@@ -39,7 +39,7 @@ You use the `@Table` and other annotations that are located 
in the `org.apache.i
 @Zone(
         value = "zone_test",
         partitions = 2,
-        engine = ZoneEngine.ROCKSDB
+        storageProfiles = "default"
 )
 class ZoneTest {}
 
@@ -82,7 +82,7 @@ The result is equivalent to the following SQL multi-statement:
 
 [source, sql]
 ----
-CREATE ZONE IF NOT EXISTS zone_test ENGINE ROCKSDB WITH PARTITIONS=2;
+CREATE ZONE IF NOT EXISTS zone_test WITH PARTITIONS=2, STORAGE_PROFILES='default';
 
 CREATE TABLE IF NOT EXISTS kv_pojo_test (
        id int,
@@ -168,7 +168,8 @@ class Pojo {
 
 ignite.catalog()
   .create(ZoneDefinition.builder("zone_test")
-    .partitions(2));
+    .partitions(2)
+    .storageProfiles("default"));
 
 ignite.catalog()
   .create(TableDefinition.builder("pojo_test")
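For reference, the zone definition built above maps to DDL along these lines (a sketch based on the updated docs in this commit; `STORAGE_PROFILES` replaces the removed `ENGINE` clause):

```sql
-- Sketch of the DDL equivalent of the ZoneDefinition builder call above.
-- 'default' is the sample profile name used throughout this change.
CREATE ZONE IF NOT EXISTS zone_test WITH PARTITIONS=2, STORAGE_PROFILES='default';
```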
diff --git a/docs/_docs/sql-reference/distribution-zones.adoc 
b/docs/_docs/sql-reference/distribution-zones.adoc
index 6c025b8d77..55254980ec 100644
--- a/docs/_docs/sql-reference/distribution-zones.adoc
+++ b/docs/_docs/sql-reference/distribution-zones.adoc
@@ -27,8 +27,6 @@ Diagram(
 Terminal('CREATE ZONE'),
 Optional(Terminal('IF NOT EXISTS')),
 NonTerminal('qualified_zone_name'),
-Optional(Sequence(Terminal('ENGINE'),
-NonTerminal('engine_name'))),
 End({type:'complex'})
 )
 
@@ -37,22 +35,17 @@ Diagram(
 Start({type:'complex'}),
 Optional(Sequence(
 Terminal('WITH'),
-Optional('('),
 OneOrMore(
 NonTerminal('parameter', {href:'./grammar-reference/#parameter'}),
-','),
-Optional(')'))))
+','))))
 
 
 Keywords and parameters:
 
 * `IF NOT EXISTS` - create the zone only if a zone with the same name does not already exist.
 * `qualified_zone_name` - a name of the distribution zone.
-* `ENGINE` - selects the storage engine (`engine_name`) to use. Currently 
available are:
-** `aipersist`
-** `aimem`
-** `rocksdb`
 * `WITH` - accepts the following additional parameters:
+** `STORAGE_PROFILES` - required. A comma-separated list of storage profiles to use.
 ** `PARTITIONS` - the number of partitions the data is divided into. Partitions are then split between nodes for storage.
 ** `REPLICAS` - the number of copies of each partition.
 ** `DATA_NODES_FILTER` - specifies the nodes that can be used to store data in the distribution zone based on node attributes. You can configure node attributes by using the CLI. The filter uses JSONPath rules. If the attribute is not found, all negative comparisons will be valid. For example, `$[?(@.storage != 'SSD')]` will also include nodes without the `storage` attribute specified.
@@ -66,14 +59,14 @@ Creates an `exampleZone` distribution zone:
 
 [source,sql]
 ----
-CREATE ZONE IF NOT EXISTS exampleZone
+CREATE ZONE IF NOT EXISTS exampleZone WITH STORAGE_PROFILES='default'
 ----
 
 Creates an `exampleZone` distribution zone that only uses nodes with the SSD attribute and adjusts 300 seconds after cluster topology changes:
 
 [source,sql]
 ----
-CREATE ZONE IF NOT EXISTS exampleZone WITH DATA_NODES_FILTER=SSD, 
DATA_NODES_AUTO_ADJUST_SCALE_UP=300
+CREATE ZONE IF NOT EXISTS exampleZone WITH DATA_NODES_FILTER=SSD, 
DATA_NODES_AUTO_ADJUST_SCALE_UP=300, STORAGE_PROFILES='default'
 ----
 
 == ALTER ZONE
@@ -117,6 +110,7 @@ Keywords and parameters:
 * `IF EXISTS` - do not throw an error if a zone with the specified name does 
not exist.
 * `qualified_zone_name` - a name of the distribution zone.
 * `SET` - assigns values to any or all of the following parameters:
+** `STORAGE_PROFILES` - a comma-separated list of storage profiles to use.
 ** `PARTITIONS` - the number of partitions.
 ** `REPLICAS` - the number of copies of each partition.
 ** `DATA_NODES_FILTER` - specifies the nodes that can be used to store data in 
the distribution zone based on node attributes.
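A sketch of how the `SET` parameters above might be combined in one statement (assuming, as with `CREATE ZONE ... WITH`, that parameters are comma-separated):

```sql
-- Hypothetical ALTER ZONE combining several SET parameters in one statement.
ALTER ZONE IF EXISTS exampleZone SET REPLICAS=3, DATA_NODES_AUTO_ADJUST_SCALE_UP=300;
```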
diff --git a/docs/_docs/sql-reference/grammar-reference.adoc 
b/docs/_docs/sql-reference/grammar-reference.adoc
index 80d8fb18de..8cf7efe452 100644
--- a/docs/_docs/sql-reference/grammar-reference.adoc
+++ b/docs/_docs/sql-reference/grammar-reference.adoc
@@ -196,21 +196,21 @@ Parameters:
 When a parameter is specified, you can provide it as a literal value or as an 
identifier. For example:
 
 ----
-CREATE ZONE test_zone;
+CREATE ZONE test_zone WITH STORAGE_PROFILES='default';
 CREATE TABLE test_table (id INT PRIMARY KEY, val INT) WITH 
PRIMARY_ZONE=test_zone;
 ----
 
 In this case, `test_zone` is created as an identifier, and is used as an identifier. When used like this, the parameters are not case-sensitive.
 
 ----
-CREATE ZONE "test_zone";
+CREATE ZONE "test_zone" WITH STORAGE_PROFILES='default';
 CREATE TABLE test_table (id INT PRIMARY KEY, val INT) WITH 
PRIMARY_ZONE='test_zone';
 ----
 
 In this case, `test_zone` is created as a literal value, and is used as a 
literal. When used like this, the parameter is case-sensitive.
 
 ----
-CREATE ZONE test_zone;
+CREATE ZONE test_zone WITH STORAGE_PROFILES='default';
 CREATE TABLE test_table (id INT PRIMARY KEY, val INT) WITH 
PRIMARY_ZONE=`TEST_ZONE`;
 ----
 
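To illustrate the case-sensitivity distinction described above, a hypothetical sequence (assuming unquoted identifiers are normalized per standard SQL rules, while literal names are matched exactly):

```sql
-- Created via an identifier: matching by name is not case-sensitive.
CREATE ZONE test_zone WITH STORAGE_PROFILES='default';

-- Matching by identifier works regardless of case...
CREATE TABLE t_ok (id INT PRIMARY KEY) WITH PRIMARY_ZONE=TEST_ZONE;

-- ...but a lower-case literal may not match the normalized stored name,
-- so keep the creation and reference styles consistent.
CREATE TABLE t_risky (id INT PRIMARY KEY) WITH PRIMARY_ZONE='test_zone';
```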
diff --git a/modules/storage-api/README.md b/modules/storage-api/README.md
index f6895249b1..1e70a490ee 100644
--- a/modules/storage-api/README.md
+++ b/modules/storage-api/README.md
@@ -14,9 +14,11 @@ To add a new data storage you need:
     * `org.apache.ignite.internal.storage.MvPartitionStorage`;
     * `org.apache.ignite.internal.storage.index.SortedIndexStorage`;
 * Add configuration:
-    * Add an inheritor of 
`org.apache.ignite.configuration.schemas.store.DataStorageConfigurationSchema`, 
with type equal
-      to `org.apache.ignite.internal.storage.engine.StorageEngine.name`;
-    * If necessary, add a specific configuration of the data storage engine;
+    * Add an inheritor of 
`org.apache.ignite.internal.storage.configurations.StorageProfileConfigurationSchema`,
+      with `@PolymorphicConfigInstance` value equal to 
`org.apache.ignite.internal.storage.engine.StorageEngine.name`;
+    * If necessary, add a specific configuration of the data storage engine:
+        * Implement 
`org.apache.ignite.internal.storage.configurations.StorageEngineConfigurationSchema`
 with the `@ConfigurationExtension`
+          annotation and the `@ConfigValue` field with the name equal to 
`org.apache.ignite.internal.storage.engine.StorageEngine.name`;
     * Implement `org.apache.ignite.configuration.ConfigurationModule`;
 * Add services (which are loaded via 
`java.util.ServiceLoader.load(java.lang.Class<S>)`):
     * Implementation of `org.apache.ignite.internal.storage.DataStorageModule`;
@@ -26,31 +28,22 @@ Take 
`org.apache.ignite.internal.storage.impl.TestStorageEngine` as an example.
 
 ## Usage
 
-For each table, you need to specify the data storage, which is located in 
`org.apache.ignite.configuration.schemas.table.TableConfigurationSchema.dataStorage`.
-
-Configuration example in HOCON:
+Storage configuration in HOCON:
 ```
-tables.table {
-    name = schema.table,
-    columns.id {name = id, type.type = STRING, nullable = true},
-    primaryKey {columns = [id], colocationColumns = [id]},
-    indices.foo {type = HASH, name = foo, colNames = [id]},
-    dataStorage {name = rocksdb, dataRegion = default}
-}
+ignite:
+    storage.profiles:
+        test_profile1:
+            engine: test
+        test_profile2:
+            engine: test
 ```
 
-Configuration example in java:
-```java
-TableConfiguration tableConfig = ...;
-
-// Change data storage.
-tableConfig.dataStorage().change(c -> 
c.convert(RocksDbDataStorageChange.class).changeDataRegion("default")).get(1, 
TimeUnit.SECONDS);
-
-// Get data storage.
-RocksDbDataStorageView dataStorageView = (RocksDbDataStorageView) 
tableConfig().dataStorage().value();
+For each table, you can specify the storage profile to use.
 
-String dataRegion = dataStorageView.dateRegion();
+Table creation example in DDL:
 ```
+create zone z1 with storage_profiles='test_profile1,test_profile2';
 
-
-To get the data storage engine, you need to use 
`org.apache.ignite.internal.storage.DataStorageManager.engine(org.apache.ignite.configuration.schemas.store.DataStorageConfiguration)`.
+create table t1 with storage_profile='test_profile2' using zone='z1';
+create table t2 using zone='z1'; -- the first storage profile from zone z1, 'test_profile1', will be used here.
+```
