This is an automated email from the ASF dual-hosted git repository.

dmagda pushed a commit to branch IGNITE-7595
in repository https://gitbox.apache.org/repos/asf/ignite.git


The following commit(s) were added to refs/heads/IGNITE-7595 by this push:
     new b8dfebc  Ported the last pages from readme.io to the new Ignite docs 
(no technical editing was done): binary marshaller, PHP PDO, VMWare 
installation, Thread Pools and Resources Injection
b8dfebc is described below

commit b8dfebc2b98208db73522d5c2af74105ae0829e7
Author: Denis Magda <dma...@gridgain.com>
AuthorDate: Thu Oct 1 22:50:50 2020 -0700

    Ported the last pages from readme.io to the new Ignite docs (no technical 
editing was done): binary marshaller, PHP PDO, VMWare installation, Thread 
Pools and Resources Injection
---
 docs/_data/toc.yaml                                |  10 +
 docs/_docs/data-modeling/binary-marshaller.adoc    | 285 +++++++++++++++++++++
 .../_docs/extensions-and-integrations/php-pdo.adoc | 233 +++++++++++++++++
 docs/_docs/installation/vmware-installation.adoc   |  45 ++++
 docs/_docs/resources-injection.adoc                |  74 ++++++
 docs/_docs/thread-pools.adoc                       | 136 ++++++++++
 6 files changed, 783 insertions(+)

diff --git a/docs/_data/toc.yaml b/docs/_data/toc.yaml
index 396e09e..e6db8e7 100644
--- a/docs/_data/toc.yaml
+++ b/docs/_data/toc.yaml
@@ -35,6 +35,8 @@
         url: installation/kubernetes/azure-deployment
       - title: Google Kubernetes Engine
         url: installation/kubernetes/gke-deployment
+  - title: VMWare
+    url: installation/vmware-installation
 - title: Setting Up
   items:
     - title: Setting Up Ignite for Java 
@@ -71,6 +73,8 @@
       url: data-modeling/data-partitioning
     - title: Affinity Colocation 
       url: data-modeling/affinity-collocation
+    - title: Binary Marshaller
+      url: data-modeling/binary-marshaller
 - title: Configuring Memory 
   items:
     - title: Memory Architecture
@@ -427,6 +431,8 @@
           url: extensions-and-integrations/cassandra/usage-examples
         - title: DDL Generator
           url: extensions-and-integrations/cassandra/ddl-generator
+    - title: PHP PDO
+      url: extensions-and-integrations/php-pdo
 - title: .NET Specific
   items:
     - title: Configuration Options
@@ -512,3 +518,7 @@
       url: sql-reference/system-functions
     - title: Data Types
       url: sql-reference/data-types
+- title: Thread Pools
+  url: thread-pools
+- title: Resources Injection
+  url: resources-injection
diff --git a/docs/_docs/data-modeling/binary-marshaller.adoc 
b/docs/_docs/data-modeling/binary-marshaller.adoc
new file mode 100644
index 0000000..7e73f20
--- /dev/null
+++ b/docs/_docs/data-modeling/binary-marshaller.adoc
@@ -0,0 +1,285 @@
+= Binary Marshaller
+
+== Basic Concepts
+
+Binary Marshaller is a component of Ignite that is responsible for data serialization. It has the following advantages:
+
+* It enables you to read an arbitrary field from an object's serialized form 
without full object deserialization.
+This ability completely removes the requirement to have the cache key and 
value classes deployed on the server node's classpath.
+* It enables you to add and remove fields from objects of the same type. Given that server nodes do not have model
+class definitions, this ability allows dynamic changes to an object's structure and even allows multiple clients with
+different versions of the class definitions to co-exist.
+* It enables you to construct new objects based on a type name without having 
class definitions at all, hence
+allowing dynamic type creation.
+
+Binary objects can be used only when the default binary marshaller is used (i.e., no other marshaller is explicitly set in the configuration).
+
+[NOTE]
+====
+[discrete]
+=== Restrictions
+There are several restrictions that are implied by the BinaryObject format 
implementation:
+
+* Internally, Ignite does not write field and type names but uses a lower-case name hash to identify a field or a type.
+This means that fields or types with the same name hash are not allowed. Even though serialization will not work out of the box
+in the case of a hash collision, Ignite provides a way to resolve the collision at the configuration level.
+* For the same reason, the BinaryObject format does not allow identical field names on different levels of a class hierarchy.
+* If a class implements the `Externalizable` interface, Ignite uses `OptimizedMarshaller` instead of the binary one.
+The `OptimizedMarshaller` uses the `writeExternal()` and `readExternal()` methods to serialize and deserialize objects of
+this class, which requires adding the classes of `Externalizable` objects to the classpath of server nodes.
+====
+
+The `IgniteBinary` facade, which can be obtained from an instance of Ignite, 
contains all the necessary methods to work with binary objects.
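+
+For example, a binary object of a type that has no class definition on the node's classpath can be created directly
+through the facade. This is a minimal sketch; the `Person` type and its field names are illustrative:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+IgniteBinary binary = ignite.binary();
+
+// Build a binary object of a type that has no class definition on this node.
+BinaryObject person = binary.builder("Person")
+    .setField("name", "John")
+    .setField("salary", 35000)
+    .build();
+----
+--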
+
+[NOTE]
+====
+[discrete]
+=== Automatic Hash Code Calculation and Equals Implementation
+
+If an object can be serialized into the binary form, Ignite calculates its hash code during serialization and
+writes it to the resulting binary array. Ignite also provides a custom implementation of the equals method for binary
+object comparison. This means that you do not need to override the `hashCode` and `equals` methods of your custom
+keys and values for them to be used in Ignite, unless they cannot be serialized into the binary form.
+For instance, objects of an `Externalizable` type cannot be serialized into the binary form and require you to implement
+the `hashCode` and `equals` methods manually. See the Restrictions section above for more details.
+====
+
+== Configuring Binary Objects
+
+In the vast majority of use cases, there is no need to additionally configure 
binary objects.
+
+However, when you need to override the default type and field ID calculation, or to plug in a `BinarySerializer`,
+a `BinaryConfiguration` object should be defined in `IgniteConfiguration`. This object allows you to specify a global
+name mapper, a global ID mapper, and a global binary serializer, as well as per-type mappers and serializers. Wildcards
+are supported for per-type configuration, in which case the provided configuration is applied to all types
+that match the type name template.
+
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<bean id="ignite.cfg" 
class="org.apache.ignite.configuration.IgniteConfiguration">
+
+  <property name="binaryConfiguration">
+    <bean class="org.apache.ignite.configuration.BinaryConfiguration">
+
+      <property name="nameMapper" ref="globalNameMapper"/>
+      <property name="idMapper" ref="globalIdMapper"/>
+
+      <property name="typeConfigurations">
+        <list>
+          <bean class="org.apache.ignite.binary.BinaryTypeConfiguration">
+            <property name="typeName" value="org.apache.ignite.examples.*"/>
+            <property name="serializer" ref="exampleSerializer"/>
+          </bean>
+        </list>
+      </property>
+    </bean>
+  </property>
+</bean>
+----
+--
+
+== BinaryObject API
+
+By default, Ignite works with deserialized values as it is the most common use 
case. To enable `BinaryObject`
+processing, a user needs to obtain an instance of `IgniteCache` using the 
`withKeepBinary()` method. When enabled,
+this flag will ensure that objects returned from the cache will be in 
`BinaryObject` format, when possible. The same
+applies to values being passed to the `EntryProcessor` and `CacheInterceptor`.
+
+[NOTE]
+====
+[discrete]
+=== Platform Types
+Not all types are represented as `BinaryObject` when the `withKeepBinary()` flag is enabled. There is a
+set of 'platform' types, including primitive types, `String`, `UUID`, `Date`, `Timestamp`, `BigDecimal`, Collections,
+Maps, and arrays of these, that are never represented as a `BinaryObject`.
+
+Note that in the example below, the key type `Integer` does not change because it is a platform type.
+====
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+// Create a regular Person object and put it to the cache.
+Person person = buildPerson(personId);
+ignite.cache("myCache").put(personId, person);
+
+// Get an instance of binary-enabled cache.
+IgniteCache<Integer, BinaryObject> binaryCache = 
ignite.cache("myCache").withKeepBinary();
+
+// Get the above person object in the BinaryObject format.
+BinaryObject binaryPerson = binaryCache.get(personId);
+----
+--
+
+== Modifying Binary Objects Using BinaryObjectBuilder
+
+`BinaryObject` instances are immutable. An instance of `BinaryObjectBuilder` 
must be used in order to update fields and
+create a new `BinaryObject`.
+
+An instance of `BinaryObjectBuilder` can be obtained from the `IgniteBinary` facade. The builder may be created using a type
+name, in which case the returned builder contains no fields, or it may be created from an existing `BinaryObject`,
+in which case the returned builder copies all the fields of the given `BinaryObject`.
+
+Another way to get an instance of `BinaryObjectBuilder` is to call 
`toBuilder()` on an existing instance of a `BinaryObject`.
+This will also copy all data from the `BinaryObject` to the created builder.
+
+[NOTE]
+====
+[discrete]
+=== Limitations
+
+* You cannot change the types of existing fields.
+* You cannot change the order of enum values or add new constants at the beginning or in the middle of the list of enum
+values. You can, however, add new constants to the end of the list.
+====
+
+Below is an example of using the `BinaryObject` API to process data on server 
nodes without having user classes deployed
+on servers and without actual data deserialization.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+// The EntryProcessor is to be executed for this key.
+int key = 101;
+
+cache.<Integer, BinaryObject>withKeepBinary().invoke(
+  key, new CacheEntryProcessor<Integer, BinaryObject, Object>() {
+    public Object process(MutableEntry<Integer, BinaryObject> entry,
+                          Object... objects) throws EntryProcessorException {
+        // Create a builder from the old value.
+        BinaryObjectBuilder bldr = entry.getValue().toBuilder();
+
+        // Update the field in the builder.
+        bldr.setField("name", "Ignite");
+
+        // Set new value to the entry.
+        entry.setValue(bldr.build());
+
+        return null;
+     }
+  });
+----
+--
+
+== BinaryObject Type Metadata
+
+As mentioned above, the structure of a binary object may be changed at runtime; hence, it may also be useful to get
+information about a particular type stored in a cache, such as field names, field type names, and the affinity
+field name. Ignite facilitates this requirement via the `BinaryType` interface.
+
+This interface also introduces a faster version of the field getter called `BinaryField`. The concept is similar to Java
+reflection and allows you to cache certain information about the field being read in a `BinaryField` instance, which is
+useful when reading the same field from a large collection of binary objects.
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+Collection<BinaryObject> persons = getPersons();
+
+BinaryField salary = null;
+
+double total = 0;
+int cnt = 0;
+
+for (BinaryObject person : persons) {
+    if (salary == null)
+        salary = person.type().field("salary");
+
+    total += salary.value(person);
+    cnt++;
+}
+
+double avg = total / cnt;
+----
+--
+
+== BinaryObject and CacheStore
+
+Setting `withKeepBinary()` on the cache API does not affect the way user 
objects are passed to a `CacheStore`. This is
+intentional because in most cases a single `CacheStore` implementation works 
either with deserialized classes, or with
+`BinaryObject` representations. To control the way objects are passed to the 
store, the `storeKeepBinary` flag on
+`CacheConfiguration` should be used. When this flag is set to `false`, deserialized values are passed to the store;
+otherwise, `BinaryObject` representations are used.
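+
+For example, the flag can be enabled on the cache configuration so that the store receives binary objects. This is a
+minimal sketch; the cache name is illustrative and `CacheExampleBinaryStore` is the store class shown below:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+CacheConfiguration<Integer, BinaryObject> cacheCfg = new CacheConfiguration<>("personCache");
+
+// Pass BinaryObject representations (rather than deserialized values) to the CacheStore.
+cacheCfg.setStoreKeepBinary(true);
+
+// Plug in the store implementation and enable read/write-through.
+cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(CacheExampleBinaryStore.class));
+cacheCfg.setReadThrough(true);
+cacheCfg.setWriteThrough(true);
+----
+--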
+
+Below is an example pseudo-code implementation of a store working with 
`BinaryObject`:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+public class CacheExampleBinaryStore extends CacheStoreAdapter<Integer, 
BinaryObject> {
+    @IgniteInstanceResource
+    private Ignite ignite;
+
+    /** {@inheritDoc} */
+    @Override public BinaryObject load(Integer key) {
+        IgniteBinary binary = ignite.binary();
+
+        List<?> rs = loadRow(key);
+
+        BinaryObjectBuilder bldr = binary.builder("Person");
+
+        for (int i = 0; i < rs.size(); i++)
+            bldr.setField(name(i), rs.get(i));
+
+        return bldr.build();
+    }
+
+    /** {@inheritDoc} */
+    @Override public void write(Cache.Entry<? extends Integer, ? extends 
BinaryObject> entry) {
+        BinaryObject obj = entry.getValue();
+
+        BinaryType type = obj.type();
+
+        Collection<String> fields = type.fieldNames();
+
+        List<Object> row = new ArrayList<>(fields.size());
+
+        for (String fieldName : fields)
+            row.add(obj.field(fieldName));
+
+        saveRow(entry.getKey(), row);
+    }
+}
+----
+--
+
+== Binary Name Mapper and Binary ID Mapper
+
+Internally, Ignite never writes full strings for field or type names. Instead, 
for performance reasons, Ignite writes
+integer hash codes for type and field names. Testing has indicated that hash 
code conflicts for the type names or the
+field names within the same type are virtually non-existent and, to gain 
performance, it is safe to work with hash codes.
+For the cases when hash codes for different types or fields actually do 
collide, `BinaryNameMapper` and `BinaryIdMapper`
+support overriding the automatically generated hash code IDs for the type and 
field names.
+
+* `BinaryNameMapper` - maps type/class and field names to different names.
+* `BinaryIdMapper` - maps the type and field names produced by `BinaryNameMapper` to the IDs that Ignite uses internally.
+
+Ignite provides the following out-of-the-box mapper implementations:
+
+* `BinaryBasicNameMapper` - a basic implementation of `BinaryNameMapper` that 
returns a full or a simple name of a given
+class depending on whether the `setSimpleName(boolean useSimpleName)` property 
is set.
+* `BinaryBasicIdMapper` - a basic implementation of `BinaryIdMapper`. It has a configuration property called
+`setLowerCase(boolean isLowerCase)`. If the property is set to `false`, then a hash code of the given type or field name
+is returned. If the property is set to `true`, then a hash code of the given type or field name in lower case is returned.
+
+If you are using Java or .NET clients and do not specify mappers in `BinaryConfiguration`, then Ignite uses
+`BinaryBasicNameMapper` with the `simpleName` property set to `false`, and `BinaryBasicIdMapper` with the
+`lowerCase` property set to `true`.
+
+If you are using the C{pp} client and do not specify mappers in `BinaryConfiguration`, then Ignite uses
+`BinaryBasicNameMapper` with the `simpleName` property set to `true`, and `BinaryBasicIdMapper` with the
+`lowerCase` property set to `true`.
+
+By default, there is no need to configure anything if you use Java, .NET, or C{pp}. Mappers need to be configured only
+if platform interoperability requires a non-trivial name conversion.
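+
+As an illustration, the basic mappers could be plugged into `BinaryConfiguration` programmatically. This is a minimal
+sketch; the chosen property values are illustrative:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+BinaryConfiguration binaryCfg = new BinaryConfiguration();
+
+// Map classes by their simple names and use lower-case hash codes for IDs.
+BinaryBasicNameMapper nameMapper = new BinaryBasicNameMapper();
+nameMapper.setSimpleName(true);
+
+BinaryBasicIdMapper idMapper = new BinaryBasicIdMapper();
+idMapper.setLowerCase(true);
+
+binaryCfg.setNameMapper(nameMapper);
+binaryCfg.setIdMapper(idMapper);
+
+cfg.setBinaryConfiguration(binaryCfg);
+----
+--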
diff --git a/docs/_docs/extensions-and-integrations/php-pdo.adoc 
b/docs/_docs/extensions-and-integrations/php-pdo.adoc
new file mode 100644
index 0000000..173d748
--- /dev/null
+++ b/docs/_docs/extensions-and-integrations/php-pdo.adoc
@@ -0,0 +1,233 @@
+= Using PHP PDO With Apache Ignite
+
+== Overview
+
+PHP provides a lightweight, consistent interface for accessing databases called PHP Data Objects (PDO). This extension works
+with several database-specific PDO drivers. One of them is http://php.net/manual/en/ref.pdo-odbc.php[PDO_ODBC, window=_blank],
+which allows connecting to any database that provides its own ODBC driver implementation.
+
+Using Apache Ignite's ODBC driver, a PHP application can connect to an Apache Ignite cluster to access and modify
+the data stored there.
+
+== Setting Up ODBC Driver
+
+Apache Ignite conforms to the ODBC protocol and has its own ODBC driver that is delivered along with other functionality.
+This is the driver that the PHP PDO framework uses to connect to an Apache Ignite cluster.
+
+Refer to the Ignite link:SQL/ODBC/odbc-driver[ODBC Driver] documentation to configure and install the driver
+on a target system. Once the driver is installed and functional, move on to the next sections of this guide.
+
+== Installing and Configuring PHP PDO
+
+To install PHP, PDO, and the PDO_ODBC driver, refer to the generic PHP resources:
+
+* http://php.net/downloads.php[Download, window=_blank] and install the desired PHP version. Note that the PDO driver is
+enabled by default in PHP as of PHP 5.1.0. On Windows, you can download a PHP binary from the
+http://windows.php.net/download[following page, window=_blank].
+* http://php.net/manual/en/book.pdo.php[Configure, window=_blank] the PHP PDO framework.
+* http://php.net/manual/en/ref.pdo-odbc.php[Enable, window=_blank] the PDO_ODBC driver.
+  ** On Windows, you may need to uncomment the `extension=php_pdo_odbc.dll` line in `php.ini` and make sure that `extension_dir`
+points to the directory that contains `php_pdo_odbc.dll`. This directory also has to be added to the `PATH` environment variable.
+  ** On Unix-based systems, it is usually enough to install the corresponding PHP ODBC package. For instance, the `php5-odbc`
+package has to be installed on Ubuntu 14.04.
+* If necessary, http://php.net/manual/en/ref.pdo-odbc.php#ref.pdo-odbc.installation[configure, window=_blank] and build the PDO_ODBC driver
+for a specific system that does not fall under the general case. In most cases, however, a simple installation of both PHP
+and the PDO_ODBC driver is enough.
+
+== Starting Ignite Cluster
+
+After PHP PDO is installed and ready to be used, let's start an Ignite cluster with an example configuration and connect
+to the cluster from a PHP application that updates and queries the cluster's data:
+
+* First, the ODBC processor has to be enabled cluster-wide. To do so, the `odbcConfiguration` property has to be added to
+the `IgniteConfiguration` of every cluster node.
+
+* Next, list configurations for all the caches related to specific data models inside `IgniteConfiguration`.
+Since we are going to execute SQL queries over the cluster from the PHP PDO side, every cache configuration needs to contain
+a `QueryEntity` definition. Alternatively, you can define SQL tables and indexes with Ignite DDL commands.
+
+* Finally, use the configuration template below to start an Ignite cluster:
++
+[tabs]
+--
+tab:XML[]
+[source,xml]
+----
+<?xml version="1.0" encoding="UTF-8"?>
+
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xmlns:util="http://www.springframework.org/schema/util"
+       xsi:schemaLocation="
+        http://www.springframework.org/schema/beans
+        http://www.springframework.org/schema/beans/spring-beans.xsd
+        http://www.springframework.org/schema/util
+        http://www.springframework.org/schema/util/spring-util.xsd">
+  <bean id="ignite.cfg" 
class="org.apache.ignite.configuration.IgniteConfiguration">
+
+    <!-- Enabling ODBC. -->
+    <property name="odbcConfiguration">
+      <bean class="org.apache.ignite.configuration.OdbcConfiguration"></bean>
+    </property>
+
+    <!-- Configuring cache. -->
+    <property name="cacheConfiguration">
+      <list>
+        <bean class="org.apache.ignite.configuration.CacheConfiguration">
+          <property name="name" value="Person"/>
+          <property name="cacheMode" value="PARTITIONED"/>
+          <property name="atomicityMode" value="TRANSACTIONAL"/>
+          <property name="writeSynchronizationMode" value="FULL_SYNC"/>
+
+          <property name="queryEntities">
+            <list>
+              <bean class="org.apache.ignite.cache.QueryEntity">
+                <property name="keyType" value="java.lang.Long"/>
+                <property name="valueType" value="Person"/>
+
+                <property name="fields">
+                  <map>
+                    <entry key="firstName" value="java.lang.String"/>
+                    <entry key="lastName" value="java.lang.String"/>
+                    <entry key="resume" value="java.lang.String"/>
+                    <entry key="salary" value="java.lang.Integer"/>
+                  </map>
+                </property>
+
+                <property name="indexes">
+                  <list>
+                    <bean class="org.apache.ignite.cache.QueryIndex">
+                      <constructor-arg value="salary"/>
+                    </bean>
+                  </list>
+                </property>
+              </bean>
+            </list>
+          </property>
+        </bean>
+      </list>
+    </property>
+  </bean>
+</beans>
+----
+--
+
+== Connecting From PHP to Ignite Cluster
+
+To connect to Ignite from the PHP PDO side, a DSN has to be properly configured for Ignite.
+Refer to the link:SQL/ODBC/connection-string-dsn#configuring-dsn[Configuring DSN] documentation page for details.
+
+In the examples below, it is assumed that the DSN name is "LocalApacheIgniteDSN". Once everything is configured and can be
+inter-connected, it is time to connect to the Apache Ignite cluster from a PHP PDO application and execute a number of
+queries like the ones shown below.
+
+[tabs]
+--
+tab:Insert[]
+[source,php]
+----
+<?php
+try {
+    // Connecting to Ignite using pre-configured DSN.
+    $dbh = new PDO('odbc:LocalApacheIgniteDSN');
+
+    // Changing PDO error mode.
+    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
+
+    // Preparing query.
+    $dbs = $dbh->prepare('INSERT INTO Person (_key, firstName, lastName, 
resume, salary)
+        VALUES (?, ?, ?, ?, ?)');
+
+    // Declaring parameters.
+    $key = 777;
+    $firstName = "James";
+    $lastName = "Bond";
+    $resume = "Secret Service agent";
+    $salary = 65000;
+
+    // Binding parameters.
+    $dbs->bindParam(1, $key);
+    $dbs->bindParam(2, $firstName);
+    $dbs->bindParam(3, $lastName);
+    $dbs->bindParam(4, $resume);
+    $dbs->bindParam(5, $salary);
+
+    // Executing the query.
+    $dbs->execute();
+
+} catch (PDOException $e) {
+    print "Error!: " . $e->getMessage() . "\n";
+    die();
+}
+?>
+----
+tab:Update[]
+[source,php]
+----
+<?php
+try {
+    // Connecting to Ignite using pre-configured DSN.
+    $dbh = new PDO('odbc:LocalApacheIgniteDSN');
+
+    // Changing PDO error mode.
+    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
+
+    // Executing the query. The salary field is an indexed field.
+    $dbh->query('UPDATE Person SET salary = 42000 WHERE salary > 50000');
+
+} catch (PDOException $e) {
+    print "Error!: " . $e->getMessage() . "\n";
+    die();
+}
+?>
+----
+tab:Select[]
+[source,php]
+----
+<?php
+try {
+    // Connecting to Ignite using pre-configured DSN.
+    $dbh = new PDO('odbc:LocalApacheIgniteDSN');
+
+    // Changing PDO error mode.
+    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
+
+    // Executing the query and getting a result set. The salary field is an 
indexed field.
+    $res = $dbh->query('SELECT firstName, lastName, resume, salary from Person
+        WHERE salary > 12000');
+
+    if ($res == FALSE)
+        print_r("Exception");
+
+    // Printing results.
+    foreach($res as $row) {
+        print_r($row);
+    }
+
+} catch (PDOException $e) {
+    print "Error!: " . $e->getMessage() . "\n";
+    die();
+}
+?>
+----
+tab:Delete[]
+[source,php]
+----
+<?php
+try {
+    // Connecting to Ignite using pre-configured DSN.
+    $dbh = new PDO('odbc:LocalApacheIgniteDSN');
+
+    // Changing PDO error mode.
+    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
+
+    // Performing the query. Both firstName and lastName are non-indexed fields.
+    $dbh->query('DELETE FROM Person WHERE firstName = \'James\' and lastName = 
\'Bond\'');
+
+} catch (PDOException $e) {
+    print "Error!: " . $e->getMessage() . "\n";
+    die();
+}
+?>
+----
+--
+
diff --git a/docs/_docs/installation/vmware-installation.adoc 
b/docs/_docs/installation/vmware-installation.adoc
new file mode 100644
index 0000000..a96ad81
--- /dev/null
+++ b/docs/_docs/installation/vmware-installation.adoc
@@ -0,0 +1,45 @@
+= Installing Apache Ignite in VMWare
+
+== Overview
+
+Apache Ignite can be deployed in virtual and cloud environments managed by VMWare. There are no VMWare-specific
+requirements; however, we recommend that you pin each Ignite VM to a single dedicated host, which allows you to:
+
+* Avoid the "noisy neighbor" problem, where an Ignite VM competes for host resources with other applications, which
+might cause performance spikes in your Ignite cluster.
+* Ensure high availability. If a host goes down while two or more Ignite server node VMs are pinned to it, data loss could occur.
+
+The following section covers vMotion usage aspects for Ignite node migration.
+
+== Cluster Nodes Migration With vMotion
+
+vMotion provides migration of a live VM from one host to another. There are some basic principles Ignite relies on to
+continue normal operation after the migration:
+
+* Memory state on the new host is identical.
+* Disk state is identical (or the new host uses the same disk).
+* IP addresses, available ports, and other networking parameters are not 
changed.
+* All network resources are available, TCP connections are not interrupted.
+
+If vMotion is set up and works in accordance with the rules above, then an Ignite node will function normally.
+
+However, the vMotion migration will impact the performance of the Ignite VM. During the transfer procedure, a lot of resources
+-- mainly CPU and network capacity -- will be serving vMotion needs.
+
+To avoid negative performance spikes and unresponsive/frozen periods of the 
cluster state, we recommend the following:
+
+* Perform the migration during periods of low activity and load on your Ignite cluster. This ensures a faster transfer with
+minimal impact on the cluster performance.
+* Migrate the nodes sequentially, one by one, if several nodes have to be migrated.
+* Set the `IgniteConfiguration.failureDetectionTimeout` parameter to a value higher than the possible downtime of the Ignite VM
+(a configuration sketch follows this list). vMotion stops the CPU of the Ignite VM when a small chunk of state is left to transfer.
+If transferring that chunk takes longer than `IgniteConfiguration.failureDetectionTimeout`, the node
+will be removed from the cluster.
+* Use a high-throughput network. It's better if the vMotion migrator and 
Ignite cluster are using different networks to
+avoid network saturation.
+* If you have an option to choose between more nodes with less RAM vs. fewer 
nodes with more RAM, then go for the first option.
+Smaller RAM on the Ignite VM ensures faster vMotion migration, and faster 
migration ensures more stable operation of the Ignite cluster.
+* If it's applicable for your use case, you can even consider the migration 
with a downtime of the Ignite VM. Given that
+there are backup copies of the data on other nodes in the cluster, the node 
can be shut down and brought back up after the
+vMotion migration is over. This may result in better overall performance (both 
performance of the cluster and the vMotion
+transfer time) than with a live migration.
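+
+As referenced in the list above, the failure detection timeout can be increased before starting the node. This is a
+minimal sketch; the timeout value is illustrative and should be based on your measured vMotion downtime:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+// Tolerate up to 60 seconds of unresponsiveness during a vMotion transfer.
+// The value is illustrative; set it higher than the possible downtime of the VM.
+cfg.setFailureDetectionTimeout(60_000);
+
+Ignite ignite = Ignition.start(cfg);
+----
+--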
diff --git a/docs/_docs/resources-injection.adoc 
b/docs/_docs/resources-injection.adoc
new file mode 100644
index 0000000..9f74112
--- /dev/null
+++ b/docs/_docs/resources-injection.adoc
@@ -0,0 +1,74 @@
+= Resources Injection
+
+== Overview
+
+Ignite supports the dependency injection of pre-defined Ignite resources, and 
supports field-based as well as method-based
+injection. Resources with proper annotations will be injected into the 
corresponding task, job, closure, or SPI before it is initialized.
+
+== Field-Based and Method-Based Injection
+
+You can inject resources by annotating either a field or a method. When you annotate a field, Ignite simply sets the
+value of the field at injection time (regardless of the field's access modifier). If you annotate a method with a
+resource annotation, it should accept an input parameter of the type corresponding to the injected resource. The method
+is then invoked at injection time with the appropriate resource passed as an argument.
+
+[tabs]
+--
+tab:Field-Based Approach[]
+[source,java]
+----
+Ignite ignite = Ignition.ignite();
+
+Collection<String> res = ignite.compute().broadcast(new 
IgniteCallable<String>() {
+  // Inject Ignite instance.
+  @IgniteInstanceResource
+  private Ignite ignite;
+
+  @Override
+  public String call() throws Exception {
+    IgniteCache<Object, Object> cache = ignite.getOrCreateCache(CACHE_NAME);
+
+    // Do some stuff with the cache, then return the result.
+    ...
+  }
+});
+----
+tab:Method-Based Approach[]
+[source,java]
+----
+public class MyClusterJob implements ComputeJob {
+    ...
+    private Ignite ignite;
+    ...
+    // Inject Ignite instance.
+    @IgniteInstanceResource
+    public void setIgnite(Ignite ignite) {
+        this.ignite = ignite;
+    }
+    ...
+}
+----
+--
+
+== Pre-defined Resources
+
+There are a number of pre-defined Ignite resources that you can inject (an injection sketch follows the table):
+
+[cols="1,3",opts="header"]
+|===
+| Resource | Description
+
+| `CacheNameResource` | Injects grid cache name provided via 
`CacheConfiguration.getName()`.
+| `CacheStoreSessionResource` | Injects the current `CacheStoreSession` 
instance.
+| `IgniteInstanceResource` | Injects the Ignite node instance.
+| `JobContextResource` | Injects an instance of `ComputeJobContext`. The job context holds useful information about a
+particular job execution. For example, you can get the name of the cache containing the entry the job is colocated with.
+| `LoadBalancerResource` | Injects an instance of `ComputeLoadBalancer` that can be used by a task to do the load balancing.
+| `ServiceResource` | Injects an Ignite service by the specified service name.
+| `SpringApplicationContextResource` | Injects Spring's `ApplicationContext` 
resource.
+| `SpringResource` | Injects resource from Spring's `ApplicationContext`. Use 
it whenever you would like to access a bean
+specified in Spring's application context XML configuration.
+| `TaskContinuousMapperResource` | Injects an instance of `ComputeTaskContinuousMapper`. Continuous mapping allows you to
+emit jobs from the task at any point, even after the initial map phase.
+| `TaskSessionResource` | Injects an instance of the `ComputeTaskSession` resource, which defines a distributed session for a particular task execution.
+|===
diff --git a/docs/_docs/thread-pools.adoc b/docs/_docs/thread-pools.adoc
new file mode 100644
index 0000000..5d1f9fb
--- /dev/null
+++ b/docs/_docs/thread-pools.adoc
@@ -0,0 +1,136 @@
+= Thread Pools
+
+== Overview
+
+Apache Ignite creates and maintains a variety of thread pools that are used for different purposes depending on the
+API being used. In this documentation, we list some of the well-known internal pools and show how you can create a
+custom one. Refer to the `IgniteConfiguration` Javadoc for the full list of thread pools available in Apache Ignite.
+
+== System Pool
+
+The system pool processes all cache-related operations except for SQL and some other types of queries. This pool is also
+responsible for processing the cancellation of Ignite Compute tasks.
+
+The default pool size is `max(8, total number of cores)`. Use 
`IgniteConfiguration.setSystemThreadPoolSize(...)` to change the pool size.
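+
+For example, the pool size can be adjusted in the node configuration before start-up. This is a minimal sketch; the
+value is illustrative, and the other pools described below are configured with their respective setters in the same way:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+IgniteConfiguration cfg = new IgniteConfiguration();
+
+// Resize the system pool; the value is illustrative.
+cfg.setSystemThreadPoolSize(16);
+
+Ignite ignite = Ignition.start(cfg);
+----
+--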
+
+== Public Pool
+
+The public pool is the workhorse of the Apache Ignite compute grid. All computations are received and processed by this pool.
+
+The default pool size is `max(8, total number of cores)`. Use 
`IgniteConfiguration.setPublicThreadPoolSize(...)` to change the pool size.
+
+== Queries Pool
+
+The queries pool takes care of all SQL, scan, and SPI queries that are sent and executed across the cluster.
+
+The default pool size is `max(8, total number of cores)`. Use 
`IgniteConfiguration.setQueryThreadPoolSize(...)` to change the pool size.
+
+== Services Pool
+
+Apache Ignite Service Grid calls go to the services thread pool. Having dedicated pools for the Service Grid and
+Compute Grid components helps avoid thread starvation and deadlocks when a service implementation calls a computation or vice versa.
+
+The default pool size is `max(8, total number of cores)`. Use 
`IgniteConfiguration.setServiceThreadPoolSize(...)` to change the pool size.
+
+== Striped Pool
+
+The striped pool helps accelerate basic cache operations and transactions significantly by spreading operation
+execution across multiple stripes that do not contend with each other.
+
+The default pool size is `max(8, total number of cores)`. Use 
`IgniteConfiguration.setStripedPoolSize(...)` to change the pool size.
+
+== Data Streamer Pool
+
+The data streamer pool processes all messages and requests coming from 
`IgniteDataStreamer` and a variety of streaming
+adapters that use `IgniteDataStreamer` internally.
+
+The default pool size is `max(8, total number of cores)`. Use 
`IgniteConfiguration.setDataStreamerThreadPoolSize(...)` to change the pool 
size.
+
+== Custom Thread Pools
+
+It is possible to configure a custom thread pool for Ignite Compute tasks. This is useful if you want to execute one
+compute task from another synchronously while avoiding deadlocks. To guarantee this, you need to make sure that a nested
+task is executed in a thread pool that is different from the parent task's thread pool.
+
+A custom pool is defined in `IgniteConfiguration` and has to have a unique 
name:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+IgniteConfiguration cfg = ...;
+
+cfg.setExecutorConfiguration(new ExecutorConfiguration("myPool").setSize(16));
+----
+tab:XML[]
+[source,xml]
+----
+<bean id="grid.cfg" 
class="org.apache.ignite.configuration.IgniteConfiguration">
+  ...
+  <property name="executorConfiguration">
+    <list>
+      <bean class="org.apache.ignite.configuration.ExecutorConfiguration">
+        <property name="name" value="myPool"/>
+        <property name="size" value="16"/>
+      </bean>
+    </list>
+  </property>
+  ...
+</bean>
+----
+--
+
+Now, let's assume that the Ignite Compute task below has to be executed in the `myPool` thread pool defined above:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+public class InnerRunnable implements IgniteRunnable {
+    @Override public void run() {
+        System.out.println("Hello from inner runnable!");
+    }
+}
+----
+--
+
+To do that, use the `IgniteCompute.withExecutor()` method, which executes the task right away from an
+implementation of the parent task, as shown below:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+public class OuterRunnable implements IgniteRunnable {
+    @IgniteInstanceResource
+    private Ignite ignite;
+
+    @Override public void run() {
+        // Synchronously execute InnerRunnable in custom executor.
+        ignite.compute().withExecutor("myPool").run(new InnerRunnable());
+    }
+}
+----
+--
+
+The parent task's execution might be triggered the following way and, in this scenario, it will be executed by the public pool:
+
+[tabs]
+--
+tab:Java[]
+[source,java]
+----
+ignite.compute().run(new OuterRunnable());
+----
+--
+
+[CAUTION]
+====
+[discrete]
+=== Undefined Thread Pool
+If you attempt to execute a compute task in a custom thread pool that is not explicitly configured in Ignite,
+a warning message is printed in the node's logs, and the task is picked up by the public pool for execution.
+====
