This is an automated email from the ASF dual-hosted git repository.

sk0x50 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/ignite-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 31e913f5f1 IGNITE-27327 Cleanup formatting (#304)
31e913f5f1 is described below

commit 31e913f5f1287da8c55e822655760ad9f8afcee0
Author: jinxxxoid <[email protected]>
AuthorDate: Tue Dec 30 20:39:36 2025 +0400

    IGNITE-27327 Cleanup formatting (#304)
---
 _src/_blog/apache-ignite-3-architecture-part-1.pug | 26 ++++----
 _src/_blog/apache-ignite-3-architecture-part-2.pug | 24 ++++----
 _src/_blog/apache-ignite-3-architecture-part-3.pug | 15 +++--
 _src/_blog/apache-ignite-3-architecture-part-4.pug | 13 ++--
 _src/_blog/apache-ignite-3-architecture-part-5.pug | 70 +++++++++++-----------
 ...apache-ignite-3-client-connections-handling.pug |  5 +-
 .../schema-design-for-distributed-systems-ai3.pug  | 30 +++++-----
 blog/apache-ignite-3-architecture-part-1.html      | 14 -----
 blog/apache-ignite-3-architecture-part-2.html      | 16 ++---
 blog/apache-ignite-3-architecture-part-3.html      | 12 +---
 blog/apache-ignite-3-architecture-part-4.html      |  7 ---
 blog/apache-ignite-3-architecture-part-5.html      | 65 +++++++++-----------
 ...pache-ignite-3-client-connections-handling.html |  3 -
 blog/apache/index.html                             | 57 +++++-------------
 blog/ignite/index.html                             | 49 ++++++---------
 blog/index.html                                    | 22 +------
 .../schema-design-for-distributed-systems-ai3.html | 29 ++++-----
 17 files changed, 173 insertions(+), 284 deletions(-)

diff --git a/_src/_blog/apache-ignite-3-architecture-part-1.pug 
b/_src/_blog/apache-ignite-3-architecture-part-1.pug
index d82b99df73..e50892a2ce 100644
--- a/_src/_blog/apache-ignite-3-architecture-part-1.pug
+++ b/_src/_blog/apache-ignite-3-architecture-part-1.pug
@@ -18,7 +18,6 @@ p But success changes the game. Your application now 
processes thousands of even
 p.
   #[strong At high event volumes, data movement between systems becomes the 
primary performance constraint].
 
-hr
 
 h3 The Scale Reality for High-Velocity Applications
 
@@ -41,7 +40,6 @@ ul
   li During traffic spikes (50,000+ events per second), traditional systems 
collapse entirely, dropping connections and losing data when they're needed 
most.
 p The math scales against you.
 
-hr
 
 h3 When Smart Choices Become Scaling Limits
 
@@ -74,7 +72,7 @@ ul
   li #[strong Memory overhead:] Each system caches the same data in different 
formats
   li #[strong Consistency windows:] Brief periods where systems show different 
data states
 
-hr
+
 
 h3 The Hidden Cost of Multi-System Success
 
@@ -92,7 +90,7 @@ ul
 p #[strong Minimum per-event cost ≈ 7 ms before business logic.]
 p At 10,000 events/s, you’d need 70 seconds of processing capacity just 
#[strong for data movement] per real-time second!
 
-hr
+
 
 h3 The Performance Gap That Grows With Success
 
@@ -116,7 +114,7 @@ ul
   li #[strong Result]: Business requirements compromised to fit architectural 
limitations
   li #[strong Reality]: Competitive disadvantage as customer expectations grow
 
-hr
+
 
 h3 The Critical Performance Gap
 
@@ -145,7 +143,7 @@ p Applications needing #[strong microsecond insights] on 
#[strong millisecond tr
 
 p During traffic spikes, traditional architectures either drop connections 
(data loss) or degrade performance (missed SLAs). High-velocity applications 
need intelligent flow control that guarantees stability under pressure while 
preserving data integrity.
 
-hr
+
 
 h3 Event Processing at Scale
 
@@ -200,7 +198,7 @@ table(style="border-collapse: collapse; margin: 20px 0; 
width: 100%;")
 
 p #[strong The math doesn’t work:] parallelism helps, but coordination 
overhead grows exponentially with system count.
 
-hr
+
 
 h3 Real-World Breaking Points
 ul
@@ -208,7 +206,7 @@ ul
   li #[strong Gaming platforms]: Multiplayer backends processing user actions 
find that leaderboard updates lag behind gameplay events.
   li #[strong IoT analytics]: Manufacturing systems processing sensor data 
realize that anomaly detection arrives too late for preventive action.
 
-hr
+
 
 h3 The Apache Ignite Alternative
 
@@ -238,7 +236,7 @@ pre.mermaid.
 
 p #[strong Key difference:] events process #[strong where the data lives], 
eliminating inter-system latency.
 
-hr
+
 
 h3 Apache Ignite Performance Reality Check
 
@@ -273,7 +271,7 @@ pre
 
 p Processing 10,000 events/s is achievable with integrated architecture 
eliminating network overhead.
 
-hr
+
 
 h3 The Unified Data-Access Advantage
 
@@ -308,7 +306,7 @@ ul
 
 p #[strong Unified advantage:] One schema, one transaction model, multiple 
access paths.
 
-hr
+
 
 h3 Apache Ignite Architecture Preview
 
@@ -321,7 +319,7 @@ ul
 
 p These innovations address the compound effects that make multi-system 
architectures unsuitable for high-velocity applications.
 
-hr
+
 
 h3 Business Impact of Architectural Evolution
 
@@ -343,7 +341,7 @@ ul
   li #[strong Consistent behavior]: no synchronization anomalies
   li #[strong Faster delivery]: one integrated system to test and debug
 
-hr
+
 
 h3 The Architectural Evolution Decision
 
@@ -355,7 +353,7 @@ p #[strong Apache Ignite] consolidates transactions, 
caching, and compute into a
 
 p Your winning architecture doesn't have to become your scaling limit. It can 
evolve into the foundation for your next phase of growth.
 
-hr
+
 br
 |
 p #[em Return next Tuesday for Part 2, where we examine how Apache Ignite’s 
memory-first architecture enables optimized event processing while maintaining 
durability, forming the basis for true high-velocity performance.]
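
The cost math in the Part 1 hunks above (≈ 7 ms of data movement per event at 10,000 events/s implying 70 seconds of movement work per wall-clock second) is plain arithmetic. A minimal Python sketch of it — the function and field names are illustrative, not from the post:

```python
import math

def data_movement_budget(events_per_sec: int, per_event_ms: float) -> dict:
    """How much processing time data movement alone consumes per
    wall-clock second, and the parallelism needed just to keep up."""
    busy_seconds = events_per_sec * per_event_ms / 1000.0
    return {
        "busy_seconds_per_second": busy_seconds,
        # Each worker supplies one second of capacity per second, so the
        # busy-time ratio is also the minimum worker count.
        "min_parallel_workers": math.ceil(busy_seconds),
    }

budget = data_movement_budget(events_per_sec=10_000, per_event_ms=7.0)
print(budget)  # 70 seconds of movement work per wall-clock second
```

The minimum-worker figure is why the post argues parallelism alone cannot absorb the overhead: roughly 70 fully busy workers are consumed before any business logic runs.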
diff --git a/_src/_blog/apache-ignite-3-architecture-part-2.pug 
b/_src/_blog/apache-ignite-3-architecture-part-2.pug
index 9f2c4795f9..ef24acad2c 100644
--- a/_src/_blog/apache-ignite-3-architecture-part-2.pug
+++ b/_src/_blog/apache-ignite-3-architecture-part-2.pug
@@ -17,7 +17,7 @@ p Event data lives in memory for immediate access. 
Persistence happens asynchron
 
 p #[strong Performance transformation: significant speed improvements with 
enterprise durability.]
 
-hr
+
 br
 |
 h3 The Event Processing Performance Challenge
@@ -56,7 +56,7 @@ ul
 
 p #[strong The constraint]: Disk I/O creates a performance ceiling on 
throughput regardless of CPU or memory.
 
-hr
+
 br
 |
 h3 #[strong Memory-First Performance Results]
@@ -91,7 +91,7 @@ ul
   li #[strong Consistent performance]: Sub-millisecond response times even 
during traffic spikes
   li #[strong Resource utilization]: Memory bandwidth becomes the scaling 
factor, not disk I/O waits
 
-hr
+
 
 h3 Architecture Comparison: Disk-First vs Memory-First
 
@@ -129,7 +129,7 @@ ul
   li #[strong Performance Impact]: 5-15ms disk waits become <1ms memory 
operations
   li #[strong Scalability]: Memory bandwidth scales linearly vs disk I/O 
bottlenecks
 
-hr
+
 br
 
 h3 Memory-First Architecture Principles
@@ -162,7 +162,7 @@ ul
 
 p #[strong The Evolution Solution]: Instead of choosing between fast caches 
and durable databases, you get both performance characteristics in the same 
platform based on your specific data requirements.
 
-hr
+
 br
 |
 h3 Event Processing Performance Characteristics
@@ -180,7 +180,7 @@ ul
 
 p #[strong Performance Advantage]: Event data processing operates on 
memory-resident data with minimal serialization overhead.
 
-h3 Asynchronous Persistence for Event Durability
+h3 Asynchronous Persistence for Event Durability
 
 p The checkpoint manager ensures event durability without blocking event 
processing.
 
@@ -190,9 +190,9 @@ ul
   li #[strong Write Phase]: Persist changes to storage without blocking 
ongoing operations
   li #[strong Coordination]: Manage recovery markers for failure scenarios
 
-p #[strong Key Advantage]: Event processing continues at memory speeds while 
persistence happens in background threads.
+p #[strong Key Advantage]: Event processing continues at memory speeds while 
persistence happens in background threads.
+
 
-hr
 br
 |
 h3 B+ Tree Organization for Event Data
@@ -210,7 +210,7 @@ ul
 
 h3 MVCC Integration for Event Consistency
 
-p Event processing maintains consistency through multi-version concurrency 
control:
+p Event processing maintains consistency through multi-version concurrency 
control:
 
 p #[strong Event Processing Benefits]:
 ul
@@ -218,7 +218,7 @@ ul
   li #[strong High-Frequency Writes]: Events process concurrently with 
analytical queries
   li #[strong Recovery Guarantees]: Event ordering maintained across failures
 
-hr
+
 br
 
 h3 Performance Characteristics at Event Scale
@@ -248,7 +248,7 @@ p #[strong IoT Event Processing]: Sensor data ingestion 
scales to device-native
 
 p #[strong Gaming Backends]: Player actions process immediately while 
leaderboards, achievements, and session state update concurrently. No delays 
between action and world state changes.
 
-hr
+
 br
 |
 h2 Foundation for High-Velocity Applications
@@ -270,7 +270,7 @@ ul
 p #[strong Maintains Enterprise Requirements]:
 ul
   li ACID transaction guarantees for critical events
-  li Durability through asynchronous checkpointing
+  li Durability through asynchronous checkpointing
   li Recovery capabilities for event stream continuity
 
 p The memory-first foundation transforms what's possible for high-velocity 
applications. Instead of architecting around disk I/O constraints, you can 
design for the performance characteristics your business requirements actually 
need.
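
The checkpointing hunks above describe a mark-then-write cycle that keeps persistence off the hot path. A toy Python model of that write-behind pattern — explicitly not Apache Ignite's implementation, just the idea the post describes:

```python
import threading
import queue

class MemoryFirstStore:
    """Toy write-behind store: puts are memory-speed; a background
    thread persists dirty keys (the 'write phase')."""
    def __init__(self):
        self.memory = {}            # authoritative in-memory state
        self.dirty = queue.Queue()  # 'mark phase': keys awaiting persistence
        self.disk = {}              # stand-in for durable storage
        threading.Thread(target=self._checkpoint_loop, daemon=True).start()

    def put(self, key, value):
        self.memory[key] = value    # memory-speed write path
        self.dirty.put(key)         # mark dirty; never blocks on storage

    def _checkpoint_loop(self):
        while True:
            key = self.dirty.get()              # background write phase
            self.disk[key] = self.memory[key]   # persist without blocking puts
            self.dirty.task_done()

store = MemoryFirstStore()
store.put("event:1", {"amount": 42})
store.dirty.join()                  # wait until the checkpoint has flushed
print(store.disk["event:1"])        # {'amount': 42}
```

The caller's `put` never touches `disk`, which is the property the post attributes to asynchronous checkpointing.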
diff --git a/_src/_blog/apache-ignite-3-architecture-part-3.pug 
b/_src/_blog/apache-ignite-3-architecture-part-3.pug
index 3b2c11e9b5..6860ad442a 100644
--- a/_src/_blog/apache-ignite-3-architecture-part-3.pug
+++ b/_src/_blog/apache-ignite-3-architecture-part-3.pug
@@ -26,7 +26,7 @@ p Apache Ignite 3 eliminates this constraint through flexible 
schema evolution.
 
 p #[strong The result: operational evolution without operational interruption.]
 
-hr
+
 br
 
 h2 The Schema Rigidity Problem at Scale
@@ -113,7 +113,7 @@ pre
 br
 p #[strong The Pattern]: Each functional change requires operational 
disruption that grows with system complexity.
 
-hr
+
 br
 
 h2 Apache Ignite Flexible Schema Architecture
@@ -149,7 +149,7 @@ ul
 
 p The catalog management system uses `HybridTimestamp` values to ensure schema 
versions activate consistently across all cluster nodes, preventing race 
conditions and maintaining data integrity during schema evolution.
 
-hr
+
 br
 
 h2 Schema Evolution in Production
@@ -297,7 +297,7 @@ ul
   li #[strong Risk Reduction]: Gradual rollout instead of big-bang deployment
   li #[strong Revenue Protection]: No downtime for existing operations
 
-hr
+
 br
 
 h2 Schema Evolution Performance Impact
@@ -333,12 +333,12 @@ p #[strong Performance Characteristics:]
 ul
   li #[strong Schema change time]: Fast metadata operations (typically under 
100ms)
   li #[strong Application downtime]: Zero
-  li #[strong Throughput impact]: Minimal during change operation
+  li #[strong Throughput impact]: Minimal during change operation
   li #[strong Recovery time]: Immediate (no recovery needed)
 
 p Performance improves by separating schema metadata management from data 
storage, allowing schema evolution without touching existing data structures.
 
-hr
+
 br
 h2 Business Impact of Schema Flexibility
 
@@ -408,7 +408,7 @@ ul
   li #[strong Experimentation]: Add telemetry fields for new insights
   li #[strong Personalization]: Evolve customer data models based on behavior 
patterns
 br
-hr
+
 
 br
 h2 The Operational Evolution Advantage
@@ -422,7 +422,6 @@ p When market demands shift daily but schema changes occur 
only during monthly m
 
 p Fast-paced applications can't afford architectural constraints that slow 
adaptation. Schema flexibility becomes a strategic advantage when your system 
must evolve faster than competitors can deploy.
 
-hr
 br
 |
 p #[em Return next Tuesday for Part 4, where we explore how integrated 
platform performance maintains consistency across all workload types. This 
ensures that schema flexibility and business agility don't compromise the 
performance characteristics your application requires.]
\ No newline at end of file
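
The catalog discussion in the Part 3 hunks above hinges on timestamp-activated schema versions. A hypothetical Python sketch of that lookup rule — the class and method names are ours, not Ignite's catalog code:

```python
import bisect

class SchemaCatalog:
    """Toy timestamp-versioned catalog: versions must be registered in
    increasing activation order, and lookups assume ts >= the first
    activation timestamp."""
    def __init__(self):
        self._activations = []  # sorted activation timestamps
        self._versions = []     # column list per version

    def register(self, activation_ts, columns):
        self._activations.append(activation_ts)
        self._versions.append(columns)

    def schema_at(self, ts):
        # Latest version whose activation time is <= ts.
        i = bisect.bisect_right(self._activations, ts) - 1
        return self._versions[i]

catalog = SchemaCatalog()
catalog.register(100, ["id", "amount"])
catalog.register(200, ["id", "amount", "currency"])  # zero-downtime ADD COLUMN
print(catalog.schema_at(150))  # ['id', 'amount']
print(catalog.schema_at(250))  # ['id', 'amount', 'currency']
```

Because a change is just a new catalog entry, readers at older timestamps keep resolving the old version — the mechanism behind the post's zero-downtime claim.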
diff --git a/_src/_blog/apache-ignite-3-architecture-part-4.pug 
b/_src/_blog/apache-ignite-3-architecture-part-4.pug
index 5ac36453e5..d02851956d 100644
--- a/_src/_blog/apache-ignite-3-architecture-part-4.pug
+++ b/_src/_blog/apache-ignite-3-architecture-part-4.pug
@@ -15,7 +15,7 @@ p Financial trades execute in microseconds while risk 
analytics run concurrently
 
 p #[strong Real-time analytics breakthrough: query live transactional data 
without ETL delays or performance interference.]
 
-hr
+
 br
 
 h2 Performance Comparison: Traditional vs Integrated
@@ -85,7 +85,7 @@ ul
   li Fresh analytics with trading delays (performance risk)
   li Complete reporting with operational lag (business risk)
 
-hr
+
 br
 
 h2 Apache Ignite Integrated Performance Architecture
@@ -173,7 +173,7 @@ ul
   li #[strong Throughput scaling]: Process multiple operations per thread
   li #[strong Latency hiding]: Overlapping operations reduce total processing 
time
 
-hr
+
 br
 
 h2 Performance Under Real-World Load Conditions
@@ -349,7 +349,7 @@ ul
 
 p #[strong The wow moment]: While competitors' systems crash during peak 
demand, yours maintains 99.9% uptime through intelligent flow control that 
automatically adapts to system pressure.
 
-hr
+
 br
 
 h2 Performance Optimization Strategies
@@ -384,7 +384,7 @@ ul
   li #[strong Interference detection]: Less than 5% mutual performance impact 
between workload types
   li #[strong Capacity planning]: Predictable scaling characteristics enable 
accurate resource allocation
 
-hr
+
 br
 
 h2 Business Impact of Consistent Performance
@@ -431,7 +431,7 @@ ul
   li #[strong Market expansion]: Consistent performance supports higher-volume 
markets
   li #[strong Technical differentiation]: Platform capabilities become 
competitive advantages
 
-hr
+
 br
 
 h2 The Performance Integration Advantage
@@ -446,7 +446,6 @@ p When all workloads perform predictably within the same 
platform, you eliminate
 
 p High-velocity applications need performance characteristics they can depend 
on. Integrated platform performance provides both the speed individual 
operations require and the consistency mixed workloads demand.
 
-hr
 br
 |
 p #[em Return next Tuesday for Part 5, that explores how data colocation 
eliminates the network overhead that traditional distributed systems accept as 
inevitable. This transforms distributed processing into local memory operations 
while maintaining the scale and fault tolerance benefits of distributed 
architecture.]
\ No newline at end of file
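
Among the Part 4 claims above is latency hiding: overlapping in-flight operations so total wall time stays near a single operation's latency rather than the sum of all waits. A minimal asyncio sketch of the idea (illustrative only, not Ignite code):

```python
import asyncio
import time

async def fake_operation(ms: float):
    await asyncio.sleep(ms / 1000)   # stand-in for an async data operation

async def main() -> float:
    start = time.perf_counter()
    # Ten 20 ms operations overlapped: wall time ~20 ms, not ~200 ms.
    await asyncio.gather(*(fake_operation(20) for _ in range(10)))
    return (time.perf_counter() - start) * 1000

elapsed_ms = asyncio.run(main())
print(f"{elapsed_ms:.0f} ms for 10 overlapped 20 ms operations")
```

Sequential execution would take roughly the sum of the waits; overlapping them is what the hunk means by processing multiple operations per thread.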
diff --git a/_src/_blog/apache-ignite-3-architecture-part-5.pug 
b/_src/_blog/apache-ignite-3-architecture-part-5.pug
index 076181a551..fe39569fbe 100644
--- a/_src/_blog/apache-ignite-3-architecture-part-5.pug
+++ b/_src/_blog/apache-ignite-3-architecture-part-5.pug
@@ -13,11 +13,11 @@ p Your high-velocity application processes events fast 
enough until it doesn't.
 
 p Consider a financial trading system processing peak loads of 10,000 trades 
per second. Each trade requires risk calculations against a customer's 
portfolio. If portfolio data lives on different nodes than the processing 
logic, network round-trips create an impossible performance equation: 10,000 
trades × 2ms network latency = 20 seconds of network delay per second of wall 
clock time.
 
-p Apache Ignite eliminates this constraint through data colocation. Related 
data and processing live on the same nodes, transforming distributed operations 
into local memory operations.
+p Apache Ignite eliminates this constraint through data colocation. Related 
data and processing live on the same nodes, transforming distributed operations 
into local memory operations.
 
 p #[strong The result: distributed system performance without distributed 
system overhead.]
 
-hr
+
 br
 
 h2 The Data Movement Tax on High-Velocity Applications
@@ -70,20 +70,20 @@ pre
 
 p #[strong The cascade effect]: Each data dependency creates another network 
round-trip. Complex event processing can require 10+ network operations per 
event.
 
-hr
+
 br
 
-h2 Strategic Data Placement Through Colocation
+h2 Strategic Data Placement Through Colocation
 
-h3 Apache Ignite Colocation Architecture
+h3 Apache Ignite Colocation Architecture
 
-p Apache Ignite uses deterministic hash distribution to ensure related data 
lands on the same nodes. The platform automatically generates consistent hash 
values for colocation keys, ensuring all data with the same colocation key 
always lands on the same node. This deterministic placement means that once 
data is colocated, subsequent access patterns benefit from data locality 
without manual coordination.
+p Apache Ignite uses deterministic hash distribution to ensure related data 
lands on the same nodes. The platform automatically generates consistent hash 
values for colocation keys, ensuring all data with the same colocation key 
always lands on the same node. This deterministic placement means that once 
data is colocated, subsequent access patterns benefit from data locality 
without manual coordination.
 
-h3 Table Design for Event Processing Colocation
+h3 Table Design for Event Processing Colocation
 
 pre
   code.
-    -- Create distribution zone for customer-based colocation
+    -- Create distribution zone for customer-based colocation
     CREATE ZONE customer_zone WITH
         partitions=64,
         replicas=3;
@@ -95,14 +95,14 @@ pre
         amount DECIMAL(10,2),
         order_date TIMESTAMP
     ) WITH ZONE='customer_zone';
-    -- Customer data using same distribution zone for colocation
+    -- Customer data using same distribution zone for colocation
     CREATE TABLE customers (
         customer_id BIGINT PRIMARY KEY,
         name VARCHAR(100),
         segment VARCHAR(20),
         payment_method VARCHAR(50)
     ) WITH ZONE='customer_zone';
-    -- Customer pricing using same zone for colocation
+    -- Customer pricing using same zone for colocation
     CREATE TABLE customer_pricing (
         customer_id BIGINT,
         product_id BIGINT,
@@ -113,13 +113,13 @@ pre
 
 p #[strong Result]: All tables using the same distribution zone share the same 
partitioning strategy. Data for customer 12345 distributes to the same 
partition across all tables, enabling local processing without network 
communication.
 
-h3 Compute Colocation for Event Processing
+h3 Compute Colocation for Event Processing
 
 p #[strong Processing Moves to Where Data Lives:]
 
 p Instead of moving data to processing logic, compute jobs execute on the 
nodes where related data already exists. For customer order processing, the 
compute job runs on the node containing the customer's data, orders, and 
pricing information. All data access becomes local memory operations rather 
than network calls.
 
-p #[strong Simple Colocation Example:]
+p #[strong Simple Colocation Example:]
 
 pre
   code.
@@ -131,12 +131,12 @@ pre
         customerId
     );
 
-p #[strong Performance Impact]: 8ms distributed processing becomes 
sub-millisecond colocated processing through data locality.
+p #[strong Performance Impact]: 8ms distributed processing becomes 
sub-millisecond colocated processing through data locality.
+
 
-hr
 br
 
-h2 Real-World Colocation Performance Impact
+h2 Real-World Colocation Performance Impact
 
 h3 Financial Risk Calculation Example
 
@@ -146,9 +146,9 @@ p #[strong Here's what the distributed approach costs:]
 
 p Traditional risk calculations require multiple network calls: fetch trade 
details (1ms), retrieve portfolio data (2ms), get current market prices (1ms), 
and load risk rules (1ms). The actual risk calculation takes 0.2ms, but network 
overhead dominates at 5.2ms total. At 10,000 trades per second, this creates 52 
seconds of processing time per second. This is mathematically impossible.
 
-p #[strong Colocated Risk Processing:]
+p #[strong Colocated Risk Processing:]
 
-p When account portfolios, trade histories, and risk rules colocate by account 
ID, risk calculations become local operations. All required data lives on the 
same node where the processing executes. Network overhead disappears, 
transforming 5.2ms distributed operations into sub-millisecond local 
calculations.
+p When account portfolios, trade histories, and risk rules colocate by 
account ID, risk calculations become local operations. All required data lives 
on the same node where the processing executes. Network overhead disappears, 
transforming 5.2ms distributed operations into sub-millisecond local 
calculations.
 
 p #[strong Business Impact:]
 ul
@@ -160,11 +160,11 @@ h3 IoT Event Processing Example
 
 p #[strong Problem]: Manufacturing system processes sensor events requiring 
contextual data for anomaly detection.
 
-p #[strong Colocated Design]:
+p #[strong Colocated Design]:
 
 pre
   code.
-    -- Create distribution zone for equipment-based colocation
+    -- Create distribution zone for equipment-based colocation
     CREATE ZONE equipment_zone WITH
         partitions=32,
         replicas=2;
@@ -178,7 +178,7 @@ pre
         vibration DECIMAL(6,3),
         PRIMARY KEY (sensor_id, timestamp)
     ) WITH ZONE='equipment_zone';
-    -- Equipment specifications using same zone for colocation
+    -- Equipment specifications using same zone for colocation
     CREATE TABLE equipment_specs (
         equipment_id BIGINT PRIMARY KEY,
         max_temperature DECIMAL(5,2),
@@ -193,12 +193,12 @@ p Anomaly detection jobs execute on the nodes containing 
the equipment data they
 
 p #[strong Performance Outcome]: Sub-millisecond anomaly detection vs 
multi-millisecond distributed processing. Single cluster processes tens of 
thousands of sensor readings per second with real-time anomaly detection.
 
-hr
+
 br
 
-h2 Colocation Strategy Selection
+h2 Colocation Strategy Selection
 
-h3 Event-Driven Colocation Patterns
+h3 Event-Driven Colocation Patterns
 
 p #[strong Customer-Centric Applications]:
 
@@ -239,20 +239,20 @@ ul
   li Location-aware services access local partition data
   li Geographic analytics minimize cross-region data movement
 
-h3 Automatic Query Optimization Through Colocation
+h3 Automatic Query Optimization Through Colocation
 
 p #[strong When related data lives together, query performance transforms:]
 
 pre
   code.
-    -- Before colocation: expensive cross-node JOINs
+    -- Before colocation: expensive cross-node JOINs
     SELECT c.name, o.order_total, p.amount
     FROM customers c
       JOIN orders o ON c.customer_id = o.customer_id
       JOIN payments p ON o.order_id = p.order_id
     WHERE c.customer_id = 12345;
     -- Network overhead: 3 tables × potential cross-node fetches = high latency
-    -- After colocation: local memory JOINs
+    -- After colocation: local memory JOINs
     -- Same query, but all customer 12345 data lives on same node
     -- Result: JOIN operations become local memory operations
 
@@ -290,18 +290,18 @@ p #[strong The performance transformation]: Query 
optimization through data plac
 
 h3 Performance Validation
 
-p #[strong Colocation Effectiveness Monitoring:]
+p #[strong Colocation Effectiveness Monitoring:]
 
-p Apache Ignite provides built-in metrics to monitor colocation effectiveness: 
query response times, network traffic patterns, and CPU utilization versus 
network wait time. Effective colocation strategies achieve specific performance 
indicators that demonstrate data locality success.
+p Apache Ignite provides built-in metrics to monitor colocation 
effectiveness: query response times, network traffic patterns, and CPU 
utilization versus network wait time. Effective colocation strategies achieve 
specific performance indicators that demonstrate data locality success.
 
 p #[strong Success Indicators:]
 ul
   li #[strong Local execution]: >95% of queries execute locally without 
network hops
-  li #[strong Memory-speed access]: Average query latency <1 ms for colocated 
data
+  li #[strong Memory-speed access]: Average query latency <1 ms for colocated 
data
   li #[strong CPU utilization]: >80% processing time versus network waiting
   li #[strong Predictable performance]: Consistent response times independent 
of cluster size
 
-hr
+
 br
 
 h2 The Business Impact of Eliminating Data Movement
@@ -327,20 +327,20 @@ ul
   li #[strong Complex Event Processing]: Multi-step event processing without 
coordination overhead
   li #[strong Interactive Applications]: User-facing features with 
database-backed logic at cache speeds
 
-hr
+
 br
 
 h2 The Architectural Evolution
 
 p Traditional distributed systems accept network overhead as inevitable. 
Apache Ignite eliminates it through intelligent data placement.
 
-p Your high-velocity application doesn't need to choose between distributed 
scale and local performance. Colocation provides both: the data capacity and 
fault tolerance of distributed systems with the performance characteristics of 
single-node processing.
+p Your high-velocity application doesn't need to choose between distributed 
scale and local performance. Colocation provides both: the data capacity and 
fault tolerance of distributed systems with the performance characteristics of 
single-node processing.
 
-p #[strong The principle]: Colocate related data, localize dependent 
processing.
+p #[strong The principle]: Colocate related data, localize dependent 
processing.
 
 p Every network hop you eliminate returns performance to your application's 
processing budget. At high event volumes, those performance gains determine 
whether your architecture scales with your business success or becomes the 
constraint that limits it.
 
-hr
+
 br
 |
-p #[em Return next Tuesday for Part 6 to discover how distributed consensus 
maintains data consistency during high-frequency operations. We'll explore how 
to preserve the performance gains from colocation while ensuring your 
high-velocity applications remain both fast and reliable.]
\ No newline at end of file
+p #[em Return next Tuesday for Part 6 to discover how distributed consensus 
maintains data consistency during high-frequency operations. We'll explore how 
to preserve the performance gains from colocation while ensuring your 
high-velocity applications remain both fast and reliable.]
\ No newline at end of file
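
The Part 5 hunks above describe deterministic hash placement: every table in a zone routes rows by the colocation key, so related rows share a partition. A sketch of that rule, using `zlib.crc32` as a stand-in for whatever hash function Ignite actually applies (an assumption for illustration):

```python
import zlib

PARTITIONS = 64  # matches the customer_zone DDL in the diff

def partition_for(colocation_key: int) -> int:
    # Deterministic: the same key always maps to the same partition.
    return zlib.crc32(str(colocation_key).encode()) % PARTITIONS

# orders, customers, and customer_pricing all route customer 12345 by the
# same colocation key, so the three-table JOIN in the post stays node-local.
rows = [("orders", 12345), ("customers", 12345), ("customer_pricing", 12345)]
partitions = {partition_for(key) for _, key in rows}
print(partitions)  # a single partition id shared by all three tables
```

Routing a child table by its own primary key instead of the shared key would scatter the set across partitions, which is exactly the cross-node JOIN cost the post quantifies.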
diff --git a/_src/_blog/apache-ignite-3-client-connections-handling.pug 
b/_src/_blog/apache-ignite-3-client-connections-handling.pug
index a303155520..ab889c9222 100644
--- a/_src/_blog/apache-ignite-3-client-connections-handling.pug
+++ b/_src/_blog/apache-ignite-3-client-connections-handling.pug
@@ -41,7 +41,7 @@ ul
 
 p Let's see how many concurrent client connections a single Apache Ignite 3 
node can handle.
 
-hr
+
 br
 h2 Testing Setup
 
@@ -63,7 +63,6 @@ p In the program you can notice the trick with multiple 
localhost addresses (#[c
 
 p Basically, every TCP connection has a source #[code IP:port] pair and the 
port is chosen from the ephemeral port range (typically 32768–60999 on Linux). 
We can't have more connections on the same address than the number of ephemeral 
ports available. Using multiple localhost addresses works around this 
limitation.
 
-hr
 br
 h2 Results
 br
@@ -84,7 +83,7 @@ pre
 
 p Note that each connection exchanges a heartbeat message every 10 seconds, so 
the system is not completely idle. We have about 20k small requests per second, 
but this barely requires any CPU.
 
-hr
+
 br
 h2 Conclusion
 br
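
The ephemeral-port explanation above is easy to quantify: one source IP offers one ephemeral range, so each extra loopback address multiplies the ceiling on connections to a single server endpoint. A quick check using the Linux defaults quoted in the post:

```python
# Linux default ip_local_port_range, as quoted in the post above.
EPHEMERAL_LOW, EPHEMERAL_HIGH = 32768, 60999

def max_connections(loopback_addresses: int) -> int:
    """Upper bound on concurrent connections from this host to one
    server IP:port, given how many source addresses are available."""
    ports_per_ip = EPHEMERAL_HIGH - EPHEMERAL_LOW + 1   # 28232 ports
    return loopback_addresses * ports_per_ip

print(max_connections(1))   # 28232
print(max_connections(4))   # 112928
```

This is why the test program binds clients to several `127.0.0.x` addresses: four loopback addresses roughly quadruple the connection ceiling.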
diff --git a/_src/_blog/schema-design-for-distributed-systems-ai3.pug 
b/_src/_blog/schema-design-for-distributed-systems-ai3.pug
index d92fd94e7c..419dc42df1 100644
--- a/_src/_blog/schema-design-for-distributed-systems-ai3.pug
+++ b/_src/_blog/schema-design-for-distributed-systems-ai3.pug
@@ -1,5 +1,5 @@
 ---
-title: " Schema Design for Distributed Systems: Why Data Placement Matters"
+title: "Schema Design for Distributed Systems: Why Data Placement Matters"
 author: "Michael Aglietti"
 date: 2025-11-18
 tags:
@@ -7,7 +7,7 @@ tags:
     - ignite
 ---
 
-p Discover how Apache Ignite 3 keeps related data together with schema-driven 
colocation, cutting cross-node traffic and making distributed queries fast, 
local and predictable.
+p Discover how Apache Ignite keeps related data together with schema-driven 
colocation, cutting cross-node traffic and making distributed queries fast, 
local and predictable.
 
 <!-- end -->
 
@@ -21,9 +21,9 @@ p.
 
 
 p.
-  That’s where #[strong data placement] becomes the real scaling strategy. 
Apache Ignite 3 takes a different path with #[strong schema-driven colocation] 
— a way to keep related data physically together. Instead of spreading rows 
randomly across nodes, Ignite uses your schema relationships to decide where 
data lives. The result: a 200 ms cross-node query becomes a 5 ms local read.
+  That’s where #[strong data placement] becomes the real scaling strategy. 
Apache Ignite takes a different path with #[strong schema-driven colocation] — 
a way to keep related data physically together. Instead of spreading rows 
randomly across nodes, Ignite uses your schema relationships to decide where 
data lives. The result: a 200 ms cross-node query becomes a 5 ms local read.
+
 
-hr
 
 h3 How Ignite 3 Differs from Other Distributed Databases
 
@@ -43,7 +43,7 @@ ul
   li Local queries eliminate network overhead
   li Microsecond latencies through memory-first storage
 
-hr
+
 
 h3 The Distributed Data Placement Problem
 
@@ -61,7 +61,7 @@ blockquote
     This post assumes you have a basic understanding of how to get an Ignite 3 
cluster running and have worked with the Ignite 3 Java API. If you’re new to 
Ignite 3, start with the 
#[a(href="https://ignite.apache.org/docs/ignite3/latest/quick-start/java-api";) 
Java API quick start guide] to set up your development environment.
 
 
-hr
+
 
 h3 How Ignite 3 Places Data Differently
 
@@ -97,7 +97,7 @@ p.
   Colocation configuration in your schema ensures that Album records use the 
#[code ArtistId] value (not #[code AlbumId]) for partition assignment. This 
guarantees that Artist 1 and all albums with #[code ArtistId = 1] hash to the 
same partition and therefore live on the same nodes.
 
 
-hr
+
 
 h3 Distribution Zones and Data Placement
 
@@ -143,7 +143,7 @@ pre
     .storageProfiles("default")
     .build()).execute();
 
-hr
+
 
 h3 Building Your Music Platform Schema
 
@@ -186,7 +186,7 @@ pre
         public void setName(String name) { this.Name = name; }
     }
 
-hr
+
 
 h3 Parent–Child Colocation Implementation
 
@@ -228,7 +228,7 @@ p The colocation field (
   code ArtistId
   | value to ensure albums with the same artist live on the same nodes as 
their corresponding artist record.
 
-hr
+
 
 h3 Performance Impact: Memory-First + Colocation
 
@@ -263,7 +263,7 @@ ul
     strong Resource efficiency:
     |  CPU focuses on serving requests instead of moving data
 
-hr
+
 
 h3 Colocation Enables Compute-to-Data Processing
 
@@ -282,7 +282,7 @@ pre
 
 p Instead of moving gigabytes of album data to a compute cluster, you move 
kilobytes of logic to where the data already resides.
 
-hr
+
 
 h3 Implementation Guide
 
@@ -305,7 +305,7 @@ pre
         client.catalog().createTable(Track.class);      // References Album
     }
 
-hr
+
 
 h3 Accessing Your Distributed Data
 
@@ -326,12 +326,12 @@ pre
     albums.upsert(null, abbeyRoad);  // Automatically colocated with artist
 
 
-hr
+
 
 h3 Summary
 
 p.
-  Data placement is where distributed performance is won or lost. With 
#[strong schema-driven colocation], Apache Ignite 3 keeps related data together 
on the same nodes, so your queries stay local, fast, and predictable.
+  Data placement is where distributed performance is won or lost. With 
#[strong schema-driven colocation], Apache Ignite keeps related data together 
on the same nodes, so your queries stay local, fast, and predictable.
 
 p Instead of tuning around network latency, you design for it once at the 
schema level. Your joins stay local, your compute jobs run where the data 
lives, and scaling stops being a tradeoff between performance and size.
 
diff --git a/blog/apache-ignite-3-architecture-part-1.html 
b/blog/apache-ignite-3-architecture-part-1.html
index e1d2935c2d..c673b2f541 100644
--- a/blog/apache-ignite-3-architecture-part-1.html
+++ b/blog/apache-ignite-3-architecture-part-1.html
@@ -359,7 +359,6 @@
                   that enabled growth now create performance bottlenecks that 
compound with every additional event.
                 </p>
                 <p><strong>At high event volumes, data movement between 
systems becomes the primary performance constraint</strong>.</p>
-                <hr />
                 <h3>The Scale Reality for High-Velocity Applications</h3>
                 <p>
                   As event volume grows, architectural compromises that once 
seemed reasonable at lower scale become critical bottlenecks. Consider a 
financial trading platform, gaming backend, or IoT processor handling tens of 
thousands of
@@ -381,7 +380,6 @@
                   <li>During traffic spikes (50,000+ events per second), 
traditional systems collapse entirely, dropping connections and losing data 
when they're needed most.</li>
                 </ul>
                 <p>The math scales against you.</p>
-                <hr />
                 <h3>When Smart Choices Become Scaling Limits</h3>
                 <p><strong>Initial Architecture, works great at lower 
scale:</strong></p>
                 <pre class="mermaid">flowchart TB
@@ -410,7 +408,6 @@
                   <li><strong>Memory overhead:</strong> Each system caches the 
same data in different formats</li>
                   <li><strong>Consistency windows:</strong> Brief periods 
where systems show different data states</li>
                 </ul>
-                <hr />
                 <h3>The Hidden Cost of Multi-System Success</h3>
                 <p><strong>Data Movement Overhead:</strong></p>
                 <p>Your events don't just need processing, they need 
processing that maintains consistency across all systems.</p>
@@ -423,7 +420,6 @@
                 </ul>
                 <p><strong>Minimum per-event cost ≈ 7 ms before business 
logic.</strong></p>
                 <p>At 10,000 events/s, you’d need 70 seconds of processing 
capacity just <strong>for data movement</strong> per real-time second!</p>
-                <hr />
                 <h3>The Performance Gap That Grows With Success</h3>
                 <h3>Why Traditional Options Fail</h3>
                 <p><strong>Option 1: Scale out each system</strong></p>
@@ -444,7 +440,6 @@
                   <li><strong>Result</strong>: Business requirements 
compromised to fit architectural limitations</li>
                   <li><strong>Reality</strong>: Competitive disadvantage as 
customer expectations grow</li>
                 </ul>
-                <hr />
                 <h3>The Critical Performance Gap</h3>
                 <table style="borderlicollapse: collapse; margin: 20px 0; 
width: 100%">
                   <thead>
@@ -477,7 +472,6 @@
                   During traffic spikes, traditional architectures either drop 
connections (data loss) or degrade performance (missed SLAs). High-velocity 
applications need intelligent flow control that guarantees stability under 
pressure
                   while preserving data integrity.
                 </p>
-                <hr />
                 <h3>Event Processing at Scale</h3>
                 <p><strong>Here's what traditional multi-system event 
processing costs:</strong></p>
                 <pre><code>// Traditional multi-system event processing
@@ -531,14 +525,12 @@ long totalTime = System.nanoTime() - startTime;
                   </tbody>
                 </table>
                 <p><strong>The math doesn’t work:</strong> parallelism helps, 
but coordination overhead grows exponentially with system count.</p>
-                <hr />
                 <h3>Real-World Breaking Points</h3>
                 <ul>
                   <li><strong>Financial services</strong>: Trading platforms 
hitting 10,000+ trades/second discover that compliance reporting delays impact 
trading decisions.</li>
                   <li><strong>Gaming platforms</strong>: Multiplayer backends 
processing user actions find that leaderboard updates lag behind gameplay 
events.</li>
                   <li><strong>IoT analytics</strong>: Manufacturing systems 
processing sensor data realize that anomaly detection arrives too late for 
preventive action.</li>
                 </ul>
-                <hr />
                 <h3>The Apache Ignite Alternative</h3>
                 <h3>Eliminating Multi-System Overhead</h3>
                 <pre class="mermaid">flowchart TB
@@ -563,7 +555,6 @@ long totalTime = System.nanoTime() - startTime;
     Compute <-->|Direct Access| Memory
 </pre>
                 <p><strong>Key difference:</strong> events process 
<strong>where the data lives</strong>, eliminating inter-system latency.</p>
-                <hr />
                 <h3>Apache Ignite Performance Reality Check</h3>
                 <p><strong>Here's the same event processing with integrated 
architecture:</strong></p>
                 <pre><code>// Apache Ignite integrated event processing
@@ -592,7 +583,6 @@ try (IgniteClient client = 
IgniteClient.builder().addresses("cluster:10800").bui
 // Result: microsecond-range event processing through integrated architecture
 </code></pre>
                 <p>Processing 10,000 events/s is achievable with integrated 
architecture eliminating network overhead.</p>
-                <hr />
                 <h3>The Unified Data-Access Advantage</h3>
                 <p><strong>Here's what eliminates the need for separate 
systems:</strong></p>
                 <pre><code>// The SAME data, THREE access paradigms, ONE system
@@ -621,7 +611,6 @@ CustomerRecord record = customerTable.recordView()
                   <li><strong>Schema drift risks</strong> across different 
systems</li>
                 </ul>
                 <p><strong>Unified advantage:</strong> One schema, one 
transaction model, multiple access paths.</p>
-                <hr />
                 <h3>Apache Ignite Architecture Preview</h3>
                 <p>The ability to handle high-velocity events without 
multi-system overhead requires specific technical innovations:</p>
                 <ul>
@@ -631,7 +620,6 @@ CustomerRecord record = customerTable.recordView()
                   <li><strong>Minimal data copying</strong>: Events process 
against live data through collocated processing and direct memory access</li>
                 </ul>
                 <p>These innovations address the compound effects that make 
multi-system architectures unsuitable for high-velocity applications.</p>
-                <hr />
                 <h3>Business Impact of Architectural Evolution</h3>
                 <h3>Cost Efficiency</h3>
                 <ul>
@@ -651,7 +639,6 @@ CustomerRecord record = customerTable.recordView()
                   <li><strong>Consistent behavior</strong>: no synchronization 
anomalies</li>
                   <li><strong>Faster delivery</strong>: one integrated system 
to test and debug</li>
                 </ul>
-                <hr />
                 <h3>The Architectural Evolution Decision</h3>
                 <p>Every successful application reaches this point: the 
architecture that once fueled growth now constrains it.</p>
                 <p><strong>The question isn't whether you'll hit multi-system 
scaling limits. It's how you'll evolve past them.</strong></p>
@@ -660,7 +647,6 @@ CustomerRecord record = customerTable.recordView()
                   systems at scale, you consolidate core operations into a 
platform designed for high-velocity applications.
                 </p>
                 <p>Your winning architecture doesn't have to become your 
scaling limit. It can evolve into the foundation for your next phase of 
growth.</p>
-                <hr />
                 <br />
                 <p>
                   <em>Return next Tuesday for Part 2, where we examine how 
Apache Ignite’s memory-first architecture enables optimized event processing 
while maintaining durability, forming the basis for true high-velocity 
performance.</em>
diff --git a/blog/apache-ignite-3-architecture-part-2.html 
b/blog/apache-ignite-3-architecture-part-2.html
index 2e0433ab6b..efc6ebaf5a 100644
--- a/blog/apache-ignite-3-architecture-part-2.html
+++ b/blog/apache-ignite-3-architecture-part-2.html
@@ -353,7 +353,6 @@
                   ACID guarantees.
                 </p>
                 <p><strong>Performance transformation: significant speed 
improvements with enterprise durability.</strong></p>
-                <hr />
                 <br />
                 <h3>The Event Processing Performance Challenge</h3>
                 <p><strong> Traditional Database Performance Under 
Load</strong></p>
@@ -383,7 +382,6 @@ long totalTime = System.nanoTime() - startTime;
                   <li>10,000 events/sec × 15ms avg = 150 seconds processing 
time needed per second</li>
                 </ul>
                 <p><strong>The constraint</strong>: Disk I/O creates a 
performance ceiling on throughput regardless of CPU or memory.</p>
-                <hr />
                 <br />
                 <h3><strong>Memory-First Performance Results</strong></h3>
                 <p><strong>Concrete performance improvement with Apache Ignite 
memory-first architecture:</strong></p>
@@ -412,7 +410,6 @@ long totalTime = System.nanoTime() - startTime;
                   <li><strong>Consistent performance</strong>: Sub-millisecond 
response times even during traffic spikes</li>
                   <li><strong>Resource utilization</strong>: Memory bandwidth 
becomes the scaling factor, not disk I/O waits</li>
                 </ul>
-                <hr />
                 <h3>Architecture Comparison: Disk-First vs Memory-First</h3>
                 <pre class="mermaid">flowchart LR
     subgraph "Disk-First Architecture"
@@ -446,7 +443,6 @@ long totalTime = System.nanoTime() - startTime;
                   <li><strong>Performance Impact</strong>: 5-15ms disk waits 
become <1ms memory operations</li>
                   <li><strong>Scalability</strong>: Memory bandwidth scales 
linearly vs disk I/O bottlenecks</li>
                 </ul>
-                <hr />
                 <br />
                 <h3>Memory-First Architecture Principles</h3>
                 <h3>Off-Heap Memory Management</h3>
@@ -472,7 +468,6 @@ long totalTime = System.nanoTime() - startTime;
                   <li><strong>Trade-off</strong>: Near-memory speed with full 
durability protection</li>
                 </ul>
                 <p><strong>The Evolution Solution</strong>: Instead of 
choosing between fast caches and durable databases, you get both performance 
characteristics in the same platform based on your specific data 
requirements.</p>
-                <hr />
                 <br />
                 <h3>Event Processing Performance Characteristics</h3>
                 <h3>Memory-First Operations</h3>
@@ -485,7 +480,7 @@ long totalTime = System.nanoTime() - startTime;
                   <li>B+ tree structures optimize both sequential and random 
access patterns</li>
                 </ul>
                 <p><strong>Performance Advantage</strong>: Event data 
processing operates on memory-resident data with minimal serialization 
overhead.</p>
-                <h3>Asynchronous Persistence for Event Durability</h3>
+                <h3>Asynchronous Persistence for Event Durability</h3>
                 <p>The checkpoint manager ensures event durability without 
blocking event processing.</p>
                 <h4>Background Checkpoint Process</h4>
                 <ul>
@@ -493,8 +488,7 @@ long totalTime = System.nanoTime() - startTime;
                   <li><strong>Write Phase</strong>: Persist changes to storage 
without blocking ongoing operations</li>
                   <li><strong>Coordination</strong>: Manage recovery markers 
for failure scenarios</li>
                 </ul>
-                <p><strong>Key Advantage</strong>: Event processing continues 
at memory speeds while persistence happens in background threads.</p>
-                <hr />
+                <p><strong>Key Advantage</strong>: Event processing continues 
at memory speeds while persistence happens in background threads.</p>
                 <br />
                 <h3>B+ Tree Organization for Event Data</h3>
                 <p>Event-Optimized Data Structures</p>
@@ -507,14 +501,13 @@ long totalTime = System.nanoTime() - startTime;
                   <li>Multi-version support for consistent read operations</li>
                 </ul>
                 <h3>MVCC Integration for Event Consistency</h3>
-                <p>Event processing maintains consistency through 
multi-version concurrency control:</p>
+                <p>Event processing maintains consistency through multi-version 
concurrency control:</p>
                 <p><strong>Event Processing Benefits</strong>:</p>
                 <ul>
                   <li><strong>Consistent Analytics</strong>: Read events at 
specific points in time without blocking new events</li>
                   <li><strong>High-Frequency Writes</strong>: Events process 
concurrently with analytical queries</li>
                   <li><strong>Recovery Guarantees</strong>: Event ordering 
maintained across failures</li>
                 </ul>
-                <hr />
                 <br />
                 <h3>Performance Characteristics at Event Scale</h3>
                 <h3>Memory-First Performance Profile</h3>
@@ -539,7 +532,6 @@ long totalTime = System.nanoTime() - startTime;
                 </p>
                 <p><strong>IoT Event Processing</strong>: Sensor data 
ingestion scales to device-native rates without sampling or queuing delays. 
Anomaly detection runs on live data streams rather than batch-processed 
snapshots.</p>
                 <p><strong>Gaming Backends</strong>: Player actions process 
immediately while leaderboards, achievements, and session state update 
concurrently. No delays between action and world state changes.</p>
-                <hr />
                 <br />
                 <h2>Foundation for High-Velocity Applications</h2>
                 <br />
@@ -559,7 +551,7 @@ long totalTime = System.nanoTime() - startTime;
                 <p><strong>Maintains Enterprise Requirements</strong>:</p>
                 <ul>
                   <li>ACID transaction guarantees for critical events</li>
-                  <li>Durability through asynchronous checkpointing</li>
+                  <li>Durability through asynchronous checkpointing</li>
                   <li>Recovery capabilities for event stream continuity</li>
                 </ul>
                 <p>
diff --git a/blog/apache-ignite-3-architecture-part-3.html 
b/blog/apache-ignite-3-architecture-part-3.html
index 65e78cc04a..5f13e53a8c 100644
--- a/blog/apache-ignite-3-architecture-part-3.html
+++ b/blog/apache-ignite-3-architecture-part-3.html
@@ -364,7 +364,6 @@
                 <p><strong>Total downtime</strong>: 2-4 hours. <strong>Lost 
revenue</strong>: $500K+ for a payment processor.</p>
                 <p>Apache Ignite 3 eliminates this constraint through flexible 
schema evolution. Schema changes apply without downtime, applications adjust 
automatically, and system operations continue uninterrupted.</p>
                 <p><strong>The result: operational evolution without 
operational interruption.</strong></p>
-                <hr />
                 <br />
                 <h2>The Schema Rigidity Problem at Scale</h2>
                 <h3>Traditional Schema Change Overhead</h3>
@@ -434,7 +433,6 @@ ALTER TABLE orders ADD COLUMN shipping_region VARCHAR(50);
 </code></pre>
                 <br />
                 <p><strong>The Pattern</strong>: Each functional change 
requires operational disruption that grows with system complexity.</p>
-                <hr />
                 <br />
                 <h2>Apache Ignite Flexible Schema Architecture</h2>
                 <h3>Catalog-Driven Schema Management</h3>
@@ -462,7 +460,6 @@ ALTER TABLE orders ADD COLUMN shipping_region VARCHAR(50);
                   <li><strong>Application Compatibility</strong>: Gradual 
adoption without breaking changes</li>
                 </ul>
                 <p>The catalog management system uses `HybridTimestamp` values 
to ensure schema versions activate consistently across all cluster nodes, 
preventing race conditions and maintaining data integrity during schema 
evolution.</p>
-                <hr />
                 <br />
                 <h2>Schema Evolution in Production</h2>
                 <h3>Adding Fraud Detection Fields (Real-Time)</h3>
@@ -591,7 +588,6 @@ public class InternationalExpansionEvolution {
                   <li><strong>Risk Reduction</strong>: Gradual rollout instead 
of big-bang deployment</li>
                   <li><strong>Revenue Protection</strong>: No downtime for 
existing operations</li>
                 </ul>
-                <hr />
                 <br />
                 <h2>Schema Evolution Performance Impact</h2>
                 <h3>Traditional Schema Change Performance Cost</h3>
@@ -618,11 +614,10 @@ ALTER TABLE orders ADD COLUMN fraud_score DECIMAL(5,2);
                 <ul>
                   <li><strong>Schema change time</strong>: Fast metadata 
operations (typically under 100ms)</li>
                   <li><strong>Application downtime</strong>: Zero</li>
-                  <li><strong>Throughput impact</strong>: Minimal during 
change operation</li>
+                  <li><strong>Throughput impact</strong>: Minimal during change 
operation</li>
                   <li><strong>Recovery time</strong>: Immediate (no recovery 
needed)</li>
                 </ul>
                 <p>Performance improves by separating schema metadata 
management from data storage, allowing schema evolution without touching 
existing data structures.</p>
-                <hr />
                 <br />
                 <h2>Business Impact of Schema Flexibility</h2>
                 <h3>Revenue Protection</h3>
@@ -682,9 +677,7 @@ public class FlexibleFeatureDevelopment {
                   <li><strong>Experimentation</strong>: Add telemetry fields 
for new insights</li>
                   <li><strong>Personalization</strong>: Evolve customer data 
models based on behavior patterns</li>
                 </ul>
-                <br />
-                <hr />
-                <br />
+                <br /><br />
                 <h2>The Operational Evolution Advantage</h2>
                 <br />
                 <p>
@@ -697,7 +690,6 @@ public class FlexibleFeatureDevelopment {
                   business needs rather than restricting them.
                 </p>
                 <p>Fast-paced applications can't afford architectural 
constraints that slow adaptation. Schema flexibility becomes a strategic 
advantage when your system must evolve faster than competitors can deploy.</p>
-                <hr />
                 <br />
                 <p>
                   <em
diff --git a/blog/apache-ignite-3-architecture-part-4.html 
b/blog/apache-ignite-3-architecture-part-4.html
index ea79fd7f7f..aa291cfbf6 100644
--- a/blog/apache-ignite-3-architecture-part-4.html
+++ b/blog/apache-ignite-3-architecture-part-4.html
@@ -352,7 +352,6 @@
                   unified performance architecture that maintains speed 
characteristics across all workload types.
                 </p>
                 <p><strong>Real-time analytics breakthrough: query live 
transactional data without ETL delays or performance interference.</strong></p>
-                <hr />
                 <br />
                 <h2>Performance Comparison: Traditional vs Integrated</h2>
                 <h3>Traditional Multi-System Performance</h3>
@@ -412,7 +411,6 @@ public class TradingSystemBottlenecks {
                   <li>Fresh analytics with trading delays (performance 
risk)</li>
                   <li>Complete reporting with operational lag (business 
risk)</li>
                 </ul>
-                <hr />
                 <br />
                 <h2>Apache Ignite Integrated Performance Architecture</h2>
                 <h3>Before/After Performance Transformation</h3>
@@ -495,7 +493,6 @@ public class ConcurrentTradingProcessor {
                   <li><strong>Throughput scaling</strong>: Process multiple 
operations per thread</li>
                   <li><strong>Latency hiding</strong>: Overlapping operations 
reduce total processing time</li>
                 </ul>
-                <hr />
                 <br />
                 <h2>Performance Under Real-World Load Conditions</h2>
                 <h3>Performance Under Mixed Workloads</h3>
@@ -660,7 +657,6 @@ flowchart LR
                   <li><strong>Data migration</strong>: 100TB+ datasets 
streamed without overwhelming target systems</li>
                 </ul>
                 <p><strong>The wow moment</strong>: While competitors' systems 
crash during peak demand, yours maintains 99.9% uptime through intelligent flow 
control that automatically adapts to system pressure.</p>
-                <hr />
                 <br />
                 <h2>Performance Optimization Strategies</h2>
                 <h3>Workload-Specific Optimization</h3>
@@ -694,7 +690,6 @@ flowchart LR
                   <li><strong>Interference detection</strong>: Less than 5% 
mutual performance impact between workload types</li>
                   <li><strong>Capacity planning</strong>: Predictable scaling 
characteristics enable accurate resource allocation</li>
                 </ul>
-                <hr />
                 <br />
                 <h2>Business Impact of Consistent Performance</h2>
                 <h3>Risk Reduction Through Performance Predictability</h3>
@@ -736,7 +731,6 @@ flowchart LR
                   <li><strong>Market expansion</strong>: Consistent 
performance supports higher-volume markets</li>
                   <li><strong>Technical differentiation</strong>: Platform 
capabilities become competitive advantages</li>
                 </ul>
-                <hr />
                 <br />
                 <h2>The Performance Integration Advantage</h2>
                 <br />
@@ -752,7 +746,6 @@ flowchart LR
                   of constraining them.
                 </p>
                 <p>High-velocity applications need performance characteristics 
they can depend on. Integrated platform performance provides both the speed 
individual operations require and the consistency mixed workloads demand.</p>
-                <hr />
                 <br />
                 <p>
                   <em
diff --git a/blog/apache-ignite-3-architecture-part-5.html 
b/blog/apache-ignite-3-architecture-part-5.html
index 011c9ed0c8..df3ef51f57 100644
--- a/blog/apache-ignite-3-architecture-part-5.html
+++ b/blog/apache-ignite-3-architecture-part-5.html
@@ -354,9 +354,8 @@
                   Consider a financial trading system processing peak loads of 
10,000 trades per second. Each trade requires risk calculations against a 
customer's portfolio. If portfolio data lives on different nodes than the 
processing
                   logic, network round-trips create an impossible performance 
equation: 10,000 trades × 2ms network latency = 20 seconds of network delay per 
second of wall clock time.
                 </p>
-                <p>Apache Ignite eliminates this constraint through data 
colocation. Related data and processing live on the same nodes, transforming 
distributed operations into local memory operations.</p>
+                <p>Apache Ignite eliminates this constraint through data 
collocation. Related data and processing live on the same nodes, transforming 
distributed operations into local memory operations.</p>
                 <p><strong>The result: distributed system performance without 
distributed system overhead.</strong></p>
-                <hr />
                 <br />
                 <h2>The Data Movement Tax on High-Velocity Applications</h2>
                 <h3>Network Latency Arithmetic</h3>
@@ -396,16 +395,15 @@ PromotionData promotions = applyPromotions(order, 
customer);             // Netw
 // Total network overhead: 8ms per order (before any business logic)
 </code></pre>
                 <p><strong>The cascade effect</strong>: Each data dependency 
creates another network round-trip. Complex event processing can require 10+ 
network operations per event.</p>
-                <hr />
                 <br />
-                <h2>Strategic Data Placement Through Colocation</h2>
-                <h3>Apache Ignite Colocation Architecture</h3>
+                <h2>Strategic Data Placement Through Collocation</h2>
+                <h3>Apache Ignite Collocation Architecture</h3>
                 <p>
-                  Apache Ignite uses deterministic hash distribution to ensure 
related data lands on the same nodes. The platform automatically generates 
consistent hash values for colocation keys, ensuring all data with the same 
colocation
-                  key always lands on the same node. This deterministic 
placement means that once data is colocated, subsequent access patterns benefit 
from data locality without manual coordination.
+                  Apache Ignite uses deterministic hash distribution to ensure 
related data lands on the same nodes. The platform automatically generates 
consistent hash values for collocation keys, ensuring all data with the same
+                  collocation key always lands on the same node. This 
deterministic placement means that once data is collocated, subsequent access 
patterns benefit from data locality without manual coordination.
                 </p>
-                <h3>Table Design for Event Processing Colocation</h3>
-                <pre><code>-- Create distribution zone for customer-based 
colocation
+                <h3>Table Design for Event Processing Collocation</h3>
+                <pre><code>-- Create distribution zone for customer-based 
collocation
 CREATE ZONE customer_zone WITH
     partitions=64,
     replicas=3;
@@ -417,14 +415,14 @@ CREATE TABLE orders (
     amount DECIMAL(10,2),
     order_date TIMESTAMP
 ) WITH ZONE='customer_zone';
--- Customer data using same distribution zone for colocation
+-- Customer data using same distribution zone for collocation
 CREATE TABLE customers (
     customer_id BIGINT PRIMARY KEY,
     name VARCHAR(100),
     segment VARCHAR(20),
     payment_method VARCHAR(50)
 ) WITH ZONE='customer_zone';
--- Customer pricing using same zone for colocation
+-- Customer pricing using same zone for collocation
 CREATE TABLE customer_pricing (
     customer_id BIGINT,
     product_id BIGINT,
@@ -437,13 +435,13 @@ CREATE TABLE customer_pricing (
                   <strong>Result</strong>: All tables using the same 
distribution zone share the same partitioning strategy. Data for customer 12345 
distributes to the same partition across all tables, enabling local processing 
without
                   network communication.
                 </p>
-                <h3>Compute Colocation for Event Processing</h3>
+                <h3>Compute Collocation for Event Processing</h3>
                 <p><strong>Processing Moves to Where Data Lives:</strong></p>
                 <p>
                   Instead of moving data to processing logic, compute jobs 
execute on the nodes where related data already exists. For customer order 
processing, the compute job runs on the node containing the customer's data, 
orders, and
                   pricing information. All data access becomes local memory 
operations rather than network calls.
                 </p>
-                <p><strong>Simple Colocation Example:</strong></p>
+                <p><strong>Simple Collocation Example:</strong></p>
                 <pre><code>// Execute processing where customer data lives
 Tuple customerKey = Tuple.create().set("customer_id", customerId);
 CompletableFuture&lt;OrderResult&gt future = client.compute().executeAsync(
@@ -452,10 +450,9 @@ CompletableFuture&lt;OrderResult&gt future = 
client.compute().executeAsync(
     customerId
 );
 </code></pre>
-                <p><strong>Performance Impact</strong>: 8ms distributed 
processing becomes sub-millisecond colocated processing through data 
locality.</p>
-                <hr />
+                <p><strong>Performance Impact</strong>: 8ms distributed 
processing becomes sub-millisecond collocated processing through data 
locality.</p>
                 <br />
-                <h2>Real-World Colocation Performance Impact</h2>
+                <h2>Real-World Collocation Performance Impact</h2>
                 <h3>Financial Risk Calculation Example</h3>
                 <p><strong>Problem</strong>: Trading system needs real-time 
portfolio risk calculation for each trade.</p>
                 <p><strong>Here's what the distributed approach 
costs:</strong></p>
@@ -463,9 +460,9 @@ CompletableFuture&lt;OrderResult&gt future = 
client.compute().executeAsync(
                   Traditional risk calculations require multiple network 
calls: fetch trade details (1ms), retrieve portfolio data (2ms), get current 
market prices (1ms), and load risk rules (1ms). The actual risk calculation 
takes 0.2ms,
                   but network overhead dominates at 5.2ms total. At 10,000 
trades per second, this creates 52 seconds of processing time per second. This 
is mathematically impossible.
                 </p>
-                <p><strong>Colocated Risk Processing:</strong></p>
+                <p><strong>Collocated Risk Processing:</strong></p>
                 <p>
-                  When account portfolios, trade histories, and risk rules 
colocate by account ID, risk calculations become local operations. All required 
data lives on the same node where the processing executes. Network overhead
+                  When account portfolios, trade histories, and risk rules 
collocate by account ID, risk calculations become local operations. All 
required data lives on the same node where the processing executes. Network 
overhead
                   disappears, transforming 5.2ms distributed operations into 
sub-millisecond local calculations.
                 </p>
                 <p><strong>Business Impact:</strong></p>
@@ -476,8 +473,8 @@ CompletableFuture&lt;OrderResult&gt future = 
client.compute().executeAsync(
                 </ul>
                 <h3>IoT Event Processing Example</h3>
                 <p><strong>Problem</strong>: Manufacturing system processes 
sensor events requiring contextual data for anomaly detection.</p>
-                <p><strong>Colocated Design</strong>:</p>
-                <pre><code>-- Create distribution zone for equipment-based 
colocation
+                <p><strong>Collocated Design</strong>:</p>
+                <pre><code>-- Create distribution zone for equipment-based 
collocation
 CREATE ZONE equipment_zone WITH
     partitions=32,
     replicas=2;
@@ -491,7 +488,7 @@ CREATE TABLE sensor_readings (
     vibration DECIMAL(6,3),
     PRIMARY KEY (sensor_id, timestamp)
 ) WITH ZONE='equipment_zone';
--- Equipment specifications using same zone for colocation
+-- Equipment specifications using same zone for collocation
 CREATE TABLE equipment_specs (
     equipment_id BIGINT PRIMARY KEY,
     max_temperature DECIMAL(5,2),
@@ -509,10 +506,9 @@ CREATE TABLE equipment_specs (
                   <strong>Performance Outcome</strong>: Sub-millisecond 
anomaly detection vs multi-millisecond distributed processing. Single cluster 
processes tens of thousands of sensor readings per second with real-time anomaly
                   detection.
                 </p>
-                <hr />
                 <br />
-                <h2>Colocation Strategy Selection</h2>
-                <h3>Event-Driven Colocation Patterns</h3>
+                <h2>Collocation Strategy Selection</h2>
+                <h3>Event-Driven Collocation Patterns</h3>
                 <p><strong>Customer-Centric Applications</strong>:</p>
                 <pre><code>-- Customer-focused distribution zone
 CREATE ZONE customer_zone WITH partitions=64;
@@ -543,16 +539,16 @@ CREATE TABLE locations (...) WITH ZONE='regional_zone';
                   <li>Location-aware services access local partition data</li>
                   <li>Geographic analytics minimize cross-region data 
movement</li>
                 </ul>
-                <h3>Automatic Query Optimization Through Colocation</h3>
+                <h3>Automatic Query Optimization Through Collocation</h3>
                 <p><strong>When related data lives together, query performance 
transforms:</strong></p>
-                <pre><code>-- Before colocation: expensive cross-node JOINs
+                <pre><code>-- Before collocation: expensive cross-node JOINs
 SELECT c.name, o.order_total, p.amount
 FROM customers c
   JOIN orders o ON c.customer_id = o.customer_id
   JOIN payments p ON o.order_id = p.order_id
 WHERE c.customer_id = 12345;
 -- Network overhead: 3 tables × potential cross-node fetches = high latency
--- After colocation: local memory JOINs
+-- After collocation: local memory JOINs
 -- Same query, but all customer 12345 data lives on same node
 -- Result: JOIN operations become local memory operations
 </code></pre>
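The mechanics behind this before/after contrast can be sketched in a few lines. This is an illustrative model only (the hash and partition count are placeholders, not Ignite's internal partition-mapping): any rows that share a colocation key deterministically map to the same partition, so a JOIN on that key never leaves the node.

```python
# Illustrative sketch, NOT Ignite's actual hash: rows sharing a
# colocation key always land in the same partition, so JOINs on
# that key become local memory operations.
PARTITIONS = 64  # matches the customer_zone example above

def partition_for(colocation_key: int) -> int:
    # Any deterministic function of the key works for the illustration.
    return colocation_key % PARTITIONS

customer_id = 12345
# customers, orders, and payments are all colocated by customer_id,
# so every row for this customer resolves to one partition.
partitions = {
    "customers": partition_for(customer_id),
    "orders": partition_for(customer_id),
    "payments": partition_for(customer_id),
}

assert len(set(partitions.values())) == 1  # all on the same node
print(f"customer {customer_id} data -> partition {partitions['customers']}")
```

Because the mapping is deterministic, the query router can send the whole three-table JOIN for customer 12345 to a single node instead of fetching rows from three.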
@@ -588,19 +584,18 @@ ResultSet&lt;SqlRow&gt; customerAnalysis = 
client.sql().execute(tx, """
                   characteristics.
                 </p>
                 <h3>Performance Validation</h3>
-                <p><strong>Colocation Effectiveness Monitoring:</strong></p>
+                <p><strong>Collocation Effectiveness Monitoring:</strong></p>
                 <p>
-                  Apache Ignite provides built-in metrics to monitor 
colocation effectiveness: query response times, network traffic patterns, and 
CPU utilization versus network wait time. Effective colocation strategies 
achieve specific
+                  Apache Ignite provides built-in metrics to monitor 
collocation effectiveness: query response times, network traffic patterns, and 
CPU utilization versus network wait time. Effective collocation strategies 
achieve specific
                   performance indicators that demonstrate data locality 
success.
                 </p>
                 <p><strong>Success Indicators:</strong></p>
                 <ul>
                   <li><strong>Local execution</strong>: >95% of queries 
execute locally without network hops</li>
-                  <li><strong>Memory-speed access</strong>: Average query 
latency <1 ms for colocated data</li>
+                  <li><strong>Memory-speed access</strong>: Average query 
latency &lt;1 ms for collocated data</li>
                   <li><strong>CPU utilization</strong>: >80% processing time 
versus network waiting</li>
                   <li><strong>Predictable performance</strong>: Consistent 
response times independent of cluster size</li>
                 </ul>
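The success indicators above reduce to simple threshold checks over metric snapshots. A minimal sketch, assuming hypothetical metric names (these are illustrative, not Ignite's built-in metric identifiers):

```python
# Hypothetical metric snapshot; names are illustrative placeholders,
# not Apache Ignite's built-in metric identifiers.
metrics = {
    "queries_total": 100_000,
    "queries_local": 97_500,      # executed without a network hop
    "avg_latency_ms": 0.8,        # for collocated data
    "cpu_busy_ratio": 0.84,       # processing vs. network-wait time
}

local_ratio = metrics["queries_local"] / metrics["queries_total"]

# Thresholds mirror the success indicators listed above.
assert local_ratio > 0.95, "too many queries leaving the node"
assert metrics["avg_latency_ms"] < 1.0, "collocated reads should be sub-ms"
assert metrics["cpu_busy_ratio"] > 0.80, "node is waiting on the network"
print(f"local execution: {local_ratio:.1%}")
```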
-                <hr />
                 <br />
                 <h2>The Business Impact of Eliminating Data Movement</h2>
                 <h3>Cost Reduction</h3>
@@ -621,24 +616,22 @@ ResultSet&lt;SqlRow&gt; customerAnalysis = 
client.sql().execute(tx, """
                   <li><strong>Complex Event Processing</strong>: Multi-step 
event processing without coordination overhead</li>
                   <li><strong>Interactive Applications</strong>: User-facing 
features with database-backed logic at cache speeds</li>
                 </ul>
-                <hr />
                 <br />
                 <h2>The Architectural Evolution</h2>
                 <p>Traditional distributed systems accept network overhead as 
inevitable. Apache Ignite eliminates it through intelligent data placement.</p>
                 <p>
-                  Your high-velocity application doesn't need to choose 
between distributed scale and local performance. Colocation provides both: the 
data capacity and fault tolerance of distributed systems with the performance
+                  Your high-velocity application doesn't need to choose 
between distributed scale and local performance. Collocation provides both: the 
data capacity and fault tolerance of distributed systems with the performance
                   characteristics of single-node processing.
                 </p>
-                <p><strong>The principle</strong>: Colocate related data, 
localize dependent processing.</p>
+                <p><strong>The principle</strong>: Collocate related data, 
localize dependent processing.</p>
                 <p>
                   Every network hop you eliminate returns performance to your 
application's processing budget. At high event volumes, those performance gains 
determine whether your architecture scales with your business success or becomes
                   the constraint that limits it.
                 </p>
-                <hr />
                 <br />
                 <p>
                   <em
-                    >Return next Tuesday for Part 6 to discover how 
distributed consensus maintains data consistency during high-frequency 
operations. We'll explore how to preserve the performance gains from colocation 
while ensuring your
+                    >Return next Tuesday for Part 6 to discover how 
distributed consensus maintains data consistency during high-frequency 
operations. We'll explore how to preserve the performance gains from 
collocation while ensuring your
                     high-velocity applications remain both fast and 
reliable.</em
                   >
                 </p>
diff --git a/blog/apache-ignite-3-client-connections-handling.html 
b/blog/apache-ignite-3-client-connections-handling.html
index 8bdbd12c40..24aed6eaab 100644
--- a/blog/apache-ignite-3-client-connections-handling.html
+++ b/blog/apache-ignite-3-client-connections-handling.html
@@ -375,7 +375,6 @@
                   <li>Query metadata is cached by the client connection, 
improving performance for repeated queries.</li>
                 </ul>
                 <p>Let's see how many concurrent client connections a single 
Apache Ignite 3 node can handle.</p>
-                <hr />
                 <br />
                 <h2>Testing Setup</h2>
                 <h3>Server</h3>
@@ -396,7 +395,6 @@
                   Basically, every TCP connection has a source 
<code>IP:port</code> pair and the port is chosen from the ephemeral port range 
(typically 32768–60999 on Linux). We can't have more connections on the same 
address than the
                   number of ephemeral ports available. Using multiple 
localhost addresses works around this limitation.
                 </p>
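The arithmetic behind that limit is straightforward. A quick sketch using the default Linux ephemeral range quoted above (32768–60999); the number of loopback addresses is an arbitrary example value:

```python
# Ephemeral source ports available per local address with the default
# Linux range 32768-60999 (net.ipv4.ip_local_port_range).
low, high = 32768, 60999
ports_per_address = high - low + 1  # 28232 possible source ports

# Each extra loopback address (127.0.0.2, 127.0.0.3, ...) gives the
# client another full ephemeral range to draw source ports from.
loopback_addresses = 8  # example value
max_connections = ports_per_address * loopback_addresses

print(ports_per_address)  # 28232
print(max_connections)    # 225856
```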
-                <hr />
                 <br />
                 <h2>Results</h2>
                 <br />
@@ -412,7 +410,6 @@
 Verified connectivity in 00:00:09.1446883
 </code></pre>
                 <p>Note that each connection exchanges a heartbeat message 
every 10 seconds, so the system is not completely idle. We have about 20k small 
requests per second, but this barely requires any CPU.</p>
-                <hr />
                 <br />
                 <h2>Conclusion</h2>
                 <br />
diff --git a/blog/apache/index.html b/blog/apache/index.html
index 9a05f4a5d2..013b7a0f55 100644
--- a/blog/apache/index.html
+++ b/blog/apache/index.html
@@ -343,38 +343,38 @@
           <section class="blog__posts">
             <article class="post">
               <div class="post__header">
-                <h2><a 
href="/blog/apache-ignite-3-architecture-part-4.html">Apache Ignite 
Architecture Series: Part 4 - Integrated Platform Performance: Maintaining 
Speed Under Pressure</a></h2>
+                <h2><a 
href="/blog/apache-ignite-3-architecture-part-5.html">Apache Ignite 
Architecture Series: Part 5 - Eliminating Data Movement: The Hidden Cost of 
Distributed Event Processing</a></h2>
                 <div>
-                  December 16, 2025 by Michael Aglietti. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/apache-ignite-3-architecture-part-4.html";>Facebook</a><span>,
 </span
+                  December 23, 2025 by Michael Aglietti. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/apache-ignite-3-architecture-part-5.html";>Facebook</a><span>,
 </span
                   ><a
-                    href="http://twitter.com/home?status=Apache Ignite 
Architecture Series: Part 4 - Integrated Platform Performance: Maintaining 
Speed Under 
Pressure%20https://ignite.apache.org/blog/apache-ignite-3-architecture-part-4.html";
+                    href="http://twitter.com/home?status=Apache Ignite 
Architecture Series: Part 5 - Eliminating Data Movement: The Hidden Cost of 
Distributed Event 
Processing%20https://ignite.apache.org/blog/apache-ignite-3-architecture-part-5.html";
                     >Twitter</a
                   >
                 </div>
               </div>
               <div class="post__content">
-                <p>Traditional systems force a choice: real-time analytics or 
fast transactions. Apache Ignite eliminates this trade-off with integrated 
platform performance that delivers both simultaneously.</p>
+                <p>
+                  Your high-velocity application processes events fast enough 
until it doesn't. The bottleneck isn't CPU or memory. It's data movement. Every 
time event processing requires data from another node, network latency adds
+                  milliseconds that compound into seconds of delay under load.
+                </p>
               </div>
-              <div class="post__footer"><a class="more" 
href="/blog/apache-ignite-3-architecture-part-4.html">↓ Read all</a></div>
+              <div class="post__footer"><a class="more" 
href="/blog/apache-ignite-3-architecture-part-5.html">↓ Read all</a></div>
             </article>
             <article class="post">
               <div class="post__header">
-                <h2><a 
href="/blog/apache-ignite-3-architecture-part-5.html">Apache Ignite 
Architecture Series: Part 5 - Eliminating Data Movement: The Hidden Cost of 
Distributed Event Processing</a></h2>
+                <h2><a 
href="/blog/apache-ignite-3-architecture-part-4.html">Apache Ignite 
Architecture Series: Part 4 - Integrated Platform Performance: Maintaining 
Speed Under Pressure</a></h2>
                 <div>
-                  December 23, 2025 by Michael Aglietti. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/apache-ignite-3-architecture-part-5.html";>Facebook</a><span>,
 </span
+                  December 16, 2025 by Michael Aglietti. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/apache-ignite-3-architecture-part-4.html";>Facebook</a><span>,
 </span
                   ><a
-                    href="http://twitter.com/home?status=Apache Ignite 
Architecture Series: Part 5 - Eliminating Data Movement: The Hidden Cost of 
Distributed Event 
Processing%20https://ignite.apache.org/blog/apache-ignite-3-architecture-part-5.html";
+                    href="http://twitter.com/home?status=Apache Ignite 
Architecture Series: Part 4 - Integrated Platform Performance: Maintaining 
Speed Under 
Pressure%20https://ignite.apache.org/blog/apache-ignite-3-architecture-part-4.html";
                     >Twitter</a
                   >
                 </div>
               </div>
               <div class="post__content">
-                <p>
-                  Your high-velocity application processes events fast enough 
until it doesn't. The bottleneck isn't CPU or memory. It's data movement. Every 
time event processing requires data from another node, network latency adds
-                  milliseconds that compound into seconds of delay under load.
-                </p>
+                <p>Traditional systems force a choice: real-time analytics or 
fast transactions. Apache Ignite eliminates this trade-off with integrated 
platform performance that delivers both simultaneously.</p>
               </div>
-              <div class="post__footer"><a class="more" 
href="/blog/apache-ignite-3-architecture-part-5.html">↓ Read all</a></div>
+              <div class="post__footer"><a class="more" 
href="/blog/apache-ignite-3-architecture-part-4.html">↓ Read all</a></div>
             </article>
             <article class="post">
               <div class="post__header">
@@ -440,13 +440,13 @@
             </article>
             <article class="post">
               <div class="post__header">
-                <h2><a 
href="/blog/schema-design-for-distributed-systems-ai3.html"> Schema Design for 
Distributed Systems: Why Data Placement Matters</a></h2>
+                <h2><a 
href="/blog/schema-design-for-distributed-systems-ai3.html">Schema Design for 
Distributed Systems: Why Data Placement Matters</a></h2>
                 <div>
                   November 18, 2025 by Michael Aglietti. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/schema-design-for-distributed-systems-ai3.html";>Facebook</a><span>,
 </span
-                  ><a href="http://twitter.com/home?status= Schema Design for 
Distributed Systems: Why Data Placement 
Matters%20https://ignite.apache.org/blog/schema-design-for-distributed-systems-ai3.html";>Twitter</a>
+                  ><a href="http://twitter.com/home?status=Schema Design for 
Distributed Systems: Why Data Placement 
Matters%20https://ignite.apache.org/blog/schema-design-for-distributed-systems-ai3.html";>Twitter</a>
                 </div>
               </div>
-              <div class="post__content"><p>Discover how Apache Ignite 3 keeps 
related data together with schema-driven colocation, cutting cross-node traffic 
and making distributed queries fast, local and predictable.</p></div>
+              <div class="post__content"><p>Discover how Apache Ignite keeps 
related data together with schema-driven colocation, cutting cross-node traffic 
and making distributed queries fast, local and predictable.</p></div>
               <div class="post__footer"><a class="more" 
href="/blog/schema-design-for-distributed-systems-ai3.html">↓ Read all</a></div>
             </article>
             <article class="post">
@@ -497,31 +497,6 @@
               </div>
               <div class="post__footer"><a class="more" 
href="/blog/whats-new-in-apache-ignite-3-0.html">↓ Read all</a></div>
             </article>
-            <article class="post">
-              <div class="post__header">
-                <h2><a href="/blog/apache-ignite-2-5-scaling.html">Apache 
Ignite 2.5: Scaling to 1000s Nodes Clusters</a></h2>
-                <div>
-                  May 31, 2018 by Denis Magda. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/apache-ignite-2-5-scaling.html";>Facebook</a><span>,
 </span
-                  ><a href="http://twitter.com/home?status=Apache Ignite 2.5: 
Scaling to 1000s Nodes 
Clusters%20https://ignite.apache.org/blog/apache-ignite-2-5-scaling.html";>Twitter</a>
-                </div>
-              </div>
-              <div class="post__content">
-                <p>
-                  Apache Ignite was always appreciated by its users for two 
primary things it delivers - scalability and performance. Throughout the 
lifetime many distributed systems tend to do performance optimizations from a 
release to
-                  release while making scalability related improvements just a 
couple of times. It&apos;s not because the scalability is of no interest. 
Usually, scalability requirements are set and solved once by a distributed 
system and
-                  don&apos;t require significant additional interventions by 
engineers.
-                </p>
-                <p>
-                  However, Apache Ignite grew to the point when the community 
decided to revisit its discovery subsystem that influences how well and far 
Ignite scales out. The goal was pretty clear - Ignite has to scale to 1000s of 
nodes
-                  as good as it scales to 100s now.
-                </p>
-                <p>
-                  It took many months to get the task implemented. So, please 
join me in welcoming Apache Ignite 2.5 that now can be scaled easily to 1000s 
of nodes and goes with other exciting capabilities. Let&apos;s check out the 
most
-                  prominent ones.
-                </p>
-              </div>
-              <div class="post__footer"><a class="more" 
href="/blog/apache-ignite-2-5-scaling.html">↓ Read all</a></div>
-            </article>
           </section>
           <section class="blog__footer">
             <ul class="pagination">
diff --git a/blog/ignite/index.html b/blog/ignite/index.html
index c398de0d11..2ef21fec14 100644
--- a/blog/ignite/index.html
+++ b/blog/ignite/index.html
@@ -343,38 +343,38 @@
           <section class="blog__posts">
             <article class="post">
               <div class="post__header">
-                <h2><a 
href="/blog/apache-ignite-3-architecture-part-4.html">Apache Ignite 
Architecture Series: Part 4 - Integrated Platform Performance: Maintaining 
Speed Under Pressure</a></h2>
+                <h2><a 
href="/blog/apache-ignite-3-architecture-part-5.html">Apache Ignite 
Architecture Series: Part 5 - Eliminating Data Movement: The Hidden Cost of 
Distributed Event Processing</a></h2>
                 <div>
-                  December 16, 2025 by Michael Aglietti. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/apache-ignite-3-architecture-part-4.html";>Facebook</a><span>,
 </span
+                  December 23, 2025 by Michael Aglietti. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/apache-ignite-3-architecture-part-5.html";>Facebook</a><span>,
 </span
                   ><a
-                    href="http://twitter.com/home?status=Apache Ignite 
Architecture Series: Part 4 - Integrated Platform Performance: Maintaining 
Speed Under 
Pressure%20https://ignite.apache.org/blog/apache-ignite-3-architecture-part-4.html";
+                    href="http://twitter.com/home?status=Apache Ignite 
Architecture Series: Part 5 - Eliminating Data Movement: The Hidden Cost of 
Distributed Event 
Processing%20https://ignite.apache.org/blog/apache-ignite-3-architecture-part-5.html";
                     >Twitter</a
                   >
                 </div>
               </div>
               <div class="post__content">
-                <p>Traditional systems force a choice: real-time analytics or 
fast transactions. Apache Ignite eliminates this trade-off with integrated 
platform performance that delivers both simultaneously.</p>
+                <p>
+                  Your high-velocity application processes events fast enough 
until it doesn't. The bottleneck isn't CPU or memory. It's data movement. Every 
time event processing requires data from another node, network latency adds
+                  milliseconds that compound into seconds of delay under load.
+                </p>
               </div>
-              <div class="post__footer"><a class="more" 
href="/blog/apache-ignite-3-architecture-part-4.html">↓ Read all</a></div>
+              <div class="post__footer"><a class="more" 
href="/blog/apache-ignite-3-architecture-part-5.html">↓ Read all</a></div>
             </article>
             <article class="post">
               <div class="post__header">
-                <h2><a 
href="/blog/apache-ignite-3-architecture-part-5.html">Apache Ignite 
Architecture Series: Part 5 - Eliminating Data Movement: The Hidden Cost of 
Distributed Event Processing</a></h2>
+                <h2><a 
href="/blog/apache-ignite-3-architecture-part-4.html">Apache Ignite 
Architecture Series: Part 4 - Integrated Platform Performance: Maintaining 
Speed Under Pressure</a></h2>
                 <div>
-                  December 23, 2025 by Michael Aglietti. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/apache-ignite-3-architecture-part-5.html";>Facebook</a><span>,
 </span
+                  December 16, 2025 by Michael Aglietti. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/apache-ignite-3-architecture-part-4.html";>Facebook</a><span>,
 </span
                   ><a
-                    href="http://twitter.com/home?status=Apache Ignite 
Architecture Series: Part 5 - Eliminating Data Movement: The Hidden Cost of 
Distributed Event 
Processing%20https://ignite.apache.org/blog/apache-ignite-3-architecture-part-5.html";
+                    href="http://twitter.com/home?status=Apache Ignite 
Architecture Series: Part 4 - Integrated Platform Performance: Maintaining 
Speed Under 
Pressure%20https://ignite.apache.org/blog/apache-ignite-3-architecture-part-4.html";
                     >Twitter</a
                   >
                 </div>
               </div>
               <div class="post__content">
-                <p>
-                  Your high-velocity application processes events fast enough 
until it doesn't. The bottleneck isn't CPU or memory. It's data movement. Every 
time event processing requires data from another node, network latency adds
-                  milliseconds that compound into seconds of delay under load.
-                </p>
+                <p>Traditional systems force a choice: real-time analytics or 
fast transactions. Apache Ignite eliminates this trade-off with integrated 
platform performance that delivers both simultaneously.</p>
               </div>
-              <div class="post__footer"><a class="more" 
href="/blog/apache-ignite-3-architecture-part-5.html">↓ Read all</a></div>
+              <div class="post__footer"><a class="more" 
href="/blog/apache-ignite-3-architecture-part-4.html">↓ Read all</a></div>
             </article>
             <article class="post">
               <div class="post__header">
@@ -440,13 +440,13 @@
             </article>
             <article class="post">
               <div class="post__header">
-                <h2><a 
href="/blog/schema-design-for-distributed-systems-ai3.html"> Schema Design for 
Distributed Systems: Why Data Placement Matters</a></h2>
+                <h2><a 
href="/blog/schema-design-for-distributed-systems-ai3.html">Schema Design for 
Distributed Systems: Why Data Placement Matters</a></h2>
                 <div>
                   November 18, 2025 by Michael Aglietti. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/schema-design-for-distributed-systems-ai3.html";>Facebook</a><span>,
 </span
-                  ><a href="http://twitter.com/home?status= Schema Design for 
Distributed Systems: Why Data Placement 
Matters%20https://ignite.apache.org/blog/schema-design-for-distributed-systems-ai3.html";>Twitter</a>
+                  ><a href="http://twitter.com/home?status=Schema Design for 
Distributed Systems: Why Data Placement 
Matters%20https://ignite.apache.org/blog/schema-design-for-distributed-systems-ai3.html";>Twitter</a>
                 </div>
               </div>
-              <div class="post__content"><p>Discover how Apache Ignite 3 keeps 
related data together with schema-driven colocation, cutting cross-node traffic 
and making distributed queries fast, local and predictable.</p></div>
+              <div class="post__content"><p>Discover how Apache Ignite keeps 
related data together with schema-driven colocation, cutting cross-node traffic 
and making distributed queries fast, local and predictable.</p></div>
               <div class="post__footer"><a class="more" 
href="/blog/schema-design-for-distributed-systems-ai3.html">↓ Read all</a></div>
             </article>
             <article class="post">
@@ -497,28 +497,13 @@
               </div>
               <div class="post__footer"><a class="more" 
href="/blog/whats-new-in-apache-ignite-3-0.html">↓ Read all</a></div>
             </article>
-            <article class="post">
-              <div class="post__header">
-                <h2><a href="/blog/apache-ignite-2-17-0.html">Apache Ignite 
2.17 Release: What’s New</a></h2>
-                <div>
-                  February 13, 2025 by Nikita Amelchev. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/apache-ignite-2-17-0.html";>Facebook</a><span>,
 </span
-                  ><a href="http://twitter.com/home?status=Apache Ignite 2.17 
Release: What’s 
New%20https://ignite.apache.org/blog/apache-ignite-2-17-0.html";>Twitter</a>
-                </div>
-              </div>
-              <div class="post__content">
-                <p>
-                  We are happy to announce the release of <a 
href="https://ignite.apache.org/";>Apache Ignite </a>2.17.0! In this latest 
version, the Ignite community has introduced a range of new features and 
improvements to deliver a more
-                  efficient, flexible, and future-proof platform. Below, we’ll 
cover the key highlights that you can look forward to when upgrading to the new 
release.
-                </p>
-              </div>
-              <div class="post__footer"><a class="more" 
href="/blog/apache-ignite-2-17-0.html">↓ Read all</a></div>
-            </article>
           </section>
           <section class="blog__footer">
             <ul class="pagination">
               <li><a class="current" href="/blog/ignite">1</a></li>
               <li><a class="item" href="/blog/ignite/1/">2</a></li>
               <li><a class="item" href="/blog/ignite/2/">3</a></li>
+              <li><a class="item" href="/blog/ignite/3/">4</a></li>
             </ul>
           </section>
         </main>
diff --git a/blog/index.html b/blog/index.html
index 153e710ff4..003efed0be 100644
--- a/blog/index.html
+++ b/blog/index.html
@@ -440,13 +440,13 @@
             </article>
             <article class="post">
               <div class="post__header">
-                <h2><a 
href="/blog/schema-design-for-distributed-systems-ai3.html"> Schema Design for 
Distributed Systems: Why Data Placement Matters</a></h2>
+                <h2><a 
href="/blog/schema-design-for-distributed-systems-ai3.html">Schema Design for 
Distributed Systems: Why Data Placement Matters</a></h2>
                 <div>
                   November 18, 2025 by Michael Aglietti. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/schema-design-for-distributed-systems-ai3.html";>Facebook</a><span>,
 </span
-                  ><a href="http://twitter.com/home?status= Schema Design for 
Distributed Systems: Why Data Placement 
Matters%20https://ignite.apache.org/blog/schema-design-for-distributed-systems-ai3.html";>Twitter</a>
+                  ><a href="http://twitter.com/home?status=Schema Design for 
Distributed Systems: Why Data Placement 
Matters%20https://ignite.apache.org/blog/schema-design-for-distributed-systems-ai3.html";>Twitter</a>
                 </div>
               </div>
-              <div class="post__content"><p>Discover how Apache Ignite 3 keeps 
related data together with schema-driven colocation, cutting cross-node traffic 
and making distributed queries fast, local and predictable.</p></div>
+              <div class="post__content"><p>Discover how Apache Ignite keeps 
related data together with schema-driven colocation, cutting cross-node traffic 
and making distributed queries fast, local and predictable.</p></div>
               <div class="post__footer"><a class="more" 
href="/blog/schema-design-for-distributed-systems-ai3.html">↓ Read all</a></div>
             </article>
             <article class="post">
@@ -497,22 +497,6 @@
               </div>
               <div class="post__footer"><a class="more" 
href="/blog/whats-new-in-apache-ignite-3-0.html">↓ Read all</a></div>
             </article>
-            <article class="post">
-              <div class="post__header">
-                <h2><a href="/blog/apache-ignite-2-17-0.html">Apache Ignite 
2.17 Release: What’s New</a></h2>
-                <div>
-                  February 13, 2025 by Nikita Amelchev. Share in <a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/apache-ignite-2-17-0.html";>Facebook</a><span>,
 </span
-                  ><a href="http://twitter.com/home?status=Apache Ignite 2.17 
Release: What’s 
New%20https://ignite.apache.org/blog/apache-ignite-2-17-0.html";>Twitter</a>
-                </div>
-              </div>
-              <div class="post__content">
-                <p>
-                  We are happy to announce the release of <a 
href="https://ignite.apache.org/";>Apache Ignite </a>2.17.0! In this latest 
version, the Ignite community has introduced a range of new features and 
improvements to deliver a more
-                  efficient, flexible, and future-proof platform. Below, we’ll 
cover the key highlights that you can look forward to when upgrading to the new 
release.
-                </p>
-              </div>
-              <div class="post__footer"><a class="more" 
href="/blog/apache-ignite-2-17-0.html">↓ Read all</a></div>
-            </article>
           </section>
           <section class="blog__footer">
             <ul class="pagination">
diff --git a/blog/schema-design-for-distributed-systems-ai3.html 
b/blog/schema-design-for-distributed-systems-ai3.html
index 0f48941aa3..a27eb542a2 100644
--- a/blog/schema-design-for-distributed-systems-ai3.html
+++ b/blog/schema-design-for-distributed-systems-ai3.html
@@ -337,7 +337,7 @@
         <h1>Schema Design for Distributed Systems: Why Data Placement 
Matters</h1>
         <p>
           November 18, 2025 by <strong>Michael Aglietti. Share in </strong><a 
href="http://www.facebook.com/sharer.php?u=https://ignite.apache.org/blog/undefined";>Facebook</a><span>,
 </span
-          ><a href="http://twitter.com/home?status= Schema Design for 
Distributed Systems: Why Data Placement 
Matters%20https://ignite.apache.org/blog/undefined";>Twitter</a>
+          ><a href="http://twitter.com/home?status=Schema Design for 
Distributed Systems: Why Data Placement 
Matters%20https://ignite.apache.org/blog/undefined";>Twitter</a>
         </p>
       </section>
       <div class="blog__content">
@@ -345,7 +345,7 @@
           <section class="blog__posts">
             <article class="post">
               <div>
-                <p>Discover how Apache Ignite 3 keeps related data together 
with schema-driven colocation, cutting cross-node traffic and making 
distributed queries fast, local and predictable.</p>
+                <p>Discover how Apache Ignite keeps related data together with 
schema-driven colocation, cutting cross-node traffic and making distributed 
queries fast, local and predictable.</p>
                 <!-- end -->
                 <h3>Schema Design for Distributed Systems: Why Data Placement 
Matters</h3>
                 <p>
@@ -357,10 +357,9 @@
                   together matters more than most people realize. If related 
data lands on different nodes, every query has to travel the network to fetch 
it, and each millisecond adds up.
                 </p>
                 <p>
-                  That’s where <strong>data placement</strong> becomes the 
real scaling strategy. Apache Ignite 3 takes a different path with 
<strong>schema-driven colocation</strong> — a way to keep related data 
physically together.
-                  Instead of spreading rows randomly across nodes, Ignite uses 
your schema relationships to decide where data lives. The result: a 200 ms 
cross-node query becomes a 5 ms local read.
+                  That’s where <strong>data placement</strong> becomes the 
real scaling strategy. Apache Ignite takes a different path with 
<strong>schema-driven colocation</strong> — a way to keep related data 
physically together. Instead
+                  of spreading rows randomly across nodes, Ignite uses your 
schema relationships to decide where data lives. The result: a 200 ms 
cross-node query becomes a 5 ms local read.
                 </p>
-                <hr />
                 <h3>How Ignite 3 Differs from Other Distributed Databases</h3>
                 <p><strong>Traditional Distributed SQL Databases:</strong></p>
                 <ul>
@@ -376,7 +375,6 @@
                   <li>Local queries eliminate network overhead</li>
                   <li>Microsecond latencies through memory-first storage</li>
                 </ul>
-                <hr />
                 <h3>The Distributed Data Placement Problem</h3>
                 <p>You’ve tuned indexes, optimized queries, and scaled your 
cluster—but latency still creeps in. The problem isn’t your SQL — it’s where 
your data lives.</p>
                 <p>
@@ -393,7 +391,6 @@
                    <a 
href="https://ignite.apache.org/docs/ignite3/latest/quick-start/java-api">Java 
API quick start guide</a> to set up your development environment.
                   </p>
                 </blockquote>
-                <hr />
                 <h3>How Ignite 3 Places Data Differently</h3>
                 <p>
                   Tables are distributed across multiple nodes using 
consistent hashing, but with a key difference: your schema definitions control 
data placement. Instead of accepting random distribution of related records, 
you declare
@@ -420,7 +417,6 @@
                   Colocation configuration in your schema ensures that Album 
records use the <code>ArtistId</code> value (not <code>AlbumId</code>) for 
partition assignment. This guarantees that Artist 1 and all albums with
                   <code>ArtistId = 1</code> hash to the same partition and 
therefore live on the same nodes.
                 </p>
-                <hr />
                 <h3>Distribution Zones and Data Placement</h3>
                 <p>Distribution zones are cluster-level configurations that 
define how data is distributed and replicated.</p>
                 <blockquote>
@@ -453,8 +449,9 @@ 
ignite.catalog().create(ZoneDefinition.builder("MusicStoreReplicated")
 .replicas(clusterNodes().size())
 .storageProfiles("default")
 .build()).execute();
+
+
 </code></pre>
-                <hr />
                 <h3>Building Your Music Platform Schema</h3>
                 <blockquote>
                   <p>
@@ -492,8 +489,9 @@ public class Artist {
     public String getName() { return Name; }
     public void setName(String name) { this.Name = name; }
 }
+
+
 </code></pre>
-                <hr />
                 <h3>Parent–Child Colocation Implementation</h3>
                 <p>When users search for "The Beatles", they expect both 
artist details and album listings in the same query. Without colocation, this 
requires cross-node joins that can take 40–200 ms.</p>
                <p>We solve this by setting <code>colocateBy</code> in 
the <code>@Table</code> annotation:</p>
@@ -523,7 +521,6 @@ public class Album {
                   The colocation field (<code>ArtistId</code>) must be part of 
the composite primary key. Ignite uses the <code>ArtistId</code> value to ensure 
albums with the same artist live on the same nodes as their corresponding artist
                   record.
                 </p>
-                <hr />
                 <h3>Performance Impact: Memory-First + Colocation</h3>
                 <p>Let’s quantify the effect of combining memory-first storage 
with schema-driven colocation.</p>
                 <p><strong>Without Colocation – Data Scattered:</strong></p>
@@ -543,7 +540,6 @@ Collection&lt;Album&gt; albums = albumView.getAll(null, 
albumKeys); // Node 2
                   <li><strong>Network traffic elimination:</strong> related 
data queries become local operations</li>
                   <li><strong>Resource efficiency:</strong> CPU focuses on 
serving requests instead of moving data</li>
                 </ul>
-                <hr />
                 <h3>Colocation Enables Compute-to-Data Processing</h3>
                 <p>Schema-driven colocation doesn’t just optimize queries—it 
enables processing where data lives:</p>
                 <pre><code>// Process all albums for an artist locally
@@ -556,7 +552,6 @@ CompletableFuture&lt;RecommendationResult&gt; result = 
ignite.compute()
 
 </code></pre>
                 <p>Instead of moving gigabytes of album data to a compute 
cluster, you move kilobytes of logic to where the data already resides.</p>
-                <hr />
                 <h3>Implementation Guide</h3>
                 <p>Deploy tables in dependency order to avoid colocation 
reference errors:</p>
                 <pre><code>try (IgniteClient client = IgniteClient.builder()
@@ -573,8 +568,9 @@ CompletableFuture&lt;RecommendationResult&gt; result = 
ignite.compute()
     client.catalog().createTable(Album.class);      // References Artist
     client.catalog().createTable(Track.class);      // References Album
 }
+
+
 </code></pre>
-                <hr />
                 <h3>Accessing Your Distributed Data</h3>
                 <p>Ignite 3 provides multiple views of the same colocated 
data:</p>
                 <pre><code>// RecordView for entity operations
@@ -589,11 +585,12 @@ artists.upsert(null, beatles);
 Album abbeyRoad = new Album(1, 1, "Abbey Road", LocalDate.of(1969, 9, 26));
 albums.upsert(null, abbeyRoad);  // Automatically colocated with artist
 
+
+
 </code></pre>
-                <hr />
                 <h3>Summary</h3>
                 <p>
-                  Data placement is where distributed performance is won or 
lost. With <strong>schema-driven colocation</strong>, Apache Ignite 3 keeps 
related data together on the same nodes, so your queries stay local, fast, and
+                  Data placement is where distributed performance is won or 
lost. With <strong>schema-driven colocation</strong>, Apache Ignite keeps 
related data together on the same nodes, so your queries stay local, fast, and
                   predictable.
                 </p>
                 <p>Instead of tuning around network latency, you design for it 
once at the schema level. Your joins stay local, your compute jobs run where 
the data lives, and scaling stops being a tradeoff between performance and 
size.</p>
