This is an automated email from the ASF dual-hosted git repository.

wenming pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/apisix-website.git


The following commit(s) were added to refs/heads/master by this push:
     new c9dd0937949 fix(docs/blog): inline image width syntax (#1953)
c9dd0937949 is described below

commit c9dd09379496129080afe8cd851b7ac2913e997f
Author: Yilia Lin <114121331+yilial...@users.noreply.github.com>
AuthorDate: Wed Sep 10 15:08:47 2025 +0800

    fix(docs/blog): inline image width syntax (#1953)
---
 ...x-honor-gateway-practice-in-massive-business.md | 42 +++++++--------
 .../07/apisix-gateway-practice-in-tencent-timi.md  | 48 ++++++++---------
 ...0-built-unified-l7-load-balancer-with-apisix.md |  6 +--
 ...x-honor-gateway-practice-in-massive-business.md | 60 +++++++++++-----------
 .../07/apisix-gateway-practice-in-tencent-timi.md  | 50 +++++++++---------
 ...0-built-unified-l7-load-balancer-with-apisix.md |  6 +--
 6 files changed, 106 insertions(+), 106 deletions(-)

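[Editor's note] Most hunks in this patch apply one pattern: a centered <div> wrapper whose <img> sets its width through a string-valued style attribute (and closes with an invalid </img> tag) is replaced by a <p> wrapper whose <img> uses the plain HTML width attribute and a self-closing tag. These blog posts are rendered by Docusaurus as MDX, which generally rejects string-valued style props, which is the likely motivation for the fix. A minimal sketch of the pattern, with placeholder alt text and image URL:

    Before:

    <div align="center">
    <img alt="Example" style="width: 65%" src="https://example.com/diagram.webp"></img>
    </div>

    After:

    <p align="center">
      <img width="550" alt="Example" src="https://example.com/diagram.webp" />
    </p>

A few zh hunks additionally wrap inline values in backticks and rename headings; those are noted where they occur.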
diff --git a/blog/en/blog/2025/04/27/apisix-honor-gateway-practice-in-massive-business.md b/blog/en/blog/2025/04/27/apisix-honor-gateway-practice-in-massive-business.md
index 7f98e2f8c47..bbf5b0d7766 100644
--- a/blog/en/blog/2025/04/27/apisix-honor-gateway-practice-in-massive-business.md
+++ b/blog/en/blog/2025/04/27/apisix-honor-gateway-practice-in-massive-business.md
@@ -107,17 +107,17 @@ To handle high-traffic scenarios, the gateway can be rapidly scaled out and prom
 
 Initially, we utilized APISIX's native plugins. As the business grew and requirements evolved, native plugins became insufficient. Consequently, we extended plugins based on platform or user-specific needs, resulting in over 100 plugins to date.
 
-<div align="center">
-<img alt="Honor Plugin Ecosystem" style="width: 65%" src="https://static.api7.ai/uploads/2025/05/13/Pk221A8e_2-honor-plugins-2.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="Honor Plugin Ecosystem" src="https://static.api7.ai/uploads/2025/05/13/Pk221A8e_2-honor-plugins-2.webp" />
+</p>
 
 Plugins are categorized into four groups: traffic control, authentication, security, and observability. Since our clusters are predominantly deployed across dual Availability Zones (AZs) to ensure reliability, this setup introduces cross-AZ latency issues. To address this, the gateway facilitates local routing within the same AZ, ensuring traffic is forwarded to the nearest node.
 
 ### 1. Observability: Traffic Mirroring
 
-<div align="center">
-<img alt="Traffic Mirroring" style="width: 80%" src="https://static.api7.ai/uploads/2025/05/13/1wt6a77m_3-traffic-mirror-2.webp"></img>
-</div>
+<p align="center">
+  <img width="700" alt="Honor Traffic Mirroring" src="https://static.api7.ai/uploads/2025/05/13/1wt6a77m_3-traffic-mirror-2.webp" />
+</p>
 
 #### Request Processing and Traffic Mirroring
 
@@ -131,9 +131,9 @@ After a request reaches APISIX, the traffic is forwarded to the upstream service
 
 3. Asynchronous recording: Asynchronous threads extract requests from the queue and send them to the analytics platform for data recording. Since recording requests include timestamps, asynchronous operations do not affect production traffic.
 
-<div align="center">
-<img alt="Custom Plugin Implementation" style="width: 80%" src="https://static.api7.ai/uploads/2025/05/13/7jNxZpWR_4-custom-plugin-2.webp"></img>
-</div>
+<p align="center">
+  <img width="700" alt="Custom Plugin Implementation" src="https://static.api7.ai/uploads/2025/05/13/7jNxZpWR_4-custom-plugin-2.webp" />
+</p>
 
 #### Recording Platform Features
 
@@ -221,17 +221,17 @@ Initially, when adopting the single-node rate limiting solution, we encountered
 
 2. In the elastic scaling scenario, when the gateway triggers scaling up or down, there may be a mismatch in the throttling values. For example, the CPU usage reached 80%, triggering an automatic scale-out. Assume each node was initially configured with a 2000 QPS limit; increasing the node count to three would inadvertently raise the total rate limit to 6000 QPS. This could overwhelm backend services, leading to potential system anomalies.
 
-<div align="center">
-<img alt="Single-Node Rate Limiting" style="width: 50%" src="https://static.api7.ai/uploads/2025/05/13/GzaePNL2_6-rate-limiting-2.webp"></img>
-</div>
+<p align="center">
+  <img width="500" alt="Single-Node Rate Limiting" src="https://static.api7.ai/uploads/2025/05/13/GzaePNL2_6-rate-limiting-2.webp" />
+</p>
 
 **Solution**
 
 To address these issues, we implemented the following solutions:
 
-<div align="center">
-<img alt="Upgraded Single-Node Rate Limiting Solution" style="width: 60%" src="https://static.api7.ai/uploads/2025/05/13/9egDM2V0_7-rate-limiting-upgrade-2.webp"></img>
-</div>
+<p align="center">
+  <img width="500" alt="Upgraded Single-Node Rate Limiting Solution" src="https://static.api7.ai/uploads/2025/05/13/9egDM2V0_7-rate-limiting-upgrade-2.webp" />
+</p>
 
 1. **Node Reporting and Maintenance**
 
@@ -273,17 +273,17 @@ When applying the open-source distributed rate limiting solution, we encountered
 
 3. **Increased Request Latency**: Open-source distributed rate limiting solutions typically require accessing Redis to complete counting before forwarding the request upstream. This process adds 2–3 milliseconds to the latency of business requests.
 
-<div align="center">
-<img alt="Distributed Rate Limiting" style="width: 40%" src="https://static.api7.ai/uploads/2025/05/13/XLwUO4Gc_8-distributed-rate-limiting-2.webp"></img>
-</div>
+<p align="center">
+  <img width="500" alt="Distributed Rate Limiting" src="https://static.api7.ai/uploads/2025/05/13/XLwUO4Gc_8-distributed-rate-limiting-2.webp" />
+</p>
 
 **Solution**
 
 To address these issues, we designed the following optimizations:
 
-<div align="center">
-<img alt="Upgraded Distributed Rate Limiting Solution" style="width: 45%" src="https://static.api7.ai/uploads/2025/05/13/J4Ie3Hkg_9-distributed-rate-limiting-upgrade-2.webp"></img>
-</div>
+<p align="center">
+  <img width="500" alt="Upgraded Distributed Rate Limiting Solution" src="https://static.api7.ai/uploads/2025/05/13/J4Ie3Hkg_9-distributed-rate-limiting-upgrade-2.webp" />
+</p>
 
 1. **Introducing Local Counting Cache**:
 
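[Editor's note] The rate-limiting hunks above describe keeping a cluster-wide limit stable while nodes scale: each node reports liveness, and per-node quotas are adjusted so scale-out does not multiply the total. As a minimal illustrative sketch in Lua (the function name and the even-split policy are assumptions for illustration, not Honor's actual plugin code):

    -- Divide a cluster-wide quota among currently reported nodes so that
    -- scaling out does not multiply the effective limit.
    local function per_node_quota(total_qps, active_nodes)
      return math.max(1, math.floor(total_qps / math.max(1, active_nodes)))
    end

    -- With a 6000 QPS cluster-wide budget, scaling from 1 node to 3 drops
    -- each node's quota from 6000 to 2000 instead of tripling the total.
    assert(per_node_quota(6000, 3) == 2000)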
diff --git a/blog/en/blog/2025/05/07/apisix-gateway-practice-in-tencent-timi.md b/blog/en/blog/2025/05/07/apisix-gateway-practice-in-tencent-timi.md
index ddb0fae9770..b498a2c7fa7 100644
--- a/blog/en/blog/2025/05/07/apisix-gateway-practice-in-tencent-timi.md
+++ b/blog/en/blog/2025/05/07/apisix-gateway-practice-in-tencent-timi.md
@@ -80,9 +80,9 @@ Development standards are easy to understand. We need to define a library, speci
 
 To lower the development threshold, we support local quick running and testing. By utilizing APISIX's Docker image, local plugins can be mounted into containers via volume mapping for convenient deployment. Additionally, by leveraging the downstream echo-service (a service developed based on open-source Node.js), upstream behavior can be simulated. This service can return all content of a request, such as request headers. By adding specific parameters in the request (e.g., HTTP status co [...]
 
-<div align="center">
-<img alt="TAPISIX Project Introduction" style="width: 80%" src="https://static.api7.ai/uploads/2025/05/15/1r4TMUK9_timi-2.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="Honor Plugin Ecosystem" src="https://static.api7.ai/uploads/2025/05/15/1r4TMUK9_timi-2.webp" />
+</p>
 
 ### 2. Local Quick Running and Testing
 
@@ -94,9 +94,9 @@ To reduce the development threshold and accelerate verification, we provide conv
 
 3. **Direct Browser Access**: Developers can directly verify plugin functionality by accessing relevant interfaces in a browser, without additional deployment or configuration.
 
-<div align="center">
-<img alt="Run and Test" style="width: 60%" src="https://static.api7.ai/uploads/2025/05/15/bdMFTb0b_timi-3.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="Honor Plugin Ecosystem" src="https://static.api7.ai/uploads/2025/05/15/bdMFTb0b_timi-3.webp" />
+</p>
 
 By defining development standards and providing local quick development support, we have effectively lowered the development threshold and accelerated the plugin verification process. Developers can focus on feature implementation without worrying about complex deployment and testing procedures, thereby improving overall development efficiency.
 
@@ -120,9 +120,9 @@ During pipeline construction, it is essential to ensure reliability and stabilit
 
     c. Try Build: Constructs an image using the source code to verify its buildability.
 
-<div align="center">
-<img alt="Pipeline Building" style="width: 50%" src="https://static.api7.ai/uploads/2025/05/15/VAFUteFJ_timi-4.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="Pipeline Building" src="https://static.api7.ai/uploads/2025/05/15/VAFUteFJ_timi-4.webp" />
+</p>
 
 ### 4. Reliability Assurance (CR, lint, unit testing, black-box testing)
 
@@ -132,9 +132,9 @@ We utilize the k6 testing framework from Grafana to validate core test cases. Th
 
 k6 Test Cases: Comprising hundreds of test cases covering core processes to ensure plugin reliability.
 
-<div align="center">
-<img alt="K6 Test" style="width: 80%" src="https://static.api7.ai/uploads/2025/05/15/80NTJpcY_timi-5.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="K6 Test" src="https://static.api7.ai/uploads/2025/05/15/80NTJpcY_timi-5.webp" />
+</p>
 
 Through the complete process of local development, quick validation, MR submission, pipeline inspection, reliability assurance, and packaging and deployment, we ensure that every stage of plugin development and deployment undergoes strict quality control.
 
@@ -154,9 +154,9 @@ APISIX offers three deployment methods to accommodate the needs of different pro
 
 We utilize the standalone mode that retains only the data plane. All configurations are stored locally, avoiding reliance on etcd. This mode is more suitable for overseas scenarios, since etcd is a database that some cloud providers do not offer as a service. Given the stringent overseas data compliance requirements and our k8s-based deployment environment, we have also implemented a configuration management approach that is k8s-friendly.
 
-<div align="center">
-<img alt="APISIX Deployment" style="width: 75%" src="https://static.api7.ai/uploads/2025/05/07/99nRuGCG_7-dp-and-cp.webp"></img>
-</div>
+<p align="center">
+  <img width="650" alt="APISIX Deployment" src="https://static.api7.ai/uploads/2025/05/07/99nRuGCG_7-dp-and-cp.webp" />
+</p>
 
 - YAML Configuration: All configurations are directly stored in YAML files for easy management and automated deployment.
 - ConfigMap Storage: YAML files are directly placed in k8s ConfigMaps to ensure configuration versioning and traceability.
@@ -183,9 +183,9 @@ To address this, we adopted the GitOps model, deploying YAML files to a Kubernet
 
 ### Deployment Process Example
 
-<div align="center">
-<img alt="Deployment Workflow" style="width: 80%" src="https://static.api7.ai/uploads/2025/05/15/S2R27TnZ_timi-8.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="Deployment Workflow" src="https://static.api7.ai/uploads/2025/05/15/S2R27TnZ_timi-8.webp" />
+</p>
 
 In the deployment process illustrated above, SREs (Site Reliability Engineers) manage configurations on behalf of users. Any modifications, such as route changes or image updates, must be implemented by altering the Helm Chart repository. After the change, Argo CD automatically detects it and triggers the pipeline to pull the latest configuration for deployment. Additionally, strong synchronization is established between Git and Kubernetes, ensuring configuration consistency and reliability.
 
@@ -213,9 +213,9 @@ The APISIX Ingress Controller, as the official community solution for k8s, follo
 
 After deploying these CRDs to the k8s cluster, the Ingress Controller continuously monitors the relevant CRD resources. It parses the configuration information from the CRDs and synchronizes the configurations to APISIX by invoking APISIX's Admin API. The Ingress Controller primarily bridges CRDs and APISIX, ultimately writing the data to etcd.
 
-<div align="center">
-<img alt="APISIX Ingress Controller" style="width: 50%" src="https://static.api7.ai/uploads/2025/05/07/XbjN7Bky_11-ingress-controller.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="APISIX Ingress Controller" src="https://static.api7.ai/uploads/2025/05/07/XbjN7Bky_11-ingress-controller.webp" />
+</p>
 
 After careful evaluation, we found that the deployment and operational model of the APISIX Ingress Controller does not fully align with our team's requirements for the following reasons:
 
@@ -315,9 +315,9 @@ To address these challenges, we implemented a two-pronged strategy to optimize o
 
 In k8s upstream configurations, there are various types, differing solely by the service name. After introducing a new version and updating the Lua package, we effectively addressed the issue of duplicated configurations by fully leveraging YAML's anchor (&) and alias (*) features. Through the anchor mechanism, we abstracted and reused common configuration parts, reducing duplicated configurations by approximately 70% in practical applications. This significantly improved the eff [...]
 
-<div align="center">
-<img alt="Duplicated Route Configuration" style="width: 60%" src="https://static.api7.ai/uploads/2025/05/07/hbEPdHAf_20-duplicated-route-configuration.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="Duplicated Route Configuration" src="https://static.api7.ai/uploads/2025/05/07/hbEPdHAf_20-duplicated-route-configuration.webp" />
+</p>
 
 ## Migration Practices of APISIX Replacing Ingress
 
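[Editor's note] The duplicated-configuration hunk above refers to YAML's anchor (&) and alias (*) features. A minimal sketch of the reuse pattern under discussion (the field and service names are hypothetical, and the <<: merge key requires YAML 1.1 merge support in the parser):

    # Common upstream settings defined once via an anchor...
    defaults: &upstream_defaults
      scheme: http
      type: roundrobin

    upstreams:
      # ...and merged into each concrete upstream via an alias.
      - <<: *upstream_defaults
        service_name: default/service-a:http
      - <<: *upstream_defaults
        service_name: default/service-b:http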
diff --git a/blog/en/blog/2025/09/03/360-built-unified-l7-load-balancer-with-apisix.md b/blog/en/blog/2025/09/03/360-built-unified-l7-load-balancer-with-apisix.md
index 2aceb11537f..809e072d1db 100644
--- a/blog/en/blog/2025/09/03/360-built-unified-l7-load-balancer-with-apisix.md
+++ b/blog/en/blog/2025/09/03/360-built-unified-l7-load-balancer-with-apisix.md
@@ -89,9 +89,9 @@ Traffic flows through one of two primary paths depending on the environment:
 
 * **VPC Cloud L7 Traffic:**
     Public EIP -> L4 Load Balancer (vpc vip) -> **L7 LB Gateway with VXLAN encapsulation** -> VPC-internal IP (VM, pod, etc.)
 
-<div align="center">
-<img alt="Traffic Path Diagram" style="width: 65%" src="https://static.api7.ai/uploads/2025/09/04/zO2tt4qq_3.1-en.webp"></img>
-</div>
+<p align="center">
+  <img width="500" alt="Traffic Path Diagram" src="https://static.api7.ai/uploads/2025/09/04/zO2tt4qq_3.1-en.webp" />
+</p>
 
 ### Conclusion & Future Outlook
 
diff --git a/blog/zh/blog/2025/04/27/apisix-honor-gateway-practice-in-massive-business.md b/blog/zh/blog/2025/04/27/apisix-honor-gateway-practice-in-massive-business.md
index 8894de14bfc..55670522f0a 100644
--- a/blog/zh/blog/2025/04/27/apisix-honor-gateway-practice-in-massive-business.md
+++ b/blog/zh/blog/2025/04/27/apisix-honor-gateway-practice-in-massive-business.md
@@ -24,7 +24,7 @@ image: https://static.api7.ai/uploads/2025/04/27/qq0YIAxK_honor-case-study.webp
 > Authors: Fu Jiahao and Xu Weichuan, engineers at Honor's PaaS Platform Department. This article is based on their talks at the APISIX Shenzhen Meetup on April 12, 2025.
 <!--truncate-->
 
-## About Honor
+## Honor Overview
 
 [Honor](https://www.honor.com/cn/), founded in 2013, is a leading global provider of smart devices. Its products are sold in over 100 countries and regions, and it has established partnerships with more than 200 carriers. Honor has more than 52,000 experience stores and dedicated counters worldwide and over 250 million connected devices.
 
@@ -97,9 +97,9 @@ image: https://static.api7.ai/uploads/2025/04/27/qq0YIAxK_honor-case-study.webp
 
 Regarding APISIX practice under Honor's massive business: initially we used APISIX's native plugins, but as the business grew and requirements evolved, native plugins could no longer meet our needs. We therefore extended plugins based on the platform's or users' own needs, and there are now more than 100 of them.
 
-<div align="center">
-<img alt="Honor Plugin Ecosystem" style="width: 65%" src="https://static.api7.ai/uploads/2025/05/16/eycp2ZaK_2-honor-plugins-ecosystem.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="Honor Plugin Ecosystem" src="https://static.api7.ai/uploads/2025/05/16/eycp2ZaK_2-honor-plugins-ecosystem.webp" />
+</p>
 
 ### Plugin Categories
 
@@ -107,9 +107,9 @@ image: https://static.api7.ai/uploads/2025/04/27/qq0YIAxK_honor-case-study.webp
 
 ### 1. Observability: Traffic Mirroring
 
-<div align="center">
-<img alt="Traffic Mirroring" style="width: 80%" src="https://static.api7.ai/uploads/2025/04/27/N6bqzJgO_3-traffic-mirror.webp"></img>
-</div>
+<p align="center">
+  <img width="700" alt="Honor Traffic Mirroring" src="https://static.api7.ai/uploads/2025/04/27/N6bqzJgO_3-traffic-mirror.webp" />
+</p>
 
 #### Request Processing and Traffic Mirroring
 
@@ -123,9 +123,9 @@ image: https://static.api7.ai/uploads/2025/04/27/qq0YIAxK_honor-case-study.webp
 2. **Upstream processing**: APISIX forwards the request to the upstream; once the upstream returns a response, the client request flow ends.
 3. **Asynchronous recording**: Asynchronous threads pull requests from the queue and send them to the recording platform for data recording. Since recorded requests carry timestamps, the asynchronous operation does not affect production traffic.
 
-<div align="center">
-<img alt="Custom Plugin Implementation" style="width: 80%" src="https://static.api7.ai/uploads/2025/04/27/0x2hYRcj_4-custom-plugin.webp"></img>
-</div>
+<p align="center">
+  <img width="700" alt="Custom Plugin Implementation" src="https://static.api7.ai/uploads/2025/04/27/0x2hYRcj_4-custom-plugin.webp" />
+</p>
 
 #### Recording Platform Features
 
@@ -146,7 +146,7 @@ image: https://static.api7.ai/uploads/2025/04/27/qq0YIAxK_honor-case-study.webp
 
 Traditional canary-release plugins support rule-based or percentage-based canary traffic, but percentage-based canary can lead to inconsistent traffic assignment; for example, the same request may be routed to different canary environments at different times. In To C scenarios, this can affect business stability.
 
-To solve this, we introduced the hash plugin key-hash in front of the canary plugin, combining the two to achieve a stable canary percentage distribution. The implementation is as follows:
+To solve this, we introduced the hash plugin `key-hash` in front of the canary plugin, combining the two to achieve a stable canary percentage distribution. The implementation is as follows:
 
 1. Support hash computation based on a specific request header or Cookie as input.
 2. Use the hash result as the canary plugin's input to determine the percentage-based traffic assignment.
@@ -163,11 +163,11 @@ image: https://static.api7.ai/uploads/2025/04/27/qq0YIAxK_honor-case-study.webp
 
    a. When traffic passes through the APISIX gateway, it is tagged according to the canary policy.
 
-   b. If the passing traffic is canary traffic, the gateway inserts a specific request header (e.g., honor-tag:gray) to mark the request as canary traffic.
+   b. If the passing traffic is canary traffic, the gateway inserts a specific request header (e.g., `honor-tag:gray`) to mark the request as canary traffic.
 
 **2. Service Registration and Tagging**:
 
-   a. When Service A registers with the registry, it registers its canary tag (e.g., gray) along with it.
+   a. When Service A registers with the registry, it registers its canary tag (e.g., `gray`) along with it.
 
    b. The registry maintains the mapping between services' canary tags and their instances.
 
@@ -175,7 +175,7 @@ image: https://static.api7.ai/uploads/2025/04/27/qq0YIAxK_honor-case-study.webp
 
    a. Service A calls Service B:
 
-      i. Upon receiving a request, Service A first checks whether it carries the canary tag (e.g., honor-tag:gray).
+      i. Upon receiving a request, Service A first checks whether it carries the canary tag (e.g., `honor-tag:gray`).
 
       ii. If the request carries the canary tag, Service A fetches Service B's canary instances from the registry based on the tag and preferentially dispatches to them.
       
@@ -183,13 +183,13 @@ image: https://static.api7.ai/uploads/2025/04/27/qq0YIAxK_honor-case-study.webp
 
    b. Service B calls Service C:
 
-      i. After Service B receives the canary tag passed along by Service A (e.g., honor-tag:gray), it likewise fetches Service C's canary instances from the registry based on the tag.
+      i. After Service B receives the canary tag passed along by Service A (e.g., `honor-tag:gray`), it likewise fetches Service C's canary instances from the registry based on the tag.
 
       ii. If Service C has canary instances, the request is dispatched to them; otherwise, it is dispatched to production instances.
 
 **4. Full-Link Canary Implementation**:
 
-   a. Pass-through of the request header (e.g., honor-tag:gray) ensures the canary tag stays consistent across the service chain.
+   a. Pass-through of the request header (e.g., `honor-tag:gray`) ensures the canary tag stays consistent across the service chain.
 
    b. Each node in the service chain makes dispatch decisions based on the canary tag, achieving full-link canary capability.
 
@@ -209,17 +209,17 @@ APISIX provides rich plugin capabilities covering single-node and distributed rate limiting
 
 In elastic scaling scenarios, when the gateway triggers scale-out or scale-in, rate limit values can become mismatched. For example, when CPU usage reaches 80% and triggers automatic scale-out, assuming each node is initially configured with a limit of 2000, increasing the node count to 3 raises the total limit to 6000, which may overwhelm backend services beyond their capacity.
 
-<div align="center">
-<img alt="Single-Node Rate Limiting" style="width: 50%" src="https://static.api7.ai/uploads/2025/04/27/35KRFtE7_6-rate-limiting.webp"></img>
-</div>
+<p align="center">
+  <img width="500" alt="Single-Node Rate Limiting" src="https://static.api7.ai/uploads/2025/04/27/35KRFtE7_6-rate-limiting.webp" />
+</p>
 
 **Solution**
 
 To address these issues, we introduced the following optimizations:
 
-<div align="center">
-<img alt="Upgraded Single-Node Rate Limiting Solution" style="width: 60%" src="https://static.api7.ai/uploads/2025/04/27/BsEyxG1X_7-rate-limiting-upgrade.webp"></img>
-</div>
+<p align="center">
+  <img width="500" alt="Upgraded Single-Node Rate Limiting Solution" src="https://static.api7.ai/uploads/2025/04/27/BsEyxG1X_7-rate-limiting-upgrade.webp" />
+</p>
 
 **1. Node Reporting and Maintenance**
 
@@ -261,17 +261,17 @@ b. **Plugin reuse**: Many internal plugins (such as fixed-window rate limiting and custom
 
 3. **Increased request latency**: Open-source distributed rate limiting solutions must first access Redis to complete counting before forwarding the request upstream, adding 2-3 milliseconds to business request latency.
 
-<div align="center">
-<img alt="Distributed Rate Limiting" style="width: 40%" src="https://static.api7.ai/uploads/2025/04/27/Jg0gGugw_8-distributed-rate-limiting.webp"></img>
-</div>
+<p align="center">
+  <img width="500" alt="Distributed Rate Limiting" src="https://static.api7.ai/uploads/2025/04/27/Jg0gGugw_8-distributed-rate-limiting.webp" />
+</p>
 
 **Solution**
 
 To address these issues, we designed the following optimizations:
 
-<div align="center">
-<img alt="Upgraded Distributed Rate Limiting Solution" style="width: 45%" src="https://static.api7.ai/uploads/2025/04/27/peXIhano_9-distributed-rate-limiting-upgrade.webp"></img>
-</div>
+<p align="center">
+  <img width="500" alt="Upgraded Distributed Rate Limiting Solution" src="https://static.api7.ai/uploads/2025/04/27/peXIhano_9-distributed-rate-limiting-upgrade.webp" />
+</p>
 
 **1. Introducing a Local Counting Cache**:
 
@@ -339,8 +339,8 @@ b. **Plugin reuse**: Many internal plugins (such as fixed-window rate limiting and custom
 2. **Split-traffic detection**: In the APISIX cluster, part of the traffic is forwarded to the WAF for inspection to determine whether it is normal or contains malicious attacks (such as injection and command injection attacks).
 3. **Status code response mechanism**:
 
-   a. If the WAF finds the traffic normal, it returns a 200 status code and the request is passed through to the upstream.
-   b. If the WAF detects a malicious attack, it returns a status code such as 403 and the request is rejected.
+   a. If the WAF finds the traffic normal, it returns a `200` status code and the request is passed through to the upstream.
+   b. If the WAF detects a malicious attack, it returns a status code such as `403` and the request is rejected.
 
 4. **Fault tolerance**: If the WAF fails, traffic can be forwarded directly to the backend, preventing link interruption due to WAF failure and improving overall link reliability.
 
diff --git a/blog/zh/blog/2025/05/07/apisix-gateway-practice-in-tencent-timi.md b/blog/zh/blog/2025/05/07/apisix-gateway-practice-in-tencent-timi.md
index acaefc5235c..5ed24cd333b 100644
--- a/blog/zh/blog/2025/05/07/apisix-gateway-practice-in-tencent-timi.md
+++ b/blog/zh/blog/2025/05/07/apisix-gateway-practice-in-tencent-timi.md
@@ -23,7 +23,7 @@ image: https://static.api7.ai/uploads/2025/05/07/Em3otYyD_tencent-timi-uses-apis
 >
 <!--truncate-->
 
-## About Tencent TiMi
+## Introduction to Tencent TiMi Studio Group
 
 TiMi Studio Group is a premium game development studio under Tencent Games and the developer of several hit mobile games, including Call of Duty: Mobile, Pokémon UNITE, Honor of Kings (the international version of 《王者荣耀》), and 《王者荣耀》.
 
@@ -81,9 +81,9 @@ image: https://static.api7.ai/uploads/2025/05/07/Em3otYyD_tencent-timi-uses-apis
 
 To lower the development threshold, we support local quick running and testing. Using APISIX's Docker image, local plugins can be mounted into containers via volume mapping for convenient deployment. Meanwhile, the downstream echo-service (a service built on open-source Node.js) can simulate upstream behavior. It can return all the content of a request, such as request headers. By adding specific parameters to a request (e.g., HTTP status code 500), upstream abnormal behavior can be simulated, enabling comprehensive verification of plugin functionality.
 
-<div align="center">
-<img alt="TAPISIX Project Introduction" style="width: 80%" src="https://static.api7.ai/uploads/2025/05/07/BPa5r4Tr_2-tapisix-project.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="TAPISIX Project Introduction" src="https://static.api7.ai/uploads/2025/05/07/BPa5r4Tr_2-tapisix-project.webp" />
+</p>
 
 ### 2. Local Quick Running and Testing
 
@@ -93,9 +93,9 @@ image: https://static.api7.ai/uploads/2025/05/07/Em3otYyD_tencent-timi-uses-apis
 2. **Makefile build**: A Makefile is provided so that the plugin test environment can be started quickly via the make run-dev command, ensuring seamless connection between local files and the container.
 3. **Direct browser access**: Developers can verify plugin functionality simply by accessing the relevant endpoints in a browser, without extra deployment or configuration.
 
-<div align="center">
-<img alt="Run and Test" style="width: 60%" src="https://static.api7.ai/uploads/2025/05/07/vlmK6Cls_3-run-and-test.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="Run and Test" src="https://static.api7.ai/uploads/2025/05/07/vlmK6Cls_3-run-and-test.webp" />
+</p>
 
 By defining development standards and providing local quick-development support, we have effectively lowered the development threshold and accelerated plugin verification. Developers can focus on feature implementation without worrying about complex deployment and testing procedures, improving overall development efficiency.
 
@@ -119,9 +119,9 @@ image: https://static.api7.ai/uploads/2025/05/07/Em3otYyD_tencent-timi-uses-apis
 
     c. **Try Build**: Builds an image from the source code to verify its buildability.
 
-<div align="center">
-<img alt="Pipeline Building" style="width: 50%" src="https://static.api7.ai/uploads/2025/05/07/7QGbMcLK_4-pipeline-inspection.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="Pipeline Building" src="https://static.api7.ai/uploads/2025/05/07/7QGbMcLK_4-pipeline-inspection.webp" />
+</p>
 
 ### 4. Reliability Assurance (CR, lint, unit testing, black-box testing)
 
@@ -131,9 +131,9 @@ image: https://static.api7.ai/uploads/2025/05/07/Em3otYyD_tencent-timi-uses-apis
 
 k6 test cases: several hundred test cases covering core processes to ensure plugin reliability.
 
-<div align="center">
-<img alt="K6 Test" style="width: 80%" src="https://static.api7.ai/uploads/2025/05/07/DbmDfZFS_5-k6.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="K6 Test" src="https://static.api7.ai/uploads/2025/05/07/DbmDfZFS_5-k6.webp" />
+</p>
 
 Through the complete process of local development, quick validation, MR submission, pipeline inspection, reliability assurance, and packaging and deployment, we ensure that every stage from plugin development to release undergoes strict quality control.
 
@@ -151,9 +151,9 @@ APISIX offers three deployment modes to suit different production environment needs:
 
 The standalone mode that retains only the data plane is also what we use: all configurations are stored locally, avoiding dependence on etcd. This mode is better suited to overseas scenarios, since etcd is a database that some cloud vendors do not offer as a service. Given the strict overseas data compliance requirements and our k8s-based deployment environment, we also adopted a k8s-friendly configuration management approach.
 
-<div align="center">
-<img alt="APISIX Deployment" style="width: 75%" src="https://static.api7.ai/uploads/2025/05/07/99nRuGCG_7-dp-and-cp.webp"></img>
-</div>
+<p align="center">
+  <img width="650" alt="APISIX Deployment" src="https://static.api7.ai/uploads/2025/05/07/99nRuGCG_7-dp-and-cp.webp" />
+</p>
 
 - **YAML configuration**: All configurations are stored directly in YAML files, making management and automated deployment easy.
 - **ConfigMap storage**: YAML files are placed directly in k8s ConfigMaps, ensuring configuration versioning and traceability.
@@ -172,9 +172,9 @@ APISIX offers three deployment modes to suit different production environment needs:
 
 ### Deployment Process Example
 
-<div align="center">
-<img alt="Deployment Workflow" style="width: 80%" src="https://static.api7.ai/uploads/2025/05/07/KdOcfic9_8-deployment-workflow.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="Deployment Workflow" src="https://static.api7.ai/uploads/2025/05/07/KdOcfic9_8-deployment-workflow.webp" />
+</p>
 
 In the deployment process shown above, SREs (Site Reliability Engineers) manage configuration on behalf of users. Any modification, such as a route change or an image update, must be made through the Helm Chart repository. After a change, Argo CD automatically detects it and triggers the pipeline to pull the latest configuration and complete the deployment. In addition, strong synchronization is established between Git and Kubernetes, ensuring configuration consistency and reliability.
 
@@ -204,9 +204,9 @@ As the community's official solution for k8s, the APISIX Ingress Controller
 
 After these CRDs are deployed to the k8s cluster, the Ingress Controller continuously watches the relevant CRD resources, parses the configuration in the CRDs, and synchronizes it to APISIX by calling APISIX's Admin API. The Ingress Controller mainly bridges CRDs and APISIX, with the data ultimately written to etcd.
 
-<div align="center">
-<img alt="APISIX Ingress Controller" style="width: 50%" src="https://static.api7.ai/uploads/2025/05/07/XbjN7Bky_11-ingress-controller.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="APISIX Ingress Controller" src="https://static.api7.ai/uploads/2025/05/07/XbjN7Bky_11-ingress-controller.webp" />
+</p>
 
 After careful evaluation, we found that the deployment and operations model of the APISIX Ingress Controller does not fully fit our team's needs, mainly for the following reasons:
 
@@ -300,9 +300,9 @@ Trace reporting is implemented with the OpenTelemetry plugin provided by APISIX, which
 
 In k8s upstream configurations there are many types whose differences often lie only in the service name. After introducing a new version and updating the Lua package, we made full use of its anchor support to rein in duplicated configuration. Through the anchor mechanism, common configuration parts are abstracted and reused, cutting duplicated configuration by roughly 70% in practice, greatly improving the efficiency and conciseness of configuration management and reducing the risk of errors introduced by duplication.
 
-<div align="center">
-<img alt="Duplicated Route Configuration" style="width: 60%" src="https://static.api7.ai/uploads/2025/05/07/hbEPdHAf_20-duplicated-route-configuration.webp"></img>
-</div>
+<p align="center">
+  <img width="550" alt="Duplicated Route Configuration" src="https://static.api7.ai/uploads/2025/05/07/hbEPdHAf_20-duplicated-route-configuration.webp" />
+</p>
 
 ## Migration Practices of APISIX Replacing Ingress
 
diff --git a/blog/zh/blog/2025/09/03/360-built-unified-l7-load-balancer-with-apisix.md b/blog/zh/blog/2025/09/03/360-built-unified-l7-load-balancer-with-apisix.md
index a4b2a3bd00a..d122a00ddf4 100644
--- a/blog/zh/blog/2025/09/03/360-built-unified-l7-load-balancer-with-apisix.md
+++ b/blog/zh/blog/2025/09/03/360-built-unified-l7-load-balancer-with-apisix.md
@@ -94,9 +94,9 @@ image: https://static.api7.ai/uploads/2025/09/05/SWaSLAns_360-zyun-cloud-use-cas
 
 - **VPC Cloud L7 Traffic**: Public EIP -> L4 load balancer cluster (vpc vip) -> L7 load balancer gateway (VXLAN encapsulation) -> VPC-internal IP (VMs, pods, etc.).
 
-<div align="center">
-<img alt="Traffic Path Diagram" style="width: 65%" src="https://static.api7.ai/uploads/2025/09/04/BFDB1z4d_3.1-cn.webp"></img>
-</div>
+<p align="center">
+  <img width="500" alt="Traffic Path Diagram" src="https://static.api7.ai/uploads/2025/09/04/zO2tt4qq_3.1-en.webp" />
+</p>
 
 ## Conclusion and Outlook
 
