This is an automated email from the ASF dual-hosted git repository.

jimin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-seata-k8s.git


The following commit(s) were added to refs/heads/master by this push:
     new 3a5b57e  docs: enhance README and README.zh.md with project overview, features, and deployment instructions (#36)
3a5b57e is described below

commit 3a5b57e8159da845ec95de0583c5bbca4b8c6d7c
Author: jimin <[email protected]>
AuthorDate: Sat Nov 22 23:43:12 2025 +0800

    docs: enhance README and README.zh.md with project overview, features, and deployment instructions (#36)
---
 README.md    | 414 ++++++++++++++++++++++++++++++++++----------------
 README.zh.md | 489 +++++++++++++++++++++++++++++++++++++++++------------------
 2 files changed, 625 insertions(+), 278 deletions(-)

diff --git a/README.md b/README.md
index 565a93b..07f8a8f 100644
--- a/README.md
+++ b/README.md
@@ -14,185 +14,339 @@
     See the License for the specific language governing permissions and
     limitations under the License.
 -->
+
 # seata-k8s
 
-[中文文档](README.zh.md) 
+[中文文档](README.zh.md) | [English](README.md)
+
+## Overview
+
+seata-k8s is a Kubernetes operator for deploying and managing [Apache Seata](https://github.com/seata/seata) distributed transaction servers. It provides a streamlined way to deploy Seata Server clusters on Kubernetes with automatic scaling, persistence management, and operational simplicity.
+
+## Features
+
+- 🚀 **Easy Deployment**: Deploy Seata Server clusters using Kubernetes CRDs
+- 📈 **Auto Scaling**: Simple scaling through replica configuration
+- 💾 **Persistence Management**: Built-in support for persistent volumes
+- 🔐 **RBAC Support**: Comprehensive role-based access control
+- 🛠️ **Developer Friendly**: Includes debugging and development tools
 
-Associated Projects:
+## Related Projects
 
-- [https://github.com/seata/seata](https://github.com/seata/seata)
-- [https://github.com/seata/seata-samples/tree/docker/springboot-dubbo-fescar](https://github.com/seata/seata-samples/tree/docker/springboot-dubbo-fescar)
-- [https://github.com/seata/seata-docker](https://github.com/seata/seata-docker)
+- [Apache Seata](https://github.com/seata/seata) - Distributed transaction framework
+- [Seata Samples](https://github.com/seata/seata-samples/tree/docker/springboot-dubbo-fescar) - Example implementations
+- [Seata Docker](https://github.com/seata/seata-docker) - Docker image repository
+
+## Table of Contents
+
+- [Method 1: Using Operator](#method-1-using-operator)
+  - [Usage](#usage)
+  - [CRD Reference](#crd-reference)
+  - [Development Guide](#development-guide)
+- [Method 2: Direct Kubernetes Deployment](#method-2-direct-kubernetes-deployment)
+  - [Deployment Steps](#deployment-steps)
+  - [Testing](#testing)
 
 ## Method 1: Using Operator
 
+### Prerequisites
+
+- Kubernetes 1.16+ cluster
+- kubectl configured with access to your cluster
+- Make and Docker (for building images)
+
 ### Usage
 
 To deploy Seata Server using the Operator method, follow these steps:
 
-1. Clone this repository:
+#### Step 1: Clone the Repository
 
-   ```shell
-   git clone https://github.com/apache/incubator-seata-k8s.git
-   ```
+```shell
+git clone https://github.com/apache/incubator-seata-k8s.git
+cd incubator-seata-k8s
+```
 
-2. Deploy Controller, CRD, RBAC, and other resources to the Kubernetes cluster:
+#### Step 2: Deploy Operator to Cluster
 
-   ```shell
-   make deploy
-   kubectl get deployment -n seata-k8s-controller-manager  # check if exists
-   ```
-   
-4. You can now deploy your CR to the cluster. An example can be found here [seata-server-cluster.yaml](deploy/seata-server-cluster.yaml):
-
-   ```yaml
-   apiVersion: operator.seata.apache.org/v1alpha1
-   kind: SeataServer
-   metadata:
-     name: seata-server
-     namespace: default
-   spec:
-     serviceName: seata-server-cluster
-     replicas: 3
-     image: apache/seata-server:latest
-     persistence:
-         volumeReclaimPolicy: Retain
-       spec:
-         resources:
-           requests:
-               storage: 5Gi
-   ```
-   
-   For the example above, if everything is correct, the controller will deploy 3 StatefulSet resources and a Headless Service to the cluster. You can access the Seata Server cluster in the cluster through `seata-server-0.seata-server-cluster.default.svc`.
+Deploy the controller, CRD, RBAC, and other required resources:
+
+```shell
+make deploy
+```
+
+Verify the deployment:
+
+```shell
+kubectl get deployment -n seata-k8s-controller-manager
+kubectl get pods -n seata-k8s-controller-manager
+```
 
-### Reference
+#### Step 3: Deploy Seata Server Cluster
+
+Create a SeataServer resource. Here's an example based on [seata-server-cluster.yaml](deploy/seata-server-cluster.yaml):
+
+```yaml
+apiVersion: operator.seata.apache.org/v1alpha1
+kind: SeataServer
+metadata:
+  name: seata-server
+  namespace: default
+spec:
+  serviceName: seata-server-cluster
+  replicas: 3
+  image: apache/seata-server:latest
+  persistence:
+    volumeReclaimPolicy: Retain
+    spec:
+      resources:
+        requests:
+          storage: 5Gi
+```
 
-For CRD details, you can visit [operator.seata.apache.org_seataservers.yaml](config/crd/bases/operator.seata.apache.org_seataservers.yaml). Here are some important configurations:
+Save the manifest (for example as `seata-server.yaml`) and apply it to your cluster:
 
-1. `serviceName`: Used to define the name of the Headless Service deployed by the controller. This will affect how you access the server cluster. In the example above, you can access the Seata Server cluster through `seata-server-0.seata-server-cluster.default.svc`.
+```shell
+kubectl apply -f seata-server.yaml
+```
 
-2. `replicas`: Defines the number of Seata Server replicas. Adjusting this field achieves scaling without the need for additional HTTP requests to change the Seata raft cluster list.
+If everything is working correctly, the operator will:
+- Create a StatefulSet with 3 replicas (Pods `seata-server-0` through `seata-server-2`)
+- Create a Headless Service named `seata-server-cluster`
+- Set up persistent volumes
 
-3. `image`: Defines the Seata Server image name.
+Access the Seata Server cluster within your Kubernetes network:
 
-4. `ports`: Three ports need to be set under the `ports` property: `consolePort`, `servicePort`, and `raftPort`, with default values of 7091, 8091, and 9091, respectively.
+```
+seata-server-0.seata-server-cluster.default.svc
+seata-server-1.seata-server-cluster.default.svc
+seata-server-2.seata-server-cluster.default.svc
+```
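+
+Check the Pod status (the `app=seata-server` label selector below mirrors the Chinese README; adjust it if your operator version labels Pods differently):
+
+```shell
+# List the Seata Server Pods created by the operator
+kubectl get pods -l app=seata-server
+# Follow the logs of the first replica
+kubectl logs -f seata-server-0
+```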
 
-5. `resources`: Used to define container resource requirements.
+### CRD Reference
+
+For complete CRD definitions, see [operator.seata.apache.org_seataservers.yaml](config/crd/bases/operator.seata.apache.org_seataservers.yaml).
+
+#### Key Configuration Properties
+
+| Property | Description | Default | Example |
+|----------|-------------|---------|---------|
+| `serviceName` | Name of the Headless Service | - | `seata-server-cluster` |
+| `replicas` | Number of Seata Server replicas | 1 | `3` |
+| `image` | Seata Server container image | - | `apache/seata-server:latest` |
+| `ports.consolePort` | Console port | `7091` | `7091` |
+| `ports.servicePort` | Service port | `8091` | `8091` |
+| `ports.raftPort` | Raft consensus port | `9091` | `9091` |
+| `resources` | Container resource requests/limits | - | See example below |
+| `persistence.volumeReclaimPolicy` | Volume reclaim policy | `Retain` | `Retain` or `Delete` |
+| `persistence.spec.resources.requests.storage` | Persistent volume size | - | `5Gi` |
+| `env` | Environment variables | - | See example below |
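+
+Because scaling is driven entirely by `replicas`, resizing a cluster is a single patch against the CR; the operator reconciles the StatefulSet and the raft member list without extra HTTP requests. A minimal sketch, assuming the CRD's plural resource name is `seataservers` and the CR is the `seata-server` example above:
+
+```shell
+# Scale from 3 to 5 replicas by patching the SeataServer CR
+kubectl patch seataserver seata-server --type merge -p '{"spec":{"replicas":5}}'
+```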
+
+#### Environment Variables & Secrets
+
+Configure Seata Server settings using environment variables and Kubernetes Secrets:
+
+```yaml
+apiVersion: operator.seata.apache.org/v1alpha1
+kind: SeataServer
+metadata:
+  name: seata-server
+  namespace: default
+spec:
+  image: apache/seata-server:latest
+  replicas: 1
+  persistence:
+    spec:
+      resources:
+        requests:
+          storage: 5Gi
+  env:
+  - name: console.user.username
+    value: seata
+  - name: console.user.password
+    valueFrom:
+      secretKeyRef:
+        name: seata-credentials
+        key: password
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: seata-credentials
+  namespace: default
+type: Opaque
+stringData:
+  password: your-secure-password
+```
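+
+The `resources` row in the table above says "see example below"; the field follows the standard Kubernetes `ResourceRequirements` schema. A minimal sketch (values are illustrative, and the nesting directly under `spec` is inferred from the table):
+
+```yaml
+spec:
+  image: apache/seata-server:latest
+  replicas: 3
+  resources:
+    requests:
+      cpu: 500m
+      memory: 1Gi
+    limits:
+      cpu: "1"
+      memory: 2Gi
+```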
 
-6. `persistence.spec`: Used to define mounted storage resource requirements.
 
-7. `persistence.volumeReclaimPolicy`: Used to control volume reclaim behavior, possible choices include `Retain` or `Delete`, which infer retain volumes or delete volumes after deletion respectively.
 
-8. `env`: Environment variables passed to the container. You can use this field to define Seata Server configuration. For example:
+### Development Guide
 
-   ```yaml
-   apiVersion: operator.seata.apache.org/v1alpha1
-   kind: SeataServer
-   metadata:
-     name: seata-server
-     namespace: default
-   spec:
-     image: apache/seata-server:latest
-     store:
-       resources:
-         requests:
-           storage: 5Gi
-     env:
-     - name: console.user.username
-       value: seata
-     - name: console.user.password
-       valueFrom:
-         secretKeyRef:
-           name: seata
-           key: password
-   ---
-   apiVersion: v1
-   kind: Secret
-   metadata:
-     name: seata
-   type: Opaque
-   data:
-     password: seata
-   ```
+To debug and develop this operator locally, we recommend using Minikube or a similar local Kubernetes environment.
 
+#### Option 1: Build and Deploy Docker Image
 
+Modify the code and rebuild the controller image:
 
-### For Developer
+```shell
+# Start minikube and set docker environment
+minikube start
+eval $(minikube docker-env)
 
-To debug this operator locally, we suggest you use a test k8s environment like minikube.
+# Build and deploy
+make docker-build deploy
 
-1. Method 1. Modify code and build the controller image:
+# Verify deployment
+kubectl get deployment -n seata-k8s-controller-manager
+```
 
-   Assume you are using minikube for testing,
+#### Option 2: Local Debug with Telepresence
 
-   ```shell
-   eval $(minikube docker-env)
-   make docker-build deploy
-   ```
+Use [Telepresence](https://www.telepresence.io/) to debug locally without building container images.
 
-2. Method 2. Locally debug without building images
+**Prerequisites:**
+- Install [Telepresence CLI](https://www.telepresence.io/docs/latest/quick-start/)
+- Install [Traffic Manager](https://www.getambassador.io/docs/telepresence/latest/install/manager#install-the-traffic-manager)
 
-   You need to use telepresence to proxy traffic to the k8s cluster, see [telepresence tutorial](https://www.telepresence.io/docs/latest/quick-start/) to install its cli tool and [traffic manager](https://www.getambassador.io/docs/telepresence/latest/install/manager#install-the-traffic-manager). After installing telepresence, you can connect to minikube by following commands:
+**Steps:**
 
-   ```shell
-   telepresence connect
-   # Check if traffic manager connected
-   telepresence status
-   ```
+1. Connect Telepresence to your cluster:
 
-   By executing above commands, you can use in-cluster DNS resolution and proxy your requests to the cluster. And then you can use IDE to run or debug locally:
+```shell
+telepresence connect
+telepresence status  # Verify connection
+```
 
-   ```shell
-   # Make sure generate proper resources first
-   make manifests generate fmt vet
-   
-   go run .
-   # Or you can use IDE to run locally instead
-   ```
+2. Generate code resources:
+
+```shell
+make manifests generate fmt vet
+```
+
+3. Run the controller locally using your IDE or command line:
+
+```shell
+go run .
+```
+
+Now your local development environment has access to the Kubernetes cluster's DNS and services.
 
    
 
 
-## Method 2: Example without Using Operator
+## Method 2: Direct Kubernetes Deployment
+
+This method deploys Seata Server directly using Kubernetes manifests without the operator. Note that Seata Docker images currently require link-mode for container communication.
+
+### Prerequisites
+
+- MySQL database
+- Nacos registry server
+- Access to Kubernetes cluster
 
-Due to certain reasons, Seata Docker images currently do not support external container calls. Therefore, the example projects should also be kept in link mode with the Seata image inside the container.
+### Deployment Steps
 
-```sh
-# Start Seata deployment (nacos,seata,mysql)
-kubectl create -f deploy/seata-deploy.yaml
-# Start Seata service (nacos,seata,mysql)
-kubectl create -f deploy/seata-service.yaml
-# Get a NodePort IP (kubectl get service)
-# Modify the IP in examples/examples-deploy for DNS addressing
-# Connect to MySQL and import table structure
-# Start example deployment (samples-account,samples-storage)
-kubectl create -f example/example-deploy.yaml
-# Start example service (samples-account,samples-storage)
-kubectl create -f example/example-service.yaml
-# Start order deployment (samples-order)
-kubectl create -f example/example-deploy.yaml
-# Start order service (samples-order)
-kubectl create -f example/example-service.yaml
-# Start business deployment (samples-dubbo-business-call)
-kubectl create -f example/business-deploy.yaml
-# Start business deployment (samples-dubbo-service-call)
-kubectl create -f example/business-service.yaml
+#### Step 1: Deploy Seata and Dependencies
+
+Deploy Seata server, Nacos, and MySQL:
+
+```shell
+kubectl apply -f deploy/seata-deploy.yaml
+kubectl apply -f deploy/seata-service.yaml
+```
+
+#### Step 2: Retrieve Service Information
+
+```shell
+kubectl get service
+# Note the NodePort IPs and ports for Seata and Nacos
 ```
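+
+The output should look roughly like the following (taken from an earlier revision of this guide; your IPs and NodePorts will differ):
+
+```
+seata-service   NodePort   10.108.3.238   <none>   8091:31236/TCP,3305:30992/TCP,8848:30093/TCP   12m
+```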
 
-### Open the Nacos console in your browser [http://localhost:8848/nacos/] to check if all instances are registered successfully.
+#### Step 3: Configure DNS Addressing
+
+Update `example/example-deploy.yaml` with the NodePort IP addresses obtained above.
+
+#### Step 4: Initialize Database
+
+```shell
+# Connect to MySQL and import Seata table schema
+# Replace CLUSTER_IP with your MySQL service IP
+mysql -h <CLUSTER_IP> -u root -p < path/to/seata-db-schema.sql
+```
+
+#### Step 5: Deploy Example Applications
+
+Deploy the sample microservices:
+
+```shell
+# Deploy account and storage services
+kubectl apply -f example/example-deploy.yaml
+kubectl apply -f example/example-service.yaml
+
+# Deploy order service
+kubectl apply -f example/order-deploy.yaml
+kubectl apply -f example/order-service.yaml
+
+# Deploy business service
+kubectl apply -f example/business-deploy.yaml
+kubectl apply -f example/business-service.yaml
+```
+
+### Verification
+
+Open the Nacos console to verify service registration (replace `localhost` with your node IP when accessing via NodePort):
+
+```
+http://localhost:8848/nacos/
+```
+
+Check that all services are registered:
+- account-service
+- storage-service
+- order-service
+- business-service
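+
+If the NodePort is not directly reachable from your machine, a port-forward exposes the console on `localhost:8848` (the service name `seata-service` is taken from `deploy/seata-service.yaml`; adjust it to your setup):
+
+```shell
+kubectl port-forward service/seata-service 8848:8848
+```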
 
 ### Testing
 
-```sh
-# Account service - Deduct amount
-curl -H "Content-Type: application/json" -X POST --data "{\"id\":1,\"userId\":\"1\",\"amount\":100}" cluster-ip:8102/account/dec_account
-# Storage service - Deduct stock
-curl -H "Content-Type: application/json" -X POST --data "{\"commodityCode\":\"C201901140001\",\"count\":100}" cluster-ip:8100/storage/dec_storage
-# Order service - Add order and deduct amount
-curl -H "Content-Type: application/json" -X POST --data "{\"userId\":\"1\",\"commodityCode\":\"C201901140001\",\"orderCount\":10,\"orderAmount\":100}" cluster-ip:8101/order/create_order
-# Business service - Client Seata version too low
-curl -H "Content-Type: application/json" -X POST --data "{\"userId\":\"1\",\"commodityCode\":\"C201901140001\",\"count\":10,\"amount\":100}" cluster-ip:8104/business/dubbo/buy
+Test the distributed transaction scenarios using the following curl commands:
+
+#### Test 1: Account Service - Deduct Amount
+
+```shell
+curl -H "Content-Type: application/json" \
+  -X POST \
+  --data '{"id":1,"userId":"1","amount":100}' \
+  http://<CLUSTER_IP>:8102/account/dec_account
+```
+
+#### Test 2: Storage Service - Deduct Stock
+
+```shell
+curl -H "Content-Type: application/json" \
+  -X POST \
+  --data '{"commodityCode":"C201901140001","count":100}' \
+  http://<CLUSTER_IP>:8100/storage/dec_storage
+```
+
+#### Test 3: Order Service - Create Order
+
+```shell
+curl -H "Content-Type: application/json" \
+  -X POST \
+  --data '{"userId":"1","commodityCode":"C201901140001","orderCount":10,"orderAmount":100}' \
+  http://<CLUSTER_IP>:8101/order/create_order
 ```
 
+#### Test 4: Business Service - Execute Transaction
+
+```shell
+curl -H "Content-Type: application/json" \
+  -X POST \
+  --data '{"userId":"1","commodityCode":"C201901140001","count":10,"amount":100}' \
+  http://<CLUSTER_IP>:8104/business/dubbo/buy
+```
+
+Replace `<CLUSTER_IP>` with the actual NodePort IP address of your service.
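+
+One way to look up that address (on Minikube, `minikube ip` also works):
+
+```shell
+kubectl get nodes -o wide    # INTERNAL-IP column gives the node address
+kubectl get service          # maps each service port to its NodePort
+```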
+
 
 
diff --git a/README.zh.md b/README.zh.md
index aa1498f..daec6c3 100755
--- a/README.zh.md
+++ b/README.zh.md
@@ -14,184 +14,377 @@
     See the License for the specific language governing permissions and
     limitations under the License.
 -->
+
 # seata-k8s
 
-关联项目:
+[中文文档](README.zh.md) | [English](README.md)
+
+## 项目概述
+
+seata-k8s 是一个用于在 Kubernetes 上部署和管理 [Apache Seata](https://github.com/seata/seata) 分布式事务服务器的 Kubernetes Operator。它提供了一种简化的方式来在 Kubernetes 上部署 Seata Server 集群,并支持自动扩缩容、持久化存储管理和运维简化。
+
+## 主要特性
+
+- 🚀 **快速部署**:使用 Kubernetes CRD 快速部署 Seata Server 集群
+- 📈 **自动扩缩容**:通过简单的副本配置实现集群扩缩容
+- 💾 **持久化存储**:内置持久化卷支持
+- 🔐 **RBAC 支持**:完整的基于角色的访问控制
+- 🛠️ **开发友好**:包含调试和开发工具
+
+## 关联项目
+
+- [Apache Seata](https://github.com/seata/seata) - 分布式事务框架
+- [Seata 示例](https://github.com/seata/seata-samples/tree/docker/springboot-dubbo-fescar) - 示例实现
+- [Seata Docker](https://github.com/seata/seata-docker) - Docker 镜像仓库
+
+## 目录
+
+- [方式一:使用 Operator](#方式一使用-operator)
+  - [使用指南](#使用指南)
+  - [CRD 配置参考](#crd-配置参考)
+  - [开发者指南](#开发者指南)
+- [方式二:直接部署](#方式二直接部署)
+  - [部署步骤](#部署步骤)
+  - [测试验证](#测试验证)
+
+---
+
+## 方式一:使用 Operator
+
+### 前置要求
+
+- Kubernetes 1.16+ 集群
+- kubectl 已配置可访问集群
+- Make 和 Docker(用于构建镜像)
+
+### 使用指南
+
+#### 第一步:克隆仓库
+
+```shell
+git clone https://github.com/apache/incubator-seata-k8s.git
+cd incubator-seata-k8s
+```
+
+#### 第二步:部署 Operator
+
+将 Controller、CRD、RBAC 等资源部署到 Kubernetes 集群:
+
+```shell
+make deploy
+```
+
+验证 Operator 部署:
+
+```shell
+kubectl get deployment -n seata-k8s-controller-manager
+kubectl get pods -n seata-k8s-controller-manager
+```
+
+#### 第三步:部署 Seata Server 集群
+
+创建 SeataServer 资源。以下是基于 [seata-server-cluster.yaml](deploy/seata-server-cluster.yaml) 的示例:
+
+```yaml
+apiVersion: operator.seata.apache.org/v1alpha1
+kind: SeataServer
+metadata:
+  name: seata-server
+  namespace: default
+spec:
+  serviceName: seata-server-cluster
+  replicas: 3
+  image: apache/seata-server:latest
+  persistence:
+    volumeReclaimPolicy: Retain
+    spec:
+      resources:
+        requests:
+          storage: 5Gi
+```
+
+将清单保存(例如保存为 `seata-server.yaml`)并应用到集群:
+
+```shell
+kubectl apply -f seata-server.yaml
+```
+
+如果一切正常,Operator 将会:
+- 创建一个带有 3 个副本的 StatefulSet(Pod 为 `seata-server-0` 至 `seata-server-2`)
+- 创建一个名为 `seata-server-cluster` 的 Headless Service
+- 设置持久化存储卷
+
+在 Kubernetes 集群内访问 Seata Server 集群:
+
+```
+seata-server-0.seata-server-cluster.default.svc
+seata-server-1.seata-server-cluster.default.svc
+seata-server-2.seata-server-cluster.default.svc
+```
+
+查看 Pod 状态:
+
+```shell
+kubectl get pods -l app=seata-server
+kubectl logs -f seata-server-0
+```
+
+### CRD 配置参考
+
+详见 [operator.seata.apache.org_seataservers.yaml](config/crd/bases/operator.seata.apache.org_seataservers.yaml)。
+
+#### 关键配置字段
+
+| 字段 | 描述 | 默认值 | 示例 |
+|------|------|--------|------|
+| `serviceName` | Headless Service 名称 | - | `seata-server-cluster` |
+| `replicas` | Seata Server 副本数 | 1 | 3 |
+| `image` | 容器镜像 | - | `apache/seata-server:latest` |
+| `ports.consolePort` | 控制台端口 | 7091 | 7091 |
+| `ports.servicePort` | 服务端口 | 8091 | 8091 |
+| `ports.raftPort` | Raft 一致性端口 | 9091 | 9091 |
+| `resources` | 容器资源请求/限制 | - | 见下例 |
+| `persistence.volumeReclaimPolicy` | 卷回收策略 | Retain | Retain 或 Delete |
+| `persistence.spec.resources.requests.storage` | 持久化卷大小 | - | 5Gi |
+| `env` | 环境变量 | - | 见下例 |
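+
+由于扩缩容完全由 `replicas` 字段驱动,调整集群规模只需对 CR 打一个 patch,Operator 会自动协调 StatefulSet 和 raft 成员列表,无需额外的 HTTP 请求。以下为示例(假设 CRD 的复数资源名为 `seataservers`,CR 即上文示例中的 `seata-server`):
+
+```shell
+# 通过 patch SeataServer CR 将副本数从 3 调整为 5
+kubectl patch seataserver seata-server --type merge -p '{"spec":{"replicas":5}}'
+```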
+
+#### 环境变量和 Secret 配置
+
+通过环境变量和 Kubernetes Secret 配置 Seata Server:
+
+```yaml
+apiVersion: operator.seata.apache.org/v1alpha1
+kind: SeataServer
+metadata:
+  name: seata-server
+  namespace: default
+spec:
+  image: apache/seata-server:latest
+  replicas: 1
+  persistence:
+    spec:
+      resources:
+        requests:
+          storage: 5Gi
+  env:
+  - name: console.user.username
+    value: seata
+  - name: console.user.password
+    valueFrom:
+      secretKeyRef:
+        name: seata-credentials
+        key: password
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: seata-credentials
+  namespace: default
+type: Opaque
+stringData:
+  password: your-secure-password
+```
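+
+上表中 `resources` 字段的"见下例":该字段遵循标准的 Kubernetes `ResourceRequirements` 结构。以下为示例(数值仅供参考,字段直接位于 `spec` 下系根据上表推断):
+
+```yaml
+spec:
+  image: apache/seata-server:latest
+  replicas: 3
+  resources:
+    requests:
+      cpu: 500m
+      memory: 1Gi
+    limits:
+      cpu: "1"
+      memory: 2Gi
+```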
+
+### 开发者指南
+
+在本地调试 Operator 时,建议使用 Minikube 或相似的本地 Kubernetes 环境。
+
+#### 方式 1:构建并部署 Docker 镜像
+
+修改代码后重新构建 Controller 镜像:
+
+```shell
+# 启动 minikube 并设置 Docker 环境
+minikube start
+eval $(minikube docker-env)
+
+# 构建并部署
+make docker-build deploy
 
-https://github.com/seata/seata
+# 验证部署
+kubectl get deployment -n seata-k8s-controller-manager
+```
+
+#### 方式 2:使用 Telepresence 本地调试
+
+使用 [Telepresence](https://www.telepresence.io/) 在本地调试,无需构建容器镜像。
 
-https://github.com/seata/seata-samples/tree/docker/springboot-dubbo-fescar
+**前置要求:**
+- 安装 [Telepresence CLI](https://www.telepresence.io/docs/latest/quick-start/)
+- 安装 [Traffic Manager](https://www.getambassador.io/docs/telepresence/latest/install/manager#install-the-traffic-manager)
 
-https://github.com/seata/seata-docker
+**操作步骤:**
 
+1. 连接 Telepresence 到集群:
 
+```shell
+telepresence connect
+telepresence status  # 验证连接
+```
+
+2. 生成代码资源:
 
-## 方式一: 使用 Operator
+```shell
+make manifests generate fmt vet
+```
 
+3. 在本地运行 Controller(使用 IDE 或命令行):
 
+```shell
+go run .
+```
 
-### Usage
+现在您的本地开发环境可以访问 Kubernetes 集群的 DNS 和服务。
 
-想要体验 Operator 方式部署 Seata Server 可以参照以下方式进行:
+---
 
-1. 克隆本仓库
+## 方式二:直接部署
 
-   ```shell
-   git clone https://github.com/apache/incubator-seata-k8s.git
-   ```
+此方式直接使用 Kubernetes 清单部署 Seata Server,不使用 Operator。注意 Seata Docker 镜像目前需要在容器间使用 link 模式进行通信。
 
-3. 部署 Controller, CRD, RBAC 等资源到 Kubernetes 集群
+### 前置要求
 
-   ```shell
-   make deploy
-   kubectl get deployment -n seata-k8s-controller-manager  # check if exists
-   ```
+- MySQL 数据库
+- Nacos 注册中心
+- Kubernetes 集群访问权限
 
-4. 此时即可发布你的 CR 到集群当中了,示例可以在这里找到 [seata-server-cluster.yaml](deploy/seata-server-cluster.yaml)
+### 部署步骤
 
-   ```yaml
-   apiVersion: operator.seata.apache.org/v1alpha1
-   kind: SeataServer
-   metadata:
-     name: seata-server
-     namespace: default
-   spec:
-     serviceName: seata-server-cluster
-     replicas: 3
-     image: apache/seata-server:latest
-     persistence:
-         volumeReclaimPolicy: Retain
-       spec:
-         resources:
-           requests:
-               storage: 5Gi
-   
-   ```
-   
-   对于上面这个 CR 的例子而言,如果一切正常的话,controller 将会部署 3 个 StatefulSet 资源和一个 Headless Service 到集群中;在集群中你可以通过 seata-server-0.seata-server-cluster.default.svc 对 Seata Server 集群进行访问。
+#### 第一步:部署 Seata 及相关服务
 
-### Reference
+部署 Seata 服务器、Nacos 和 MySQL:
 
-关于 CRD 可以访问 [operator.seata.apache.org_seataservers.yaml](config/crd/bases/operator.seata.apache.org_seataservers.yaml) 以查看详细定义,这里列举出一些重要的配置并进行解读。
+```shell
+kubectl apply -f deploy/seata-deploy.yaml
+kubectl apply -f deploy/seata-service.yaml
+```
 
-1. `serviceName`: 用于定义 controller 部署的 Headless Service 的名称,这会影响你访问 server 集群的方式,比如在之前的示例中,你可以通过 seata-server-0.seata-server-cluster.default.svc 进行访问。
+#### 第二步:获取服务信息
 
-2. `replicas`: 用于定义 Seata Server 的副本数量,你只需要调整该字段即可实现扩缩容,而不需要额外的 HTTP 请求去更改 Seata raft 集群列表
+```shell
+kubectl get service
+# 记录 Seata 和 Nacos 的 NodePort IP 和端口
+```
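+
+输出大致如下(摘自本文档早期版本,实际 IP 和 NodePort 会有所不同):
+
+```
+seata-service   NodePort   10.108.3.238   <none>   8091:31236/TCP,3305:30992/TCP,8848:30093/TCP   12m
+```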
 
-3. `image`: 定义了 Seata Server 的镜像名称
+#### 第三步:配置 DNS 地址
 
-4. `ports`: 属性下会有三个端口需要设定,分别是 `consolePort`,`servicePort`,`raftPort`,默认分别为 7091, 8091, 9091
+使用上一步获取的 NodePort IP 更新 `example/example-deploy.yaml` 中的地址。
 
-5. `resources`: 用于定义容器的资源要求
+#### 第四步:初始化数据库
 
-6. `persistence.spec`: 用于定义挂载的存储资源要求
+```shell
+# 连接到 MySQL 并导入 Seata 表结构
+# 用实际 MySQL 服务 IP 替换 CLUSTER_IP
+mysql -h <CLUSTER_IP> -u root -p < path/to/seata-db-schema.sql
+```
 
-7. `persistence.volumeReclaimPolicy`: 用于控制存储回收行为,允许的选项有 `Retain` 或者 `Delete`,分别代表了在 CR 删除之后保存存储卷或删除存储卷
+#### 第五步:部署示例应用
 
-8. `env`: 传递给容器的环境变量,可以通过此字段去定义 Seata Server 的配置,比如:
+部署示例微服务:
 
-   ```yaml
-   apiVersion: operator.seata.apache.org/v1alpha1
-   kind: SeataServer
-   metadata:
-     name: seata-server
-     namespace: default
-   spec:
-     image: apache/seata-server:latest
-     store:
-       resources:
-         requests:
-           storage: 5Gi
-     env:
-     - name: console.user.username
-       value: seata
-     - name: console.user.password
-       valueFrom:
-         secretKeyRef:
-           name: seata
-           key: password
-   ---
-   apiVersion: v1
-   kind: Secret
-   metadata:
-     name: seata
-   type: Opaque
-   data:
-     password: seata
-   ```
+```shell
+# 部署账户和库存服务
+kubectl apply -f example/example-deploy.yaml
+kubectl apply -f example/example-service.yaml
 
-   
+# 部署订单服务
+kubectl apply -f example/order-deploy.yaml
+kubectl apply -f example/order-service.yaml
 
-### For Developer
-
-要在本地调试此 Operator,我们建议您使用像 Minikube 这样的测试 k8s 环境。
-
-1. 方法 1:修改代码并构建控制器镜像:
-
-   假设您正在使用 Minikube 进行测试,
-
-   ```shell
-   eval $(minikube docker-env)
-   make docker-build deploy
-   ```
-
-2. 方法 2:不构建镜像进行本地调试
-
-   您需要使用 Telepresence 将流量代理到 k8s 集群,参见[Telepresence 教程](https://www.telepresence.io/docs/latest/quick-start/)来安装其 CLI 工具和[Traffic Manager](https://www.getambassador.io/docs/telepresence/latest/install/manager#install-the-traffic-manager)。安装 Telepresence 后,可以按照以下命令连接到 Minikube:
-
-   ```shell
-   telepresence connect
-   # 检查流量管理器是否连接
-   telepresence status
-   ```
-
-   通过执行上述命令,您可以使用集群内 DNS 解析并将请求代理到集群。然后您可以使用 IDE 在本地运行或调试:
-
-   ```shell
-   # 首先确保生成适当的资源
-   make manifests generate fmt vet
-   
-   go run .
-   # 或者您也可以使用 IDE 在本地运行
-   ```
-
-## 方式二: 不使用 Operator 的示例
-
-由于一些原因, seata docker 镜像使用暂不提供容器外部调用 ,那么需要案例相关项目也在容器内部 和 seata 镜像保持link模式
-
-```sh
-## 启动 seata deployment (nacos,seata,mysql)
-kubectl create -f deploy/seata-deploy.yaml
-## 启动 seata service (nacos,seata,mysql)
-kubectl create -f deploy/seata-service.yaml 
-## 上面会得到一个nodeport ip ( kubectl get service )
-### seata-service           NodePort    10.108.3.238   <none>        8091:31236/TCP,3305:30992/TCP,8848:30093/TCP   12m
-## 把ip修改到examples/examples-deploy中 用于dns寻址
-## 连接到mysql 导入表结构
-## 启动 example deployment (samples-account,samples-storage)
-kubectl create -f example/example-deploy.yaml
-## 启动 example service (samples-account,samples-storage)
-kubectl create -f example/example-service.yaml
-## 启动 order deployment (samples-order)
-kubectl create -f example/example-deploy.yaml
-## 启动 order service (samples-order)
-kubectl create -f example/example-service.yaml
-## 启动 business deployment (samples-dubbo-business-call)
-kubectl create -f example/business-deploy.yaml 
-## 启动 business deployment (samples-dubbo-service-call)
-kubectl create -f example/business-service.yaml 
-```
-
-### 浏览器 打开 nacos 控制台 http://localhost:8848/nacos/ 看看所有实例是否注册成功
-### 测试
-```sh
-# 账户服务  扣费
-curl  -H "Content-Type: application/json" -X POST --data "{\"id\":1,\"userId\":\"1\",\"amount\":100}"   cluster-ip:8102/account/dec_account
-# 库存服务 扣库存
-curl  -H "Content-Type: application/json" -X POST --data "{\"commodityCode\":\"C201901140001\",\"count\":100}"   cluster-ip:8100/storage/dec_storage
-# 订单服务 添加订单 扣费
-curl  -H "Content-Type: application/json" -X POST --data "{\"userId\":\"1\",\"commodityCode\":\"C201901140001\",\"orderCount\":10,\"orderAmount\":100}"   cluster-ip:8101/order/create_order
-# 业务服务 客户端seata版本太低
-curl  -H "Content-Type: application/json" -X POST --data "{\"userId\":\"1\",\"commodityCode\":\"C201901140001\",\"count\":10,\"amount\":100}"   cluster-ip:8104/business/dubbo/buy
+# 部署业务服务
+kubectl apply -f example/business-deploy.yaml
+kubectl apply -f example/business-service.yaml
 ```
 
+### 验证
+
+打开 Nacos 控制台验证服务注册(如通过 NodePort 访问,请将 `localhost` 替换为节点 IP):
+
+```
+http://localhost:8848/nacos/
+```
+
+检查是否所有服务均已注册:
+- account-service(账户服务)
+- storage-service(库存服务)
+- order-service(订单服务)
+- business-service(业务服务)
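+
+如果无法直接访问 NodePort,可以通过端口转发在本地 `localhost:8848` 访问控制台(服务名 `seata-service` 取自 `deploy/seata-service.yaml`,请按实际情况调整):
+
+```shell
+kubectl port-forward service/seata-service 8848:8848
+```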
+
+### 测试验证
+
+使用以下 curl 命令测试分布式事务场景:
+
+#### 测试 1:账户服务 - 扣费
+
+```shell
+curl -H "Content-Type: application/json" \
+  -X POST \
+  --data '{"id":1,"userId":"1","amount":100}' \
+  http://<CLUSTER_IP>:8102/account/dec_account
+```
+
+#### 测试 2:库存服务 - 扣库存
+
+```shell
+curl -H "Content-Type: application/json" \
+  -X POST \
+  --data '{"commodityCode":"C201901140001","count":100}' \
+  http://<CLUSTER_IP>:8100/storage/dec_storage
+```
+
+#### 测试 3:订单服务 - 创建订单
+
+```shell
+curl -H "Content-Type: application/json" \
+  -X POST \
+  --data '{"userId":"1","commodityCode":"C201901140001","orderCount":10,"orderAmount":100}' \
+  http://<CLUSTER_IP>:8101/order/create_order
+```
+
+#### 测试 4:业务服务 - 执行事务
+
+```shell
+curl -H "Content-Type: application/json" \
+  -X POST \
+  --data '{"userId":"1","commodityCode":"C201901140001","count":10,"amount":100}' \
+  http://<CLUSTER_IP>:8104/business/dubbo/buy
+```
+
+用实际 NodePort 服务的 IP 地址替换 `<CLUSTER_IP>`。
+
+---
+
+## 故障排查
+
+### Pod 无法启动
+
+```shell
+# 查看 Pod 日志
+kubectl logs <pod-name>
+
+# 查看 Pod 详情
+kubectl describe pod <pod-name>
+```
+
+### 服务无法连接
+
+```shell
+# 测试 DNS 解析
+kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup seata-server-0.seata-server-cluster.default.svc
+```
+
+### 持久化卷问题
+
+```shell
+# 查看 PVC 状态
+kubectl get pvc
+
+# 查看 PV 状态
+kubectl get pv
+```
+
+## 更多信息
+
+- [Seata 官方文档](https://seata.apache.org/)
+- [Kubernetes 文档](https://kubernetes.io/docs/)
+- [Operator SDK 文档](https://sdk.operatorframework.io/)


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
