This is an automated email from the ASF dual-hosted git repository.

ningjiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-servicecomb-website.git

commit 10816bd5540b44465edfe82dcc190dbaa7324fb5
Author: zhengyangyong <yangyong.zh...@huawei.com>
AuthorDate: Wed Feb 28 10:09:36 2018 +0800

    update metrics docs for 1.0.0-m1 release
    
    Signed-off-by: zhengyangyong <yangyong.zh...@huawei.com>
---
 _users/cn/metrics-in-1.0.0-m1.md                   | 250 +++++++++++++--------
 ...rics-integration-with-prometheus-in-1.0.0-m1.md |  71 +++---
 ...-write-file-extension-and-sample-in-1.0.0-m1.md |   2 +-
 _users/metrics-in-1.0.0-m1.md                      | 234 ++++++++++++-------
 ...rics-integration-with-prometheus-in-1.0.0-m1.md |  69 +++---
 assets/images/MetricsDependency.png                | Bin 10921 -> 10323 bytes
 assets/images/MetricsInGrafana.png                 | Bin 35400 -> 108063 bytes
 assets/images/MetricsInPrometheus.png              | Bin 60061 -> 47642 bytes
 assets/images/MetricsWriteFileResult.png           | Bin 59021 -> 74432 bytes
 9 files changed, 363 insertions(+), 263 deletions(-)

diff --git a/_users/cn/metrics-in-1.0.0-m1.md b/_users/cn/metrics-in-1.0.0-m1.md
index 9c93f3e..fdf633e 100644
--- a/_users/cn/metrics-in-1.0.0-m1.md
+++ b/_users/cn/metrics-in-1.0.0-m1.md
@@ -31,40 +31,41 @@ redirect_from:
 4. 没有提供通用数据发布接口,难以和更多的第三方监控系统做集成;  
 5. 由于foundation-metrics模块过于底层,用户无法以可选的方式决定是否启用;  
 
-因此,从0.5.0版本升级到1.0.0-m1版本,我们进行了一次全面的重构,重构后的Metrics将分为如下几个模块  
+因此,从0.5.0版本升级到1.0.0-m1版本,我们进行了一次全面的重构,重构后的Metrics将分为如下三个模块  
 
 | Module名             | 描述                               |
 | :------------------ | :------------------------------- |
+| foundation-metrics  | Metrics机制层模块,提供Metrics基础能力  |
 | metrics-core        | Metrics核心模块,引入后即启用Metrics数据收集功能  |
-| metrics-common      | Metrics通用模块,主要包含Metric DTO用于数据发布 |
-| metrics-extension   | 包含Metrics的一些扩展功能                 |
 | metrics-integration | 包含Metrics与其它三方系统集成               |
 
-它们的依赖关系如下图所示:
+它们的依赖关系如下图所示:  
+
 ![MetricsDependency.png](/assets/images/MetricsDependency.png)
 
 ### 数据采集不再依赖Hystrix(handler-bizkeeper),使用事件埋点收集与调用相关的所有数据
-1.0.0-m1版本不再从Hystrix获取调用的TPS和Latency,避免了不配置Java Chassis Bizkeeper 
Handler就不会输出这两项数据的问题;使用foundation-common中的EventBus作为事件总线,metrics-core中的DefaultEventListenerManager初始化后会立即注册三个事件监听处理类:
+1.0.0-m1版本不再从Hystrix获取调用的TPS和Latency,避免了不配置Java Chassis Bizkeeper Handler就不会输出这两项数据的问题;使用foundation-common中的EventBus作为事件总线,EventBus初始化的时候会通过SPI(Service Provider Interface)的机制将所有的EventListener注册进来,已实现的EventListener包括:
 
-| 事件监听处理类名                               | 功能                        |
+| 事件监听处理类名                               | 功能                 |
 | :------------------------------------- | :------------------------ |
-| InvocationStartedEventListener         | Consumer调用或Producer接收开始   |
-| InvocationStartProcessingEventListener | Producer从队列中取出调用开始处理      |
-| InvocationFinishedEventListener        | Consumer调用返回或Producer处理完毕 |
+| InvocationStartedEventListener         | 处理Consumer调用或Producer接收开始时触发的InvocationStartedEvent   |
+| InvocationStartExecutionEventListener | 处理Producer从队列中取出调用开始处理时触发的InvocationStartExecutionEvent      |
+| InvocationFinishedEventListener        | 处理Consumer调用返回或Producer处理完毕触发的InvocationFinishedEvent |
 
-*特别说明,Java 
Chassis的Reactor框架基于[Vertx](http://vertx.io/),在同步调用模式下,微服务Producer端收到Invocation后,并不会马上同步处理请求,而是将它放入一个处理队列中,Invocation在队列中的时间称为**LifeTimeInQueue**,队列的长度称为**waitInQueue**,这是衡量微服务压力的两个重要指标,可以参考操作系统磁盘读写队列的概念;Consumer端并不会有队列,因此永远不会触发InvocationStartProcessingEvent。*
+*特别说明,Java Chassis的Reactor框架基于[Vertx](http://vertx.io/),在同步调用模式下,微服务Producer端收到Invocation后,并不会马上同步处理请求,而是将它放入一个处理队列中,Invocation在队列中的时间称为**LifeTimeInQueue**,队列的长度称为**waitInQueue**,这是衡量微服务压力的两个重要指标,可以参考操作系统磁盘读写队列的概念;Consumer端并不会有队列,因此永远不会触发InvocationStartExecutionEvent。*
 
 事件触发的代码分布在Java 
Chassis的RestInvocation、HighwayServerInvoke和InvokerUtils中,如果微服务没有启用Metrics,EventBus中就不会注册Metrics事件监听处理器,因此对性能的影响微乎其微。
 
 ### 使用Netflix Servo作为Metric的计数器
-[Netflix Servo](https://github.com/Netflix/servo)具有性能极高的计数器(Monitor),我们使用了四种:  
+[Netflix Servo](https://github.com/Netflix/servo)具有性能极高的计数器(Monitor),我们使用了五种:  
 
-| Monitor名     | 描述                               |
+| Monitor名     | 描述                             |
 | :----------- | :------------------------------- |
-| BasicCounter | 基本累积计数器(永续累加)                    |
+| BasicCounter | 基本累加计数器(永续累加)                    |
 | StepCounter  | 周期累加计数器(以前曾经称为ResettableCounter) |
-| MinGauge     | 周期最小值计数器                         |
-| MaxGauge     | 周期最大值计数器                         |
+| BasicTimer   | 时间计数器                         |
+| BasicGauge   | 基本计量器                         |
+| MaxGauge     | 周期最大值计数器                         |  
 
 *依赖的Servo版本为0.10.1*
 
@@ -79,40 +80,32 @@ Metrics有很多种分类方式,在技术实现上我们偏向以取值方式
   c) 与个数相关的,比如累加平均值、方差等等;    
   获取此类Metrics的值,返回的是上一个周期的统计结果,具有一定的延后性。在Servo中,这个时间被称为[“Polling 
Intervals”](https://github.com/Netflix/servo/wiki/Getting-Started)。    
   
从1.0.0-m1开始,可以通过microservice.yaml中的servicecomb.metrics.window_time配置设置周期,效果与servo.pollers一致。
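上面描述的“周期型统计返回上一个周期的结果”这一语义,可以用一个极简的Java草稿示意(独立示例,WindowCounter为假设的类名,并非Servo或ServiceComb的真实API):

```java
import java.util.concurrent.atomic.AtomicLong;

// 周期计数器示意:每个时间窗(window_time)结束时封存读数并清零,
// 读取到的永远是上一个周期的统计结果(假设的示例类,非真实API)
public class WindowCounter {
  private final AtomicLong current = new AtomicLong();
  private volatile long lastWindowValue;

  public void increment() {
    current.incrementAndGet();
  }

  // 由调度器每window_time毫秒调用一次:封存本周期读数并开始新周期
  public void roll() {
    lastWindowValue = current.getAndSet(0);
  }

  // 返回上一个周期的统计结果,因此具有一定的延后性
  public long getValue() {
    return lastWindowValue;
  }

  public static void main(String[] args) {
    WindowCounter counter = new WindowCounter();
    counter.increment();
    counter.increment();
    counter.roll(); // 第一个时间窗结束
    System.out.println(counter.getValue()); // 2
    counter.increment();
    System.out.println(counter.getValue()); // 仍是2:新周期尚未结束
  }
}
```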
  
-## Metric列表
-从1.0.0-m1开始,支持微服务Operation级别的Metric输出,列表如下:  
-
-| Group       | Level                  | Catalog  | Metrics         | Item     
      |
-| :---------- | :--------------------- | :------- | :-------------- | 
:------------- |
-| servicecomb | instance               | system   | cpu             | load     
      |
-| servicecomb | instance               | system   | cpu             | 
runningThreads |
-| servicecomb | instance               | system   | heap            | init     
      |
-| servicecomb | instance               | system   | heap            | max      
      |
-| servicecomb | instance               | system   | heap            | commit   
      |
-| servicecomb | instance               | system   | heap            | used     
      |
-| servicecomb | instance               | system   | nonHeap         | init     
      |
-| servicecomb | instance               | system   | nonHeap         | max      
      |
-| servicecomb | instance               | system   | nonHeap         | commit   
      |
-| servicecomb | instance               | system   | nonHeap         | used     
      |
-| servicecomb | instance &#124; operationName | producer | waitInQueue     | 
count          |
-| servicecomb | instance &#124; operationName | producer | lifeTimeInQueue | 
average        |
-| servicecomb | instance &#124; operationName | producer | lifeTimeInQueue | 
max            |
-| servicecomb | instance &#124; operationName | producer | lifeTimeInQueue | 
min            |
-| servicecomb | instance &#124; operationName | producer | executionTime   | 
average        |
-| servicecomb | instance &#124; operationName | producer | executionTime   | 
max            |
-| servicecomb | instance &#124; operationName | producer | executionTime   | 
min            |
-| servicecomb | instance &#124; operationName | producer | producerLatency | 
average        |
-| servicecomb | instance &#124; operationName | producer | producerLatency | 
max            |
-| servicecomb | instance &#124; operationName | producer | producerLatency | 
min            |
-| servicecomb | instance &#124; operationName | producer | producerCall    | 
total          |
-| servicecomb | instance &#124; operationName | producer | producerCall    | 
tps            |
-| servicecomb | instance &#124; operationName | consumer | consumerLatency | 
average        |
-| servicecomb | instance &#124; operationName | consumer | consumerLatency | 
max            |
-| servicecomb | instance &#124; operationName | consumer | consumerLatency | 
min            |
-| servicecomb | instance &#124; operationName | consumer | consumerCall    | 
total          |
-| servicecomb | instance &#124; operationName | consumer | consumerCall    | 
tps            |
-
-**当Level的值是“instance”的时候,代表微服务实例级别的Metric,否则代表微服务具体Operation的Metric,operationName使用的是Java
 Chassis MicroserviceQualifiedName,它是微服务名.SchemaID.操作方法名的组合。**
+
+## Metrics数据ID格式
+Java Chassis Metrics内置两种类型的Metric输出:
+### JVM信息
+输出ID格式为:*jvm(statistic=gauge,name={name})*
+name包括:  
+
+| name     | 描述                               |
+| :----------- | :------------------------------- |
+| cpuLoad | CPU使用率                    |
+| cpuRunningThreads  | 线程数 |
+| heapInit,heapMax,heapCommit,heapUsed  | 内存heap使用情况 |
+| nonHeapInit,nonHeapMax,nonHeapCommit,nonHeapUsed  | 内存nonHeap使用情况 |    
+
+### Invocation信息
+输出ID格式为:*servicecomb.invocation(operation={operationName},role={role},stage={stage},statistic={statistic},status={status},unit={unit})*
+标签含义及值如下:  
+
+| Tag名       | 描述                  | 值 |
+| :---------- | :---------- | :--------------------- |
+| operationName | Operation全名 | MicroserviceQualifiedName |
+| role | Consumer端统计还是Producer端统计 |consumer,producer |
+| stage | 统计的阶段 |queue(在队列中,仅producer),execution(执行阶段,仅producer),total(整体) |
+| statistic | 统计项 |tps,count(总调用次数),max,waitInQueue(在队列中等待数,仅producer),latency |
+| status | 调用结果状态值 |200, 404等等|
+| unit| 如果是时延统计,单位 | MILLISECONDS,SECONDS等等 |  
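按上表,Metrics ID遵循 name(tag1=value1,tag2=value2,...) 的文本约定。下面是一段解析该格式的独立Java草稿(MetricIdParser为假设的类名,并非ServiceComb提供的API,仅用于说明ID结构):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// 将形如 name(K1=V1,K2=V2) 的Metrics ID拆解为名称与Tag键值对(示意用草稿)
public class MetricIdParser {
  public static String parseName(String id) {
    int open = id.indexOf('(');
    return open < 0 ? id : id.substring(0, open);
  }

  public static Map<String, String> parseTags(String id) {
    Map<String, String> tags = new LinkedHashMap<>();
    int open = id.indexOf('(');
    if (open < 0 || !id.endsWith(")")) {
      return tags; // 没有Tag部分
    }
    String body = id.substring(open + 1, id.length() - 1);
    for (String pair : body.split(",")) {
      String[] kv = pair.split("=", 2);
      if (kv.length == 2) {
        tags.put(kv[0], kv[1]);
      }
    }
    return tags;
  }

  public static void main(String[] args) {
    String id = "servicecomb.invocation(operation=calculator.rest.calculate,"
        + "role=producer,stage=total,statistic=tps,status=200)";
    System.out.println(parseName(id)); // servicecomb.invocation
    System.out.println(parseTags(id).get("stage")); // total
  }
}
```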
 
 ## 如何配置
 ### 全局配置
@@ -125,13 +118,10 @@ service_description:
 
 servicecomb:
   metrics:
-    #时间窗间隔,与servo.pollers设置效果一致,单位毫秒
-    #支持多个时间窗间隔,使用逗号(,)将多个分隔开,例如5000,10000,代表设置两个时间窗
-    window_time: 5000,10000
+    #时间窗间隔,单位毫秒,默认为5000(5秒)
+    window_time: 5000
 ```
-*时间窗设置对于统计结果获取的影响,附上代码中包含的一段注释如下:*  
-
-![TimeWindowComment.png](/assets/images/TimeWindowComment.png)
+**为了降低Metrics理解和使用难度,我们暂时不支持多周期**
 
 ### 依赖配置
 只需要添加metrics-core依赖即可:  
@@ -143,43 +133,21 @@ servicecomb:
     </dependency>
 ```
 
-## 数据发布
+## 如何获取数据
 配置好Metrics后,你可以通过如下两种方式获取Metrics数据:  
-### 内置的发布接口
+### 通过发布接口获取
 当微服务启动后,metrics-core会自动以Springmvc的方式发布服务:  
 ```java
 @RestSchema(schemaId = "metricsEndpoint")
 @RequestMapping(path = "/metrics")
-public class DefaultMetricsPublisher implements MetricsPublisher {
-
-  private final DataSource dataSource;
-
-  public DefaultMetricsPublisher(DataSource dataSource) {
-    this.dataSource = dataSource;
-  }
-
-  @RequestMapping(path = "/appliedWindowTime", method = RequestMethod.GET)
-  @CrossOrigin
-  @Override
-  public List<Long> getAppliedWindowTime() {
-    return dataSource.getAppliedWindowTime();
-  }
-
-  @RequestMapping(path = "/", method = RequestMethod.GET)
-  @CrossOrigin
-  @Override
-  public RegistryMetric metrics() {
-    return dataSource.getRegistryMetric();
-  }
-
+public class MetricsPublisher {
   @ApiResponses({
       @ApiResponse(code = 400, response = String.class, message = "illegal 
request content"),
   })
-  @RequestMapping(path = "/{windowTime}", method = RequestMethod.GET)
+  @RequestMapping(path = "/", method = RequestMethod.GET)
   @CrossOrigin
-  @Override
-  public RegistryMetric metricsWithWindowTime(@PathVariable(name = 
"windowTime") long windowTime) {
-    return dataSource.getRegistryMetric(windowTime);
+  public Map<String, Double> measure() {
+    return MonitorManager.getInstance().measure();
   }
 }
 ```
@@ -193,14 +161,118 @@ cse:
     address: 0.0.0.0:8080
 ```
 你就可以通过http://localhost:8080/metrics 直接获取到数据,打开浏览器输入此URL就可以看到返回结果。
-### 直接代码获取
-从上面的代码可以看到,数据提供Bean接口是org.apache.servicecomb.metrics.core.publish.DataSource,因此如果你希望自己开发数据发布程序,只需要注入它即可。
+
+### 直接获取
+从上面的代码可以看到,数据提供的入口是org.apache.servicecomb.metrics.core.MonitorManager,因此如果你希望自己开发数据发布程序,只需要获取它即可。
+```java
+MonitorManager manager = MonitorManager.getInstance();
+Map<String, Double> metrics = manager.measure();
+```
+
+## 如何使用数据
+Metrics数据将以Map<String, Double>的形式输出,为了能够方便用户获取指定Metric的值,提供了org.apache.servicecomb.foundation.metrics.publish.MetricsLoader工具类:
 ```java
-@Autowired
-private DataSource dataSource;
+    //模拟MonitorManager.getInstance().measure()获取所有的Metrics值
+    Map<String, Double> metrics = new HashMap<>();
+    metrics.put("X(K1=1,K2=2,K3=3)", 100.0);
+    metrics.put("X(K1=1,K2=20,K3=30)", 200.0);
+    metrics.put("X(K1=2,K2=200,K3=300)", 300.0);
+    metrics.put("X(K1=2,K2=2000,K3=3000)", 400.0);
+
+    metrics.put("Y(K1=1,K2=2,K3=3)", 500.0);
+    metrics.put("Y(K1=10,K2=20,K3=30)", 600.0);
+    metrics.put("Y(K1=100,K2=200,K3=300)", 700.0);
+    metrics.put("Y(K1=1000,K2=2000,K3=3000)", 800.0);
+
+    //创建一个MetricsLoader加载所有的Metrics值
+    MetricsLoader loader = new MetricsLoader(metrics);
+
+    //获取name为X的所有Metrics并且按K1,K2两个Tag层次分组
+    MetricNode node = loader.getMetricTree("X","K1","K2");
+
+    //获取K1=1且K2=20的所有Metrics,因为node是按K1和K2的层次分组的
+    node.getChildrenNode("1").getChildrenNode("20").getMetrics();
+
+    //从层次结构中通过Tag匹配获取Metric的值
+    node.getChildrenNode("1").getChildrenNode("20").getFirstMatchMetricValue("K3","30");
+```
+*demo/perf/PerfMetricsFilePublisher.java提供了MetricsLoader更详细的使用示例*
+
+## 如何扩展
+Java Chassis Metrics支持自定义Metrics扩展,MonitorManager包含一组获取各类Monitor的方法:
+
+| 方法名       | 描述         |
+| :---------- | :---------- |
+| getCounter | 获取一个计数器类的Monitor |
+| getMaxGauge | 获取一个最大值统计Monitor |
+| getGauge | 获取基本计量Monitor |
+| getTimer | 获取一个时间计数器类的Monitor |
+
+以处理订单这个场景为例:
+```java
+public class OrderController {
+  private final Counter orderCount;
+  private final Counter orderTps;
+  private final Timer averageLatency;
+  private final MaxGauge maxLatency;
+
+  OrderController() {
+    MonitorManager manager = MonitorManager.getInstance();
+    //"商品名","levis jeans"与"型号","512" 是两个自定义Tag的name和value,支持定义多Tag
+    this.orderCount = manager.getCounter("订单数量", "商品名", "levis jeans", "型号", "512");
+    this.orderTps = manager.getCounter(StepCounter::new, "生成订单", "统计项", "事务每秒");
+    this.averageLatency = manager.getTimer("生成订单", "统计项", "平均生成时间", "单位", "MILLISECONDS");
+    this.maxLatency = manager.getMaxGauge("生成订单", "统计项", "最大生成时间", "单位", "MILLISECONDS");
+  }
+
+  public void makeOrder() {
+    long startTime = System.nanoTime();
+    //处理订单逻辑
+    //...
+    //处理完毕
+    long totalTime = System.nanoTime() - startTime;
+
+    //增加订单数量
+    this.orderCount.increment();
+    
+    //更新Tps
+    this.orderTps.increment();
+
+    //记录订单生成处理时间
+    this.averageLatency.record(totalTime, TimeUnit.NANOSECONDS);
+
+    //记录最大订单生成时间,因为惯用毫秒作为最终输出,因此我们转换一下单位
+    this.maxLatency.update(TimeUnit.NANOSECONDS.toMillis(totalTime));
+  }
+}
+```
+
+注意事项:  
+1. 通过MonitorManager获取Monitor传递name和tag数组,最终输出的ID是它们连接后的字符串,所以请保持唯一性,上面的例子输出的Metrics为:
+```java
+Map<String,Double> metrics = MonitorManager.getInstance().measure();
+
+//metrics的keySet()将包含:
+//     订单数量(商品名=levis jeans,型号=512)
+//     生成订单(统计项=事务每秒)
+//     生成订单(统计项=平均生成时间,单位=MILLISECONDS)
+//     生成订单(统计项=最大生成时间,单位=MILLISECONDS)
+```
+
+2. MonitorManager获取Monitor的方法均为**“获取或创建”**,因此多次传递相同的name和tag数组返回的是同一个计数器:
+```java
+    Counter counter1 = MonitorManager.getInstance().getCounter("订单数量", "商品名", "levis jeans", "型号", "512");
+    Counter counter2 = MonitorManager.getInstance().getCounter("订单数量", "商品名", "levis jeans", "型号", "512");
+
+    counter1.increment();
+    counter2.increment();
+
+    Assert.assertEquals(2,counter1.getValue());
+    Assert.assertEquals(2,counter2.getValue());
+    Assert.assertEquals(2.0,MonitorManager.getInstance().measure().get("订单数量(商品名=levis jeans,型号=512)"),0);
 ```
+**获取Monitor的方法性能较低,请在初始化阶段一次获取所需的Monitor,然后将它缓存起来,请参照前面OrderController的做法**
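“获取或创建”以及“name+tag拼接成唯一ID”这两点,可以用如下独立Java草稿示意(MonitorRegistry为假设的类名,用AtomicLong代替Servo的Counter,并非MonitorManager的真实实现):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// 示意“获取或创建”语义:相同的name+tag组合永远返回同一个计数器
// (假设的示例类,非ServiceComb的MonitorManager实现)
public class MonitorRegistry {
  private final Map<String, AtomicLong> counters = new ConcurrentHashMap<>();

  public AtomicLong getCounter(String name, String... tags) {
    // 将name与tag键值对拼接为唯一ID,形如 name(K1=V1,K2=V2)
    StringBuilder key = new StringBuilder(name).append('(');
    for (int i = 0; i + 1 < tags.length; i += 2) {
      if (i > 0) {
        key.append(',');
      }
      key.append(tags[i]).append('=').append(tags[i + 1]);
    }
    key.append(')');
    // computeIfAbsent保证并发下同一key只创建一次
    return counters.computeIfAbsent(key.toString(), k -> new AtomicLong());
  }

  public static void main(String[] args) {
    MonitorRegistry registry = new MonitorRegistry();
    registry.getCounter("订单数量", "商品名", "levis jeans", "型号", "512").incrementAndGet();
    registry.getCounter("订单数量", "商品名", "levis jeans", "型号", "512").incrementAndGet();
    System.out.println(registry.getCounter("订单数量", "商品名", "levis jeans", "型号", "512").get()); // 2
  }
}
```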
 
 ## 参考示例
 我们已经开发完成了两个使用场景可以作为参考:  
-1. metrics-wirte-file:将Metrics数据写入文件,代码在metrics-extension中;  
-2. metrics-prometheus:将Metrics发布为prometheus Producer。  
+1. metrics-write-file:将Metrics数据写入文件,代码在samples/metrics-write-file-sample中;  
+2. metrics-prometheus:将Metrics发布为prometheus Producer。  
\ No newline at end of file
diff --git a/_users/cn/metrics-integration-with-prometheus-in-1.0.0-m1.md 
b/_users/cn/metrics-integration-with-prometheus-in-1.0.0-m1.md
index 559ad62..b5100dc 100644
--- a/_users/cn/metrics-integration-with-prometheus-in-1.0.0-m1.md
+++ b/_users/cn/metrics-integration-with-prometheus-in-1.0.0-m1.md
@@ -35,51 +35,6 @@ Prometheus推荐Pull模式拉取Metrics数据,被监控微服务作为Producer
   </dependency>
 ```
 因此一旦集成Prometheus引入了metrics-prometheus依赖后,不再需要添加metrics-core的依赖。
-### 与metrics-core Publish的关系
-文档[1.0.0-m1版本中的监控](/cn/users/metrics-in-1.0.0-m1/)中已经提到,metrics-core会伴随微服务启动内置的数据发布,如果你在microservice.yaml中配置了rest
 provider,例如:  
-```yaml
-cse:
-  service:
-    registry:
-      address: http://127.0.0.1:30100
-  rest:
-    address: 0.0.0.0:8080
-```
-你就可以通过http://localhost:8080/metrics 
直接获取到Metrics数据,它返回的是org.apache.servicecomb.metrics.common.RegistryMetric实体对象,输出格式为:
-```json
-{"instanceMetric":{
-"systemMetric":{"cpuLoad":10.0,"cpuRunningThreads":39,"heapInit":266338304,"heapMax":3786407936,"heapCommit":626524160,"heapUsed":338280024,"nonHeapInit":2555904,"nonHeapMax":-1,"nonHeapCommit":60342272,"nonHeapUsed":58673152},
-"consumerMetric":{"operationName":"instance","prefix":"servicecomb.instance.consumer","consumerLatency":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"consumerCall":{"total":0,"tps":0.0}},
-"producerMetric":{"operationName":"instance","prefix":"servicecomb.instance.producer","waitInQueue":0,"lifeTimeInQueue":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"executionTime":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"producerLatency":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"producerCall":{"total":1,"tps":0.0}}},
-"consumerMetrics":{},
-"producerMetrics":{"calculator.metricsEndpoint.metrics":{"operationName":"calculator.metricsEndpoint.metrics","prefix":"servicecomb.calculator.metricsEndpoint.metrics.producer","waitInQueue":0,"lifeTimeInQueue":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"executionTime":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"producerLatency":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"producerCall":{"total":1,"tps":0.0}}
-}}
-```
-使用Prometheus Simple HTTP Server接口发布的数据是Prometheus采集的标准格式:
-```text
-# HELP Instance Level Instance Level Metrics
-# TYPE Instance Level untyped
-servicecomb_instance_producer_producerLatency_average 0.0
-servicecomb_instance_producer_producerLatency_total 0.0
-servicecomb_instance_consumer_producerLatency_count 0.0
-...
-servicecomb_instance_producer_producerLatency_min 0.0
-servicecomb_instance_producer_lifeTimeInQueue_average 0.0
-servicecomb_instance_producer_lifeTimeInQueue_count 0.0
-servicecomb_instance_system_heap_init 2.66338304E8
-# HELP calculator.metricsEndpoint.metrics Producer Side 
calculator.metricsEndpoint.metrics Producer Side Metrics
-# TYPE calculator.metricsEndpoint.metrics Producer Side untyped
-servicecomb_calculator_metricsEndpoint_metrics_producer_lifeTimeInQueue_average
 0.0
-...
-servicecomb_calculator_metricsEndpoint_metrics_producer_executionTime_total 0.0
-servicecomb_calculator_metricsEndpoint_metrics_producer_waitInQueue_count 0.0
-servicecomb_calculator_metricsEndpoint_metrics_producer_lifeTimeInQueue_count 
0.0
-```
-所以它们两个是完全独立各有用途的。  
-
-*Prometheus Simple HTTP 
Server同样使用/metrics作为默认URL,metrics-prometheus会使用9696作为默认端口,因此微服务启动后你可以使用http://localhost:9696/metrics
 访问它。*  
-
-我们可以看到在Prometheus的Metric命名统一使用下划线代替了点,因为需要遵守它的[命名规则](https://prometheus.io/docs/practices/naming/)。
 
 ## 如何配置
 开启对接普罗米修斯非常简单:
@@ -129,6 +84,32 @@ scrape_configs:
       - targets: ['localhost:9696']
 ```
 其中job_name: 
'servicecomb'即自定义的job配置,目标是本地微服务localhost:9696,关于prometheus.yml的配置更多信息可以参考[这篇文章](https://prometheus.io/docs/prometheus/latest/configuration/configuration/)。
+
+### 验证输出
+Prometheus Simple HTTP Server使用/metrics作为默认URL,metrics-prometheus会使用9696作为默认端口,微服务启动后你可以使用http://localhost:9696/metrics 访问它。
+使用Prometheus Simple HTTP Server接口发布的数据是Prometheus采集的标准格式:
+```text
+# HELP ServiceComb Metrics ServiceComb Metrics
+# TYPE ServiceComb Metrics untyped
+jvm{name="cpuRunningThreads",statistic="gauge",} 45.0
+jvm{name="heapMax",statistic="gauge",} 3.786407936E9
+jvm{name="heapCommit",statistic="gauge",} 6.12892672E8
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="total",statistic="max",status="200",unit="MILLISECONDS",} 1.0
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="total",statistic="tps",status="200",} 0.4
+jvm{name="nonHeapCommit",statistic="gauge",} 6.1104128E7
+jvm{name="nonHeapInit",statistic="gauge",} 2555904.0
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="execution",statistic="max",status="200",unit="MILLISECONDS",} 0.0
+jvm{name="heapUsed",statistic="gauge",} 2.82814088E8
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="total",statistic="latency",status="200",unit="MILLISECONDS",} 1.0
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="execution",statistic="latency",status="200",unit="MILLISECONDS",} 0.0
+jvm{name="heapInit",statistic="gauge",} 2.66338304E8
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="queue",statistic="waitInQueue",} 0.0
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="total",statistic="count",status="200",} 39.0
+jvm{name="nonHeapUsed",statistic="gauge",} 5.9361032E7
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="queue",statistic="latency",status="200",unit="MILLISECONDS",} 0.0
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="queue",statistic="max",status="200",unit="MILLISECONDS",} 0.0
+```
+
 ### 配置Grafana(可选)
 
如何在Grafana中添加Prometheus作为数据源请参考[这篇文章](https://prometheus.io/docs/visualization/grafana/)。
 ## 运行效果
diff --git a/_users/cn/metrics-write-file-extension-and-sample-in-1.0.0-m1.md 
b/_users/cn/metrics-write-file-extension-and-sample-in-1.0.0-m1.md
index 81de0f1..bb9056e 100644
--- a/_users/cn/metrics-write-file-extension-and-sample-in-1.0.0-m1.md
+++ b/_users/cn/metrics-write-file-extension-and-sample-in-1.0.0-m1.md
@@ -90,7 +90,7 @@ Java Chassis集成了Spring Boot Starter,如果使用Spring Boot Starter启动
 Spring Boot Starter中包含了log4j-over-slf4j,这个Log 
Bridge并没有完全实现log4j的所有接口,包括RollingFileAppender,所以我们需要排除它让slf4j直接调用log4j而不是这个Log 
Bridge,请确定这种排除对你的系统不会有影响,关于log4j-over-slf4j的更多信息可以参考[这篇文章](https://www.slf4j.org/legacy.html#log4j-over-slf4j)。
 
 ## 运行示例
-metrics-write-file-config-log4j-springboot和metrics-write-file-config-log4j2-springboot都是可以直接运行的示例项目,使用ServiceApplication启动完成后,观察输出目录target/metric/下会生成很多Metrics文件,如果在浏览器中刷新几下http://localhost:8080/f
 请求,则可以看到对应的Operation级别的Metrics文件也会在目录下自动生成。    
+metrics-write-file-config-log4j-springboot和metrics-write-file-config-log4j2-springboot都是可以直接运行的示例项目,使用ServiceApplication启动完成后,观察输出目录target/metric/下会生成很多Metrics文件,如果在浏览器中刷新几下http://localhost:8080/f 请求,则可以看到对应的Operation的Metrics文件也会在目录下自动生成。    
 ![MetricsWriteFileResult](/assets/images/MetricsWriteFileResult.png)
 
 ## Q & A
diff --git a/_users/metrics-in-1.0.0-m1.md b/_users/metrics-in-1.0.0-m1.md
index 38f042a..7703e1e 100644
--- a/_users/metrics-in-1.0.0-m1.md
+++ b/_users/metrics-in-1.0.0-m1.md
@@ -35,35 +35,35 @@ So,upgrading from 0.5.0 to 1.0.0-m1,we had done a fully reconstruction,now it's
 
 | Module Name         | Description                              |
 | :------------------ | :--------------------------------------- |
+| foundation-metrics  | Metrics mechanism module,provides base Metrics capability |
 | metrics-core        | Metrics core module,work immediately after imported |
-| metrics-common      | Metrics common module,include DTO classes |
-| metrics-extension   | Include some metrics extension module    |
 | metrics-integration | Include metrics Integration with other monitor system |
 
 The dependency of this modules is:
 ![MetricsDependency.png](/assets/images/MetricsDependency.png)
 
 ### Use event collect invocation data,not from Hystrix(handler-bizkeeper)any 
more
-From 1.0.0-m1 invocation data such as TPS and latency are collected from 
invocation event,not from Hystrix(handler-bizkeeper) any more,so you don't need 
add Java Chassis Bizkeeper Handler only for metrics.we use EventBus in 
foundation-common,when DefaultEventListenerManager in metrics-core had 
initialized,three event listener class will be auto registered:
+From 1.0.0-m1, invocation data such as TPS and latency are collected from invocation events, not from Hystrix (handler-bizkeeper) any more, so you don't need to add the Java Chassis Bizkeeper Handler only for metrics. We use the EventBus in foundation-common; when the EventBus is initialized, three built-in event listener classes are auto-registered via SPI (Service Provider Interface):
 
 | Event Listener Name                    | Description                         
     |
 | :------------------------------------- | 
:--------------------------------------- |
-| InvocationStartedEventListener         | Trigger when consumer or producer 
called |
-| InvocationStartProcessingEventListener | Trigger when producer fetch 
invocation from queue and start process |
-| InvocationFinishedEventListener        | Trigger when consumer call returned 
or producer process finished |
+| InvocationStartedEventListener         | Process InvocationStartedEvent when a consumer call or producer reception starts |
+| InvocationStartExecutionEventListener | Process InvocationStartExecutionEvent when the producer fetches an invocation from the queue and starts processing it |
+| InvocationFinishedEventListener        | Process InvocationFinishedEvent when a consumer call returns or producer processing finishes |
 
 *ServiceComb java chassis had used [Vertx](http://vertx.io/) as Reactor 
framework,in synchronous call mode when producer received invocation from 
consumer,it won't start process immediately but put it into a queue,this queue 
called invocation queue(like disk queue in operation system),time waiting in 
the queue called **LifeTimeInQueue**,the length of the queue called 
**waitInQueue**,this two metrics are very important for measure stress of the 
microservice;consumer not has this queue,so  [...]
 
-The code for trigger event write in RestInvocation,HighwayServerInvoke and 
InvokerUtils,if microservice don't import metrics,event listener of metrics 
won't be registered,the impact on performance is little.
+The code that triggers these events lives in RestInvocation, HighwayServerInvoke and InvokerUtils; if a microservice does not import metrics, the metrics event listeners won't be registered, so the performance impact is negligible.
 
 ### Use Netflix Servo as Monitor of Metric
-[Netflix Servo](https://github.com/Netflix/servo) had implement a collection 
of high performance monitor,we had used four of them:
+[Netflix Servo](https://github.com/Netflix/servo) implements a collection of high-performance monitors; we use five of them:
 
 | Monitor Name | Description                       |
 | :----------- | :-------------------------------- |
 | BasicCounter | As name of it,always increment    |
 | StepCounter  | Called 'ResettableCounter' before |
-| MinGauge     | Mark min value in step            |
+| BasicTimer   | Time (latency) monitor            |
+| BasicGauge   | Gauge that reports the result of a Callable |
 | MaxGauge     | Mark max value in step            |
 
 *The version of Servo we used is 0.10.1*
@@ -80,42 +80,31 @@ Metrics had many classifications,we can divided them into two major types by how
   If get value of this type,the result returned is the last 'Step Cycle' 
counted.in Servo,this time called ['Polling 
Intervals'](https://github.com/Netflix/servo/wiki/Getting-Started).
   From 1.0.0-m1,can set **servicecomb.metrics.window_time** in 
microservice.yaml,it has same effect as set **servo.pollers**.   
 
-## Metric List
-From 1.0.0-m1,start support output metrics of operation level:   
-
-| Group       | Level                  | Catalog  | Metrics         | Item     
      |
-| :---------- | :--------------------- | :------- | :-------------- | 
:------------- |
-| servicecomb | instance               | system   | cpu             | load     
      |
-| servicecomb | instance               | system   | cpu             | 
runningThreads |
-| servicecomb | instance               | system   | heap            | init     
      |
-| servicecomb | instance               | system   | heap            | max      
      |
-| servicecomb | instance               | system   | heap            | commit   
      |
-| servicecomb | instance               | system   | heap            | used     
      |
-| servicecomb | instance               | system   | nonHeap         | init     
      |
-| servicecomb | instance               | system   | nonHeap         | max      
      |
-| servicecomb | instance               | system   | nonHeap         | commit   
      |
-| servicecomb | instance               | system   | nonHeap         | used     
      |
-| servicecomb | instance &#124; operationName | producer | waitInQueue     | 
count          |
-| servicecomb | instance &#124; operationName | producer | lifeTimeInQueue | 
average        |
-| servicecomb | instance &#124; operationName | producer | lifeTimeInQueue | 
max            |
-| servicecomb | instance &#124; operationName | producer | lifeTimeInQueue | 
min            |
-| servicecomb | instance &#124; operationName | producer | executionTime   | 
average        |
-| servicecomb | instance &#124; operationName | producer | executionTime   | 
max            |
-| servicecomb | instance &#124; operationName | producer | executionTime   | 
min            |
-| servicecomb | instance &#124; operationName | producer | producerLatency | 
average        |
-| servicecomb | instance &#124; operationName | producer | producerLatency | 
max            |
-| servicecomb | instance &#124; operationName | producer | producerLatency | 
min            |
-| servicecomb | instance &#124; operationName | producer | producerCall    | 
total          |
-| servicecomb | instance &#124; operationName | producer | producerCall    | 
tps            |
-| servicecomb | instance &#124; operationName | consumer | consumerLatency | 
average        |
-| servicecomb | instance &#124; operationName | consumer | consumerLatency | 
max            |
-| servicecomb | instance &#124; operationName | consumer | consumerLatency | 
min            |
-| servicecomb | instance &#124; operationName | consumer | consumerCall    | 
total          |
-| servicecomb | instance &#124; operationName | consumer | consumerCall    | 
tps            |
-
-**When the value of Level is 'instance',it's means microservice instance 
metric,otherwise specific operation metric,operationName same as Java Chassis 
MicroserviceQualifiedName,it's joined with microservice 
appId.SchemaID.methodName.**
-
-## How Configuration
+## Metrics ID Format
+From 1.0.0-m1, two types of Metric output are built in:   
+### JVM Information
+ID format is: *jvm(statistic=gauge,name={name})*
+Supported names:
+
+| Name     | Description                               |
+| :----------- | :------------------------------- |
+| cpuLoad | CPU load rate                    |
+| cpuRunningThreads  | Running thread count |
+| heapInit,heapMax,heapCommit,heapUsed  | Memory heap usage |
+| nonHeapInit,nonHeapMax,nonHeapCommit,nonHeapUsed  | Memory nonHeap usage |
+
+### Invocation Information
+ID format is: *servicecomb.invocation(operation={operationName},role={role},stage={stage},statistic={statistic},status={status},unit={unit})*
+Tag names and values:
+
+| Tag Name       | Description                  | Options or Values |
+| :---------- | :---------- | :--------------------- |
+| operationName | Operation full name | MicroserviceQualifiedName |
+| role | Consumer side or Producer side | consumer,producer |
+| stage | Stage of the metric | queue(producer only),execution(producer only),total |
+| statistic | Statistic type | tps,count(total call count),max,waitInQueue(producer only),latency |
+| status | Call result status code | 200, 404 etc. |
+| unit | Time unit of latency | MILLISECONDS,SECONDS etc. |
+
+## How to Configure
 ### Global Configuration
 Please add window time config in microservice.yaml:  
 ```yaml 
@@ -126,14 +115,10 @@ service_description:
 
 servicecomb:
   metrics:
-    #window time,same as servo.pollers,unit is millisecond
-    #support multi window time and use ',' split them,like 5000,10000
-    window_time: 5000,10000
+    #window time, same as servo.pollers; unit is milliseconds, default value is 5000 (5 seconds)
+    window_time: 5000
 ```
-
-*The setting of window time is very important to getting value of metrics,here 
is a comment show how it effect*
-
-![TimeWindowComment.png](/assets/images/TimeWindowComment.png)
+**To reduce the difficulty of understanding and using metrics, we temporarily do not support multiple window times.**
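+
+As a rough illustration of how the window affects rate statistics (assuming Servo's stepping-window semantics): with window_time: 5000, a tps value is computed over the last completed 5-second window, so 20 calls observed in that window yield tps = 20 / 5 = 4.0.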
 
 ### Maven Configuration
 We only need to add the metrics-core dependency:  
@@ -145,43 +130,21 @@ We just only need add metrics-core dependency:
     </dependency>
 ```
 
-## Metrics Publish
-After configuration completed,you can get collected metrics data via this 
method:   
+## How to Get Metrics Data
+After configuration is complete, you can get the collected metrics data in two ways:   
 ### Embedded publish interface
 When the microservice starts up, metrics-core automatically publishes the data service using the Springmvc provider:  
 ```java
 @RestSchema(schemaId = "metricsEndpoint")
 @RequestMapping(path = "/metrics")
-public class DefaultMetricsPublisher implements MetricsPublisher {
-
-  private final DataSource dataSource;
-
-  public DefaultMetricsPublisher(DataSource dataSource) {
-    this.dataSource = dataSource;
-  }
-
-  @RequestMapping(path = "/appliedWindowTime", method = RequestMethod.GET)
-  @CrossOrigin
-  @Override
-  public List<Long> getAppliedWindowTime() {
-    return dataSource.getAppliedWindowTime();
-  }
-
-  @RequestMapping(path = "/", method = RequestMethod.GET)
-  @CrossOrigin
-  @Override
-  public RegistryMetric metrics() {
-    return dataSource.getRegistryMetric();
-  }
-
+public class MetricsPublisher {
   @ApiResponses({
       @ApiResponse(code = 400, response = String.class, message = "illegal 
request content"),
   })
-  @RequestMapping(path = "/{windowTime}", method = RequestMethod.GET)
+  @RequestMapping(path = "/", method = RequestMethod.GET)
   @CrossOrigin
-  @Override
-  public RegistryMetric metricsWithWindowTime(@PathVariable(name = 
"windowTime") long windowTime) {
-    return dataSource.getRegistryMetric(windowTime);
+  public Map<String, Double> measure() {
+    return MonitorManager.getInstance().measure();
   }
 }
 ```
@@ -195,14 +158,117 @@ cse:
     address: 0.0.0.0:8080
 ```
 You can open a browser and go to http://localhost:8080/metrics to get the metrics data directly.  
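+The returned data is the Map<String, Double> serialized as JSON. For example (the keys depend on which operations your service registers; the values here are purely illustrative):
+```json
+{
+  "jvm(statistic=gauge,name=cpuLoad)": 10.0,
+  "servicecomb.invocation(operation=calculator.metricsEndpoint.measure,role=producer,stage=total,statistic=count,status=200)": 39.0
+}
+```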
+
+### Getting metrics programmatically
-From above code you can known,the interface of data provider bean is 
org.apache.servicecomb.metrics.core.publish.DataSource,so if you want develop 
your own metrics publisher,autowired it is enough.
+As the code above shows, the data provider entry point is org.apache.servicecomb.metrics.core.MonitorManager, so if you want to develop your own metrics publisher, getting it directly is enough:
+```java
+MonitorManager manager = MonitorManager.getInstance();
+Map<String, Double> metrics = manager.measure();
+```
+
+## How to Use Metrics Data
+Metrics data is output as a Map<String,Double>. To make it easier to fetch a particular metric value, we provide the org.apache.servicecomb.foundation.metrics.publish.MetricsLoader helper class:
+```java
+    //simulate MonitorManager.getInstance().measure() get all metrics data
+    Map<String, Double> metrics = new HashMap<>();
+    metrics.put("X(K1=1,K2=2,K3=3)", 100.0);
+    metrics.put("X(K1=1,K2=20,K3=30)", 200.0);
+    metrics.put("X(K1=2,K2=200,K3=300)", 300.0);
+    metrics.put("X(K1=2,K2=2000,K3=3000)", 400.0);
+
+    metrics.put("Y(K1=1,K2=2,K3=3)", 500.0);
+    metrics.put("Y(K1=10,K2=20,K3=30)", 600.0);
+    metrics.put("Y(K1=100,K2=200,K3=300)", 700.0);
+    metrics.put("Y(K1=1000,K2=2000,K3=3000)", 800.0);
+
+    //new MetricsLoader load all metrics data
+    MetricsLoader loader = new MetricsLoader(metrics);
+
+    //get name of 'X' Metrics then group by K1,K2
+    MetricNode node = loader.getMetricTree("X","K1","K2");
+
+    //get all Metrics of K1=1 and K2=20
+    node.getChildrenNode("1").getChildrenNode("20").getMetrics();
+
+    //get the K3=30 metric from the node
+    node.getChildrenNode("1").getChildrenNode("20").getFirstMatchMetricValue("K3","30");
+```
+*More details can be found in demo/perf/PerfMetricsFilePublisher.java*
+
+## How to Extend Custom Metrics
+Java Chassis Metrics supports user-defined custom metrics. MonitorManager provides a set of methods for getting different types of Monitor:
+
+| Method Name       | Description         |
+| :---------- | :---------- |
+| getCounter | Get a counter monitor |
+| getMaxGauge | Get a max monitor |
+| getGauge | Get a gauge monitor |
+| getTimer | Get a timer monitor |
+
+Let us take order processing as an example:
+```java
+public class OrderController {
+  private final Counter orderCount;
+  private final Counter orderTps;
+  private final Timer averageLatency;
+  private final MaxGauge maxLatency;
+
+  OrderController() {
+    MonitorManager manager = MonitorManager.getInstance();
+    //"product","levis jeans" and "model","512" are two custom tags; multiple tags are supported
+    this.orderCount = manager.getCounter("orderCount", "product", "levis jeans", "model", "512");
+    this.orderTps = manager.getCounter(StepCounter::new, "orderGenerated", "statistic", "tps");
+    this.averageLatency = manager.getTimer("orderGenerated", "statistic", "latency", "unit", "MILLISECONDS");
+    this.maxLatency = manager.getMaxGauge("orderGenerated", "statistic", "max", "unit", "MILLISECONDS");
+  }
+
+  public void makeOrder() {
+    long startTime = System.nanoTime();
+    //process order logic
+    //...
+    //process finished
+    long totalTime = System.nanoTime() - startTime;
+
+    //increase order count
+    this.orderCount.increment();
+    
+    //increase tps
+    this.orderTps.increment();
+
+    //record latency for average
+    this.averageLatency.record(totalTime, TimeUnit.NANOSECONDS);
+
+    //record max latency
+    this.maxLatency.update(TimeUnit.NANOSECONDS.toMillis(totalTime));
+  }
+}
+```
+Notice:
+1. The Metric ID is the name joined with all the tags passed to MonitorManager when getting the monitor, so please keep it unique. The metrics output of the example above is:
+```java
+Map<String,Double> metrics = MonitorManager.getInstance().measure();
+
+//metrics.keySet() include:
+//     orderCount(product=levis jeans,model=512)
+//     orderGenerated(statistic=tps)
+//     orderGenerated(statistic=latency,unit=MILLISECONDS)
+//     orderGenerated(statistic=max,unit=MILLISECONDS)
+```
+
+2. All the get-monitor methods in MonitorManager act as **'get or create'**, so using the same name and tags returns the same monitor:
 ```java
-@Autowired
-private DataSource dataSource;
+    Counter counter1 = MonitorManager.getInstance().getCounter("orderGenerated", "product", "levis jeans", "model", "512");
+    Counter counter2 = MonitorManager.getInstance().getCounter("orderGenerated", "product", "levis jeans", "model", "512");
+
+    counter1.increment();
+    counter2.increment();
+
+    Assert.assertEquals(2, counter1.getValue());
+    Assert.assertEquals(2, counter2.getValue());
+    Assert.assertEquals(2.0, MonitorManager.getInstance().measure().get("orderGenerated(product=levis jeans,model=512)"), 0);
 ```
+**Getting a monitor from MonitorManager is relatively slow, so please fetch all the monitors you need at initialization and cache them for later use, as in the OrderController example.**
 
 ## Other Reference 
 We had developed two use case for reference:  
-1. metrics-wirte-file:ouput metrics data into files,code is at 
metrics-extension;  
+1. metrics-write-file: output metrics data into files; the code is at samples\metrics-write-file-sample;  
 2. metrics-prometheus: integration with Prometheus, publishing metrics as a Prometheus producer.
\ No newline at end of file
diff --git a/_users/metrics-integration-with-prometheus-in-1.0.0-m1.md 
b/_users/metrics-integration-with-prometheus-in-1.0.0-m1.md
index 657789a..e78275c 100644
--- a/_users/metrics-integration-with-prometheus-in-1.0.0-m1.md
+++ b/_users/metrics-integration-with-prometheus-in-1.0.0-m1.md
@@ -35,50 +35,6 @@ As an integration(optional) module,the implementation code 
is in metrics-integra
   </dependency>
 ```
 So if we import metrics-prometheus, there is no longer any need to add the metrics-core dependency.
-### Relation between metrics-core Publish
-[Metrics in 1.0.0-m1](/users/metrics-in-1.0.0-m1/) had already been 
mentioned,metrics-core will auto start up a embedded publish interface,so if 
you had configured rest provider in microservice.yaml like:
-```yaml
-cse:
-  service:
-    registry:
-      address: http://127.0.0.1:30100
-  rest:
-    address: 0.0.0.0:8080
-```
-You can direct get metrics data at http://localhost:8080/metrics ,it will 
return a entity of org.apache.servicecomb.metrics.common.RegistryMetric,the 
output is:  
-```json
-{"instanceMetric":{
-"systemMetric":{"cpuLoad":10.0,"cpuRunningThreads":39,"heapInit":266338304,"heapMax":3786407936,"heapCommit":626524160,"heapUsed":338280024,"nonHeapInit":2555904,"nonHeapMax":-1,"nonHeapCommit":60342272,"nonHeapUsed":58673152},
-"consumerMetric":{"operationName":"instance","prefix":"servicecomb.instance.consumer","consumerLatency":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"consumerCall":{"total":0,"tps":0.0}},
-"producerMetric":{"operationName":"instance","prefix":"servicecomb.instance.producer","waitInQueue":0,"lifeTimeInQueue":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"executionTime":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"producerLatency":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"producerCall":{"total":1,"tps":0.0}}},
-"consumerMetrics":{},
-"producerMetrics":{"calculator.metricsEndpoint.metrics":{"operationName":"calculator.metricsEndpoint.metrics","prefix":"servicecomb.calculator.metricsEndpoint.metrics.producer","waitInQueue":0,"lifeTimeInQueue":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"executionTime":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"producerLatency":{"total":0,"count":0,"min":0,"max":0,"average":0.0},"producerCall":{"total":1,"tps":0.0}}
-}}
-```
-But use Prometheus Simple HTTP Server provider interface will publish the 
standard format which prometheus needed:
-```text
-# HELP Instance Level Instance Level Metrics
-# TYPE Instance Level untyped
-servicecomb_instance_producer_producerLatency_average 0.0
-servicecomb_instance_producer_producerLatency_total 0.0
-servicecomb_instance_consumer_producerLatency_count 0.0
-...
-servicecomb_instance_producer_producerLatency_min 0.0
-servicecomb_instance_producer_lifeTimeInQueue_average 0.0
-servicecomb_instance_producer_lifeTimeInQueue_count 0.0
-servicecomb_instance_system_heap_init 2.66338304E8
-# HELP calculator.metricsEndpoint.metrics Producer Side 
calculator.metricsEndpoint.metrics Producer Side Metrics
-# TYPE calculator.metricsEndpoint.metrics Producer Side untyped
-servicecomb_calculator_metricsEndpoint_metrics_producer_lifeTimeInQueue_average
 0.0
-...
-servicecomb_calculator_metricsEndpoint_metrics_producer_executionTime_total 0.0
-servicecomb_calculator_metricsEndpoint_metrics_producer_waitInQueue_count 0.0
-servicecomb_calculator_metricsEndpoint_metrics_producer_lifeTimeInQueue_count 
0.0
-```
-So they are two independent,different for use.   
-
-*Prometheus Simple HTTP Server also use /metrics as default 
URL,metrics-prometheus will use 9696 as default port,after microservice start 
up you can get metrics data at http://localhost:9696/metrics .*    
-The metrics name in prometheus we replace all dot with underline,because we 
must follow its [naming rules](https://prometheus.io/docs/practices/naming/).   
 
 
 ## How to Configure
 Enabling prometheus integration is very easy:
@@ -129,6 +85,31 @@ scrape_configs:
 ```
 The job_name: 'servicecomb' is our custom job; it collects metrics data from the local microservice at localhost:9696. More information about Prometheus configuration can be found [here](https://prometheus.io/docs/prometheus/latest/configuration/configuration/).  
 
 
+### Verify the Output
+The Prometheus Simple HTTP Server uses /metrics as the default URL and metrics-prometheus uses 9696 as the default port, so after the microservice starts up you can get metrics data at http://localhost:9696/metrics .
+It publishes the standard format that Prometheus requires:
+```text
+# HELP ServiceComb Metrics ServiceComb Metrics
+# TYPE ServiceComb Metrics untyped
+jvm{name="cpuRunningThreads",statistic="gauge",} 45.0
+jvm{name="heapMax",statistic="gauge",} 3.786407936E9
+jvm{name="heapCommit",statistic="gauge",} 6.12892672E8
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="total",statistic="max",status="200",unit="MILLISECONDS",} 1.0
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="total",statistic="tps",status="200",} 0.4
+jvm{name="nonHeapCommit",statistic="gauge",} 6.1104128E7
+jvm{name="nonHeapInit",statistic="gauge",} 2555904.0
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="execution",statistic="max",status="200",unit="MILLISECONDS",} 0.0
+jvm{name="heapUsed",statistic="gauge",} 2.82814088E8
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="total",statistic="latency",status="200",unit="MILLISECONDS",} 1.0
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="execution",statistic="latency",status="200",unit="MILLISECONDS",} 0.0
+jvm{name="heapInit",statistic="gauge",} 2.66338304E8
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="queue",statistic="waitInQueue",} 0.0
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="total",statistic="count",status="200",} 39.0
+jvm{name="nonHeapUsed",statistic="gauge",} 5.9361032E7
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="queue",statistic="latency",status="200",unit="MILLISECONDS",} 0.0
+servicecomb_invocation_calculator_calculatorRestEndpoint_calculate{role="producer",stage="queue",statistic="max",status="200",unit="MILLISECONDS",} 0.0
+```
+
 ### Config Grafana(optional)
 How to add Prometheus as a data source in Grafana can be found [here](https://prometheus.io/docs/visualization/grafana/).  
 ## Results
diff --git a/assets/images/MetricsDependency.png 
b/assets/images/MetricsDependency.png
index 8133e13..9cfb146 100644
Binary files a/assets/images/MetricsDependency.png and 
b/assets/images/MetricsDependency.png differ
diff --git a/assets/images/MetricsInGrafana.png 
b/assets/images/MetricsInGrafana.png
index 99d381c..3e9c73c 100644
Binary files a/assets/images/MetricsInGrafana.png and 
b/assets/images/MetricsInGrafana.png differ
diff --git a/assets/images/MetricsInPrometheus.png 
b/assets/images/MetricsInPrometheus.png
index 136eb1f..2ac6a95 100644
Binary files a/assets/images/MetricsInPrometheus.png and 
b/assets/images/MetricsInPrometheus.png differ
diff --git a/assets/images/MetricsWriteFileResult.png 
b/assets/images/MetricsWriteFileResult.png
index 28a2449..66bb5e7 100644
Binary files a/assets/images/MetricsWriteFileResult.png and 
b/assets/images/MetricsWriteFileResult.png differ

-- 
To stop receiving notification emails like this one, please contact
ningji...@apache.org.
