Ethanlm commented on a change in pull request #3409:
URL: https://github.com/apache/storm/pull/3409#discussion_r689793357



##########
File path: docs/Metrics.md
##########
@@ -213,37 +213,33 @@ This metric records how many errors were reported by a spout/bolt. It is the tot
 
 #### Queue Metrics
 
-Each bolt or spout instance in a topology has a receive queue and a send queue.  Each worker also has a queue for sending messages to other workers.  All of these have metrics that are reported.
+Each bolt or spout instance in a topology has a receive queue.  Each worker also has a worker transfer queue for sending messages to other workers.  All of these have metrics that are reported.
 
-The receive queue metrics are reported under the `__receive` name and send queue metrics are reported under the `__sendqueue` for the given bolt/spout they are a part of.  The metrics for the queue that sends messages to other workers is under the `__transfer` metric name for the system bolt (`__system`).
+The receive queue metrics are reported under the `receive_queue` name.  The metrics for the queue that sends messages to other workers are under the `worker-transfer-queue` metric name for the system bolt (`__system`).
 
-They all have the form.
+These queues report the following metrics:
 
 ```
 {
     "arrival_rate_secs": 1229.1195171893523,
     "overflow": 0,
-    "read_pos": 103445,
-    "write_pos": 103448,
     "sojourn_time_ms": 2.440771591407277,
     "capacity": 1024,
-    "population": 19
-    "tuple_population": 200
+    "population": 19,
+    "pct_full": "0.018".
+    "insert_failures": "0",
+    "dropped_messages": "0"
 }
 ```
-In storm we sometimes batch multiple tuples into a single entry in the disruptor queue. This batching is an optimization that has been in storm in some form since the beginning, but the metrics did not always reflect this so be careful with how you interpret the metrics and pay attention to which metrics are for tuples and which metrics are for entries in the disruptor queue. The `__receive` and `__transfer` queues can have batching but the `__sendqueue` should not.
 
 `arrival_rate_secs` is an estimation of the number of tuples that are inserted into the queue in one second, although it is actually the dequeue rate.
 The `sojourn_time_ms` is calculated from the arrival rate and is an estimate of how many milliseconds each tuple sits in the queue before it is processed.
-Prior to STORM-2621 (v1.1.1, v1.2.0, and v2.0.0) these were the rate of entries, not of tuples.
 
-A disruptor queue has a set maximum number of entries.  If the regular queue fills up an overflow queue takes over.  The number of tuple batches stored in this overflow section are represented by the `overflow` metric.  Storm also does some micro batching of tuples for performance/efficiency reasons so you may see the overflow with a very small number in it even if the queue is not full.
+The queue has a set maximum number of entries.  If the regular queue fills up, an overflow queue takes over.  The number of tuple batches stored in this overflow section is represented by the `overflow` metric.  Storm also does some micro batching of tuples for performance/efficiency reasons so you may see the overflow with a very small number in it even if the queue is not full.

Review comment:
       I am not very sure about this sentence:
   ```
   Storm also does some micro batching of tuples for performance/efficiency reasons so you may see the overflow with a very small number in it even if the queue is not full.
   ```
   Is this still valid?



