isaacreath commented on code in PR #2058:
URL: https://github.com/apache/cassandra/pull/2058#discussion_r1061719687


##########
test/distributed/org/apache/cassandra/distributed/test/metrics/StreamingMetricsTest.java:
##########
@@ -145,10 +155,159 @@ public void testMetricsWithRepairAndStreamingToTwoNodes() throws Exception
         testMetricsWithStreamingToTwoNodes(true);
     }
 
-    private int getNumberOfSSTables(Cluster cluster, int node) {
+    @Test
+    public void testMetricsUpdateIncrementallyWithRepairAndStreamingBetweenNodes() throws Exception
+    {
+        try(Cluster cluster = init(Cluster.build(2)
+                                          .withDataDirCount(1)
+                                          .withConfig(config -> config.with(NETWORK, GOSSIP)
+                                                                      .set("stream_entire_sstables", false)
+                                                                      .set("hinted_handoff_enabled", false))
+                                          .start(), 2))
+        {
+            runStreamingOperationAndCheckIncrementalMetrics(cluster, () -> cluster.get(2).nodetool("repair", "--full"));
+        }
+    }
+
+    @Test
+    public void testMetricsUpdateIncrementallyWithRebuildAndStreamingBetweenNodes() throws Exception
+    {
+        try(Cluster cluster = init(Cluster.build(2)
+                                          .withDataDirCount(1)
+                                          .withConfig(config -> config.with(NETWORK, GOSSIP)
+                                                                      .set("stream_entire_sstables", false)
+                                                                      .set("hinted_handoff_enabled", false))
+                                          .start(), 2))
+        {
+            runStreamingOperationAndCheckIncrementalMetrics(cluster, () -> cluster.get(2).nodetool("rebuild"));
+        }
+    }
+
+    /**
+     * Test to verify that streaming metrics are updated incrementally
+     * - Create a 2-node cluster with RF=2
+     * - Create 1 sstable of 10MB on node1, while node2 is empty due to message drop
+     * - Run repair OR rebuild on node2 to transfer the sstable from node1
+     * - Collect metrics during streaming and check that at least 3 different values are reported [0, partial1, .., final_size]
+     * - Check the final transferred size is correct (~10MB)
+     */
+    public void runStreamingOperationAndCheckIncrementalMetrics(Cluster cluster, Callable<Integer> streamingOperation) throws Exception
+    {
+        assertThat(cluster.size())
+            .describedAs("The minimum cluster size to check streaming metrics is 2 nodes.")
+            .isEqualTo(2);
+
+        // Create table with compression disabled so we can easily compute the expected final sstable size
+        cluster.schemaChange(String.format("CREATE TABLE %s.cf (k text PRIMARY KEY, c1 text) " +
+                                           "WITH compaction = {'class': '%s', 'enabled': 'false'} " +
+                                           "AND compression = {'enabled':'false'};",
+                                           KEYSPACE, "SizeTieredCompactionStrategy"));

Review Comment:
   This just keeps things consistent with the way we create a schema in the existing tests (see: https://github.com/apache/cassandra/blob/trunk/test/distributed/org/apache/cassandra/distributed/test/metrics/StreamingMetricsTest.java#L71). If the simpler form is better, we should change it in both places.
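
   For reference, a minimal sketch of what the simpler form under discussion could look like, assuming the suggestion is to inline the compaction strategy rather than pass it through String.format (this sketch is illustrative, not part of the PR):

       cluster.schemaChange(String.format("CREATE TABLE %s.cf (k text PRIMARY KEY, c1 text) " +
                                          "WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'enabled': 'false'} " +
                                          "AND compression = {'enabled': 'false'};",
                                          KEYSPACE));

   Either form creates the same table; the difference is only how readable the schema statement is at the call site.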


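The diff hunk above is truncated before the part of runStreamingOperationAndCheckIncrementalMetrics that performs the incremental check described in its javadoc. Below is a minimal sketch of how such a check could be structured; getOutgoingStreamedBytes is a hypothetical helper standing in for however the test actually reads the node's outgoing streaming metric, so the details are assumptions rather than the PR's implementation:

    // Run the streaming operation (repair or rebuild) on a separate thread so the
    // metric can be sampled while the transfer is in progress.
    ExecutorService executor = Executors.newSingleThreadExecutor();
    try
    {
        Future<Integer> streamingResult = executor.submit(streamingOperation);

        // Collect the distinct metric values observed during the operation.
        Set<Long> observedValues = new HashSet<>();
        while (!streamingResult.isDone())
        {
            observedValues.add(getOutgoingStreamedBytes(cluster, 1)); // hypothetical helper
            Thread.sleep(100);
        }
        observedValues.add(getOutgoingStreamedBytes(cluster, 1));

        // If the metric is updated incrementally rather than in a single final jump,
        // at least three distinct values (0, some partial value, the final size) appear.
        assertThat(observedValues.size()).isGreaterThanOrEqualTo(3);
    }
    finally
    {
        executor.shutdown();
    }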

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

