devmadhuu commented on code in PR #9681:
URL: https://github.com/apache/ozone/pull/9681#discussion_r2798588867
##########
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/StorageDistributionEndpoint.java:
##########
@@ -114,6 +128,85 @@ public Response getStorageDistribution() {
}
}
+ @GET
+ @Path("/download")
+ public Response downloadDataNodeDistribution() {
Review Comment:
Add some javadoc, as this is a public API method.
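A possible javadoc sketch for the method (wording illustrative, not from the PR):

```java
/**
 * Downloads per-datanode storage and pending deletion metrics as a CSV
 * attachment.
 *
 * @return a text/csv stream once metric collection has FINISHED; otherwise
 *         202 (Accepted) with the current collection status as JSON.
 */
```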
##########
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/StorageDistributionEndpoint.java:
##########
@@ -114,6 +128,85 @@ public Response getStorageDistribution() {
}
}
+ @GET
+ @Path("/download")
+ public Response downloadDataNodeDistribution() {
Review Comment:
```suggestion
public Response downloadDataNodeStorageDistribution() {
```
##########
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/StorageDistributionEndpoint.java:
##########
@@ -114,6 +128,85 @@ public Response getStorageDistribution() {
}
}
+ @GET
+ @Path("/download")
+ public Response downloadDataNodeDistribution() {
+ DataNodeMetricsServiceResponse metricsResponse =
+ dataNodeMetricsService.getCollectedMetrics(null);
+
+ if (metricsResponse.getStatus() != DataNodeMetricsService.MetricCollectionStatus.FINISHED) {
+ return Response.status(Response.Status.ACCEPTED)
+ .entity(metricsResponse)
+ .type(MediaType.APPLICATION_JSON)
+ .build();
+ }
+
+ List<DatanodePendingDeletionMetrics> pendingDeletionMetrics =
+ metricsResponse.getPendingDeletionPerDataNode();
+
+ if (pendingDeletionMetrics == null) {
+ return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
+ .entity("Metrics data is missing despite FINISHED status.")
+ .type(MediaType.TEXT_PLAIN)
+ .build();
+ }
+
+ Map<String, DatanodeStorageReport> reportByUuid =
+ collectDatanodeReports().stream()
+ .collect(Collectors.toMap(
+ DatanodeStorageReport::getDatanodeUuid,
+ Function.identity()));
+
+ StreamingOutput stream = output -> {
+ CSVFormat format = CSVFormat.DEFAULT.builder()
+ .setHeader(
+ "HostName",
+ "Datanode UUID",
+ "Capacity",
+ "Used Space",
+ "Remaining Space",
+ "Committed Space",
+ "Reserved Space",
+ "Minimum Free Space",
+ "Pending Block Size")
Review Comment:
Are we not including recently added fsReserved, fsUsed ... fields as well ?
##########
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/StorageDistributionEndpoint.java:
##########
@@ -114,6 +128,85 @@ public Response getStorageDistribution() {
}
}
+ @GET
+ @Path("/download")
+ public Response downloadDataNodeDistribution() {
+ DataNodeMetricsServiceResponse metricsResponse =
+ dataNodeMetricsService.getCollectedMetrics(null);
+
+ if (metricsResponse.getStatus() != DataNodeMetricsService.MetricCollectionStatus.FINISHED) {
+ return Response.status(Response.Status.ACCEPTED)
+ .entity(metricsResponse)
+ .type(MediaType.APPLICATION_JSON)
+ .build();
+ }
+
+ List<DatanodePendingDeletionMetrics> pendingDeletionMetrics =
+ metricsResponse.getPendingDeletionPerDataNode();
+
+ if (pendingDeletionMetrics == null) {
+ return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
+ .entity("Metrics data is missing despite FINISHED status.")
+ .type(MediaType.TEXT_PLAIN)
+ .build();
+ }
+
+ Map<String, DatanodeStorageReport> reportByUuid =
+ collectDatanodeReports().stream()
+ .collect(Collectors.toMap(
+ DatanodeStorageReport::getDatanodeUuid,
+ Function.identity()));
+
+ StreamingOutput stream = output -> {
+ CSVFormat format = CSVFormat.DEFAULT.builder()
+ .setHeader(
+ "HostName",
Review Comment:
Define these as a `CSV_HEADERS` constant array and use it from a util class.
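A minimal sketch of the suggested constant; the class name `ReconCsvUtils` and the field name are hypothetical, and Commons CSV's `setHeader(String...)` accepts an array directly:

```java
// Hypothetical util class for Recon CSV exports; names are illustrative,
// not from the PR.
public final class ReconCsvUtils {

  /** Header row for the datanode storage/pending-deletion CSV download. */
  public static final String[] DATANODE_STORAGE_CSV_HEADERS = {
      "HostName", "Datanode UUID", "Capacity", "Used Space",
      "Remaining Space", "Committed Space", "Reserved Space",
      "Minimum Free Space", "Pending Block Size"
  };

  private ReconCsvUtils() {
    // no instances
  }
}
```

The endpoint could then call `CSVFormat.DEFAULT.builder().setHeader(ReconCsvUtils.DATANODE_STORAGE_CSV_HEADERS).build()`, and tests could join the same array instead of hardcoding the header string.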
##########
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/StorageDistributionEndpoint.java:
##########
@@ -114,6 +128,85 @@ public Response getStorageDistribution() {
}
}
+ @GET
+ @Path("/download")
Review Comment:
Since we are adding new APIs, the README should also be updated, along with the Swagger API doc.
cc: @devabhishekpal
##########
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/StorageDistributionEndpoint.java:
##########
@@ -114,6 +128,85 @@ public Response getStorageDistribution() {
}
}
+ @GET
+ @Path("/download")
+ public Response downloadDataNodeDistribution() {
+ DataNodeMetricsServiceResponse metricsResponse =
+ dataNodeMetricsService.getCollectedMetrics(null);
+
+ if (metricsResponse.getStatus() != DataNodeMetricsService.MetricCollectionStatus.FINISHED) {
+ return Response.status(Response.Status.ACCEPTED)
+ .entity(metricsResponse)
+ .type(MediaType.APPLICATION_JSON)
+ .build();
+ }
+
+ List<DatanodePendingDeletionMetrics> pendingDeletionMetrics =
+ metricsResponse.getPendingDeletionPerDataNode();
+
+ if (pendingDeletionMetrics == null) {
+ return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
+ .entity("Metrics data is missing despite FINISHED status.")
+ .type(MediaType.TEXT_PLAIN)
+ .build();
+ }
+
+ Map<String, DatanodeStorageReport> reportByUuid =
+ collectDatanodeReports().stream()
Review Comment:
1. collectDatanodeReports() calls nodeManager.getAllNodes(), which iterates over all datanodes.
2. For each datanode, it calls nodeManager.getNodeStat() and nodeManager.getTotalFilesystemUsage().
3. This happens synchronously during the HTTP request.

For 1000+ datanodes this could be a bottleneck. Can we reuse the already-collected metrics from DataNodeMetricsService? Users will likely click to download the CSV as soon as they see the data in the table. If that is not feasible, can we add some kind of timeout protection?
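One way to add the timeout protection mentioned above, as a rough sketch: run the collection on a worker thread and bound the wait. `TimeoutGuard`, the method name, and the timeout budget are all hypothetical, and a real endpoint would likely return 503 on timeout rather than an empty list:

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

// Illustrative sketch only: a bounded wrapper around a slow synchronous
// collection call, along the lines of collectDatanodeReports().
public final class TimeoutGuard {

  private TimeoutGuard() { }

  /**
   * Runs the supplier on a worker thread and gives up after the timeout,
   * so the HTTP request thread is never blocked indefinitely.
   */
  public static <T> List<T> collectWithTimeout(
      Supplier<List<T>> supplier, long timeoutSeconds) {
    CompletableFuture<List<T>> future = CompletableFuture.supplyAsync(supplier);
    try {
      return future.get(timeoutSeconds, TimeUnit.SECONDS);
    } catch (TimeoutException e) {
      future.cancel(true); // abandon the slow collection
      return Collections.emptyList();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      return Collections.emptyList();
    } catch (ExecutionException e) {
      return Collections.emptyList();
    }
  }
}
```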
##########
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestStorageDistributionEndpoint.java:
##########
@@ -0,0 +1,186 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.api;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertInstanceOf;
+import static org.junit.jupiter.api.Assertions.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.io.ByteArrayOutputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.Arrays;
+import java.util.List;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.StreamingOutput;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.node.DatanodeInfo;
+import org.apache.hadoop.hdds.scm.server.OzoneStorageContainerManager;
+import org.apache.hadoop.ozone.recon.api.types.DataNodeMetricsServiceResponse;
+import org.apache.hadoop.ozone.recon.api.types.DatanodePendingDeletionMetrics;
+import org.apache.hadoop.ozone.recon.api.types.DatanodeStorageReport;
+import org.apache.hadoop.ozone.recon.scm.ReconNodeManager;
+import org.apache.hadoop.ozone.recon.spi.ReconGlobalStatsManager;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+
+/**
+ * The TestStorageDistributionEndpoint class contains unit tests for verifying
+ * the functionality of the {@link StorageDistributionEndpoint} class.
+ *
+ */
+public class TestStorageDistributionEndpoint {
+ private DataNodeMetricsService dataNodeMetricsService;
+ private StorageDistributionEndpoint storageDistributionEndpoint;
+ private ReconNodeManager nodeManager = mock(ReconNodeManager.class);
+
+ @BeforeEach
+ public void setup() {
+ ReconGlobalMetricsService reconGlobalMetricsService = mock(ReconGlobalMetricsService.class);
+ dataNodeMetricsService = mock(DataNodeMetricsService.class);
+ NSSummaryEndpoint nssummaryEndpoint = mock(NSSummaryEndpoint.class);
+ OzoneStorageContainerManager reconSCM = mock(OzoneStorageContainerManager.class);
+ when(reconSCM.getScmNodeManager()).thenReturn(nodeManager);
+ ReconGlobalStatsManager reconGlobalStatsManager = mock(ReconGlobalStatsManager.class);
+ storageDistributionEndpoint = new StorageDistributionEndpoint(reconSCM,
+ nssummaryEndpoint,
+ reconGlobalStatsManager,
+ reconGlobalMetricsService,
+ dataNodeMetricsService);
+ }
+
+ @Test
+ public void testDownloadReturnsAcceptedWhenCollectionInProgress() {
+ DataNodeMetricsServiceResponse metricsResponse = DataNodeMetricsServiceResponse.newBuilder()
+ .setStatus(DataNodeMetricsService.MetricCollectionStatus.IN_PROGRESS)
+ .build();
+
+ when(dataNodeMetricsService.getCollectedMetrics(null)).thenReturn(metricsResponse);
+ Response response = storageDistributionEndpoint.downloadDataNodeDistribution();
+
+ assertEquals(Response.Status.ACCEPTED.getStatusCode(), response.getStatus());
+ assertEquals("application/json", response.getMediaType().toString());
+ assertEquals(metricsResponse, response.getEntity());
+ }
+
+ @Test
+ public void testDownloadReturnsServerErrorWhenMetricsMissing() {
+ DataNodeMetricsServiceResponse metricsResponse = DataNodeMetricsServiceResponse.newBuilder()
+ .setStatus(DataNodeMetricsService.MetricCollectionStatus.FINISHED)
+ .build();
+
+ when(dataNodeMetricsService.getCollectedMetrics(null)).thenReturn(metricsResponse);
+ Response response = storageDistributionEndpoint.downloadDataNodeDistribution();
+
+ assertEquals(Response.Status.INTERNAL_SERVER_ERROR.getStatusCode(), response.getStatus());
+ assertEquals("Metrics data is missing despite FINISHED status.", response.getEntity());
+ assertEquals("text/plain", response.getMediaType().toString());
+ }
+
+ @Test
+ public void testDownloadReturnsCsvWithMetrics() throws Exception {
+ // given
+ UUID uuid1 = UUID.randomUUID();
+ UUID uuid2 = UUID.randomUUID();
+
+ String dataNode1 = "dn1";
+ String dataNode2 = "dn2";
+
+ List<DatanodePendingDeletionMetrics> pendingDeletionMetrics = Arrays.asList(
+ new DatanodePendingDeletionMetrics(dataNode1, uuid1.toString(), 10L),
+ new DatanodePendingDeletionMetrics(dataNode2, uuid2.toString(), 20L)
+ );
+
+ DataNodeMetricsServiceResponse metricsResponse =
+ DataNodeMetricsServiceResponse.newBuilder()
+ .setStatus(DataNodeMetricsService.MetricCollectionStatus.FINISHED)
+ .setPendingDeletion(pendingDeletionMetrics)
+ .build();
+
+ when(dataNodeMetricsService.getCollectedMetrics(null))
+ .thenReturn(metricsResponse);
+
+ mockDatanodeStorageReports(pendingDeletionMetrics);
+ mockNodeManagerStats(uuid1, uuid2);
+
+ Response response = storageDistributionEndpoint.downloadDataNodeDistribution();
+
+ // then
+ assertEquals(Response.Status.ACCEPTED.getStatusCode(), response.getStatus());
+ assertEquals("text/csv", response.getMediaType().toString());
+ assertEquals(
+ "attachment; filename=\"datanode_storage_and_pending_deletion_stats.csv\"",
+ response.getHeaderString("Content-Disposition")
+ );
+
+ String csv = readCsv(response);
+
+ assertTrue(csv.contains(
+ "HostName,Datanode UUID,Capacity,Used Space,Remaining Space," +
+ "Committed Space,Reserved Space,Minimum Free Space,Pending Block Size"
+ ));
+ assertTrue(csv.contains(dataNode1 + "," + uuid1 + ",100,10,10,10,5,5,10"));
+ assertTrue(csv.contains(dataNode2 + "," + uuid2 + ",100,10,10,10,5,5,20"));
+ }
+
+ private void mockDatanodeStorageReports(
+ List<DatanodePendingDeletionMetrics> metrics) {
+
+ List<DatanodeStorageReport> reports = metrics.stream()
+ .map(m -> DatanodeStorageReport.newBuilder()
+ .setDatanodeUuid(m.getDatanodeUuid())
+ .setHostName(m.getHostName())
+ .build())
Review Comment:
The mock creates reports with only the UUID and hostname, but the CSV expects all 8 storage fields.
##########
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/StorageDistributionEndpoint.java:
##########
@@ -114,6 +128,85 @@ public Response getStorageDistribution() {
}
}
+ @GET
+ @Path("/download")
+ public Response downloadDataNodeDistribution() {
+ DataNodeMetricsServiceResponse metricsResponse =
+ dataNodeMetricsService.getCollectedMetrics(null);
+
+ if (metricsResponse.getStatus() != DataNodeMetricsService.MetricCollectionStatus.FINISHED) {
+ return Response.status(Response.Status.ACCEPTED)
+ .entity(metricsResponse)
+ .type(MediaType.APPLICATION_JSON)
+ .build();
+ }
+
+ List<DatanodePendingDeletionMetrics> pendingDeletionMetrics =
+ metricsResponse.getPendingDeletionPerDataNode();
+
+ if (pendingDeletionMetrics == null) {
+ return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
+ .entity("Metrics data is missing despite FINISHED status.")
+ .type(MediaType.TEXT_PLAIN)
+ .build();
+ }
+
+ Map<String, DatanodeStorageReport> reportByUuid =
+ collectDatanodeReports().stream()
+ .collect(Collectors.toMap(
+ DatanodeStorageReport::getDatanodeUuid,
+ Function.identity()));
+
+ StreamingOutput stream = output -> {
+ CSVFormat format = CSVFormat.DEFAULT.builder()
+ .setHeader(
+ "HostName",
+ "Datanode UUID",
+ "Capacity",
+ "Used Space",
+ "Remaining Space",
+ "Committed Space",
+ "Reserved Space",
+ "Minimum Free Space",
+ "Pending Block Size")
+ .build();
+
+ try (CSVPrinter printer = new CSVPrinter(
+ new BufferedWriter(new OutputStreamWriter(output, StandardCharsets.UTF_8)),
+ format)) {
+
+ for (DatanodePendingDeletionMetrics metric : pendingDeletionMetrics) {
+ DatanodeStorageReport report = reportByUuid.get(metric.getDatanodeUuid());
+ if (report == null) {
+ continue; // skip if report is missing
Review Comment:
When a datanode has pending deletion metrics but no storage report, the row is silently dropped from the CSV. This could mask operational issues.
**Scenarios Where This Fails:**
1. Datanode just registered but hasn't sent a full report yet
2. Datanode is in a STALE state
3. Race condition between metrics collection and report generation
**Impact:** Operators lose visibility into datanodes with pending deletions, which is exactly what they're trying to monitor.
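One alternative to skipping: emit the row anyway with placeholder storage fields, so the pending deletion value stays visible. The sketch below uses plain `String.join` to stay dependency-free (the PR itself uses Commons CSV), and both the class name and the "N/A" placeholder are assumptions:

```java
// Hypothetical helper: builds a CSV row for a datanode whose storage
// report is missing, keeping the host, UUID, and pending deletion size
// visible while marking the six storage columns as unavailable.
public final class FallbackRow {

  private FallbackRow() { }

  public static String row(String host, String uuid, long pendingBlockSize) {
    String na = "N/A";
    // Columns mirror the PR's CSV header: HostName, Datanode UUID,
    // Capacity, Used Space, Remaining Space, Committed Space,
    // Reserved Space, Minimum Free Space, Pending Block Size.
    return String.join(",",
        host, uuid, na, na, na, na, na, na,
        String.valueOf(pendingBlockSize));
  }
}
```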
##########
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestStorageDistributionEndpoint.java:
##########
@@ -0,0 +1,186 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.api;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertInstanceOf;
+import static org.junit.jupiter.api.Assertions.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.io.ByteArrayOutputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.Arrays;
+import java.util.List;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.StreamingOutput;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.node.DatanodeInfo;
+import org.apache.hadoop.hdds.scm.server.OzoneStorageContainerManager;
+import org.apache.hadoop.ozone.recon.api.types.DataNodeMetricsServiceResponse;
+import org.apache.hadoop.ozone.recon.api.types.DatanodePendingDeletionMetrics;
+import org.apache.hadoop.ozone.recon.api.types.DatanodeStorageReport;
+import org.apache.hadoop.ozone.recon.scm.ReconNodeManager;
+import org.apache.hadoop.ozone.recon.spi.ReconGlobalStatsManager;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+
+/**
+ * The TestStorageDistributionEndpoint class contains unit tests for verifying
+ * the functionality of the {@link StorageDistributionEndpoint} class.
+ *
+ */
+public class TestStorageDistributionEndpoint {
+ private DataNodeMetricsService dataNodeMetricsService;
+ private StorageDistributionEndpoint storageDistributionEndpoint;
+ private ReconNodeManager nodeManager = mock(ReconNodeManager.class);
+
+ @BeforeEach
+ public void setup() {
+ ReconGlobalMetricsService reconGlobalMetricsService = mock(ReconGlobalMetricsService.class);
+ dataNodeMetricsService = mock(DataNodeMetricsService.class);
+ NSSummaryEndpoint nssummaryEndpoint = mock(NSSummaryEndpoint.class);
+ OzoneStorageContainerManager reconSCM = mock(OzoneStorageContainerManager.class);
+ when(reconSCM.getScmNodeManager()).thenReturn(nodeManager);
+ ReconGlobalStatsManager reconGlobalStatsManager = mock(ReconGlobalStatsManager.class);
+ storageDistributionEndpoint = new StorageDistributionEndpoint(reconSCM,
+ nssummaryEndpoint,
+ reconGlobalStatsManager,
+ reconGlobalMetricsService,
+ dataNodeMetricsService);
+ }
+
+ @Test
+ public void testDownloadReturnsAcceptedWhenCollectionInProgress() {
+ DataNodeMetricsServiceResponse metricsResponse = DataNodeMetricsServiceResponse.newBuilder()
+ .setStatus(DataNodeMetricsService.MetricCollectionStatus.IN_PROGRESS)
+ .build();
+
+ when(dataNodeMetricsService.getCollectedMetrics(null)).thenReturn(metricsResponse);
+ Response response = storageDistributionEndpoint.downloadDataNodeDistribution();
+
+ assertEquals(Response.Status.ACCEPTED.getStatusCode(), response.getStatus());
+ assertEquals("application/json", response.getMediaType().toString());
+ assertEquals(metricsResponse, response.getEntity());
+ }
+
+ @Test
+ public void testDownloadReturnsServerErrorWhenMetricsMissing() {
+ DataNodeMetricsServiceResponse metricsResponse = DataNodeMetricsServiceResponse.newBuilder()
+ .setStatus(DataNodeMetricsService.MetricCollectionStatus.FINISHED)
+ .build();
+
+ when(dataNodeMetricsService.getCollectedMetrics(null)).thenReturn(metricsResponse);
+ Response response = storageDistributionEndpoint.downloadDataNodeDistribution();
+
+ assertEquals(Response.Status.INTERNAL_SERVER_ERROR.getStatusCode(), response.getStatus());
+ assertEquals("Metrics data is missing despite FINISHED status.", response.getEntity());
+ assertEquals("text/plain", response.getMediaType().toString());
+ }
+
+ @Test
+ public void testDownloadReturnsCsvWithMetrics() throws Exception {
+ // given
+ UUID uuid1 = UUID.randomUUID();
+ UUID uuid2 = UUID.randomUUID();
+
+ String dataNode1 = "dn1";
+ String dataNode2 = "dn2";
+
+ List<DatanodePendingDeletionMetrics> pendingDeletionMetrics = Arrays.asList(
+ new DatanodePendingDeletionMetrics(dataNode1, uuid1.toString(), 10L),
+ new DatanodePendingDeletionMetrics(dataNode2, uuid2.toString(), 20L)
+ );
+
+ DataNodeMetricsServiceResponse metricsResponse =
+ DataNodeMetricsServiceResponse.newBuilder()
+ .setStatus(DataNodeMetricsService.MetricCollectionStatus.FINISHED)
+ .setPendingDeletion(pendingDeletionMetrics)
+ .build();
+
+ when(dataNodeMetricsService.getCollectedMetrics(null))
+ .thenReturn(metricsResponse);
+
+ mockDatanodeStorageReports(pendingDeletionMetrics);
+ mockNodeManagerStats(uuid1, uuid2);
+
+ Response response = storageDistributionEndpoint.downloadDataNodeDistribution();
+
+ // then
+ assertEquals(Response.Status.ACCEPTED.getStatusCode(), response.getStatus());
+ assertEquals("text/csv", response.getMediaType().toString());
+ assertEquals(
+ "attachment; filename=\"datanode_storage_and_pending_deletion_stats.csv\"",
+ response.getHeaderString("Content-Disposition")
+ );
+
+ String csv = readCsv(response);
+
+ assertTrue(csv.contains(
+ "HostName,Datanode UUID,Capacity,Used Space,Remaining Space," +
+ "Committed Space,Reserved Space,Minimum Free Space,Pending Block Size"
+ ));
+ assertTrue(csv.contains(dataNode1 + "," + uuid1 + ",100,10,10,10,5,5,10"));
+ assertTrue(csv.contains(dataNode2 + "," + uuid2 + ",100,10,10,10,5,5,20"));
+ }
+
+ private void mockDatanodeStorageReports(
+ List<DatanodePendingDeletionMetrics> metrics) {
+
+ List<DatanodeStorageReport> reports = metrics.stream()
+ .map(m -> DatanodeStorageReport.newBuilder()
+ .setDatanodeUuid(m.getDatanodeUuid())
+ .setHostName(m.getHostName())
+ .build())
+ .collect(Collectors.toList());
+
+ when(storageDistributionEndpoint.collectDatanodeReports())
+ .thenReturn(reports);
Review Comment:
The test assertions check for hardcoded values like "100,10,10,10,5,5", but the mocked reports have default values (0).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]