[ 
https://issues.apache.org/jira/browse/HADOOP-14068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15870948#comment-15870948
 ] 

Aaron Fabbri commented on HADOOP-14068:
---------------------------------------

Thanks for doing this. It is clear there is value in also running the 
MetadataStore tests against the live service, since it behaves differently.

{noformat}
@@ -733,12 +733,17 @@ void provisionTable(Long readCapacity, Long writeCapacity)
         .withReadCapacityUnits(readCapacity)
         .withWriteCapacityUnits(writeCapacity);
     try {
-      final ProvisionedThroughputDescription p =
-          table.updateTable(toProvision).getProvisionedThroughput();
+      table.updateTable(toProvision).getProvisionedThroughput();
+      table.waitForActive();
+      final ProvisionedThroughputDescription p = table.getDescription()
+          .getProvisionedThroughput();
       LOG.info("Provision table {} in region {}: readCapacityUnits={}, "
               + "writeCapacityUnits={}",
           tableName, region, p.getReadCapacityUnits(),
           p.getWriteCapacityUnits());
{noformat}

I assume this change is needed because the table moves out of the Active state 
after its throughput provisioning changes, so we wait for it to become Active 
again before proceeding? And I assume we never hit this issue with the local 
DynamoDB?

{noformat}
+    } catch (InterruptedException e) {
+      LOG.error("Interrupted while reprovisioning I/O for table; " +
+          "may not have taken effect yet...");
{noformat}

I don't think we want to swallow this exception. How about resetting the 
thread's interrupt flag and then throwing an InterruptedIOException that wraps 
the original?
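Something along these lines (just a sketch of the suggested handling, not the 
actual patch; the AWS SDK calls are elided and the method/table names are 
placeholders):

{noformat}
import java.io.InterruptedIOException;

public class ProvisionSketch {

  static void provisionTable(long read, long write) throws InterruptedIOException {
    try {
      // table.updateTable(toProvision); table.waitForActive();  // SDK calls elided
      Thread.sleep(10);  // stand-in for the blocking waitForActive()
    } catch (InterruptedException e) {
      // Restore the thread's interrupt status so callers can still observe it...
      Thread.currentThread().interrupt();
      // ...and surface the failure as an IOException subclass, wrapping the original.
      InterruptedIOException iioe = new InterruptedIOException(
          "Interrupted while provisioning table capacity");
      iioe.initCause(e);
      throw iioe;
    }
  }

  public static void main(String[] args) throws Exception {
    Thread.currentThread().interrupt();  // simulate an interrupt arriving
    try {
      provisionTable(5, 5);
    } catch (InterruptedIOException e) {
      System.out.println("caught: " + e.getMessage());
      System.out.println("interrupted=" + Thread.interrupted());
    }
  }
}
{noformat}

That way callers on the FileSystem API path, which generally only declare 
IOException, still see the interruption rather than a silent log line.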

{noformat}
     conf.unset(Constants.S3GUARD_DDB_TABLE_NAME_KEY);
-    conf.unset(Constants.S3GUARD_DDB_ENDPOINT_KEY);
+    //conf.unset(Constants.S3GUARD_DDB_ENDPOINT_KEY);
     try {
{noformat}

Just remove the line instead of commenting it out.

{noformat}
-    try {
+    /*try {
       DynamoDBMetadataStore ddbms = new DynamoDBMetadataStore();
       ddbms.initialize(conf);
       fail("Should have failed because as the endpoint is not set!");
     } catch (IllegalArgumentException ignored) {
-    }
-    // config endpoint
-    conf.set(Constants.S3GUARD_DDB_ENDPOINT_KEY, ddbEndpoint);
-    // config credentials
-    conf.set(Constants.ACCESS_KEY, "dummy-access-key");
-    conf.set(Constants.SECRET_KEY, "dummy-secret-key");
-    conf.setBoolean(Constants.S3GUARD_DDB_TABLE_CREATE_KEY, true);
+    }*/
+    customizeConfigurationForDynamoDB(conf);
{noformat}

Ditto here: it looks like that code moved to the helper function, so the 
commented-out block can be deleted rather than left in place.

> Add integration test version of TestMetadataStore for DynamoDB
> --------------------------------------------------------------
>
>                 Key: HADOOP-14068
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14068
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Sean Mackrory
>            Assignee: Sean Mackrory
>         Attachments: HADOOP-14068-HADOOP-13345.001.patch, 
> HADOOP-14068-HADOOP-13345.002.patch
>
>
> I tweaked TestDynamoDBMetadataStore to run against the actual Amazon DynamoDB 
> service (as opposed to the "local" edition). Several tests failed because of 
> minor variations in behavior. I think the differences that are clearly 
> possible are enough to warrant extending that class as an ITest (but 
> obviously keeping the existing test so 99% of the the coverage remains even 
> when not configured for actual DynamoDB usage).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
