[ 
https://issues.apache.org/jira/browse/HADOOP-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14717812#comment-14717812
 ] 

Duo Xu commented on HADOOP-12334:
---------------------------------

[~gouravk]

1. There is a tab character in this line of code: {code}LOG.warn("Rename: CopyBlob: StorageException: ServerBusy: Retry complete, will attempt client side copy for page blob");{code}
2. There is trailing white space at the end of the last line of this code block:
{code}                
                opStream.flush();
                opStream.close();
                ipStream.close();
              } else {
                  throw new AzureException(e);
              }
          }         
{code}
3. Make sure the indentation stays consistent. I see some places where you use 4 spaces; it should be 2.
4. Add your code here: if CopyBlob with retry fails, the exception will be caught here. I believe this is clearer and avoids nested try-catch blocks, which cause the checkstyle error above.
{code}
      waitForCopyToComplete(dstBlob, getInstrumentedContext());
      safeDelete(srcBlob, lease);
    } catch (StorageException e) {
      LOG.warn("Rename: CopyBlob: StorageException: Failed");
      throw new AzureException(e);
    } catch (URISyntaxException e) {
      // Re-throw exception as an Azure storage exception.
      throw new AzureException(e);
    }
{code}
5. You still need to wrap your code in {code}if (e.getErrorCode().equals(StorageErrorCode.SERVER_BUSY.toString())){code}, because a StorageException may not always be caused by throttling.
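The guard in point 5 can be sketched as a small helper. This is a minimal sketch, not the patch's code: the class and method names are hypothetical, and it assumes the wire value behind {{StorageErrorCode.SERVER_BUSY.toString()}} is the string "ServerBusy" as returned by the Azure storage service.

```java
// Minimal sketch: decide whether a failed CopyBlob should fall back to a
// client-side copy. Only a ServerBusy (throttling) error should trigger
// the fallback; any other StorageException must be rethrown.
public class ThrottlingCheck {
  // Assumed wire value behind StorageErrorCode.SERVER_BUSY.toString().
  private static final String SERVER_BUSY = "ServerBusy";

  // Hypothetical helper: true only when the failure is a throttling event.
  public static boolean isThrottled(String errorCode) {
    return SERVER_BUSY.equals(errorCode);
  }
}
```

In the rename path this means the client-side copy runs only inside the {{if}} branch, and the {{else}} branch rethrows {{new AzureException(e)}}, matching the catch structure shown in point 4.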
6. {code}opStream.write(buffer, 0, len);{code} This line of code seems to keep writing at the same offset, i.e. the first 512 bytes of the blob. You can write a small application to test and make sure the blobs you rewrite are exactly the same as the originals.
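For reference, a conventional sequential stream copy looks like the minimal sketch below (class and parameter names hypothetical, not taken from the patch). Note that the {{0}} in {{write(buffer, 0, len)}} is the offset into the buffer, not into the blob; a plain {{OutputStream}} advances its own position on each write, but if the underlying page-blob stream is positional, the blob offset must be advanced explicitly, which is exactly what the small round-trip test suggested above would catch.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Minimal sketch of a client-side blob copy loop (hypothetical names).
public class ClientSideCopy {
  public static void copy(InputStream ipStream, OutputStream opStream)
      throws IOException {
    byte[] buffer = new byte[512];
    int len;
    // read() reports how many bytes landed in the buffer; write exactly
    // that many, starting at buffer offset 0 on every iteration.
    while ((len = ipStream.read(buffer)) != -1) {
      opStream.write(buffer, 0, len);
    }
    opStream.flush();
  }
}
```

A round-trip check (copy a known byte array through the loop and compare the result byte-for-byte with the source) is enough to verify the rewritten blob is identical to the original.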

> Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage 
> Throttling after retries
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-12334
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12334
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools
>            Reporter: Gaurav Kanade
>            Assignee: Gaurav Kanade
>         Attachments: HADOOP-12334.01.patch, HADOOP-12334.02.patch
>
>
> HADOOP-11693 mitigated the problem of HMaster aborting regionserver due to 
> Azure Storage Throttling event during HBase WAL archival. The way this was 
> achieved was by applying an intensive exponential retry when throttling 
> occurred.
> As a second level of mitigation we will change the mode of the copy operation 
> if it fails even after all retries, i.e. we will do a client-side copy of the 
> blob and then copy it back to the destination. This operation will not be 
> subject to throttling and hence should provide a stronger mitigation. However, 
> it is more expensive, so we do it only in the case where we fail after all 
> retries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
