[ https://issues.apache.org/jira/browse/HBASE-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136235#comment-16136235 ]

stack commented on HBASE-17614:
-------------------------------

I downloaded the patch and ran the tests locally to see whether the failures
were just our infra. This is what I got:

{code}
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
Running org.apache.hadoop.hbase.backup.TestBackupSystemTable
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.485 sec - in org.apache.hadoop.hbase.backup.TestBackupSystemTable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.663 sec - in org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
Running org.apache.hadoop.hbase.backup.TestHFileArchiving
Tests run: 5, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 20.586 sec <<< FAILURE! - in org.apache.hadoop.hbase.backup.TestHFileArchiving
testDeleteRegionWithNoStoreFiles(org.apache.hadoop.hbase.backup.TestHFileArchiving)  Time elapsed: 0.06 sec  <<< ERROR!
org.apache.hadoop.hbase.DoNotRetryIOException:
org.apache.hadoop.hbase.DoNotRetryIOException: MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (25000) is too small, which might cause very frequent flushing. Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1970)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1816)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1725)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:446)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:278)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:258)

        at org.apache.hadoop.hbase.backup.TestHFileArchiving.testDeleteRegionWithNoStoreFiles(TestHFileArchiving.java:180)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:
org.apache.hadoop.hbase.DoNotRetryIOException: MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (25000) is too small, which might cause very frequent flushing. Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1970)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1816)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1725)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:446)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:278)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:258)


testRemovesRegionDirOnArchive(org.apache.hadoop.hbase.backup.TestHFileArchiving)  Time elapsed: 0.007 sec  <<< ERROR!
org.apache.hadoop.hbase.DoNotRetryIOException:
org.apache.hadoop.hbase.DoNotRetryIOException: MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (25000) is too small, which might cause very frequent flushing. Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1970)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1816)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1725)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:446)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:278)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:258)

        at org.apache.hadoop.hbase.backup.TestHFileArchiving.testRemovesRegionDirOnArchive(TestHFileArchiving.java:121)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:
org.apache.hadoop.hbase.DoNotRetryIOException: MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (25000) is too small, which might cause very frequent flushing. Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1970)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1816)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1725)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:446)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:278)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:258)


testArchiveOnTableDelete(org.apache.hadoop.hbase.backup.TestHFileArchiving)  Time elapsed: 0.007 sec  <<< ERROR!
org.apache.hadoop.hbase.DoNotRetryIOException:
org.apache.hadoop.hbase.DoNotRetryIOException: MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (25000) is too small, which might cause very frequent flushing. Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1970)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1816)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1725)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:446)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:278)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:258)

        at org.apache.hadoop.hbase.backup.TestHFileArchiving.testArchiveOnTableDelete(TestHFileArchiving.java:228)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:
org.apache.hadoop.hbase.DoNotRetryIOException: MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (25000) is too small, which might cause very frequent flushing. Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1970)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1816)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1725)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:446)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:278)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:258)


testArchiveOnTableFamilyDelete(org.apache.hadoop.hbase.backup.TestHFileArchiving)  Time elapsed: 0.007 sec  <<< ERROR!
org.apache.hadoop.hbase.DoNotRetryIOException:
org.apache.hadoop.hbase.DoNotRetryIOException: MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (25000) is too small, which might cause very frequent flushing. Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1970)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1816)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1725)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:446)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:278)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:258)

        at org.apache.hadoop.hbase.backup.TestHFileArchiving.testArchiveOnTableFamilyDelete(TestHFileArchiving.java:307)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:
org.apache.hadoop.hbase.DoNotRetryIOException: MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (25000) is too small, which might cause very frequent flushing. Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1970)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1816)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1725)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:446)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:278)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:258)



Results :

Tests in error:
  TestHFileArchiving.testArchiveOnTableDelete:228 » DoNotRetryIO org.apache.hado...
  TestHFileArchiving.testArchiveOnTableFamilyDelete:307 » DoNotRetryIO org.apach...
  TestHFileArchiving.testDeleteRegionWithNoStoreFiles:180 » DoNotRetryIO org.apa...
  TestHFileArchiving.testRemovesRegionDirOnArchive:121 » DoNotRetryIO org.apache...

Tests run: 21, Failures: 0, Errors: 4, Skipped: 0
{code}
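
For anyone who just wants these tests running locally in the meantime: the exception message itself names the escape hatch. Something like this in the test hbase-site.xml (or the equivalent conf.setBoolean("hbase.table.sanity.checks", false) in test setup) should bypass the sanity check -- a workaround only, not a fix for why MEMSTORE_FLUSHSIZE comes out at 25000 here:

{code}
<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>
{code}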

> Move Backup/Restore into separate module 
> -----------------------------------------
>
>                 Key: HBASE-17614
>                 URL: https://issues.apache.org/jira/browse/HBASE-17614
>             Project: HBase
>          Issue Type: Task
>            Reporter: Vladimir Rodionov
>            Assignee: Vladimir Rodionov
>            Priority: Blocker
>              Labels: backup
>             Fix For: 2.0.0
>
>         Attachments: HBASE-17614-v1.patch, HBASE-17614-v2.patch, 
> HBASE-17614-v3.patch
>
>
> Move all the backup code into separate hbase-backup module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)