[
https://issues.apache.org/jira/browse/HBASE-25501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Mallikarjun updated HBASE-25501:
--------------------------------
Description:
{code:java}
$ sudo /etc/init.d/yak master hbase backup create
Please make sure that backup is enabled on the cluster. To enable backup, in
hbase-site.xml, set:
hbase.backup.enable=true
hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner
hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
hbase.coprocessor.region.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.BackupObserver
and restart the cluster
Usage: hbase backup create <type> <backup_path> [options]
  type         "full" to create a full backup image
               "incremental" to create an incremental backup image
  backup_path  Full path to store the backup image
Options:
  -b <arg>   Bandwidth per task (MapReduce task) in MB/s
  -d         Enable debug loggings
  -q <arg>   Yarn queue name to run backup create command on
  -s <arg>   Backup set to backup, mutually exclusive with -t (table list)
  -t <arg>   Table name list, comma-separated.
  -w <arg>   Number of parallel MapReduce tasks to execute {code}
The parameters -b, -q, and -w are not used when building the export snapshot
request:
{code:java}
for (TableName table : backupInfo.getTables()) {
  // Currently we simply set the sub copy tasks by counting the table snapshot number, we can
  // calculate the real files' size for the percentage in the future.
  // backupCopier.setSubTaskPercntgInWholeTask(1f / numOfSnapshots);
  int res;
  String[] args = new String[4];
  args[0] = "-snapshot";
  args[1] = backupInfo.getSnapshotName(table);
  args[2] = "-copy-to";
  args[3] = backupInfo.getTableBackupDir(table);
  String jobname = "Full-Backup_" + backupInfo.getBackupId() + "_" + table.getNameAsString();
  if (LOG.isDebugEnabled()) {
    LOG.debug("Setting snapshot copy job name to : " + jobname);
  }
  conf.set(JOB_NAME_CONF_KEY, jobname);
  LOG.debug("Copy snapshot " + args[1] + " to " + args[3]);
  res = copyService.copy(backupInfo, backupManager, conf, BackupType.FULL, args);
  // if one snapshot export failed, do not continue for remained snapshots
  if (res != 0) {
    LOG.error("Exporting Snapshot " + args[1] + " failed with return code: " + res + ".");
    throw new IOException("Failed of exporting snapshot " + args[1] + " to " + args[3]
      + " with reason code " + res);
  }
  conf.unset(JOB_NAME_CONF_KEY);
  LOG.info("Snapshot copy " + args[1] + " finished.");
}{code}
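A possible shape for the fix (a sketch, not a committed patch): forward the user-supplied -b and -w values into the ExportSnapshot argument list, since ExportSnapshot already understands -bandwidth and -mappers. The helper below is illustrative only; the method name buildArgs and the way the limits reach this code are assumptions, not existing HBase APIs. The -q queue would instead be applied to the job configuration (mapreduce.job.queuename) before submitting the copy.
{code:java}
import java.util.ArrayList;
import java.util.List;

public class ExportSnapshotArgs {
  // Sketch: build the ExportSnapshot argument list, appending the optional
  // bandwidth/worker limits that `backup create` accepts but currently drops.
  // A value <= 0 means "not set", matching the CLI's optional -b/-w flags.
  static String[] buildArgs(String snapshotName, String copyTo,
                            int bandwidthMbPerSec, int workers) {
    List<String> args = new ArrayList<>();
    args.add("-snapshot");
    args.add(snapshotName);
    args.add("-copy-to");
    args.add(copyTo);
    if (bandwidthMbPerSec > 0) { // -b from the CLI
      args.add("-bandwidth");
      args.add(String.valueOf(bandwidthMbPerSec));
    }
    if (workers > 0) { // -w from the CLI
      args.add("-mappers");
      args.add(String.valueOf(workers));
    }
    return args.toArray(new String[0]);
  }

  public static void main(String[] unused) {
    System.out.println(String.join(" ",
      buildArgs("snap_t1", "hdfs://backup/t1", 50, 4)));
  }
}
{code}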
> Backup not using parameters such as bandwidth, workers, etc while exporting snapshot
> -------------------------------------------------------------------------------------
>
> Key: HBASE-25501
> URL: https://issues.apache.org/jira/browse/HBASE-25501
> Project: HBase
> Issue Type: Bug
> Components: backup&restore
> Affects Versions: 3.0.0-alpha-1
> Reporter: Mallikarjun
> Assignee: Mallikarjun
> Priority: Major
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)