[jira] [Updated] (HBASE-9953) PerformanceEvaluation: Decouple data size from client concurrency
[ https://issues.apache.org/jira/browse/HBASE-9953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9953: Resolution: Fixed Fix Version/s: 0.99.0 Status: Resolved (was: Patch Available) Committed to trunk. > PerformanceEvaluation: Decouple data size from client concurrency > - > > Key: HBASE-9953 > URL: https://issues.apache.org/jira/browse/HBASE-9953 > Project: HBase > Issue Type: Test > Components: Performance >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 0.99.0 > > Attachments: HBASE-9953.00.patch, HBASE-9953.01.patch > > > PerfEval tool provides a {{--rows=R}} for specifying the number of records to > work with and requires the user provide a value of N, used as the concurrency > level. From what I can tell, every concurrent process will interact with R > rows. In order to perform an apples-to-apples test, the user must > re-calculate the value R for every new value of N. > Instead, I propose accepting a {{--size=S}} for the amount of data to > interact with and let PerfEval divide that amongst the N clients on the > user's behalf. -- This message was sent by Atlassian JIRA (v6.2#6252)
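The proposal above boils down to dividing one global workload figure among the N clients instead of giving each client the full R rows. A minimal sketch of that division, with illustrative names (this is not the actual PerformanceEvaluation code):

```java
// Hypothetical sketch of HBASE-9953's idea: the user supplies one total
// (rows or bytes) and the tool splits it evenly across n clients, handing
// the remainder to the first few so nothing is lost. Names are illustrative.
class SplitWorkload {
    static long[] split(long total, int n) {
        long[] shares = new long[n];
        long base = total / n;
        long remainder = total % n;
        for (int i = 0; i < n; i++) {
            // The first `remainder` clients each take one extra unit.
            shares[i] = base + (i < remainder ? 1 : 0);
        }
        return shares;
    }

    public static void main(String[] args) {
        long[] s = split(10, 3);
        long sum = 0;
        for (long v : s) sum += v;
        if (sum != 10) throw new AssertionError("shares must cover the total");
        System.out.println(java.util.Arrays.toString(s)); // e.g. [4, 3, 3]
    }
}
```

With this, re-running at a different concurrency N no longer requires the user to recompute R by hand; the per-client share changes automatically while the total stays fixed.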
[jira] [Updated] (HBASE-11083) ExportSnapshot should provide capability to limit bandwidth consumption
[ https://issues.apache.org/jira/browse/HBASE-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-11083: --- Status: Patch Available (was: Open) > ExportSnapshot should provide capability to limit bandwidth consumption > --- > > Key: HBASE-11083 > URL: https://issues.apache.org/jira/browse/HBASE-11083 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 11083-v1.txt > > > This capability was first brought up in this thread: > http://search-hadoop.com/m/DHED4Td8Xb1 > The rewritten distcp already provides this capability. > See MAPREDUCE-2765 > distcp implementation utilizes ThrottledInputStream which provides bandwidth > throttling on a specified InputStream. > As a first step, we can > * add an option to ExportSnapshot which expresses bandwidth per map in MB > * utilize ThrottledInputStream in ExportSnapshot#ExportMapper#copyFile(). -- This message was sent by Atlassian JIRA (v6.2#6252)
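For intuition, the throttling idea behind distcp's ThrottledInputStream can be sketched as an InputStream wrapper that sleeps whenever the observed average rate exceeds a cap. This is a simplified illustration of the technique, not the actual Hadoop class:

```java
import java.io.IOException;
import java.io.InputStream;

// Simplified sketch of bandwidth throttling on an InputStream, in the spirit
// of MAPREDUCE-2765's ThrottledInputStream. Illustrative only.
class SimpleThrottledStream extends InputStream {
    private final InputStream in;
    private final long maxBytesPerSec;
    private final long startMillis = System.currentTimeMillis();
    private long totalBytes = 0;

    SimpleThrottledStream(InputStream in, long maxBytesPerSec) {
        this.in = in;
        this.maxBytesPerSec = maxBytesPerSec;
    }

    @Override
    public int read() throws IOException {
        throttle();
        int b = in.read();
        if (b >= 0) totalBytes++;
        return b;
    }

    // Sleep until the average rate since start drops back under the cap.
    private void throttle() throws IOException {
        while (currentRate() > maxBytesPerSec) {
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("interrupted while throttling", e);
            }
        }
    }

    long currentRate() {
        long elapsed = Math.max(1, System.currentTimeMillis() - startMillis);
        return totalBytes * 1000 / elapsed;
    }
}
```

In ExportSnapshot the per-map cap would come from a command-line option, so a snapshot export can be kept from saturating the cluster's network.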
[jira] [Commented] (HBASE-10918) [VisibilityController] System table backed ScanLabelGenerator
[ https://issues.apache.org/jira/browse/HBASE-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981898#comment-13981898 ] Hudson commented on HBASE-10918: SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #281 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/281/]) HBASE-10918 [VisibilityController] System table backed ScanLabelGenerator (apurtell: rev 1590181) * /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/Authorizations.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/DefaultScanLabelGenerator.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/EnforcingScanLabelGenerator.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestEnforcingScanLabelGenerator.java > [VisibilityController] System table backed ScanLabelGenerator > -- > > Key: HBASE-10918 > URL: https://issues.apache.org/jira/browse/HBASE-10918 > Project: HBase > Issue Type: Sub-task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 0.99.0, 0.98.2 > > Attachments: HBASE-10918.patch, HBASE-10918_1.patch > > > A ScanLabelGenerator that retrieves a static set of authorizations for a user > or group from a new HBase system table, and ensures these auths are part of > the effective set. > Useful for forcing a baseline set of auths for a user. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10892) [Shell] Add support for globs in user_permission
[ https://issues.apache.org/jira/browse/HBASE-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981899#comment-13981899 ] Hudson commented on HBASE-10892: SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #281 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/281/]) HBASE-10892 [Shell] Add support for globs in user_permission (Esteban Gutierrez) (apurtell: rev 1590173) * /hbase/branches/0.98/hbase-shell/src/main/ruby/hbase/security.rb * /hbase/branches/0.98/hbase-shell/src/main/ruby/shell/commands/user_permission.rb > [Shell] Add support for globs in user_permission > > > Key: HBASE-10892 > URL: https://issues.apache.org/jira/browse/HBASE-10892 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 0.99.0 >Reporter: Esteban Gutierrez >Assignee: Esteban Gutierrez > Fix For: 0.99.0, 0.98.2 > > Attachments: HBASE-10892.v0.diff, HBASE-10892.v1.patch > > > It would be nice for {{user_permission}} to show all the permissions for all > the tables or a subset of tables if a glob (regex) is provided. > {code} > hbase> user_permission '*' > User Table,Family,Qualifier:Permission > esteban x,,: [Permission: > actions=READ,WRITE,EXEC,CREATE,ADMIN] > hbase1y,,: [Permission: > actions=READ,WRITE] > hbase2z,,: [Permission: > actions=READ,WRITE] > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
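The glob support amounts to matching table names against a user-supplied pattern and showing permissions only for the matches. The actual change lives in the shell's Ruby code (security.rb); the following Java fragment only sketches the matching step, with illustrative names:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Illustrative sketch of filtering table names by a user-supplied pattern,
// as user_permission gains here. Not the shell's actual Ruby implementation.
class TableGlob {
    static List<String> matching(List<String> tables, String regex) {
        Pattern p = Pattern.compile(regex);
        return tables.stream()
                     .filter(t -> p.matcher(t).matches())
                     .collect(Collectors.toList());
    }
}
```

Each matching table's permissions would then be listed as in the shell output above.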
[jira] [Commented] (HBASE-10958) [dataloss] Bulk loading with seqids can prevent some log entries from being replayed
[ https://issues.apache.org/jira/browse/HBASE-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981900#comment-13981900 ] Hudson commented on HBASE-10958: SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #281 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/281/]) HBASE-10958 [dataloss] Bulk loading with seqids can prevent some log entries from being replayed (jdcryans: rev 1590145) * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/util/HFileTestUtil.java > [dataloss] Bulk loading with seqids can prevent some log entries from being > replayed > > > Key: HBASE-10958 > URL: https://issues.apache.org/jira/browse/HBASE-10958 > Project: HBase > Issue Type: Bug >Affects Versions: 0.96.2, 0.98.1, 0.94.18 >Reporter: Jean-Daniel Cryans >Assignee: Jean-Daniel Cryans >Priority: Blocker > Fix For: 0.99.0, 0.98.2, 0.96.3, 0.94.20 > > Attachments: HBASE-10958-0.94.patch, > HBASE-10958-less-intrusive-hack-0.96.patch, > HBASE-10958-quick-hack-0.96.patch, HBASE-10958-v2.patch, > HBASE-10958-v3.patch, HBASE-10958.patch > > > We found an issue with bulk loads causing data loss 
when assigning sequence > ids (HBASE-6630) that is triggered when replaying recovered edits. We're > nicknaming this issue *Blindspot*. > The problem is that the sequence id given to a bulk loaded file is higher > than those of the edits in the region's memstore. When replaying recovered > edits, the rule to skip some of them is that they have to be _lower than the > highest sequence id_. In other words, the edits that have a sequence id lower > than the highest one in the store files *should* have also been flushed. This > is not the case with bulk loaded files since we now have an HFile with a > sequence id higher than unflushed edits. > The log recovery code takes this into account by simply skipping the bulk > loaded files, but this "bulk loaded status" is *lost* on compaction. The > edits in the logs that have a sequence id lower than the bulk loaded file > that got compacted are put in a blind spot and are skipped during replay. > Here's the easiest way to recreate this issue: > - Create an empty table > - Put one row in it (let's say it gets seqid 1) > - Bulk load one file (it gets seqid 2). I used ImportTsv and set > hbase.mapreduce.bulkload.assign.sequenceNumbers. > - Bulk load a second file the same way (it gets seqid 3). > - Major compact the table (the new file has seqid 3 and isn't considered > bulk loaded). > - Kill the region server that holds the table's region. > - Scan the table once the region is made available again. The first row, at > seqid 1, will be missing since the HFile with seqid 3 makes us believe that > everything that came before it was flushed. -- This message was sent by Atlassian JIRA (v6.2#6252)
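The replay skip rule described in the issue can be reduced to a single comparison, which makes the blind spot easy to see. A sketch of the rule with illustrative names (not HBase's actual recovery code):

```java
// Illustrative sketch of the recovered-edits skip rule from HBASE-10958:
// an edit is replayed only if its sequence id is greater than the highest
// sequence id among the region's store files. After a major compaction the
// bulk-loaded file's high seqid survives but its "bulk loaded" marker does
// not, so older unflushed edits are wrongly skipped.
class ReplayRule {
    static boolean shouldReplay(long editSeqId, long maxStoreFileSeqId) {
        return editSeqId > maxStoreFileSeqId;
    }

    public static void main(String[] args) {
        // Scenario from the issue: put (seqid 1), bulk loads (seqids 2 and 3),
        // then a major compaction keeps maxSeqId 3 but drops the marker.
        long maxSeqIdAfterCompaction = 3;
        // The unflushed put at seqid 1 is skipped on replay: the blind spot.
        System.out.println(shouldReplay(1, maxSeqIdAfterCompaction)); // false
    }
}
```

While the "bulk loaded" marker is present, recovery knows to ignore the file's seqid when applying this rule; once compaction erases the marker, the comparison alone silently hides the older edits.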
[jira] [Commented] (HBASE-11008) Align bulk load, flush, and compact to require Action.CREATE
[ https://issues.apache.org/jira/browse/HBASE-11008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981897#comment-13981897 ] Hudson commented on HBASE-11008: SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #281 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/281/]) HBASE-11008 Align bulk load, flush, and compact to require Action.CREATE (jdcryans: rev 1590125) * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java * /hbase/branches/0.98/src/main/docbkx/security.xml > Align bulk load, flush, and compact to require Action.CREATE > > > Key: HBASE-11008 > URL: https://issues.apache.org/jira/browse/HBASE-11008 > Project: HBase > Issue Type: Improvement > Components: security >Reporter: Jean-Daniel Cryans >Assignee: Jean-Daniel Cryans > Fix For: 0.99.0, 0.98.2, 0.96.3, 0.94.20 > > Attachments: HBASE-11008-0.94.patch, HBASE-11008-v2.patch, > HBASE-11008-v3.patch, HBASE-11008.patch > > > Over in HBASE-10958 we noticed that it might make sense to require > Action.CREATE for bulk load, flush, and compact since it is also required for > things like enable and disable. > This means the following changes: > - preBulkLoadHFile goes from WRITE to CREATE > - compact/flush go from ADMIN to ADMIN or CREATE -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10958) [dataloss] Bulk loading with seqids can prevent some log entries from being replayed
[ https://issues.apache.org/jira/browse/HBASE-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981896#comment-13981896 ] Hudson commented on HBASE-10958: FAILURE: Integrated in hbase-0.96 #395 (See [https://builds.apache.org/job/hbase-0.96/395/]) HBASE-10958 [dataloss] Bulk loading with seqids can prevent some log entries from being replayed (jdcryans: rev 1590146) * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/util/HFileTestUtil.java > [dataloss] Bulk loading with seqids can prevent some log entries from being > replayed > > > Key: HBASE-10958 > URL: https://issues.apache.org/jira/browse/HBASE-10958 > Project: HBase > Issue Type: Bug >Affects Versions: 0.96.2, 0.98.1, 0.94.18 >Reporter: Jean-Daniel Cryans >Assignee: Jean-Daniel Cryans >Priority: Blocker > Fix For: 0.99.0, 0.98.2, 0.96.3, 0.94.20 > > Attachments: HBASE-10958-0.94.patch, > HBASE-10958-less-intrusive-hack-0.96.patch, > HBASE-10958-quick-hack-0.96.patch, HBASE-10958-v2.patch, > HBASE-10958-v3.patch, HBASE-10958.patch > > > We found an issue with bulk loads causing data loss when assigning sequence > 
ids (HBASE-6630) that is triggered when replaying recovered edits. We're > nicknaming this issue *Blindspot*. > The problem is that the sequence id given to a bulk loaded file is higher > than those of the edits in the region's memstore. When replaying recovered > edits, the rule to skip some of them is that they have to be _lower than the > highest sequence id_. In other words, the edits that have a sequence id lower > than the highest one in the store files *should* have also been flushed. This > is not the case with bulk loaded files since we now have an HFile with a > sequence id higher than unflushed edits. > The log recovery code takes this into account by simply skipping the bulk > loaded files, but this "bulk loaded status" is *lost* on compaction. The > edits in the logs that have a sequence id lower than the bulk loaded file > that got compacted are put in a blind spot and are skipped during replay. > Here's the easiest way to recreate this issue: > - Create an empty table > - Put one row in it (let's say it gets seqid 1) > - Bulk load one file (it gets seqid 2). I used ImportTsv and set > hbase.mapreduce.bulkload.assign.sequenceNumbers. > - Bulk load a second file the same way (it gets seqid 3). > - Major compact the table (the new file has seqid 3 and isn't considered > bulk loaded). > - Kill the region server that holds the table's region. > - Scan the table once the region is made available again. The first row, at > seqid 1, will be missing since the HFile with seqid 3 makes us believe that > everything that came before it was flushed. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11008) Align bulk load, flush, and compact to require Action.CREATE
[ https://issues.apache.org/jira/browse/HBASE-11008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981895#comment-13981895 ] Hudson commented on HBASE-11008: FAILURE: Integrated in hbase-0.96 #395 (See [https://builds.apache.org/job/hbase-0.96/395/]) HBASE-11008 Align bulk load, flush, and compact to require Action.CREATE (jdcryans: rev 1590126) * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java * /hbase/branches/0.96/src/main/docbkx/security.xml > Align bulk load, flush, and compact to require Action.CREATE > > > Key: HBASE-11008 > URL: https://issues.apache.org/jira/browse/HBASE-11008 > Project: HBase > Issue Type: Improvement > Components: security >Reporter: Jean-Daniel Cryans >Assignee: Jean-Daniel Cryans > Fix For: 0.99.0, 0.98.2, 0.96.3, 0.94.20 > > Attachments: HBASE-11008-0.94.patch, HBASE-11008-v2.patch, > HBASE-11008-v3.patch, HBASE-11008.patch > > > Over in HBASE-10958 we noticed that it might make sense to require > Action.CREATE for bulk load, flush, and compact since it is also required for > things like enable and disable. > This means the following changes: > - preBulkLoadHFile goes from WRITE to CREATE > - compact/flush go from ADMIN to ADMIN or CREATE -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11008) Align bulk load, flush, and compact to require Action.CREATE
[ https://issues.apache.org/jira/browse/HBASE-11008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981889#comment-13981889 ] Hudson commented on HBASE-11008: SUCCESS: Integrated in hbase-0.96-hadoop2 #274 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/274/]) HBASE-11008 Align bulk load, flush, and compact to require Action.CREATE (jdcryans: rev 1590126) * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java * /hbase/branches/0.96/src/main/docbkx/security.xml > Align bulk load, flush, and compact to require Action.CREATE > > > Key: HBASE-11008 > URL: https://issues.apache.org/jira/browse/HBASE-11008 > Project: HBase > Issue Type: Improvement > Components: security >Reporter: Jean-Daniel Cryans >Assignee: Jean-Daniel Cryans > Fix For: 0.99.0, 0.98.2, 0.96.3, 0.94.20 > > Attachments: HBASE-11008-0.94.patch, HBASE-11008-v2.patch, > HBASE-11008-v3.patch, HBASE-11008.patch > > > Over in HBASE-10958 we noticed that it might make sense to require > Action.CREATE for bulk load, flush, and compact since it is also required for > things like enable and disable. > This means the following changes: > - preBulkLoadHFile goes from WRITE to CREATE > - compact/flush go from ADMIN to ADMIN or CREATE -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11038) Filtered scans can bypass metrics collection
[ https://issues.apache.org/jira/browse/HBASE-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981890#comment-13981890 ] Hudson commented on HBASE-11038: SUCCESS: Integrated in hbase-0.96-hadoop2 #274 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/274/]) HBASE-11038 Filtered scans can bypass metrics collection (ndimiduk: rev 1590069) * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java > Filtered scans can bypass metrics collection > > > Key: HBASE-11038 > URL: https://issues.apache.org/jira/browse/HBASE-11038 > Project: HBase > Issue Type: Bug > Components: Scanners >Affects Versions: 0.96.2, 0.98.1, 0.99.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 0.99.0, 0.98.2, 0.96.3 > > Attachments: HBASE-11038.00.patch, HBASE-11038.01.96.patch, > HBASE-11038.01.98.patch, HBASE-11038.01.patch > > > In RegionScannerImpl#nextRaw, after a batch of results are retrieved, > delegates to the filter regarding continuation of the scan. If > filterAllRemaining returns true, the method exits immediately, without > calling MetricsRegion#updateNextScan. -- This message was sent by Atlassian JIRA (v6.2#6252)
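The bug pattern in the description is a classic early-return that bypasses bookkeeping. A hypothetical sketch of both the buggy and the fixed control flow (method and field names here are illustrative, not the actual RegionScannerImpl code):

```java
// Hypothetical sketch of HBASE-11038's bug: when filterAllRemaining() ends
// the scan, an early return skips the MetricsRegion#updateNextScan analogue.
// The fix ensures the metrics update runs on every exit path.
class ScanMetricsSketch {
    long nextScanCount = 0;

    boolean nextRawBuggy(boolean filterAllRemaining) {
        if (filterAllRemaining) {
            return false;        // early exit: metrics never updated
        }
        nextScanCount++;         // metrics updated only on this path
        return true;
    }

    boolean nextRawFixed(boolean filterAllRemaining) {
        boolean moreRows = !filterAllRemaining;
        nextScanCount++;         // metrics updated on every path
        return moreRows;
    }
}
```

A scan whose filter exhausts all remaining rows thus leaves no trace in the buggy version, which is exactly the metric gap the patch closes.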
[jira] [Commented] (HBASE-10958) [dataloss] Bulk loading with seqids can prevent some log entries from being replayed
[ https://issues.apache.org/jira/browse/HBASE-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981891#comment-13981891 ] Hudson commented on HBASE-10958: SUCCESS: Integrated in hbase-0.96-hadoop2 #274 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/274/]) HBASE-10958 [dataloss] Bulk loading with seqids can prevent some log entries from being replayed (jdcryans: rev 1590146) * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/util/HFileTestUtil.java > [dataloss] Bulk loading with seqids can prevent some log entries from being > replayed > > > Key: HBASE-10958 > URL: https://issues.apache.org/jira/browse/HBASE-10958 > Project: HBase > Issue Type: Bug >Affects Versions: 0.96.2, 0.98.1, 0.94.18 >Reporter: Jean-Daniel Cryans >Assignee: Jean-Daniel Cryans >Priority: Blocker > Fix For: 0.99.0, 0.98.2, 0.96.3, 0.94.20 > > Attachments: HBASE-10958-0.94.patch, > HBASE-10958-less-intrusive-hack-0.96.patch, > HBASE-10958-quick-hack-0.96.patch, HBASE-10958-v2.patch, > HBASE-10958-v3.patch, HBASE-10958.patch > > > We found an issue with bulk loads causing data loss when 
assigning sequence > ids (HBASE-6630) that is triggered when replaying recovered edits. We're > nicknaming this issue *Blindspot*. > The problem is that the sequence id given to a bulk loaded file is higher > than those of the edits in the region's memstore. When replaying recovered > edits, the rule to skip some of them is that they have to be _lower than the > highest sequence id_. In other words, the edits that have a sequence id lower > than the highest one in the store files *should* have also been flushed. This > is not the case with bulk loaded files since we now have an HFile with a > sequence id higher than unflushed edits. > The log recovery code takes this into account by simply skipping the bulk > loaded files, but this "bulk loaded status" is *lost* on compaction. The > edits in the logs that have a sequence id lower than the bulk loaded file > that got compacted are put in a blind spot and are skipped during replay. > Here's the easiest way to recreate this issue: > - Create an empty table > - Put one row in it (let's say it gets seqid 1) > - Bulk load one file (it gets seqid 2). I used ImportTsv and set > hbase.mapreduce.bulkload.assign.sequenceNumbers. > - Bulk load a second file the same way (it gets seqid 3). > - Major compact the table (the new file has seqid 3 and isn't considered > bulk loaded). > - Kill the region server that holds the table's region. > - Scan the table once the region is made available again. The first row, at > seqid 1, will be missing since the HFile with seqid 3 makes us believe that > everything that came before it was flushed. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11083) ExportSnapshot should provide capability to limit bandwidth consumption
[ https://issues.apache.org/jira/browse/HBASE-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981883#comment-13981883 ] Matteo Bertozzi commented on HBASE-11083: - assuming that the distcp/hadoop2 dependency is ok for the project the patch looks good to me > ExportSnapshot should provide capability to limit bandwidth consumption > --- > > Key: HBASE-11083 > URL: https://issues.apache.org/jira/browse/HBASE-11083 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 11083-v1.txt > > > This capability was first brought up in this thread: > http://search-hadoop.com/m/DHED4Td8Xb1 > The rewritten distcp already provides this capability. > See MAPREDUCE-2765 > distcp implementation utilizes ThrottledInputStream which provides bandwidth > throttling on a specified InputStream. > As a first step, we can > * add an option to ExportSnapshot which expresses bandwidth per map in MB > * utilize ThrottledInputStream in ExportSnapshot#ExportMapper#copyFile(). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10935) support snapshot policy where flush memstore can be skipped to prevent production cluster freeze
[ https://issues.apache.org/jira/browse/HBASE-10935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tianying Chang updated HBASE-10935: --- Attachment: HBase-10935.patch New patch has two more things: 1. unit test added. I put the test in TestFlushSnapshotFromClient.java instead of creating a new dedicated file since it is really just one call with one parameter change. There seems to be no need for extra duplicated code. 2. Changed the interface to use {SKIP_FLUSH => 'true'} I will add the patch for trunk in the last step, after the patch for 0.94 is considered fine. > support snapshot policy where flush memstore can be skipped to prevent > production cluster freeze > > > Key: HBASE-10935 > URL: https://issues.apache.org/jira/browse/HBASE-10935 > Project: HBase > Issue Type: New Feature > Components: shell, snapshots >Affects Versions: 0.94.7, 0.94.18 >Reporter: Tianying Chang >Assignee: Tianying Chang >Priority: Minor > Fix For: 0.94.20 > > Attachments: HBase-10935.patch, HBase-10935.patch > > > We are using the snapshot feature to do HBase disaster recovery. We will do > snapshots in our production cluster periodically. The current flush snapshot > policy requires all regions of the table to coordinate to prevent writes and do > a flush at the same time. Since we use WALPlayer to complete the data that is > not in the snapshot HFiles, we don't need the snapshot to do a coordinated > flush. The snapshot just records all the HFiles that are already there. > I added the parameter in the HBase shell, so people can choose to use the > NoFlush snapshot when they need it, like below. Otherwise, the default flush > snapshot support is not impacted. > >snapshot 'TestTable', 'TestSnapshot', 'skipFlush' -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Assigned] (HBASE-11083) ExportSnapshot should provide capability to limit bandwidth consumption
[ https://issues.apache.org/jira/browse/HBASE-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reassigned HBASE-11083: -- Assignee: Ted Yu > ExportSnapshot should provide capability to limit bandwidth consumption > --- > > Key: HBASE-11083 > URL: https://issues.apache.org/jira/browse/HBASE-11083 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 11083-v1.txt > > > This capability was first brought up in this thread: > http://search-hadoop.com/m/DHED4Td8Xb1 > The rewritten distcp already provides this capability. > See MAPREDUCE-2765 > distcp implementation utilizes ThrottledInputStream which provides bandwidth > throttling on a specified InputStream. > As a first step, we can > * add an option to ExportSnapshot which expresses bandwidth per map in MB > * utilize ThrottledInputStream in ExportSnapshot#ExportMapper#copyFile(). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-11083) ExportSnapshot should provide capability to limit bandwidth consumption
[ https://issues.apache.org/jira/browse/HBASE-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-11083: --- Attachment: 11083-v1.txt > ExportSnapshot should provide capability to limit bandwidth consumption > --- > > Key: HBASE-11083 > URL: https://issues.apache.org/jira/browse/HBASE-11083 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 11083-v1.txt > > > This capability was first brought up in this thread: > http://search-hadoop.com/m/DHED4Td8Xb1 > The rewritten distcp already provides this capability. > See MAPREDUCE-2765 > distcp implementation utilizes ThrottledInputStream which provides bandwidth > throttling on a specified InputStream. > As a first step, we can > * add an option to ExportSnapshot which expresses bandwidth per map in MB > * utilize ThrottledInputStream in ExportSnapshot#ExportMapper#copyFile(). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10111) Verify that a snapshot is not corrupted before restoring it
[ https://issues.apache.org/jira/browse/HBASE-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981871#comment-13981871 ] Esteban Gutierrez commented on HBASE-10111: --- [~tychang] yes, it was committed for 0.94.15 (https://svn.apache.org/repos/asf/hbase/branches/0.94@1549820) or via github: https://github.com/apache/hbase/commit/8215c58511a964680e9842c34ce61356a4f24756 > Verify that a snapshot is not corrupted before restoring it > --- > > Key: HBASE-10111 > URL: https://issues.apache.org/jira/browse/HBASE-10111 > Project: HBase > Issue Type: Bug > Components: snapshots >Affects Versions: 0.98.0, 0.96.0, 0.94.14 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi >Priority: Minor > Fix For: 0.98.0, 0.96.1, 0.94.15 > > Attachments: HBASE-10111-v0.patch, HBASE-10111-v1.patch > > > To avoid assigning/opening regions with missing files, verify that the > snapshot is not corrupted before restoring/cloning it. > In 96 a corrupted file in a region is "not a problem" since the assignment > will give up after awhile. > In 94 having a corrupted file in a region means looping forever, on "enable", > until manual intervention. (Easy way to test this is create a table, disable > it, add a corrupted reference file and enable the table to start looping: you > can use echo "foo" > > .) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10111) Verify that a snapshot is not corrupted before restoring it
[ https://issues.apache.org/jira/browse/HBASE-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981865#comment-13981865 ] Tianying Chang commented on HBASE-10111: Do we have a patch for 0.94? > Verify that a snapshot is not corrupted before restoring it > --- > > Key: HBASE-10111 > URL: https://issues.apache.org/jira/browse/HBASE-10111 > Project: HBase > Issue Type: Bug > Components: snapshots >Affects Versions: 0.98.0, 0.96.0, 0.94.14 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi >Priority: Minor > Fix For: 0.98.0, 0.96.1, 0.94.15 > > Attachments: HBASE-10111-v0.patch, HBASE-10111-v1.patch > > > To avoid assigning/opening regions with missing files, verify that the > snapshot is not corrupted before restoring/cloning it. > In 96 a corrupted file in a region is "not a problem" since the assignment > will give up after awhile. > In 94 having a corrupted file in a region means looping forever, on "enable", > until manual intervention. (Easy way to test this is create a table, disable > it, add a corrupted reference file and enable the table to start looping: you > can use echo "foo" > > .) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11038) Filtered scans can bypass metrics collection
[ https://issues.apache.org/jira/browse/HBASE-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981859#comment-13981859 ] Hudson commented on HBASE-11038: SUCCESS: Integrated in HBase-0.98 #296 (See [https://builds.apache.org/job/HBase-0.98/296/]) HBASE-11038 Filtered scans can bypass metrics collection (ndimiduk: rev 1590068) * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java > Filtered scans can bypass metrics collection > > > Key: HBASE-11038 > URL: https://issues.apache.org/jira/browse/HBASE-11038 > Project: HBase > Issue Type: Bug > Components: Scanners >Affects Versions: 0.96.2, 0.98.1, 0.99.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 0.99.0, 0.98.2, 0.96.3 > > Attachments: HBASE-11038.00.patch, HBASE-11038.01.96.patch, > HBASE-11038.01.98.patch, HBASE-11038.01.patch > > > In RegionScannerImpl#nextRaw, after a batch of results are retrieved, > delegates to the filter regarding continuation of the scan. If > filterAllRemaining returns true, the method exits immediately, without > calling MetricsRegion#updateNextScan. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10892) [Shell] Add support for globs in user_permission
[ https://issues.apache.org/jira/browse/HBASE-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981861#comment-13981861 ] Hudson commented on HBASE-10892: SUCCESS: Integrated in HBase-0.98 #296 (See [https://builds.apache.org/job/HBase-0.98/296/]) HBASE-10892 [Shell] Add support for globs in user_permission (Esteban Gutierrez) (apurtell: rev 1590173) * /hbase/branches/0.98/hbase-shell/src/main/ruby/hbase/security.rb * /hbase/branches/0.98/hbase-shell/src/main/ruby/shell/commands/user_permission.rb > [Shell] Add support for globs in user_permission > > > Key: HBASE-10892 > URL: https://issues.apache.org/jira/browse/HBASE-10892 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 0.99.0 >Reporter: Esteban Gutierrez >Assignee: Esteban Gutierrez > Fix For: 0.99.0, 0.98.2 > > Attachments: HBASE-10892.v0.diff, HBASE-10892.v1.patch > > > It would be nice for {{user_permission}} to show all the permissions for all > the tables or a subset of tables if a glob (regex) is provided. > {code} > hbase> user_permission '*' > User Table,Family,Qualifier:Permission > esteban x,,: [Permission: > actions=READ,WRITE,EXEC,CREATE,ADMIN] > hbase1y,,: [Permission: > actions=READ,WRITE] > hbase2z,,: [Permission: > actions=READ,WRITE] > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10918) [VisibilityController] System table backed ScanLabelGenerator
[ https://issues.apache.org/jira/browse/HBASE-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981860#comment-13981860 ] Hudson commented on HBASE-10918: SUCCESS: Integrated in HBase-0.98 #296 (See [https://builds.apache.org/job/HBase-0.98/296/]) HBASE-10918 [VisibilityController] System table backed ScanLabelGenerator (apurtell: rev 1590181) * /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/Authorizations.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/DefaultScanLabelGenerator.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/EnforcingScanLabelGenerator.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestEnforcingScanLabelGenerator.java > [VisibilityController] System table backed ScanLabelGenerator > -- > > Key: HBASE-10918 > URL: https://issues.apache.org/jira/browse/HBASE-10918 > Project: HBase > Issue Type: Sub-task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 0.99.0, 0.98.2 > > Attachments: HBASE-10918.patch, HBASE-10918_1.patch > > > A ScanLabelGenerator that retrieves a static set of authorizations for a user > or group from a new HBase system table, and ensures these auths are part of > the effective set. > Useful for forcing a baseline set of auths for a user. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10958) [dataloss] Bulk loading with seqids can prevent some log entries from being replayed
[ https://issues.apache.org/jira/browse/HBASE-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981862#comment-13981862 ] Hudson commented on HBASE-10958: SUCCESS: Integrated in HBase-0.98 #296 (See [https://builds.apache.org/job/HBase-0.98/296/]) HBASE-10958 [dataloss] Bulk loading with seqids can prevent some log entries from being replayed (jdcryans: rev 1590145) * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/util/HFileTestUtil.java > [dataloss] Bulk loading with seqids can prevent some log entries from being > replayed > > > Key: HBASE-10958 > URL: https://issues.apache.org/jira/browse/HBASE-10958 > Project: HBase > Issue Type: Bug >Affects Versions: 0.96.2, 0.98.1, 0.94.18 >Reporter: Jean-Daniel Cryans >Assignee: Jean-Daniel Cryans >Priority: Blocker > Fix For: 0.99.0, 0.98.2, 0.96.3, 0.94.20 > > Attachments: HBASE-10958-0.94.patch, > HBASE-10958-less-intrusive-hack-0.96.patch, > HBASE-10958-quick-hack-0.96.patch, HBASE-10958-v2.patch, > HBASE-10958-v3.patch, HBASE-10958.patch > > > We found an issue with bulk loads causing data loss when assigning sequence > 
ids (HBASE-6630) that is triggered when replaying recovered edits. We're > nicknaming this issue *Blindspot*. > The problem is that the sequence id given to a bulk loaded file is higher > than those of the edits in the region's memstore. When replaying recovered > edits, the rule to skip some of them is that they have to be _lower than the > highest sequence id_. In other words, the edits that have a sequence id lower > than the highest one in the store files *should* have also been flushed. This > is not the case with bulk loaded files since we now have an HFile with a > sequence id higher than unflushed edits. > The log recovery code takes this into account by simply skipping the bulk > loaded files, but this "bulk loaded status" is *lost* on compaction. The > edits in the logs that have a sequence id lower than the bulk loaded file > that got compacted are put in a blind spot and are skipped during replay. > Here's the easiest way to recreate this issue: > - Create an empty table > - Put one row in it (let's say it gets seqid 1) > - Bulk load one file (it gets seqid 2). I used ImportTsv and set > hbase.mapreduce.bulkload.assign.sequenceNumbers. > - Bulk load a second file the same way (it gets seqid 3). > - Major compact the table (the new file has seqid 3 and isn't considered > bulk loaded). > - Kill the region server that holds the table's region. > - Scan the table once the region is made available again. The first row, at > seqid 1, will be missing since the HFile with seqid 3 makes us believe that > everything that came before it was flushed. -- This message was sent by Atlassian JIRA (v6.2#6252)
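The replay rule and the resulting blind spot can be modeled in a few lines. This is a simulation of the logic described above, not HBase's recovery code: `StoreFile` and `replayedEdits` are invented names, and the "skip edits at or below the highest non-bulk-loaded seqid" rule is a simplification of what the log recovery path does.

```java
import java.util.ArrayList;
import java.util.List;

public class BlindspotSketch {
    // A store file reduced to its seqid and its bulk-loaded flag.
    record StoreFile(long seqId, boolean bulkLoaded) {}

    // Replay rule from the description: edits at or below the max seqid of
    // the (non-bulk-loaded) store files are assumed flushed and are skipped.
    static List<Long> replayedEdits(List<Long> logEdits, List<StoreFile> files) {
        long maxFlushedSeqId = files.stream()
            .filter(f -> !f.bulkLoaded())   // bulk-loaded files are excluded
            .mapToLong(StoreFile::seqId)
            .max().orElse(0L);
        List<Long> replayed = new ArrayList<>();
        for (long seqId : logEdits) {
            if (seqId > maxFlushedSeqId) replayed.add(seqId);
        }
        return replayed;
    }

    public static void main(String[] args) {
        List<Long> unflushedEdits = List.of(1L); // the Put at seqid 1

        // Before compaction: both HFiles carry the bulk-loaded flag, so
        // seqid 1 is still replayed. Prints [1].
        System.out.println(replayedEdits(unflushedEdits,
            List.of(new StoreFile(2, true), new StoreFile(3, true))));

        // After major compaction the flag is lost: the file at seqid 3 looks
        // flushed, and the edit at seqid 1 falls in the blind spot. Prints [].
        System.out.println(replayedEdits(unflushedEdits,
            List.of(new StoreFile(3, false))));
    }
}
```

The second call reproduces the data loss in the scan step of the recipe: nothing below seqid 3 is replayed, so the row written at seqid 1 disappears.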
[jira] [Commented] (HBASE-11008) Align bulk load, flush, and compact to require Action.CREATE
[ https://issues.apache.org/jira/browse/HBASE-11008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981858#comment-13981858 ] Hudson commented on HBASE-11008: SUCCESS: Integrated in HBase-0.98 #296 (See [https://builds.apache.org/job/HBase-0.98/296/]) HBASE-11008 Align bulk load, flush, and compact to require Action.CREATE (jdcryans: rev 1590125) * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java * /hbase/branches/0.98/src/main/docbkx/security.xml > Align bulk load, flush, and compact to require Action.CREATE > > > Key: HBASE-11008 > URL: https://issues.apache.org/jira/browse/HBASE-11008 > Project: HBase > Issue Type: Improvement > Components: security >Reporter: Jean-Daniel Cryans >Assignee: Jean-Daniel Cryans > Fix For: 0.99.0, 0.98.2, 0.96.3, 0.94.20 > > Attachments: HBASE-11008-0.94.patch, HBASE-11008-v2.patch, > HBASE-11008-v3.patch, HBASE-11008.patch > > > Over in HBASE-10958 we noticed that it might make sense to require > Action.CREATE for bulk load, flush, and compact since it is also required for > things like enable and disable. > This means the following changes: > - preBulkLoadHFile goes from WRITE to CREATE > - compact/flush go from ADMIN to ADMIN or CREATE -- This message was sent by Atlassian JIRA (v6.2#6252)
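The two permission changes listed in HBASE-11008 can be stated as a toy model. The sketch below uses invented names and is not the `AccessController` implementation; it only encodes "preBulkLoadHFile goes from WRITE to CREATE" and "compact/flush go from ADMIN to ADMIN or CREATE".

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative model of the permission alignment, not HBase's ACL code.
public class AccessSketch {
    enum Action { READ, WRITE, CREATE, ADMIN }

    // After the change: compact and flush accept ADMIN *or* CREATE.
    static boolean canCompactOrFlush(Set<Action> granted) {
        return granted.contains(Action.ADMIN) || granted.contains(Action.CREATE);
    }

    // After the change: bulk load requires CREATE (it used to require WRITE).
    static boolean canBulkLoad(Set<Action> granted) {
        return granted.contains(Action.CREATE);
    }

    public static void main(String[] args) {
        Set<Action> tableCreator = EnumSet.of(Action.CREATE);
        System.out.println(canCompactOrFlush(tableCreator)); // prints true
        System.out.println(canBulkLoad(EnumSet.of(Action.WRITE))); // prints false
    }
}
```

The motivation quoted above is symmetry: CREATE already gates enable/disable, so operations of similar weight (bulk load, flush, compact) are aligned to it.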
[jira] [Commented] (HBASE-10962) Decouple region opening (HM and HRS) from ZK
[ https://issues.apache.org/jira/browse/HBASE-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981833#comment-13981833 ] Hadoop QA commented on HBASE-10962: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12642036/HBASE-10962.patch against trunk revision . ATTACHMENT ID: 12642036 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 9 new or modified tests. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 8 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.mapreduce.TestMultiTableInputFormat org.apache.hadoop.hbase.client.TestScannerTimeout org.apache.hadoop.hbase.mapreduce.TestTableSnapshotInputFormat org.apache.hadoop.hbase.client.TestMultiParallel org.apache.hadoop.hbase.master.TestRegionPlacement org.apache.hadoop.hbase.TestRegionRebalancing org.apache.hadoop.hbase.master.TestMasterFailover org.apache.hadoop.hbase.TestFullLogReconstruction org.apache.hadoop.hbase.replication.TestReplicationKillMasterRS org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster org.apache.hadoop.hbase.regionserver.TestHRegionOnCluster org.apache.hadoop.hbase.client.TestAdmin org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster org.apache.hadoop.hbase.TestZooKeeper org.apache.hadoop.hbase.master.TestZKBasedOpenCloseRegion org.apache.hadoop.hbase.replication.TestReplicationDisableInactivePeer org.apache.hadoop.hbase.regionserver.TestRegionServerNoMaster org.apache.hadoop.hbase.rest.TestGetAndPutResource org.apache.hadoop.hbase.regionserver.TestRSKilledWhenInitializing org.apache.hadoop.hbase.client.TestSnapshotCloneIndependence org.apache.hadoop.hbase.regionserver.TestRegionFavoredNodes org.apache.hadoop.hbase.util.TestMiniClusterLoadEncoded org.apache.hadoop.hbase.regionserver.wal.TestWALReplayCompressed org.apache.hadoop.hbase.master.TestMasterMetricsWrapper org.apache.hadoop.hbase.mapreduce.TestTableInputFormatScan1 org.apache.hadoop.hbase.replication.TestReplicationWithTags org.apache.hadoop.hbase.util.TestHBaseFsckEncryption {color:red}-1 core zombie tests{color}. 
There are 8 zombie test(s): at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testLogReplayWithNonMetaRSDown(TestDistributedLogSplitting.java:279) at org.apache.hadoop.hbase.util.TestHBaseFsck.testMissingRegionInfoQualifier(TestHBaseFsck.java:1935) at org.apache.hadoop.hbase.regionserver.wal.TestWALReplay.testReplayEditsAfterRegionMovedWithMultiCF(TestWALReplay.java:192) at org.apache.hadoop.hbase.client.TestFromClientSide.testReversedScanUnderMultiRegions(TestFromClientSide.java:6006) at org.apache.hadoop.hbase.client.TestHCM.testMulti(TestHCM.java:1064) at org.apache.hadoop.hbase.regionserver.wal.TestWALReplay.testReplayEditsAfterRegionMovedWithMultiCF(TestWALReplay.java:162) at org.apache.hadoop.hbase.client.TestFromClientSide.testReversedScanUnderMultiRegions(TestFromClientSide.java:6006) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9405//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9405//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9405//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9405//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9405//artifact/trun
[jira] [Commented] (HBASE-10831) IntegrationTestIngestWithACL is not setting up LoadTestTool correctly
[ https://issues.apache.org/jira/browse/HBASE-10831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981816#comment-13981816 ] Vandana Ayyalasomayajula commented on HBASE-10831: -- When I try to run this test with the valid super user and user list specified: I get the following exception: {code} 14/04/26 00:34:15 WARN ipc.RpcClient: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 14/04/26 00:34:15 FATAL ipc.RpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'. javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212) at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:169) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupSaslConnection(RpcClient.java:768) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.access$600(RpcClient.java:357) at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:889) at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:886) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:886) at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1536) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1435) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1654) at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1712) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:29876) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1470) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:706) at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:704) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114) at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:710) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:144) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1158) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1222) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1110) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1067) at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:356) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:301) at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:955) at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1239) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:901) at org.apache.hadoop.hbase.util.MultiThreadedWriterWithACL$HBaseWriterThreadWithACL$WriteAccessAction.run(MultiThreadedWriterWithACL.java:130) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39) at org.apache.hadoop.hbase.security.User.call(User.java:434) at org.apache.hadoop.hbase.security.User.access$300(User.java:49) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:288) at org.apache.hadoop.hbase.util.MultiThreadedWriterWithACL$HBaseWriterThreadWithACL.insert(MultiThreadedWriterWithACL.java:96) at org.apache.hadoop.hba
[jira] [Created] (HBASE-11083) ExportSnapshot should provide capability to limit bandwidth consumption
Ted Yu created HBASE-11083: -- Summary: ExportSnapshot should provide capability to limit bandwidth consumption Key: HBASE-11083 URL: https://issues.apache.org/jira/browse/HBASE-11083 Project: HBase Issue Type: Improvement Components: snapshots Reporter: Ted Yu This capability was first brought up in this thread: http://search-hadoop.com/m/DHED4Td8Xb1 The rewritten distcp already provides this capability. See MAPREDUCE-2765 distcp implementation utilizes ThrottledInputStream which provides bandwidth throttling on a specified InputStream. As a first step, we can * add an option to ExportSnapshot which expresses bandwidth per map in MB * utilize ThrottledInputStream in ExportSnapshot#ExportMapper#copyFile(). -- This message was sent by Atlassian JIRA (v6.2#6252)
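The idea behind the ThrottledInputStream mentioned above can be sketched as a `FilterInputStream` that sleeps whenever the average observed rate exceeds a cap, which is what would bound the per-map bandwidth of `ExportSnapshot#ExportMapper#copyFile()`. This is a simplified illustration, assuming nothing about Hadoop's actual class beyond the wrap-and-throttle concept.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InterruptedIOException;

// Simplified sketch: throttle by sleeping while the average rate since the
// stream was opened exceeds maxBytesPerSec. Not Hadoop's ThrottledInputStream.
public class ThrottleSketch extends FilterInputStream {
    private final long maxBytesPerSec;
    private final long startMillis = System.currentTimeMillis();
    private long bytesRead = 0;

    ThrottleSketch(InputStream in, long maxBytesPerSec) {
        super(in);
        this.maxBytesPerSec = maxBytesPerSec;
    }

    private void throttle() throws IOException {
        long elapsed = Math.max(1, System.currentTimeMillis() - startMillis);
        while (bytesRead * 1000 / elapsed > maxBytesPerSec) {
            try { Thread.sleep(10); } catch (InterruptedException e) {
                throw new InterruptedIOException();
            }
            elapsed = Math.max(1, System.currentTimeMillis() - startMillis);
        }
    }

    @Override public int read(byte[] b, int off, int len) throws IOException {
        throttle();                       // pause before pulling more bytes
        int n = super.read(b, off, len);
        if (n > 0) bytesRead += n;
        return n;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[64 * 1024];
        InputStream in = new ThrottleSketch(new ByteArrayInputStream(data), 1L << 20);
        long total = 0; byte[] buf = new byte[8192]; int n;
        while ((n = in.read(buf, 0, buf.length)) > 0) total += n;
        System.out.println(total); // prints 65536: all bytes arrive, just slower
    }
}
```

Because the throttling lives entirely in the stream wrapper, the copy loop itself needs no changes, which is why wrapping the input in `copyFile()` is a natural first step.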
[jira] [Commented] (HBASE-10602) Cleanup HTable public interface
[ https://issues.apache.org/jira/browse/HBASE-10602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981790#comment-13981790 ] Sergey Shelukhin commented on HBASE-10602: -- some feedback on R > Cleanup HTable public interface > --- > > Key: HBASE-10602 > URL: https://issues.apache.org/jira/browse/HBASE-10602 > Project: HBase > Issue Type: Improvement > Components: Client, Usability >Reporter: Nick Dimiduk >Assignee: Enis Soztutar >Priority: Blocker > Fix For: 0.99.0 > > Attachments: hbase-10602_v1.patch > > > HBASE-6580 replaced the preferred means of HTableInterface acquisition with the > HConnection#getTable factory methods. HBASE-9117 removes the HConnection > cache, placing the burden of responsible connection cleanup on whoever > acquires it. > The remaining HTable constructors use a Connection instance and manage their > own HConnection on the caller's behalf. This is convenient but also a > surprising source of poor performance for anyone accustomed to the previous > connection caching behavior. I propose deprecating those remaining > constructors for 0.98/0.96 and removing them for 1.0. > While I'm at it, I suggest we pursue some API hygiene in general and convert > HTable into an interface. I'm sure there are method overloads for accepting > String/byte[]/TableName where just TableName is sufficient. Can that be done > for 1.0 as well? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10915) Decouple region closing (HM and HRS) from ZK
[ https://issues.apache.org/jira/browse/HBASE-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981792#comment-13981792 ] Mikhail Antonov commented on HBASE-10915: - Do you prefer this to go through review board [~stack]? > Decouple region closing (HM and HRS) from ZK > > > Key: HBASE-10915 > URL: https://issues.apache.org/jira/browse/HBASE-10915 > Project: HBase > Issue Type: Sub-task > Components: Consensus, Zookeeper >Affects Versions: 0.99.0 >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov > Attachments: HBASE-10915.patch, HBASE-10915.patch, HBASE-10915.patch, > HBASE-10915.patch, HBASE-10915.patch, HBASE-10915.patch, HBASE-10915.patch, > HBASE-10915.patch, HBASE-10915.patch > > > Decouple region closing from ZK. > Includes RS side (CloseRegionHandler), HM side (ClosedRegionHandler) and the > code using (HRegionServer, RSRpcServices etc). > May need small changes in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10357) Failover RPC's for scans
[ https://issues.apache.org/jira/browse/HBASE-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981776#comment-13981776 ] Enis Soztutar commented on HBASE-10357: --- - Can we default to 200ms or something? 1s seems long. However, it seems this rpc request would wait for the whole batch to complete. Maybe we should have different confs for switching from one replica to another. If the scan usually takes around that timeout, then we can be switching back and forth a lot, causing open+close scanners very frequently. {code} +this.configuration.getInt("hbase.client.primaryCallTimeout.scan", 100); // 1000 ms {code} - Instead of wrapping ScannerCallable every time, can we check the consistency and not wrap it if STRONG? - in addCallsForCurrentReplica, is the location always from the primary replica? - Would be good to have a test where we switch in the middle of the scan as well. I think the current test only switches at the start. > Failover RPC's for scans > > > Key: HBASE-10357 > URL: https://issues.apache.org/jira/browse/HBASE-10357 > Project: HBase > Issue Type: Sub-task > Components: Client >Reporter: Enis Soztutar > Fix For: 0.99.0 > > Attachments: 10357-1.txt, 10357-2.txt, 10357-3.2.txt, 10357-3.txt > > > This is an extension of HBASE-10355 to add failover support for scans. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10513) Provide user documentation for region replicas
[ https://issues.apache.org/jira/browse/HBASE-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981763#comment-13981763 ] Devaraj Das commented on HBASE-10513: - Good doc, [~enis]. The one comment I have is that we don't have "replication" implemented for shipping the edits to the secondaries as of now. Maybe, we should update the doc to only talk about what has been implemented in phase-1. In that spirit, we should also edit some parts in the "Tradeoffs" section. Also, add the scan related configuration hbase.client.primaryCallTimeout.scan with the default value 100 in the "Client side properties". Thanks. > Provide user documentation for region replicas > -- > > Key: HBASE-10513 > URL: https://issues.apache.org/jira/browse/HBASE-10513 > Project: HBase > Issue Type: Sub-task >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 0.99.0 > > Attachments: UserdocumentationforHBASE-10070.pdf > > > We need some documentation for the feature introduced in HBASE-10070. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11008) Align bulk load, flush, and compact to require Action.CREATE
[ https://issues.apache.org/jira/browse/HBASE-11008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981746#comment-13981746 ] Hudson commented on HBASE-11008: FAILURE: Integrated in HBase-TRUNK #5118 (See [https://builds.apache.org/job/HBase-TRUNK/5118/]) HBASE-11008 Align bulk load, flush, and compact to require Action.CREATE (jdcryans: rev 1590124) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java * /hbase/trunk/src/main/docbkx/security.xml > Align bulk load, flush, and compact to require Action.CREATE > > > Key: HBASE-11008 > URL: https://issues.apache.org/jira/browse/HBASE-11008 > Project: HBase > Issue Type: Improvement > Components: security >Reporter: Jean-Daniel Cryans >Assignee: Jean-Daniel Cryans > Fix For: 0.99.0, 0.98.2, 0.96.3, 0.94.20 > > Attachments: HBASE-11008-0.94.patch, HBASE-11008-v2.patch, > HBASE-11008-v3.patch, HBASE-11008.patch > > > Over in HBASE-10958 we noticed that it might make sense to require > Action.CREATE for bulk load, flush, and compact since it is also required for > things like enable and disable. > This means the following changes: > - preBulkLoadHFile goes from WRITE to CREATE > - compact/flush go from ADMIN to ADMIN or CREATE -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10958) [dataloss] Bulk loading with seqids can prevent some log entries from being replayed
[ https://issues.apache.org/jira/browse/HBASE-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981749#comment-13981749 ] Hudson commented on HBASE-10958: FAILURE: Integrated in HBase-TRUNK #5118 (See [https://builds.apache.org/job/HBase-TRUNK/5118/]) HBASE-10958 [dataloss] Bulk loading with seqids can prevent some log entries from being replayed (jdcryans: rev 1590144) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/HFileTestUtil.java > [dataloss] Bulk loading with seqids can prevent some log entries from being > replayed > > > Key: HBASE-10958 > URL: https://issues.apache.org/jira/browse/HBASE-10958 > Project: HBase > Issue Type: Bug >Affects Versions: 0.96.2, 0.98.1, 0.94.18 >Reporter: Jean-Daniel Cryans >Assignee: Jean-Daniel Cryans >Priority: Blocker > Fix For: 0.99.0, 0.98.2, 0.96.3, 0.94.20 > > Attachments: HBASE-10958-0.94.patch, > HBASE-10958-less-intrusive-hack-0.96.patch, > HBASE-10958-quick-hack-0.96.patch, HBASE-10958-v2.patch, > HBASE-10958-v3.patch, HBASE-10958.patch > > > We found an issue with bulk loads causing data loss when assigning sequence > ids (HBASE-6630) that is triggered when replaying recovered edits. 
We're > nicknaming this issue *Blindspot*. > The problem is that the sequence id given to a bulk loaded file is higher > than those of the edits in the region's memstore. When replaying recovered > edits, the rule to skip some of them is that they have to be _lower than the > highest sequence id_. In other words, the edits that have a sequence id lower > than the highest one in the store files *should* have also been flushed. This > is not the case with bulk loaded files since we now have an HFile with a > sequence id higher than unflushed edits. > The log recovery code takes this into account by simply skipping the bulk > loaded files, but this "bulk loaded status" is *lost* on compaction. The > edits in the logs that have a sequence id lower than the bulk loaded file > that got compacted are put in a blind spot and are skipped during replay. > Here's the easiest way to recreate this issue: > - Create an empty table > - Put one row in it (let's say it gets seqid 1) > - Bulk load one file (it gets seqid 2). I used ImportTsv and set > hbase.mapreduce.bulkload.assign.sequenceNumbers. > - Bulk load a second file the same way (it gets seqid 3). > - Major compact the table (the new file has seqid 3 and isn't considered > bulk loaded). > - Kill the region server that holds the table's region. > - Scan the table once the region is made available again. The first row, at > seqid 1, will be missing since the HFile with seqid 3 makes us believe that > everything that came before it was flushed. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11081) Trunk Master won't start; looking for Constructor that takes conf only
[ https://issues.apache.org/jira/browse/HBASE-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981747#comment-13981747 ] Hudson commented on HBASE-11081: FAILURE: Integrated in HBase-TRUNK #5118 (See [https://builds.apache.org/job/HBase-TRUNK/5118/]) HBASE-11081 Trunk Master won't start; looking for Constructor that takes conf only (stack: rev 1590154) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java > Trunk Master won't start; looking for Constructor that takes conf only > -- > > Key: HBASE-11081 > URL: https://issues.apache.org/jira/browse/HBASE-11081 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack > Fix For: 0.99.0 > > Attachments: 11081.txt > > > Committing the Consensus Infra, we broke starting master. Small fix so > constructMaster passes in a ConsensusProvider. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10960) Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations
[ https://issues.apache.org/jira/browse/HBASE-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981748#comment-13981748 ] Hudson commented on HBASE-10960: FAILURE: Integrated in HBase-TRUNK #5118 (See [https://builds.apache.org/job/HBase-TRUNK/5118/]) HBASE-10960 Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations (Srikanth Srungarapu) (jmhsieh: rev 1590152) * /hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java * /hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java * /hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java * /hbase/trunk/hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift * /hbase/trunk/hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java > Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations > --- > > Key: HBASE-10960 > URL: https://issues.apache.org/jira/browse/HBASE-10960 > Project: HBase > Issue Type: Improvement > Components: Thrift >Reporter: Srikanth Srungarapu >Assignee: Srikanth Srungarapu > Fix For: 0.99.0 > > Attachments: HBASE-10960.patch, hbase-10960.v3.patch > > > Both append, and checkAndPut functionalities are available in Thrift 2 > interface, but not in Thrift. So, adding the support for these > functionalities in Thrift1 too. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HBASE-10957) HBASE-10070: HMaster can abort with NPE in #rebuildUserRegions
[ https://issues.apache.org/jira/browse/HBASE-10957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar resolved HBASE-10957. --- Resolution: Fixed Hadoop Flags: Reviewed I've committed this to branch. > HBASE-10070: HMaster can abort with NPE in #rebuildUserRegions > --- > > Key: HBASE-10957 > URL: https://issues.apache.org/jira/browse/HBASE-10957 > Project: HBase > Issue Type: Sub-task > Components: master >Affects Versions: hbase-10070 >Reporter: Nicolas Liochon >Assignee: Nicolas Liochon > Fix For: hbase-10070 > > Attachments: 10957.v1.patch > > > Seen during tests. The fix is to test this condition as well. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10960) Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations
[ https://issues.apache.org/jira/browse/HBASE-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981738#comment-13981738 ] Jonathan Hsieh commented on HBASE-10960: Sorry about that. Thanks for the fix. > Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations > --- > > Key: HBASE-10960 > URL: https://issues.apache.org/jira/browse/HBASE-10960 > Project: HBase > Issue Type: Improvement > Components: Thrift >Reporter: Srikanth Srungarapu >Assignee: Srikanth Srungarapu > Fix For: 0.99.0 > > Attachments: HBASE-10960.patch, hbase-10960.v3.patch > > > Both append, and checkAndPut functionalities are available in Thrift 2 > interface, but not in Thrift. So, adding the support for these > functionalities in Thrift1 too. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HBASE-10960) Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations
[ https://issues.apache.org/jira/browse/HBASE-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-10960. Resolution: Fixed Committed missing file, verified compilation. Thanks Srikanth. > Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations > --- > > Key: HBASE-10960 > URL: https://issues.apache.org/jira/browse/HBASE-10960 > Project: HBase > Issue Type: Improvement > Components: Thrift >Reporter: Srikanth Srungarapu >Assignee: Srikanth Srungarapu > Fix For: 0.99.0 > > Attachments: HBASE-10960.patch, hbase-10960.v3.patch > > > Both append, and checkAndPut functionalities are available in Thrift 2 > interface, but not in Thrift. So, adding the support for these > functionalities in Thrift1 too. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10070) HBase read high-availability using timeline-consistent region replicas
[ https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-10070: -- Summary: HBase read high-availability using timeline-consistent region replicas (was: HBase read high-availability using eventually consistent region replicas) > HBase read high-availability using timeline-consistent region replicas > -- > > Key: HBASE-10070 > URL: https://issues.apache.org/jira/browse/HBASE-10070 > Project: HBase > Issue Type: New Feature >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Attachments: HighAvailabilityDesignforreadsApachedoc.pdf > > > In the present HBase architecture, it is hard, probably impossible, to > satisfy constraints like 99th percentile of the reads will be served under 10 > ms. One of the major factors that affects this is the MTTR for regions. There > are three phases in the MTTR process - detection, assignment, and recovery. > Of these, the detection is usually the longest and is presently in the order > of 20-30 seconds. During this time, the clients would not be able to read the > region data. > However, some clients will be better served if regions will be available for > reads during recovery for doing eventually consistent reads. This will help > with satisfying low latency guarantees for some class of applications which > can work with stale reads. > For improving read availability, we propose a replicated read-only region > serving design, also referred as secondary regions, or region shadows. > Extending current model of a region being opened for reads and writes in a > single region server, the region will be also opened for reading in region > servers. The region server which hosts the region for reads and writes (as in > current case) will be declared as PRIMARY, while 0 or more region servers > might be hosting the region as SECONDARY. There may be more than one > secondary (replica count > 2). 
> Will attach a design doc shortly which contains most of the details and some > thoughts about development approaches. Reviews are more than welcome. > We also have a proof of concept patch, which includes the master and regions > server side of changes. Client side changes will be coming soon as well. -- This message was sent by Atlassian JIRA (v6.2#6252)
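The primary/secondary model described above can be sketched as a toy data model. All names here are illustrative only — this is not the actual HBase client API, which was still under design at this point; the sketch only shows the fallback idea: replica 0 serves reads and writes, higher-numbered replicas serve possibly-stale reads when the primary is unavailable.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class RegionReplicaModel {
    /** Toy replica: replicaId 0 hosts reads and writes; higher ids are read-only copies. */
    public static class Replica {
        public final int replicaId;
        public final boolean online;
        public Replica(int replicaId, boolean online) {
            this.replicaId = replicaId;
            this.online = online;
        }
        public boolean isPrimary() { return replicaId == 0; }
    }

    /**
     * A stale-tolerant read: prefer the primary, but if it is down (e.g. during
     * recovery) fall back to the lowest-id online secondary.
     */
    public static Optional<Replica> pickReadTarget(List<Replica> replicas) {
        return replicas.stream()
                .filter(r -> r.online)
                .min(Comparator.comparingInt(r -> r.replicaId));
    }
}
```

During the detection window the design targets, the primary would appear offline, and a client willing to accept stale data would read from a secondary instead of blocking.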
[jira] [Updated] (HBASE-5697) Audit HBase for usage of deprecated hadoop 0.20.x property names.
[ https://issues.apache.org/jira/browse/HBASE-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-5697: -- Status: In Progress (was: Patch Available) > Audit HBase for usage of deprecated hadoop 0.20.x property names. > - > > Key: HBASE-5697 > URL: https://issues.apache.org/jira/browse/HBASE-5697 > Project: HBase > Issue Type: Task >Reporter: Jonathan Hsieh >Assignee: Srikanth Srungarapu > Labels: noob > Fix For: 0.99.0 > > Attachments: HBASE-5697.patch, HBASE-5697_v2.patch, > HBASE-5697_v3.patch, deprecated_properties > > > Many xml config properties in Hadoop have changed in 0.23. We should audit > hbase to insulate it from hadoop property name changes. > Here is a list of the hadoop property name changes: > http://hadoop.apache.org/common/docs/r0.23.1/hadoop-project-dist/hadoop-common/DeprecatedProperties.html -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10918) [VisibilityController] System table backed ScanLabelGenerator
[ https://issues.apache.org/jira/browse/HBASE-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10918: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to trunk and 0.98. Thanks for the reviews Ram and Anoop. > [VisibilityController] System table backed ScanLabelGenerator > -- > > Key: HBASE-10918 > URL: https://issues.apache.org/jira/browse/HBASE-10918 > Project: HBase > Issue Type: Sub-task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 0.99.0, 0.98.2 > > Attachments: HBASE-10918.patch, HBASE-10918_1.patch > > > A ScanLabelGenerator that retrieves a static set of authorizations for a user > or group from a new HBase system table, and insures these auths are part of > the effective set. > Useful for forcing a baseline set of auths for a user. -- This message was sent by Atlassian JIRA (v6.2#6252)
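The "insure these auths are part of the effective set" step described in HBASE-10918 is, at its core, a set union of the request's auths with the user's forced baseline. A minimal sketch — the in-memory lookup table here is a hypothetical stand-in for the new HBase system table the issue describes, and the names are not the committed API:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class BaselineAuthsDemo {
    /** Hypothetical stand-in for the system-table lookup of per-user baseline auths. */
    public static final Map<String, Set<String>> BASELINE = new HashMap<>();

    /** Effective auths = whatever the request carried, plus the user's forced baseline. */
    public static Set<String> effectiveAuths(String user, List<String> requested) {
        Set<String> effective = new HashSet<>(requested);
        effective.addAll(BASELINE.getOrDefault(user, Collections.emptySet()));
        return effective;
    }
}
```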
[jira] [Commented] (HBASE-10070) HBase read high-availability using timeline-consistent region replicas
[ https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981722#comment-13981722 ] Enis Soztutar commented on HBASE-10070: --- Changed issue title, according to HBASE-10354. > HBase read high-availability using timeline-consistent region replicas > -- > > Key: HBASE-10070 > URL: https://issues.apache.org/jira/browse/HBASE-10070 > Project: HBase > Issue Type: New Feature >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Attachments: HighAvailabilityDesignforreadsApachedoc.pdf > > > In the present HBase architecture, it is hard, probably impossible, to > satisfy constraints like 99th percentile of the reads will be served under 10 > ms. One of the major factors that affects this is the MTTR for regions. There > are three phases in the MTTR process - detection, assignment, and recovery. > Of these, the detection is usually the longest and is presently in the order > of 20-30 seconds. During this time, the clients would not be able to read the > region data. > However, some clients will be better served if regions will be available for > reads during recovery for doing eventually consistent reads. This will help > with satisfying low latency guarantees for some class of applications which > can work with stale reads. > For improving read availability, we propose a replicated read-only region > serving design, also referred as secondary regions, or region shadows. > Extending current model of a region being opened for reads and writes in a > single region server, the region will be also opened for reading in region > servers. The region server which hosts the region for reads and writes (as in > current case) will be declared as PRIMARY, while 0 or more region servers > might be hosting the region as SECONDARY. There may be more than one > secondary (replica count > 2). 
> Will attach a design doc shortly which contains most of the details and some > thoughts about development approaches. Reviews are more than welcome. > We also have a proof of concept patch, which includes the master and regions > server side of changes. Client side changes will be coming soon as well. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10960) Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations
[ https://issues.apache.org/jira/browse/HBASE-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981720#comment-13981720 ] Hadoop QA commented on HBASE-10960: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12642010/hbase-10960.v3.patch against trunk revision . ATTACHMENT ID: 12642010 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 4 new or modified tests. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 8 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100: + append_call method_call = new append_call(append, resultHandler, this, ___protocolFactory, ___transport); + checkAndPut_call method_call = new checkAndPut_call(tableName, row, column, value, mput, attributes, resultHandler, this, ___protocolFactory, ___transport); + result.success = iface.checkAndPut(args.tableName, args.row, args.column, args.value, args.mput, args.attributes); +private static final Map, SchemeFactory> schemes = new HashMap, SchemeFactory>(); +/** The set of fields this struct contains, along with convenience methods for finding and manipulating them. 
*/ +if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!"); +/** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */ +private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException { +// check for required fields of primitive type, which can't be checked in the validate method +private static final Map, SchemeFactory> schemes = new HashMap, SchemeFactory>(); {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9402//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9402//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9402//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9402//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9402//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9402//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9402//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9402//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9402//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9402//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9402//console This message is automatically generated. > Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations > --- > > Key: HBASE-10960 > URL: https://issues.apache.org/jira/browse/HBASE-10960 > Project: HBase > Issue Type: Improvement > Components: Thrift >Reporter: Srikanth Srungarapu >Assignee: Srikanth Srungarapu > Fix For: 0.99.0 > > Attachments: HBASE-10960.patch, hbase-10960.v3.patch > > > Both append, and checkAndPut functionalities are available in Thrift 2 > interface, but not in Thrift. So, adding the support for these > functionalities in Thrift1 too. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-5697) Audit HBase for usage of deprecated hadoop 0.20.x property names.
[ https://issues.apache.org/jira/browse/HBASE-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981717#comment-13981717 ] Enis Soztutar commented on HBASE-5697: -- +1. > Audit HBase for usage of deprecated hadoop 0.20.x property names. > - > > Key: HBASE-5697 > URL: https://issues.apache.org/jira/browse/HBASE-5697 > Project: HBase > Issue Type: Task >Reporter: Jonathan Hsieh >Assignee: Srikanth Srungarapu > Labels: noob > Fix For: 0.99.0 > > Attachments: HBASE-5697.patch, HBASE-5697_v2.patch, > HBASE-5697_v3.patch, deprecated_properties > > > Many xml config properties in Hadoop have changed in 0.23. We should audit > hbase to insulate it from hadoop property name changes. > Here is a list of the hadoop property name changes: > http://hadoop.apache.org/common/docs/r0.23.1/hadoop-project-dist/hadoop-common/DeprecatedProperties.html -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10962) Decouple region opening (HM and HRS) from ZK
[ https://issues.apache.org/jira/browse/HBASE-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Antonov updated HBASE-10962: Status: Patch Available (was: Open) > Decouple region opening (HM and HRS) from ZK > > > Key: HBASE-10962 > URL: https://issues.apache.org/jira/browse/HBASE-10962 > Project: HBase > Issue Type: Sub-task > Components: Consensus, Zookeeper >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov > Attachments: HBASE-10962.patch, HBASE-10962.patch, HBASE-10962.patch > > > This involves creating consensus class to be kept by ConsensusProvider > interface, and modifications of the following classes: > - HRS side (OpenRegionHandler, OpenMetaHandler, and few accompanying changes > in HRegionServer code and RsRpcServices) > - HM side (OpenedRegionHandler, may be some changes in AssignmentManager) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10962) Decouple region opening (HM and HRS) from ZK
[ https://issues.apache.org/jira/browse/HBASE-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Antonov updated HBASE-10962: Attachment: HBASE-10962.patch Merged patch with the work done for the consensus infra; this includes the RS side only, for initial review - will post an updated patch with the HM side shortly. > Decouple region opening (HM and HRS) from ZK > > > Key: HBASE-10962 > URL: https://issues.apache.org/jira/browse/HBASE-10962 > Project: HBase > Issue Type: Sub-task > Components: Consensus, Zookeeper >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov > Attachments: HBASE-10962.patch, HBASE-10962.patch, HBASE-10962.patch > > > This involves creating consensus class to be kept by ConsensusProvider > interface, and modifications of the following classes: > - HRS side (OpenRegionHandler, OpenMetaHandler, and few accompanying changes > in HRegionServer code and RsRpcServices) > - HM side (OpenedRegionHandler, may be some changes in AssignmentManager) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10962) Decouple region opening (HM and HRS) from ZK
[ https://issues.apache.org/jira/browse/HBASE-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Antonov updated HBASE-10962: Status: Open (was: Patch Available) > Decouple region opening (HM and HRS) from ZK > > > Key: HBASE-10962 > URL: https://issues.apache.org/jira/browse/HBASE-10962 > Project: HBase > Issue Type: Sub-task > Components: Consensus, Zookeeper >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov > Attachments: HBASE-10962.patch, HBASE-10962.patch, HBASE-10962.patch > > > This involves creating consensus class to be kept by ConsensusProvider > interface, and modifications of the following classes: > - HRS side (OpenRegionHandler, OpenMetaHandler, and few accompanying changes > in HRegionServer code and RsRpcServices) > - HM side (OpenedRegionHandler, may be some changes in AssignmentManager) -- This message was sent by Atlassian JIRA (v6.2#6252)
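The shape HBASE-10962 describes — handlers asking a pluggable consensus layer to record region-open state instead of touching ZK directly — can be sketched as an interface pair. The names below are illustrative guesses at the design direction, not the committed API; the in-memory implementation stands in for the ZK-backed one the handlers would actually receive:

```java
import java.util.HashMap;
import java.util.Map;

public class RegionOpenConsensusSketch {
    /** Pluggable source of coordination primitives, replacing direct ZK calls. */
    interface ConsensusProvider {
        OpenRegionConsensus getOpenRegionConsensus();
    }

    /** The piece a handler (e.g. OpenRegionHandler) would call at each step. */
    interface OpenRegionConsensus {
        boolean markOpening(String regionName);
        boolean markOpened(String regionName);
    }

    /** In-memory implementation, standing in for a ZK-backed one. */
    public static class InMemoryConsensus implements OpenRegionConsensus {
        private final Map<String, String> state = new HashMap<>();
        public boolean markOpening(String regionName) {
            // Only one opener wins; mirrors the create-znode-if-absent pattern.
            return state.putIfAbsent(regionName, "OPENING") == null;
        }
        public boolean markOpened(String regionName) {
            // Valid only as a transition from OPENING.
            return state.replace(regionName, "OPENING", "OPENED");
        }
    }
}
```

The point of the indirection is that OpenRegionHandler and friends depend only on the interface, so the ZK implementation can later be swapped without touching handler code.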
[jira] [Updated] (HBASE-10892) [Shell] Add support for globs in user_permission
[ https://issues.apache.org/jira/browse/HBASE-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10892: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to trunk and 0.98. Tested with 0.98. Thanks for the patch [~esteban]! > [Shell] Add support for globs in user_permission > > > Key: HBASE-10892 > URL: https://issues.apache.org/jira/browse/HBASE-10892 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 0.99.0 >Reporter: Esteban Gutierrez >Assignee: Esteban Gutierrez > Fix For: 0.99.0, 0.98.2 > > Attachments: HBASE-10892.v0.diff, HBASE-10892.v1.patch > > > It would be nice for {{user_permission}} to show all the permissions for all > the tables or a subset of tables if a glob (regex) is provided. > {code} > hbase> user_permission '*' > User Table,Family,Qualifier:Permission > esteban x,,: [Permission: > actions=READ,WRITE,EXEC,CREATE,ADMIN] > hbase1y,,: [Permission: > actions=READ,WRITE] > hbase2z,,: [Permission: > actions=READ,WRITE] > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10892) [Shell] Add support for globs in user_permission
[ https://issues.apache.org/jira/browse/HBASE-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10892: --- Summary: [Shell] Add support for globs in user_permission (was: Add support for globs in user_permission) > [Shell] Add support for globs in user_permission > > > Key: HBASE-10892 > URL: https://issues.apache.org/jira/browse/HBASE-10892 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 0.99.0 >Reporter: Esteban Gutierrez >Assignee: Esteban Gutierrez > Fix For: 0.99.0, 0.98.2 > > Attachments: HBASE-10892.v0.diff, HBASE-10892.v1.patch > > > It would be nice for {{user_permission}} to show all the permissions for all > the tables or a subset of tables if a glob (regex) is provided. > {code} > hbase> user_permission '*' > User Table,Family,Qualifier:Permission > esteban x,,: [Permission: > actions=READ,WRITE,EXEC,CREATE,ADMIN] > hbase1y,,: [Permission: > actions=READ,WRITE] > hbase2z,,: [Permission: > actions=READ,WRITE] > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11079) Normalize test tools across branches
[ https://issues.apache.org/jira/browse/HBASE-11079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981707#comment-13981707 ] Andrew Purtell commented on HBASE-11079: Cool, [~jmspaggi] any thoughts on this issue? Seem a good idea? Or unnecessary? Or somewhere in between? > Normalize test tools across branches > > > Key: HBASE-11079 > URL: https://issues.apache.org/jira/browse/HBASE-11079 > Project: HBase > Issue Type: Umbrella >Reporter: Andrew Purtell > > Will be a challenge wherever the branches vary functionally, but it would be > good to normalize the test tools (LoadTestTool and PerformanceEvaluation) as > much as possible among the active branches so we can compare them. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10960) Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations
[ https://issues.apache.org/jira/browse/HBASE-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981704#comment-13981704 ] Srikanth Srungarapu commented on HBASE-10960: - [~jmhsieh] The attached patch has changes for introducing new file TAppend.java, but unfortunately this got missed in trunk https://github.com/apache/hbase/commit/ac9928f53c1552696ad9995b25de811475712717 and hence the build broke. Can you please take a look into it? > Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations > --- > > Key: HBASE-10960 > URL: https://issues.apache.org/jira/browse/HBASE-10960 > Project: HBase > Issue Type: Improvement > Components: Thrift >Reporter: Srikanth Srungarapu >Assignee: Srikanth Srungarapu > Fix For: 0.99.0 > > Attachments: HBASE-10960.patch, hbase-10960.v3.patch > > > Both append, and checkAndPut functionalities are available in Thrift 2 > interface, but not in Thrift. So, adding the support for these > functionalities in Thrift1 too. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11079) Normalize test tools across branches
[ https://issues.apache.org/jira/browse/HBASE-11079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981699#comment-13981699 ] Nick Dimiduk commented on HBASE-11079: -- I believe [~jmspaggi] does the comparative version test using these very tools while testing RC's. He's reminded me more than once to back-port enhancements. > Normalize test tools across branches > > > Key: HBASE-11079 > URL: https://issues.apache.org/jira/browse/HBASE-11079 > Project: HBase > Issue Type: Umbrella >Reporter: Andrew Purtell > > Will be a challenge wherever the branches vary functionally, but it would be > good to normalize the test tools (LoadTestTool and PerformanceEvaluation) as > much as possible among the active branches so we can compare them. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Reopened] (HBASE-10960) Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations
[ https://issues.apache.org/jira/browse/HBASE-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reopened HBASE-10960: I think this commit broke the trunk build {noformat} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) on project hbase-thrift: Compilation failure: Compilation failure: [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java:[81,47] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: package org.apache.hadoop.hbase.thrift.generated [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[629,30] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: interface Iface [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java:[1493,30] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class HBaseHandler [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java:[40,47] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: package org.apache.hadoop.hbase.thrift.generated [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java:[215,40] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class ThriftUtilities [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[741,23] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: interface AsyncIface [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[3666,23] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class AsyncClient [ERROR] 
/usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[3674,14] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class append_call [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[3675,25] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class append_call [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[1951,30] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class Client [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[1957,28] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class Client [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[53476,11] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class append_args [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[53553,6] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class append_args [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[53580,11] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class append_args [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[53587,33] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class append_args [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[53544,98] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class append_args [ERROR] 
/usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[53564,26] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class append_args [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[53613,21] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class append_args [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[53765,36] error: cannot find symbol [ERROR] symbol: class TAppend [ERROR] location: class append_argsStandardScheme [ERROR] /usr/src/Hadoop/hbase/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java:[53824,30] error: cannot find symbol [ERROR] -> [Help 1] {noformat} > Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations > --- > > Key: HBASE-10960 > URL: https://i
[jira] [Commented] (HBASE-11080) TestZKSecretWatcher#testKeyUpdate occasionally fails
[ https://issues.apache.org/jira/browse/HBASE-11080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981690#comment-13981690 ] Andrew Purtell commented on HBASE-11080: Do we keep filing these types of issues as builds.apache.org gets slower and slower? I run the 0.98 test suite each day 20 times using JDK 6 and JDK 7 and never see this. But anyway a patch increasing the number of attempts can't hurt. > TestZKSecretWatcher#testKeyUpdate occasionally fails > > > Key: HBASE-11080 > URL: https://issues.apache.org/jira/browse/HBASE-11080 > Project: HBase > Issue Type: Test >Affects Versions: 0.98.1 >Reporter: Ted Yu >Priority: Minor > > From > https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/280/testReport/junit/org.apache.hadoop.hbase.security.token/TestZKSecretWatcher/testKeyUpdate/ > : > {code} > java.lang.AssertionError > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertNotNull(Assert.java:621) > at org.junit.Assert.assertNotNull(Assert.java:631) > at > org.apache.hadoop.hbase.security.token.TestZKSecretWatcher.testKeyUpdate(TestZKSecretWatcher.java:221) > {code} > Here is the assertion that failed: > {code} > assertNotNull(newMaster); > {code} > Looks like new master did not come up within 5 tries. > One potential fix is to increase the number of attempts. -- This message was sent by Atlassian JIRA (v6.2#6252)
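The proposed fix — polling more times before asserting that the new master came up — is a standard retry loop. A generic sketch (names are illustrative, not the actual test code):

```java
import java.util.function.Supplier;

public class RetryUntilNonNull {
    /**
     * Polls the supplier up to maxAttempts times, sleeping between tries,
     * and returns the first non-null result (or null if attempts run out).
     */
    public static <T> T retry(Supplier<T> poll, int maxAttempts, long sleepMillis) {
        for (int i = 0; i < maxAttempts; i++) {
            T value = poll.get();
            if (value != null) {
                return value;
            }
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve interrupt status
                return null;
            }
        }
        return null;
    }

    /** Demo supplier: yields null until the n-th call, then the given value. */
    public static Supplier<String> afterNCalls(int n, String value) {
        int[] calls = {0};
        return () -> ++calls[0] >= n ? value : null;
    }
}
```

Raising maxAttempts makes the test tolerant of a slow build host at the cost of a longer worst-case runtime on genuine failure — which is the trade-off the comment above is weighing.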
[jira] [Created] (HBASE-11082) Potential unclosed TraceScope in FSHLog#replaceWriter()
Ted Yu created HBASE-11082: -- Summary: Potential unclosed TraceScope in FSHLog#replaceWriter() Key: HBASE-11082 URL: https://issues.apache.org/jira/browse/HBASE-11082 Project: HBase Issue Type: Bug Reporter: Ted Yu Priority: Minor In the finally block starting at line 924: {code} } finally { // Let the writer thread go regardless, whether error or not. if (zigzagLatch != null) { zigzagLatch.releaseSafePoint(); // It will be null if we failed our wait on safe point above. if (syncFuture != null) blockOnSync(syncFuture); } scope.close(); {code} If blockOnSync() throws IOException, the TraceScope would be left unclosed. -- This message was sent by Atlassian JIRA (v6.2#6252)
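The leak shape in that finally block, and the usual nested try/finally fix, can be demonstrated in plain Java. The Scope class below is a toy stand-in for htrace's TraceScope (it only records whether close() ran), and the method bodies are simplified to the control-flow skeleton — none of this is the actual FSHLog code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class TraceScopeLeakDemo {
    /** Minimal stand-in for htrace's TraceScope; only records close(). */
    public static final class Scope {
        public final AtomicBoolean closed = new AtomicBoolean(false);
        public void close() { closed.set(true); }
    }

    /** Stand-in for blockOnSync(); always fails, like the IOException case in the report. */
    static void blockOnSync() { throw new RuntimeException("simulated sync failure"); }

    /** Buggy shape: close() sits after a call that can throw, so it is skipped. */
    public static boolean buggy(Scope scope) {
        try {
            try {
                // ... writer replacement work elided ...
            } finally {
                blockOnSync();   // throws
                scope.close();   // never reached -> leaked scope
            }
        } catch (RuntimeException ignored) { }
        return scope.closed.get();
    }

    /** Fixed shape: nest the risky call so close() runs unconditionally. */
    public static boolean fixed(Scope scope) {
        try {
            try {
                try {
                    // ... writer replacement work elided ...
                } finally {
                    blockOnSync(); // throws
                }
            } finally {
                scope.close();   // guaranteed, even when blockOnSync() throws
            }
        } catch (RuntimeException ignored) { }
        return scope.closed.get();
    }
}
```

Running the two shapes against a throwing blockOnSync() shows the difference: buggy() leaves the scope unclosed, fixed() closes it despite the exception.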
[jira] [Commented] (HBASE-11079) Normalize test tools across branches
[ https://issues.apache.org/jira/browse/HBASE-11079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981671#comment-13981671 ] Andrew Purtell commented on HBASE-11079: Was thinking along those lines. A one shot sync. Might be worth as a dev process pasting-over from trunk to 0.98, fixing up, then pasting-over from 0.98 to 0.96, fixing up, then pasting-over from 0.96 to 0.94. The outcome would be six patches, one for each branch for each tool (and helper classes), for independent review and commit. It would be a fair amount of work but I'd be interested in doing it and then some drag racing among the branches on the same test cluster. > Normalize test tools across branches > > > Key: HBASE-11079 > URL: https://issues.apache.org/jira/browse/HBASE-11079 > Project: HBase > Issue Type: Umbrella >Reporter: Andrew Purtell > > Will be a challenge wherever the branches vary functionally, but it would be > good to normalize the test tools (LoadTestTool and PerformanceEvaluation) as > much as possible among the active branches so we can compare them. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-5554) "hadoop.native.lib" config is deprecated
[ https://issues.apache.org/jira/browse/HBASE-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Esteban Gutierrez updated HBASE-5554: - Resolution: Duplicate Status: Resolved (was: Patch Available) Dup of HBASE-5697 > "hadoop.native.lib" config is deprecated > > > Key: HBASE-5554 > URL: https://issues.apache.org/jira/browse/HBASE-5554 > Project: HBase > Issue Type: Task >Reporter: Ted Yu >Assignee: Esteban Gutierrez > Fix For: 0.99.0, 0.96.3, 0.98.3 > > Attachments: HBASE-5554.v0.patch > > > When using HBase shell, we see: > {code} > 12/03/09 09:06:58 WARN conf.Configuration: hadoop.native.lib is deprecated. > Instead, use io.native.lib.available > {code} > "io.native.lib.available" should be used. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-5554) "hadoop.native.lib" config is deprecated
[ https://issues.apache.org/jira/browse/HBASE-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Esteban Gutierrez updated HBASE-5554: - Fix Version/s: 0.98.3 0.96.3 0.99.0 Status: Patch Available (was: Open) > "hadoop.native.lib" config is deprecated > > > Key: HBASE-5554 > URL: https://issues.apache.org/jira/browse/HBASE-5554 > Project: HBase > Issue Type: Task >Reporter: Ted Yu >Assignee: Esteban Gutierrez > Fix For: 0.99.0, 0.96.3, 0.98.3 > > Attachments: HBASE-5554.v0.patch > > > When using HBase shell, we see: > {code} > 12/03/09 09:06:58 WARN conf.Configuration: hadoop.native.lib is deprecated. > Instead, use io.native.lib.available > {code} > "io.native.lib.available" should be used. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-5554) "hadoop.native.lib" config is deprecated
[ https://issues.apache.org/jira/browse/HBASE-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Esteban Gutierrez updated HBASE-5554: - Attachment: HBASE-5554.v0.patch Since {{hadoop.native.lib}} has been deprecated for both hadoop1 and hadoop2 the fix should be fine. > "hadoop.native.lib" config is deprecated > > > Key: HBASE-5554 > URL: https://issues.apache.org/jira/browse/HBASE-5554 > Project: HBase > Issue Type: Task >Reporter: Ted Yu > Attachments: HBASE-5554.v0.patch > > > When using HBase shell, we see: > {code} > 12/03/09 09:06:58 WARN conf.Configuration: hadoop.native.lib is deprecated. > Instead, use io.native.lib.available > {code} > "io.native.lib.available" should be used. -- This message was sent by Atlassian JIRA (v6.2#6252)
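The warning quoted in the issue comes from Hadoop's key-deprecation handling, which can be modeled roughly as follows (a toy model only; the class and method names are illustrative stand-ins, and the real logic lives inside org.apache.hadoop.conf.Configuration):

```java
import java.util.HashMap;
import java.util.Map;

public class DeprecatedKeySketch {
    // Hypothetical stand-in for Hadoop's deprecation table: old name -> new name.
    static final Map<String, String> RENAMES = new HashMap<>();
    static {
        RENAMES.put("hadoop.native.lib", "io.native.lib.available");
    }

    static final Map<String, String> values = new HashMap<>();

    static String canonical(String key) {
        return RENAMES.getOrDefault(key, key);
    }

    static void set(String key, String value) {
        if (RENAMES.containsKey(key)) {
            // Mirrors the WARN seen in the HBase shell.
            System.out.println("WARN: " + key + " is deprecated. Instead, use "
                + RENAMES.get(key));
        }
        values.put(canonical(key), value);
    }

    static String get(String key) {
        return values.get(canonical(key));
    }

    public static void main(String[] args) {
        set("hadoop.native.lib", "true");  // old name: triggers the WARN above
        // The fix in the patch is simply to use the new name directly.
        System.out.println(get("io.native.lib.available"));
    }
}
```

Because reads and writes funnel through the canonical name, switching HBase to `io.native.lib.available` changes nothing functionally on either hadoop1 or hadoop2; it only silences the warning.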
[jira] [Assigned] (HBASE-5554) "hadoop.native.lib" config is deprecated
[ https://issues.apache.org/jira/browse/HBASE-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Esteban Gutierrez reassigned HBASE-5554: Assignee: Esteban Gutierrez > "hadoop.native.lib" config is deprecated > > > Key: HBASE-5554 > URL: https://issues.apache.org/jira/browse/HBASE-5554 > Project: HBase > Issue Type: Task >Reporter: Ted Yu >Assignee: Esteban Gutierrez > Attachments: HBASE-5554.v0.patch > > > When using HBase shell, we see: > {code} > 12/03/09 09:06:58 WARN conf.Configuration: hadoop.native.lib is deprecated. > Instead, use io.native.lib.available > {code} > "io.native.lib.available" should be used. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11081) Trunk Master won't start; looking for Constructor that takes conf only
[ https://issues.apache.org/jira/browse/HBASE-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981652#comment-13981652 ] Mikhail Antonov commented on HBASE-11081: - Ouch :( Should we have a second constructor in HMaster just as for HRS? > Trunk Master won't start; looking for Constructor that takes conf only > -- > > Key: HBASE-11081 > URL: https://issues.apache.org/jira/browse/HBASE-11081 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack > Fix For: 0.99.0 > > Attachments: 11081.txt > > > Committing the Consensus Infra, we broke starting master. Small fix so > constructMaster passes in a ConsensusProvider. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-5697) Audit HBase for usage of deprecated hadoop 0.20.x property names.
[ https://issues.apache.org/jira/browse/HBASE-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Srikanth Srungarapu updated HBASE-5697: --- Attachment: HBASE-5697_v3.patch Uploading the same patch for HadoopQA bot to pick it up. > Audit HBase for usage of deprecated hadoop 0.20.x property names. > - > > Key: HBASE-5697 > URL: https://issues.apache.org/jira/browse/HBASE-5697 > Project: HBase > Issue Type: Task >Reporter: Jonathan Hsieh >Assignee: Srikanth Srungarapu > Labels: noob > Fix For: 0.99.0 > > Attachments: HBASE-5697.patch, HBASE-5697_v2.patch, > HBASE-5697_v3.patch, deprecated_properties > > > Many xml config properties in Hadoop have changed in 0.23. We should audit > hbase to insulate it from hadoop property name changes. > Here is a list of the hadoop property name changes: > http://hadoop.apache.org/common/docs/r0.23.1/hadoop-project-dist/hadoop-common/DeprecatedProperties.html -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10918) [VisibilityController] System table backed ScanLabelGenerator
[ https://issues.apache.org/jira/browse/HBASE-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10918: --- Status: Patch Available (was: Open) > [VisibilityController] System table backed ScanLabelGenerator > -- > > Key: HBASE-10918 > URL: https://issues.apache.org/jira/browse/HBASE-10918 > Project: HBase > Issue Type: Sub-task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 0.99.0, 0.98.2 > > Attachments: HBASE-10918.patch, HBASE-10918_1.patch > > > A ScanLabelGenerator that retrieves a static set of authorizations for a user > or group from a new HBase system table, and insures these auths are part of > the effective set. > Useful for forcing a baseline set of auths for a user. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10918) [VisibilityController] System table backed ScanLabelGenerator
[ https://issues.apache.org/jira/browse/HBASE-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10918: --- Attachment: HBASE-10918_1.patch Patch 10918_1 adds a new test. Passes locally. What I will commit soon. > [VisibilityController] System table backed ScanLabelGenerator > -- > > Key: HBASE-10918 > URL: https://issues.apache.org/jira/browse/HBASE-10918 > Project: HBase > Issue Type: Sub-task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 0.99.0, 0.98.2 > > Attachments: HBASE-10918.patch, HBASE-10918_1.patch > > > A ScanLabelGenerator that retrieves a static set of authorizations for a user > or group from a new HBase system table, and insures these auths are part of > the effective set. > Useful for forcing a baseline set of auths for a user. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10918) [VisibilityController] System table backed ScanLabelGenerator
[ https://issues.apache.org/jira/browse/HBASE-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10918: --- Status: Open (was: Patch Available) > [VisibilityController] System table backed ScanLabelGenerator > -- > > Key: HBASE-10918 > URL: https://issues.apache.org/jira/browse/HBASE-10918 > Project: HBase > Issue Type: Sub-task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 0.99.0, 0.98.2 > > Attachments: HBASE-10918.patch > > > A ScanLabelGenerator that retrieves a static set of authorizations for a user > or group from a new HBase system table, and insures these auths are part of > the effective set. > Useful for forcing a baseline set of auths for a user. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11079) Normalize test tools across branches
[ https://issues.apache.org/jira/browse/HBASE-11079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981637#comment-13981637 ] Nick Dimiduk commented on HBASE-11079: -- Re: PerfEval, 0.94 has been left behind. I tried backporting patches on a couple instances; it's a struggle. May be worth trying a blanket paste-over from 0.96. > Normalize test tools across branches > > > Key: HBASE-11079 > URL: https://issues.apache.org/jira/browse/HBASE-11079 > Project: HBase > Issue Type: Umbrella >Reporter: Andrew Purtell > > Will be a challenge wherever the branches vary functionally, but it would be > good to normalize the test tools (LoadTestTool and PerformanceEvaluation) as > much as possible among the active branches so we can compare them. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-11081) Trunk Master won't start; looking for Constructor that takes conf only
[ https://issues.apache.org/jira/browse/HBASE-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-11081: -- Attachment: 11081.txt Small patch. Tried it on cluster and it works. Going to commit. > Trunk Master won't start; looking for Constructor that takes conf only > -- > > Key: HBASE-11081 > URL: https://issues.apache.org/jira/browse/HBASE-11081 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack > Fix For: 0.99.0 > > Attachments: 11081.txt > > > Committing the Consensus Infra, we broke starting master. Small fix so > constructMaster passes in a ConsensusProvider. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-5697) Audit HBase for usage of deprecated hadoop 0.20.x property names.
[ https://issues.apache.org/jira/browse/HBASE-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Srikanth Srungarapu updated HBASE-5697: --- Fix Version/s: 0.99.0 > Audit HBase for usage of deprecated hadoop 0.20.x property names. > - > > Key: HBASE-5697 > URL: https://issues.apache.org/jira/browse/HBASE-5697 > Project: HBase > Issue Type: Task >Reporter: Jonathan Hsieh >Assignee: Srikanth Srungarapu > Labels: noob > Fix For: 0.99.0 > > Attachments: HBASE-5697.patch, HBASE-5697_v2.patch, > deprecated_properties > > > Many xml config properties in Hadoop have changed in 0.23. We should audit > hbase to insulate it from hadoop property name changes. > Here is a list of the hadoop property name changes: > http://hadoop.apache.org/common/docs/r0.23.1/hadoop-project-dist/hadoop-common/DeprecatedProperties.html -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HBASE-11081) Trunk Master won't start; looking for Constructor that takes conf only
stack created HBASE-11081: - Summary: Trunk Master won't start; looking for Constructor that takes conf only Key: HBASE-11081 URL: https://issues.apache.org/jira/browse/HBASE-11081 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Fix For: 0.99.0 Committing the Consensus Infra, we broke starting master. Small fix so constructMaster passes in a ConsensusProvider. -- This message was sent by Atlassian JIRA (v6.2#6252)
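The failure mode behind HBASE-11081 — a reflective lookup of a configuration-only constructor that no longer exists once the class grows a ConsensusProvider parameter — can be reproduced in miniature (the nested classes below are illustrative stand-ins, not the actual HBase types):

```java
import java.lang.reflect.Constructor;

public class ConstructMasterSketch {
    static class Conf {}
    static class Provider {}

    // After the Consensus Infra commit, the only constructor takes both args.
    static class Master {
        Master(Conf c, Provider p) {}
    }

    // The broken lookup: constructMaster still asked for a (Conf)-only ctor.
    static boolean hasConfOnlyCtor() {
        try {
            Master.class.getDeclaredConstructor(Conf.class);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(hasConfOnlyCtor()); // false: this is the startup failure
        // The fix: look up (and invoke) the constructor with the new signature.
        Constructor<Master> c =
            Master.class.getDeclaredConstructor(Conf.class, Provider.class);
        System.out.println(c.newInstance(new Conf(), new Provider()) != null); // true
    }
}
```

The sketch shows why the Master died at startup rather than at compile time: the constructor is resolved reflectively, so the signature mismatch only surfaces when constructMaster runs.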
[jira] [Commented] (HBASE-11038) Filtered scans can bypass metrics collection
[ https://issues.apache.org/jira/browse/HBASE-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981626#comment-13981626 ] Hudson commented on HBASE-11038: SUCCESS: Integrated in hbase-0.96 #394 (See [https://builds.apache.org/job/hbase-0.96/394/]) HBASE-11038 Filtered scans can bypass metrics collection (ndimiduk: rev 1590069) * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java > Filtered scans can bypass metrics collection > > > Key: HBASE-11038 > URL: https://issues.apache.org/jira/browse/HBASE-11038 > Project: HBase > Issue Type: Bug > Components: Scanners >Affects Versions: 0.96.2, 0.98.1, 0.99.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 0.99.0, 0.98.2, 0.96.3 > > Attachments: HBASE-11038.00.patch, HBASE-11038.01.96.patch, > HBASE-11038.01.98.patch, HBASE-11038.01.patch > > > In RegionScannerImpl#nextRaw, after a batch of results are retrieved, > delegates to the filter regarding continuation of the scan. If > filterAllRemaining returns true, the method exits immediately, without > calling MetricsRegion#updateNextScan. -- This message was sent by Atlassian JIRA (v6.2#6252)
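The bug shape described in the report can be sketched in isolation: an early return on filterAllRemaining leaves the metrics update unreachable. The finally-based variant below is one way to guarantee the update on every exit path (a sketch of the pattern, not the committed HRegion change):

```java
import java.util.concurrent.atomic.AtomicLong;

public class ScanMetricsSketch {
    static final AtomicLong nextScanCalls = new AtomicLong();

    // Buggy shape: the metrics update is skipped when the filter ends the scan.
    static boolean nextRawBuggy(boolean filterAllRemaining) {
        if (filterAllRemaining) {
            return false;                 // early exit bypasses the update
        }
        nextScanCalls.incrementAndGet();  // only reached on the normal path
        return true;
    }

    // Fixed shape: the update runs regardless of which path returns.
    static boolean nextRawFixed(boolean filterAllRemaining) {
        try {
            return !filterAllRemaining;
        } finally {
            nextScanCalls.incrementAndGet();
        }
    }

    public static void main(String[] args) {
        nextRawBuggy(true);   // filtered scan goes uncounted
        long afterBuggy = nextScanCalls.get();
        nextRawFixed(true);   // filtered scan is counted
        long afterFixed = nextScanCalls.get();
        System.out.println(afterBuggy + " " + afterFixed);
    }
}
```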
[jira] [Updated] (HBASE-10960) Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations
[ https://issues.apache.org/jira/browse/HBASE-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-10960: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations > --- > > Key: HBASE-10960 > URL: https://issues.apache.org/jira/browse/HBASE-10960 > Project: HBase > Issue Type: Improvement > Components: Thrift >Reporter: Srikanth Srungarapu >Assignee: Srikanth Srungarapu > Fix For: 0.99.0 > > Attachments: HBASE-10960.patch, hbase-10960.v3.patch > > > Both append, and checkAndPut functionalities are available in Thrift 2 > interface, but not in Thrift. So, adding the support for these > functionalities in Thrift1 too. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10960) Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations
[ https://issues.apache.org/jira/browse/HBASE-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981617#comment-13981617 ] Jonathan Hsieh commented on HBASE-10960: Since this only affects thrift, I ran TestThriftServer with the patch applied and it passed. Thanks for the patch Srikanth, committing to trunk. > Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations > --- > > Key: HBASE-10960 > URL: https://issues.apache.org/jira/browse/HBASE-10960 > Project: HBase > Issue Type: Improvement > Components: Thrift >Reporter: Srikanth Srungarapu >Assignee: Srikanth Srungarapu > Fix For: 0.99.0 > > Attachments: HBASE-10960.patch, hbase-10960.v3.patch > > > Both append, and checkAndPut functionalities are available in Thrift 2 > interface, but not in Thrift. So, adding the support for these > functionalities in Thrift1 too. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10958) [dataloss] Bulk loading with seqids can prevent some log entries from being replayed
[ https://issues.apache.org/jira/browse/HBASE-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Daniel Cryans updated HBASE-10958: --- Release Note: Bulk loading with sequence IDs, an option in late 0.94 releases and the default since 0.96.0, will now trigger a flush per region that loads an HFile (if there's data that needs to be flushed). Committed to 0.96 and up. Like for HBASE-11008, I'm waiting to commit to 0.94 or I can open a backport jira. > [dataloss] Bulk loading with seqids can prevent some log entries from being > replayed > > > Key: HBASE-10958 > URL: https://issues.apache.org/jira/browse/HBASE-10958 > Project: HBase > Issue Type: Bug >Affects Versions: 0.96.2, 0.98.1, 0.94.18 >Reporter: Jean-Daniel Cryans >Assignee: Jean-Daniel Cryans >Priority: Blocker > Fix For: 0.99.0, 0.98.2, 0.96.3, 0.94.20 > > Attachments: HBASE-10958-0.94.patch, > HBASE-10958-less-intrusive-hack-0.96.patch, > HBASE-10958-quick-hack-0.96.patch, HBASE-10958-v2.patch, > HBASE-10958-v3.patch, HBASE-10958.patch > > > We found an issue with bulk loads causing data loss when assigning sequence > ids (HBASE-6630) that is triggered when replaying recovered edits. We're > nicknaming this issue *Blindspot*. > The problem is that the sequence id given to a bulk loaded file is higher > than those of the edits in the region's memstore. When replaying recovered > edits, the rule to skip some of them is that they have to be _lower than the > highest sequence id_. In other words, the edits that have a sequence id lower > than the highest one in the store files *should* have also been flushed. This > is not the case with bulk loaded files since we now have an HFile with a > sequence id higher than unflushed edits. > The log recovery code takes this into account by simply skipping the bulk > loaded files, but this "bulk loaded status" is *lost* on compaction. The > edits in the logs that have a sequence id lower than the bulk loaded file > that got compacted are put in a blind spot and are skipped during replay. > Here's the easiest way to recreate this issue: > - Create an empty table > - Put one row in it (let's say it gets seqid 1) > - Bulk load one file (it gets seqid 2). I used ImportTsv and set > hbase.mapreduce.bulkload.assign.sequenceNumbers. > - Bulk load a second file the same way (it gets seqid 3). > - Major compact the table (the new file has seqid 3 and isn't considered > bulk loaded). > - Kill the region server that holds the table's region. > - Scan the table once the region is made available again. The first row, at > seqid 1, will be missing since the HFile with seqid 3 makes us believe that > everything that came before it was flushed. -- This message was sent by Atlassian JIRA (v6.2#6252)
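The replay rule at the heart of the blind spot can be modeled in a few lines (an illustrative simulation of the rule as described in the issue, not HBase's actual recovery code):

```java
public class SeqIdBlindspotDemo {
    // Replay rule from the issue: an edit is skipped when its sequence id is
    // not higher than the highest sequence id seen in the store files.
    static boolean skippedOnReplay(long editSeqId, long maxStoreSeqId) {
        return editSeqId <= maxStoreSeqId;
    }

    public static void main(String[] args) {
        long unflushedPutSeqId = 1;   // the row sitting in the memstore
        long bulkLoadSeqId = 3;       // the second bulk-loaded file

        // Before compaction, recovery recognizes and skips the bulk-loaded
        // files, so the edit is compared against flushed data only (none here).
        boolean lostBefore = skippedOnReplay(unflushedPutSeqId, 0);

        // After major compaction the "bulk loaded" flag is gone, so the
        // compacted file's seqid 3 masks the unflushed edit at seqid 1.
        boolean lostAfter = skippedOnReplay(unflushedPutSeqId, bulkLoadSeqId);

        System.out.println(lostBefore + " " + lostAfter); // false true
    }
}
```

The per-region flush added by the fix closes the gap: once the memstore is flushed before (or at) the bulk load, no unflushed edit can have a sequence id below the loaded file's.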
[jira] [Commented] (HBASE-11053) Change DeleteTracker APIs to work with Cell
[ https://issues.apache.org/jira/browse/HBASE-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981582#comment-13981582 ] Andrew Purtell commented on HBASE-11053: bq. If I commit this to 0.98 one thing is as there is no Cell related changes in 0.98, so passing kv to the DeleteTracker would make it do Kv.getqualifierOffset() , kv.getQualifierLength(), kv.getTimeStamp(), kv.getTypeByte() once again which is already extracted out in the SQM itself. Yes, well we need this in 0.98 so please consider making the change and attach what you commit here. +1 if what you describe above is the only change necessary. > Change DeleteTracker APIs to work with Cell > --- > > Key: HBASE-11053 > URL: https://issues.apache.org/jira/browse/HBASE-11053 > Project: HBase > Issue Type: Sub-task >Affects Versions: 0.98.1, 0.99.0 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 0.99.0, 0.98.2 > > Attachments: HBASE-11053.patch, HBASE-11053_0.98.patch, > HBASE-11053_1.patch > > > DeleteTracker interface (marked as Private) should work with Cells. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-5697) Audit HBase for usage of deprecated hadoop 0.20.x property names.
[ https://issues.apache.org/jira/browse/HBASE-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981597#comment-13981597 ] Jonathan Hsieh commented on HBASE-5697: --- np. I'm +1 for the patch if it applies and passes. Can you upload an updated version and kick off the hadoopqa build bot again? I'd like to see the build work before I commit. > Audit HBase for usage of deprecated hadoop 0.20.x property names. > - > > Key: HBASE-5697 > URL: https://issues.apache.org/jira/browse/HBASE-5697 > Project: HBase > Issue Type: Task >Reporter: Jonathan Hsieh >Assignee: Srikanth Srungarapu > Labels: noob > Attachments: HBASE-5697.patch, HBASE-5697_v2.patch, > deprecated_properties > > > Many xml config properties in Hadoop have changed in 0.23. We should audit > hbase to insulate it from hadoop property name changes. > Here is a list of the hadoop property name changes: > http://hadoop.apache.org/common/docs/r0.23.1/hadoop-project-dist/hadoop-common/DeprecatedProperties.html -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10960) Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations
[ https://issues.apache.org/jira/browse/HBASE-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-10960: --- Fix Version/s: (was: 0.98.2) 0.99.0 Hadoop Flags: Reviewed Status: Patch Available (was: Open) > Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations > --- > > Key: HBASE-10960 > URL: https://issues.apache.org/jira/browse/HBASE-10960 > Project: HBase > Issue Type: Improvement > Components: Thrift >Reporter: Srikanth Srungarapu >Assignee: Srikanth Srungarapu > Fix For: 0.99.0 > > Attachments: HBASE-10960.patch, hbase-10960.v3.patch > > > Both append, and checkAndPut functionalities are available in Thrift 2 > interface, but not in Thrift. So, adding the support for these > functionalities in Thrift1 too. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10960) Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations
[ https://issues.apache.org/jira/browse/HBASE-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-10960: --- Attachment: hbase-10960.v3.patch Attaching +1'ed patch for hadoopqa to test. > Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations > --- > > Key: HBASE-10960 > URL: https://issues.apache.org/jira/browse/HBASE-10960 > Project: HBase > Issue Type: Improvement > Components: Thrift >Reporter: Srikanth Srungarapu >Assignee: Srikanth Srungarapu > Fix For: 0.98.2 > > Attachments: HBASE-10960.patch, hbase-10960.v3.patch > > > Both append, and checkAndPut functionalities are available in Thrift 2 > interface, but not in Thrift. So, adding the support for these > functionalities in Thrift1 too. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HBASE-11080) TestZKSecretWatcher#testKeyUpdate occasionally fails
Ted Yu created HBASE-11080: -- Summary: TestZKSecretWatcher#testKeyUpdate occasionally fails Key: HBASE-11080 URL: https://issues.apache.org/jira/browse/HBASE-11080 Project: HBase Issue Type: Test Affects Versions: 0.98.1 Reporter: Ted Yu Priority: Minor From https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/280/testReport/junit/org.apache.hadoop.hbase.security.token/TestZKSecretWatcher/testKeyUpdate/ : {code} java.lang.AssertionError at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertNotNull(Assert.java:621) at org.junit.Assert.assertNotNull(Assert.java:631) at org.apache.hadoop.hbase.security.token.TestZKSecretWatcher.testKeyUpdate(TestZKSecretWatcher.java:221) {code} Here is the assertion that failed: {code} assertNotNull(newMaster); {code} Looks like the new master did not come up within 5 tries. One potential fix is to increase the number of attempts. -- This message was sent by Atlassian JIRA (v6.2#6252)
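The suggested fix — allowing more attempts before giving up — amounts to a bounded retry loop like the one below (a generic sketch; TestZKSecretWatcher's actual polling helper may differ):

```java
public class RetryPollSketch {
    interface Probe { Object poll(); }

    // Poll up to maxAttempts times, sleeping between tries; null means "not yet".
    static Object waitFor(Probe probe, int maxAttempts, long sleepMs) {
        for (int i = 0; i < maxAttempts; i++) {
            Object result = probe.poll();
            if (result != null) return result;
            try {
                Thread.sleep(sleepMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
        return null; // the caller's assertNotNull fails on this
    }

    public static void main(String[] args) {
        // Simulated flakiness: the new master only shows up on the 7th poll,
        // so 5 attempts miss it while 10 attempts find it.
        final int[] calls = {0};
        Probe slowMaster = () -> (++calls[0] >= 7) ? "master" : null;
        Object missed = waitFor(slowMaster, 5, 1L);
        calls[0] = 0;
        Object found = waitFor(slowMaster, 10, 1L);
        System.out.println(missed + " " + found); // null master
    }
}
```

Raising the attempt count (or the sleep between attempts) trades test latency for robustness against a slow leader election; a timeout-based loop would be an equivalent alternative.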
[jira] [Commented] (HBASE-11050) Replace empty catch block in TestHLog#testFailedToCreateHLogIfParentRenamed with @Test(expected=)
[ https://issues.apache.org/jira/browse/HBASE-11050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981566#comment-13981566 ] Ted Yu commented on HBASE-11050: I think this would be closed when hbase 1.0 is released. > Replace empty catch block in TestHLog#testFailedToCreateHLogIfParentRenamed > with @Test(expected=) > -- > > Key: HBASE-11050 > URL: https://issues.apache.org/jira/browse/HBASE-11050 > Project: HBase > Issue Type: Task >Reporter: Gustavo Anatoly >Assignee: Gustavo Anatoly >Priority: Trivial > Fix For: 0.99.0 > > Attachments: HBASE-11050.patch > > > This change refactor TestHLog#testFailedToCreateHLogIfParentRenamed. The test > basically create {{HLogFactory.createWALWriter(fs, path, conf);}} and after > that parent {{path}} is renamed followed to another call > {{HLogFactory.createWALWriter(fs, path, conf);}} in this moment is expected > IOException, because the parent {{path}} doesn't exist more. > The second call monitored by a block try-catch with an empty {{catch}}: > {code} > try { > HLogFactory.createWALWriter(fs, path, conf); > fail("It should fail to create the new WAL"); > } catch (IOException ioe) { > // expected, good. > } > {code} > The patch proposed removes the {{try-catch}} and use > {{@Test(expected=IOException.class)}} to capture exception produced by the > test. > -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11038) Filtered scans can bypass metrics collection
[ https://issues.apache.org/jira/browse/HBASE-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981541#comment-13981541 ] Hudson commented on HBASE-11038: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #280 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/280/]) HBASE-11038 Filtered scans can bypass metrics collection (ndimiduk: rev 1590068) * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java > Filtered scans can bypass metrics collection > > > Key: HBASE-11038 > URL: https://issues.apache.org/jira/browse/HBASE-11038 > Project: HBase > Issue Type: Bug > Components: Scanners >Affects Versions: 0.96.2, 0.98.1, 0.99.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 0.99.0, 0.98.2, 0.96.3 > > Attachments: HBASE-11038.00.patch, HBASE-11038.01.96.patch, > HBASE-11038.01.98.patch, HBASE-11038.01.patch > > > In RegionScannerImpl#nextRaw, after a batch of results are retrieved, > delegates to the filter regarding continuation of the scan. If > filterAllRemaining returns true, the method exits immediately, without > calling MetricsRegion#updateNextScan. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-11008) Align bulk load, flush, and compact to require Action.CREATE
[ https://issues.apache.org/jira/browse/HBASE-11008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Daniel Cryans updated HBASE-11008: --- Release Note: preBulkLoadHFile now requires CREATE, which it effectively already needed since getTableDescriptor also requires it which is what LoadIncrementalHFiles is doing before bulk loading. compact and flush can now be issued by users with CREATE permission. I committed to 0.96 and up. I'm waiting on [~lhofhansl]'s +1 for 0.94 (or I can create a backport jira). > Align bulk load, flush, and compact to require Action.CREATE > > > Key: HBASE-11008 > URL: https://issues.apache.org/jira/browse/HBASE-11008 > Project: HBase > Issue Type: Improvement > Components: security >Reporter: Jean-Daniel Cryans >Assignee: Jean-Daniel Cryans > Fix For: 0.99.0, 0.98.2, 0.96.3, 0.94.20 > > Attachments: HBASE-11008-0.94.patch, HBASE-11008-v2.patch, > HBASE-11008-v3.patch, HBASE-11008.patch > > > Over in HBASE-10958 we noticed that it might make sense to require > Action.CREATE for bulk load, flush, and compact since it is also required for > things like enable and disable. > This means the following changes: > - preBulkLoadHFile goes from WRITE to CREATE > - compact/flush go from ADMIN to ADMIN or CREATE -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10915) Decouple region closing (HM and HRS) from ZK
[ https://issues.apache.org/jira/browse/HBASE-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Antonov updated HBASE-10915: Status: Patch Available (was: Open) > Decouple region closing (HM and HRS) from ZK > > > Key: HBASE-10915 > URL: https://issues.apache.org/jira/browse/HBASE-10915 > Project: HBase > Issue Type: Sub-task > Components: Consensus, Zookeeper >Affects Versions: 0.99.0 >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov > Attachments: HBASE-10915.patch, HBASE-10915.patch, HBASE-10915.patch, > HBASE-10915.patch, HBASE-10915.patch, HBASE-10915.patch, HBASE-10915.patch, > HBASE-10915.patch, HBASE-10915.patch > > > Decouple region closing from ZK. > Includes RS side (CloseRegionHandler), HM side (ClosedRegionHandler) and the > code using (HRegionServer, RSRpcServices etc). > May need small changes in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10915) Decouple region closing (HM and HRS) from ZK
[ https://issues.apache.org/jira/browse/HBASE-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Antonov updated HBASE-10915: Status: Open (was: Patch Available) > Decouple region closing (HM and HRS) from ZK > > > Key: HBASE-10915 > URL: https://issues.apache.org/jira/browse/HBASE-10915 > Project: HBase > Issue Type: Sub-task > Components: Consensus, Zookeeper >Affects Versions: 0.99.0 >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov > Attachments: HBASE-10915.patch, HBASE-10915.patch, HBASE-10915.patch, > HBASE-10915.patch, HBASE-10915.patch, HBASE-10915.patch, HBASE-10915.patch, > HBASE-10915.patch, HBASE-10915.patch > > > Decouple region closing from ZK. > Includes RS side (CloseRegionHandler), HM side (ClosedRegionHandler) and the > code using (HRegionServer, RSRpcServices etc). > May need small changes in AssignmentManager. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11059) ZK-less region assignment
[ https://issues.apache.org/jira/browse/HBASE-11059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981504#comment-13981504 ] Mikhail Antonov commented on HBASE-11059: - [~jxiang] thanks for the description! Looking forward to the writeup. > ZK-less region assignment > - > > Key: HBASE-11059 > URL: https://issues.apache.org/jira/browse/HBASE-11059 > Project: HBase > Issue Type: Improvement > Components: master, Region Assignment >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > > It seems that most people don't like region assignment with ZK (HBASE-5487), > which causes many uncertainties. This jira is to support ZK-less region > assignment. We need to make sure this patch doesn't break backward > compatibility/rolling upgrade. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Assigned] (HBASE-11072) Abstract WAL splitting from ZK
[ https://issues.apache.org/jira/browse/HBASE-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Antonov reassigned HBASE-11072: --- Assignee: Mikhail Antonov > Abstract WAL splitting from ZK > -- > > Key: HBASE-11072 > URL: https://issues.apache.org/jira/browse/HBASE-11072 > Project: HBase > Issue Type: Sub-task > Components: Consensus, Zookeeper >Affects Versions: 0.99.0 >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov > > HM side: > - SplitLogManager > RS side: > - SplitLogWorker > - HLogSplitter and a few handler classes. > This jira may need to be split further apart into smaller ones. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11074) Have PE emit histogram stats as it runs rather than dump once at end of test
[ https://issues.apache.org/jira/browse/HBASE-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981496#comment-13981496 ] Andrew Purtell commented on HBASE-11074: Thanks [~stack] and [~ndimiduk]. I filed HBASE-11079 and will look for more related. > Have PE emit histogram stats as it runs rather than dump once at end of test > > > Key: HBASE-11074 > URL: https://issues.apache.org/jira/browse/HBASE-11074 > Project: HBase > Issue Type: Improvement > Components: Performance >Reporter: stack >Assignee: stack >Priority: Minor > Fix For: 0.99.0 > > Attachments: 11074.txt > > > PE emits progress reading and writing. Add to the progress emission current > histogram snapshot readings. Means don' t have to wait till test completes > to get idea of latencies. Here is sample: > {code} > 2014-04-24 22:47:28,085 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/188730/1048576, Min=137.00, Mean=578.74, Max=5152756.00, StdDev=11884.79, > 95th=1590.00, 99th=2950.68 > 2014-04-24 22:47:32,465 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/199215/1048576, Min=137.00, Mean=570.19, Max=5152756.00, StdDev=11591.56, > 95th=1543.00, 99th=2911.00 > 2014-04-24 22:47:37,334 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/209700/1048576, Min=137.00, Mean=564.80, Max=5152756.00, StdDev=11317.96, > 95th=1480.00, 99th=2863.00 > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-11071) Abstract HM admin table handlers from ZK
[ https://issues.apache.org/jira/browse/HBASE-11071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Antonov updated HBASE-11071: Assignee: Konstantin Boudnik > Abstract HM admin table handlers from ZK > > > Key: HBASE-11071 > URL: https://issues.apache.org/jira/browse/HBASE-11071 > Project: HBase > Issue Type: Sub-task > Components: Consensus, Zookeeper >Reporter: Mikhail Antonov >Assignee: Konstantin Boudnik > > Abstract table admin handlers, including: > - CreateTableHandler > - DeleteTableHandler > - EnableTableHandler > - DisableTableHandler -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HBASE-11079) Normalize test tools across branches
Andrew Purtell created HBASE-11079: -- Summary: Normalize test tools across branches Key: HBASE-11079 URL: https://issues.apache.org/jira/browse/HBASE-11079 Project: HBase Issue Type: Umbrella Reporter: Andrew Purtell It will be a challenge wherever the branches vary functionally, but it would be good to normalize the test tools (LoadTestTool and PerformanceEvaluation) as much as possible among the active branches so we can compare them. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11074) Have PE emit histogram stats as it runs rather than dump once at end of test
[ https://issues.apache.org/jira/browse/HBASE-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981487#comment-13981487 ] Hudson commented on HBASE-11074: FAILURE: Integrated in HBase-TRUNK #5117 (See [https://builds.apache.org/job/HBase-TRUNK/5117/]) HBASE-11074 Have PE emit histogram stats as it runs rather than dump once at end of test (stack: rev 1590085) * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java > Have PE emit histogram stats as it runs rather than dump once at end of test > > > Key: HBASE-11074 > URL: https://issues.apache.org/jira/browse/HBASE-11074 > Project: HBase > Issue Type: Improvement > Components: Performance >Reporter: stack >Assignee: stack >Priority: Minor > Fix For: 0.99.0 > > Attachments: 11074.txt > > > PE emits progress reading and writing. Add to the progress emission current > histogram snapshot readings. Means we don't have to wait till the test completes > to get an idea of latencies. Here is a sample: > {code} > 2014-04-24 22:47:28,085 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/188730/1048576, Min=137.00, Mean=578.74, Max=5152756.00, StdDev=11884.79, > 95th=1590.00, 99th=2950.68 > 2014-04-24 22:47:32,465 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/199215/1048576, Min=137.00, Mean=570.19, Max=5152756.00, StdDev=11591.56, > 95th=1543.00, 99th=2911.00 > 2014-04-24 22:47:37,334 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/209700/1048576, Min=137.00, Mean=564.80, Max=5152756.00, StdDev=11317.96, > 95th=1480.00, 99th=2863.00 > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11038) Filtered scans can bypass metrics collection
[ https://issues.apache.org/jira/browse/HBASE-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981486#comment-13981486 ] Hudson commented on HBASE-11038: FAILURE: Integrated in HBase-TRUNK #5117 (See [https://builds.apache.org/job/HBase-TRUNK/5117/]) HBASE-11038 Filtered scans can bypass metrics collection (ndimiduk: rev 1590067) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java > Filtered scans can bypass metrics collection > > > Key: HBASE-11038 > URL: https://issues.apache.org/jira/browse/HBASE-11038 > Project: HBase > Issue Type: Bug > Components: Scanners >Affects Versions: 0.96.2, 0.98.1, 0.99.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 0.99.0, 0.98.2, 0.96.3 > > Attachments: HBASE-11038.00.patch, HBASE-11038.01.96.patch, > HBASE-11038.01.98.patch, HBASE-11038.01.patch > > > In RegionScannerImpl#nextRaw, after a batch of results is retrieved, the method > delegates to the filter to decide whether the scan should continue. If > filterAllRemaining returns true, the method exits immediately, without > calling MetricsRegion#updateNextScan. -- This message was sent by Atlassian JIRA (v6.2#6252)
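The bug report above can be illustrated with a minimal sketch. The names below are stand-ins, not the actual HRegion code; the sketch just shows how an early return can skip a metrics update, and the shape of the fix (update the metric on every exit path):

```java
// Illustrative only: a counter standing in for MetricsRegion#updateNextScan.
public class ScanMetricsSketch {
    static int nextScanCalls = 0;

    // Buggy shape: returns before the metrics update when the filter
    // says all remaining rows are filtered out.
    static boolean nextRawBuggy(boolean filterAllRemaining) {
        if (filterAllRemaining) {
            return false;            // early out: the metric is never updated
        }
        nextScanCalls++;             // only reached on the unfiltered path
        return true;
    }

    // Fixed shape: the metrics update happens on every exit path.
    static boolean nextRawFixed(boolean filterAllRemaining) {
        boolean moreRows = !filterAllRemaining;
        nextScanCalls++;             // updated even on the filtered early-out
        return moreRows;
    }

    public static void main(String[] args) {
        nextRawBuggy(true);
        System.out.println("after buggy filtered call: " + nextScanCalls);
        nextRawFixed(true);
        System.out.println("after fixed filtered call: " + nextScanCalls);
    }
}
```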
[jira] [Commented] (HBASE-10873) Control number of regions assigned to backup masters
[ https://issues.apache.org/jira/browse/HBASE-10873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981485#comment-13981485 ] Hudson commented on HBASE-10873: FAILURE: Integrated in HBase-TRUNK #5117 (See [https://builds.apache.org/job/HBase-TRUNK/5117/]) HBASE-10873 Control number of regions assigned to backup masters (jxiang: rev 1590078) * /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/ClusterLoadState.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/FavoredNodeLoadBalancer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/BalancerTestBase.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBaseLoadBalancer.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestDefaultLoadBalancer.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java > Control number of regions assigned to backup masters > > > Key: HBASE-10873 > URL: https://issues.apache.org/jira/browse/HBASE-10873 > Project: HBase > Issue Type: Improvement > Components: Balancer >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Fix For: 0.99.0 > > Attachments: hbase-10873.patch, hbase-10873_v2.patch, > hbase-10873_v3.patch > > > By default, a backup master is treated just like another regionserver. So it > can host as many regions as any other regionserver does. 
When the backup master > becomes the active one, the region balancer needs to move the user regions on > this master to other region servers. To minimize the impact, it's better not > to assign too many regions to backup masters. It may not be good to leave the > backup masters idle, hosting no regions, either. > We should make this adjustable so that users can control how many regions to > assign to each backup master. -- This message was sent by Atlassian JIRA (v6.2#6252)
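The adjustable setting proposed above could look like an hbase-site.xml fragment. The property name below is purely hypothetical, chosen for illustration; the actual name is whatever the committed patch introduces:

```xml
<!-- Hypothetical property name, for illustration only -->
<property>
  <name>hbase.balancer.backupmaster.maxregions</name>
  <value>1</value>
  <description>Maximum number of user regions to assign to each backup
    master. A small value limits how many regions must move when a backup
    master becomes the active one; a nonzero value avoids leaving backup
    masters completely idle.</description>
</property>
```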
[jira] [Resolved] (HBASE-10932) Improve RowCounter to allow mapper number set/control
[ https://issues.apache.org/jira/browse/HBASE-10932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Daniel Cryans resolved HBASE-10932. Resolution: Won't Fix Resolving as won't fix. If you want to work on a more general solution, like adding this option to the TIF, please open a new jira. Thanks. > Improve RowCounter to allow mapper number set/control > - > > Key: HBASE-10932 > URL: https://issues.apache.org/jira/browse/HBASE-10932 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Reporter: Yu Li >Assignee: Yu Li >Priority: Minor > Attachments: HBASE-10932_v1.patch, HBASE-10932_v2.patch > > > The typical use case of RowCounter is to do some kind of data integrity > checking, like after exporting some data from an RDBMS to HBase, or from one > HBase cluster to another, making sure the row (record) counts match. Such > a check commonly has no strict response-time requirements. > Meanwhile, in the current impl, RowCounter launches one mapper per > region, and each mapper sends one scan request. Assuming the table is > fairly big, with tens of regions, and the MR cluster has enough > CPU cores, the parallel scan requests sent by the mappers can be > a real burden for the HBase cluster. > So in this JIRA, we're proposing to make RowCounter support an additional > option "--maps" to specify the mapper number, making each mapper able to scan > more than one region of the target table. -- This message was sent by Atlassian JIRA (v6.2#6252)
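As I read it, the "--maps" proposal above amounts to bucketing the table's regions into at most the requested number of mapper groups, each mapper scanning a contiguous run of regions. A hypothetical sketch (SplitGrouper is not a real HBase class; regions are represented by their indices):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: distribute `regions` region indices into at most
// `maps` contiguous groups with sizes differing by at most one.
public class SplitGrouper {
    static List<List<Integer>> group(int regions, int maps) {
        int groups = Math.min(maps, regions);  // never more groups than regions
        List<List<Integer>> out = new ArrayList<>();
        int start = 0;
        for (int g = 0; g < groups; g++) {
            // earlier groups absorb the remainder, one extra region each
            int size = regions / groups + (g < regions % groups ? 1 : 0);
            List<Integer> bucket = new ArrayList<>();
            for (int r = start; r < start + size; r++) {
                bucket.add(r);
            }
            out.add(bucket);
            start += size;
        }
        return out;
    }

    public static void main(String[] args) {
        // 10 regions scanned by only 3 mappers -> groups of sizes 4, 3, 3
        for (List<Integer> bucket : group(10, 3)) {
            System.out.println(bucket);
        }
    }
}
```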
[jira] [Commented] (HBASE-11074) Have PE emit histogram stats as it runs rather than dump once at end of test
[ https://issues.apache.org/jira/browse/HBASE-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981462#comment-13981462 ] Nick Dimiduk commented on HBASE-11074: -- According to git, 0.96 and 0.98 have equivalent patches applied. Trunk is ahead by at least: HBASE-11055, HBASE-11026, HBASE-11000, HBASE-10997, HBASE-10788. > Have PE emit histogram stats as it runs rather than dump once at end of test > > > Key: HBASE-11074 > URL: https://issues.apache.org/jira/browse/HBASE-11074 > Project: HBase > Issue Type: Improvement > Components: Performance >Reporter: stack >Assignee: stack >Priority: Minor > Fix For: 0.99.0 > > Attachments: 11074.txt > > > PE emits progress reading and writing. Add to the progress emission current > histogram snapshot readings. Means we don't have to wait till the test completes > to get an idea of latencies. Here is a sample: > {code} > 2014-04-24 22:47:28,085 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/188730/1048576, Min=137.00, Mean=578.74, Max=5152756.00, StdDev=11884.79, > 95th=1590.00, 99th=2950.68 > 2014-04-24 22:47:32,465 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/199215/1048576, Min=137.00, Mean=570.19, Max=5152756.00, StdDev=11591.56, > 95th=1543.00, 99th=2911.00 > 2014-04-24 22:47:37,334 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/209700/1048576, Min=137.00, Mean=564.80, Max=5152756.00, StdDev=11317.96, > 95th=1480.00, 99th=2863.00 > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10932) Improve RowCounter to allow mapper number set/control
[ https://issues.apache.org/jira/browse/HBASE-10932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981441#comment-13981441 ] Yu Li commented on HBASE-10932: --- Hi [~jdcryans], If we follow this logic, do you mean the "-m" option of DistCp is also useless? IMHO, the job scheduler configuration in JT/Yarn is a server-side configuration, while the "-m" option is a client-side configuration, and both are necessary. Back to the scheduler discussion, I believe a job scheduler can only limit the maximum resources one user may use, and it's up to the user to decide how to use the resources assigned to him. Like in the example you gave, what if the "slow" pool has 4 slots while only one user submits a rowcounter and he prefers only 2 maps running in parallel? I'm afraid asking the cluster operator to create another "slow" pool with only 2 slots is not a good solution. In a common HBase ETL application, a user would first do distcp, then bulkload, then rowcounter to check data integrity, and he would prefer distcp to run as fast as possible while keeping the scan workload low during rowcounter. In this case, he would need to submit the distcp job to the "fast" queue and the rowcounter job to the "slow" queue? And he also needs access to both queues... Anyway, this is a real requirement from a user in our production env, and I'm just trying to contribute this to the community in case it can help other users. But if you still think it's useless, just go ahead and close it, you're the boss after all. :-) And whatever decision is made, thanks for your time reviewing this JIRA and for the discussion. 
> Improve RowCounter to allow mapper number set/control > - > > Key: HBASE-10932 > URL: https://issues.apache.org/jira/browse/HBASE-10932 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Reporter: Yu Li >Assignee: Yu Li >Priority: Minor > Attachments: HBASE-10932_v1.patch, HBASE-10932_v2.patch > > > The typical use case of RowCounter is to do some kind of data integrity > checking, like after exporting some data from an RDBMS to HBase, or from one > HBase cluster to another, making sure the row (record) counts match. Such > a check commonly has no strict response-time requirements. > Meanwhile, in the current impl, RowCounter launches one mapper per > region, and each mapper sends one scan request. Assuming the table is > fairly big, with tens of regions, and the MR cluster has enough > CPU cores, the parallel scan requests sent by the mappers can be > a real burden for the HBase cluster. > So in this JIRA, we're proposing to make RowCounter support an additional > option "--maps" to specify the mapper number, making each mapper able to scan > more than one region of the target table. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11074) Have PE emit histogram stats as it runs rather than dump once at end of test
[ https://issues.apache.org/jira/browse/HBASE-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981453#comment-13981453 ] stack commented on HBASE-11074: --- bq. Why not backport the histogram support? Sorry. Working on trunk. PE seems to be under a bit of churn at the moment (folks are using it). Let's backport the whole thing when it settles. > Have PE emit histogram stats as it runs rather than dump once at end of test > > > Key: HBASE-11074 > URL: https://issues.apache.org/jira/browse/HBASE-11074 > Project: HBase > Issue Type: Improvement > Components: Performance >Reporter: stack >Assignee: stack >Priority: Minor > Fix For: 0.99.0 > > Attachments: 11074.txt > > > PE emits progress reading and writing. Add to the progress emission current > histogram snapshot readings. Means we don't have to wait till the test completes > to get an idea of latencies. Here is a sample: > {code} > 2014-04-24 22:47:28,085 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/188730/1048576, Min=137.00, Mean=578.74, Max=5152756.00, StdDev=11884.79, > 95th=1590.00, 99th=2950.68 > 2014-04-24 22:47:32,465 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/199215/1048576, Min=137.00, Mean=570.19, Max=5152756.00, StdDev=11591.56, > 95th=1543.00, 99th=2911.00 > 2014-04-24 22:47:37,334 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/209700/1048576, Min=137.00, Mean=564.80, Max=5152756.00, StdDev=11317.96, > 95th=1480.00, 99th=2863.00 > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11074) Have PE emit histogram stats as it runs rather than dump once at end of test
[ https://issues.apache.org/jira/browse/HBASE-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981448#comment-13981448 ] Andrew Purtell commented on HBASE-11074: Why not backport the histogram support? Should tools like this have the same functionality across branches so we can compare them? > Have PE emit histogram stats as it runs rather than dump once at end of test > > > Key: HBASE-11074 > URL: https://issues.apache.org/jira/browse/HBASE-11074 > Project: HBase > Issue Type: Improvement > Components: Performance >Reporter: stack >Assignee: stack >Priority: Minor > Fix For: 0.99.0 > > Attachments: 11074.txt > > > PE emits progress reading and writing. Add to the progress emission current > histogram snapshot readings. Means we don't have to wait till the test completes > to get an idea of latencies. Here is a sample: > {code} > 2014-04-24 22:47:28,085 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/188730/1048576, Min=137.00, Mean=578.74, Max=5152756.00, StdDev=11884.79, > 95th=1590.00, 99th=2950.68 > 2014-04-24 22:47:32,465 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/199215/1048576, Min=137.00, Mean=570.19, Max=5152756.00, StdDev=11591.56, > 95th=1543.00, 99th=2911.00 > 2014-04-24 22:47:37,334 INFO [TestClient-0] hbase.PerformanceEvaluation: > 0/209700/1048576, Min=137.00, Mean=564.80, Max=5152756.00, StdDev=11317.96, > 95th=1480.00, 99th=2863.00 > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11070) [AccessController] Restore early-out access denial if the user has no access at the table or CF level
[ https://issues.apache.org/jira/browse/HBASE-11070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981442#comment-13981442 ] Andrew Purtell commented on HBASE-11070: Opened subtasks > [AccessController] Restore early-out access denial if the user has no access > at the table or CF level > - > > Key: HBASE-11070 > URL: https://issues.apache.org/jira/browse/HBASE-11070 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Andrew Purtell > Fix For: 0.99.0, 0.98.3 > > > We want to support two different use cases for cell ACLs: > 1. The user can see all cells in a table or CF unless a cell ACL denies access > 2. The user cannot see any cells in a table or CF unless a cell ACL grants > access > For the sake of flexibility we made it a toggle on an operation by operation > basis. However this changed the behavior of the AccessController with respect > to how requests for which a user has no grant at the table or CF level are > handled. Prior to the cell ACL changes if a user had no grant at the table or > CF level, they would see an AccessDeniedException. We can't do that if we > want cell ACLs to provide exceptional access. Subsequent to the cell ACL > changes if a user has no grant at the table or CF level, there is no > exception, they simply won't see any cells except those granting exceptional > access at the cell level. This also brings the AccessController semantics in > line with those of the new VisibilityController. > Feedback on dev@ is this change is a bridge too far for at least three > reasons. First, it is surprising (Enis and Vandana). Second, the audit trail > is affected or missing (Enis). Third, it allows any user on the cluster to > mount targeted queries against all tables looking for timing differences, > that depending on schema design could possibly leak the existence in row keys > of sensitive information, or leak the size of the table (Todd). 
Although we > can't prevent timing attacks in general, we can limit the scope of what a user > can explore by restoring early-out access denial if the user has no access at > the table or CF level. > We can make early-out access denial if the user has no access at the table or > CF level configurable on a per-table basis. Setting the default to "false", > with a release note and a paragraph in the security guide explaining how to > reintroduce the old behavior, would address the above and not introduce > another surprising change among 0.98 releases. If the consensus is that the > (presumably milder) surprise due to this change is fine, then the default > could be "true". -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HBASE-11078) [AccessController] Consider new permission for "read visible"
Andrew Purtell created HBASE-11078: -- Summary: [AccessController] Consider new permission for "read visible" Key: HBASE-11078 URL: https://issues.apache.org/jira/browse/HBASE-11078 Project: HBase Issue Type: Sub-task Reporter: Andrew Purtell Fix For: 0.99.0 See parent for the whole story. Consider a new permission with the semantics "being able to read only granted cells", perhaps called READ_VISIBLE. Maybe consider a symmetric new permission for writes. The lack of default READ perm should prevent users from launching scanners. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HBASE-11077) [AccessController] Restore compatible early-out access denial
Andrew Purtell created HBASE-11077: -- Summary: [AccessController] Restore compatible early-out access denial Key: HBASE-11077 URL: https://issues.apache.org/jira/browse/HBASE-11077 Project: HBase Issue Type: Sub-task Reporter: Andrew Purtell Assignee: Andrew Purtell Fix For: 0.99.0, 0.98.2 See parent for the whole story. For 0.98, to start, just put back the early out that was removed in 0.98.0 and allow it to be overridden with a table attribute. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys
[ https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981427#comment-13981427 ] Jeffrey Zhong commented on HBASE-10576: --- [~rajesh23] {quote} (Need not have any special state like shadow) {quote} You can't reuse the disabled state because a client can't talk to a region in a disabled table. Introducing a new state like "shadow" is cleaner, I think. {quote} I will do prototype of this {quote} Please do. [~saint@gmail.com] Just changing the balancer alone isn't enough. Even if we send out region assignment requests simultaneously with the same destination RS, there is no guarantee which assignment will happen first, when they will happen and complete, or whether they can both succeed. With the coprocessor approach, since both regions open at the same time, we can even atomically update both of their location entries in the meta table with a single batch, so clients see both of them at a location at the same time. [~giacomotaylor] The new proposal is to enforce strong co-location. We still need the same split keys and prefix for the index regions. There are other ways that don't require the same split key/prefix, but they're not clean. Since there is an entry in the meta table for the index region with the "shadow" state, a client can scan the region directly. Thanks. > Custom load balancer to co-locate the regions of two tables which are having > same split keys > > > Key: HBASE-10576 > URL: https://issues.apache.org/jira/browse/HBASE-10576 > Project: HBase > Issue Type: Sub-task > Components: Balancer >Reporter: rajeshbabu >Assignee: rajeshbabu > Attachments: HBASE-10536_v2.patch, HBASE-10576.patch > > > To support local indexing, both the user table and the index table should have the same > split keys. This issue is to provide a custom balancer to co-locate the regions of > two tables which have the same split keys. > This helps in Phoenix as well. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11070) [AccessController] Restore early-out access denial if the user has no access at the table or CF level
[ https://issues.apache.org/jira/browse/HBASE-11070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981423#comment-13981423 ] Andrew Purtell commented on HBASE-11070: bq. I think it makes sense to have a separate permission for "being able to read only granted cells". We can tinker and see how this works out. See above comment about WRITE being granted independent of READ. Do we want the same kind of separate permission for "being able to write only granted cells"? bq. Also thinking more about this, we want the lack of default READ perm would prevent the users from launching scanners. This could be done pretty easily by varying the permissions tests in preGet and preExists versus preScannerOpen. > [AccessController] Restore early-out access denial if the user has no access > at the table or CF level > - > > Key: HBASE-11070 > URL: https://issues.apache.org/jira/browse/HBASE-11070 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Andrew Purtell > Fix For: 0.99.0, 0.98.3 > > > We want to support two different use cases for cell ACLs: > 1. The user can see all cells in a table or CF unless a cell ACL denies access > 2. The user cannot see any cells in a table or CF unless a cell ACL grants > access > For the sake of flexibility we made it a toggle on an operation by operation > basis. However this changed the behavior of the AccessController with respect > to how requests for which a user has no grant at the table or CF level are > handled. Prior to the cell ACL changes if a user had no grant at the table or > CF level, they would see an AccessDeniedException. We can't do that if we > want cell ACLs to provide exceptional access. Subsequent to the cell ACL > changes if a user has no grant at the table or CF level, there is no > exception, they simply won't see any cells except those granting exceptional > access at the cell level. 
This also brings the AccessController semantics in > line with those of the new VisibilityController. > Feedback on dev@ is that this change is a bridge too far for at least three > reasons. First, it is surprising (Enis and Vandana). Second, the audit trail > is affected or missing (Enis). Third, it allows any user on the cluster to > mount targeted queries against all tables looking for timing differences, > which depending on schema design could possibly leak the existence in row keys > of sensitive information, or leak the size of the table (Todd). Although we > can't prevent timing attacks in general, we can limit the scope of what a user > can explore by restoring early-out access denial if the user has no access at > the table or CF level. > We can make early-out access denial if the user has no access at the table or > CF level configurable on a per-table basis. Setting the default to "false", > with a release note and a paragraph in the security guide explaining how to > reintroduce the old behavior, would address the above and not introduce > another surprising change among 0.98 releases. If the consensus is that the > (presumably milder) surprise due to this change is fine, then the default > could be "true". -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-11050) Replace empty catch block in TestHLog#testFailedToCreateHLogIfParentRenamed with @Test(expected=)
[ https://issues.apache.org/jira/browse/HBASE-11050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981410#comment-13981410 ] Jean-Marc Spaggiari commented on HBASE-11050: - Since this has been committed, can we close this JIRA? > Replace empty catch block in TestHLog#testFailedToCreateHLogIfParentRenamed > with @Test(expected=) > -- > > Key: HBASE-11050 > URL: https://issues.apache.org/jira/browse/HBASE-11050 > Project: HBase > Issue Type: Task >Reporter: Gustavo Anatoly >Assignee: Gustavo Anatoly >Priority: Trivial > Fix For: 0.99.0 > > Attachments: HBASE-11050.patch > > > This change refactors TestHLog#testFailedToCreateHLogIfParentRenamed. The test > basically calls {{HLogFactory.createWALWriter(fs, path, conf);}}, after > which the parent {{path}} is renamed, followed by another call to > {{HLogFactory.createWALWriter(fs, path, conf);}}; at this point an > IOException is expected, because the parent {{path}} no longer exists. > The second call is monitored by a try-catch block with an empty {{catch}}: > {code} > try { > HLogFactory.createWALWriter(fs, path, conf); > fail("It should fail to create the new WAL"); > } catch (IOException ioe) { > // expected, good. > } > {code} > The proposed patch removes the {{try-catch}} and uses > {{@Test(expected=IOException.class)}} to capture the exception produced by the > test. > -- This message was sent by Atlassian JIRA (v6.2#6252)
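For readers unfamiliar with the JUnit idiom referenced above: @Test(expected = IOException.class) makes the framework fail the test unless the method throws an IOException, replacing the manual try/fail/catch pattern. A minimal self-contained sketch of the expectation (the real test uses HLogFactory and a filesystem; the stand-in below just simulates the renamed parent directory):

```java
import java.io.IOException;

// Illustrative only: simulates HLogFactory.createWALWriter failing once
// the parent directory has been renamed away.
public class WalWriterSketch {
    static void createWALWriter(boolean parentExists) throws IOException {
        if (!parentExists) {
            throw new IOException("parent path does not exist");
        }
    }

    // Old style: catch the expected exception in an empty catch block.
    // With JUnit the same expectation is declared on the test method as
    //   @Test(expected = IOException.class)
    // and the try/catch disappears entirely.
    static boolean failsAfterParentRenamed() {
        try {
            createWALWriter(false);   // parent was renamed away
            return false;             // no exception: the test should fail
        } catch (IOException expected) {
            return true;              // expected, good
        }
    }

    public static void main(String[] args) {
        System.out.println("IOException thrown: " + failsAfterParentRenamed());
    }
}
```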