[jira] [Commented] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.
[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909296#comment-13909296 ]

haosdent commented on HBASE-8304:
----------------------------------

Thank you for your advice, let me try it. [~brandonli]

> Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.
> ---
>
>                 Key: HBASE-8304
>                 URL: https://issues.apache.org/jira/browse/HBASE-8304
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile, regionserver
>    Affects Versions: 0.94.5
>            Reporter: Raymond Liu
>              Labels: bulkloader
>         Attachments: HBASE-9537.patch
>
>
> When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir where port is the hdfs namenode's default port, the bulkload operation will not remove the files in the bulk output dir. Store::bulkLoadHfile will treat hdfs://ip and hdfs://ip:port as different filesystems and fall back to the copy approach instead of rename.
> The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS according to hbase.rootdir when the regionserver starts; thus the dest fs uri in the hregion will not match the src fs uri passed from the client.
> Any suggestion on the best approach to fix this issue?
> I kind of think that we could check for the default port if the src uri comes without port info.

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
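The "check for default port" idea suggested above can be sketched in a few lines. This is a hypothetical illustration, not the committed HBase fix: `sameFileSystem`, `FsUriCompare`, and the assumed NameNode default port 8020 are all names chosen for this example.

```java
import java.net.URI;

// Hypothetical sketch of the suggestion in the comment above: compare two
// filesystem URIs while treating a missing port as the scheme's default,
// so hdfs://ip and hdfs://ip:8020 are recognized as the same filesystem.
public class FsUriCompare {
    static final int HDFS_DEFAULT_PORT = 8020; // assumed NameNode default

    static boolean sameFileSystem(URI a, URI b) {
        if (!eq(a.getScheme(), b.getScheme())) return false;
        if (!eq(a.getHost(), b.getHost())) return false;
        // URI.getPort() returns -1 when no port is given; substitute the default.
        int pa = a.getPort() == -1 ? HDFS_DEFAULT_PORT : a.getPort();
        int pb = b.getPort() == -1 ? HDFS_DEFAULT_PORT : b.getPort();
        return pa == pb;
    }

    private static boolean eq(String x, String y) {
        return x == null ? y == null : x.equalsIgnoreCase(y);
    }

    public static void main(String[] args) {
        // With port normalization, rename (not copy) would be chosen here.
        System.out.println(sameFileSystem(
            URI.create("hdfs://10.0.0.1"),
            URI.create("hdfs://10.0.0.1:8020/hbaserootdir"))); // prints "true"
    }
}
```

With a check like this, Store::bulkLoadHfile's src/dest comparison would no longer be fooled by a missing default port.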
[jira] [Commented] (HBASE-10584) Inconsistency between tableExists and listTables in implementation
[ https://issues.apache.org/jira/browse/HBASE-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909293#comment-13909293 ]

Hadoop QA commented on HBASE-10584:
-----------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12630288/HBASE-10584-trunk_v1.patch
  against trunk revision .

    ATTACHMENT ID: 12630288

    {color:green}+1 @author{color}. The patch does not contain any @author tags.

    {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.

    {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.

    {color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile.

    {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.

    {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.

    {color:green}+1 site{color}. The mvn site goal succeeds with this patch.

    {color:red}-1 core tests{color}. The patch failed these unit tests:
      org.apache.hadoop.hbase.util.TestHBaseFsck

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8774//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8774//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8774//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8774//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8774//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8774//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8774//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8774//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8774//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8774//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8774//console

This message is automatically generated.
> Inconsistency between tableExists and listTables in implementation
> ---
>
>             Key: HBASE-10584
>             URL: https://issues.apache.org/jira/browse/HBASE-10584
>         Project: HBase
>      Issue Type: Bug
>      Components: Client, master
>        Reporter: Feng Honghua
>        Assignee: Feng Honghua
>     Attachments: HBASE-10584-trunk_v1.patch
>
> # HBaseAdmin.tableExists is implemented by scanning the meta table
> # HBaseAdmin.listTables (and HBaseAdmin.getTableDescriptor) is implemented by talking with HMaster, which responds by querying the FSTableDescriptors, and FSTableDescriptors returns all tables by scanning all the table descriptor files in FS (a cache also plays here, so most of the time requests can be satisfied from the cache)...
> Actually HBaseAdmin asks HMaster to check whether a table exists when implementing deleteTable (see below), so why does it use a different way (scanning the meta table) to implement tableExists() for outside users, for the same purpose?
> {code}
> tableExists = false;
> GetTableDescriptorsResponse htds;
> MasterKeepAliveConnection master = connection.getKeepAliveMasterService();
> try {
>   GetTableDescriptorsRequest req =
>     RequestConverter.buildGetTableDescriptorsRequest(tableName);
>   htds = master.getTableDescriptors(null, req);
> } catch (ServiceException se) {
>   throw ProtobufUtil.getRemoteException(se);
> } finally {
>   master.close();
> }
> tableExists = !htds.getTableSchemaList().isEmpty();
> {code}
> (Above, verifying that the table descriptor file is deleted can guarantee all items of this table are deleted from the meta table...)
[jira] [Commented] (HBASE-10580) IntegrationTestingUtility#restoreCluster leak resource when running in a mini cluster mode
[ https://issues.apache.org/jira/browse/HBASE-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909283#comment-13909283 ]

Hudson commented on HBASE-10580:
--------------------------------

SUCCESS: Integrated in HBase-0.98 #178 (See [https://builds.apache.org/job/HBase-0.98/178/])
HBASE-10580: IntegrationTestingUtility#restoreCluster leak resource when running in a mini cluster mode (jeffreyz: rev 1570770)
* /hbase/branches/0.98/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestingUtility.java

> IntegrationTestingUtility#restoreCluster leak resource when running in a mini cluster mode
> ---
>
>             Key: HBASE-10580
>             URL: https://issues.apache.org/jira/browse/HBASE-10580
>         Project: HBase
>      Issue Type: Bug
>      Components: test
> Affects Versions: 0.98.0, 0.96.0, 0.96.1, 0.99.0
>        Reporter: Jeffrey Zhong
>        Assignee: Jeffrey Zhong
>         Fix For: 0.96.2, 0.98.1, 0.99.0
>     Attachments: hbase-10580.patch
>
> When the utility isn't run in distributed cluster mode, the restore only shuts down MiniHBaseCluster, not MiniDFSCluster & MiniZKCluster.
[jira] [Updated] (HBASE-10584) Inconsistency between tableExists and listTables in implementation
[ https://issues.apache.org/jira/browse/HBASE-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

rajeshbabu updated HBASE-10584:
-------------------------------
    Status: Patch Available  (was: Open)

> Inconsistency between tableExists and listTables in implementation
> ---
>
>             Key: HBASE-10584
>             URL: https://issues.apache.org/jira/browse/HBASE-10584
>         Project: HBase
>      Issue Type: Bug
>      Components: Client, master
>        Reporter: Feng Honghua
>        Assignee: Feng Honghua
>     Attachments: HBASE-10584-trunk_v1.patch
>
> # HBaseAdmin.tableExists is implemented by scanning the meta table
> # HBaseAdmin.listTables (and HBaseAdmin.getTableDescriptor) is implemented by talking with HMaster, which responds by querying the FSTableDescriptors, and FSTableDescriptors returns all tables by scanning all the table descriptor files in FS (a cache also plays here, so most of the time requests can be satisfied from the cache)...
> Actually HBaseAdmin asks HMaster to check whether a table exists when implementing deleteTable (see below), so why does it use a different way (scanning the meta table) to implement tableExists() for outside users, for the same purpose?
> {code}
> tableExists = false;
> GetTableDescriptorsResponse htds;
> MasterKeepAliveConnection master = connection.getKeepAliveMasterService();
> try {
>   GetTableDescriptorsRequest req =
>     RequestConverter.buildGetTableDescriptorsRequest(tableName);
>   htds = master.getTableDescriptors(null, req);
> } catch (ServiceException se) {
>   throw ProtobufUtil.getRemoteException(se);
> } finally {
>   master.close();
> }
> tableExists = !htds.getTableSchemaList().isEmpty();
> {code}
> (Above, verifying that the table descriptor file is deleted can guarantee all items of this table are deleted from the meta table...)
> Since creating table descriptor files and inserting the item into the meta table occur at different times without atomic semantics, this inconsistency in implementation can lead to confusing behaviors when create-table or delete-table fails midway (before the according cleanup is done): the table descriptor file may exist while no item exists in the meta table (for create-table, where the table descriptor file is created before the item is inserted into the meta table). This leads to listTables including that table while tableExists says no. A similar inconsistency arises if delete-table fails midway...
> Confusing behavior can happen during the process even when it eventually succeeds:
> # During table creation, a user may call listTables and then tableExists for this table after the table descriptor is created but before the item is inserted into the meta table. He will find that listTables includes the table but tableExists returns false for the same table; this behavior is confusing and should only be acceptable while a table is being deleted...
> # Similar behavior occurs during table deletion.
> It seems the benefit of implementing tableExists this way is that we can avoid talking with HMaster, but considering we talk with HMaster for listTables and getTableDescriptor anyway, such a benefit can't offset the drawback of the inconsistency.

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
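The race described in the issue can be modeled in a few lines. This is a deliberately toy sketch (class and method names are invented for illustration): two non-atomically updated stores stand in for the FS table descriptors and the meta table, and mid-creation the two "APIs" disagree exactly as the report says.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model (hypothetical, not HBase code) of the inconsistency: creation
// writes the descriptor "file" first, then the meta row. Between the two
// steps, listTables (descriptor-backed) and tableExists (meta-backed)
// disagree about the same table.
public class TableStateModel {
    private final Set<String> descriptors = new HashSet<>(); // stands in for FSTableDescriptors
    private final Set<String> metaRows = new HashSet<>();    // stands in for the meta table

    List<String> listTables() { return new ArrayList<>(descriptors); } // master/FS-backed
    boolean tableExists(String t) { return metaRows.contains(t); }     // meta-scan-backed

    void createTableStep1(String t) { descriptors.add(t); } // descriptor written first...
    void createTableStep2(String t) { metaRows.add(t); }    // ...meta row inserted later

    public static void main(String[] args) {
        TableStateModel m = new TableStateModel();
        m.createTableStep1("T");
        // Mid-creation: visible to listTables but invisible to tableExists.
        System.out.println(m.listTables().contains("T")); // prints "true"
        System.out.println(m.tableExists("T"));           // prints "false"
        m.createTableStep2("T");
        System.out.println(m.tableExists("T"));           // prints "true"
    }
}
```

Backing both calls by the same source of truth (the master's descriptors, as the patch proposes) removes the window in which they disagree.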
[jira] [Commented] (HBASE-10191) Move large arena storage off heap
[ https://issues.apache.org/jira/browse/HBASE-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909242#comment-13909242 ]

Lars Hofhansl commented on HBASE-10191:
---------------------------------------

HBASE-5311 and HBASE-9440 have related discussion. If we're smart we can build all these things such that they work on- and off-heap.

> Move large arena storage off heap
> ---
>
>             Key: HBASE-10191
>             URL: https://issues.apache.org/jira/browse/HBASE-10191
>         Project: HBase
>      Issue Type: Umbrella
>        Reporter: Andrew Purtell
>
> Even with the improved G1 GC in Java 7, Java processes that want to address large regions of memory while also providing low high-percentile latencies continue to be challenged. Fundamentally, a Java server process that has high data throughput and also tight latency SLAs will be stymied by the fact that the JVM does not provide a fully concurrent collector. There is simply not enough throughput to copy data during GC under safepoint (all application threads suspended) within available time bounds. This is increasingly an issue for HBase users operating under dual pressures: 1. tight response SLAs, 2. the increasing amount of RAM available in "commodity" server configurations, because GC load is roughly proportional to heap size.
> We can address this using parallel strategies. We should talk with the Java platform developer community about the possibility of a fully concurrent collector appearing in OpenJDK somehow. Setting aside the question of whether this is too little too late: if one becomes available the benefit will be immediate, though subject to qualification for production, and transparent in terms of code changes. However, in the meantime we need an answer for Java versions already in production. This requires that we move the large arena allocations off heap, those being the blockcache and memstore.
> On other JIRAs recently there has been related discussion about combining the blockcache and memstore (HBASE-9399) and about flushing memstore into blockcache (HBASE-5311), which is related work. We should build off-heap allocation for memstore and blockcache, perhaps a unified pool for both, and plumb zero-copy direct access to these allocations (via direct buffers) through the read and write I/O paths. This may require the construction of classes that provide object views over data contained within direct buffers. This is something else we could talk with the Java platform developer community about - it could be possible to provide language-level object views over off-heap memory; on-heap objects could hold references to objects backed by off-heap memory but not vice versa, maybe facilitated by new intrinsics in Unsafe. Again, we need an answer for today also. We should investigate what existing libraries may be available in this regard. Key will be avoiding marshalling/unmarshalling costs. At most we should be copying primitives out of the direct buffers to register or stack locations until finally copying data to construct protobuf Messages. A related issue there is HBASE-9794, which proposes scatter-gather access to KeyValues when constructing RPC messages. We should see how far we can get with that and also zero-copy construction of protobuf Messages backed by direct buffer allocations. Some amount of native code may be required.
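The "copy only primitives out of direct buffers" access pattern argued for above can be illustrated with plain `java.nio` (this is a minimal sketch under invented names like `OffHeapArena`, not HBase's design): fixed-size records live in an off-heap direct buffer, and reads pull primitives straight to the stack without materializing an object per record.

```java
import java.nio.ByteBuffer;

// Illustrative sketch only: fixed-size records in an off-heap direct
// buffer, read back as primitives rather than as per-record heap objects,
// so the data never contributes to GC copy work.
public class OffHeapArena {
    private static final int RECORD = 12;  // 8-byte key + 4-byte value
    private final ByteBuffer arena;        // off-heap backing store
    private int count = 0;

    OffHeapArena(int capacityRecords) {
        arena = ByteBuffer.allocateDirect(capacityRecords * RECORD);
    }

    void put(long key, int value) {
        arena.putLong(count * RECORD, key);      // absolute puts: no position
        arena.putInt(count * RECORD + 8, value); // churn, no per-record objects
        count++;
    }

    // Primitives are copied to stack/registers; no object views are created
    // over the buffer, which is the marshalling cost the issue warns about.
    long keyAt(int i)  { return arena.getLong(i * RECORD); }
    int valueAt(int i) { return arena.getInt(i * RECORD + 8); }

    public static void main(String[] args) {
        OffHeapArena a = new OffHeapArena(1024);
        a.put(42L, 7);
        System.out.println(a.keyAt(0) + " " + a.valueAt(0)); // prints "42 7"
    }
}
```

A real allocator would also need slab reuse and bounds/lifetime management; the point here is only the zero-object read path.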
[jira] [Commented] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909238#comment-13909238 ]

Hudson commented on HBASE-10547:
--------------------------------

FAILURE: Integrated in hbase-0.96 #308 (See [https://builds.apache.org/job/hbase-0.96/308/])
HBASE-10547 TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK (apurtell: rev 1570747)
* /hbase/branches/0.96/hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestFixedLengthWrapper.java

> TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
> ---
>
>             Key: HBASE-10547
>             URL: https://issues.apache.org/jira/browse/HBASE-10547
>         Project: HBase
>      Issue Type: Bug
> Affects Versions: 0.98.0, 0.96.1.1
>     Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 Compressed References 20131114_175264 (JIT enabled, AOT enabled)
>        Reporter: Andrew Purtell
>        Assignee: Andrew Purtell
>        Priority: Trivial
>         Fix For: 0.96.2, 0.98.1, 0.99.0
>     Attachments: 10547.patch
>
> Here's the trace.
> {noformat}
> Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec <<< FAILURE!
> testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper)  Time elapsed: 0.025 sec  <<< FAILURE!
> arrays first differed at element [8]; expected:<-40> but was:<0>
>   at org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50)
>   at org.junit.Assert.internalArrayEquals(Assert.java:473)
>   at org.junit.Assert.assertArrayEquals(Assert.java:294)
>   at org.junit.Assert.assertArrayEquals(Assert.java:305)
>   at org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60)
> {noformat}
> This is with 0.98.0.
[jira] [Commented] (HBASE-10580) IntegrationTestingUtility#restoreCluster leak resource when running in a mini cluster mode
[ https://issues.apache.org/jira/browse/HBASE-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909239#comment-13909239 ]

Hudson commented on HBASE-10580:
--------------------------------

FAILURE: Integrated in hbase-0.96 #308 (See [https://builds.apache.org/job/hbase-0.96/308/])
HBASE-10580: IntegrationTestingUtility#restoreCluster leak resource when running in a mini cluster mode (jeffreyz: rev 1570774)
* /hbase/branches/0.96/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestingUtility.java

> IntegrationTestingUtility#restoreCluster leak resource when running in a mini cluster mode
> ---
>
>             Key: HBASE-10580
>             URL: https://issues.apache.org/jira/browse/HBASE-10580
[jira] [Commented] (HBASE-10586) hadoop2-compat IPC metric registred twice
[ https://issues.apache.org/jira/browse/HBASE-10586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909237#comment-13909237 ]

Hudson commented on HBASE-10586:
--------------------------------

FAILURE: Integrated in hbase-0.96 #308 (See [https://builds.apache.org/job/hbase-0.96/308/])
HBASE-10586 hadoop2-compat IPC metric registred twice (mbertozzi: rev 1570742)
* /hbase/branches/0.96/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java

> hadoop2-compat IPC metric registred twice
> ---
>
>             Key: HBASE-10586
>             URL: https://issues.apache.org/jira/browse/HBASE-10586
>         Project: HBase
>      Issue Type: Bug
>      Components: metrics
> Affects Versions: 0.98.0, 0.96.1.1
>        Reporter: Matteo Bertozzi
>        Assignee: Matteo Bertozzi
>         Fix For: 0.96.2, 0.98.1, 0.99.0
>     Attachments: HBASE-10586-v0.patch
>
> There is an extra snapshot/addRecord line in the hadoop2-compat MetricsHBaseServerSourceImpl, resulting in IPC metrics with a ".1" suffix; the extra line is not present in the hadoop1-compat "mirror":
> {code}
> "numCallsInGeneralQueue.1" : 0,
> "numCallsInReplicationQueue.1" : 0,
> "numCallsInPriorityQueue.1" : 0,
> ...
> {code}
[jira] [Commented] (HBASE-10589) Reduce unnecessary TestRowProcessorEndpoint resource usage
[ https://issues.apache.org/jira/browse/HBASE-10589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909236#comment-13909236 ]

Hudson commented on HBASE-10589:
--------------------------------

FAILURE: Integrated in hbase-0.96 #308 (See [https://builds.apache.org/job/hbase-0.96/308/])
HBASE-10589 Reduce unnecessary TestRowProcessorEndpoint resource usage (apurtell: rev 1570768)
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRowProcessorEndpoint.java

> Reduce unnecessary TestRowProcessorEndpoint resource usage
> ---
>
>             Key: HBASE-10589
>             URL: https://issues.apache.org/jira/browse/HBASE-10589
>         Project: HBase
>      Issue Type: Improvement
> Affects Versions: 0.98.0
>        Reporter: Andrew Purtell
>        Assignee: Andrew Purtell
>        Priority: Trivial
>         Fix For: 0.96.2, 0.98.1, 0.99.0
>     Attachments: 10589.patch
>
> We don't need 1000 concurrent threads when 100 will do.
[jira] [Commented] (HBASE-10191) Move large arena storage off heap
[ https://issues.apache.org/jira/browse/HBASE-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909235#comment-13909235 ]

stack commented on HBASE-10191:
--------------------------------

(Good discussion going on here.)

How then to have KeyValues/Cells w/o calling them out as individual objects? Keep cellblocks of KeyValues/Cells w/ a CellScanner to read over 64k blocks of them?

For MemStore, once we hit some upper bound -- say 64k, 1M? -- 'flush' it to an in-memory, sorted cellblock? Reading, we'd consult the (small) CSLM memstore and some tiering of cellblocks?

> Move large arena storage off heap
> ---
>
>             Key: HBASE-10191
>             URL: https://issues.apache.org/jira/browse/HBASE-10191
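The tiering stack sketches above (a small mutable CSLM that is periodically "flushed" into immutable sorted blocks, with reads consulting newest-first) can be mocked up in a few lines. This is a toy sketch of the idea only, using strings instead of Cells and invented names (`TieredMemstore`), not a proposal for the actual data layout:

```java
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Toy sketch of the tiered-memstore idea: writes land in a small CSLM;
// once it reaches a bound it becomes an immutable sorted block, and reads
// consult the active CSLM first, then blocks from newest to oldest.
public class TieredMemstore {
    private final int bound;
    private ConcurrentSkipListMap<String, String> active = new ConcurrentSkipListMap<>();
    private final Deque<NavigableMap<String, String>> blocks = new ArrayDeque<>();

    TieredMemstore(int bound) { this.bound = bound; }

    void put(String key, String value) {
        active.put(key, value);
        if (active.size() >= bound) {            // in-memory "flush"
            blocks.addFirst(Collections.unmodifiableNavigableMap(active));
            active = new ConcurrentSkipListMap<>();
        }
    }

    String get(String key) {
        String v = active.get(key);              // newest data first
        if (v != null) return v;
        for (NavigableMap<String, String> b : blocks) {
            v = b.get(key);
            if (v != null) return v;             // newer blocks shadow older
        }
        return null;
    }

    public static void main(String[] args) {
        TieredMemstore m = new TieredMemstore(2);
        m.put("a", "1");
        m.put("b", "2");   // bound hit: flushed to an immutable block
        m.put("a", "3");   // newer value lands in the fresh CSLM
        System.out.println(m.get("a") + " " + m.get("b")); // prints "3 2"
    }
}
```

In the real proposal the immutable tier would be a flat off-heap cellblock scanned via a CellScanner rather than a Java map; the sketch only shows the two-tier read order.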
[jira] [Commented] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909229#comment-13909229 ]

Hudson commented on HBASE-10547:
--------------------------------

FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #164 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/164/])
HBASE-10547 TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK (apurtell: rev 1570745)
* /hbase/branches/0.98/hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestFixedLengthWrapper.java

> TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
> ---
>
>             Key: HBASE-10547
>             URL: https://issues.apache.org/jira/browse/HBASE-10547
[jira] [Commented] (HBASE-10526) Using Cell instead of KeyValue in HFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909230#comment-13909230 ]

Hudson commented on HBASE-10526:
--------------------------------

FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #164 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/164/])
HBASE-10526 Using Cell instead of KeyValue in HFileOutputFormat (jxiang: rev 1570688)
* /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
* /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java
* /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java

> Using Cell instead of KeyValue in HFileOutputFormat
> ---
>
>             Key: HBASE-10526
>             URL: https://issues.apache.org/jira/browse/HBASE-10526
>         Project: HBase
>      Issue Type: Sub-task
>      Components: mapreduce
>        Reporter: Jimmy Xiang
>        Assignee: Jimmy Xiang
>         Fix For: 0.96.2, 0.98.1, 0.99.0
>     Attachments: hbase-10526.patch, hbase-10526_v1.1.patch, hbase-10526_v2.patch, hbase-10526_v3.patch
>
> HFileOutputFormat/KeyValueSortReducer use KeyValue. We should deprecate them and use Cell instead.
[jira] [Commented] (HBASE-10586) hadoop2-compat IPC metric registred twice
[ https://issues.apache.org/jira/browse/HBASE-10586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909228#comment-13909228 ]

Hudson commented on HBASE-10586:
--------------------------------

FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #164 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/164/])
HBASE-10586 hadoop2-compat IPC metric registred twice (mbertozzi: rev 1570740)
* /hbase/branches/0.98/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java

> hadoop2-compat IPC metric registred twice
> ---
>
>             Key: HBASE-10586
>             URL: https://issues.apache.org/jira/browse/HBASE-10586
[jira] [Commented] (HBASE-10529) Make Cell extend Cloneable
[ https://issues.apache.org/jira/browse/HBASE-10529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909226#comment-13909226 ]

stack commented on HBASE-10529:
-------------------------------

bq. ...but have a different Cell implementation on the Server side may when we need to send out a Cell back to the client?

Sorry boss, I don't follow the above. What are you thinking? Thanks.

> Make Cell extend Cloneable
> ---
>
>             Key: HBASE-10529
>             URL: https://issues.apache.org/jira/browse/HBASE-10529
>         Project: HBase
>      Issue Type: Sub-task
>        Reporter: ramkrishna.s.vasudevan
>        Assignee: ramkrishna.s.vasudevan
>         Fix For: 0.99.0
>     Attachments: HBSE-10529.patch
>
> Refer to the parent JIRA for the discussion on making Cell extend Cloneable.
[jira] [Commented] (HBASE-10584) Inconsistency between tableExists and listTables in implementation
[ https://issues.apache.org/jira/browse/HBASE-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909221#comment-13909221 ]

stack commented on HBASE-10584:
-------------------------------

This patch is great. It should be in all versions.

Do we need a new issue after this one? There should be a general prescription, going forward, on how to avoid table-state transitions that by-pass each other. Should all queries about the state of tables (enabled/disabled) go via the master from here on out? If so, what do we deprecate? (Can be a different issue.) Thanks [~fenghh]

> Inconsistency between tableExists and listTables in implementation
> ---
>
>             Key: HBASE-10584
>             URL: https://issues.apache.org/jira/browse/HBASE-10584
>         Project: HBase
>      Issue Type: Bug
>      Components: Client, master
>        Reporter: Feng Honghua
>        Assignee: Feng Honghua
>     Attachments: HBASE-10584-trunk_v1.patch
[jira] [Commented] (HBASE-10580) IntegrationTestingUtility#restoreCluster leak resource when running in a mini cluster mode
[ https://issues.apache.org/jira/browse/HBASE-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909217#comment-13909217 ] stack commented on HBASE-10580: --- +1 > IntegrationTestingUtility#restoreCluster leak resource when running in a mini > cluster mode > -- > > Key: HBASE-10580 > URL: https://issues.apache.org/jira/browse/HBASE-10580 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 0.98.0, 0.96.0, 0.96.1, 0.99.0 >Reporter: Jeffrey Zhong >Assignee: Jeffrey Zhong > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: hbase-10580.patch > > > When the utility isn't run in distributed cluster mode, the restore only > shutdown MiniHBaseCluster not MiniDFSCluster & MiniZKCluster. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10581) ACL znode are left without PBed during upgrading hbase0.94* to hbase0.96+
[ https://issues.apache.org/jira/browse/HBASE-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909216#comment-13909216 ] stack commented on HBASE-10581: --- Thanks [~himan...@cloudera.com] for being easy. Agree w/ [~jeffreyz] on the never underestimate the user's 'imagination'. Suggest we commit this, conservative, fix for now and move on. > ACL znode are left without PBed during upgrading hbase0.94* to hbase0.96+ > - > > Key: HBASE-10581 > URL: https://issues.apache.org/jira/browse/HBASE-10581 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0, 0.96.0, 0.96.1 >Reporter: Jeffrey Zhong >Assignee: Jeffrey Zhong >Priority: Critical > Attachments: hbase-10581.patch > > > ACL znodes are left in the upgrade process when upgrading 0.94 to 0.96+ > Those 0.94 znodes will choke HMaster because their data aren't PBed. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10494) hadoop2 class reference in Maven Central's hbase-client-0.96.1.1-hadoop1
[ https://issues.apache.org/jira/browse/HBASE-10494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909214#comment-13909214 ] Yong Zhang commented on HBASE-10494: The Maven artifact for HBase 0.94.16 had the same problem. It failed with the same ClassNotFoundException. I changed the version to 0.94.15 and it worked fine. > hadoop2 class reference in Maven Central's hbase-client-0.96.1.1-hadoop1 > > > Key: HBASE-10494 > URL: https://issues.apache.org/jira/browse/HBASE-10494 > Project: HBase > Issue Type: Bug >Affects Versions: 0.96.1.1 > Environment: Only affects jar on Maven Central. Jar in the hadoop1 > tarball download is not affected. >Reporter: Dan LaRocque >Priority: Minor > > RpcClient$Connection.class as shipped in the hbase-client-0.96.1.1-hadoop1 > jar on Maven Central contains references to > org.apache.hadoop.net.SocketInputWrapper. I think this class does not exist > in hadoop1 because a classfile search of central yields hits only on 2.0 and > 0.23. There may be other references. I only know about this one because it > was killing my HRegionServer early with this exception: > {noformat} > 2014-02-10 20:55:52,021 INFO [M:0;dalarolap:48768] master.ServerManager: > Waiting for region servers count to settle; currently checked in 0, slept for > 0 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, > interval of 1500 ms. 
> 2014-02-10 20:55:52,066 WARN [RS:0;dalarolap:33703] > regionserver.HRegionServer: error telling master we are up > com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: > org/apache/hadoop/net/SocketInputWrapper > at > org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1670) > at > org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711) > at > org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:5402) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:1926) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:784) > at java.lang.Thread.run(Thread.java:744) > Caused by: java.lang.NoClassDefFoundError: > org/apache/hadoop/net/SocketInputWrapper > at > org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:348) > at > org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1522) > at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1424) > at > org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653) > ... 5 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.net.SocketInputWrapper > at java.net.URLClassLoader$1.run(URLClassLoader.java:366) > at java.net.URLClassLoader$1.run(URLClassLoader.java:355) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:354) > at java.lang.ClassLoader.loadClass(ClassLoader.java:425) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) > at java.lang.ClassLoader.loadClass(ClassLoader.java:358) > ... 
9 more > {noformat} > I first stumbled over this while developing an app managed by Maven that > depends on hbase-client, but then reproduced it by extracting the hadoop1 > tarball and replacing the client jar with the same-named one from Maven > Central. > I think this is not the same as HBASE-7269, although the stacktrace is > similar. > Here's a disassembler grep on the Maven Central copy showing some references: > {noformat} > # From the root of an extracted hbase-client-0.96.1.1-hadoop1 jar downloaded > from Maven Central > client-maven$ javap -verbose > 'org/apache/hadoop/hbase/ipc/RpcClient$Connection.class' | grep > SocketInputWrapper >#198 = Methodref #744.#818// > org/apache/hadoop/net/NetUtils.getInputStream:(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper; >#818 = NameAndType#1115:#1159 // > getInputStream:(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper; > #1159 = Utf8 > (Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper; >180: invokestatic #198// Method > org/apache/hadoop/net/NetUtils.getInputStream:(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper; > {noformat} > Here's the same grep on the tarball's copy. No references. > {noformat} > # Same as above, but using jar from the download tarball for hadoop1 > client-tarball$ javap -verbose > 'org/apache/hadoop/hbase/ipc/
[jira] [Commented] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs
[ https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909211#comment-13909211 ] Hadoop QA commented on HBASE-9117: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12630431/HBASE-9117.05.patch against trunk revision . ATTACHMENT ID: 12630431 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 116 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100: +// HBaseAdmin only waits for regions to appear in hbase:meta we should wait until they are assigned +LOG.warn("close() called on HConnection instance returned from HBaseTestingUtility.getConnection()"); + public static THBaseService.Iface newInstance(Configuration conf, ThriftMetrics metrics) throws IOException { {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.client.TestClientNoCluster Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8773//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8773//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8773//console This message is automatically generated. 
> Remove HTablePool and all HConnection pooling related APIs > -- > > Key: HBASE-9117 > URL: https://issues.apache.org/jira/browse/HBASE-9117 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Nick Dimiduk >Priority: Critical > Fix For: 0.99.0 > > Attachments: HBASE-9117.00.patch, HBASE-9117.01.patch, > HBASE-9117.02.patch, HBASE-9117.03.patch, HBASE-9117.04.patch, > HBASE-9117.05.patch > > > The recommended way is now: > # Create an HConnection: HConnectionManager.createConnection(...) > # Create a light HTable: HConnection.getTable(...) > # table.close() > # connection.close() > All other API and pooling will be removed. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
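The recommended lifecycle above (one heavy shared connection, many light table handles, closed in reverse order) can be sketched with hypothetical stand-in classes, since the real HConnectionManager calls need a running cluster:

```java
// Hypothetical stand-ins mirroring the recommended sequence:
// createConnection -> getTable -> table.close() -> connection.close().
class Connection implements AutoCloseable {
    boolean open = true;

    Table getTable(String name) {
        if (!open) throw new IllegalStateException("connection closed");
        return new Table(name); // light handle backed by the shared connection
    }

    @Override public void close() { open = false; }
}

class Table implements AutoCloseable {
    final String name;
    boolean open = true;

    Table(String name) { this.name = name; }

    @Override public void close() { open = false; }
}

class LifecycleDemo {
    public static void main(String[] args) {
        // try-with-resources closes the table first, then the connection,
        // matching the order the issue prescribes.
        try (Connection connection = new Connection();
             Table table = connection.getTable("t1")) {
            // ... use the light table handle ...
        }
    }
}
```

The point of the change is that pooling moves out of HBase: callers share one connection themselves instead of going through HTablePool.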
[jira] [Commented] (HBASE-10583) backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server.
[ https://issues.apache.org/jira/browse/HBASE-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909205#comment-13909205 ] Hudson commented on HBASE-10583: FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #30 (See [https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/30/]) HBASE-10583 backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server. (Liu Shaohui and Himanshu Vashishtha) (larsh: rev 1570756) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java > backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to > the server. > --- > > Key: HBASE-10583 > URL: https://issues.apache.org/jira/browse/HBASE-10583 > Project: HBase > Issue Type: Bug >Reporter: Liu Shaohui >Assignee: Liu Shaohui > Fix For: 0.94.18 > > Attachments: HBASE-10583-v1.diff > > > see HBASE-8402 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-8402) ScanMetrics depends on number of rpc calls to the server.
[ https://issues.apache.org/jira/browse/HBASE-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909204#comment-13909204 ] Hudson commented on HBASE-8402: --- FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #30 (See [https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/30/]) HBASE-10583 backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server. (Liu Shaohui and Himanshu Vashishtha) (larsh: rev 1570756) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java > ScanMetrics depends on number of rpc calls to the server. > - > > Key: HBASE-8402 > URL: https://issues.apache.org/jira/browse/HBASE-8402 > Project: HBase > Issue Type: Bug > Components: Client, metrics >Affects Versions: 0.95.0 >Reporter: Himanshu Vashishtha >Assignee: Himanshu Vashishtha >Priority: Minor > Fix For: 0.98.0, 0.95.1 > > Attachments: HBASE-8402-v1.patch, HBASE-8402-v2.patch > > > Currently, scan metrics is not published in case there is one trip to server. > I was testing it on a small row range (200 rows) with a large cache value > (1000). It doesn't look right as metrics should not depend on number of rpc > calls (number of rpc call is just one metrics fwiw). -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10586) hadoop2-compat IPC metric registred twice
[ https://issues.apache.org/jira/browse/HBASE-10586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909195#comment-13909195 ] Hudson commented on HBASE-10586: FAILURE: Integrated in HBase-TRUNK #4943 (See [https://builds.apache.org/job/HBase-TRUNK/4943/]) HBASE-10586 hadoop2-compat IPC metric registred twice (mbertozzi: rev 1570737) * /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java > hadoop2-compat IPC metric registred twice > - > > Key: HBASE-10586 > URL: https://issues.apache.org/jira/browse/HBASE-10586 > Project: HBase > Issue Type: Bug > Components: metrics >Affects Versions: 0.98.0, 0.96.1.1 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: HBASE-10586-v0.patch > > > There is an extra snapshot/addrecord line in hadoop2-compat > MetricsHBaseServerSourceImpl resulting in IPC metrics with a ".1" > the extra line is not present in the hadoop1-compat "mirror" > {code} > "numCallsInGeneralQueue.1" : 0, > "numCallsInReplicationQueue.1" : 0, > "numCallsInPriorityQueue.1" : 0, > ... > {code} -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909196#comment-13909196 ] Hudson commented on HBASE-10392: FAILURE: Integrated in HBase-TRUNK #4943 (See [https://builds.apache.org/job/HBase-TRUNK/4943/]) HBASE-10392 Correct references to hbase.regionserver.global.memstore.upperLimit (ndimiduk: rev 1570721) * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java * /hbase/trunk/src/main/docbkx/ops_mgt.xml * /hbase/trunk/src/main/docbkx/performance.xml > Correct references to hbase.regionserver.global.memstore.upperLimit > --- > > Key: HBASE-10392 > URL: https://issues.apache.org/jira/browse/HBASE-10392 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk > Fix For: 0.99.0 > > Attachments: HBASE-10392.0.patch, HBASE-10392.1.patch, > HBASE-10392.2.patch, HBASE-10392.3.patch, HBASE-10392.4.patch, > HBASE-10392.5.patch > > > As part of the awesome new HBASE-5349, a couple references to > {{hbase.regionserver.global.memstore.upperLimit}} was missed. Clean those up > to use the new {{hbase.regionserver.global.memstore.size}} instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909197#comment-13909197 ] Hudson commented on HBASE-10547: FAILURE: Integrated in HBase-TRUNK #4943 (See [https://builds.apache.org/job/HBase-TRUNK/4943/]) HBASE-10547 TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK (apurtell: rev 1570744) * /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestFixedLengthWrapper.java > TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK > > > Key: HBASE-10547 > URL: https://issues.apache.org/jira/browse/HBASE-10547 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0, 0.96.1.1 > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: 10547.patch > > > Here's the trace. > {noformat} > Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec > <<< FAILURE! > testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper) Time > elapsed: 0.025 sec <<< FAILURE! > arrays first differed at element [8]; expected:<-40> but was:<0> > at > org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50) > at org.junit.Assert.internalArrayEquals(Assert.java:473) > at org.junit.Assert.assertArrayEquals(Assert.java:294) > at org.junit.Assert.assertArrayEquals(Assert.java:305) > at > org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60) > {noformat} > This is with 0.98.0. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10589) Reduce unnecessary TestRowProcessorEndpoint resource usage
[ https://issues.apache.org/jira/browse/HBASE-10589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909185#comment-13909185 ] Hudson commented on HBASE-10589: SUCCESS: Integrated in HBase-0.98 #177 (See [https://builds.apache.org/job/HBase-0.98/177/]) HBASE-10589 Reduce unnecessary TestRowProcessorEndpoint resource usage (apurtell: rev 1570766) * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRowProcessorEndpoint.java > Reduce unnecessary TestRowProcessorEndpoint resource usage > -- > > Key: HBASE-10589 > URL: https://issues.apache.org/jira/browse/HBASE-10589 > Project: HBase > Issue Type: Improvement >Affects Versions: 0.98.0 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: 10589.patch > > > We don't need 1000 concurrent threads when 100 will do. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10586) hadoop2-compat IPC metric registred twice
[ https://issues.apache.org/jira/browse/HBASE-10586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909186#comment-13909186 ] Hudson commented on HBASE-10586: SUCCESS: Integrated in HBase-0.98 #177 (See [https://builds.apache.org/job/HBase-0.98/177/]) HBASE-10586 hadoop2-compat IPC metric registred twice (mbertozzi: rev 1570740) * /hbase/branches/0.98/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java > hadoop2-compat IPC metric registred twice > - > > Key: HBASE-10586 > URL: https://issues.apache.org/jira/browse/HBASE-10586 > Project: HBase > Issue Type: Bug > Components: metrics >Affects Versions: 0.98.0, 0.96.1.1 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: HBASE-10586-v0.patch > > > There is an extra snapshot/addrecord line in hadoop2-compat > MetricsHBaseServerSourceImpl resulting in IPC metrics with a ".1" > the extra line is not present in the hadoop1-compat "mirror" > {code} > "numCallsInGeneralQueue.1" : 0, > "numCallsInReplicationQueue.1" : 0, > "numCallsInPriorityQueue.1" : 0, > ... > {code} -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909187#comment-13909187 ] Hudson commented on HBASE-10547: SUCCESS: Integrated in HBase-0.98 #177 (See [https://builds.apache.org/job/HBase-0.98/177/]) HBASE-10547 TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK (apurtell: rev 1570745) * /hbase/branches/0.98/hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestFixedLengthWrapper.java > TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK > > > Key: HBASE-10547 > URL: https://issues.apache.org/jira/browse/HBASE-10547 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0, 0.96.1.1 > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: 10547.patch > > > Here's the trace. > {noformat} > Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec > <<< FAILURE! > testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper) Time > elapsed: 0.025 sec <<< FAILURE! > arrays first differed at element [8]; expected:<-40> but was:<0> > at > org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50) > at org.junit.Assert.internalArrayEquals(Assert.java:473) > at org.junit.Assert.assertArrayEquals(Assert.java:294) > at org.junit.Assert.assertArrayEquals(Assert.java:305) > at > org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60) > {noformat} > This is with 0.98.0. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10587) Master metrics clusterRequests is wrong
[ https://issues.apache.org/jira/browse/HBASE-10587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909183#comment-13909183 ] Hadoop QA commented on HBASE-10587: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12630416/hbase-10587.patch against trunk revision . ATTACHMENT ID: 12630416 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8772//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8772//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8772//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8772//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8772//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8772//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8772//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8772//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8772//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8772//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8772//console This message is automatically generated. > Master metrics clusterRequests is wrong > --- > > Key: HBASE-10587 > URL: https://issues.apache.org/jira/browse/HBASE-10587 > Project: HBase > Issue Type: Bug > Components: master, metrics >Affects Versions: 0.96.0 >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Attachments: hbase-10587.patch > > > In the master jmx, metrics clusterRequests increases so fast. Looked into the > code and found the calculation is a little bit wrong. 
It's a counter. > However, for each region server report, the total number of requests is added > to clusterRequests. That means it's added multiple times. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
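The miscount described in HBASE-10587 can be illustrated with a toy simulation (hypothetical code, not the actual MetricsMaster implementation): if each region server report carries a cumulative total and the master adds that total to the counter every time, earlier requests are re-added on every report.

```java
class ClusterRequestsDemo {
    // Buggy: add the server's cumulative total on every report.
    static long addTotals(long[] reports) {
        long counter = 0;
        for (long cumulativeTotal : reports) counter += cumulativeTotal;
        return counter;
    }

    // Fixed: add only the delta since the previous report.
    static long addDeltas(long[] reports) {
        long counter = 0, previous = 0;
        for (long cumulativeTotal : reports) {
            counter += cumulativeTotal - previous;
            previous = cumulativeTotal;
        }
        return counter;
    }

    public static void main(String[] args) {
        // One server reporting a running total of 100, then 200, then 300 requests.
        long[] reports = {100, 200, 300};
        System.out.println(addTotals(reports)); // 600 -- grows far too fast
        System.out.println(addDeltas(reports)); // 300 -- the real request count
    }
}
```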
[jira] [Created] (HBASE-10591) Sanity check table configuration in createTable
Enis Soztutar created HBASE-10591: - Summary: Sanity check table configuration in createTable Key: HBASE-10591 URL: https://issues.apache.org/jira/browse/HBASE-10591 Project: HBase Issue Type: Improvement Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.99.0 We had a cluster become completely inoperable because a couple of tables were erroneously created with MAX_FILESIZE set to 4K, which resulted in 180K regions in a short interval and brought the master down due to HBASE-4246. We can do some sanity checking in master.createTable() and reject such requests. We already check compression there, so it seems a good place. Alter table should check for this as well. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
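A sanity check of the kind proposed might look like this sketch (hypothetical method name and threshold; the real check would sit in HMaster.createTable alongside the existing compression check):

```java
class TableSanityCheck {
    // Hypothetical floor: reject MAX_FILESIZE below 2 MB to avoid runaway splits.
    static final long MIN_MAX_FILESIZE = 2L * 1024 * 1024;

    static void checkMaxFileSize(long maxFileSize) {
        if (maxFileSize > 0 && maxFileSize < MIN_MAX_FILESIZE) {
            throw new IllegalArgumentException(
                "MAX_FILESIZE " + maxFileSize + " is below the sanity floor "
                + MIN_MAX_FILESIZE + "; this would cause excessive region splits");
        }
    }

    public static void main(String[] args) {
        checkMaxFileSize(10L * 1024 * 1024 * 1024); // 10 GB: accepted
        try {
            checkMaxFileSize(4096); // the 4K value from the incident: rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Rejecting the request at create/alter time is cheap compared to recovering a master drowning in 180K regions.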
[jira] [Commented] (HBASE-10583) backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server.
[ https://issues.apache.org/jira/browse/HBASE-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909174#comment-13909174 ] Hudson commented on HBASE-10583: SUCCESS: Integrated in HBase-0.94 #1295 (See [https://builds.apache.org/job/HBase-0.94/1295/]) HBASE-10583 backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server. (Liu Shaohui and Himanshu Vashishtha) (larsh: rev 1570756) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java > backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to > the server. > --- > > Key: HBASE-10583 > URL: https://issues.apache.org/jira/browse/HBASE-10583 > Project: HBase > Issue Type: Bug >Reporter: Liu Shaohui >Assignee: Liu Shaohui > Fix For: 0.94.18 > > Attachments: HBASE-10583-v1.diff > > > see HBASE-8402 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-8402) ScanMetrics depends on number of rpc calls to the server.
[ https://issues.apache.org/jira/browse/HBASE-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909173#comment-13909173 ] Hudson commented on HBASE-8402: --- SUCCESS: Integrated in HBase-0.94 #1295 (See [https://builds.apache.org/job/HBase-0.94/1295/]) HBASE-10583 backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server. (Liu Shaohui and Himanshu Vashishtha) (larsh: rev 1570756) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java > ScanMetrics depends on number of rpc calls to the server. > - > > Key: HBASE-8402 > URL: https://issues.apache.org/jira/browse/HBASE-8402 > Project: HBase > Issue Type: Bug > Components: Client, metrics >Affects Versions: 0.95.0 >Reporter: Himanshu Vashishtha >Assignee: Himanshu Vashishtha >Priority: Minor > Fix For: 0.98.0, 0.95.1 > > Attachments: HBASE-8402-v1.patch, HBASE-8402-v2.patch > > > Currently, scan metrics is not published in case there is one trip to server. > I was testing it on a small row range (200 rows) with a large cache value > (1000). It doesn't look right as metrics should not depend on number of rpc > calls (number of rpc call is just one metrics fwiw). -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-8402) ScanMetrics depends on number of rpc calls to the server.
[ https://issues.apache.org/jira/browse/HBASE-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909169#comment-13909169 ] Hudson commented on HBASE-8402: --- SUCCESS: Integrated in HBase-0.94-JDK7 #59 (See [https://builds.apache.org/job/HBase-0.94-JDK7/59/]) HBASE-10583 backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server. (Liu Shaohui and Himanshu Vashishtha) (larsh: rev 1570756) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java > ScanMetrics depends on number of rpc calls to the server. > - > > Key: HBASE-8402 > URL: https://issues.apache.org/jira/browse/HBASE-8402 > Project: HBase > Issue Type: Bug > Components: Client, metrics >Affects Versions: 0.95.0 >Reporter: Himanshu Vashishtha >Assignee: Himanshu Vashishtha >Priority: Minor > Fix For: 0.98.0, 0.95.1 > > Attachments: HBASE-8402-v1.patch, HBASE-8402-v2.patch > > > Currently, scan metrics is not published in case there is one trip to server. > I was testing it on a small row range (200 rows) with a large cache value > (1000). It doesn't look right as metrics should not depend on number of rpc > calls (number of rpc call is just one metrics fwiw). -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10583) backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server.
[ https://issues.apache.org/jira/browse/HBASE-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909170#comment-13909170 ] Hudson commented on HBASE-10583: SUCCESS: Integrated in HBase-0.94-JDK7 #59 (See [https://builds.apache.org/job/HBase-0.94-JDK7/59/]) HBASE-10583 backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server. (Liu Shaohui and Himanshu Vashishtha) (larsh: rev 1570756) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java > backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to > the server. > --- > > Key: HBASE-10583 > URL: https://issues.apache.org/jira/browse/HBASE-10583 > Project: HBase > Issue Type: Bug >Reporter: Liu Shaohui >Assignee: Liu Shaohui > Fix For: 0.94.18 > > Attachments: HBASE-10583-v1.diff > > > see HBASE-8402 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10516) Refactor code where Threads.sleep is called within a while/for loop
[ https://issues.apache.org/jira/browse/HBASE-10516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909144#comment-13909144 ] Hudson commented on HBASE-10516: SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #96 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/96/]) HBASE-10516 Refactor code where Threads.sleep is called within a while/for loop (Feng Honghua) (nkeywal: rev 1570524) * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/DeleteTableHandler.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java > Refactor code where Threads.sleep is called within a while/for loop > --- > > Key: HBASE-10516 > URL: https://issues.apache.org/jira/browse/HBASE-10516 > Project: HBase > Issue Type: Bug > Components: Client, master, regionserver >Affects Versions: 0.98.0, 0.99.0 >Reporter: Feng Honghua >Assignee: Feng Honghua > Fix For: 0.99.0 > > Attachments: HBASE-10516-trunk_v1.patch, HBASE-10516-trunk_v2.patch, > HBASE-10516-trunk_v3.patch > > > Threads.sleep implementation: > {code} > public static void sleep(long millis) { > try { > Thread.sleep(millis); > } catch (InterruptedException e) { > e.printStackTrace(); > Thread.currentThread().interrupt(); > } > } > {code} > From above implementation, the current thread's interrupt status is > restored/reset when InterruptedException is caught and handled. 
If this > method is called within a while/for loop and a first InterruptedException is > thrown during one sleep, the restored interrupt status makes Threads.sleep in every subsequent iteration throw InterruptedException immediately, without the expected sleep. This behavior breaks the intention of an independent sleep in each iteration. > I mentioned this above in HBASE-10497, and this jira was created to handle it > separately per [~nkeywal]'s suggestion. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
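The loop problem above can be demonstrated in isolation. The sketch below is a standalone demo, not HBase code; the class name and the pre-loop self-interrupt are invented for illustration. It copies the quoted Threads.sleep body and shows that once the interrupt status is restored, every later call returns immediately instead of sleeping:

```java
public class SleepLoopDemo {
  // Mirrors the Threads.sleep implementation quoted above: the catch block
  // restores the interrupt flag, so the flag is still set on the next call.
  static void sleepQuietly(long millis) {
    try {
      Thread.sleep(millis);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  // Counts how many of 'rounds' calls actually slept (at least half the
  // requested time). With one interrupt pending before the loop, every
  // iteration throws immediately, so the count stays at zero.
  public static int sleptRounds(int rounds, long millis) {
    Thread.currentThread().interrupt(); // simulate an interrupt arriving once
    int slept = 0;
    for (int i = 0; i < rounds; i++) {
      long start = System.nanoTime();
      sleepQuietly(millis);
      if (System.nanoTime() - start >= millis * 500_000L) {
        slept++;
      }
    }
    Thread.interrupted(); // clear the flag so callers are unaffected
    return slept;
  }
}
```

A loop that genuinely needs an independent sleep per iteration has to either defer restoring the interrupt status until the loop exits, or check the flag and bail out of the loop; the attached patches rework call sites along those lines.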
[jira] [Commented] (HBASE-10526) Using Cell instead of KeyValue in HFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909147#comment-13909147 ] Hudson commented on HBASE-10526: SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #96 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/96/]) HBASE-10526 Using Cell instead of KeyValue in HFileOutputFormat (jxiang: rev 1570702) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java > Using Cell instead of KeyValue in HFileOutputFormat > --- > > Key: HBASE-10526 > URL: https://issues.apache.org/jira/browse/HBASE-10526 > Project: HBase > Issue Type: Sub-task > Components: mapreduce >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: hbase-10526.patch, hbase-10526_v1.1.patch, > hbase-10526_v2.patch, hbase-10526_v3.patch > > > HFileOutputFormat/KeyValueSortReducer use KeyValue. We should deprecate them > and use Cell instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10527) TestTokenAuthentication fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909146#comment-13909146 ] Hudson commented on HBASE-10527: SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #96 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/96/]) HBASE-10527 Token authentication fails with IBM JDK (garyh: rev 1570437) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSecretManager.java > TestTokenAuthentication fails with the IBM JDK > -- > > Key: HBASE-10527 > URL: https://issues.apache.org/jira/browse/HBASE-10527 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0 > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Gary Helmling >Priority: Minor > Fix For: 0.96.2, 0.98.1 > > Attachments: HBASE-10527.patch, > org.apache.hadoop.hbase.security.token.TestTokenAuthentication-output.txt.gz > > > "DIGEST-MD5: digest response format violation. Mismatched response." > The failure trace: > {noformat} > 2014-02-13 15:41:00,449 WARN [RpcServer.reader=1,port=54751] > ipc.RpcServer$Listener(794): RpcServer.listener,port=54751: count of bytes > read: 0 > javax.security.sasl.SaslException: DIGEST-MD5: digest response format > violation. Mismatched response. 
> at > com.ibm.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:614) > at > com.ibm.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java:234) > at > org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1315) > at > org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1501) > at > org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:790) > at > org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:581) > at > org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:556) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1170) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:640) > at java.lang.Thread.run(Thread.java:853) > {noformat} -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10585) Avoid early creation of Node objects in LRUDictionary.BidirectionalLRUMap
[ https://issues.apache.org/jira/browse/HBASE-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909148#comment-13909148 ] Hudson commented on HBASE-10585: SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #96 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/96/]) HBASE-10585 Avoid early creation of Node objects in LRUDictionary.BidirectionalLRUMap.(Anoop) (anoopsamjohn: rev 1570672) * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/LRUDictionary.java > Avoid early creation of Node objects in LRUDictionary.BidirectionalLRUMap > - > > Key: HBASE-10585 > URL: https://issues.apache.org/jira/browse/HBASE-10585 > Project: HBase > Issue Type: Bug >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 0.98.1, 0.99.0 > > Attachments: HBASE-10585.patch > > > When LRUDictionary initialized with N as the size, the BidirectionalLRUMap > creates N Node objects and kept in an array. It will be better not doing this > eager creation. Can create Node object on demand if array's current position > Node element is null. Once it is created the object can be reused as we do > now. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
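The on-demand allocation described above can be sketched as follows. This is a minimal illustration with invented names, not the actual LRUDictionary.BidirectionalLRUMap code: slots start out null, an entry is allocated only the first time its position is touched, and the object is reused afterwards, avoiding N eager allocations at construction time.

```java
public class LazyPool {
  private final Object[] nodes;
  private int created = 0;

  public LazyPool(int size) {
    // Only the array is allocated here; no entry objects yet.
    nodes = new Object[size];
  }

  // Allocate the entry on first use of this slot, then reuse it.
  public Object get(int index) {
    if (nodes[index] == null) {
      nodes[index] = new Object(); // stands in for a real Node
      created++;
    }
    return nodes[index];
  }

  public int createdCount() {
    return created;
  }
}
```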
[jira] [Commented] (HBASE-10567) Add overwrite manifest option to ExportSnapshot
[ https://issues.apache.org/jira/browse/HBASE-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909143#comment-13909143 ] Hudson commented on HBASE-10567: SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #96 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/96/]) HBASE-10567 Add overwrite manifest option to ExportSnapshot (mbertozzi: rev 1570502) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java > Add overwrite manifest option to ExportSnapshot > --- > > Key: HBASE-10567 > URL: https://issues.apache.org/jira/browse/HBASE-10567 > Project: HBase > Issue Type: Bug > Components: snapshots >Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi >Priority: Minor > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: HBASE-10567-v0.patch, HBASE-10567-v1.patch > > > If you want to export a snapshot twice (e.g. in case you accidentally removed > a file and now your snapshot is corrupted) you have to manually remove the > .hbase-snapshot/SNAPSHOT_NAME directory and then run the ExportSnapshot tool. > Add an -overwrite option to this operation automatically. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909145#comment-13909145 ] Hudson commented on HBASE-10392: SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #96 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/96/]) HBASE-10392 Correct references to hbase.regionserver.global.memstore.upperLimit (ndimiduk: rev 1570721) * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java * /hbase/trunk/src/main/docbkx/ops_mgt.xml * /hbase/trunk/src/main/docbkx/performance.xml > Correct references to hbase.regionserver.global.memstore.upperLimit > --- > > Key: HBASE-10392 > URL: https://issues.apache.org/jira/browse/HBASE-10392 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk > Fix For: 0.99.0 > > Attachments: HBASE-10392.0.patch, HBASE-10392.1.patch, > HBASE-10392.2.patch, HBASE-10392.3.patch, HBASE-10392.4.patch, > HBASE-10392.5.patch > > > As part of the awesome new HBASE-5349, a couple references to > {{hbase.regionserver.global.memstore.upperLimit}} was missed. Clean those up > to use the new {{hbase.regionserver.global.memstore.size}} instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10362) HBCK changes for supporting region replicas
[ https://issues.apache.org/jira/browse/HBASE-10362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Devaraj Das updated HBASE-10362: Attachment: 10362-3.txt I moved the skipCheck to the place where the HbckInfo objects are constructed. > HBCK changes for supporting region replicas > --- > > Key: HBASE-10362 > URL: https://issues.apache.org/jira/browse/HBASE-10362 > Project: HBase > Issue Type: Sub-task > Components: hbck >Reporter: Enis Soztutar >Assignee: Devaraj Das > Fix For: 0.99.0 > > Attachments: 10362-1.txt, 10362-2.txt, 10362-3.txt > > > We should support region replicas in HBCK. The changes are probably not that > intrusive. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10587) Master metrics clusterRequests is wrong
[ https://issues.apache.org/jira/browse/HBASE-10587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909129#comment-13909129 ] Enis Soztutar commented on HBASE-10587: --- +1 > Master metrics clusterRequests is wrong > --- > > Key: HBASE-10587 > URL: https://issues.apache.org/jira/browse/HBASE-10587 > Project: HBase > Issue Type: Bug > Components: master, metrics >Affects Versions: 0.96.0 >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Attachments: hbase-10587.patch > > > In the master jmx, metrics clusterRequests increases so fast. Looked into the > code and found the calculation is a little bit wrong. It's a counter. > However, for each region server report, the total number of requests is added > to clusterRequests. That means it's added multiple times. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
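The double-counting described above can be sketched as follows (hypothetical names, not the actual master metrics code). Because each regionserver report carries that server's running total, adding the reported value to a counter on every report inflates it; one plausible fix, assumed here for illustration, is to track the last-seen total per server and add only the delta:

```java
import java.util.HashMap;
import java.util.Map;

public class ClusterRequestCounter {
  private final Map<String, Long> lastSeen = new HashMap<>();
  private long clusterRequests = 0;

  // The buggy behavior would be: clusterRequests += totalRequests on every
  // report, re-adding each server's whole history. Here we add only the
  // increase since that server's previous report.
  public void onServerReport(String server, long totalRequests) {
    long prev = lastSeen.getOrDefault(server, 0L);
    clusterRequests += totalRequests - prev;
    lastSeen.put(server, totalRequests);
  }

  public long getClusterRequests() {
    return clusterRequests;
  }
}
```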
[jira] [Commented] (HBASE-8402) ScanMetrics depends on number of rpc calls to the server.
[ https://issues.apache.org/jira/browse/HBASE-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909112#comment-13909112 ] Hudson commented on HBASE-8402: --- SUCCESS: Integrated in HBase-0.94-security #419 (See [https://builds.apache.org/job/HBase-0.94-security/419/]) HBASE-10583 backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server. (Liu Shaohui and Himanshu Vashishtha) (larsh: rev 1570756) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java > ScanMetrics depends on number of rpc calls to the server. > - > > Key: HBASE-8402 > URL: https://issues.apache.org/jira/browse/HBASE-8402 > Project: HBase > Issue Type: Bug > Components: Client, metrics >Affects Versions: 0.95.0 >Reporter: Himanshu Vashishtha >Assignee: Himanshu Vashishtha >Priority: Minor > Fix For: 0.98.0, 0.95.1 > > Attachments: HBASE-8402-v1.patch, HBASE-8402-v2.patch > > > Currently, scan metrics is not published in case there is one trip to server. > I was testing it on a small row range (200 rows) with a large cache value > (1000). It doesn't look right as metrics should not depend on number of rpc > calls (number of rpc call is just one metrics fwiw). -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10583) backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server.
[ https://issues.apache.org/jira/browse/HBASE-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909113#comment-13909113 ] Hudson commented on HBASE-10583: SUCCESS: Integrated in HBase-0.94-security #419 (See [https://builds.apache.org/job/HBase-0.94-security/419/]) HBASE-10583 backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server. (Liu Shaohui and Himanshu Vashishtha) (larsh: rev 1570756) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java > backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to > the server. > --- > > Key: HBASE-10583 > URL: https://issues.apache.org/jira/browse/HBASE-10583 > Project: HBase > Issue Type: Bug >Reporter: Liu Shaohui >Assignee: Liu Shaohui > Fix For: 0.94.18 > > Attachments: HBASE-10583-v1.diff > > > see HBASE-8402 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Comment Edited] (HBASE-7295) Contention in HBaseClient.getConnection
[ https://issues.apache.org/jira/browse/HBASE-7295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909105#comment-13909105 ] Lars Hofhansl edited comment on HBASE-7295 at 2/22/14 1:22 AM: --- I haven't seen any convincing evidence that this is an issue. Do you have seen this issue [~mubarakseyed]? Do you have a stack trace? was (Author: lhofhansl): I haven't see any convincing evidence that this is an issue. Do you have seen this issue [~mubarakseyed]? Do you have a stack trace? > Contention in HBaseClient.getConnection > --- > > Key: HBASE-7295 > URL: https://issues.apache.org/jira/browse/HBASE-7295 > Project: HBase > Issue Type: Improvement >Affects Versions: 0.94.3 >Reporter: Varun Sharma >Assignee: Varun Sharma > Attachments: 7295-0.94-v2.txt, 7295-0.94-v3.txt, 7295-0.94-v4.txt, > 7295-0.94-v5.txt, 7295-0.94.txt, 7295-trunk-v2.txt, 7295-trunk-v3.txt, > 7295-trunk-v3.txt, 7295-trunk-v4.txt, 7295-trunk.txt, 7295-trunk.txt, > TestSynchronized.java, TestVolatile.java, synchronized_output.txt, > volatile_output.txt > > > HBaseClient.getConnection() synchronizes on the connections object. We found > severe contention on a thrift gateway which was fanning out roughly 3000+ > calls per second to hbase region servers. The thrift gateway had 2000+ > threads for handling incoming connections. Threads were blocked on the > syncrhonized block - we set ipc.pool.size to 200. Since we are using > RoundRobin/ThreadLocal pool only - its not necessary to synchronize on > connections - it might lead to cases where we might go slightly over the > ipc.max.pool.size() but the additional connections would timeout after > maxIdleTime - underlying PoolMap connections object is thread safe. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-7295) Contention in HBaseClient.getConnection
[ https://issues.apache.org/jira/browse/HBASE-7295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909105#comment-13909105 ] Lars Hofhansl commented on HBASE-7295: -- I haven't seen any convincing evidence that this is an issue. Have you seen this issue [~mubarakseyed]? Do you have a stack trace? > Contention in HBaseClient.getConnection > --- > > Key: HBASE-7295 > URL: https://issues.apache.org/jira/browse/HBASE-7295 > Project: HBase > Issue Type: Improvement >Affects Versions: 0.94.3 >Reporter: Varun Sharma >Assignee: Varun Sharma > Attachments: 7295-0.94-v2.txt, 7295-0.94-v3.txt, 7295-0.94-v4.txt, > 7295-0.94-v5.txt, 7295-0.94.txt, 7295-trunk-v2.txt, 7295-trunk-v3.txt, > 7295-trunk-v3.txt, 7295-trunk-v4.txt, 7295-trunk.txt, 7295-trunk.txt, > TestSynchronized.java, TestVolatile.java, synchronized_output.txt, > volatile_output.txt > > > HBaseClient.getConnection() synchronizes on the connections object. We found > severe contention on a thrift gateway which was fanning out roughly 3000+ > calls per second to hbase region servers. The thrift gateway had 2000+ > threads for handling incoming connections. Threads were blocked on the > synchronized block - we set ipc.pool.size to 200. Since we are using > RoundRobin/ThreadLocal pool only - it's not necessary to synchronize on > connections - it might lead to cases where we might go slightly over the > ipc.max.pool.size() but the additional connections would timeout after > maxIdleTime - underlying PoolMap connections object is thread safe. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
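The contention pattern under discussion, and the lock-free alternative the issue description hints at, can be sketched like this (invented names, not the actual HBaseClient code): a get-or-create against a thread-safe map avoids serializing all callers on one monitor, at the cost of occasionally constructing a connection that loses the putIfAbsent race, analogous to briefly exceeding ipc.max.pool.size.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConnectionCache {
  private final ConcurrentMap<String, Object> connections =
      new ConcurrentHashMap<>();

  // Lock-free on the common (cache-hit) path. Under a race two threads may
  // both construct a connection, but only one wins putIfAbsent; the loser's
  // extra connection would simply idle out, as the issue description notes.
  public Object getConnection(String remoteId) {
    Object conn = connections.get(remoteId);
    if (conn == null) {
      Object fresh = new Object(); // stands in for a real connection
      Object prior = connections.putIfAbsent(remoteId, fresh);
      conn = (prior != null) ? prior : fresh;
    }
    return conn;
  }
}
```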
[jira] [Commented] (HBASE-10567) Add overwrite manifest option to ExportSnapshot
[ https://issues.apache.org/jira/browse/HBASE-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909089#comment-13909089 ] Aleksandr Shulman commented on HBASE-10567: --- Took a first read of the patch. Looks good to me. I'd maybe like to see a few more tests, but this is probably okay for now. > Add overwrite manifest option to ExportSnapshot > --- > > Key: HBASE-10567 > URL: https://issues.apache.org/jira/browse/HBASE-10567 > Project: HBase > Issue Type: Bug > Components: snapshots >Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi >Priority: Minor > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: HBASE-10567-v0.patch, HBASE-10567-v1.patch > > > If you want to export a snapshot twice (e.g. in case you accidentally removed > a file and now your snapshot is corrupted) you have to manually remove the > .hbase-snapshot/SNAPSHOT_NAME directory and then run the ExportSnapshot tool. > Add an -overwrite option to this operation automatically. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10581) ACL znode are left without PBed during upgrading hbase0.94* to hbase0.96+
[ https://issues.apache.org/jira/browse/HBASE-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909090#comment-13909090 ] Himanshu Vashishtha commented on HBASE-10581: - Yeah, I okay'ed it in my last comment considering such odd usages… But IMHO, any non-hbase process creating arbitrary znodes under /hbase doesn't sound right. If we treat it similar to creating arbitrary files in hbase directory, then a recursive delete of such znodes is the right option (and doc it accordingly in the book). Having said that, I am still okay if we don't delete them. :) > ACL znode are left without PBed during upgrading hbase0.94* to hbase0.96+ > - > > Key: HBASE-10581 > URL: https://issues.apache.org/jira/browse/HBASE-10581 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0, 0.96.0, 0.96.1 >Reporter: Jeffrey Zhong >Assignee: Jeffrey Zhong >Priority: Critical > Attachments: hbase-10581.patch > > > ACL znodes are left in the upgrade process when upgrading 0.94 to 0.96+ > Those 0.94 znodes will choke HMaster because their data aren't PBed. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs
[ https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9117: Attachment: HBASE-9117.05.patch Rebased to HEAD of trunk. Compiles at least, probably lots of broken behavior and even more broken tests. As for deprecating vs removing, the biggest change that will be painful to deprecate is the Configuration-instance-based automagical connection management. > Remove HTablePool and all HConnection pooling related APIs > -- > > Key: HBASE-9117 > URL: https://issues.apache.org/jira/browse/HBASE-9117 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Nick Dimiduk >Priority: Critical > Fix For: 0.99.0 > > Attachments: HBASE-9117.00.patch, HBASE-9117.01.patch, > HBASE-9117.02.patch, HBASE-9117.03.patch, HBASE-9117.04.patch, > HBASE-9117.05.patch > > > The recommended way is now: > # Create an HConnection: HConnectionManager.createConnection(...) > # Create a light HTable: HConnection.getTable(...) > # table.close() > # connection.close() > All other API and pooling will be removed. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
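The four recommended steps listed above, as a sketch. This uses the 0.96-era client API exactly as named in the comment; the table name is a placeholder, and the snippet is not runnable without HBase client jars on the classpath and a running cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;

public class RecommendedUsage {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // 1. Create an HConnection (heavyweight; meant to be shared)
    HConnection connection = HConnectionManager.createConnection(conf);
    try {
      // 2. Create a light HTable from the connection
      HTableInterface table = connection.getTable("mytable"); // placeholder name
      try {
        // ... gets, puts, scans against the table ...
      } finally {
        // 3. Close the table (cheap; the connection stays open)
        table.close();
      }
    } finally {
      // 4. Close the shared connection and its pools
      connection.close();
    }
  }
}
```

The heavyweight HConnection is intended to be long-lived, while the HTableInterface handles obtained from it are cheap and can be created and closed per operation.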
[jira] [Updated] (HBASE-10580) IntegrationTestingUtility#restoreCluster leak resource when running in a mini cluster mode
[ https://issues.apache.org/jira/browse/HBASE-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeffrey Zhong updated HBASE-10580: -- Resolution: Fixed Fix Version/s: 0.99.0 0.98.1 0.96.2 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks [~enis] for the review! I've integrated the changes into trunk, 0.98 & 0.96 branch. > IntegrationTestingUtility#restoreCluster leak resource when running in a mini > cluster mode > -- > > Key: HBASE-10580 > URL: https://issues.apache.org/jira/browse/HBASE-10580 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 0.98.0, 0.96.0, 0.96.1, 0.99.0 >Reporter: Jeffrey Zhong >Assignee: Jeffrey Zhong > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: hbase-10580.patch > > > When the utility isn't run in distributed cluster mode, the restore only > shutdown MiniHBaseCluster not MiniDFSCluster & MiniZKCluster. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909070#comment-13909070 ] Hudson commented on HBASE-10547: SUCCESS: Integrated in hbase-0.96-hadoop2 #211 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/211/]) HBASE-10547 TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK (apurtell: rev 1570747) * /hbase/branches/0.96/hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestFixedLengthWrapper.java > TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK > > > Key: HBASE-10547 > URL: https://issues.apache.org/jira/browse/HBASE-10547 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0, 0.96.1.1 > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: 10547.patch > > > Here's the trace. > {noformat} > Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec > <<< FAILURE! > testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper) Time > elapsed: 0.025 sec <<< FAILURE! > arrays first differed at element [8]; expected:<-40> but was:<0> > at > org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50) > at org.junit.Assert.internalArrayEquals(Assert.java:473) > at org.junit.Assert.assertArrayEquals(Assert.java:294) > at org.junit.Assert.assertArrayEquals(Assert.java:305) > at > org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60) > {noformat} > This is with 0.98.0. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10586) hadoop2-compat IPC metric registred twice
[ https://issues.apache.org/jira/browse/HBASE-10586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909069#comment-13909069 ] Hudson commented on HBASE-10586: SUCCESS: Integrated in hbase-0.96-hadoop2 #211 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/211/]) HBASE-10586 hadoop2-compat IPC metric registred twice (mbertozzi: rev 1570742) * /hbase/branches/0.96/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java > hadoop2-compat IPC metric registred twice > - > > Key: HBASE-10586 > URL: https://issues.apache.org/jira/browse/HBASE-10586 > Project: HBase > Issue Type: Bug > Components: metrics >Affects Versions: 0.98.0, 0.96.1.1 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: HBASE-10586-v0.patch > > > There is an extra snapshot/addrecord line in hadoop2-compat > MetricsHBaseServerSourceImpl resulting in IPC metrics with a ".1" > the extra line is not present in the hadoop1-compat "mirror" > {code} > "numCallsInGeneralQueue.1" : 0, > "numCallsInReplicationQueue.1" : 0, > "numCallsInPriorityQueue.1" : 0, > ... > {code} -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10526) Using Cell instead of KeyValue in HFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909071#comment-13909071 ] Hudson commented on HBASE-10526: SUCCESS: Integrated in hbase-0.96-hadoop2 #211 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/211/]) HBASE-10526 Using Cell instead of KeyValue in HFileOutputFormat (jxiang: rev 1570714) * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java > Using Cell instead of KeyValue in HFileOutputFormat > --- > > Key: HBASE-10526 > URL: https://issues.apache.org/jira/browse/HBASE-10526 > Project: HBase > Issue Type: Sub-task > Components: mapreduce >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: hbase-10526.patch, hbase-10526_v1.1.patch, > hbase-10526_v2.patch, hbase-10526_v3.patch > > > HFileOutputFormat/KeyValueSortReducer use KeyValue. We should deprecate them > and use Cell instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10581) ACL znode are left without PBed during upgrading hbase0.94* to hbase0.96+
[ https://issues.apache.org/jira/browse/HBASE-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909061#comment-13909061 ] Jeffrey Zhong commented on HBASE-10581: --- {quote} It's kind of crazy to even tacitly support customers writing their own custom znodes under /hbase , right? {quote} It sounds crazy but never under estimate our users' innovation here. In addition, cleaning data without knowing what it is also seems a bit dangerous. I'll wait more time to see if other more folks want to delete unknown znodes at the end. If that's the case, I'll amend the patch otherwise I'll check the current version early next week. Thanks for the reviews & feedbacks. > ACL znode are left without PBed during upgrading hbase0.94* to hbase0.96+ > - > > Key: HBASE-10581 > URL: https://issues.apache.org/jira/browse/HBASE-10581 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0, 0.96.0, 0.96.1 >Reporter: Jeffrey Zhong >Assignee: Jeffrey Zhong >Priority: Critical > Attachments: hbase-10581.patch > > > ACL znodes are left in the upgrade process when upgrading 0.94 to 0.96+ > Those 0.94 znodes will choke HMaster because their data aren't PBed. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10526) Using Cell instead of KeyValue in HFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909060#comment-13909060 ] Hudson commented on HBASE-10526: SUCCESS: Integrated in hbase-0.96 #307 (See [https://builds.apache.org/job/hbase-0.96/307/]) HBASE-10526 Using Cell instead of KeyValue in HFileOutputFormat (jxiang: rev 1570714) * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java > Using Cell instead of KeyValue in HFileOutputFormat > --- > > Key: HBASE-10526 > URL: https://issues.apache.org/jira/browse/HBASE-10526 > Project: HBase > Issue Type: Sub-task > Components: mapreduce >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: hbase-10526.patch, hbase-10526_v1.1.patch, > hbase-10526_v2.patch, hbase-10526_v3.patch > > > HFileOutputFormat/KeyValueSortReducer use KeyValue. We should deprecate them > and use Cell instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Created] (HBASE-10590) Update contents about tracing in the Reference Guide
Masatake Iwasaki created HBASE-10590: Summary: Update contents about tracing in the Reference Guide Key: HBASE-10590 URL: https://issues.apache.org/jira/browse/HBASE-10590 Project: HBase Issue Type: Improvement Components: documentation Reporter: Masatake Iwasaki Priority: Minor Add an explanation of client-side settings and the shell command for tracing. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Resolved] (HBASE-10509) TestRowProcessorEndpoint fails with missing required field row_processor_result
[ https://issues.apache.org/jira/browse/HBASE-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-10509. Resolution: Cannot Reproduce For whatever reason I cannot reproduce this with the latest trunk or 0.98 branches. > TestRowProcessorEndpoint fails with missing required field > row_processor_result > --- > > Key: HBASE-10509 > URL: https://issues.apache.org/jira/browse/HBASE-10509 > Project: HBase > Issue Type: Bug > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Attachments: 10509.patch > > > Seen with IBM JDK 7: > {noformat} > Caused by: com.google.protobuf.UninitializedMessageException: Message missing > required fields: row_processor_result > at > com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770) > at > org.apache.hadoop.hbase.protobuf.generated.RowProcessorProtos$ProcessResponse$Builder.build(RowProcessorProtos.java:1301) > at > org.apache.hadoop.hbase.protobuf.generated.RowProcessorProtos$ProcessResponse$Builder.build(RowProcessorProtos.java:1245) > at > org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5482) > {noformat} -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10509) TestRowProcessorEndpoint fails with missing required field row_processor_result
[ https://issues.apache.org/jira/browse/HBASE-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10509: --- Affects Version/s: (was: 0.99.0) (was: 0.98.0) > TestRowProcessorEndpoint fails with missing required field > row_processor_result > --- > > Key: HBASE-10509 > URL: https://issues.apache.org/jira/browse/HBASE-10509 > Project: HBase > Issue Type: Bug > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Attachments: 10509.patch > > > Seen with IBM JDK 7: > {noformat} > Caused by: com.google.protobuf.UninitializedMessageException: Message missing > required fields: row_processor_result > at > com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770) > at > org.apache.hadoop.hbase.protobuf.generated.RowProcessorProtos$ProcessResponse$Builder.build(RowProcessorProtos.java:1301) > at > org.apache.hadoop.hbase.protobuf.generated.RowProcessorProtos$ProcessResponse$Builder.build(RowProcessorProtos.java:1245) > at > org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5482) > {noformat} -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10589) Reduce unnecessary TestRowProcessorEndpoint resource usage
[ https://issues.apache.org/jira/browse/HBASE-10589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10589: --- Resolution: Fixed Fix Version/s: 0.96.2 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to trunk, 0.98, and 0.96. Thanks Stack. > Reduce unnecessary TestRowProcessorEndpoint resource usage > -- > > Key: HBASE-10589 > URL: https://issues.apache.org/jira/browse/HBASE-10589 > Project: HBase > Issue Type: Improvement >Affects Versions: 0.98.0 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: 10589.patch > > > We don't need 1000 concurrent threads when 100 will do. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10589) Reduce unnecessary TestRowProcessorEndpoint resource usage
[ https://issues.apache.org/jira/browse/HBASE-10589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909042#comment-13909042 ] stack commented on HBASE-10589: --- +1 > Reduce unnecessary TestRowProcessorEndpoint resource usage > -- > > Key: HBASE-10589 > URL: https://issues.apache.org/jira/browse/HBASE-10589 > Project: HBase > Issue Type: Improvement >Affects Versions: 0.98.0 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 0.98.1, 0.99.0 > > Attachments: 10589.patch > > > We don't need 1000 concurrent threads when 100 will do. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10589) Reduce unnecessary TestRowProcessorEndpoint resource usage
[ https://issues.apache.org/jira/browse/HBASE-10589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10589: --- Issue Type: Improvement (was: Sub-task) Parent: (was: HBASE-10509) > Reduce unnecessary TestRowProcessorEndpoint resource usage > -- > > Key: HBASE-10589 > URL: https://issues.apache.org/jira/browse/HBASE-10589 > Project: HBase > Issue Type: Improvement >Affects Versions: 0.98.0 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 0.98.1, 0.99.0 > > Attachments: 10589.patch > > > We don't need 1000 concurrent threads when 100 will do. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10351) LoadBalancer changes for supporting region replicas
[ https://issues.apache.org/jira/browse/HBASE-10351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909030#comment-13909030 ] Sergey Shelukhin commented on HBASE-10351: -- oh, never mind, I thought it was about the MultiAction-Action part; the cluster actions I have seen > LoadBalancer changes for supporting region replicas > --- > > Key: HBASE-10351 > URL: https://issues.apache.org/jira/browse/HBASE-10351 > Project: HBase > Issue Type: Sub-task > Components: master >Affects Versions: 0.99.0 >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Attachments: hbase-10351_v0.patch, hbase-10351_v1.patch, > hbase-10351_v3.patch > > > LoadBalancer has to be aware of and enforce placement of region replicas so > that the replicas are not co-hosted in the same server, host or rack. This > will ensure that the region is highly available during process / host / rack > failover. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.
[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909028#comment-13909028 ] Brandon Li commented on HBASE-8304: --- {quote}+ if (desFs.getFileChecksum(tmpPath).equals(srcFs.getFileChecksum(srcPath)){quote} It will throw an NPE if desFs doesn't override FileSystem#getFileChecksum(). FileSystem#getCanonicalUri() can return a canonical path with the port. However, it's protected, not public; you would need to make it a public method first. > Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured > without default port. > --- > > Key: HBASE-8304 > URL: https://issues.apache.org/jira/browse/HBASE-8304 > Project: HBase > Issue Type: Bug > Components: HFile, regionserver >Affects Versions: 0.94.5 >Reporter: Raymond Liu > Labels: bulkloader > Attachments: HBASE-9537.patch > > > When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as > hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir > where port is the hdfs namenode's default port, the bulkload operation will > not remove the files in the bulk output dir. Store::bulkLoadHfile will treat > hdfs://ip and hdfs://ip:port as different filesystems and go with the copy > approach instead of rename. > The root cause is that the hbase master will rewrite fs.default.name/fs.defaultFS > according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from > the hregion will not match the src fs uri passed from the client. > Any suggestion on the best approach to fix this issue? > I kind of think that we could check for the default port if the src uri comes without > port info. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
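The default-port check suggested in the issue can be sketched with plain `java.net.URI`. This is only an illustration of the idea: `sameFileSystem` and `DEFAULT_HDFS_PORT` are made-up names, not HBase's or Hadoop's actual API (Hadoop resolves the real default port internally, e.g. via `FileSystem#getCanonicalUri()`).

```java
import java.net.URI;

public class FsUriCompare {
    // Assumption: 8020 stands in for the HDFS namenode's default port.
    static final int DEFAULT_HDFS_PORT = 8020;

    // Treat a URI with no explicit port as using the scheme's default port
    // before comparing, so hdfs://ip and hdfs://ip:8020 compare as equal.
    static boolean sameFileSystem(URI src, URI dst) {
        if (!src.getScheme().equalsIgnoreCase(dst.getScheme())) {
            return false;
        }
        if (!src.getHost().equalsIgnoreCase(dst.getHost())) {
            return false;
        }
        // URI.getPort() returns -1 when no port was given.
        int srcPort = src.getPort() == -1 ? DEFAULT_HDFS_PORT : src.getPort();
        int dstPort = dst.getPort() == -1 ? DEFAULT_HDFS_PORT : dst.getPort();
        return srcPort == dstPort;
    }

    public static void main(String[] args) {
        URI a = URI.create("hdfs://10.0.0.1");
        URI b = URI.create("hdfs://10.0.0.1:8020");
        System.out.println(sameFileSystem(a, b)); // ports normalized before compare
    }
}
```

With this normalization, Store::bulkLoadHfile's rename-vs-copy decision would no longer flip solely because one configuration omitted the default port.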
[jira] [Commented] (HBASE-10351) LoadBalancer changes for supporting region replicas
[ https://issues.apache.org/jira/browse/HBASE-10351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909029#comment-13909029 ] Sergey Shelukhin commented on HBASE-10351: -- Is this the same patch I reviewed on GitHub? What is the refactor of the Action class? I may have missed that > LoadBalancer changes for supporting region replicas > --- > > Key: HBASE-10351 > URL: https://issues.apache.org/jira/browse/HBASE-10351 > Project: HBase > Issue Type: Sub-task > Components: master >Affects Versions: 0.99.0 >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Attachments: hbase-10351_v0.patch, hbase-10351_v1.patch, > hbase-10351_v3.patch > > > LoadBalancer has to be aware of and enforce placement of region replicas so > that the replicas are not co-hosted in the same server, host or rack. This > will ensure that the region is highly available during process / host / rack > failover. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10589) Reduce unnecessary TestRowProcessorEndpoint resource usage
[ https://issues.apache.org/jira/browse/HBASE-10589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10589: --- Attachment: 10589.patch > Reduce unnecessary TestRowProcessorEndpoint resource usage > -- > > Key: HBASE-10589 > URL: https://issues.apache.org/jira/browse/HBASE-10589 > Project: HBase > Issue Type: Sub-task >Affects Versions: 0.98.0 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 0.98.1, 0.99.0 > > Attachments: 10589.patch > > > We don't need 1000 concurrent threads when 100 will do. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10589) Reduce unnecessary TestRowProcessorEndpoint resource usage
[ https://issues.apache.org/jira/browse/HBASE-10589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10589: --- Status: Patch Available (was: Open) > Reduce unnecessary TestRowProcessorEndpoint resource usage > -- > > Key: HBASE-10589 > URL: https://issues.apache.org/jira/browse/HBASE-10589 > Project: HBase > Issue Type: Sub-task >Affects Versions: 0.98.0 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 0.98.1, 0.99.0 > > Attachments: 10589.patch > > > We don't need 1000 concurrent threads when 100 will do. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Created] (HBASE-10589) Reduce unnecessary TestRowProcessorEndpoint resource usage
Andrew Purtell created HBASE-10589: -- Summary: Reduce unnecessary TestRowProcessorEndpoint resource usage Key: HBASE-10589 URL: https://issues.apache.org/jira/browse/HBASE-10589 Project: HBase Issue Type: Sub-task Affects Versions: 0.98.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Trivial Fix For: 0.98.1, 0.99.0 We don't need 1000 concurrent threads when 100 will do. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10362) HBCK changes for supporting region replicas
[ https://issues.apache.org/jira/browse/HBASE-10362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909023#comment-13909023 ] Enis Soztutar commented on HBASE-10362: --- Makes sense to skip checks for region replicas. Why do we set skipCheck in addServer() versus the constructor? > HBCK changes for supporting region replicas > --- > > Key: HBASE-10362 > URL: https://issues.apache.org/jira/browse/HBASE-10362 > Project: HBase > Issue Type: Sub-task > Components: hbck >Reporter: Enis Soztutar >Assignee: Devaraj Das > Fix For: 0.99.0 > > Attachments: 10362-1.txt, 10362-2.txt > > > We should support region replicas in HBCK. The changes are probably not that > intrusive. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-7295) Contention in HBaseClient.getConnection
[ https://issues.apache.org/jira/browse/HBASE-7295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909020#comment-13909020 ] Mubarak Seyed commented on HBASE-7295: -- Is there any update on this issue? Thanks. > Contention in HBaseClient.getConnection > --- > > Key: HBASE-7295 > URL: https://issues.apache.org/jira/browse/HBASE-7295 > Project: HBase > Issue Type: Improvement >Affects Versions: 0.94.3 >Reporter: Varun Sharma >Assignee: Varun Sharma > Attachments: 7295-0.94-v2.txt, 7295-0.94-v3.txt, 7295-0.94-v4.txt, > 7295-0.94-v5.txt, 7295-0.94.txt, 7295-trunk-v2.txt, 7295-trunk-v3.txt, > 7295-trunk-v3.txt, 7295-trunk-v4.txt, 7295-trunk.txt, 7295-trunk.txt, > TestSynchronized.java, TestVolatile.java, synchronized_output.txt, > volatile_output.txt > > > HBaseClient.getConnection() synchronizes on the connections object. We found > severe contention on a thrift gateway which was fanning out roughly 3000+ > calls per second to hbase region servers. The thrift gateway had 2000+ > threads for handling incoming connections. Threads were blocked on the > syncrhonized block - we set ipc.pool.size to 200. Since we are using > RoundRobin/ThreadLocal pool only - its not necessary to synchronize on > connections - it might lead to cases where we might go slightly over the > ipc.max.pool.size() but the additional connections would timeout after > maxIdleTime - underlying PoolMap connections object is thread safe. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
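The contention described in the issue, with thousands of handler threads serializing on one synchronized connections map, can be sketched with a `ConcurrentHashMap` get-or-create. This is a sketch of the general technique only: `ConnectionCache` and `Connection` are illustrative stand-ins, not the actual HBaseClient code, which uses its own thread-safe PoolMap.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConnectionCache {
    // Stand-in for a pooled RPC connection keyed by remote server address.
    static class Connection {
        final String key;
        Connection(String key) { this.key = key; }
    }

    private final ConcurrentMap<String, Connection> connections =
        new ConcurrentHashMap<>();

    // No global synchronized block: computeIfAbsent locks only the affected
    // bin briefly during creation, so concurrent lookups for different
    // servers (and repeat lookups for the same server) do not contend.
    Connection getConnection(String key) {
        return connections.computeIfAbsent(key, Connection::new);
    }
}
```

The trade-off mentioned in the issue applies here too: without a global lock, the pool can transiently exceed its target size, relying on idle timeouts to shed the extras.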
[jira] [Resolved] (HBASE-10583) backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server.
[ https://issues.apache.org/jira/browse/HBASE-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-10583. --- Resolution: Fixed Hadoop Flags: Reviewed Committed to 0.94. Thanks Liu and Himanshu. > backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to > the server. > --- > > Key: HBASE-10583 > URL: https://issues.apache.org/jira/browse/HBASE-10583 > Project: HBase > Issue Type: Bug >Reporter: Liu Shaohui >Assignee: Liu Shaohui > Fix For: 0.94.18 > > Attachments: HBASE-10583-v1.diff > > > see HBASE-8402 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10583) backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server.
[ https://issues.apache.org/jira/browse/HBASE-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-10583: -- Summary: backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to the server. (was: backport HBASE-8402 to 0.94) > backport HBASE-8402 to 0.94 - ScanMetrics depends on number of rpc calls to > the server. > --- > > Key: HBASE-10583 > URL: https://issues.apache.org/jira/browse/HBASE-10583 > Project: HBase > Issue Type: Bug >Reporter: Liu Shaohui >Assignee: Liu Shaohui > Fix For: 0.94.18 > > Attachments: HBASE-10583-v1.diff > > > see HBASE-8402 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10526) Using Cell instead of KeyValue in HFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909011#comment-13909011 ] Hudson commented on HBASE-10526: FAILURE: Integrated in HBase-0.98 #176 (See [https://builds.apache.org/job/HBase-0.98/176/]) HBASE-10526 Using Cell instead of KeyValue in HFileOutputFormat (jxiang: rev 1570688) * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java > Using Cell instead of KeyValue in HFileOutputFormat > --- > > Key: HBASE-10526 > URL: https://issues.apache.org/jira/browse/HBASE-10526 > Project: HBase > Issue Type: Sub-task > Components: mapreduce >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: hbase-10526.patch, hbase-10526_v1.1.patch, > hbase-10526_v2.patch, hbase-10526_v3.patch > > > HFileOutputFormat/KeyValueSortReducer use KeyValue. We should deprecate them > and use Cell instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10585) Avoid early creation of Node objects in LRUDictionary.BidirectionalLRUMap
[ https://issues.apache.org/jira/browse/HBASE-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909012#comment-13909012 ] Hudson commented on HBASE-10585: FAILURE: Integrated in HBase-0.98 #176 (See [https://builds.apache.org/job/HBase-0.98/176/]) HBASE-10585 Avoid early creation of Node objects in LRUDictionary.BidirectionalLRUMap.(Anoop) (anoopsamjohn: rev 1570671) * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/LRUDictionary.java > Avoid early creation of Node objects in LRUDictionary.BidirectionalLRUMap > - > > Key: HBASE-10585 > URL: https://issues.apache.org/jira/browse/HBASE-10585 > Project: HBase > Issue Type: Bug >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 0.98.1, 0.99.0 > > Attachments: HBASE-10585.patch > > > When LRUDictionary is initialized with N as the size, the BidirectionalLRUMap > creates N Node objects and keeps them in an array. It would be better to avoid this > eager creation. A Node object can be created on demand when the array element at the > current position is null. Once it is created, the object can be reused as we do > now. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
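The on-demand creation described in the issue can be sketched as follows. `LazySlots` is a simplified stand-in for the pattern, not the actual BidirectionalLRUMap code.

```java
public class LazySlots {
    private final Object[] nodes;
    private int created = 0;

    LazySlots(int capacity) {
        // Only the array is allocated up front; no eager per-slot objects.
        nodes = new Object[capacity];
    }

    // Create the slot's object on first use; reuse it on every later access,
    // matching "once it is created the object can be reused as we do now".
    Object get(int i) {
        if (nodes[i] == null) {
            nodes[i] = new Object();
            created++;
        }
        return nodes[i];
    }

    int createdCount() { return created; }
}
```

A dictionary that never fills all N entries thus never pays for the unused Node objects.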
[jira] [Updated] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs
[ https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-9117: - Priority: Critical (was: Major) > Remove HTablePool and all HConnection pooling related APIs > -- > > Key: HBASE-9117 > URL: https://issues.apache.org/jira/browse/HBASE-9117 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Nick Dimiduk >Priority: Critical > Fix For: 0.99.0 > > Attachments: HBASE-9117.00.patch, HBASE-9117.01.patch, > HBASE-9117.02.patch, HBASE-9117.03.patch, HBASE-9117.04.patch > > > The recommended way is now: > # Create an HConnection: HConnectionManager.createConnection(...) > # Create a light HTable: HConnection.getTable(...) > # table.close() > # connection.close() > All other API and pooling will be removed. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs
[ https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909000#comment-13909000 ] Enis Soztutar commented on HBASE-9117: -- Thanks Nick for rekindling this jira. This looks pretty important for 1.0. Let me mark this as critical. HBASE-10479 did some cleanup around this, but more might be needed. I think we should still deprecate stuff rather than removing those for 1.0 > Remove HTablePool and all HConnection pooling related APIs > -- > > Key: HBASE-9117 > URL: https://issues.apache.org/jira/browse/HBASE-9117 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Nick Dimiduk > Fix For: 0.99.0 > > Attachments: HBASE-9117.00.patch, HBASE-9117.01.patch, > HBASE-9117.02.patch, HBASE-9117.03.patch, HBASE-9117.04.patch > > > The recommended way is now: > # Create an HConnection: HConnectionManager.createConnection(...) > # Create a light HTable: HConnection.getTable(...) > # table.close() > # connection.close() > All other API and pooling will be removed. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
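The recommended lifecycle in the issue description (create a connection, get a lightweight table, close the table, then the connection) maps naturally onto try-with-resources. `Connection` and `Table` below are minimal stand-ins used to show the close ordering only; they are not the real HBase client classes.

```java
import java.util.ArrayList;
import java.util.List;

public class LifecycleSketch {
    static final List<String> log = new ArrayList<>();

    static class Connection implements AutoCloseable {
        Table getTable(String name) { return new Table(name); }
        @Override public void close() { log.add("connection.close"); }
    }

    static class Table implements AutoCloseable {
        final String name;
        Table(String name) { this.name = name; }
        @Override public void close() { log.add("table.close"); }
    }

    public static void main(String[] args) {
        // Resources close in reverse declaration order: the table is closed
        // before the connection, mirroring steps 3 and 4 of the description.
        try (Connection conn = new Connection();
             Table table = conn.getTable("t1")) {
            // use table...
        }
    }
}
```

Because each `getTable` handle is cheap and per-use, there is no need for an HTablePool: callers share one long-lived connection and create/close tables freely.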
[jira] [Updated] (HBASE-10587) Master metrics clusterRequests is wrong
[ https://issues.apache.org/jira/browse/HBASE-10587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-10587: Attachment: hbase-10587.patch > Master metrics clusterRequests is wrong > --- > > Key: HBASE-10587 > URL: https://issues.apache.org/jira/browse/HBASE-10587 > Project: HBase > Issue Type: Bug > Components: master, metrics >Affects Versions: 0.96.0 >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Attachments: hbase-10587.patch > > > In the master jmx, metrics clusterRequests increases so fast. Looked into the > code and found the calculation is a little bit wrong. It's a counter. > However, for each region server report, the total number of requests is added > to clusterRequests. That means it's added multiple times. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
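The bug described in HBASE-10587 (adding each server report's cumulative total to a counter, so the same requests are counted on every report) can be sketched against a fixed variant that adds only the delta. Class and method names here are illustrative, not the actual master metrics code.

```java
import java.util.HashMap;
import java.util.Map;

public class ClusterRequestsCounter {
    private long clusterRequests = 0;
    private final Map<String, Long> lastTotal = new HashMap<>();

    // Buggy variant: each report carries a cumulative total, and adding it
    // every time inflates the counter with already-counted requests.
    void buggyReport(String server, long totalRequests) {
        clusterRequests += totalRequests;
    }

    // Fixed variant: remember each server's last reported total and add
    // only the increase since the previous report.
    void report(String server, long totalRequests) {
        long prev = lastTotal.getOrDefault(server, 0L);
        clusterRequests += totalRequests - prev;
        lastTotal.put(server, totalRequests);
    }

    long get() { return clusterRequests; }
}
```

With the buggy variant, two reports of 100 then 150 from one server yield 250; with the delta variant they yield 150, the server's true cumulative total.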
[jira] [Updated] (HBASE-10587) Master metrics clusterRequests is wrong
[ https://issues.apache.org/jira/browse/HBASE-10587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-10587: Status: Patch Available (was: Open) > Master metrics clusterRequests is wrong > --- > > Key: HBASE-10587 > URL: https://issues.apache.org/jira/browse/HBASE-10587 > Project: HBase > Issue Type: Bug > Components: master, metrics >Affects Versions: 0.96.0 >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Attachments: hbase-10587.patch > > > In the master jmx, metrics clusterRequests increases so fast. Looked into the > code and found the calculation is a little bit wrong. It's a counter. > However, for each region server report, the total number of requests is added > to clusterRequests. That means it's added multiple times. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10585) Avoid early creation of Node objects in LRUDictionary.BidirectionalLRUMap
[ https://issues.apache.org/jira/browse/HBASE-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908999#comment-13908999 ] Hudson commented on HBASE-10585: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #163 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/163/]) HBASE-10585 Avoid early creation of Node objects in LRUDictionary.BidirectionalLRUMap.(Anoop) (anoopsamjohn: rev 1570671) * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/LRUDictionary.java > Avoid early creation of Node objects in LRUDictionary.BidirectionalLRUMap > - > > Key: HBASE-10585 > URL: https://issues.apache.org/jira/browse/HBASE-10585 > Project: HBase > Issue Type: Bug >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 0.98.1, 0.99.0 > > Attachments: HBASE-10585.patch > > > When LRUDictionary is initialized with N as the size, the BidirectionalLRUMap > creates N Node objects and keeps them in an array. It would be better to avoid this > eager creation. A Node object can be created on demand when the array element at the > current position is null. Once it is created, the object can be reused as we do > now. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10526) Using Cell instead of KeyValue in HFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908998#comment-13908998 ] Hudson commented on HBASE-10526: SUCCESS: Integrated in HBase-TRUNK #4942 (See [https://builds.apache.org/job/HBase-TRUNK/4942/]) HBASE-10526 Using Cell instead of KeyValue in HFileOutputFormat (jxiang: rev 1570702) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java > Using Cell instead of KeyValue in HFileOutputFormat > --- > > Key: HBASE-10526 > URL: https://issues.apache.org/jira/browse/HBASE-10526 > Project: HBase > Issue Type: Sub-task > Components: mapreduce >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: hbase-10526.patch, hbase-10526_v1.1.patch, > hbase-10526_v2.patch, hbase-10526_v3.patch > > > HFileOutputFormat/KeyValueSortReducer use KeyValue. We should deprecate them > and use Cell instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10525) Allow the client to use a different thread for writing to ease interrupt
[ https://issues.apache.org/jira/browse/HBASE-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908976#comment-13908976 ] Enis Soztutar commented on HBASE-10525: --- Ok, the test (mentioned above) succeeds. +1. > Allow the client to use a different thread for writing to ease interrupt > > > Key: HBASE-10525 > URL: https://issues.apache.org/jira/browse/HBASE-10525 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.99.0 >Reporter: Nicolas Liochon >Assignee: Nicolas Liochon > Fix For: 0.99.0 > > Attachments: 10525.v1.patch, 10525.v2.patch, 10525.v3.patch, > 10525.v4.patch, 10525.v5.patch, 10525.v6.patch, 10525.v7.patch, > HBaseclient-EventualConsistency.pdf > > > This is an issue in the HBASE-10070 context, but also more generally if > you want to interrupt an operation with a limited cost. > I will attach a doc with a more detailed explanation. > This adds a thread per region server, so it's optional. The first patch > activates it by default to see how it behaves on a full hadoop-qa run. The > target is to have it unset by default. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10547: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed trivial test fix > TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK > > > Key: HBASE-10547 > URL: https://issues.apache.org/jira/browse/HBASE-10547 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0, 0.96.1.1 > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: 10547.patch > > > Here's the trace. > {noformat} > Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec > <<< FAILURE! > testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper) Time > elapsed: 0.025 sec <<< FAILURE! > arrays first differed at element [8]; expected:<-40> but was:<0> > at > org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50) > at org.junit.Assert.internalArrayEquals(Assert.java:473) > at org.junit.Assert.assertArrayEquals(Assert.java:294) > at org.junit.Assert.assertArrayEquals(Assert.java:305) > at > org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60) > {noformat} > This is with 0.98.0. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10547: --- Priority: Trivial (was: Minor) Affects Version/s: 0.96.1.1 Fix Version/s: 0.96.2 > TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK > > > Key: HBASE-10547 > URL: https://issues.apache.org/jira/browse/HBASE-10547 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0, 0.96.1.1 > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: 10547.patch > > > Here's the trace. > {noformat} > Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec > <<< FAILURE! > testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper) Time > elapsed: 0.025 sec <<< FAILURE! > arrays first differed at element [8]; expected:<-40> but was:<0> > at > org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50) > at org.junit.Assert.internalArrayEquals(Assert.java:473) > at org.junit.Assert.assertArrayEquals(Assert.java:294) > at org.junit.Assert.assertArrayEquals(Assert.java:305) > at > org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60) > {noformat} > This is with 0.98.0. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10582) 0.94->0.96 Upgrade: ACL can't be repopulated when ACL table contains row for table '-ROOT' or '.META.'
[ https://issues.apache.org/jira/browse/HBASE-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908955#comment-13908955 ] Enis Soztutar commented on HBASE-10582: --- +1. Nice test. > 0.94->0.96 Upgrade: ACL can't be repopulated when ACL table contains row for > table '-ROOT' or '.META.' > -- > > Key: HBASE-10582 > URL: https://issues.apache.org/jira/browse/HBASE-10582 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0, 0.96.0, 0.96.1 >Reporter: Jeffrey Zhong >Assignee: Jeffrey Zhong >Priority: Critical > Attachments: hbase-10582-v1.patch, hbase-10582.patch > > > When '-ROOT-', '.META' rows are contained in ACL table, during upgrade > process, ACL zk nodes can't be populated to zookeeper because > AccessControlLists#loadAll(HRegion) fails to load table permissions due to > parsePermissionRecord throws IllegalArgumentException from TableName.valueof. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
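The failure mode in HBASE-10582 is that `TableName.valueOf` throws `IllegalArgumentException` on the legacy '-ROOT-' and '.META.' rows, aborting the whole ACL load. One way to express the fix is to filter out those legacy rows before parsing; the sketch below uses hypothetical names (`isLegacyCatalogRow`, `migratableRows`), not the actual patch's code.

```java
import java.util.ArrayList;
import java.util.List;

public class AclMigrationSketch {
    // Pre-0.96 catalog table names that are not valid TableNames afterwards.
    static boolean isLegacyCatalogRow(String row) {
        return "-ROOT-".equals(row) || ".META.".equals(row);
    }

    // Drop legacy rows up front so one unparsable permission record cannot
    // abort repopulating the ACL znodes for every other table.
    static List<String> migratableRows(List<String> rows) {
        List<String> out = new ArrayList<>();
        for (String row : rows) {
            if (!isLegacyCatalogRow(row)) {
                out.add(row);
            }
        }
        return out;
    }
}
```

The same guard could equally live inside the per-record parse loop, catching and skipping rows whose names fail validation rather than propagating the exception.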
[jira] [Updated] (HBASE-10586) hadoop2-compat IPC metric registered twice
[ https://issues.apache.org/jira/browse/HBASE-10586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-10586: Resolution: Fixed Status: Resolved (was: Patch Available) > hadoop2-compat IPC metric registered twice > - > > Key: HBASE-10586 > URL: https://issues.apache.org/jira/browse/HBASE-10586 > Project: HBase > Issue Type: Bug > Components: metrics >Affects Versions: 0.98.0, 0.96.1.1 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi > Fix For: 0.96.2, 0.98.1, 0.99.0 > > Attachments: HBASE-10586-v0.patch > > > There is an extra snapshot/addRecord line in hadoop2-compat > MetricsHBaseServerSourceImpl, resulting in IPC metrics with a ".1" suffix; > the extra line is not present in the hadoop1-compat "mirror" > {code} > "numCallsInGeneralQueue.1" : 0, > "numCallsInReplicationQueue.1" : 0, > "numCallsInPriorityQueue.1" : 0, > ... > {code} -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs
[ https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908948#comment-13908948 ] Nick Dimiduk commented on HBASE-9117: - Rebasing onto HBASE-10479. Ouch. > Remove HTablePool and all HConnection pooling related APIs > -- > > Key: HBASE-9117 > URL: https://issues.apache.org/jira/browse/HBASE-9117 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Nick Dimiduk > Fix For: 0.99.0 > > Attachments: HBASE-9117.00.patch, HBASE-9117.01.patch, > HBASE-9117.02.patch, HBASE-9117.03.patch, HBASE-9117.04.patch > > > The recommended way is now: > # Create an HConnection: HConnectionManager.createConnection(...) > # Create a light HTable: HConnection.getTable(...) > # table.close() > # connection.close() > All other API and pooling will be removed. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10525) Allow the client to use a different thread for writing to ease interrupt
[ https://issues.apache.org/jira/browse/HBASE-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908943#comment-13908943 ] Enis Soztutar commented on HBASE-10525: --- Let me run the test one more time. I'll +1 after. > Allow the client to use a different thread for writing to ease interrupt > > > Key: HBASE-10525 > URL: https://issues.apache.org/jira/browse/HBASE-10525 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.99.0 >Reporter: Nicolas Liochon >Assignee: Nicolas Liochon > Fix For: 0.99.0 > > Attachments: 10525.v1.patch, 10525.v2.patch, 10525.v3.patch, > 10525.v4.patch, 10525.v5.patch, 10525.v6.patch, 10525.v7.patch, > HBaseclient-EventualConsistency.pdf > > > This is an issue in the HBASE-10070 context, but also more generally if > you want to interrupt an operation with a limited cost. > I will attach a doc with a more detailed explanation. > This adds a thread per region server, so it's optional. The first patch > activates it by default to see how it behaves on a full hadoop-qa run. The > target is to have it unset by default. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10581) ACL znode are left without PBed during upgrading hbase0.94* to hbase0.96+
[ https://issues.apache.org/jira/browse/HBASE-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908935#comment-13908935 ] Andrew Purtell commented on HBASE-10581: I am -0 on this patch (not an objection strictly speaking) without Himanshu's suggestion. bq. I'm worried that if a customer has their own custom znodes. It's kind of crazy to even tacitly support customers writing their own custom znodes under /hbase , right? > ACL znode are left without PBed during upgrading hbase0.94* to hbase0.96+ > - > > Key: HBASE-10581 > URL: https://issues.apache.org/jira/browse/HBASE-10581 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0, 0.96.0, 0.96.1 >Reporter: Jeffrey Zhong >Assignee: Jeffrey Zhong >Priority: Critical > Attachments: hbase-10581.patch > > > ACL znodes are left in the upgrade process when upgrading 0.94 to 0.96+ > Those 0.94 znodes will choke HMaster because their data aren't PBed. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10351) LoadBalancer changes for supporting region replicas
[ https://issues.apache.org/jira/browse/HBASE-10351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908907#comment-13908907 ] Devaraj Das commented on HBASE-10351: - Other than the point Sergey raised about putting some comments in the code around the array manipulations, and maybe breaking the manipulations into smaller methods, it looks good to me. The one remaining issue, I think, is that the test (TestMasterOperationsForReplicas) may be flaky (the part where retain assignments is set to true). The reason is that retainAssignment caps the number of iterations for choosing the server to place a region on, so the chosen server may not be the correct location given the replicas. I think it's fine to leave the balancer's assign methods as you currently have them (eventually balancer.balance will fix it), but the test should be modified to handle that. Another minor nit: in roundRobinAssignment, we could end up in the catch-all anytime numReplicas > numRegionServers, so the comment in the catch-all code should be updated. The refactor introducing the Action class is good! > LoadBalancer changes for supporting region replicas > --- > > Key: HBASE-10351 > URL: https://issues.apache.org/jira/browse/HBASE-10351 > Project: HBase > Issue Type: Sub-task > Components: master >Affects Versions: 0.99.0 >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Attachments: hbase-10351_v0.patch, hbase-10351_v1.patch, > hbase-10351_v3.patch > > > LoadBalancer has to be aware of and enforce placement of region replicas so > that the replicas are not co-hosted in the same server, host or rack. This > will ensure that the region is highly available during process / host / rack > failover. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908883#comment-13908883 ] Andrew Purtell commented on HBASE-10547: Yes, I plan to commit the patch on the issue which fixes the test, and resolve this JIRA. > TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK > > > Key: HBASE-10547 > URL: https://issues.apache.org/jira/browse/HBASE-10547 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0 > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 0.98.1, 0.99.0 > > Attachments: 10547.patch > > > Here's the trace. > {noformat} > Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec > <<< FAILURE! > testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper) Time > elapsed: 0.025 sec <<< FAILURE! > arrays first differed at element [8]; expected:<-40> but was:<0> > at > org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50) > at org.junit.Assert.internalArrayEquals(Assert.java:473) > at org.junit.Assert.assertArrayEquals(Assert.java:294) > at org.junit.Assert.assertArrayEquals(Assert.java:305) > at > org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60) > {noformat} > This is with 0.98.0. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908873#comment-13908873 ] Nick Dimiduk commented on HBASE-10547: -- Type library makes no assumptions about a zero'ed destination array. Could be I made that assumption in tests though. The library itself makes no allocations, instead depends on externally allocated arrays. Patch looks fine to me +1. It fixes tests on this vm? > TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK > > > Key: HBASE-10547 > URL: https://issues.apache.org/jira/browse/HBASE-10547 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0 > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 0.98.1, 0.99.0 > > Attachments: 10547.patch > > > Here's the trace. > {noformat} > Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec > <<< FAILURE! > testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper) Time > elapsed: 0.025 sec <<< FAILURE! > arrays first differed at element [8]; expected:<-40> but was:<0> > at > org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50) > at org.junit.Assert.internalArrayEquals(Assert.java:473) > at org.junit.Assert.assertArrayEquals(Assert.java:294) > at org.junit.Assert.assertArrayEquals(Assert.java:305) > at > org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60) > {noformat} > This is with 0.98.0. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10583) backport HBASE-8402 to 0.94
[ https://issues.apache.org/jira/browse/HBASE-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908870#comment-13908870 ] Himanshu Vashishtha commented on HBASE-10583: - Yes, (that's what I remember after almost a year!) :) +1 to the patch. > backport HBASE-8402 to 0.94 > --- > > Key: HBASE-10583 > URL: https://issues.apache.org/jira/browse/HBASE-10583 > Project: HBase > Issue Type: Bug >Reporter: Liu Shaohui >Assignee: Liu Shaohui > Fix For: 0.94.18 > > Attachments: HBASE-10583-v1.diff > > > see HBASE-8402 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-10392: - Component/s: documentation > Correct references to hbase.regionserver.global.memstore.upperLimit > --- > > Key: HBASE-10392 > URL: https://issues.apache.org/jira/browse/HBASE-10392 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk > Fix For: 0.99.0 > > Attachments: HBASE-10392.0.patch, HBASE-10392.1.patch, > HBASE-10392.2.patch, HBASE-10392.3.patch, HBASE-10392.4.patch, > HBASE-10392.5.patch > > > As part of the awesome new HBASE-5349, a couple of references to > {{hbase.regionserver.global.memstore.upperLimit}} were missed. Clean those up > to use the new {{hbase.regionserver.global.memstore.size}} instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-10392: - Resolution: Fixed Status: Resolved (was: Patch Available) Basically just a docs patch now. {{mvn site}} builds locally. Committed to trunk. Thanks for the review [~anoop.hbase]. > Correct references to hbase.regionserver.global.memstore.upperLimit > --- > > Key: HBASE-10392 > URL: https://issues.apache.org/jira/browse/HBASE-10392 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk > Fix For: 0.99.0 > > Attachments: HBASE-10392.0.patch, HBASE-10392.1.patch, > HBASE-10392.2.patch, HBASE-10392.3.patch, HBASE-10392.4.patch, > HBASE-10392.5.patch > > > As part of the awesome new HBASE-5349, a couple of references to > {{hbase.regionserver.global.memstore.upperLimit}} were missed. Clean those up > to use the new {{hbase.regionserver.global.memstore.size}} instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10585) Avoid early creation of Node objects in LRUDictionary.BidirectionalLRUMap
[ https://issues.apache.org/jira/browse/HBASE-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908864#comment-13908864 ] Hudson commented on HBASE-10585: SUCCESS: Integrated in HBase-TRUNK #4941 (See [https://builds.apache.org/job/HBase-TRUNK/4941/]) HBASE-10585 Avoid early creation of Node objects in LRUDictionary.BidirectionalLRUMap.(Anoop) (anoopsamjohn: rev 1570672) * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/LRUDictionary.java > Avoid early creation of Node objects in LRUDictionary.BidirectionalLRUMap > - > > Key: HBASE-10585 > URL: https://issues.apache.org/jira/browse/HBASE-10585 > Project: HBase > Issue Type: Bug >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 0.98.1, 0.99.0 > > Attachments: HBASE-10585.patch > > > When LRUDictionary is initialized with size N, the BidirectionalLRUMap > creates N Node objects and keeps them in an array. It would be better to avoid > this eager creation: a Node object can be created on demand when the array slot > is null, and once created it can be reused as we do now. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
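The on-demand allocation described in the issue above (create a Node only when its array slot is first touched, then reuse it) can be sketched generically. The `Node` and `LazyPool` names here are illustrative stand-ins, not the actual LRUDictionary.BidirectionalLRUMap internals.

```java
public class LazyPool {
  // Illustrative stand-in for BidirectionalLRUMap's Node.
  static class Node {
    byte[] container;
  }

  private final Node[] indexToNode;

  LazyPool(int size) {
    // No eager allocation: slots start null and are filled on first use.
    indexToNode = new Node[size];
  }

  Node nodeAt(int index) {
    if (indexToNode[index] == null) {
      indexToNode[index] = new Node(); // created on demand...
    }
    return indexToNode[index];         // ...then reused on later calls
  }
}
```

The saving is simply that a dictionary sized for N entries but holding few pays only for the slots actually used.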
[jira] [Commented] (HBASE-10582) 0.94->0.96 Upgrade: ACL can't be repopulated when ACL table contains row for table '-ROOT' or '.META.'
[ https://issues.apache.org/jira/browse/HBASE-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908861#comment-13908861 ] Hadoop QA commented on HBASE-10582: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12630363/hbase-10582-v1.patch against trunk revision . ATTACHMENT ID: 12630363 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnDatanodeDeath(TestLogRolling.java:354) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8770//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8770//console This message is automatically generated. > 0.94->0.96 Upgrade: ACL can't be repopulated when ACL table contains row for > table '-ROOT' or '.META.' 
> -- > > Key: HBASE-10582 > URL: https://issues.apache.org/jira/browse/HBASE-10582 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0, 0.96.0, 0.96.1 >Reporter: Jeffrey Zhong >Assignee: Jeffrey Zhong >Priority: Critical > Attachments: hbase-10582-v1.patch, hbase-10582.patch > > > When '-ROOT-', '.META' rows are contained in ACL table, during upgrade > process, ACL zk nodes can't be populated to zookeeper because > AccessControlLists#loadAll(HRegion) fails to load table permissions due to > parsePermissionRecord throws IllegalArgumentException from TableName.valueof. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10451) Enable back Tag compression on HFiles
[ https://issues.apache.org/jira/browse/HBASE-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908858#comment-13908858 ] Andrew Purtell commented on HBASE-10451: Sounds good Anoop. > Enable back Tag compression on HFiles > - > > Key: HBASE-10451 > URL: https://issues.apache.org/jira/browse/HBASE-10451 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 0.98.1, 0.99.0 > > Attachments: HBASE-10451.patch, HBASE-10451_V2.patch, > HBASE-10451_V3.patch, HBASE-10451_V4.patch, HBASE-10451_V5.patch > > > HBASE-10443 disables tag compression on HFiles. This Jira is to fix the > issues we have found out in HBASE-10443 and enable it back. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10547: --- Status: Patch Available (was: Open) > TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK > > > Key: HBASE-10547 > URL: https://issues.apache.org/jira/browse/HBASE-10547 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0 > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 0.98.1, 0.99.0 > > Attachments: 10547.patch > > > Here's the trace. > {noformat} > Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec > <<< FAILURE! > testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper) Time > elapsed: 0.025 sec <<< FAILURE! > arrays first differed at element [8]; expected:<-40> but was:<0> > at > org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50) > at org.junit.Assert.internalArrayEquals(Assert.java:473) > at org.junit.Assert.assertArrayEquals(Assert.java:294) > at org.junit.Assert.assertArrayEquals(Assert.java:305) > at > org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60) > {noformat} > This is with 0.98.0. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10547: --- Fix Version/s: 0.99.0 0.98.1 > TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK > > > Key: HBASE-10547 > URL: https://issues.apache.org/jira/browse/HBASE-10547 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0 > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 0.98.1, 0.99.0 > > Attachments: 10547.patch > > > Here's the trace. > {noformat} > Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec > <<< FAILURE! > testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper) Time > elapsed: 0.025 sec <<< FAILURE! > arrays first differed at element [8]; expected:<-40> but was:<0> > at > org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50) > at org.junit.Assert.internalArrayEquals(Assert.java:473) > at org.junit.Assert.assertArrayEquals(Assert.java:294) > at org.junit.Assert.assertArrayEquals(Assert.java:305) > at > org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60) > {noformat} > This is with 0.98.0. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10547: --- Attachment: 10547.patch > TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK > > > Key: HBASE-10547 > URL: https://issues.apache.org/jira/browse/HBASE-10547 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0 > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Attachments: 10547.patch > > > Here's the trace. > {noformat} > Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec > <<< FAILURE! > testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper) Time > elapsed: 0.025 sec <<< FAILURE! > arrays first differed at element [8]; expected:<-40> but was:<0> > at > org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50) > at org.junit.Assert.internalArrayEquals(Assert.java:473) > at org.junit.Assert.assertArrayEquals(Assert.java:294) > at org.junit.Assert.assertArrayEquals(Assert.java:305) > at > org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60) > {noformat} > This is with 0.98.0. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908853#comment-13908853 ] Andrew Purtell commented on HBASE-10547: The IBM JDK does not zero fill new byte[] allocations. Gary discovered this on HBASE-10527. Attaching a patch which gets this test passing for me. [~ndimiduk], do you want to zero fill new byte[] allocations made by the type library? We have Bytes.zero for that. Perhaps Bytes.zero could use, if Unsafe is available, a helper that zeros the byte array 8 bytes at a time until there are fewer than that remaining? > TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK > > > Key: HBASE-10547 > URL: https://issues.apache.org/jira/browse/HBASE-10547 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0 > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > > Here's the trace. > {noformat} > Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec > <<< FAILURE! > testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper) Time > elapsed: 0.025 sec <<< FAILURE! > arrays first differed at element [8]; expected:<-40> but was:<0> > at > org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50) > at org.junit.Assert.internalArrayEquals(Assert.java:473) > at org.junit.Assert.assertArrayEquals(Assert.java:294) > at org.junit.Assert.assertArrayEquals(Assert.java:305) > at > org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60) > {noformat} > This is with 0.98.0. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
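Purtell's suggestion above (zero the array eight bytes at a time until fewer than eight remain) can be sketched portably; a real implementation might use Unsafe for the wide stores, but ByteBuffer.putLong conveys the same stride idea. The class and method names are hypothetical, not the actual Bytes.zero code.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class ZeroFill {
  // Zero b[offset..offset+length) eight bytes at a time, then byte-by-byte
  // for the remainder. ByteBuffer.putLong stands in for an Unsafe long store.
  static void zero(byte[] b, int offset, int length) {
    ByteBuffer buf = ByteBuffer.wrap(b, offset, length);
    while (buf.remaining() >= 8) {
      buf.putLong(0L);            // wide store: 8 bytes per iteration
    }
    while (buf.hasRemaining()) {
      buf.put((byte) 0);          // tail: fewer than 8 bytes left
    }
  }

  public static void main(String[] args) {
    byte[] b = new byte[13];
    Arrays.fill(b, (byte) 0x55);
    zero(b, 2, 10);               // zero a 10-byte window, leave the edges
    System.out.println(b[1] + " " + b[2] + " " + b[11] + " " + b[12]);
    // prints: 85 0 0 85
  }
}
```

On a JDK whose allocator already zero-fills (as HotSpot does), this is wasted work, which is why the discussion treats explicit zeroing as a portability fix for the IBM JDK rather than a general requirement.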
[jira] [Updated] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
[ https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10547: --- Attachment: (was: 10547.patch) > TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK > > > Key: HBASE-10547 > URL: https://issues.apache.org/jira/browse/HBASE-10547 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.0 > Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 > Compressed References 20131114_175264 (JIT enabled, AOT enabled) >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > > Here's the trace. > {noformat} > Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec > <<< FAILURE! > testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper) Time > elapsed: 0.025 sec <<< FAILURE! > arrays first differed at element [8]; expected:<-40> but was:<0> > at > org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50) > at org.junit.Assert.internalArrayEquals(Assert.java:473) > at org.junit.Assert.assertArrayEquals(Assert.java:294) > at org.junit.Assert.assertArrayEquals(Assert.java:305) > at > org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60) > {noformat} > This is with 0.98.0. -- This message was sent by Atlassian JIRA (v6.1.5#6160)