[jira] [Commented] (HBASE-7233) Remove Writable Interface from KeyValue
[ https://issues.apache.org/jira/browse/HBASE-7233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507170#comment-13507170 ]

Matt Corgan commented on HBASE-7233:

Need to watch our step with META and ROOT cells too until we figure that out.

> Remove Writable Interface from KeyValue
> ---
>
>          Key: HBASE-7233
>          URL: https://issues.apache.org/jira/browse/HBASE-7233
>      Project: HBase
>   Issue Type: Bug
>     Reporter: stack
>     Assignee: stack
>     Priority: Blocker
>  Attachments: 7233.txt
>
> Undo KeyValue being a Writable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7233) Remove Writable Interface from KeyValue
[ https://issues.apache.org/jira/browse/HBASE-7233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507169#comment-13507169 ]

Matt Corgan commented on HBASE-7233:

I'm not up to speed on hbase/map-reduce integration. Will it still work ok with the KeyValueSortReducer? Otherwise, it's pretty easy to mimic the writable format within hbase. There are some methods in KeyValueTool that take a Cell parameter and write the KeyValue-format bytes to ByteBuffers and arrays. It is easy to add more for OutputStreams, etc.
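Mimicking the writable format outside the Writable interface, as Matt describes, is essentially length-prefixed serialization. A minimal sketch, assuming a layout of key length, value length, then key and value bytes; the class and method names here are illustrative, not the actual KeyValueTool API:

```java
import java.nio.ByteBuffer;

class KeyValueWireSketch {
    // Write a key/value pair as: int key length, int value length,
    // key bytes, value bytes. Illustrates serializing a cell without
    // implementing the Writable interface.
    static ByteBuffer write(byte[] key, byte[] value) {
        ByteBuffer buf = ByteBuffer.allocate(8 + key.length + value.length);
        buf.putInt(key.length);
        buf.putInt(value.length);
        buf.put(key);
        buf.put(value);
        buf.flip(); // make the buffer readable from the start
        return buf;
    }
}
```

The same write logic could target an OutputStream or a plain byte[] equally well, with no dependence on Hadoop serialization.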
[jira] [Commented] (HBASE-7234) Remove long-deprecated HServerAddress and HServerInfo Writables
[ https://issues.apache.org/jira/browse/HBASE-7234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507161#comment-13507161 ]

Hadoop QA commented on HBASE-7234:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12555439/7234.txt against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 31 new or modified tests.
{color:red}-1 patch{color}. The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3426//console

This message is automatically generated.

> Remove long-deprecated HServerAddress and HServerInfo Writables
> ---
>
>          Key: HBASE-7234
>          URL: https://issues.apache.org/jira/browse/HBASE-7234
>      Project: HBase
>   Issue Type: Bug
>     Reporter: stack
>     Assignee: stack
>     Priority: Blocker
>      Fix For: 0.96.0
>
>  Attachments: 7234.txt
>
> These classes have been deprecated since 0.92 or before. Remove them.
> Also remove them because they are Writable.
[jira] [Commented] (HBASE-7236) add per-table/per-cf compaction configuration via metadata
[ https://issues.apache.org/jira/browse/HBASE-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507160#comment-13507160 ]

stack commented on HBASE-7236:

[~sershe] IIUC, you are expanding CompoundConfiguration to do table and 'overrides'? Pros:

1. You go one place to get your config, the 'Configuration' (which could be a CompoundConfiguration w/ family or table specifics and overrides)?
2. Generalizes what Andrew did doing cache configuration over in hbase-6114?

How would this help us get to changing configuration on the fly? It looks like it doesn't change our current story: CompoundConfiguration is still set up in HRegion or ColumnFamily setup.

If we start to record metadata on a table, say column types, would we use this mechanism?

How would 'overrides' be specified in the shell, say? (The patch doesn't seem to say.) We have means of altering table and column descriptors. Where would 'overrides' go?

bq. however, making it explicit and separate from miscellaneous metadata would be cleaner imho.

Can you say more on the above? HTableDescriptor and HColumnDescriptor dictionaries are key/value maps that get persisted into the filesystem when changed. We read them throughout the codebase and list them in the master UIs, etc. Will they blow up under this new use case?

HTD and HCD are mostly schema with a little config. This direction would seem to be using these descriptors to add table- or column-scoped configs. Should we be working to undo the schema/config conflation rather than compound it?
> add per-table/per-cf compaction configuration via metadata
> ---
>
>          Key: HBASE-7236
>          URL: https://issues.apache.org/jira/browse/HBASE-7236
>      Project: HBase
>   Issue Type: New Feature
>   Components: Compaction
> Affects Versions: 0.96.0
>     Reporter: Sergey Shelukhin
>     Assignee: Sergey Shelukhin
>  Attachments: HBASE-7236-PROTOTYPE.patch, HBASE-7236-PROTOTYPE.patch
>
> Regardless of the compaction policy, it makes sense to have separate configuration for compactions for different tables and column families, as their access patterns and workloads can be different. In particular, the tiered compactions being ported from the 0.89-fb branch need this in order to be used properly.
> We might want to add support for compaction configuration via metadata on table/cf.
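The compound/override lookup stack asks about can be made concrete with a small sketch of layered key resolution in the spirit of CompoundConfiguration: later layers (e.g. table or column-family metadata) shadow earlier ones (e.g. site config). All names below are illustrative; this is not the HBase class:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class LayeredConfigSketch {
    // Layers are consulted most-recently-added first, so a
    // column-family layer added after the site layer wins.
    private final List<Map<String, String>> layers = new ArrayList<>();

    LayeredConfigSketch addLayer(Map<String, String> layer) {
        layers.add(0, layer);
        return this;
    }

    String get(String key, String defaultValue) {
        for (Map<String, String> layer : layers) {
            String v = layer.get(key);
            if (v != null) {
                return v;
            }
        }
        return defaultValue;
    }
}
```

Under this scheme the caller always asks one object for a value and the override precedence is fixed by layer order, which is the "one place to get your config" property stack mentions.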
[jira] [Commented] (HBASE-7232) Remove HbaseMapWritable
[ https://issues.apache.org/jira/browse/HBASE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507154#comment-13507154 ]

Hudson commented on HBASE-7232:

Integrated in HBase-TRUNK #3581 (See [https://builds.apache.org/job/HBase-TRUNK/3581/])
HBASE-7232 Remove HbaseMapWritable (Revision 1415507)

Result = FAILURE
stack :
Files :
* /hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HFileProtos.java
* /hbase/trunk/hbase-protocol/src/main/protobuf/HFile.proto
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HbaseMapWritable.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileWriter.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV1.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/InlineBlockWriter.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompoundBloomFilterWriter.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestSerialization.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHbaseObjectWritable.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileReaderV1.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java

> Remove HbaseMapWritable
> ---
>
>          Key: HBASE-7232
>          URL: https://issues.apache.org/jira/browse/HBASE-7232
>      Project: HBase
>   Issue Type: Bug
>     Reporter: stack
>     Assignee: stack
>      Fix For: 0.96.0
>
>  Attachments: 7232.txt, 7232.txt, 7232v2.txt, 7232v3.txt, 7232v4.txt
>
> It's used by the hfile fileinfo only, so we need to convert fileinfo to remove this.
[jira] [Commented] (HBASE-7232) Remove HbaseMapWritable
[ https://issues.apache.org/jira/browse/HBASE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507139#comment-13507139 ]

Hadoop QA commented on HBASE-7232:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12555445/7232v4.txt against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 12 new or modified tests.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:red}-1 javadoc{color}. The javadoc tool appears to have generated 99 warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:red}-1 findbugs{color}. The patch appears to introduce 26 new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3425//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3425//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3425//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3425//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3425//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3425//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3425//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3425//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3425//console

This message is automatically generated.
[jira] [Commented] (HBASE-6423) Writes should not block reads on blocking updates to memstores
[ https://issues.apache.org/jira/browse/HBASE-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507138#comment-13507138 ]

stack commented on HBASE-6423:

[~jxiang] Are you doing the createConf stuff in the test that Lars refers to above because there is no setup in this old junit3 TestHRegion, and you need to get your custom config in there each time the test runs? If so, +1 on commit. Otherwise, what Lars asks.

> Writes should not block reads on blocking updates to memstores
> ---
>
>          Key: HBASE-6423
>          URL: https://issues.apache.org/jira/browse/HBASE-6423
>      Project: HBase
>   Issue Type: Bug
>     Reporter: Karthik Ranganathan
>     Assignee: Jimmy Xiang
>  Attachments: trunk-6423.patch, trunk-6423_v2.1.patch, trunk-6423_v2.patch, trunk-6423_v3.2.patch
>
> We have a big data use case where we turn off WAL and have a ton of reads and writes. We found that:
> 1. flushing a memstore takes a while (GZIP compression)
> 2. incoming writes cause the new memstore to grow in an unbounded fashion
> 3. this triggers blocking memstore updates
> 4. in turn, this causes all the RPC handler threads to block on writes to that memstore
> 5. we are not able to read during this time as RPC handlers are blocked
> At a higher level, we should not hold up the RPC threads while blocking updates, and we should build in some sort of rate control.
[jira] [Updated] (HBASE-7232) Remove HbaseMapWritable
[ https://issues.apache.org/jira/browse/HBASE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-7232:

Resolution: Fixed
Fix Version/s: 0.96.0
Hadoop Flags: Reviewed
Status: Resolved (was: Patch Available)

Thanks for the reviews lads. Committed to trunk.
[jira] [Updated] (HBASE-7233) Remove Writable Interface from KeyValue
[ https://issues.apache.org/jira/browse/HBASE-7233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-7233:

Attachment: 7233.txt

HBASE-1379 added the Writable interface to KV. This patch removes it. In WALEdit it does a bit of placeholding till we convert WALEdit from being a Writable. We also need to chat w/ Matt Corgan, after he is done drinking his Champagne, about how we'll do serialization/deserialization of KVs/Cells in his new Interface.
[jira] [Commented] (HBASE-7232) Remove HbaseMapWritable
[ https://issues.apache.org/jira/browse/HBASE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507132#comment-13507132 ]

stack commented on HBASE-7232:

Thanks [~lhofhansl]. I'm glad you think that way, because I think I'll leave the blooms writable stuff alone in hfile; it is woven into the hfile metadata and through the blooms themselves, and it would be a bunch of work to undo, so I'll leave them as they are for 0.96.

HBW backing HFileInfo was a little odd anyways. As you say, it was key/values of byte[]. It was only used here, and only at the time because it was a serializable Map. No harm getting rid of it as part of the Writables purge.

I'm having to manually trigger patch builds; they broke.
[jira] [Commented] (HBASE-7232) Remove HbaseMapWritable
[ https://issues.apache.org/jira/browse/HBASE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507130#comment-13507130 ]

Lars Hofhansl commented on HBASE-7232:

Patch looks good. Personally I have no problem with HBaseMapWritable. It will always just write a generic Map to which we can add (and remove) any field we want, and it is only used with byte[] keys and values, so nothing is really gained by protobuf'ing it. I'd be happy to change it from a generic class to a concrete-type class that only allows byte[] keys and values. I think that would make it easier to use HFiles in isolation. (But nothing against the patch either.)
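The concrete-typed alternative Lars mentions, a map restricted to byte[] keys and values, needs one subtlety handled: byte[] has identity-based equals/hashCode, so a hash map would never find a freshly built key. A hypothetical sketch (not the actual HbaseMapWritable or its replacement) using a sorted map with an explicit content comparator:

```java
import java.util.Arrays;
import java.util.TreeMap;

class ByteArrayMapSketch {
    // A sorted map over byte[] keys. Without the explicit comparator,
    // two distinct arrays with identical contents would never match,
    // because arrays compare by reference.
    static TreeMap<byte[], byte[]> newByteArrayMap() {
        return new TreeMap<>(Arrays::compare);
    }
}
```

Lookups then work with any byte[] holding the same contents as the stored key, which is how the byte[]-to-byte[] file-info dictionary is actually used.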
[jira] [Commented] (HBASE-7232) Remove HbaseMapWritable
[ https://issues.apache.org/jira/browse/HBASE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507114#comment-13507114 ]

Hadoop QA commented on HBASE-7232:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12555445/7232v4.txt against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 12 new or modified tests.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:red}-1 javadoc{color}. The javadoc tool appears to have generated 99 warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:red}-1 findbugs{color}. The patch appears to introduce 26 new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3424//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3424//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3424//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3424//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3424//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3424//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3424//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3424//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3424//console

This message is automatically generated.
[jira] [Commented] (HBASE-7244) Provide a command or argument to startup, that formats znodes if provided
[ https://issues.apache.org/jira/browse/HBASE-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507098#comment-13507098 ]

stack commented on HBASE-7244:

Sounds good to me, Harsh.

> Provide a command or argument to startup, that formats znodes if provided
> ---
>
>          Key: HBASE-7244
>          URL: https://issues.apache.org/jira/browse/HBASE-7244
>      Project: HBase
>   Issue Type: New Feature
>   Components: Zookeeper
> Affects Versions: 0.94.0
>     Reporter: Harsh J
>     Priority: Minor
>
> Many a time I've had to stop the cluster, clear out ZK and restart, and I have seen such instructions thrown around. While this is only a quick (and painful to master) fix, it is certainly nifty to some smaller-cluster users, but the process is far too long, roughly:
> 1. Stop HBase
> 2. Start zkCli.sh and connect to the right quorum
> 3. Find and ensure the HBase parent znode from the configs (/hbase only by default)
> 4. Run an "rmr /hbase" in the zkCli.sh shell, or manually delete each znode if on a lower version of ZK
> 5. Quit zkCli.sh and start HBase again
> Perhaps it may be useful if start-hbase.sh itself accepted a formatZK parameter, such that when you run {{start-hbase.sh -formatZK}}, it does steps 2-4 automatically for you.
> For safety, we could make the formatter code ensure that no HBase instance is actually active, and skip the format process if one is. This is similar to an HDFS NameNode's format, which is disallowed if the name directories are locked.
> Would this be a useful addition for administrators? Bigtop too could provide a service subcommand that does this.
[jira] [Created] (HBASE-7244) Provide a command or argument to startup, that formats znodes if provided
Harsh J created HBASE-7244:

Summary: Provide a command or argument to startup, that formats znodes if provided
Key: HBASE-7244
URL: https://issues.apache.org/jira/browse/HBASE-7244
Project: HBase
Issue Type: New Feature
Components: Zookeeper
Affects Versions: 0.94.0
Reporter: Harsh J
Priority: Minor

Many a time I've had to stop the cluster, clear out ZK and restart, and I have seen such instructions thrown around. While this is only a quick (and painful to master) fix, it is certainly nifty to some smaller-cluster users, but the process is far too long, roughly:

1. Stop HBase
2. Start zkCli.sh and connect to the right quorum
3. Find and ensure the HBase parent znode from the configs (/hbase only by default)
4. Run an "rmr /hbase" in the zkCli.sh shell, or manually delete each znode if on a lower version of ZK
5. Quit zkCli.sh and start HBase again

Perhaps it may be useful if start-hbase.sh itself accepted a formatZK parameter, such that when you run {{start-hbase.sh -formatZK}}, it does steps 2-4 automatically for you.

For safety, we could make the formatter code ensure that no HBase instance is actually active, and skip the format process if one is. This is similar to an HDFS NameNode's format, which is disallowed if the name directories are locked.

Would this be a useful addition for administrators? Bigtop too could provide a service subcommand that does this.
[jira] [Commented] (HBASE-7242) Use Runtime.exit() instead of Runtime.halt() upon HLog Sync failures
[ https://issues.apache.org/jira/browse/HBASE-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507089#comment-13507089 ]

stack commented on HBASE-7242:

What Kannan said (though if the abort flag is set, they might skip doing this)?

> Use Runtime.exit() instead of Runtime.halt() upon HLog Sync failures
> ---
>
>          Key: HBASE-7242
>          URL: https://issues.apache.org/jira/browse/HBASE-7242
>      Project: HBase
>   Issue Type: Brainstorming
>     Reporter: Amitanand Aiyer
>     Priority: Minor
>
> Hey Guys,
> Should we use Runtime.exit() instead of Runtime.halt() when we fail an HLog sync?
> The key difference is that Runtime.exit() is going to invoke the shutdown hooks, while Runtime.halt() does not.
> Why we might need this:
> We had an HDFS name node reboot today on one of our cells, and this caused multiple region servers to abort because they could not sync the HLog.
> However, since multiple RS died simultaneously, this seemed like a correlated failure to the master. The master waits for the znode to expire, but this could take up to a few minutes after RS death (this setting is in place so that we can withstand rack switch reboots, lasting a couple of minutes, without region movement).
> If the shutdown hooks are called, the RS will close the ZK connection, causing an immediate znode expiry. This might help cut down the unavailability, as regions can begin to get assigned faster.
> While we do want to abort on HLog failure, I do not think it would hurt to give the JVM a few seconds to shut down gracefully. Please let me know if I am missing something.
> Thanks,
> -Amit
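The exit-vs-halt distinction in this thread comes down to shutdown hooks: Runtime.exit() runs them (so a hook that closes the ZooKeeper connection would fire and the znode would expire immediately), while Runtime.halt() skips them. A minimal sketch of registering such a hook; the cleanup action here is a stand-in, not HBase's actual shutdown code:

```java
class ExitVsHaltSketch {
    // Register a cleanup action to run on normal JVM termination,
    // including Runtime.exit()/System.exit(). Runtime.halt() bypasses
    // these hooks entirely, which is the crux of the proposal.
    static Thread registerCleanupHook(Runnable cleanup) {
        Thread hook = new Thread(cleanup, "cleanup-hook");
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }
}
```

With exit(), a hook like this gets a chance to close connections before the process dies; with halt(), the process dies immediately and peers only learn of it when the session times out.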
[jira] [Commented] (HBASE-7219) Make it so can connect remotely to a standalone hbase
[ https://issues.apache.org/jira/browse/HBASE-7219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507088#comment-13507088 ]

stack commented on HBASE-7219:

True on the regionserver content. Yes on how the host identifies itself. I've not played here; I just filed the issue after the same complaint came up a few times in a row on the mailing list. Perhaps this issue could be fixed w/ some clarification in the refguide.

> Make it so can connect remotely to a standalone hbase
> ---
>
>          Key: HBASE-7219
>          URL: https://issues.apache.org/jira/browse/HBASE-7219
>      Project: HBase
>   Issue Type: Bug
>     Reporter: stack
>
> Should be able to connect from a remote client to a standalone instance. HBase has 'localhost' in the regionservers file and will write 'localhost' to the znode for the master location, which a remote client can't use. Fix. This comes up on the mailing list w/ some frequency.
[jira] [Updated] (HBASE-6423) Writes should not block reads on blocking updates to memstores
[ https://issues.apache.org/jira/browse/HBASE-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jimmy Xiang updated HBASE-6423:

Release Note:
Added new configuration parameters to adjust the wait time in getting a lock before acting on a region:
hbase.busy.wait.multiplier.max (default 2)
hbase.rowlock.wait.duration (default: the same as the default RPC timeout)

For reading, it waits at most "hbase.rowlock.wait.duration" to get a lock. For writing, it waits at most "hbase.rowlock.wait.duration" * min(#rows affected, "hbase.busy.wait.multiplier.max"). In either case, it waits at most the server-side call purge timeout.

Hadoop Flags: Reviewed
Status: Patch Available (was: Open)
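The write-side wait described in the release note can be expressed as a small formula. This helper is only illustrative (the parameter names mirror the config keys, but this is not the actual HRegion code):

```java
class RowLockWaitSketch {
    // Per the release note: writes wait at most
    // rowlockWaitMs * min(rowsAffected, multiplierMax),
    // capped by the server-side call purge timeout.
    static long writeWaitMs(long rowlockWaitMs, int rowsAffected,
                            int multiplierMax, long purgeTimeoutMs) {
        long wait = rowlockWaitMs * Math.min(rowsAffected, multiplierMax);
        return Math.min(wait, purgeTimeoutMs);
    }
}
```

For example, with a 1 s rowlock wait and a multiplier cap of 2, a five-row write waits at most 2 s rather than 5 s, and never longer than the purge timeout.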
[jira] [Updated] (HBASE-7236) add per-table/per-cf compaction configuration via metadata
[ https://issues.apache.org/jira/browse/HBASE-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sergey Shelukhin updated HBASE-7236:

Attachment: HBASE-7236-PROTOTYPE.patch

Hmm... most good diff tools have an option to ignore whitespace changes. Attaching the patch w/o whitespace cleanup.
[jira] [Updated] (HBASE-7236) add per-table/per-cf compaction configuration via metadata
[ https://issues.apache.org/jira/browse/HBASE-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sergey Shelukhin updated HBASE-7236:

Status: Patch Available (was: Open)
[jira] [Commented] (HBASE-7220) Creating a table with 3000 regions on 2 nodes fails after 1 hour
[ https://issues.apache.org/jira/browse/HBASE-7220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507048#comment-13507048 ] Ted Yu commented on HBASE-7220: --- {code} if (fut == null || (!fut.isDone() && fut.getDelay(TimeUnit.MILLISECONDS) > 100)) return; synchronized (lock) { fut = executor.getExecutor().schedule(new JmxCacheBusterRunnable(), 5, TimeUnit.SECONDS); } {code} Is it possible that fut becomes non-null after the check but before synchronized block runs ? > Creating a table with 3000 regions on 2 nodes fails after 1 hour > > > Key: HBASE-7220 > URL: https://issues.apache.org/jira/browse/HBASE-7220 > Project: HBase > Issue Type: Bug > Components: metrics, Performance, regionserver >Affects Versions: 0.96.0 >Reporter: nkeywal >Assignee: Elliott Clark > Attachments: HBASE-7220-0.patch, HBASE-7220-1.patch, > HBASE-7220-2.patch > > > I'm trying to create a table with 3000 regions on two regions servers, from > the shell. > It's ok on trunk a standalone config. > It's ok on 0.94 > It's not ok on trunk: it fails after around 1 hour. > If I remove all the code related to metrics in HRegion, the 3000 regions are > created in 3 minutes (twice faster than the 0.94). > On trunk, the region server spends its time in "waitForWork", while the > master is in the tcp connection related code. It's a 1Gb network. > I haven't looked at the metric code itself. 
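Ted's question points at a classic check-then-act race: `fut` is read outside the lock, so another thread can install a new future between the test and the `synchronized` block. A minimal, self-contained sketch of the obvious repair (hypothetical names, not the actual JmxCacheBuster patch) performs both the check and the re-schedule under the same lock:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch only: move the null/delay test inside the lock so no other thread
// can swap `fut` between the check and the assignment.
class JmxBusterSketch {
    private final Object lock = new Object();
    private final ScheduledExecutorService executor =
        Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> fut;

    ScheduledFuture<?> schedule(Runnable task) {
        synchronized (lock) {
            // Re-read fut under the lock: a pending run more than 100 ms out
            // means another caller already scheduled a refresh, so keep it.
            if (fut != null && !fut.isDone()
                    && fut.getDelay(TimeUnit.MILLISECONDS) > 100) {
                return fut;
            }
            fut = executor.schedule(task, 5, TimeUnit.SECONDS);
            return fut;
        }
    }

    void shutdown() { executor.shutdownNow(); }
}
```

Returning the future makes the behavior observable: a second caller arriving while a refresh is still pending simply sees the existing future instead of scheduling a duplicate.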
> Patch used to remove the metrics from HRegion: > {noformat} > index c70e9ab..6677e65 100644 > --- > a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java > +++ > b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java > @@ -364,7 +364,7 @@ public class HRegion implements HeapSize { // , Writable{ >private HTableDescriptor htableDescriptor = null; >private RegionSplitPolicy splitPolicy; > > - private final MetricsRegion metricsRegion; > + private final MetricsRegion metricsRegion = null; > >/** > * Should only be used for testing purposes > @@ -388,7 +388,7 @@ public class HRegion implements HeapSize { // , Writable{ > this.coprocessorHost = null; > this.scannerReadPoints = new ConcurrentHashMap(); > > -this.metricsRegion = new MetricsRegion(new > MetricsRegionWrapperImpl(this)); > +//this.metricsRegion = new MetricsRegion(new > MetricsRegionWrapperImpl(this)); >} > >/** > @@ -451,7 +451,7 @@ public class HRegion implements HeapSize { // , Writable{ > this.regiondir = getRegionDir(this.tableDir, encodedNameStr); > this.scannerReadPoints = new ConcurrentHashMap(); > > -this.metricsRegion = new MetricsRegion(new > MetricsRegionWrapperImpl(this)); > +//this.metricsRegion = new MetricsRegion(new > MetricsRegionWrapperImpl(this)); > > /* > * timestamp.slop provides a server-side constraint on the timestamp. > This > @@ -1024,7 +1024,7 @@ public class HRegion implements HeapSize { // , > Writable{ > status.setStatus("Running coprocessor post-close hooks"); > this.coprocessorHost.postClose(abort); >} > - this.metricsRegion.close(); > + //this.metricsRegion.close(); >status.markComplete("Closed"); >LOG.info("Closed " + this); >return result; > @@ -2331,11 +2331,11 @@ public class HRegion implements HeapSize { // , > Writable{ >if (noOfPuts > 0) { > // There were some Puts in the batch. 
> double noOfMutations = noOfPuts + noOfDeletes; > -this.metricsRegion.updatePut(); > +//this.metricsRegion.updatePut(); >} >if (noOfDeletes > 0) { > // There were some Deletes in the batch. > -this.metricsRegion.updateDelete(); > +//this.metricsRegion.updateDelete(); >} >if (!success) { > for (int i = firstIndex; i < lastIndexExclusive; i++) { > @@ -4270,7 +4270,7 @@ public class HRegion implements HeapSize { // , > Writable{ > > // do after lock > > -this.metricsRegion.updateGet(); > +//this.metricsRegion.updateGet(); > > return results; >} > @@ -4657,7 +4657,7 @@ public class HRegion implements HeapSize { // , > Writable{ >closeRegionOperation(); > } > > -this.metricsRegion.updateAppend(); > +//this.metricsRegion.updateAppend(); > > > if (flush) { > @@ -4795,7 +4795,7 @@ public class HRegion implements HeapSize { // , > Writable{ > mvcc.completeMemstoreInsert(w); >} >closeRegionOperation(); > - this.metricsRegion.updateIncrement(); > + //this.metricsRegion.updateIncrement(); > } > > if (flush) { > {noformat} -- This message is automatically generated by JIRA. If you think it was sent inc
[jira] [Resolved] (HBASE-5968) Proper html escaping for region names
[ https://issues.apache.org/jira/browse/HBASE-5968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar resolved HBASE-5968. -- Resolution: Duplicate Closing this in favor of HBASE-1299 > Proper html escaping for region names > - > > Key: HBASE-5968 > URL: https://issues.apache.org/jira/browse/HBASE-5968 > Project: HBase > Issue Type: Bug > Components: util >Affects Versions: 0.96.0 >Reporter: Enis Soztutar >Assignee: Enis Soztutar > > I noticed that we are not doing html escaping for the rs/master web > interfaces, so you can end up generating html like: > {code} > > > ci,,\xEEp/ > > > hostname > > > ,\xEEp/ > -n\xA8\xE0\x15\xDD\x80! > 2966724 > > {code} > This obviously does not render properly. > Also, my crazy theory is that it can be a security risk. Since the region > name is computed from table rows, which are most of the time user input. Thus > if the rows contain a "
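The fix direction (now tracked under HBASE-1299) is plain HTML escaping of region names before they reach the page. A tiny dependency-free sketch of the idea (the real fix would more likely reuse an existing escaping utility; this is only illustrative):

```java
// Escape the few HTML-significant characters so bytes like '<' in a row key,
// and hence in a region name, cannot inject markup into the web UI.
class HtmlEscape {
    static String escape(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```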
[jira] [Commented] (HBASE-7220) Creating a table with 3000 regions on 2 nodes fails after 1 hour
[ https://issues.apache.org/jira/browse/HBASE-7220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507042#comment-13507042 ] Enis Soztutar commented on HBASE-7220: -- I've created HBASE-7243 for adding a test for this. > Creating a table with 3000 regions on 2 nodes fails after 1 hour > > > Key: HBASE-7220 > URL: https://issues.apache.org/jira/browse/HBASE-7220 > Project: HBase > Issue Type: Bug > Components: metrics, Performance, regionserver >Affects Versions: 0.96.0 >Reporter: nkeywal >Assignee: Elliott Clark > Attachments: HBASE-7220-0.patch, HBASE-7220-1.patch, > HBASE-7220-2.patch > > > I'm trying to create a table with 3000 regions on two regions servers, from > the shell. > It's ok on trunk a standalone config. > It's ok on 0.94 > It's not ok on trunk: it fails after around 1 hour. > If I remove all the code related to metrics in HRegion, the 3000 regions are > created in 3 minutes (twice faster than the 0.94). > On trunk, the region server spends its time in "waitForWork", while the > master is in the tcp connection related code. It's a 1Gb network. > I haven't looked at the metric code itself. 
> Patch used to remove the metrics from HRegion: > {noformat} > index c70e9ab..6677e65 100644 > --- > a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java > +++ > b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java > @@ -364,7 +364,7 @@ public class HRegion implements HeapSize { // , Writable{ >private HTableDescriptor htableDescriptor = null; >private RegionSplitPolicy splitPolicy; > > - private final MetricsRegion metricsRegion; > + private final MetricsRegion metricsRegion = null; > >/** > * Should only be used for testing purposes > @@ -388,7 +388,7 @@ public class HRegion implements HeapSize { // , Writable{ > this.coprocessorHost = null; > this.scannerReadPoints = new ConcurrentHashMap(); > > -this.metricsRegion = new MetricsRegion(new > MetricsRegionWrapperImpl(this)); > +//this.metricsRegion = new MetricsRegion(new > MetricsRegionWrapperImpl(this)); >} > >/** > @@ -451,7 +451,7 @@ public class HRegion implements HeapSize { // , Writable{ > this.regiondir = getRegionDir(this.tableDir, encodedNameStr); > this.scannerReadPoints = new ConcurrentHashMap(); > > -this.metricsRegion = new MetricsRegion(new > MetricsRegionWrapperImpl(this)); > +//this.metricsRegion = new MetricsRegion(new > MetricsRegionWrapperImpl(this)); > > /* > * timestamp.slop provides a server-side constraint on the timestamp. > This > @@ -1024,7 +1024,7 @@ public class HRegion implements HeapSize { // , > Writable{ > status.setStatus("Running coprocessor post-close hooks"); > this.coprocessorHost.postClose(abort); >} > - this.metricsRegion.close(); > + //this.metricsRegion.close(); >status.markComplete("Closed"); >LOG.info("Closed " + this); >return result; > @@ -2331,11 +2331,11 @@ public class HRegion implements HeapSize { // , > Writable{ >if (noOfPuts > 0) { > // There were some Puts in the batch. 
> double noOfMutations = noOfPuts + noOfDeletes; > -this.metricsRegion.updatePut(); > +//this.metricsRegion.updatePut(); >} >if (noOfDeletes > 0) { > // There were some Deletes in the batch. > -this.metricsRegion.updateDelete(); > +//this.metricsRegion.updateDelete(); >} >if (!success) { > for (int i = firstIndex; i < lastIndexExclusive; i++) { > @@ -4270,7 +4270,7 @@ public class HRegion implements HeapSize { // , > Writable{ > > // do after lock > > -this.metricsRegion.updateGet(); > +//this.metricsRegion.updateGet(); > > return results; >} > @@ -4657,7 +4657,7 @@ public class HRegion implements HeapSize { // , > Writable{ >closeRegionOperation(); > } > > -this.metricsRegion.updateAppend(); > +//this.metricsRegion.updateAppend(); > > > if (flush) { > @@ -4795,7 +4795,7 @@ public class HRegion implements HeapSize { // , > Writable{ > mvcc.completeMemstoreInsert(w); >} >closeRegionOperation(); > - this.metricsRegion.updateIncrement(); > + //this.metricsRegion.updateIncrement(); > } > > if (flush) { > {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-7243) Test for creating a large number of regions
Enis Soztutar created HBASE-7243: Summary: Test for creating a large number of regions Key: HBASE-7243 URL: https://issues.apache.org/jira/browse/HBASE-7243 Project: HBase Issue Type: Bug Components: Region Assignment, regionserver, test Reporter: Enis Soztutar Fix For: 0.96.0 After HBASE-7220, I think it will be good to write a unit test/IT to create a large number of regions. We can put a reasonable timeout to the test. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6966) "Compressed RPCs for HBase" (HBASE-5355) port to trunk
[ https://issues.apache.org/jira/browse/HBASE-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507031#comment-13507031 ] Devaraj Das commented on HBASE-6966: Sorry for being silent on this one. The update on this one is when I tried this benchmark, it seemed like there is some bug which shows up sometimes and makes the benchmark application really slow (RPCs become really slow). I will debug it sometime.. > "Compressed RPCs for HBase" (HBASE-5355) port to trunk > -- > > Key: HBASE-6966 > URL: https://issues.apache.org/jira/browse/HBASE-6966 > Project: HBase > Issue Type: Improvement > Components: IPC/RPC >Reporter: Devaraj Das >Assignee: Devaraj Das > Fix For: 0.96.0 > > Attachments: 6966-1.patch, 6966-v1.1.txt, 6966-v1.2.txt, 6966-v2.txt > > > This jira will address the port of the compressed RPC implementation to > trunk. I am expecting the patch to be significantly different due to the PB > stuff in trunk, and hence filed a separate jira. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6423) Writes should not block reads on blocking updates to memstores
[ https://issues.apache.org/jira/browse/HBASE-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507028#comment-13507028 ] Lars Hofhansl commented on HBASE-6423: -- This part's a bit ugly: {code} + protected Configuration createConf() { +Configuration conf = HBaseConfiguration.create(); +if (busyWaitDuration != null) { + conf.set("hbase.busy.wait.duration", busyWaitDuration); +} +return conf; + } {code} We're using the configuration to pass this information around? Why isn't it in the configuration in the first place? > Writes should not block reads on blocking updates to memstores > -- > > Key: HBASE-6423 > URL: https://issues.apache.org/jira/browse/HBASE-6423 > Project: HBase > Issue Type: Bug >Reporter: Karthik Ranganathan >Assignee: Jimmy Xiang > Attachments: trunk-6423.patch, trunk-6423_v2.1.patch, > trunk-6423_v2.patch, trunk-6423_v3.2.patch > > > We have a big data use case where we turn off WAL and have a ton of reads and > writes. We found that: > 1. flushing a memstore takes a while (GZIP compression) > 2. incoming writes cause the new memstore to grow in an unbounded fashion > 3. this triggers blocking memstore updates > 4. in turn, this causes all the RPC handler threads to block on writes to > that memstore > 5. we are not able to read during this time as RPC handlers are blocked > At a higher level, we should not hold up the RPC threads while blocking > updates, and we should build in some sort of rate control. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
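The cleaner shape the review is hinting at — put the value into the `Configuration` up front instead of threading it through a `createConf()` override — would look roughly like this in test setup (the key is taken from the snippet above; the value and method name are illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch: set the knob directly on the conf the region/cluster is built from,
// so nothing downstream needs a hook to pass it along.
class ConfSetupSketch {
    static Configuration testConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.busy.wait.duration", "1000"); // illustrative value, ms
        return conf;
    }
}
```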
[jira] [Updated] (HBASE-6423) Writes should not block reads on blocking updates to memstores
[ https://issues.apache.org/jira/browse/HBASE-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-6423: --- Attachment: trunk-6423_v3.2.patch > Writes should not block reads on blocking updates to memstores > -- > > Key: HBASE-6423 > URL: https://issues.apache.org/jira/browse/HBASE-6423 > Project: HBase > Issue Type: Bug >Reporter: Karthik Ranganathan >Assignee: Jimmy Xiang > Attachments: trunk-6423.patch, trunk-6423_v2.1.patch, > trunk-6423_v2.patch, trunk-6423_v3.2.patch > > > We have a big data use case where we turn off WAL and have a ton of reads and > writes. We found that: > 1. flushing a memstore takes a while (GZIP compression) > 2. incoming writes cause the new memstore to grow in an unbounded fashion > 3. this triggers blocking memstore updates > 4. in turn, this causes all the RPC handler threads to block on writes to > that memstore > 5. we are not able to read during this time as RPC handlers are blocked > At a higher level, we should not hold up the RPC threads while blocking > updates, and we should build in some sort of rate control. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7236) add per-table/per-cf compaction configuration via metadata
[ https://issues.apache.org/jira/browse/HBASE-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507019#comment-13507019 ] Ted Yu commented on HBASE-7236: --- I saw quite some changes which only affect white space. Can you simplify your patch so that it is easier to review ? Thanks > add per-table/per-cf compaction configuration via metadata > -- > > Key: HBASE-7236 > URL: https://issues.apache.org/jira/browse/HBASE-7236 > Project: HBase > Issue Type: New Feature > Components: Compaction >Affects Versions: 0.96.0 >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HBASE-7236-PROTOTYPE.patch > > > Regardless of the compaction policy, it makes sense to have separate > configuration for compactions for different tables and column families, as > their access patterns and workloads can be different. In particular, for > tiered compactions that are being ported from 0.89-fb branch it is necessary > to have, to use it properly. > We might want to add support for compaction configuration via metadata on > table/cf. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7226) HRegion.checkAndMutate uses incorrect comparison result for <, <=, > and >=
[ https://issues.apache.org/jira/browse/HBASE-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-7226: - Fix Version/s: (was: 0.94.2) 0.94.4 > HRegion.checkAndMutate uses incorrect comparison result for <, <=, > and >= > --- > > Key: HBASE-7226 > URL: https://issues.apache.org/jira/browse/HBASE-7226 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 0.94.2 > Environment: 0.94.2 >Reporter: Feng Honghua >Priority: Minor > Fix For: 0.94.4 > > Attachments: HRegion_HBASE_7226_0.94.2.patch > > Original Estimate: 10m > Remaining Estimate: 10m > > in HRegion.checkAndMutate, incorrect comparison results are used for <, <=, > > and >=, as below: > switch (compareOp) { > case LESS: > matches = compareResult <= 0; // should be '<' here > break; > case LESS_OR_EQUAL: > matches = compareResult < 0; // should be '<=' here > break; > case EQUAL: > matches = compareResult == 0; > break; > case NOT_EQUAL: > matches = compareResult != 0; > break; > case GREATER_OR_EQUAL: > matches = compareResult > 0; // should be '>=' here > break; > case GREATER: > matches = compareResult >= 0; // should be '>' here > break; -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
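Rendering the report as code, the corrected mapping is the following (a self-contained stand-in, not the actual HRegion source; `compareResult` is the comparator's negative/zero/positive output):

```java
// Each CompareOp must use the comparison operator of the same name; the
// inline comments show the incorrect operator the report found in 0.94.2.
enum CompareOp { LESS, LESS_OR_EQUAL, EQUAL, NOT_EQUAL, GREATER_OR_EQUAL, GREATER }

class CheckAndMutateFix {
    static boolean matches(CompareOp op, int compareResult) {
        switch (op) {
            case LESS:             return compareResult < 0;   // was: <= 0
            case LESS_OR_EQUAL:    return compareResult <= 0;  // was: <  0
            case EQUAL:            return compareResult == 0;
            case NOT_EQUAL:        return compareResult != 0;
            case GREATER_OR_EQUAL: return compareResult >= 0;  // was: >  0
            case GREATER:          return compareResult > 0;   // was: >= 0
        }
        throw new IllegalArgumentException("unknown op: " + op);
    }
}
```

With the buggy mapping, `LESS` on a tie (`compareResult == 0`) wrongly matched; the fix makes ties match only the `_OR_EQUAL` variants.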
[jira] [Updated] (HBASE-7202) Family Store Files are not archived on admin.deleteColumn()
[ https://issues.apache.org/jira/browse/HBASE-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-7202: - Fix Version/s: (was: 0.94.3) 0.94.4 > Family Store Files are not archived on admin.deleteColumn() > --- > > Key: HBASE-7202 > URL: https://issues.apache.org/jira/browse/HBASE-7202 > Project: HBase > Issue Type: Bug >Affects Versions: 0.94.2, 0.96.0 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi > Fix For: 0.96.0, 0.94.4 > > Attachments: HBASE-7202-v1.patch, HBASE-7202-v2.patch > > > Using HBaseAdmin.deleteColumn() the files are not archived but deleted > directly. > This causes problems with snapshots, and other systems that rely on files > being archived. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7232) Remove HbaseMapWritable
[ https://issues.apache.org/jira/browse/HBASE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-7232: - Attachment: 7232v4.txt Was missing files. > Remove HbaseMapWritable > --- > > Key: HBASE-7232 > URL: https://issues.apache.org/jira/browse/HBASE-7232 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack > Attachments: 7232.txt, 7232.txt, 7232v2.txt, 7232v3.txt, 7232v4.txt > > > Its used by hfile fileinfo only so need to convert fileinfo to remove this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7232) Remove HbaseMapWritable
[ https://issues.apache.org/jira/browse/HBASE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-7232: - Attachment: 7232v3.txt Address reviewers' comments and make it actually pass tests. > Remove HbaseMapWritable > --- > > Key: HBASE-7232 > URL: https://issues.apache.org/jira/browse/HBASE-7232 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack > Attachments: 7232.txt, 7232.txt, 7232v2.txt, 7232v3.txt > > > Its used by hfile fileinfo only so need to convert fileinfo to remove this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7242) Use Runtime.exit() instead of Runtime.halt() upon HLog Sync failures
[ https://issues.apache.org/jira/browse/HBASE-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506993#comment-13506993 ] Kannan Muthukkaruppan commented on HBASE-7242: -- Currently, don't the shutdown hooks also try to flush/close the regions before closing the ZK connection? > Use Runtime.exit() instead of Runtime.halt() upon HLog Sync failures > > > Key: HBASE-7242 > URL: https://issues.apache.org/jira/browse/HBASE-7242 > Project: HBase > Issue Type: Brainstorming >Reporter: Amitanand Aiyer >Priority: Minor > > Hey Guys, > Should we use Runtime.exit() instead of Runtime.halt(), when we fail a Hlog > sync. > The key difference is that Runtime.exit() is going to invoke the shutdown > hooks; while Runtime.halt() does not. > Why we might need this: >We had a HDFS name node reboot today on one of our cells, and this caused > multiple region servers to abort because they could not sync the Hlog. >However, since multiple RS died simultaneously, this seemed like a > co-related failure to the master. The master waits for the > Znode to expire; but, this could take up to few minutes after RS death (this > setting is in place so that we can withstand rack switch reboots, lasting a > couple of minutes, without region movement). > If the shutdown hooks are called, RS will close the ZK connection, causing > a immediate Znode expiry. This might help cut down the unavailability as > Regions can begin to get assigned faster. > While, we do want to abort on Hlog failure, I do not think it would hurt > giving the JVM a few seconds to shutdown gracefully. Please let me know > If I am missing something. > Thanks, > -Amit -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7240) Cleanup old snapshots on start
[ https://issues.apache.org/jira/browse/HBASE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506992#comment-13506992 ] Jesse Yates commented on HBASE-7240: It would be a delete of the /hbase/snapshot/.tmp directory. Might change if we continue running snapshots across master failure sometime in the future, but for the moment, it's just a single RPC. > Cleanup old snapshots on start > -- > > Key: HBASE-7240 > URL: https://issues.apache.org/jira/browse/HBASE-7240 > Project: HBase > Issue Type: Sub-task > Components: Client, master, regionserver, snapshots, Zookeeper >Affects Versions: hbase-6055 >Reporter: Jesse Yates > Fix For: hbase-6055 > > > If the master is hard stopped (i.e. kill -9), the snapshot handler or > SnapshotManager may not have a chance to cleanup after the snapshot, leaving > extraneous files in the working snapshot directory (/hbase/.snapshot/.tmp > directory). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
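That single RPC would amount to a recursive delete of the working snapshot directory at master startup. A hedged sketch against the Hadoop `FileSystem` API (class and method names are illustrative; the path comes from the issue description):

```java
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: on master start, clear out any half-finished snapshot left behind
// by a hard stop (kill -9). A recursive delete is a no-op if nothing is there.
class SnapshotTmpCleaner {
    static void cleanupOnStart(FileSystem fs, Path rootDir) throws IOException {
        Path workingDir = new Path(rootDir, ".snapshot/.tmp");
        fs.delete(workingDir, true); // recursive
    }
}
```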
[jira] [Commented] (HBASE-7240) Cleanup old snapshots on start
[ https://issues.apache.org/jira/browse/HBASE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506989#comment-13506989 ] Ted Yu commented on HBASE-7240: --- Cleanup at startup would be completed quickly, right ? I assume this wouldn't affect total recovery time much. > Cleanup old snapshots on start > -- > > Key: HBASE-7240 > URL: https://issues.apache.org/jira/browse/HBASE-7240 > Project: HBase > Issue Type: Sub-task > Components: Client, master, regionserver, snapshots, Zookeeper >Affects Versions: hbase-6055 >Reporter: Jesse Yates > Fix For: hbase-6055 > > > If the master is hard stopped (i.e. kill -9), the snapshot handler or > SnapshotManager may not have a chance to cleanup after the snapshot, leaving > extraneous files in the working snapshot directory (/hbase/.snapshot/.tmp > directory). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7234) Remove long-deprecated HServerAddress and HServerInfo Writables
[ https://issues.apache.org/jira/browse/HBASE-7234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-7234: - Fix Version/s: 0.96.0 Status: Patch Available (was: Open) > Remove long-deprecated HServerAddress and HServerInfo Writables > --- > > Key: HBASE-7234 > URL: https://issues.apache.org/jira/browse/HBASE-7234 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 0.96.0 > > Attachments: 7234.txt > > > These classes have been deprecated since 0.92 or before. Remove them. > Remove them too because they are Writable. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7232) Remove HbaseMapWritable
[ https://issues.apache.org/jira/browse/HBASE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506984#comment-13506984 ] stack commented on HBASE-7232: -- bq. In HBaseObjectWritable is it cleaner to just increment the code (like on line 258) rather than putting Object in the map ? Yes. Thanks. bq. Would having separate implementations of the HFile.FileInfo with different reader methods be worth it ? More pain than it is worth IMO. bq. HFileWriterV2 is a white space only change is that intended ? Let me remove from the next revision. bq. Seems like most of the CompoundBloomFilter classes belong in io. Worth moving them now ? Not as part of this patch I'd say. They need a bit of work to undo Writables. Might mess up backward compatibility moving their location. Would need to check if class name is written to the hfile). bq. Should CompoundBloomFilterWriter#cacheOnWrite() be renamed to getCacheOnWrite ? Yes. Thanks for the review. > Remove HbaseMapWritable > --- > > Key: HBASE-7232 > URL: https://issues.apache.org/jira/browse/HBASE-7232 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack > Attachments: 7232.txt, 7232.txt, 7232v2.txt > > > Its used by hfile fileinfo only so need to convert fileinfo to remove this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7234) Remove long-deprecated HServerAddress and HServerInfo Writables
[ https://issues.apache.org/jira/browse/HBASE-7234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-7234: - Attachment: 7234.txt > Remove long-deprecated HServerAddress and HServerInfo Writables > --- > > Key: HBASE-7234 > URL: https://issues.apache.org/jira/browse/HBASE-7234 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack >Priority: Blocker > Attachments: 7234.txt > > > These classes have been deprecated since 0.92 or before. Remove them. > Remove them too because they are Writable. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7236) add per-table/per-cf compaction configuration via metadata
[ https://issues.apache.org/jira/browse/HBASE-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-7236: Attachment: HBASE-7236-PROTOTYPE.patch After poking around a bit, I think it's a good idea to have a direct config key override. Justification is that having separately named metadata fields will be hard to manage given the potential number of fields. Ditto for JSON serialization - it would require special logic to validate/use overrides, and shell support would be painful. Perhaps we can white-list configs that can be overridden this way, too, inside HTableDescriptor. Here's the prototype for table level only, and with no shell support and tests for now. Technically, this approach already works in Store for column family (CompoundConfiguration is created with family metadata overriding Configuration, so if someone adds a key with the correct name to family metadata it will override xml config within Store); however, making it explicit and separate from miscellaneous metadata would be cleaner imho. Please comment... Thanks! > add per-table/per-cf compaction configuration via metadata > -- > > Key: HBASE-7236 > URL: https://issues.apache.org/jira/browse/HBASE-7236 > Project: HBase > Issue Type: New Feature > Components: Compaction >Affects Versions: 0.96.0 >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HBASE-7236-PROTOTYPE.patch > > > Regardless of the compaction policy, it makes sense to have separate > configuration for compactions for different tables and column families, as > their access patterns and workloads can be different. In particular, for > tiered compactions that are being ported from 0.89-fb branch it is necessary > to have, to use it properly. > We might want to add support for compaction configuration via metadata on > table/cf. -- This message is automatically generated by JIRA. 
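From a user's point of view, the proposed direct key override would look something like the following (a hedged sketch: the key and value are illustrative, and the patch above is still a prototype):

```java
import org.apache.hadoop.hbase.HTableDescriptor;

// Sketch: a compaction config key stored as table metadata, which
// CompoundConfiguration would then layer over the cluster xml config
// when the Store reads its compaction settings.
class CompactionOverrideSketch {
    static HTableDescriptor withOverride() {
        HTableDescriptor desc = new HTableDescriptor("mytable");
        desc.setValue("hbase.hstore.compaction.min", "6"); // per-table override
        return desc;
    }
}
```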
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7219) Make it so can connect remotely to a standalone hbase
[ https://issues.apache.org/jira/browse/HBASE-7219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506974#comment-13506974 ] Jean-Daniel Cryans commented on HBASE-7219: --- bq. HBase has 'localhost' in regionservers file This doesn't have anything to do with how we report addresses tho, it's only used for ssh. bq. and will write 'localhost' to znode for master location which remote client can't use AFAIK we don't force localhost in there, what we do force is looking up localhost for the zookeeper server by default and even then a remote client could just specify the right address and it should work. If localhost is reported for the master or RS, it's a problem with how the node identifies itself. > Make it so can connect remotely to a standalone hbase > - > > Key: HBASE-7219 > URL: https://issues.apache.org/jira/browse/HBASE-7219 > Project: HBase > Issue Type: Bug >Reporter: stack > > Should be able to connect from a remote client to a standalone instance. > HBase has 'localhost' in regionservers file and will write 'localhost' to > znode for master location which remote client can't use. Fix. This comes up > on mailing list w/ some frequency. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
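From the client side, JD's point reduces to: a remote client never reads the server's `regionservers` file; it only needs the standalone instance's ZooKeeper quorum, and connecting works as long as the master/RS registered a resolvable address rather than `localhost`. A hedged sketch with the client API of that era (hostname and table name are illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

// Sketch: point the client's conf at the standalone box's ZooKeeper.
class RemoteClientSketch {
    static HTable connect() throws IOException {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "standalone-box.example.com");
        return new HTable(conf, "mytable");
    }
}
```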
[jira] [Updated] (HBASE-7242) Use Runtime.exit() instead of Runtime.halt() upon HLog Sync failures
[ https://issues.apache.org/jira/browse/HBASE-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amitanand Aiyer updated HBASE-7242: --- Summary: Use Runtime.exit() instead of Runtime.halt() upon HLog Sync failures (was: Use Runtime.exit() instead of Runtime.halt() upon HLog flush failures) > Use Runtime.exit() instead of Runtime.halt() upon HLog Sync failures > > > Key: HBASE-7242 > URL: https://issues.apache.org/jira/browse/HBASE-7242 > Project: HBase > Issue Type: Brainstorming >Reporter: Amitanand Aiyer >Priority: Minor > > Hey Guys, > Should we use Runtime.exit() instead of Runtime.halt(), when we fail a Hlog > sync. > The key difference is that Runtime.exit() is going to invoke the shutdown > hooks; while Runtime.halt() does not. > Why we might need this: >We had a HDFS name node reboot today on one of our cells, and this caused > multiple region servers to abort because they could not sync the Hlog. >However, since multiple RS died simultaneously, this seemed like a > co-related failure to the master. The master waits for the > Znode to expire; but, this could take up to few minutes after RS death (this > setting is in place so that we can withstand rack switch reboots, lasting a > couple of minutes, without region movement). > If the shutdown hooks are called, RS will close the ZK connection, causing > a immediate Znode expiry. This might help cut down the unavailability as > Regions can begin to get assigned faster. > While, we do want to abort on Hlog failure, I do not think it would hurt > giving the JVM a few seconds to shutdown gracefully. Please let me know > If I am missing something. > Thanks, > -Amit -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-7242) Use Runtime.exit() instead of Runtime.halt() upon HLog flush failures
Amitanand Aiyer created HBASE-7242: -- Summary: Use Runtime.exit() instead of Runtime.halt() upon HLog flush failures Key: HBASE-7242 URL: https://issues.apache.org/jira/browse/HBASE-7242 Project: HBase Issue Type: Brainstorming Reporter: Amitanand Aiyer Priority: Minor Hey guys, Should we use Runtime.exit() instead of Runtime.halt() when we fail an HLog sync? The key difference is that Runtime.exit() is going to invoke the shutdown hooks, while Runtime.halt() does not. Why we might need this: We had an HDFS name node reboot today on one of our cells, and this caused multiple region servers to abort because they could not sync the HLog. However, since multiple RS died simultaneously, this seemed like a correlated failure to the master. The master waits for the Znode to expire, but this could take up to a few minutes after RS death (this setting is in place so that we can withstand rack switch reboots, lasting a couple of minutes, without region movement). If the shutdown hooks are called, the RS will close the ZK connection, causing an immediate Znode expiry. This might help cut down the unavailability, as regions can begin to get assigned faster. While we do want to abort on HLog failure, I do not think it would hurt giving the JVM a few seconds to shut down gracefully. Please let me know if I am missing something. Thanks, -Amit -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
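The trade-off Amitanand describes is plain Java semantics: Runtime.exit() runs the registered shutdown hooks (so the RS's ZooKeeper connection is closed and the znode expires right away), while Runtime.halt() terminates without them. A small sketch of that mechanism (the printed "cleanup" stands in for closing the ZK session; the exit/halt calls are left commented out so the snippet returns normally):

```java
// Sketch of the exit-vs-halt distinction. The hook below stands in for the
// region server closing its ZooKeeper connection on the way down.
public class ExitVsHalt {
    static final Thread CLEANUP = new Thread(
            () -> System.out.println("closing ZooKeeper connection"));

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        rt.addShutdownHook(CLEANUP);           // runs on exit(), skipped on halt()
        boolean wasRegistered = rt.removeShutdownHook(CLEANUP);
        System.out.println("hook was registered: " + wasRegistered);

        // rt.exit(1);  // graceful abort: shutdown hooks run, znode expires now
        // rt.halt(1);  // hard abort: no hooks, master waits out the ZK timeout
    }
}
```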
[jira] [Commented] (HBASE-7215) Put, Delete, Increment, Result, all all HBase M/R classes still implement/use Writable
[ https://issues.apache.org/jira/browse/HBASE-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506969#comment-13506969 ] Hudson commented on HBASE-7215: --- Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #280 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/280/]) HBASE-7215 Put, Delete, Increment, Result, all all HBase M/R classes still implement/use Writable (Revision 1415412) Result = FAILURE larsh : Files : * /hbase/trunk/hbase-examples/src/main/java/org/apache/hadoop/hbase/mapreduce/IndexBuilder.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Action.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Append.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Delete.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Get.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Increment.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/MultiAction.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/MultiPut.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/MultiPutResponse.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/MultiResponse.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/OperationWithAttributes.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Put.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Result.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Scan.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseClient.java * 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMap.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableReduce.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableReducer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MutationSerialization.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ResultSerialization.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableReducer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestSerialization.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAttributes.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTimestampsFilter.java > Put, Delete, Increment, Result, all all HBase M/R classes still implement/use > Writable > -- > > Key: HBASE-7215 > URL: 
https://issues.apache.org/jira/browse/HBASE-7215 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Blocker > Fix For: 0.96.0 > > Attachments: 7215-v2.txt, 7215v3_mutableresult.txt, 7215v3.txt, > 7215v4.txt, 7215v5.txt, 7215v6.txt, 7215v7.txt, 7215v7.txt, 7251-SKETCH.txt, > MutableResult.java > > > Making blocker as suggested by Stack. > At least the following still use Put/Delete as writables. > * IdentityTableReduce.java > * MultiPut.java > * HRegionServer.checkAndMutate -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7241) [refGuide] Update to Performannce/writing/presplit
[ https://issues.apache.org/jira/browse/HBASE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506970#comment-13506970 ] Hudson commented on HBASE-7241: --- Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #280 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/280/]) hbase-7241. refGuide. Perf/Schema design cleanup. (Revision 1415422) Result = FAILURE > [refGuide] Update to Performannce/writing/presplit > -- > > Key: HBASE-7241 > URL: https://issues.apache.org/jira/browse/HBASE-7241 > Project: HBase > Issue Type: Improvement >Reporter: Doug Meil >Assignee: Doug Meil >Priority: Minor > Attachments: docbkx_hbase_7241.patch > > > Took the pre-split example that was in Performance/Writing/Pre-split and > moved it to the Schema Design/RowKey-PreSplit section that was just created. > Updated the Perf/pre-split section to additionally refer to the new > RowKey-PreSplit section. > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7204) Make hbck ErrorReporter pluggable
[ https://issues.apache.org/jira/browse/HBASE-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7204: --- Hadoop Flags: Reviewed Status: Patch Available (was: Open) Try hadoopqa again. > Make hbck ErrorReporter pluggable > - > > Key: HBASE-7204 > URL: https://issues.apache.org/jira/browse/HBASE-7204 > Project: HBase > Issue Type: Improvement > Components: hbck >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > Attachments: 0.94-7204.patch, trunk-7204.patch, > trunk-7204_v2.1.patch, trunk-7204_v2.patch > > > Make hbck ErrorReporter pluggable so that it can be replaced dynamically. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7204) Make hbck ErrorReporter pluggable
[ https://issues.apache.org/jira/browse/HBASE-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7204: --- Status: Open (was: Patch Available) > Make hbck ErrorReporter pluggable > - > > Key: HBASE-7204 > URL: https://issues.apache.org/jira/browse/HBASE-7204 > Project: HBase > Issue Type: Improvement > Components: hbck >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > Attachments: 0.94-7204.patch, trunk-7204.patch, > trunk-7204_v2.1.patch, trunk-7204_v2.patch > > > Make hbck ErrorReporter pluggable so that it can be replaced dynamically. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7204) Make hbck ErrorReporter pluggable
[ https://issues.apache.org/jira/browse/HBASE-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7204: --- Attachment: trunk-7204_v2.1.patch > Make hbck ErrorReporter pluggable > - > > Key: HBASE-7204 > URL: https://issues.apache.org/jira/browse/HBASE-7204 > Project: HBase > Issue Type: Improvement > Components: hbck >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > Attachments: 0.94-7204.patch, trunk-7204.patch, > trunk-7204_v2.1.patch, trunk-7204_v2.patch > > > Make hbck ErrorReporter pluggable so that it can be replaced dynamically. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7204) Make hbck ErrorReporter pluggable
[ https://issues.apache.org/jira/browse/HBASE-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7204: --- Attachment: 0.94-7204.patch > Make hbck ErrorReporter pluggable > - > > Key: HBASE-7204 > URL: https://issues.apache.org/jira/browse/HBASE-7204 > Project: HBase > Issue Type: Improvement > Components: hbck >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > Attachments: 0.94-7204.patch, trunk-7204.patch, trunk-7204_v2.patch > > > Make hbck ErrorReporter pluggable so that it can be replaced dynamically. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7241) [refGuide] Update to Performannce/writing/presplit
[ https://issues.apache.org/jira/browse/HBASE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506962#comment-13506962 ] Hudson commented on HBASE-7241: --- Integrated in HBase-TRUNK #3580 (See [https://builds.apache.org/job/HBase-TRUNK/3580/]) hbase-7241. refGuide. Perf/Schema design cleanup. (Revision 1415422) Result = FAILURE > [refGuide] Update to Performannce/writing/presplit > -- > > Key: HBASE-7241 > URL: https://issues.apache.org/jira/browse/HBASE-7241 > Project: HBase > Issue Type: Improvement >Reporter: Doug Meil >Assignee: Doug Meil >Priority: Minor > Attachments: docbkx_hbase_7241.patch > > > Took the pre-split example that was in Performance/Writing/Pre-split and > moved it to the Schema Design/RowKey-PreSplit section that was just created. > Updated the Perf/pre-split section to additionally refer to the new > RowKey-PreSplit section. > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7215) Put, Delete, Increment, Result, all all HBase M/R classes still implement/use Writable
[ https://issues.apache.org/jira/browse/HBASE-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506961#comment-13506961 ] Hudson commented on HBASE-7215: --- Integrated in HBase-TRUNK #3580 (See [https://builds.apache.org/job/HBase-TRUNK/3580/]) HBASE-7215 Put, Delete, Increment, Result, all all HBase M/R classes still implement/use Writable (Revision 1415412) Result = FAILURE larsh : Files : * /hbase/trunk/hbase-examples/src/main/java/org/apache/hadoop/hbase/mapreduce/IndexBuilder.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Action.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Append.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Delete.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Get.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Increment.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/MultiAction.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/MultiPut.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/MultiPutResponse.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/MultiResponse.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/OperationWithAttributes.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Put.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Result.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Scan.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseClient.java * 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMap.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableReduce.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableReducer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MutationSerialization.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ResultSerialization.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableReducer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestSerialization.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAttributes.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTimestampsFilter.java > Put, Delete, Increment, Result, all all HBase M/R classes still implement/use > Writable > -- > > Key: HBASE-7215 > URL: 
https://issues.apache.org/jira/browse/HBASE-7215 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Blocker > Fix For: 0.96.0 > > Attachments: 7215-v2.txt, 7215v3_mutableresult.txt, 7215v3.txt, > 7215v4.txt, 7215v5.txt, 7215v6.txt, 7215v7.txt, 7215v7.txt, 7251-SKETCH.txt, > MutableResult.java > > > Making blocker as suggested by Stack. > At least the following still use Put/Delete as writables. > * IdentityTableReduce.java > * MultiPut.java > * HRegionServer.checkAndMutate -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7213) Have HLog files for .META. edits only
[ https://issues.apache.org/jira/browse/HBASE-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506953#comment-13506953 ] Todd Lipcon commented on HBASE-7213: Yea, I don't think you need to strictly sequence this after the multi-WAL work. But it would be nice to have the "end goal" in mind while doing this work. Sorry, haven't had time to look at the in-progress patch, but if there's a simple solution that works OK now, no sense blocking it for the perfect end-game solution later. > Have HLog files for .META. edits only > - > > Key: HBASE-7213 > URL: https://issues.apache.org/jira/browse/HBASE-7213 > Project: HBase > Issue Type: Improvement > Components: master, regionserver >Reporter: Devaraj Das >Assignee: Devaraj Das > Attachments: 7213-in-progress.patch > > > Over on HBASE-6774, there is a discussion on separating out the edits for > .META. regions from the other regions' edits w.r.t where the edits are > written. This jira is to track an implementation of that. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6052) Convert .META. and -ROOT- content to pb
[ https://issues.apache.org/jira/browse/HBASE-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506951#comment-13506951 ] Jean-Daniel Cryans commented on HBASE-6052: --- Enis, Can we remove this log message? {code} public static HRegionInfo getHRegionInfo(Result data) { ... if (LOG.isDebugEnabled()) { LOG.debug("Current INFO from scan results = " + info); } return info; } {code} It's not too spammy if you have a few regions but as soon as you reach the hundreds it becomes quite excessive. For example, try running this: bq. bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --presplit=500 randomWrite 5 > Convert .META. and -ROOT- content to pb > --- > > Key: HBASE-6052 > URL: https://issues.apache.org/jira/browse/HBASE-6052 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: Enis Soztutar >Priority: Blocker > Fix For: 0.96.0 > > Attachments: 6052-v5.txt, 6052_v8.patch, HBASE-6052_v1.patch, > HBASE-6052_v2.patch, HBASE-6052_v3.patch, HBASE-6052_v4.patch, > HBASE-6052_v4.patch, HBASE-6052_v7.patch, HBASE-6052_v8.patch, > HBASE-6052_v9.patch, TestMetaMigrationConvertToPB.tgz > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
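The snippet being discussed already guards the call with LOG.isDebugEnabled(); the objection is to the level, since the message fires once per scan result and multiplies with region count. One common remedy is demoting such per-row messages below the default level. A sketch of the pattern using java.util.logging (HBase itself logs through commons-logging, so class and method names here are illustrative; FINEST plays the role of TRACE):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative only: HBase uses commons-logging, not java.util.logging.
public class ScanLogging {
    private static final Logger LOG = Logger.getLogger(ScanLogging.class.getName());

    public static void reportRegionInfo(String info) {
        // Per-scan-result messages multiply with region count, so keep them
        // below the default level; the guard avoids building the string at all.
        if (LOG.isLoggable(Level.FINEST)) {
            LOG.finest("Current INFO from scan results = " + info);
        }
    }
}
```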
[jira] [Assigned] (HBASE-6367) List backup masters in ui.
[ https://issues.apache.org/jira/browse/HBASE-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeffrey Zhong reassigned HBASE-6367: Assignee: Jeffrey Zhong > List backup masters in ui. > -- > > Key: HBASE-6367 > URL: https://issues.apache.org/jira/browse/HBASE-6367 > Project: HBase > Issue Type: Improvement >Reporter: Elliott Clark >Assignee: Jeffrey Zhong >Priority: Minor > Labels: noob > > Right now only the active master shows any information on the web ui. It > would be nice to see that there are backup masters waiting. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6367) List backup masters in ui.
[ https://issues.apache.org/jira/browse/HBASE-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506940#comment-13506940 ] Jeffrey Zhong commented on HBASE-6367: -- Starting to work on this one. > List backup masters in ui. > -- > > Key: HBASE-6367 > URL: https://issues.apache.org/jira/browse/HBASE-6367 > Project: HBase > Issue Type: Improvement >Reporter: Elliott Clark >Priority: Minor > Labels: noob > > Right now only the active master shows any information on the web ui. It > would be nice to see that there are backup masters waiting. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7240) Cleanup old snapshots on start
[ https://issues.apache.org/jira/browse/HBASE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506939#comment-13506939 ] Matteo Bertozzi commented on HBASE-7240: +1 on just having the cleanup at startup > Cleanup old snapshots on start > -- > > Key: HBASE-7240 > URL: https://issues.apache.org/jira/browse/HBASE-7240 > Project: HBase > Issue Type: Sub-task > Components: Client, master, regionserver, snapshots, Zookeeper >Affects Versions: hbase-6055 >Reporter: Jesse Yates > Fix For: hbase-6055 > > > If the master is hard stopped (i.e. kill -9), the snapshot handler or > SnapshotManager may not have a chance to clean up after the snapshot, leaving > extraneous files in the working snapshot directory (/hbase/.snapshot/.tmp > directory). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
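The startup cleanup Matteo is +1 on amounts to emptying the working snapshot directory before the master accepts new snapshot requests. A simplified local-filesystem sketch (the real code would go through Hadoop's FileSystem API, and SnapshotTmpCleaner is a hypothetical name):

```java
import java.io.File;

// Hypothetical startup sweep: empty the working snapshot directory
// (e.g. /hbase/.snapshot/.tmp) so files orphaned by a kill -9'd master
// cannot be mistaken for an in-flight snapshot.
public class SnapshotTmpCleaner {
    public static void cleanWorkingDir(File tmpDir) {
        File[] children = tmpDir.listFiles();
        if (children == null) {
            return;  // tmpDir missing or unreadable: nothing to clean
        }
        for (File child : children) {
            if (child.isDirectory()) {
                cleanWorkingDir(child);  // empty subdirectories first
            }
            child.delete();              // then remove the child itself
        }
    }
}
```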
[jira] [Commented] (HBASE-5877) When a query fails because the region has moved, let the regionserver return the new address to the client
[ https://issues.apache.org/jira/browse/HBASE-5877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506935#comment-13506935 ] Jean-Daniel Cryans commented on HBASE-5877: --- Nicolas, Do you think this log message could be removed? {noformat} 12/11/29 15:17:36 INFO client.HConnectionManager$HConnectionImplementation: Region TestTable,0001966229,1354231005211.1bbba78dda968874d2981c322ed3319f. moved from 572ba57e-1cab-4f9c-a071-782e5a1a7184.cs1cloud.internal:60020, updating client location cache. New server: 20590793-0e19-4eb4-b2f6-05de8244f716.cs1cloud.internal:60020 {noformat} Right now I'm running some loading tests and I'm getting walls of text every time a split happens and it's basically the same message repeated hundreds of times. We used to have a similar message before but we removed it since it's pretty spammy (or we set it to DEBUG, can't remember). > When a query fails because the region has moved, let the regionserver return > the new address to the client > -- > > Key: HBASE-5877 > URL: https://issues.apache.org/jira/browse/HBASE-5877 > Project: HBase > Issue Type: Improvement > Components: Client, master, regionserver >Affects Versions: 0.96.0 >Reporter: nkeywal >Assignee: nkeywal >Priority: Minor > Fix For: 0.96.0 > > Attachments: 5877.v12.patch, 5877.v15.patch, 5877-v16.txt, > 5877-v17.txt, 5877-v17.txt, 5877.v18.patch, 5877.v18.patch, 5877.v18.patch, > 5877.v1.patch, 5877.v6.patch > > > This is mainly useful when we do a rolling restart. This will decrease the > load on the master and the network load. > Note that a region is not immediately opened after a close. So: > - it seems preferable to wait before retrying on the other server. An > optimisation would be to have an heuristic depending on when the region was > closed. > - during a rolling restart, the server moves the regions then stops. So we > may have failures when the server is stopped, and this patch won't help. 
> The implementation in the first patch does: > - on the region move, there is an added parameter on the regionserver#close > to say where we are sending the region > - the regionserver keeps a list of what was moved. Each entry is kept 100 > seconds. > - the regionserver sends a specific exception when it receives a query on a > moved region. This exception contains the new address. > - the client analyses the exceptions and updates its cache accordingly... -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
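The four steps in the patch description can be condensed into a toy model: the server answers a query on a moved region with an exception that carries the new address, and the client patches its location cache from it instead of going back to META. Everything below (class names, the hard-coded move) is illustrative, not the actual HBASE-5877 code:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the moved-region protocol; names and the hard-coded move
// are illustrative, not the real client/server classes.
public class RegionMoveDemo {
    // Server side: raised for queries on a recently moved region (the real
    // design keeps such entries for about 100 seconds).
    static class RegionMovedException extends Exception {
        final String region;
        final String newServer;
        RegionMovedException(String region, String newServer) {
            super("Region " + region + " moved to " + newServer);
            this.region = region;
            this.newServer = newServer;
        }
    }

    // Client side: region name -> server address cache.
    final Map<String, String> locationCache = new HashMap<>();

    // Simulated server: pretend region "r1" has just been moved.
    String query(String region) throws RegionMovedException {
        if ("r1".equals(region)) {
            throw new RegionMovedException("r1", "rs2.example.com:60020");
        }
        return "ok";
    }

    // The client updates its cache from the exception instead of re-asking
    // META/the master, then retries against the new address.
    String queryWithRetry(String region) {
        try {
            return query(region);
        } catch (RegionMovedException e) {
            locationCache.put(e.region, e.newServer);
            return "retried against " + e.newServer;
        }
    }
}
```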
[jira] [Commented] (HBASE-7213) Have HLog files for .META. edits only
[ https://issues.apache.org/jira/browse/HBASE-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506923#comment-13506923 ] Devaraj Das commented on HBASE-7213: [~tlipcon], good points there. But I'd like to separate the META SPOF work from the full-fledged multiwal work (for the full multi-wal case, we'd need to fix up things like replication, and those can be skipped from the meta-only design that this jira attempts to do). [~yuzhih...@gmail.com], I guess you are right. In theory, one could read the HLogFactory as the WALFactory... Thoughts? (I am in the process of extending the patch I previously posted to a fully functional one) > Have HLog files for .META. edits only > - > > Key: HBASE-7213 > URL: https://issues.apache.org/jira/browse/HBASE-7213 > Project: HBase > Issue Type: Improvement > Components: master, regionserver >Reporter: Devaraj Das >Assignee: Devaraj Das > Attachments: 7213-in-progress.patch > > > Over on HBASE-6774, there is a discussion on separating out the edits for > .META. regions from the other regions' edits w.r.t where the edits are > written. This jira is to track an implementation of that. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7232) Remove HbaseMapWritable
[ https://issues.apache.org/jira/browse/HBASE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506907#comment-13506907 ] Elliott Clark commented on HBASE-7232: -- * In HBaseObjectWritable is it cleaner to just increment the code (like on line 258) rather than putting Object in the map? * Would having separate implementations of the HFile.FileInfo with different reader methods be worth it? (hfilev1 and hfile <= v2.1 would have the writable. Everything else uses the pb. Could make removing the writable version easier later.) * The HFileWriterV2 change is whitespace-only; is that intended? Some thoughts about code this patch happens to touch: * Seems like most of the CompoundBloomFilter classes belong in io. Worth moving them now? * Should CompoundBloomFilterWriter#cacheOnWrite() be renamed to getCacheOnWrite? > Remove HbaseMapWritable > --- > > Key: HBASE-7232 > URL: https://issues.apache.org/jira/browse/HBASE-7232 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack > Attachments: 7232.txt, 7232.txt, 7232v2.txt > > > It's used by hfile fileinfo only, so we need to convert fileinfo to remove this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7241) [refGuide] Update to Performannce/writing/presplit
[ https://issues.apache.org/jira/browse/HBASE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Doug Meil updated HBASE-7241: - Status: Patch Available (was: Open) > [refGuide] Update to Performannce/writing/presplit > -- > > Key: HBASE-7241 > URL: https://issues.apache.org/jira/browse/HBASE-7241 > Project: HBase > Issue Type: Improvement >Reporter: Doug Meil >Assignee: Doug Meil >Priority: Minor > Attachments: docbkx_hbase_7241.patch > > > Took the pre-split example that was in Performance/Writing/Pre-split and > moved it to the Schema Design/RowKey-PreSplit section that was just created. > Updated the Perf/pre-split section to additionally refer to the new > RowKey-PreSplit section. > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7241) [refGuide] Update to Performannce/writing/presplit
[ https://issues.apache.org/jira/browse/HBASE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Doug Meil updated HBASE-7241: - Resolution: Fixed Status: Resolved (was: Patch Available) > [refGuide] Update to Performannce/writing/presplit > -- > > Key: HBASE-7241 > URL: https://issues.apache.org/jira/browse/HBASE-7241 > Project: HBase > Issue Type: Improvement >Reporter: Doug Meil >Assignee: Doug Meil >Priority: Minor > Attachments: docbkx_hbase_7241.patch > > > Took the pre-split example that was in Performance/Writing/Pre-split and > moved it to the Schema Design/RowKey-PreSplit section that was just created. > Updated the Perf/pre-split section to additionally refer to the new > RowKey-PreSplit section. > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7241) [refGuide] Update to Performannce/writing/presplit
[ https://issues.apache.org/jira/browse/HBASE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Doug Meil updated HBASE-7241: - Attachment: docbkx_hbase_7241.patch > [refGuide] Update to Performannce/writing/presplit > -- > > Key: HBASE-7241 > URL: https://issues.apache.org/jira/browse/HBASE-7241 > Project: HBase > Issue Type: Improvement >Reporter: Doug Meil >Assignee: Doug Meil >Priority: Minor > Attachments: docbkx_hbase_7241.patch > > > Took the pre-split example that was in Performance/Writing/Pre-split and > moved it to the Schema Design/RowKey-PreSplit section that was just created. > Updated the Perf/pre-split section to additionally refer to the new > RowKey-PreSplit section. > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-7241) [refGuide] Update to Performannce/writing/presplit
Doug Meil created HBASE-7241: Summary: [refGuide] Update to Performannce/writing/presplit Key: HBASE-7241 URL: https://issues.apache.org/jira/browse/HBASE-7241 Project: HBase Issue Type: Improvement Reporter: Doug Meil Assignee: Doug Meil Priority: Minor Took the pre-split example that was in Performance/Writing/Pre-split and moved it to the Schema Design/RowKey-PreSplit section that was just created. Updated the Perf/pre-split section to additionally refer to the new RowKey-PreSplit section. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7204) Make hbck ErrorReporter pluggable
[ https://issues.apache.org/jira/browse/HBASE-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506885#comment-13506885 ] Jonathan Hsieh commented on HBASE-7204: --- Thanks for making the changes! Feel free to commit if the test fix is trivial. > Make hbck ErrorReporter pluggable > - > > Key: HBASE-7204 > URL: https://issues.apache.org/jira/browse/HBASE-7204 > Project: HBase > Issue Type: Improvement > Components: hbck >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > Attachments: trunk-7204.patch, trunk-7204_v2.patch > > > Make hbck ErrorReporter pluggable so that it can be replaced dynamically. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7213) Have HLog files for .META. edits only
[ https://issues.apache.org/jira/browse/HBASE-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506884#comment-13506884 ] Ted Yu commented on HBASE-7213: --- We already have HLogFactory in trunk. bq. to lump this with the multi-WAL work Can I interpret the above as saying that multi-WAL work should be done at the same time, if not earlier? Since HLogFactory can hand out the unique instance for .META., it is not far from handing out different instances (for different regions), which is what HBASE-5699 tries to do. > Have HLog files for .META. edits only > - > > Key: HBASE-7213 > URL: https://issues.apache.org/jira/browse/HBASE-7213 > Project: HBase > Issue Type: Improvement > Components: master, regionserver >Reporter: Devaraj Das >Assignee: Devaraj Das > Attachments: 7213-in-progress.patch > > > Over on HBASE-6774, there is a discussion on separating out the edits for > .META. regions from the other regions' edits w.r.t where the edits are > written. This jira is to track an implementation of that. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7215) Put, Delete, Increment, Result, and all HBase M/R classes still implement/use Writable
[ https://issues.apache.org/jira/browse/HBASE-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-7215: - Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to trunk. Thanks for the help and review, Stack. > Put, Delete, Increment, Result, and all HBase M/R classes still implement/use > Writable > -- > > Key: HBASE-7215 > URL: https://issues.apache.org/jira/browse/HBASE-7215 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Blocker > Fix For: 0.96.0 > > Attachments: 7215-v2.txt, 7215v3_mutableresult.txt, 7215v3.txt, > 7215v4.txt, 7215v5.txt, 7215v6.txt, 7215v7.txt, 7215v7.txt, 7251-SKETCH.txt, > MutableResult.java > > > Making blocker as suggested by Stack. > At least the following still use Put/Delete as writables. > * IdentityTableReduce.java > * MultiPut.java > * HRegionServer.checkAndMutate -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7204) Make hbck ErrorReporter pluggable
[ https://issues.apache.org/jira/browse/HBASE-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506877#comment-13506877 ] Jimmy Xiang commented on HBASE-7204: Thanks for the review. I need to do a minor change to fix a test failure. I will do that before I commit. > Make hbck ErrorReporter pluggable > - > > Key: HBASE-7204 > URL: https://issues.apache.org/jira/browse/HBASE-7204 > Project: HBase > Issue Type: Improvement > Components: hbck >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > Attachments: trunk-7204.patch, trunk-7204_v2.patch > > > Make hbck ErrorReporter pluggable so that it can be replaced dynamically. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7215) Put, Delete, Increment, Result, and all HBase M/R classes still implement/use Writable
[ https://issues.apache.org/jira/browse/HBASE-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-7215: - Summary: Put, Delete, Increment, Result, and all HBase M/R classes still implement/use Writable (was: Put, Delete, Increment, and Result still implement Writable) > Put, Delete, Increment, Result, and all HBase M/R classes still implement/use > Writable > -- > > Key: HBASE-7215 > URL: https://issues.apache.org/jira/browse/HBASE-7215 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Blocker > Fix For: 0.96.0 > > Attachments: 7215-v2.txt, 7215v3_mutableresult.txt, 7215v3.txt, > 7215v4.txt, 7215v5.txt, 7215v6.txt, 7215v7.txt, 7215v7.txt, 7251-SKETCH.txt, > MutableResult.java > > > Making blocker as suggested by Stack. > At least the following still use Put/Delete as writables. > * IdentityTableReduce.java > * MultiPut.java > * HRegionServer.checkAndMutate -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7204) Make hbck ErrorReporter pluggable
[ https://issues.apache.org/jira/browse/HBASE-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506873#comment-13506873 ] Jonathan Hsieh commented on HBASE-7204: --- +1 lgtm. > Make hbck ErrorReporter pluggable > - > > Key: HBASE-7204 > URL: https://issues.apache.org/jira/browse/HBASE-7204 > Project: HBase > Issue Type: Improvement > Components: hbck >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > Attachments: trunk-7204.patch, trunk-7204_v2.patch > > > Make hbck ErrorReporter pluggable so that it can be replaced dynamically. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7204) Make hbck ErrorReporter pluggable
[ https://issues.apache.org/jira/browse/HBASE-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7204: --- Release Note: Now hbck runs with ToolRunner, and can accept configurations from the command line. Status: Patch Available (was: Open) Addressed Jon and Ram's comments. Now hbck runs with ToolRunner, and can accept configurations from the command line. > Make hbck ErrorReporter pluggable > - > > Key: HBASE-7204 > URL: https://issues.apache.org/jira/browse/HBASE-7204 > Project: HBase > Issue Type: Improvement > Components: hbck >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > Attachments: trunk-7204.patch, trunk-7204_v2.patch > > > Make hbck ErrorReporter pluggable so that it can be replaced dynamically. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7204) Make hbck ErrorReporter pluggable
[ https://issues.apache.org/jira/browse/HBASE-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7204: --- Attachment: trunk-7204_v2.patch > Make hbck ErrorReporter pluggable > - > > Key: HBASE-7204 > URL: https://issues.apache.org/jira/browse/HBASE-7204 > Project: HBase > Issue Type: Improvement > Components: hbck >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > Attachments: trunk-7204.patch, trunk-7204_v2.patch > > > Make hbck ErrorReporter pluggable so that it can be replaced dynamically. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7055) port HBASE-6371 tier-based compaction from 0.89-fb to trunk - first slice (not configurable by cf or dynamically)
[ https://issues.apache.org/jira/browse/HBASE-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506850#comment-13506850 ] Sergey Shelukhin commented on HBASE-7055: - It doesn't read CF values directly; what it does is encapsulate the logic such as runtime-based defaults (e.g. memstore flush size for min compaction size, throttling default, etc.) and some validation. This logic was already there; now it's just in a separate class. For the tiered one, this also includes getting the requisite config for tiers based on the tier count. Also, as an aside, configs are already mixed inside HStore (see CompoundConfiguration creation in ctor). > port HBASE-6371 tier-based compaction from 0.89-fb to trunk - first slice > (not configurable by cf or dynamically) > - > > Key: HBASE-7055 > URL: https://issues.apache.org/jira/browse/HBASE-7055 > Project: HBase > Issue Type: Task > Components: Compaction >Affects Versions: 0.96.0 >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 0.96.0 > > Attachments: HBASE-6371-squashed.patch, HBASE-6371-v2-squashed.patch, > HBASE-6371-v3-refactor-only-squashed.patch, > HBASE-6371-v4-refactor-only-squashed.patch, > HBASE-6371-v5-refactor-only-squashed.patch, HBASE-7055-v0.patch, > HBASE-7055-v1.patch > > > There's divergence in the code :( > See HBASE-6371 for details. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
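The encapsulation Sergey describes (explicit configuration where present, runtime store-specific defaults otherwise) can be sketched as follows. This is an illustrative stand-in, not the actual CompactionConfiguration class: the Map models Hadoop's Configuration lookup, and the key name is only an example.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: compaction settings come from explicit configuration
// when present, and otherwise fall back to runtime, store-specific state
// (here, the memstore flush size).
public class CompactionConfigSketch {
    private final Map<String, Long> conf;   // stands in for Hadoop Configuration
    private final long memstoreFlushSize;   // runtime, store-specific value

    public CompactionConfigSketch(Map<String, Long> conf, long memstoreFlushSize) {
        this.conf = conf;
        this.memstoreFlushSize = memstoreFlushSize;
    }

    // Minimum compaction size: explicit setting if present, else the
    // store's memstore flush size (a runtime-based default).
    public long minCompactSize() {
        Long v = conf.get("hbase.hstore.compaction.min.size"); // example key
        return v != null ? v : memstoreFlushSize;
    }

    public static void main(String[] args) {
        Map<String, Long> explicit = new HashMap<>();
        explicit.put("hbase.hstore.compaction.min.size", 64L);
        System.out.println(new CompactionConfigSketch(new HashMap<>(), 128L).minCompactSize()); // 128
        System.out.println(new CompactionConfigSketch(explicit, 128L).minCompactSize());        // 64
    }
}
```

A tiered subclass would layer per-tier keys on top of the same resolution step, which is roughly the split being discussed.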
[jira] [Commented] (HBASE-7221) RowKey utility class for rowkey construction
[ https://issues.apache.org/jira/browse/HBASE-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506848#comment-13506848 ] Doug Meil commented on HBASE-7221: -- One more thought on size: then again, I could do what ArrayList does with its overloaded constructor - use that size initially, and then auto-size if needed. But you could still define the exact size if you wanted for performance purposes. That's probably the nicest possible approach. > RowKey utility class for rowkey construction > > > Key: HBASE-7221 > URL: https://issues.apache.org/jira/browse/HBASE-7221 > Project: HBase > Issue Type: Improvement >Reporter: Doug Meil >Assignee: Doug Meil >Priority: Minor > Attachments: HBASE_7221.patch, hbase-common_hbase_7221_2.patch > > > A common question in the dist-lists is how to construct rowkeys, particularly > composite keys. Put/Get/Scan specifies byte[] as the rowkey, but it's up to > you to sensibly populate that byte-array, and that's where things tend to go > off the rails. > The intent of this RowKey utility class isn't meant to add functionality into > Put/Get/Scan, but rather make it simpler for folks to construct said arrays. > Example: > {code} >RowKey key = RowKey.create(RowKey.SIZEOF_MD5_HASH + RowKey.SIZEOF_LONG); >key.addHash(a); >key.add(b); >byte bytes[] = key.getBytes(); > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
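The ArrayList-style approach above (start from a caller-supplied capacity, grow only if an append would overflow) might look roughly like this. This is a sketch: the method names echo the JIRA example, but the class is not the attached patch.

```java
import java.util.Arrays;

// Sketch of a rowkey builder: fixed initial capacity, with ArrayList-style
// growth only when an append would overflow the backing array.
public class RowKeySketch {
    private byte[] buf;
    private int pos;

    public RowKeySketch(int initialCapacity) {
        this.buf = new byte[initialCapacity];
    }

    private void ensure(int extra) {
        if (pos + extra > buf.length) {
            // Grow like ArrayList does; callers who sized correctly never pay this.
            buf = Arrays.copyOf(buf, Math.max(buf.length * 2, pos + extra));
        }
    }

    public RowKeySketch add(long v) {           // append a big-endian long
        ensure(8);
        for (int i = 7; i >= 0; i--) {
            buf[pos++] = (byte) (v >>> (8 * i));
        }
        return this;
    }

    public RowKeySketch add(byte[] b) {         // append raw bytes (e.g. a hash)
        ensure(b.length);
        System.arraycopy(b, 0, buf, pos, b.length);
        pos += b.length;
        return this;
    }

    public byte[] getBytes() {                  // exact-length copy for Put/Get/Scan
        return Arrays.copyOf(buf, pos);
    }

    public void reset() { pos = 0; }            // reuse the backing array

    public static void main(String[] args) {
        // Mirrors the spirit of the JIRA example: hash-sized prefix + long suffix.
        RowKeySketch key = new RowKeySketch(16 + 8);
        key.add(new byte[16]).add(42L);
        System.out.println(key.getBytes().length); // 24
    }
}
```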
[jira] [Commented] (HBASE-5844) Delete the region servers znode after a regions server crash
[ https://issues.apache.org/jira/browse/HBASE-5844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506832#comment-13506832 ] Jean-Daniel Cryans commented on HBASE-5844: --- Encountered another problem that I think I can link to this jira, I was trying to run HBase from trunk without internet access and like in my Sept 25th comment, I get an empty line after start-hbase.sh but now nothing is running. The .log file doesn't show anything after logging ulimit and nothing's in the .out file. After running some bash -x, I was able to figure out that the nohup output was being suppressed. See: {noformat} jdcryans-MBPr:hbase-github jdcryans$ ./bin/start-hbase.sh jdcryans-MBPr:hbase-github jdcryans$ jdcryans-MBPr:hbase-github jdcryans$ bash -x ./bin/start-hbase.sh ... some stuff then + /Users/jdcryans/git/hbase-github/bin/hbase-daemon.sh start master jdcryans-MBPr:hbase-github jdcryans$ bash -x /Users/jdcryans/git/hbase-github/bin/hbase-daemon.sh start master ... 
more stuff + nohup /Users/jdcryans/git/hbase-github/bin/hbase-daemon.sh --config /Users/jdcryans/git/hbase-github/bin/../conf internal_start master jdcryans-MBPr:hbase-github jdcryans$ nohup /Users/jdcryans/git/hbase-github/bin/hbase-daemon.sh --config /Users/jdcryans/git/hbase-github/bin/../conf internal_start master appending output to nohup.out {noformat} So now I see that it's writing to nohup.out, which in turn tells me what really happened: {noformat} Caused by: java.lang.ClassNotFoundException: org.apache.zookeeper.KeeperException at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) {noformat} Reproing can be done by physically deleting any jar listed in target/cached_classpath.txt. In my case I think the jar wasn't available because I had no internet connection. I wonder what other errors it could hide like this. > Delete the region servers znode after a regions server crash > > > Key: HBASE-5844 > URL: https://issues.apache.org/jira/browse/HBASE-5844 > Project: HBase > Issue Type: Improvement > Components: regionserver, scripts >Affects Versions: 0.96.0 >Reporter: nkeywal >Assignee: nkeywal > Fix For: 0.96.0 > > Attachments: 5844.v1.patch, 5844.v2.patch, 5844.v3.patch, > 5844.v3.patch, 5844.v4.patch > > > today, if the regions server crashes, its znode is not deleted in ZooKeeper. > So the recovery process will stop only after a timeout, usually 30s. > By deleting the znode in start script, we remove this delay and the recovery > starts immediately. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
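A quick guard against this failure mode, a jar listed in target/cached_classpath.txt that no longer exists on disk, is easy to write as a standalone check. The sketch below assumes the file holds a single colon-separated classpath string, which is how the launcher consumes it; the class itself is illustrative, not part of HBase.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// Sketch: report classpath entries that do not exist on disk, since a
// missing jar otherwise surfaces only as a ClassNotFoundException buried
// in nohup.out.
public class ClasspathCheck {
    public static List<String> missingEntries(String classpath) {
        List<String> missing = new ArrayList<>();
        for (String entry : classpath.split(":")) {
            if (!entry.isEmpty() && !Files.exists(Paths.get(entry))) {
                missing.add(entry);
            }
        }
        return missing;
    }

    public static void main(String[] args) throws IOException {
        Path f = Paths.get(args.length > 0 ? args[0] : "target/cached_classpath.txt");
        if (!Files.exists(f)) {
            System.out.println("no classpath file at " + f);
            return;
        }
        String cp = Files.readString(f).trim();
        missingEntries(cp).forEach(e -> System.out.println("MISSING: " + e));
    }
}
```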
[jira] [Commented] (HBASE-7221) RowKey utility class for rowkey construction
[ https://issues.apache.org/jira/browse/HBASE-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506812#comment-13506812 ] Doug Meil commented on HBASE-7221: -- re: "Builder" Yeah, I really wasn't going for a builder pattern. Elliott had a concern about the name "RowKey" (I must admit I'm still partial to it because there isn't a class with that name anywhere in the codebase). I wasn't really aiming for a builder pattern in the first place because I didn't want to necessarily force people to destroy and re-create the RowKey/Builder for each rowkey they create - that's why the reset method is there. The only thing that would have to get reset was the backing byte array. re: "fixed size" I wanted any particular instance to have a fixed size so that the backing byte-array didn't have resize like an ArrayList (and wind up burning a lot of byte-arrays in the process). So it's "easier" to create rowkeys than without the utility, but not without required thought. If your table had multiple length keys, there's nothing wrong with creating 2 different instances, one for each length. That's where I was coming from. re: "formatting" I'll fix that. Doh! Thanks! > RowKey utility class for rowkey construction > > > Key: HBASE-7221 > URL: https://issues.apache.org/jira/browse/HBASE-7221 > Project: HBase > Issue Type: Improvement >Reporter: Doug Meil >Assignee: Doug Meil >Priority: Minor > Attachments: HBASE_7221.patch, hbase-common_hbase_7221_2.patch > > > A common question in the dist-lists is how to construct rowkeys, particularly > composite keys. Put/Get/Scan specifies byte[] as the rowkey, but it's up to > you to sensibly populate that byte-array, and that's where things tend to go > off the rails. > The intent of this RowKey utility class isn't meant to add functionality into > Put/Get/Scan, but rather make it simpler for folks to construct said arrays. 
> Example: > {code} >RowKey key = RowKey.create(RowKey.SIZEOF_MD5_HASH + RowKey.SIZEOF_LONG); >key.addHash(a); >key.add(b); >byte bytes[] = key.getBytes(); > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7240) Cleanup old snapshots on start
[ https://issues.apache.org/jira/browse/HBASE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506808#comment-13506808 ] Jesse Yates commented on HBASE-7240: We could create a background chore that periodically checks this directory against the running snapshots, but it's likely a rare occurrence that we fail a snapshot and can't clean up after it. We probably just need to add a little cleanup mechanism on startup (we could just drop this into the SnapshotManager, as it also plays well into a possible future goal of recovering snapshot attempts between master failures). > Cleanup old snapshots on start > -- > > Key: HBASE-7240 > URL: https://issues.apache.org/jira/browse/HBASE-7240 > Project: HBase > Issue Type: Sub-task > Components: Client, master, regionserver, snapshots, Zookeeper >Affects Versions: hbase-6055 >Reporter: Jesse Yates > Fix For: hbase-6055 > > > If the master is hard stopped (i.e. kill -9), the snapshot handler or > SnapshotManager may not have a chance to clean up after the snapshot, leaving > extraneous files in the working snapshot directory (/hbase/.snapshot/.tmp > directory). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
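The startup cleanup suggested above (delete whatever a failed attempt left under the working snapshot directory before accepting new snapshots) is simple in outline. Below is a sketch with plain java.nio; the real code would go through the Hadoop FileSystem API rather than the local filesystem.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Sketch of startup cleanup: remove anything left under the working
// snapshot directory (e.g. /hbase/.snapshot/.tmp) by a snapshot attempt
// that died with the master.
public class SnapshotTmpCleaner {
    public static void cleanWorkingDir(Path tmpDir) throws IOException {
        if (!Files.exists(tmpDir)) {
            return; // nothing left over
        }
        // Walk depth-first (reverse order) so files go before their parents;
        // keep the working directory itself for the next snapshot attempt.
        try (Stream<Path> walk = Files.walk(tmpDir)) {
            walk.sorted(Comparator.reverseOrder())
                .filter(p -> !p.equals(tmpDir))
                .forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
        }
    }
}
```

Dropping a call like this into the manager's constructor would also be a natural hook for future recovery of in-flight attempts, as the comment notes.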
[jira] [Created] (HBASE-7240) Cleanup old snapshots on start
Jesse Yates created HBASE-7240: -- Summary: Cleanup old snapshots on start Key: HBASE-7240 URL: https://issues.apache.org/jira/browse/HBASE-7240 Project: HBase Issue Type: Sub-task Affects Versions: hbase-6055 Reporter: Jesse Yates Fix For: hbase-6055 If the master is hard stopped (i.e. kill -9), the snapshot handler or SnapshotManager may not have a chance to cleanup after the snapshot, leaving extraneous files in the working snapshot directory (/hbase/.snapshot/.tmp directory). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5926) Delete the master znode after a master crash
[ https://issues.apache.org/jira/browse/HBASE-5926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506801#comment-13506801 ] Jean-Daniel Cryans commented on HBASE-5926: --- This jira has the odd side-effect of printing out a lot of garbage when running in standalone and killing it with -9, gist of it being: {noformat} 2012-11-29 13:08:27,227 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master 2012-11-29 13:08:27,227 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper getData failed after 0 retries 2012-11-29 13:08:27,227 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: clean znode for master Unable to get data of znode /hbase/master org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:291) at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataNoWatch(ZKUtil.java:562) at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.deleteIfEquals(MasterAddressTracker.java:168) at org.apache.hadoop.hbase.ZNodeClearer.clear(ZNodeClearer.java:150) at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:110) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:78) at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2298) {noformat} Basically the znode cleaner fails hard because ZK is offline. I was confused to see more logs being printed out after running the kill. 
> Delete the master znode after a master crash > > > Key: HBASE-5926 > URL: https://issues.apache.org/jira/browse/HBASE-5926 > Project: HBase > Issue Type: Improvement > Components: master, scripts >Affects Versions: 0.96.0 >Reporter: nkeywal >Assignee: nkeywal >Priority: Minor > Fix For: 0.96.0 > > Attachments: 5926.v10.patch, 5926.v11.patch, 5926.v13.patch, > 5926.v14.patch, 5926.v6.patch, 5926.v8.patch, 5926.v9.patch > > > This is the continuation of the work done in HBASE-5844. > But we can't apply exactly the same strategy: for the region server, there is > a znode per region server, while for the master & backup master there is a > single znode for both. > So if we apply the same strategy as for a regionserver, we may have this > scenario: > 1) Master starts > 2) Backup master starts > 3) Master dies > 4) ZK detects it > 5) Backup master receives the update from ZK > 6) Backup master creates the new master node and become the main master > 7) Previous master script continues > 8) Previous master script deletes the master node in ZK > 9) => issue: we deleted the node just created by the new master > This should not happen often (usually the znode will be deleted soon enough), > but it can happen. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
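The hazard in steps 1-9 is that step 8 deletes unconditionally. The fix is a compare-and-delete: remove the znode only if it still names the dead master, so the node created by the newly elected master in step 6 survives. With a plain map standing in for ZooKeeper (the real code checks the znode's content, and could also use its version), the idea is:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of compare-and-delete znode cleanup: the dying master's script
// removes /hbase/master only if the node still holds that master's name.
public class ZNodeClearerSketch {
    public static final ConcurrentMap<String, String> zk = new ConcurrentHashMap<>();

    public static boolean deleteIfEquals(String znode, String expectedContent) {
        // remove(key, value) deletes only when the stored value matches,
        // atomically - the analogue of a conditional znode delete.
        return zk.remove(znode, expectedContent);
    }

    public static void main(String[] args) {
        zk.put("/hbase/master", "new-master:60000");   // step 6: backup took over
        // Step 8: the old master's cleanup runs, but the content no longer matches,
        // so the new master's node is left alone.
        System.out.println(deleteIfEquals("/hbase/master", "old-master:60000")); // false
    }
}
```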
[jira] [Updated] (HBASE-7232) Remove HbaseMapWritable
[ https://issues.apache.org/jira/browse/HBASE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-7232: - Status: Patch Available (was: Open) > Remove HbaseMapWritable > --- > > Key: HBASE-7232 > URL: https://issues.apache.org/jira/browse/HBASE-7232 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack > Attachments: 7232.txt, 7232.txt, 7232v2.txt > > > It's used by hfile fileinfo only, so we need to convert fileinfo to remove this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7232) Remove HbaseMapWritable
[ https://issues.apache.org/jira/browse/HBASE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-7232: - Status: Open (was: Patch Available) > Remove HbaseMapWritable > --- > > Key: HBASE-7232 > URL: https://issues.apache.org/jira/browse/HBASE-7232 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack > Attachments: 7232.txt, 7232.txt, 7232v2.txt > > > It's used by hfile fileinfo only, so we need to convert fileinfo to remove this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7232) Remove HbaseMapWritable
[ https://issues.apache.org/jira/browse/HBASE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-7232: - Attachment: 7232v2.txt Cleaned out more Writables from the hfile package. > Remove HbaseMapWritable > --- > > Key: HBASE-7232 > URL: https://issues.apache.org/jira/browse/HBASE-7232 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack > Attachments: 7232.txt, 7232.txt, 7232v2.txt > > > It's used by hfile fileinfo only, so we need to convert fileinfo to remove this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7055) port HBASE-6371 tier-based compaction from 0.89-fb to trunk - first slice (not configurable by cf or dynamically)
[ https://issues.apache.org/jira/browse/HBASE-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506795#comment-13506795 ] Andrew Purtell commented on HBASE-7055: --- bq. CompactionConfiguration is base compaction config, it is not just xml-based, it uses runtime store-specific settings. TierBased one adds more on top of that; it seems that Tier-stuff doesn't belong to the main CompactionConfiguration; and main CompactionConfiguration is not as simple as generic Configuration. It's also Store (e.g. region/cf) specific. Up to now we've had two distinct and cleanly separable configuration mechanisms. The heavyweight Configuration which carries global, and currently static configuration, and the table and column descriptors that can be CF specific by definition and updated at runtime without requiring a process restart. Pardon if I've misunderstood but mixing these would blur static and dynamic configuration and that doesn't seem a good design option. > port HBASE-6371 tier-based compaction from 0.89-fb to trunk - first slice > (not configurable by cf or dynamically) > - > > Key: HBASE-7055 > URL: https://issues.apache.org/jira/browse/HBASE-7055 > Project: HBase > Issue Type: Task > Components: Compaction >Affects Versions: 0.96.0 >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 0.96.0 > > Attachments: HBASE-6371-squashed.patch, HBASE-6371-v2-squashed.patch, > HBASE-6371-v3-refactor-only-squashed.patch, > HBASE-6371-v4-refactor-only-squashed.patch, > HBASE-6371-v5-refactor-only-squashed.patch, HBASE-7055-v0.patch, > HBASE-7055-v1.patch > > > There's divergence in the code :( > See HBASE-6371 for details. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7236) add per-table/per-cf compaction configuration via metadata
[ https://issues.apache.org/jira/browse/HBASE-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506761#comment-13506761 ] Andrew Purtell commented on HBASE-7236: --- Encode compaction selection as JSON in the attribute I'd say. > add per-table/per-cf compaction configuration via metadata > -- > > Key: HBASE-7236 > URL: https://issues.apache.org/jira/browse/HBASE-7236 > Project: HBase > Issue Type: New Feature > Components: Compaction >Affects Versions: 0.96.0 >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > Regardless of the compaction policy, it makes sense to have separate > configuration for compactions for different tables and column families, as > their access patterns and workloads can be different. In particular, for > tiered compactions that are being ported from 0.89-fb branch it is necessary > to have, to use it properly. > We might want to add support for compaction configuration via metadata on > table/cf. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
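If the compaction selection were serialized as JSON into a table or column descriptor attribute, as Andrew suggests, the value might look something like the fragment below. The keys and structure are purely hypothetical; no such schema exists in HBase at this point.

```json
{
  "compaction.policy": "tiered",
  "compaction.tiers": [
    { "maxAgeHours": 1,  "ratio": 1.2 },
    { "maxAgeHours": 24, "ratio": 0.5 }
  ]
}
```

Keeping the whole selection in one attribute keeps the descriptor readable and lets the per-CF value override the global Configuration wholesale instead of key by key.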
[jira] [Commented] (HBASE-7124) typo in pom.xml with "exlude", no definition of "test.exclude.pattern"
[ https://issues.apache.org/jira/browse/HBASE-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506755#comment-13506755 ] Jesse Yates commented on HBASE-7124: Looks good to me. I'll commit today, unless there are any objections. > typo in pom.xml with "exlude", no definition of "test.exclude.pattern" > -- > > Key: HBASE-7124 > URL: https://issues.apache.org/jira/browse/HBASE-7124 > Project: HBase > Issue Type: Bug >Affects Versions: 0.94.0 >Reporter: Li Ping Zhang >Assignee: Li Ping Zhang >Priority: Minor > Labels: patch > Attachments: HBASE-7124-0.94.patch, HBASE-7124-v1.patch > > Original Estimate: 4h > Remaining Estimate: 4h > > There is a typo in pom.xml with "exlude", and there is no definition of > "test.exclude.pattern". -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7239) Verify protobuf serialization is correctly chunking upon read to avoid direct memory OOMs
[ https://issues.apache.org/jira/browse/HBASE-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-7239: - Priority: Critical (was: Major) > Verify protobuf serialization is correctly chunking upon read to avoid direct > memory OOMs > - > > Key: HBASE-7239 > URL: https://issues.apache.org/jira/browse/HBASE-7239 > Project: HBase > Issue Type: Sub-task >Reporter: Lars Hofhansl >Priority: Critical > Fix For: 0.96.0 > > > Result.readFields() used to read from the input stream in 8k chunks to avoid > OOM issues with direct memory. > (Reading variable-sized chunks into direct memory prevents the JVM from > reusing the allocated direct memory, and direct memory is only collected > during full GCs.) > This is just to verify that protobuf parseFrom-type methods do the right thing as > well so that we do not reintroduce this problem. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7238) Size based scan metric broken by protobufs
[ https://issues.apache.org/jira/browse/HBASE-7238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-7238: - Priority: Critical (was: Major) > Size based scan metric broken by protobufs > -- > > Key: HBASE-7238 > URL: https://issues.apache.org/jira/browse/HBASE-7238 > Project: HBase > Issue Type: Sub-task >Reporter: Lars Hofhansl >Priority: Critical > Fix For: 0.96.0 > > > See ScannerCallable. HBASE-7215 comments that portion, but it did not work > before, because Results.bytes is no longer used with protobufs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7215) Put, Delete, Increment, and Result still implement Writable
[ https://issues.apache.org/jira/browse/HBASE-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506739#comment-13506739 ] Lars Hofhansl commented on HBASE-7215: -- Filed two sub tasks. OK will commit in the next hour without an extra RB review unless I hear objections. > Put, Delete, Increment, and Result still implement Writable > --- > > Key: HBASE-7215 > URL: https://issues.apache.org/jira/browse/HBASE-7215 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Blocker > Fix For: 0.96.0 > > Attachments: 7215-v2.txt, 7215v3_mutableresult.txt, 7215v3.txt, > 7215v4.txt, 7215v5.txt, 7215v6.txt, 7215v7.txt, 7215v7.txt, 7251-SKETCH.txt, > MutableResult.java > > > Making blocker as suggested by Stack. > At least the following still use Put/Delete as writables. > * IdentityTableReduce.java > * MultiPut.java > * HRegionServer.checkAndMutate -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-7239) Verify protobuf serialization is correctly chunking upon read to avoid direct memory OOMs
Lars Hofhansl created HBASE-7239: Summary: Verify protobuf serialization is correctly chunking upon read to avoid direct memory OOMs Key: HBASE-7239 URL: https://issues.apache.org/jira/browse/HBASE-7239 Project: HBase Issue Type: Sub-task Reporter: Lars Hofhansl Result.readFields() used to read from the input stream in 8k chunks to avoid OOM issues with direct memory. (Reading variable sized chunks into direct memory prevents the JVM from reusing the allocated direct memory, and direct memory is only collected during full GCs.) This is just to verify protobufs' parseFrom type methods do the right thing as well, so that we do not reintroduce this problem.
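The 8k-chunked read described above can be sketched as follows. This is a hypothetical illustration of the pattern (the `ChunkedRead` class and `readFully` helper are assumptions for this sketch, not the actual `Result.readFields()` code): a fixed-size, reusable buffer is used for every read so no single allocation tracks the (arbitrary) payload size.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

// Sketch only: names and structure are illustrative, not HBase code.
public class ChunkedRead {
  static final int CHUNK = 8 * 1024; // 8k, as in the old Result.readFields()

  static byte[] readFully(InputStream in, int length) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream(length);
    byte[] buf = new byte[CHUNK];          // fixed-size, reusable buffer
    int remaining = length;
    while (remaining > 0) {
      int n = in.read(buf, 0, Math.min(CHUNK, remaining));
      if (n < 0) throw new IOException("EOF with " + remaining + " bytes left");
      out.write(buf, 0, n);
      remaining -= n;
    }
    return out.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    byte[] payload = new byte[20_000];     // larger than one chunk
    for (int i = 0; i < payload.length; i++) payload[i] = (byte) i;
    byte[] copy = readFully(new ByteArrayInputStream(payload), payload.length);
    System.out.println(Arrays.equals(copy, payload)); // prints true
  }
}
```

Because the loop always reads into the same 8k buffer, the JVM never has to allocate a payload-sized chunk of direct memory per call, which is the reuse property the comment describes.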
[jira] [Created] (HBASE-7238) Size based scan metric broken by protobufs
Lars Hofhansl created HBASE-7238: Summary: Size based scan metric broken by protobufs Key: HBASE-7238 URL: https://issues.apache.org/jira/browse/HBASE-7238 Project: HBase Issue Type: Sub-task Reporter: Lars Hofhansl Fix For: 0.96.0 See ScannerCallable. HBASE-7215 comments that portion, but it did not work before, because Results.bytes is no longer used with protobufs.
[jira] [Commented] (HBASE-7215) Put, Delete, Increment, and Result still implement Writable
[ https://issues.apache.org/jira/browse/HBASE-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506733#comment-13506733 ] stack commented on HBASE-7215: -- Make new issues for the two probs above?
[jira] [Commented] (HBASE-7213) Have HLog files for .META. edits only
[ https://issues.apache.org/jira/browse/HBASE-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506730#comment-13506730 ] Todd Lipcon commented on HBASE-7213: I agree it seems to make sense to lump this with the multi-WAL work. Perhaps an interface like "WALFactory" or "WALProvider", which, given a region name, gives back a WAL instance? The basic implementation would always provide the single WAL. Then, we could add the feature that returns a different WAL for META alone. More complex implementations could choose to give different tenants of a cluster separate WALs, etc. > Have HLog files for .META. edits only > - > > Key: HBASE-7213 > URL: https://issues.apache.org/jira/browse/HBASE-7213 > Project: HBase > Issue Type: Improvement > Components: master, regionserver >Reporter: Devaraj Das >Assignee: Devaraj Das > Attachments: 7213-in-progress.patch > > > Over on HBASE-6774, there is a discussion on separating out the edits for > .META. regions from the other regions' edits w.r.t where the edits are > written. This jira is to track an implementation of that.
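The provider idea in the comment above might look roughly like the following sketch. All type names here (`WAL`, `WALProvider`, the concrete implementations) are assumptions chosen to illustrate the shape of the interface, not actual HBase APIs:

```java
// Minimal sketch of a per-region WAL factory, per the comment above.
interface WAL {
  String name();
}

interface WALProvider {
  WAL getWAL(byte[] regionName);
}

// Basic implementation: every region shares the single WAL.
class SingleWALProvider implements WALProvider {
  private final WAL wal = () -> "default-wal";
  public WAL getWAL(byte[] regionName) { return wal; }
}

// META-aware implementation: .META. regions get their own WAL.
class MetaSeparatingWALProvider implements WALProvider {
  private final WAL metaWal = () -> "meta-wal";
  private final WAL userWal = () -> "default-wal";
  public WAL getWAL(byte[] regionName) {
    return new String(regionName).startsWith(".META.") ? metaWal : userWal;
  }
}

public class WALProviderDemo {
  public static void main(String[] args) {
    WALProvider provider = new MetaSeparatingWALProvider();
    System.out.println(provider.getWAL(".META.,,1".getBytes()).name());    // meta-wal
    System.out.println(provider.getWAL("usertable,,1".getBytes()).name()); // default-wal
  }
}
```

The multi-tenant variant mentioned in the comment would just be another `WALProvider` implementation keyed on whatever tenant identifier the deployment uses.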
[jira] [Commented] (HBASE-7215) Put, Delete, Increment, and Result still implement Writable
[ https://issues.apache.org/jira/browse/HBASE-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506721#comment-13506721 ] stack commented on HBASE-7215: -- +1 on commit
[jira] [Commented] (HBASE-7215) Put, Delete, Increment, and Result still implement Writable
[ https://issues.apache.org/jira/browse/HBASE-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506720#comment-13506720 ] Lars Hofhansl commented on HBASE-7215: -- That looks pretty good! Should probably still upload to RB for easier review. Two sticky points I know about: # Result deserialization used to do 8k chunked reading from the input stream to avoid OOM'ing on the JVM's direct memory. I have no idea how protobufs reads from the stream when deserializing, so we may get that problem back. # The size metric in ScannerCallable was broken when protobufs were introduced, because that just measured the size of the bytes stream (which is no longer filled with protobufs). This patch just comments that part, because it was broken anyway.
[jira] [Commented] (HBASE-7204) Make hbck ErrorReporter pluggable
[ https://issues.apache.org/jira/browse/HBASE-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506711#comment-13506711 ] Jimmy Xiang commented on HBASE-7204: Okay. Let me convert it to Tool and post a new patch. > Make hbck ErrorReporter pluggable > - > > Key: HBASE-7204 > URL: https://issues.apache.org/jira/browse/HBASE-7204 > Project: HBase > Issue Type: Improvement > Components: hbck >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > Attachments: trunk-7204.patch > > > Make hbck ErrorReporter pluggable so that it can be replaced dynamically.
[jira] [Updated] (HBASE-7237) system metadata for tables/cfs needs to be validated on the master
[ https://issues.apache.org/jira/browse/HBASE-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-7237: Affects Version/s: 0.96.0 > system metadata for tables/cfs needs to be validated on the master > -- > > Key: HBASE-7237 > URL: https://issues.apache.org/jira/browse/HBASE-7237 > Project: HBase > Issue Type: Improvement >Affects Versions: 0.96.0 >Reporter: Sergey Shelukhin >Priority: Minor > > shell currently validates whatever metadata user provides as argument to > alter, however while looking at some other issue I noticed that user and > system metadata is stored in the same dictionary in the descriptor, so shell > validation is easy to bypass by setting a "user" metadata parameter with the > same name as the system parameter. > E.g. I just set MAX_FILESIZE to "moo" via CONFIG. > This can be fixed in the shell, however the general problem I think is that > system configuration should be validated server-side (e.g. on the master), > not just on the client.
[jira] [Updated] (HBASE-7237) system metadata for tables/cfs needs to be validated on the master
[ https://issues.apache.org/jira/browse/HBASE-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-7237: Description: shell currently validates whatever metadata user provides as argument to alter, however while looking at some other issue I noticed that user and system metadata is stored in the same dictionary in the descriptor, so shell validation is easy to bypass by setting a "user" metadata parameter with the same name as the system parameter. E.g. I just set MAX_FILESIZE to "moo" via CONFIG. This can be fixed in the shell, however the general problem I think is that system configuration should be validated server-side (e.g. on the master), not just on the client. was: shell currently validates what used provides with alter, however while looking at some other issue I noticed that user and system metadata is stored in the same dictionary on the server, so it is easy to bypass by setting a "user" metadata parameter with the same name as the system parameter. E.g. I just set MAX_FILESIZE to "moo" via CONFIG. This can be fixed in the shell, however the general problem I think is that system configuration should be validated server-side (e.g. on the master), not just on the client.
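A minimal version of the server-side check proposed in the description could look like the sketch below. The class and method names are hypothetical, not actual HBase APIs; it simply shows the master rejecting a descriptor whose system key (here MAX_FILESIZE) holds a non-numeric value, no matter which client path set it:

```java
import java.util.Map;

// Hypothetical sketch: validate system keys in a descriptor's metadata map.
public class DescriptorValidator {
  static void validate(Map<String, String> metadata) {
    String v = metadata.get("MAX_FILESIZE");
    if (v != null) {
      try {
        Long.parseLong(v); // system parameter must be a numeric size
      } catch (NumberFormatException e) {
        throw new IllegalArgumentException(
            "MAX_FILESIZE must be a long, got: " + v);
      }
    }
  }

  public static void main(String[] args) {
    validate(Map.of("MAX_FILESIZE", "1073741824")); // numeric: passes silently
    try {
      validate(Map.of("MAX_FILESIZE", "moo"));      // the "moo" case above
      System.out.println("accepted (bug)");
    } catch (IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```

Because the check runs where the descriptor is actually applied, it catches values smuggled in as "user" metadata under a system key, which is the bypass the description demonstrates.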
[jira] [Updated] (HBASE-7204) Make hbck ErrorReporter pluggable
[ https://issues.apache.org/jira/browse/HBASE-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7204: --- Status: Open (was: Patch Available)
[jira] [Created] (HBASE-7237) system metadata for tables/cfs needs to be validated on the master
Sergey Shelukhin created HBASE-7237: --- Summary: system metadata for tables/cfs needs to be validated on the master Key: HBASE-7237 URL: https://issues.apache.org/jira/browse/HBASE-7237 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Priority: Minor shell currently validates what the user provides with alter, however while looking at some other issue I noticed that user and system metadata is stored in the same dictionary on the server, so it is easy to bypass by setting a "user" metadata parameter with the same name as the system parameter. E.g. I just set MAX_FILESIZE to "moo" via CONFIG. This can be fixed in the shell, however the general problem I think is that system configuration should be validated server-side (e.g. on the master), not just on the client.
[jira] [Commented] (HBASE-4709) Hadoop metrics2 setup in test MiniDFSClusters spewing JMX errors
[ https://issues.apache.org/jira/browse/HBASE-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506707#comment-13506707 ] Elliott Clark commented on HBASE-4709: -- My thought was putting the defaults in the log4j conf + in both of the MetricsAssertHelpers. > Hadoop metrics2 setup in test MiniDFSClusters spewing JMX errors > > > Key: HBASE-4709 > URL: https://issues.apache.org/jira/browse/HBASE-4709 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 0.92.0, 0.94.0, 0.94.1 >Reporter: Gary Helmling >Priority: Minor > Fix For: 0.96.0 > > Attachments: 4709_workaround.v1.patch > > > Since switching over HBase to build with Hadoop 0.20.205.0, we've been > getting a lot of metrics related errors in the log files for tests: > {noformat} > 2011-10-30 22:00:22,858 INFO [main] log.Slf4jLog(67): jetty-6.1.26 > 2011-10-30 22:00:22,871 INFO [main] log.Slf4jLog(67): Extract > jar:file:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-core/0.20.205.0/hadoop-core-0.20.205.0.jar!/webapps/datanode > to /tmp/Jetty_localhost_55751_datanode.kw16hy/webapp > 2011-10-30 22:00:23,048 INFO [main] log.Slf4jLog(67): Started > SelectChannelConnector@localhost:55751 > Starting DataNode 1 with dfs.data.dir: > /home/jenkins/jenkins-slave/workspace/HBase-TRUNK/trunk/target/test-data/7ba65a16-03ad-4624-b769-57405945ef58/dfscluster_3775fc23-1b51-4966-8133-205564bae762/dfs/data/data3,/home/jenkins/jenkins-slave/workspace/HBase-TRUNK/trunk/target/test-data/7ba65a16-03ad-4624-b769-57405945ef58/dfscluster_3775fc23-1b51-4966-8133-205564bae762/dfs/data/data4 > 2011-10-30 22:00:23,237 WARN [main] impl.MetricsSystemImpl(137): Metrics > system not started: Cannot locate configuration: tried > hadoop-metrics2-datanode.properties, hadoop-metrics2.properties > 2011-10-30 22:00:23,237 WARN [main] util.MBeans(59): > Hadoop:service=DataNode,name=MetricsSystem,sub=Control > javax.management.InstanceAlreadyExistsException: MXBean already registered 
> with name Hadoop:service=NameNode,name=MetricsSystem,sub=Control > at > com.sun.jmx.mbeanserver.MXBeanLookup.addReference(MXBeanLookup.java:120) > at > com.sun.jmx.mbeanserver.MXBeanSupport.register(MXBeanSupport.java:143) > at > com.sun.jmx.mbeanserver.MBeanSupport.preRegister2(MBeanSupport.java:183) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:941) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312) > at > com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482) > at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:56) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.initSystemMBean(MetricsSystemImpl.java:500) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:140) > at > org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:40) > at > org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1483) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1459) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:417) > at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:280) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:349) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:518) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:474) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:461) > {noformat} > This seems to be due to errors initializing the 
new hadoop metrics2 code by > default, when running in a mini cluster. The errors themselves seem to be > harmless -- they're not breaking any tests -- but we should figure out what > configuration we need to eliminate them.
[jira] [Commented] (HBASE-7215) Put, Delete, Increment, and Result still implement Writable
[ https://issues.apache.org/jira/browse/HBASE-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506704#comment-13506704 ] Hadoop QA commented on HBASE-7215: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12555249/7215v7.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 12 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 99 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 25 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3420//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3420//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3420//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3420//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3420//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3420//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3420//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3420//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3420//console This message is automatically generated.
[jira] [Commented] (HBASE-7204) Make hbck ErrorReporter pluggable
[ https://issues.apache.org/jira/browse/HBASE-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506700#comment-13506700 ] Jonathan Hsieh commented on HBASE-7204: --- I'm -0 on the current approach (not going to block but don't like), +1 if tool is used. Using Tool is really simple -- just rename main to run and replace main with the equivalent of this.
{code}
public static void main(String[] args) throws Exception {
  int ret = ToolRunner.run(new LoadIncrementalHFiles(HBaseConfiguration.create()), args);
  System.exit(ret);
}
{code}
[jira] [Commented] (HBASE-7204) Make hbck ErrorReporter pluggable
[ https://issues.apache.org/jira/browse/HBASE-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506695#comment-13506695 ] Jimmy Xiang commented on HBASE-7204: Cool. Can we handle that in a separate jira so that we can have this capability now (overriding this conf from command line for hbck)?