I have been looking at the code in HBase, but I don't really understand why this error happens. Why can't I load those keys into HBase?
2014-04-30 17:57 GMT+02:00 Guillermo Ortiz <[email protected]>:
> I'm using HBase with MapReduce to load a lot of data, so I have decided to
> do it with bulk load.
>
> I parse my keys with SHA1, but when I try to load them, I get this
> exception.
>
> java.io.IOException: Added a key not lexically larger than previous
> key=\x00(6e9e59f36a7ec2ac54635b2d353e53e677839046\x01l\x00\x00\x01E\xB3>\xC9\xC7\x0E,
> lastkey=\x00(b313a9f1f57c8a07c81dc3221c6151cf3637506a\x01l\x00\x00\x01E\xAE\x18k\x87\x0E
>         at org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.checkKey(AbstractHFileWriter.java:207)
>         at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:324)
>         at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:289)
>         at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:1206)
>         at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat$1.write(HFileOutputFormat.java:168)
>         at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat$1.write(HFileOutputFormat.java:124)
>         at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:551)
>         at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
>
> I work with HBase 0.94.6. I have been looking into whether I should define a
> reducer, since I have defined none. I have read something about
> KeyValueSortReducer, but I don't know if there's something that extends
> TableReducer or if I'm looking at this the wrong way.
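For reference, here is a minimal sketch of the kind of driver wiring under discussion, written against the HBase 0.94 API. The table name "mytable", the qualifier "q", and the input/output paths are assumptions for illustration; the column family "l" is taken from the keys in the stack trace. The relevant point is that HFileOutputFormat.configureIncrementalLoad() installs KeyValueSortReducer and TotalOrderPartitioner for you, so that KeyValues reach the HFile writer in sorted order:

import java.io.IOException;
import java.security.MessageDigest;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadSketch {

    // Hypothetical mapper mirroring the setup described above: the row key is
    // the hex-encoded SHA-1 of each input line (a 40-character string, like
    // the keys visible in the exception message).
    public static class Sha1Mapper
            extends Mapper<LongWritable, Text, ImmutableBytesWritable, KeyValue> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            try {
                byte[] digest = MessageDigest.getInstance("SHA-1")
                        .digest(Bytes.toBytes(line.toString()));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) hex.append(String.format("%02x", b));
                byte[] row = Bytes.toBytes(hex.toString());
                // Emit one KeyValue per line into family "l".
                KeyValue kv = new KeyValue(row, Bytes.toBytes("l"),
                        Bytes.toBytes("q"), Bytes.toBytes(line.toString()));
                ctx.write(new ImmutableBytesWritable(row), kv);
            } catch (java.security.NoSuchAlgorithmException e) {
                throw new IOException(e);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "sha1-bulk-load");
        job.setJarByClass(BulkLoadSketch.class);
        job.setMapperClass(Sha1Mapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(KeyValue.class);
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // configureIncrementalLoad() sets the output format to
        // HFileOutputFormat, installs KeyValueSortReducer as the reducer,
        // and configures TotalOrderPartitioner from the table's current
        // region boundaries.
        HTable table = new HTable(conf, "mytable");
        HFileOutputFormat.configureIncrementalLoad(job, table);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

After such a job finishes, the HFiles written under the output path would still need to be moved into the table, e.g. with the completebulkload tool (LoadIncrementalHFiles).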
