And what is happening on the server ip-10-68-145-124.ec2.internal:60020
such that 14 attempts at getting a region failed? Is that region online
during this time, or being moved? If not online, why not? Was the server
taking too long to open the region (because of high load)? Grep around
the region name in the master log to see what was happening with it at
the time of the failures below.

Folks copy from one table to another all the time w/o need of an HDFS
intermediate rest stop.
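
For what it is worth, a rough sketch of what such a direct table-to-table
copy with a rowkey rewrite can look like as a map-only job, assuming the
0.90.x-era mapreduce API; the class name, the source_table/dest_table
names, the makeNewRowKey() transform, and the caching value are
placeholders, not anything taken from this thread:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;

public class RowKeyRewriteCopy {

  // Reads Results from the source table and writes Puts keyed on the new row.
  static class RewriteMapper extends TableMapper<ImmutableBytesWritable, Put> {
    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
        throws IOException, InterruptedException {
      byte[] newRow = makeNewRowKey(value.getRow());
      Put put = new Put(newRow);
      // Rebuild each KeyValue on the new row; Put.add() rejects KeyValues
      // whose row bytes differ from the Put's row.
      for (KeyValue kv : value.raw()) {
        put.add(new KeyValue(newRow, kv.getFamily(), kv.getQualifier(),
            kv.getTimestamp(), kv.getValue()));
      }
      context.write(new ImmutableBytesWritable(newRow), put);
    }

    // Placeholder: swap in the real rowkey schema change here.
    private static byte[] makeNewRowKey(byte[] oldRow) {
      return oldRow;
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "rowkey-rewrite-copy");
    job.setJarByClass(RowKeyRewriteCopy.class);

    Scan scan = new Scan();
    scan.setCaching(5);         // keep each scanner RPC small for wide rows
    scan.setCacheBlocks(false); // a full-scan job shouldn't churn the block cache

    TableMapReduceUtil.initTableMapperJob("source_table", scan,
        RewriteMapper.class, ImmutableBytesWritable.class, Put.class, job);
    TableMapReduceUtil.initTableReducerJob("dest_table", null, job);
    job.setNumReduceTasks(0);   // map-only; Puts go straight to TableOutputFormat
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Writing Puts straight to TableOutputFormat with zero reducers is the same
shape CopyTable uses, so there is no HDFS staging step in between.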

St.Ack

On Thu, Jan 12, 2012 at 9:46 AM, Ted Yu <[email protected]> wrote:
> I think you need to manipulate the KeyValue to match the new row.
> Take a look at the check:
>
>    //Checking that the row of the kv is the same as the put
>    int res = Bytes.compareTo(this.row, 0, row.length,
>        kv.getBuffer(), kv.getRowOffset(), kv.getRowLength());
>    if(res != 0) {
>      throw new IOException("The row in the recently added KeyValue " +
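
In other words, the Put for the new row cannot reuse the KeyValues from the
scanned Result as-is; each one has to be rebuilt on the new row first (the
same rebuild the mapper in the copy sketch above does). A rough helper
sketch, assuming the 0.90.x client API; the class and method names are
placeholders:

import java.io.IOException;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;

public final class RowRewrite {
  // Build a Put for newRow from a Result whose KeyValues still carry the old row.
  public static Put resultToPut(Result result, byte[] newRow) throws IOException {
    Put put = new Put(newRow);
    for (KeyValue kv : result.raw()) {
      // Reusing kv directly would trip the row check quoted above.
      put.add(new KeyValue(newRow, kv.getFamily(), kv.getQualifier(),
          kv.getTimestamp(), kv.getValue()));
    }
    return put;
  }
}

For the export/import route, a custom Importer would need something like
this in place of the stock Import.Importer.resultToPut() that the stack
trace below shows failing, since that one adds the original KeyValues
unchanged.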
>
> Cheers
>
> On Thu, Jan 12, 2012 at 9:12 AM, T Vinod Gupta <[email protected]> wrote:
>
>> hbase version -
>> hbase(main):001:0> version
>> 0.90.3-cdh3u1, r, Mon Jul 18 08:23:50 PDT 2011
>>
>> here are the different exceptions -
>>
>> when copying table to another table -
>> 12/01/12 11:06:41 INFO mapred.JobClient: Task Id : attempt_201201120656_0012_m_000001_0, Status : FAILED
>> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 14 actions: NotServingRegionException: 14 times, servers with issues: ip-10-68-145-124.ec2.internal:60020,
>>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1227)
>>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchOfPuts(HConnectionManager.java:1241)
>>        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:826)
>>        at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:682)
>>        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:667)
>>        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:127)
>>        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:82)
>>        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:531)
>>        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>>        at com.akanksh.information.hbasetest.HBaseTimestampSwapper$SwapperMapper.map(HBaseTimestampSwapper.java:62)
>>        at com.akanksh.information.hbasetest.HBaseTimestampSwapper$SwapperMapper.map(HBaseTimestampSwapper.java:31)
>>        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>>        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
>>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>>        at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>>        at java.security.AccessController.doPrivileged(Native Method)
>>        at javax.security.auth.Subject.doAs(Subject.java:416)
>>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>        at org.apache.hadoop.mapred.Child.main(Child.java:264)
>>
>> region server logs say this -
>> 2012-01-10 00:00:52,545 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020, responseTooLarge for: next(-5685114053145855194, 50) from 10.68.145.124:44423: Size: 121.7m
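
That warning reads as a single scanner next() call returning roughly 122 MB
for a batch of 50 rows, so the usual knob is to shrink how much one RPC
carries: lower the caching on the Scan handed to the job, and optionally cap
the number of KeyValues per Result with setBatch. A rough sketch, assuming
the 0.90.x Scan API; the numbers are placeholders to tune, not
recommendations from this thread:

import org.apache.hadoop.hbase.client.Scan;

public final class SmallRpcScan {
  // Build a Scan that keeps each scanner RPC small when rows are wide.
  public static Scan create() {
    Scan scan = new Scan();
    scan.setCaching(5);         // rows fetched per next() RPC (the log above shows 50)
    scan.setBatch(1000);        // cap KeyValues per Result for very wide rows
    scan.setCacheBlocks(false); // a full-scan job shouldn't churn the block cache
    return scan;                // hand this to TableMapReduceUtil.initTableMapperJob(...)
  }
}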
>>
>> when doing special export and then import, here is the stack trace -
>> java.io.IOException: The row in the recently added KeyValue 84784841:1319846400:daily:PotentialReach doesn't match the original one 84784841:PotentialReach:daily:1319846400
>>        at org.apache.hadoop.hbase.client.Put.add(Put.java:168)
>>        at org.apache.hadoop.hbase.mapreduce.Import$Importer.resultToPut(Import.java:70)
>>        at org.apache.hadoop.hbase.mapreduce.Import$Importer.map(Import.java:60)
>>        at org.apache.hadoop.hbase.mapreduce.Import$Importer.map(Import.java:45)
>>        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>>        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
>>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>>        at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>>        at java.security.AccessController.doPrivileged(Native Method)
>>        at javax.security.auth.Subject.doAs(Subject.java:416)
>>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>        at org.apache.hadoop.mapred.Child.main(Child.java:264)
>>
>>
>> On Thu, Jan 12, 2012 at 5:13 AM, <[email protected]> wrote:
>>
>> > What version of hbase did you use ?
>> >
>> > Can you post the stack trace for the exception ?
>> >
>> > Thanks
>> >
>> >
>> >
>> > On Jan 12, 2012, at 3:37 AM, T Vinod Gupta <[email protected]> wrote:
>> >
>> > > I am badly stuck and can't find a way out. I want to change my rowkey
>> > > schema while copying data from one table to another, but a map reduce
>> > > job to do this won't work because of large row sizes (responseTooLarge
>> > > errors). So I am left with a two-step process of exporting to hdfs
>> > > files and importing from them into the 2nd table. So I wrote a custom
>> > > exporter that changes the rowkey to newRowKey when doing
>> > > context.write(newRowKey, result). But when I import these new files
>> > > into the new table, it doesn't work due to this exception in put -
>> > > "The row in the recently added ... doesn't match the original one ....".
>> > >
>> > > Is there no way out for me? Please help.
>> > >
>> > > Thanks
>> >
>>
