Can you give the whole stack trace for WrongRowIOException?

Was the cluster running Export using the same version of HBase
(1.0.0-cdh5.5.2)?
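(As a quick check, the running build can be printed on each cluster with
the standard `hbase` launcher; this assumes the launcher is on the PATH:)

```shell
# Print the HBase build version; run this on both the export and the
# import cluster and compare the output.
hbase version
```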

Thanks

On Mon, Nov 21, 2016 at 4:35 PM, Julian Jaffe <[email protected]>
wrote:

> Hbase Version: 1.0.0-cdh5.5.2
>
> We're importing the data using `hbase
> org.apache.hadoop.hbase.mapreduce.Import 'table.name' /path/to/backup`
> (The data was exported from an HBase instance on another cluster using
> `hbase org.apache.hadoop.hbase.mapreduce.Export` and then distcp'd between
> the clusters).
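>
> The three steps, end to end, look roughly like this (the distcp
> source and destination URIs below are placeholders, not our actual
> cluster paths):
>
> ```shell
> # 1. Export the table to a SequenceFile backup on the source cluster
> hbase org.apache.hadoop.hbase.mapreduce.Export 'table.name' /path/to/backup
>
> # 2. Copy the backup between clusters (namenode URIs are placeholders)
> hadoop distcp hdfs://source-nn:8020/path/to/backup hdfs://dest-nn:8020/path/to/backup
>
> # 3. Import the backup into the table on the destination cluster
> hbase org.apache.hadoop.hbase.mapreduce.Import 'table.name' /path/to/backup
> ```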
>
> On Mon, Nov 21, 2016 at 4:29 PM, Ted Yu <[email protected]> wrote:
>
> > I did a quick search - there was no relevant JIRA or discussion thread at
> > first glance.
> >
> > Which HBase release are you using?
> >
> > How do you import the data?
> >
> > More details would be helpful.
> >
> > Thanks
> >
> > On Mon, Nov 21, 2016 at 2:48 PM, Julian Jaffe <[email protected]>
> > wrote:
> >
> > > When importing data into a fresh HBase instance, after some time the
> > import
> > > throws the following exception:
> > >
> > > Error: org.apache.hadoop.hbase.client.WrongRowIOException: The row in
> > > \x00\x00\x0767341283611_10153807927108612\x00\x80\x00\x00\x00\x84)L\xA7/IN:nme/1461847340445/Put/vlen=42/seqid=0
> > > doesn't match the original one
> > > \x00\x00\x0767341283611_10153805927108612\x00\x80\x00\x00\x00\x84)L\xA7
> > >
> > > (The non-matching row differs on different runs).
> > >
> > > If the import is allowed to run to completion, the row count of the
> > > imported data is less than the row count of the source data.
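> > >
> > > (One way to compare the counts on each side is the RowCounter
> > > MapReduce job bundled with HBase, run against both the source and
> > > the destination table:)
> > >
> > > ```shell
> > > # Prints the total row count for the table as a job counter
> > > hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'table.name'
> > > ```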
> > >
> > > Googling for this error only turns up the source code that generates
> > > the error, so it doesn't seem to be a common problem.
> > >
> > > Can anyone provide any guidance?
> > >
> > > Julian Jaffe
> > >
> >
>
