2011/12/8 Dou Xiaofeng <[email protected]>:
> My command:
> hadoop jar $HBASE_HOME/hbase-0.90.4-cdh3u2.jar importtsv
> -Dimporttsv.separator=, -Dimporttsv.bulk.output=/tmp/output
> -Dimporttsv.columns=HBASE_ROW_KEY,e:a,e:b,e:c t1 /tmp/1
>
>
Understood. What's your problem? That importtsv does not pre-create
the table for you?
St.Ack

> Usage: importtsv -Dimporttsv.columns=a,b,c <tablename> <inputdir>
>
> Imports the given input directory of TSV data into the specified table.
>
> The column names of the TSV data must be specified using the
> -Dimporttsv.columns option. This option takes the form of comma-separated
> column names, where each column name is either a simple column family, or a
> columnfamily:qualifier. The special column name HBASE_ROW_KEY is used to
> designate that this column should be used as the row key for each imported
> record. You must specify exactly one column to be the row key, and you must
> specify a column name for every column that exists in the input data.
>
> In order to prepare data for a bulk data load, pass the option:
> -Dimporttsv.bulk.output=/path/for/output
> Note: if you do not use this option, then the target table must already
> exist in HBase  --look at this line.
>
> Other options that may be specified with -D include:
> -Dimporttsv.skip.bad.lines=false - fail if encountering an invalid line
> '-Dimporttsv.separator=|' - eg separate on pipes instead of tabs
>
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] on behalf of Stack
> Sent: December 9, 2011 14:24
> To: [email protected]
> Subject: Re: Re: TableNotFoundException: Cannot find row in .META. for table
>
> On Thu, Dec 8, 2011 at 9:32 PM, Dou Xiaofeng <[email protected]> wrote:
>> The table t1 does not exist.
>> If I create it with the hbase client manually, importtsv does not throw an
>> error, but I assign the bulk.output in the command, so it should not need to
>> create the table.
>>
>
> Sorry, I don't follow the last bit of the sentence above where you say
> '... but I assign the bulk.output in the command....'
>
> St.Ack
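
The usage text quoted above distinguishes two modes: without
-Dimporttsv.bulk.output, importtsv writes Puts directly into an existing
table; with it, the job only writes HFiles under the output directory, and a
separate completebulkload step moves them into the table. A minimal sketch of
that bulk-load flow, assuming the same jar, table name t1, column family e,
and paths taken from the command at the top of this message:

    # create the target table and its column family up front
    echo "create 't1', 'e'" | hbase shell

    # run importtsv in bulk mode: it writes HFiles under /tmp/output
    # instead of issuing Puts against the table
    hadoop jar $HBASE_HOME/hbase-0.90.4-cdh3u2.jar importtsv \
      -Dimporttsv.separator=, \
      -Dimporttsv.bulk.output=/tmp/output \
      -Dimporttsv.columns=HBASE_ROW_KEY,e:a,e:b,e:c t1 /tmp/1

    # load the generated HFiles into the table's regions
    hadoop jar $HBASE_HOME/hbase-0.90.4-cdh3u2.jar completebulkload /tmp/output t1

If memory serves, in 0.90.x the bulk.output path still opens the target table
(to compute region splits for the HFiles), so the table has to exist
beforehand even in this mode; that would explain the TableNotFoundException
in the subject line.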
