Hi Oliver,

This is a limitation of the direct MySQL connector. This connector uses the
MySQL batch utilities (mysqldump/mysqlimport) to transfer the data directly
into HDFS. It does not support populating the transferred data into HBase.
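In the meantime, the workaround is to drop --direct so the import goes
through the standard JDBC path, which does honor the --hbase* options. A
rough sketch based on the values in your options file (untested here; you
can of course keep them in the options file rather than on the command line):

  # same import as your options file, minus --direct and --mysql-delimiters,
  # so the HBase options take effect
  sqoop import \
    --connect jdbc:mysql://xxx/portal \
    --username xxx --password xxx \
    --table raw_occurrence_record \
    --split-by id \
    --where 'id < 100000' \
    --hbase-table sqoop_ror_mini \
    --column-family v \
    --hbase-create-table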
Populating HBase from the direct path is one of the areas of improvement in
Sqoop 2. See [1] for more details.

[1] https://cwiki.apache.org/confluence/display/SQOOP/Sqoop+2#Sqoop2-IntroducingaReducePhase

Thanks,
Arvind

On Thu, Jan 5, 2012 at 7:22 AM, Oliver Meyn <oli...@mineallmeyn.com> wrote:
> Hi all,
>
> I'm trying to sqoop from mysql into HBase. Everything works fine if I don't
> use --direct. When I add --direct, however, the results get written into HDFS
> and no error messages are generated. It's as if the --hbase* params are all
> being ignored. Is this by design? Or a bug?
>
> I'm using sqoop-1.3.0-cdh3u2, and all other hadoop-y pieces are cdh3u2.
> Here's my options file (I've tried on the command line, and I've tried moving
> the --direct option to the start, end, and middle of the options list, all
> to no avail).
>
> import
> # use mysqldump
> --direct
> --mysql-delimiters
>
> # connect to db
> --connect
> jdbc:mysql://xxx/portal
> --username
> xxx
> --password
> xxx
>
> # from table raw_occurrence_record
> --table
> raw_occurrence_record
> --split-by
> id
> --where
> 'id < 100000'
>
> # to hbase
> --hbase-table
> sqoop_ror_mini
> --column-family
> v
> --hbase-create-table
>
> # code generation reuse
> --jar-file
> raw_occurrence_record.jar
> --class-name
> raw_occurrence_record
>
> Thanks,
> Oliver