Hi Chalcy,
sequence files and Hive imports are incompatible features at the moment. Please see 
the following section of the user guide:

http://sqoop.apache.org/docs/1.4.1-incubating/SqoopUserGuide.html#_importing_data_into_hive

I believe Sqoop should fail fast when a user requests a Hive import and a 
sequence file output at the same time. Would you mind sharing your Sqoop 
version, the entire command line you used, and the log generated with the 
--verbose flag? I would like to make sure that Sqoop is detecting incompatible 
parameters correctly.
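
For reference, a command combining these two options might look something like 
the sketch below. The connection string and table name are placeholders, not 
taken from the original report:

```shell
# Hypothetical example: --hive-import combined with --as-sequencefile.
# Sqoop should reject this combination instead of producing a Hive table
# that returns garbage rows.
sqoop import \
  --connect jdbc:mysql://db.example.com/demo \
  --table demo_table \
  --hive-import \
  --as-sequencefile \
  --compression-codec org.apache.hadoop.io.compress.SnappyCodec \
  --verbose

# A Hive import that keeps the default text format works as expected:
sqoop import \
  --connect jdbc:mysql://db.example.com/demo \
  --table demo_table \
  --hive-import \
  --verbose
```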

Jarcec

On Thu, Jun 21, 2012 at 11:37:42AM -0400, Chalcy wrote:
> Hi sqoop, hive, compression experts,
> 
> 
> 
> When Sqoop imports into Hive with Snappy compression and as a sequence
> file, the number of rows imported is shown correctly in the
> log (12/06/21 09:34:24 INFO mapreduce.ImportJobBase: Retrieved 10000
> records.), but when I do count(*) on the Hive table I get 13714 rows. Also,
> select * from table limit 100; returns garbage.
> 
> What am I not setting right?
> 
> Also, we found an open issue, https://issues.cloudera.org/browse/SQOOP-200.
> Is this resolved in future Sqoop versions?
> 
> Thanks,
> 
> Chalcy
