-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33104/#review82349
-----------------------------------------------------------
The patch looks good to me. Just one question: it seems to be causing two
test failures on my machine (log below; a note on the first error follows it):
Testcase: testFieldWithHiveDelims took 5.021 sec
Testcase: testGenerateOnly took 0.329 sec
Testcase: testHiveExitFails took 1.7 sec
Testcase: testDate took 1.695 sec
Testcase: testFieldWithHiveDelimsReplacement took 1.617 sec
Testcase: testCustomDelimiters took 1.598 sec
Testcase: testHiveDropAndReplaceOptionValidation took 0.036 sec
Testcase: testCreateOverwriteHiveImport took 0.103 sec
Testcase: testCreateOnlyHiveImport took 0.055 sec
Testcase: testAppendHiveImportAsParquet took 15.383 sec
Caused an ERROR
null
java.util.NoSuchElementException
    at org.kitesdk.data.spi.filesystem.MultiFileDatasetReader.next(MultiFileDatasetReader.java:144)
    at com.cloudera.sqoop.hive.TestHiveImport.verifyHiveDataset(TestHiveImport.java:292)
    at com.cloudera.sqoop.hive.TestHiveImport.testAppendHiveImportAsParquet(TestHiveImport.java:383)
Testcase: testNormalHiveImport took 1.58 sec
Testcase: testNormalHiveImportAsParquet took 3.46 sec
Testcase: testImportWithBadPartitionKey took 3.068 sec
Testcase: testCreateOverwriteHiveImportAsParquet took 4.107 sec
Caused an ERROR
Failure during job; return status 1
java.io.IOException: Failure during job; return status 1
    at com.cloudera.sqoop.testutil.ImportJobTestCase.runImport(ImportJobTestCase.java:236)
    at com.cloudera.sqoop.testutil.ImportJobTestCase.runImport(ImportJobTestCase.java:210)
    at com.cloudera.sqoop.hive.TestHiveImport.runImportTest(TestHiveImport.java:215)
    at com.cloudera.sqoop.hive.TestHiveImport.testCreateOverwriteHiveImportAsParquet(TestHiveImport.java:356)
Testcase: testImportHiveWithPartitions took 1.51 sec
Testcase: testNumeric took 1.476 sec
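
The NoSuchElementException from MultiFileDatasetReader.next() usually means
next() was called after the reader ran out of records, i.e. the appended
dataset holds fewer rows than verifyHiveDataset expects. A minimal sketch of
the guarded read pattern, assuming the Kite SDK is on the classpath (the
dataset URI and class name are placeholders, not taken from the test):

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.avro.generic.GenericRecord;
    import org.kitesdk.data.DatasetReader;
    import org.kitesdk.data.Datasets;

    public class GuardedDatasetRead {
      public static void main(String[] args) {
        // Placeholder URI; the real test loads the table it just imported.
        DatasetReader<GenericRecord> reader =
            Datasets.load("dataset:hive?dataset=some_table",
                          GenericRecord.class).newReader();
        try {
          List<String> rows = new ArrayList<String>();
          // Guard every next() with hasNext(); calling next() past the
          // last record is exactly what raises NoSuchElementException.
          while (reader.hasNext()) {
            rows.add(reader.next().toString());
          }
          System.out.println("Read " + rows.size() + " records");
        } finally {
          reader.close();
        }
      }
    }

If the append silently wrote no rows, this loop would report zero records
instead of throwing, which would point at the import rather than the reader.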
I'm wondering if you see the same failures, Stanley?
- Jarek Cecho
On May 3, 2015, 3:40 p.m., Qian Xu wrote:
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33104/
> -----------------------------------------------------------
>
> (Updated May 3, 2015, 3:40 p.m.)
>
>
> Review request for Sqoop.
>
>
> Bugs: SQOOP-2295
> https://issues.apache.org/jira/browse/SQOOP-2295
>
>
> Repository: sqoop-trunk
>
>
> Description
> -------
>
> Currently, importing into an existing Parquet dataset throws an exception,
> which differs from `--as-textfile`. I've checked the user manual; the
> handling of HDFS and Hive is indeed different. For HDFS, unless `--append`
> is specified, the job fails when the destination already exists. For Hive,
> unless `--create-hive-table` is specified, the job runs in append mode.
> This patch makes the handling of `--as-textfile` and `--as-parquetfile`
> consistent.
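> 
> To make the intended behavior concrete, here is a hedged sketch of the
> decision, not the actual patch code (class and method names are
> hypothetical):
> 
>     // Sketch of the create-vs-append policy described above.
>     public class HiveParquetTargetPolicy {
>       /** Returns true if the job should append to an existing dataset. */
>       public static boolean shouldAppend(boolean datasetExists,
>                                          boolean createHiveTable) {
>         if (!datasetExists) {
>           return false;                 // nothing there yet: create it
>         }
>         if (createHiveTable) {
>           // --create-hive-table: refuse to touch existing data
>           throw new IllegalStateException("Dataset already exists");
>         }
>         return true;                    // default Hive behavior: append
>       }
> 
>       public static void main(String[] args) {
>         System.out.println(shouldAppend(false, false)); // false: create
>         System.out.println(shouldAppend(true, false));  // true: append
>       }
>     }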
>
>
> Diffs
> -----
>
> src/docs/man/hive-args.txt 7d9e427
> src/docs/man/sqoop-create-hive-table.txt 7aebcc1
> src/docs/user/create-hive-table.txt 3aa34fd
> src/docs/user/hive-args.txt 53de92d
> src/java/org/apache/sqoop/mapreduce/DataDrivenImportJob.java d5bfae2
> src/java/org/apache/sqoop/mapreduce/ParquetJob.java df55dbc
> src/test/com/cloudera/sqoop/hive/TestHiveImport.java fa717cb
> src/test/com/cloudera/sqoop/testutil/BaseSqoopTestCase.java 7934791
> testdata/hive/scripts/normalImportAsParquet.q e434e9b
>
> Diff: https://reviews.apache.org/r/33104/diff/
>
>
> Testing
> -------
>
> Manually tested the append, new-create, and overwrite cases.
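> 
> (For reference, the cases roughly map to: a repeated plain `sqoop import
> --hive-import --as-parquetfile` run for append, adding `--create-hive-table`
> for the new-create case, and `--hive-overwrite` for the overwrite case.
> These invocations are illustrative; the exact commands are not part of this
> review.)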
>
>
> Thanks,
>
> Qian Xu
>
>