[jira] [Updated] (SQOOP-3473) --autoreset-to-one-mapper does not work well with --query

2020-04-23 Thread Eric Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3473:

Description: 
The Sqoop docs mention that *--autoreset-to-one-mapper* cannot be used with 
the *--split-by* option. However, when running a Sqoop command with 
*--autoreset-to-one-mapper* and *--query*, it fails and says *--split-by* is 
missing. Example below:

{noformat}
sqoop import --connect jdbc:mysql://mysql-host.com/test --username username 
--password password --query "SELECT * FROM test_table WHERE 1=1 AND 
\$CONDITIONS" --delete-target-dir --target-dir /tmp/test_table_data 
--autoreset-to-one-mapper
Warning: 
/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p3573.3750/bin/../lib/sqoop/../accumulo
 does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
20/04/23 17:07:18 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.3
20/04/23 17:07:18 WARN tool.BaseSqoopTool: Setting your password on the 
command-line is insecure. Consider using -P instead.
When importing query results in parallel, you must specify --split-by.
Try --help for usage instructions.
{noformat}

It fails at the code below, and I think we should add an extra check to skip it 
if *--autoreset-to-one-mapper* is already passed. I tested on an older version, 
but I can see the code logic is the same in the latest version:

https://github.com/apache/sqoop/blob/release-1.4.7-rc0/src/java/org/apache/sqoop/tool/ImportTool.java#L1068-L1072

As a workaround, simply add "-m 1" at the end to force a single mapper.
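
For illustration, a minimal sketch of what the extra check could look like. This 
is not the actual upstream code; accessor names such as getAutoResetToOneMapper() 
are assumptions based on the SqoopOptions naming convention:

{code:java}
// Sketch only: relax the --split-by validation when the user already
// asked for --autoreset-to-one-mapper. Accessor names are illustrative.
if (options.getSqlQuery() != null && options.getSplitByCol() == null
    && options.getNumMappers() > 1) {
  if (options.getAutoResetToOneMapper()) {
    // Fall back to a single mapper instead of failing, mirroring the
    // behaviour for table imports without a primary key.
    options.setNumMappers(1);
  } else {
    throw new InvalidOptionsException(
        "When importing query results in parallel, you must specify --split-by."
        + HELP_STR);
  }
}
{code}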

  was:
The Sqoop docs mention that *--autoreset-to-one-mapper* cannot be used with 
the *--split-by* option. However, when running a Sqoop command with 
*--autoreset-to-one-mapper* and *--query*, it fails and says *--split-by* is 
missing. Example below:

{noformat}
sqoop import --connect jdbc:mysql://mysql-host.com/test --username username 
--password password --query "SELECT * FROM test_table WHERE 1=1 AND 
\$CONDITIONS" --delete-target-dir --target-dir /tmp/test_table_data 
--autoreset-to-one-mapper
Warning: 
/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p3573.3750/bin/../lib/sqoop/../accumulo
 does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
20/04/23 17:07:18 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.3
20/04/23 17:07:18 WARN tool.BaseSqoopTool: Setting your password on the 
command-line is insecure. Consider using -P instead.
When importing query results in parallel, you must specify --split-by.
Try --help for usage instructions.
{noformat}

It fails here, and I think we should add an extra check to skip it if 
*--autoreset-to-one-mapper* is already passed. I tested on an older version, but 
I can see the code logic is the same in the latest version:

https://github.com/apache/sqoop/blob/release-1.4.7-rc0/src/java/org/apache/sqoop/tool/ImportTool.java#L1068-L1072

As a workaround, simply add "-m 1" at the end to force a single mapper.


> --autoreset-to-one-mapper does not work well with --query
> -
>
> Key: SQOOP-3473
> URL: https://issues.apache.org/jira/browse/SQOOP-3473
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Eric Lin
>Priority: Major
>
> The Sqoop docs mention that *--autoreset-to-one-mapper* cannot be used with 
> the *--split-by* option. However, when running a Sqoop command with 
> *--autoreset-to-one-mapper* and *--query*, it fails and says *--split-by* is 
> missing. Example below:
> {noformat}
> sqoop import --connect jdbc:mysql://mysql-host.com/test --username username 
> --password password --query "SELECT * FROM test_table WHERE 1=1 AND 
> \$CONDITIONS" --delete-target-dir --target-dir /tmp/test_table_data 
> --autoreset-to-one-mapper
> Warning: 
> /opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p3573.3750/bin/../lib/sqoop/../accumulo
>  does not exist! Accumulo imports will fail.
> Please set $ACCUMULO_HOME to the root of your Accumulo installation.
> 20/04/23 17:07:18 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.3
> 20/04/23 17:07:18 WARN tool.BaseSqoopTool: Setting your password on the 
> command-line is insecure. Consider using -P instead.
> When importing query results in parallel, you must specify --split-by.
> Try --help for usage instructions.
> {noformat}
> It fails at the code below, and I think we should add an extra check to skip 
> it if *--autoreset-to-one-mapper* is already passed. I tested on an older 
> version, but I can see the code logic is the same in the latest version:
> https://github.com/apache/sqoop/blob/release-1.4.7-rc0/src/java/org/apache/sqoop/tool/ImportTool.java#L1068-L1072
> As a workaround, simply add "-m 1" at the end to force a single mapper.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (SQOOP-3473) --autoreset-to-one-mapper does not work well with --query

2020-04-23 Thread Eric Lin (Jira)
Eric Lin created SQOOP-3473:
---

 Summary: --autoreset-to-one-mapper does not work well with --query
 Key: SQOOP-3473
 URL: https://issues.apache.org/jira/browse/SQOOP-3473
 Project: Sqoop
  Issue Type: Bug
Affects Versions: 1.4.7
Reporter: Eric Lin


The Sqoop docs mention that *--autoreset-to-one-mapper* cannot be used with 
the *--split-by* option. However, when running a Sqoop command with 
*--autoreset-to-one-mapper* and *--query*, it fails and says *--split-by* is 
missing. Example below:

{noformat}
sqoop import --connect jdbc:mysql://mysql-host.com/test --username username 
--password password --query "SELECT * FROM test_table WHERE 1=1 AND 
\$CONDITIONS" --delete-target-dir --target-dir /tmp/test_table_data 
--autoreset-to-one-mapper
Warning: 
/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p3573.3750/bin/../lib/sqoop/../accumulo
 does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
20/04/23 17:07:18 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.3
20/04/23 17:07:18 WARN tool.BaseSqoopTool: Setting your password on the 
command-line is insecure. Consider using -P instead.
When importing query results in parallel, you must specify --split-by.
Try --help for usage instructions.
{noformat}

It fails here, and I think we should add an extra check to skip it if 
*--autoreset-to-one-mapper* is already passed. I tested on an older version, but 
I can see the code logic is the same in the latest version:

https://github.com/apache/sqoop/blob/release-1.4.7-rc0/src/java/org/apache/sqoop/tool/ImportTool.java#L1068-L1072

As a workaround, simply add "-m 1" at the end to force a single mapper.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (SQOOP-3473) --autoreset-to-one-mapper does not work well with --query

2020-04-23 Thread Eric Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3473:

Description: 
The Sqoop docs mention that *--autoreset-to-one-mapper* cannot be used with 
the *--split-by* option. However, when running a Sqoop command with 
*--autoreset-to-one-mapper* and *--query*, it fails and says *--split-by* is 
missing. Example below:

{noformat}
sqoop import --connect jdbc:mysql://mysql-host.com/test --username username 
--password password --query "SELECT * FROM test_table WHERE 1=1 AND 
\$CONDITIONS" --delete-target-dir --target-dir /tmp/test_table_data 
--autoreset-to-one-mapper
Warning: 
/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p3573.3750/bin/../lib/sqoop/../accumulo
 does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
20/04/23 17:07:18 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.3
20/04/23 17:07:18 WARN tool.BaseSqoopTool: Setting your password on the 
command-line is insecure. Consider using -P instead.
When importing query results in parallel, you must specify --split-by.
Try --help for usage instructions.
{noformat}

It fails here, and I think we should add an extra check to skip it if 
*--autoreset-to-one-mapper* is already passed. I tested on an older version, but 
I can see the code logic is the same in the latest version:

https://github.com/apache/sqoop/blob/release-1.4.7-rc0/src/java/org/apache/sqoop/tool/ImportTool.java#L1068-L1072

As a workaround, simply add "-m 1" at the end to force a single mapper.

  was:
The Sqoop docs mention that *--autoreset-to-one-mapper* cannot be used with 
the *--split-by* option. However, when running a Sqoop command with 
*--autoreset-to-one-mapper* and *--query*, it fails and says *--split-by* is 
missing. Example below:

{noformat}
sqoop import --connect jdbc:mysql://mysql-host.com/test --username username 
--password password --query "SELECT * FROM test_table WHERE 1=1 AND 
\$CONDITIONS" --delete-target-dir --target-dir /tmp/test_table_data 
--autoreset-to-one-mapper
Warning: 
/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p3573.3750/bin/../lib/sqoop/../accumulo
 does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
20/04/23 17:07:18 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.3
20/04/23 17:07:18 WARN tool.BaseSqoopTool: Setting your password on the 
command-line is insecure. Consider using -P instead.
When importing query results in parallel, you must specify --split-by.
Try --help for usage instructions.
{noformat}

It fails here, and I think we should add an extra check to skip it if 
*--autoreset-to-one-mapper* is already passed. I tested on an older version, but 
I can see the code logic is the same in the latest version:

https://github.com/apache/sqoop/blob/release-1.4.7-rc0/src/java/org/apache/sqoop/tool/ImportTool.java#L1068-L1072

As a workaround, simply add "-m 1" at the end to force a single mapper.


> --autoreset-to-one-mapper does not work well with --query
> -
>
> Key: SQOOP-3473
> URL: https://issues.apache.org/jira/browse/SQOOP-3473
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Eric Lin
>Priority: Major
>
> The Sqoop docs mention that *--autoreset-to-one-mapper* cannot be used with 
> the *--split-by* option. However, when running a Sqoop command with 
> *--autoreset-to-one-mapper* and *--query*, it fails and says *--split-by* is 
> missing. Example below:
> {noformat}
> sqoop import --connect jdbc:mysql://mysql-host.com/test --username username 
> --password password --query "SELECT * FROM test_table WHERE 1=1 AND 
> \$CONDITIONS" --delete-target-dir --target-dir /tmp/test_table_data 
> --autoreset-to-one-mapper
> Warning: 
> /opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p3573.3750/bin/../lib/sqoop/../accumulo
>  does not exist! Accumulo imports will fail.
> Please set $ACCUMULO_HOME to the root of your Accumulo installation.
> 20/04/23 17:07:18 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.3
> 20/04/23 17:07:18 WARN tool.BaseSqoopTool: Setting your password on the 
> command-line is insecure. Consider using -P instead.
> When importing query results in parallel, you must specify --split-by.
> Try --help for usage instructions.
> {noformat}
> It fails here, and I think we should add an extra check to skip it if 
> *--autoreset-to-one-mapper* is already passed. I tested on an older version, 
> but I can see the code logic is the same in the latest version:
> https://github.com/apache/sqoop/blob/release-1.4.7-rc0/src/java/org/apache/sqoop/tool/ImportTool.java#L1068-L1072
> As a workaround, simply add "-m 1" at the end to force a single mapper.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (SQOOP-3433) Improve Error Message if No User Mapping Found

2019-08-08 Thread Eric Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3433:
---

Assignee: Eric Lin

> Improve Error Message if No User Mapping Found
> --
>
> Key: SQOOP-3433
> URL: https://issues.apache.org/jira/browse/SQOOP-3433
> Project: Sqoop
>  Issue Type: Improvement
>  Components: hive-integration
>Affects Versions: 1.5.0
>Reporter: David Mollitor
>Assignee: Eric Lin
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java|title=TableDefWriter.java}
>   if (!found) {
> throw new IllegalArgumentException("No column by the name " + column
> + "found while importing data");
>   }
> {code}
> I noticed that the word 'found' does not have a space before it, so the error 
> message is a bit unclear. Unfortunately, there is a unit test that affirms 
> this incorrect behavior.
> Also, there may be more than one column name unaccounted for, so it would be 
> difficult to troubleshoot this issue by having to re-run this method several 
> times to get each erroneous column name.  Instead, report all of the missing 
> columns in the error message.
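
A minimal sketch of the suggested behaviour. The parameter names userColumns and 
tableColumns are illustrative, not the actual TableDefWriter fields:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Sketch only: collect every unmatched column and report them all at once,
// with correct spacing in the message. Parameter names are illustrative.
static void checkColumns(List<String> userColumns, List<String> tableColumns) {
  List<String> missing = new ArrayList<>();
  for (String column : userColumns) {
    if (!tableColumns.contains(column)) {
      missing.add(column);
    }
  }
  if (!missing.isEmpty()) {
    throw new IllegalArgumentException(
        "No columns by the names " + missing + " found while importing data");
  }
}
{code}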



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (SQOOP-3077) Add support for (import + --hcatalog + --as-avrodatafile) with RDBMS type TIMESTAMP

2018-08-22 Thread Eric Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589678#comment-16589678
 ] 

Eric Lin commented on SQOOP-3077:
-

I will see if I can add this feature.

> Add support for (import + --hcatalog + --as-avrodatafile) with RDBMS type 
> TIMESTAMP
> ---
>
> Key: SQOOP-3077
> URL: https://issues.apache.org/jira/browse/SQOOP-3077
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Markus Kemper
>Assignee: Eric Lin
>Priority: Major
>
> Please consider adding support for --hcatalog import and TIMESTAMPS; the Avro 
> specification suggests that logical types support TIMESTAMPS.
> Avro Doc:
> https://avro.apache.org/docs/1.8.1/spec.html#Logical+Types
> {noformat}
> #
> # STEP 01 - Setup Table and Data
> #
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "drop table t1_dates"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "create table t1_dates (c1_int integer, c2_date date, c3_timestamp timestamp)"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select dbms_metadata.get_ddl('TABLE', 'T1_DATES', 'SQOOP') from dual"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "insert into t1_dates values (1, current_date, current_timestamp)"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select * from t1_dates"
> Output:
> 
> | DBMS_METADATA.GET_DDL('TABLE','T1_DATES','SQOOP') | 
> 
> | 
>   CREATE TABLE "SQOOP"."T1_DATES" 
>(  "C1_INT" NUMBER(*,0), 
>   "C2_DATE" DATE, 
>   "C3_TIMESTAMP" TIMESTAMP (6)
>) SEGMENT CREATION DEFERRED 
>   PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
>  NOCOMPRESS LOGGING
>   TABLESPACE "SQOOP"  | 
> 
> ---
> 
> | C1_INT   | C2_DATE | C3_TIMESTAMP | 
> 
> | 1| 2016-12-10 15:48:23.0 | 2016-12-10 15:48:23.707327 | 
> 
> #
> # STEP 02 - Import with Text Format
> #
> beeline -u jdbc:hive2:// -e "use default; drop table t1_dates_text;"
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> T1_DATES --hcatalog-database default --hcatalog-table t1_dates_text 
> --create-hcatalog-table --hcatalog-storage-stanza 'stored as textfile' 
> --num-mappers 1 --map-column-hive c2_date=date,c3_timestamp=timestamp
> beeline -u jdbc:hive2:// -e "use default; describe t1_dates_text; select * 
> from t1_dates_text;"
> +-+--+
> | createtab_stmt  |
> +-+--+
> | CREATE TABLE `t1_dates_text`(   |
> |   `c1_int` decimal(38,0),   |
> |   `c2_date` date,   |
> |   `c3_timestamp` timestamp) |
> | ROW FORMAT SERDE|
> |   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'  |
> | STORED AS INPUTFORMAT   |
> |   'org.apache.hadoop.mapred.TextInputFormat'|
> | OUTPUTFORMAT|
> |   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'  |
> | LOCATION|
> |   'hdfs://nameservice1/user/hive/warehouse/t1_dates_text'   |
> | TBLPROPERTIES ( |
> |   'transient_lastDdlTime'='1481386391') |
> +-+--+
> --
> +---++-+--+
> | t1_dates_text.c1_int  | t1_dates_text.c2_date  | t1_dates_text.c3_timestamp 
>  |
> +---++-+--+
> | 1 | 2016-12-10 | 2016-12-10 15:48:23.707327 
>  |
> +---++-+--+
> #
> # STEP 03 - Import with Avro Format (default)
> #
> beeline -u jdbc:hive2:// -e "use default; drop table t1_dates_text;"
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> T1_DATES --hcatalog-database default --hcatalog-table t1_dates_avro 
> --create-hcatalog-table 

[jira] [Assigned] (SQOOP-3077) Add support for (import + --hcatalog + --as-avrodatafile) with RDBMS type TIMESTAMP

2018-08-22 Thread Eric Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3077:
---

Assignee: Eric Lin

> Add support for (import + --hcatalog + --as-avrodatafile) with RDBMS type 
> TIMESTAMP
> ---
>
> Key: SQOOP-3077
> URL: https://issues.apache.org/jira/browse/SQOOP-3077
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Markus Kemper
>Assignee: Eric Lin
>Priority: Major
>
> Please consider adding support for --hcatalog import and TIMESTAMPS; the Avro 
> specification suggests that logical types support TIMESTAMPS.
> Avro Doc:
> https://avro.apache.org/docs/1.8.1/spec.html#Logical+Types
> {noformat}
> #
> # STEP 01 - Setup Table and Data
> #
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "drop table t1_dates"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "create table t1_dates (c1_int integer, c2_date date, c3_timestamp timestamp)"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select dbms_metadata.get_ddl('TABLE', 'T1_DATES', 'SQOOP') from dual"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "insert into t1_dates values (1, current_date, current_timestamp)"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select * from t1_dates"
> Output:
> 
> | DBMS_METADATA.GET_DDL('TABLE','T1_DATES','SQOOP') | 
> 
> | 
>   CREATE TABLE "SQOOP"."T1_DATES" 
>(  "C1_INT" NUMBER(*,0), 
>   "C2_DATE" DATE, 
>   "C3_TIMESTAMP" TIMESTAMP (6)
>) SEGMENT CREATION DEFERRED 
>   PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
>  NOCOMPRESS LOGGING
>   TABLESPACE "SQOOP"  | 
> 
> ---
> 
> | C1_INT   | C2_DATE | C3_TIMESTAMP | 
> 
> | 1| 2016-12-10 15:48:23.0 | 2016-12-10 15:48:23.707327 | 
> 
> #
> # STEP 02 - Import with Text Format
> #
> beeline -u jdbc:hive2:// -e "use default; drop table t1_dates_text;"
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> T1_DATES --hcatalog-database default --hcatalog-table t1_dates_text 
> --create-hcatalog-table --hcatalog-storage-stanza 'stored as textfile' 
> --num-mappers 1 --map-column-hive c2_date=date,c3_timestamp=timestamp
> beeline -u jdbc:hive2:// -e "use default; describe t1_dates_text; select * 
> from t1_dates_text;"
> +-+--+
> | createtab_stmt  |
> +-+--+
> | CREATE TABLE `t1_dates_text`(   |
> |   `c1_int` decimal(38,0),   |
> |   `c2_date` date,   |
> |   `c3_timestamp` timestamp) |
> | ROW FORMAT SERDE|
> |   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'  |
> | STORED AS INPUTFORMAT   |
> |   'org.apache.hadoop.mapred.TextInputFormat'|
> | OUTPUTFORMAT|
> |   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'  |
> | LOCATION|
> |   'hdfs://nameservice1/user/hive/warehouse/t1_dates_text'   |
> | TBLPROPERTIES ( |
> |   'transient_lastDdlTime'='1481386391') |
> +-+--+
> --
> +---++-+--+
> | t1_dates_text.c1_int  | t1_dates_text.c2_date  | t1_dates_text.c3_timestamp 
>  |
> +---++-+--+
> | 1 | 2016-12-10 | 2016-12-10 15:48:23.707327 
>  |
> +---++-+--+
> #
> # STEP 03 - Import with Avro Format (default)
> #
> beeline -u jdbc:hive2:// -e "use default; drop table t1_dates_text;"
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> T1_DATES --hcatalog-database default --hcatalog-table t1_dates_avro 
> --create-hcatalog-table --hcatalog-storage-stanza 'stored as avro' 
> --num-mappers 1
> 

[jira] [Commented] (SQOOP-3077) Add support for (import + --hcatalog + --as-avrodatafile) with RDBMS type TIMESTAMP

2018-08-22 Thread Eric Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589565#comment-16589565
 ] 

Eric Lin commented on SQOOP-3077:
-

This is fixed upstream via https://issues.apache.org/jira/browse/HIVE-8131, 
but it is not available in CDH yet. So I guess the Sqoop upstream change should 
be able to go ahead.

> Add support for (import + --hcatalog + --as-avrodatafile) with RDBMS type 
> TIMESTAMP
> ---
>
> Key: SQOOP-3077
> URL: https://issues.apache.org/jira/browse/SQOOP-3077
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Markus Kemper
>Priority: Major
>
> Please consider adding support for --hcatalog import and TIMESTAMPS; the Avro 
> specification suggests that logical types support TIMESTAMPS.
> Avro Doc:
> https://avro.apache.org/docs/1.8.1/spec.html#Logical+Types
> {noformat}
> #
> # STEP 01 - Setup Table and Data
> #
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "drop table t1_dates"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "create table t1_dates (c1_int integer, c2_date date, c3_timestamp timestamp)"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select dbms_metadata.get_ddl('TABLE', 'T1_DATES', 'SQOOP') from dual"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "insert into t1_dates values (1, current_date, current_timestamp)"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select * from t1_dates"
> Output:
> 
> | DBMS_METADATA.GET_DDL('TABLE','T1_DATES','SQOOP') | 
> 
> | 
>   CREATE TABLE "SQOOP"."T1_DATES" 
>(  "C1_INT" NUMBER(*,0), 
>   "C2_DATE" DATE, 
>   "C3_TIMESTAMP" TIMESTAMP (6)
>) SEGMENT CREATION DEFERRED 
>   PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
>  NOCOMPRESS LOGGING
>   TABLESPACE "SQOOP"  | 
> 
> ---
> 
> | C1_INT   | C2_DATE | C3_TIMESTAMP | 
> 
> | 1| 2016-12-10 15:48:23.0 | 2016-12-10 15:48:23.707327 | 
> 
> #
> # STEP 02 - Import with Text Format
> #
> beeline -u jdbc:hive2:// -e "use default; drop table t1_dates_text;"
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> T1_DATES --hcatalog-database default --hcatalog-table t1_dates_text 
> --create-hcatalog-table --hcatalog-storage-stanza 'stored as textfile' 
> --num-mappers 1 --map-column-hive c2_date=date,c3_timestamp=timestamp
> beeline -u jdbc:hive2:// -e "use default; describe t1_dates_text; select * 
> from t1_dates_text;"
> +-+--+
> | createtab_stmt  |
> +-+--+
> | CREATE TABLE `t1_dates_text`(   |
> |   `c1_int` decimal(38,0),   |
> |   `c2_date` date,   |
> |   `c3_timestamp` timestamp) |
> | ROW FORMAT SERDE|
> |   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'  |
> | STORED AS INPUTFORMAT   |
> |   'org.apache.hadoop.mapred.TextInputFormat'|
> | OUTPUTFORMAT|
> |   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'  |
> | LOCATION|
> |   'hdfs://nameservice1/user/hive/warehouse/t1_dates_text'   |
> | TBLPROPERTIES ( |
> |   'transient_lastDdlTime'='1481386391') |
> +-+--+
> --
> +---++-+--+
> | t1_dates_text.c1_int  | t1_dates_text.c2_date  | t1_dates_text.c3_timestamp 
>  |
> +---++-+--+
> | 1 | 2016-12-10 | 2016-12-10 15:48:23.707327 
>  |
> +---++-+--+
> #
> # STEP 03 - Import with Avro Format (default)
> #
> beeline -u jdbc:hive2:// -e "use default; drop table t1_dates_text;"
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> T1_DATES 

[jira] [Commented] (SQOOP-3077) Add support for (import + --hcatalog + --as-avrodatafile) with RDBMS type TIMESTAMP

2018-08-22 Thread Eric Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589549#comment-16589549
 ] 

Eric Lin commented on SQOOP-3077:
-

It looks like Hive itself does not support the Avro timestamp logical type yet:

https://github.com/cloudera/hive/blob/cdh5-1.1.0_5.15.0/serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java#L102-L166
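
For reference, a minimal sketch of how such a field is declared through the Avro 
Java API (Avro 1.8+), using the timestamp-millis logical type from the spec 
linked above:

{code:java}
import org.apache.avro.LogicalTypes;
import org.apache.avro.Schema;

// Minimal sketch (Avro 1.8+): a long field annotated with the
// timestamp-millis logical type described in the Avro specification.
public class TimestampSchemaExample {
  public static void main(String[] args) {
    Schema timestampMillis = LogicalTypes.timestampMillis()
        .addToSchema(Schema.create(Schema.Type.LONG));
    System.out.println(timestampMillis.toString(true));
  }
}
{code}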

> Add support for (import + --hcatalog + --as-avrodatafile) with RDBMS type 
> TIMESTAMP
> ---
>
> Key: SQOOP-3077
> URL: https://issues.apache.org/jira/browse/SQOOP-3077
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Markus Kemper
>Priority: Major
>
> Please consider adding support for --hcatalog import and TIMESTAMPS; the Avro 
> specification suggests that logical types support TIMESTAMPS.
> Avro Doc:
> https://avro.apache.org/docs/1.8.1/spec.html#Logical+Types
> {noformat}
> #
> # STEP 01 - Setup Table and Data
> #
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "drop table t1_dates"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "create table t1_dates (c1_int integer, c2_date date, c3_timestamp timestamp)"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select dbms_metadata.get_ddl('TABLE', 'T1_DATES', 'SQOOP') from dual"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "insert into t1_dates values (1, current_date, current_timestamp)"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select * from t1_dates"
> Output:
> 
> | DBMS_METADATA.GET_DDL('TABLE','T1_DATES','SQOOP') | 
> 
> | 
>   CREATE TABLE "SQOOP"."T1_DATES" 
>(  "C1_INT" NUMBER(*,0), 
>   "C2_DATE" DATE, 
>   "C3_TIMESTAMP" TIMESTAMP (6)
>) SEGMENT CREATION DEFERRED 
>   PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
>  NOCOMPRESS LOGGING
>   TABLESPACE "SQOOP"  | 
> 
> ---
> 
> | C1_INT   | C2_DATE | C3_TIMESTAMP | 
> 
> | 1| 2016-12-10 15:48:23.0 | 2016-12-10 15:48:23.707327 | 
> 
> #
> # STEP 02 - Import with Text Format
> #
> beeline -u jdbc:hive2:// -e "use default; drop table t1_dates_text;"
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> T1_DATES --hcatalog-database default --hcatalog-table t1_dates_text 
> --create-hcatalog-table --hcatalog-storage-stanza 'stored as textfile' 
> --num-mappers 1 --map-column-hive c2_date=date,c3_timestamp=timestamp
> beeline -u jdbc:hive2:// -e "use default; describe t1_dates_text; select * 
> from t1_dates_text;"
> +-+--+
> | createtab_stmt  |
> +-+--+
> | CREATE TABLE `t1_dates_text`(   |
> |   `c1_int` decimal(38,0),   |
> |   `c2_date` date,   |
> |   `c3_timestamp` timestamp) |
> | ROW FORMAT SERDE|
> |   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'  |
> | STORED AS INPUTFORMAT   |
> |   'org.apache.hadoop.mapred.TextInputFormat'|
> | OUTPUTFORMAT|
> |   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'  |
> | LOCATION|
> |   'hdfs://nameservice1/user/hive/warehouse/t1_dates_text'   |
> | TBLPROPERTIES ( |
> |   'transient_lastDdlTime'='1481386391') |
> +-+--+
> --
> +---++-+--+
> | t1_dates_text.c1_int  | t1_dates_text.c2_date  | t1_dates_text.c3_timestamp 
>  |
> +---++-+--+
> | 1 | 2016-12-10 | 2016-12-10 15:48:23.707327 
>  |
> +---++-+--+
> #
> # STEP 03 - Import with Avro Format (default)
> #
> beeline -u jdbc:hive2:// -e "use default; drop table t1_dates_text;"
> sqoop import --connect $MYCONN --username 

[jira] [Commented] (SQOOP-3042) Sqoop does not clear compile directory under /tmp/sqoop-/compile automatically

2018-07-12 Thread Eric Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542527#comment-16542527
 ] 

Eric Lin commented on SQOOP-3042:
-

Thanks [~vasas] for your help reviewing and resolving this JIRA, much appreciated!

> Sqoop does not clear compile directory under /tmp/sqoop-/compile 
> automatically
> 
>
> Key: SQOOP-3042
> URL: https://issues.apache.org/jira/browse/SQOOP-3042
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Critical
>  Labels: patch
> Fix For: 3.0.0
>
> Attachments: SQOOP-3042.1.patch, SQOOP-3042.2.patch, 
> SQOOP-3042.4.patch, SQOOP-3042.5.patch, SQOOP-3042.6.patch, 
> SQOOP-3042.7.patch, SQOOP-3042.9.patch
>
>
> After running sqoop, all the temp files generated by ClassWriter are left 
> behind on disk, so anyone can check those Java files to see the schema of 
> those tables that Sqoop has been interacting with. By default, the directory 
> is under /tmp/sqoop-/compile.
> In the class org.apache.sqoop.SqoopOptions, in the function getNonceJarDir(), 
> I can see that we do call "deleteOnExit" on the temp dir:
> {code}
> for (int attempts = 0; attempts < MAX_DIR_CREATE_ATTEMPTS; attempts++) {
>   hashDir = new File(baseDir, RandomHash.generateMD5String());
>   while (hashDir.exists()) {
> hashDir = new File(baseDir, RandomHash.generateMD5String());
>   }
>   if (hashDir.mkdirs()) {
> // We created the directory. Use it.
> // If this directory is not actually filled with files, delete it
> // when the JVM quits.
> hashDir.deleteOnExit();
> break;
>   }
> }
> {code}
> However, I believe the deletion fails because the directory is not empty.
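
A minimal sketch of a recursive cleanup that does work on a non-empty directory, 
assuming commons-io is on the classpath; the hook placement is illustrative:

{code:java}
import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;

// Sketch only: File.deleteOnExit() cannot remove a non-empty directory,
// so one possible fix is a shutdown hook that deletes it recursively.
public class CompileDirCleanup {
  public static void registerCleanup(final File hashDir) {
    Runtime.getRuntime().addShutdownHook(new Thread() {
      @Override
      public void run() {
        try {
          FileUtils.deleteDirectory(hashDir);
        } catch (IOException e) {
          // Best effort: leave the directory behind if deletion fails.
        }
      }
    });
  }
}
{code}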



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3042) Sqoop does not clear compile directory under /tmp/sqoop-/compile automatically

2018-07-12 Thread Eric Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541296#comment-16541296
 ] 

Eric Lin commented on SQOOP-3042:
-

Uploading SQOOP-3042.9.patch, which includes test cases and updated docs.

> Sqoop does not clear compile directory under /tmp/sqoop-/compile 
> automatically
> 
>
> Key: SQOOP-3042
> URL: https://issues.apache.org/jira/browse/SQOOP-3042
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Critical
>  Labels: patch
> Attachments: SQOOP-3042.1.patch, SQOOP-3042.2.patch, 
> SQOOP-3042.4.patch, SQOOP-3042.5.patch, SQOOP-3042.6.patch, 
> SQOOP-3042.7.patch, SQOOP-3042.9.patch
>
>
> After running sqoop, all the temp files generated by ClassWriter are left 
> behind on disk, so anyone can check those Java files to see the schema of 
> those tables that Sqoop has been interacting with. By default, the directory 
> is under /tmp/sqoop-/compile.
> In the class org.apache.sqoop.SqoopOptions, in the function getNonceJarDir(), 
> I can see that we do call "deleteOnExit" on the temp dir:
> {code}
> for (int attempts = 0; attempts < MAX_DIR_CREATE_ATTEMPTS; attempts++) {
>   hashDir = new File(baseDir, RandomHash.generateMD5String());
>   while (hashDir.exists()) {
> hashDir = new File(baseDir, RandomHash.generateMD5String());
>   }
>   if (hashDir.mkdirs()) {
> // We created the directory. Use it.
> // If this directory is not actually filled with files, delete it
> // when the JVM quits.
> hashDir.deleteOnExit();
> break;
>   }
> }
> {code}
> However, I believe the deletion fails because the directory is not empty.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3042) Sqoop does not clear compile directory under /tmp/sqoop-/compile automatically

2018-07-12 Thread Eric Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3042:

Attachment: SQOOP-3042.9.patch

> Sqoop does not clear compile directory under /tmp/sqoop-/compile 
> automatically
> 
>
> Key: SQOOP-3042
> URL: https://issues.apache.org/jira/browse/SQOOP-3042
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Critical
>  Labels: patch
> Attachments: SQOOP-3042.1.patch, SQOOP-3042.2.patch, 
> SQOOP-3042.4.patch, SQOOP-3042.5.patch, SQOOP-3042.6.patch, 
> SQOOP-3042.7.patch, SQOOP-3042.9.patch
>
>
> After running sqoop, all the temp files generated by ClassWriter are left 
> behind on disk, so anyone can check those Java files to see the schema of 
> those tables that Sqoop has been interacting with. By default, the directory 
> is under /tmp/sqoop-/compile.
> In the class org.apache.sqoop.SqoopOptions, in the function getNonceJarDir(), 
> I can see that we do call "deleteOnExit" on the temp dir:
> {code}
> for (int attempts = 0; attempts < MAX_DIR_CREATE_ATTEMPTS; attempts++) {
>   hashDir = new File(baseDir, RandomHash.generateMD5String());
>   while (hashDir.exists()) {
> hashDir = new File(baseDir, RandomHash.generateMD5String());
>   }
>   if (hashDir.mkdirs()) {
> // We created the directory. Use it.
> // If this directory is not actually filled with files, delete it
> // when the JVM quits.
> hashDir.deleteOnExit();
> break;
>   }
> }
> {code}
> However, I believe the deletion fails because the directory is not empty.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3042) Sqoop does not clear compile directory under /tmp/sqoop-/compile automatically

2018-07-11 Thread Eric Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3042:

Attachment: SQOOP-3042.6.patch

> Sqoop does not clear compile directory under /tmp/sqoop-/compile 
> automatically
> 
>
> Key: SQOOP-3042
> URL: https://issues.apache.org/jira/browse/SQOOP-3042
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Critical
>  Labels: patch
> Attachments: SQOOP-3042.1.patch, SQOOP-3042.2.patch, 
> SQOOP-3042.4.patch, SQOOP-3042.5.patch, SQOOP-3042.6.patch
>
>
> After running sqoop, all the temp files generated by ClassWriter are left 
> behind on disk, so anyone can check those Java files to see the schema of 
> those tables that Sqoop has been interacting with. By default, the directory 
> is under /tmp/sqoop-/compile.
> In the class org.apache.sqoop.SqoopOptions, in the function getNonceJarDir(), 
> I can see that we do call "deleteOnExit" on the temp dir:
> {code}
> for (int attempts = 0; attempts < MAX_DIR_CREATE_ATTEMPTS; attempts++) {
>   hashDir = new File(baseDir, RandomHash.generateMD5String());
>   while (hashDir.exists()) {
> hashDir = new File(baseDir, RandomHash.generateMD5String());
>   }
>   if (hashDir.mkdirs()) {
> // We created the directory. Use it.
> // If this directory is not actually filled with files, delete it
> // when the JVM quits.
> hashDir.deleteOnExit();
> break;
>   }
> }
> {code}
> However, I believe the deletion fails because the directory is not empty.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3330) Sqoop --append does not work with -Dmapreduce.output.basename

2018-07-11 Thread Eric Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3330:

Attachment: SQOOP-3330-2.patch

> Sqoop --append does not work with -Dmapreduce.output.basename
> -
>
> Key: SQOOP-3330
> URL: https://issues.apache.org/jira/browse/SQOOP-3330
> Project: Sqoop
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.4.7
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Major
> Attachments: SQOOP-3330-1.patch, SQOOP-3330-2.patch
>
>
> When adding --append to a Sqoop directory import with 
> -Dmapreduce.output.basename, all files are ignored, so nothing ends up being 
> imported. See the DEBUG output below:
> {code}
> sqoop import -Dmapreduce.output.basename="eric-test" --connect 
> jdbc:mysql://mysql-host.com/test --username root --password 'root' --table 
> test --target-dir /tmp/ericlin-test/sqoop/test --fields-terminated-by '\t' 
> --verbose --append
> 18/05/28 22:24:44 INFO util.AppendUtils: Appending to directory test
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: _SUCCESS ignored
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-0 ignored
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-1 ignored
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-2 ignored
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Deleting temporary folder 
> 14935e396acc4ea7b9a6236c66064c9b_test
> {code}
> This is because AppendUtils only recognizes file names that start with 
> "part.*-([0-9]"
> https://github.com/apache/sqoop/blob/branch-1.4.7/src/java/org/apache/sqoop/util/AppendUtils.java#L46



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3186) Add Sqoop1 (import + --incremental + --check-column) support for functions/expressions

2018-07-05 Thread Eric Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3186:

Attachment: SQOOP-3186.2.patch

> Add Sqoop1 (import + --incremental + --check-column) support for 
> functions/expressions
> --
>
> Key: SQOOP-3186
> URL: https://issues.apache.org/jira/browse/SQOOP-3186
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Markus Kemper
>Assignee: Eric Lin
>Priority: Major
> Attachments: SQOOP-3186.2.patch, SQOOP-3186.patch
>
>
> Add Sqoop1 (import + --incremental + --check-column) support for 
> functions/expressions, for example:
> *Example*
> {noformat}
> sqoop import \
> --connect $MYCONN --username $MYUSER --password $MYPSWD \
> --table T1 --target-dir /path/directory --merge-key C1 \
> --incremental lastmodified  --last-value '2017-01-01 00:00:00.0' \
> --check-column nvl(C4,to_date('2017-01-01 00:00:00'))
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3042) Sqoop does not clear compile directory under /tmp/sqoop-/compile automatically

2018-07-05 Thread Eric Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16533387#comment-16533387
 ] 

Eric Lin commented on SQOOP-3042:
-

Added a patch based on the latest trunk; waiting for review.

> Sqoop does not clear compile directory under /tmp/sqoop-/compile 
> automatically
> 
>
> Key: SQOOP-3042
> URL: https://issues.apache.org/jira/browse/SQOOP-3042
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Critical
>  Labels: patch
> Attachments: SQOOP-3042.1.patch, SQOOP-3042.2.patch, 
> SQOOP-3042.4.patch, SQOOP-3042.5.patch
>
>
> After running sqoop, all the temp files generated by ClassWriter are left 
> behind on disk, so anyone can check those Java files to see the schema of 
> those tables that Sqoop has been interacting with. By default, the directory 
> is under /tmp/sqoop-/compile.
> In the class org.apache.sqoop.SqoopOptions, in the function getNonceJarDir(), 
> I can see that we do call "deleteOnExit" on the temp dir:
> {code}
> for (int attempts = 0; attempts < MAX_DIR_CREATE_ATTEMPTS; attempts++) {
>   hashDir = new File(baseDir, RandomHash.generateMD5String());
>   while (hashDir.exists()) {
> hashDir = new File(baseDir, RandomHash.generateMD5String());
>   }
>   if (hashDir.mkdirs()) {
> // We created the directory. Use it.
> // If this directory is not actually filled with files, delete it
> // when the JVM quits.
> hashDir.deleteOnExit();
> break;
>   }
> }
> {code}
> However, I believe the deletion fails because the directory is not empty.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3039) Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS

2018-06-02 Thread Eric Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16499189#comment-16499189
 ] 

Eric Lin commented on SQOOP-3039:
-

Updated based on the latest review feedback: https://reviews.apache.org/r/60587/ 
(SQOOP-3039.3.patch)

> Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS
> -
>
> Key: SQOOP-3039
> URL: https://issues.apache.org/jira/browse/SQOOP-3039
> Project: Sqoop
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.4.6, 1.4.7
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: SQOOP-3039.2.patch, SQOOP-3039.3.patch, 
> SQOOP-3039.3.patch, SQOOP-3039.patch
>
>
> To reproduce:
> Set up a MySQL database with the following schema:
> {code}
> CREATE TABLE `test` (
>   `a` time(2) DEFAULT NULL
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
> {code}
> Store the following data in HDFS:
> {code}
> 16:56:53.0999
> 16:56:54.1
> 16:56:53.
> 16:56:54.1230
> {code}
> Run the Sqoop export command to copy the data from HDFS into MySQL:
> {code}
> sqoop export --connect jdbc:mysql:///test --username root 
> --password password --table test  -m 1 --driver com.mysql.jdbc.Driver  
> --export-dir /tmp/test
> {code}
> Command will fail with the following error:
> {code}
> java.lang.RuntimeException: Can't parse input data: '16:56:53.0999'
> at t5.__loadFromFields(t5.java:223)
> at t5.parse(t5.java:166)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:89)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> at 
> org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NumberFormatException: For input string: "53.0999"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:580)
> at java.lang.Integer.parseInt(Integer.java:615)
> at java.sql.Time.valueOf(Time.java:108)
> at t5.__loadFromFields(t5.java:215)
> ... 12 more
> {code}
> It looks like Sqoop uses the java.sql.Time.valueOf function to convert 
> "16:56:53.0999" into a Time object; however, this function only accepts times 
> in "hh:mm:ss" format:
> https://docs.oracle.com/javase/7/docs/api/java/sql/Time.html#valueOf(java.lang.String)
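
A minimal sketch of one way around this, stripping the fractional part before 
parsing (and thereby dropping the sub-second precision); the input value is 
taken from the data above:

{code:java}
import java.sql.Time;

// Sketch only: java.sql.Time.valueOf() accepts "hh:mm:ss", so split off
// any fractional seconds before parsing.
public class TimeParseExample {
  public static void main(String[] args) {
    String input = "16:56:53.0999";
    int dot = input.indexOf('.');
    Time t = Time.valueOf(dot < 0 ? input : input.substring(0, dot));
    System.out.println(t); // prints 16:56:53
  }
}
{code}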



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3039) Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS

2018-06-02 Thread Eric Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3039:

Affects Version/s: 1.4.7

> Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS
> -
>
> Key: SQOOP-3039
> URL: https://issues.apache.org/jira/browse/SQOOP-3039
> Project: Sqoop
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.4.6, 1.4.7
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: SQOOP-3039.2.patch, SQOOP-3039.3.patch, 
> SQOOP-3039.3.patch, SQOOP-3039.patch
>
>
> To reproduce:
> Set up a MySQL database with the following schema:
> {code}
> CREATE TABLE `test` (
>   `a` time(2) DEFAULT NULL
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
> {code}
> Store the following data in HDFS:
> {code}
> 16:56:53.0999
> 16:56:54.1
> 16:56:53.
> 16:56:54.1230
> {code}
> Run the Sqoop export command to copy the data from HDFS into MySQL:
> {code}
> sqoop export --connect jdbc:mysql:///test --username root 
> --password password --table test  -m 1 --driver com.mysql.jdbc.Driver  
> --export-dir /tmp/test
> {code}
> Command will fail with the following error:
> {code}
> java.lang.RuntimeException: Can't parse input data: '16:56:53.0999'
> at t5.__loadFromFields(t5.java:223)
> at t5.parse(t5.java:166)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:89)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> at 
> org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NumberFormatException: For input string: "53.0999"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:580)
> at java.lang.Integer.parseInt(Integer.java:615)
> at java.sql.Time.valueOf(Time.java:108)
> at t5.__loadFromFields(t5.java:215)
> ... 12 more
> {code}
> It looks like Sqoop uses the java.sql.Time.valueOf function to convert 
> "16:56:53.0999" into a Time object; however, this function only accepts times 
> in "hh:mm:ss" format:
> https://docs.oracle.com/javase/7/docs/api/java/sql/Time.html#valueOf(java.lang.String)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3039) Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS

2018-06-02 Thread Eric Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3039:

Attachment: SQOOP-3039.3.patch

> Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS
> -
>
> Key: SQOOP-3039
> URL: https://issues.apache.org/jira/browse/SQOOP-3039
> Project: Sqoop
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: SQOOP-3039.2.patch, SQOOP-3039.3.patch, 
> SQOOP-3039.3.patch, SQOOP-3039.patch
>
>
> To reproduce:
> Set up a MySQL database with the following schema:
> {code}
> CREATE TABLE `test` (
>   `a` time(2) DEFAULT NULL
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
> {code}
> Store the following data in HDFS:
> {code}
> 16:56:53.0999
> 16:56:54.1
> 16:56:53.
> 16:56:54.1230
> {code}
> Run the Sqoop export command to copy the data from HDFS into MySQL:
> {code}
> sqoop export --connect jdbc:mysql:///test --username root 
> --password password --table test  -m 1 --driver com.mysql.jdbc.Driver  
> --export-dir /tmp/test
> {code}
> Command will fail with the following error:
> {code}
> java.lang.RuntimeException: Can't parse input data: '16:56:53.0999'
> at t5.__loadFromFields(t5.java:223)
> at t5.parse(t5.java:166)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:89)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> at 
> org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NumberFormatException: For input string: "53.0999"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:580)
> at java.lang.Integer.parseInt(Integer.java:615)
> at java.sql.Time.valueOf(Time.java:108)
> at t5.__loadFromFields(t5.java:215)
> ... 12 more
> {code}
> It looks like Sqoop uses the java.sql.Time.valueOf function to convert 
> "16:56:53.0999" into a Time object; however, this function only accepts times 
> in "hh:mm:ss" format:
> https://docs.oracle.com/javase/7/docs/api/java/sql/Time.html#valueOf(java.lang.String)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3330) Sqoop --append does not work with -Dmapreduce.output.basename

2018-06-02 Thread Eric Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498908#comment-16498908
 ] 

Eric Lin commented on SQOOP-3330:
-

First version of the patch uploaded; review request at 
https://reviews.apache.org/r/67424/

> Sqoop --append does not work with -Dmapreduce.output.basename
> -
>
> Key: SQOOP-3330
> URL: https://issues.apache.org/jira/browse/SQOOP-3330
> Project: Sqoop
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.4.7
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Major
> Attachments: SQOOP-3330-1.patch
>
>
> When adding --append to a Sqoop directory import with 
> -Dmapreduce.output.basename, all files are ignored, so nothing ends up being 
> imported. See the DEBUG output below:
> {code}
> sqoop import -Dmapreduce.output.basename="eric-test" --connect 
> jdbc:mysql://mysql-host.com/test --username root --password 'root' --table 
> test --target-dir /tmp/ericlin-test/sqoop/test --fields-terminated-by '\t' 
> --verbose --append
> 18/05/28 22:24:44 INFO util.AppendUtils: Appending to directory test
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: _SUCCESS ignored
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-0 ignored
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-1 ignored
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-2 ignored
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Deleting temporary folder 
> 14935e396acc4ea7b9a6236c66064c9b_test
> {code}
> This is because AppendUtils only recognizes file names that start with 
> "part.*-([0-9]"
> https://github.com/apache/sqoop/blob/branch-1.4.7/src/java/org/apache/sqoop/util/AppendUtils.java#L46



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3330) Sqoop --append does not work with -Dmapreduce.output.basename

2018-05-29 Thread Eric Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3330:

Description: 
When adding --append to a Sqoop directory import with 
-Dmapreduce.output.basename, all files are ignored, so nothing ends up being 
imported. See the DEBUG output below:

{code}
sqoop import -Dmapreduce.output.basename="eric-test" --connect 
jdbc:mysql://mysql-host.com/test --username root --password 'root' --table test 
--target-dir /tmp/ericlin-test/sqoop/test --fields-terminated-by '\t' --verbose 
--append

18/05/28 22:24:44 INFO util.AppendUtils: Appending to directory test
18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: _SUCCESS ignored
18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-0 ignored
18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-1 ignored
18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-2 ignored
18/05/28 22:24:44 DEBUG util.AppendUtils: Deleting temporary folder 
14935e396acc4ea7b9a6236c66064c9b_test
{code}

This is because AppendUtils only recognizes file names that start with "part.*-([0-9]"

https://github.com/apache/sqoop/blob/branch-1.4.7/src/java/org/apache/sqoop/util/AppendUtils.java#L46
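
For illustration, a sketch of a pattern that would also accept files named after 
a custom basename. The basename value and the wiring around it are assumptions, 
not the actual AppendUtils code:

{code:java}
import java.util.regex.Pattern;

// Sketch only: accept both the default "part-*" names and files produced
// with a custom mapreduce.output.basename. The basename value is illustrative
// and would normally come from the job configuration.
public class BasenamePatternExample {
  public static void main(String[] args) {
    String basename = "eric-test";
    Pattern dataFilePattern =
        Pattern.compile("(" + Pattern.quote(basename) + "|part).*-([0-9]+).*");
    System.out.println(dataFilePattern.matcher("eric-test-m-00000").matches()); // true
    System.out.println(dataFilePattern.matcher("part-m-00001").matches());      // true
    System.out.println(dataFilePattern.matcher("_SUCCESS").matches());          // false
  }
}
{code}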

  was:
When adding --append to a Sqoop directory import with 
-Dmapreduce.output.basename, all files are ignored, so nothing ends up being 
imported. See the DEBUG output below:

sqoop import -Dmapreduce.output.basename="eric-test" --connect 
jdbc:mysql://mysql-host.com/test --username root --password 'root' --table test 
--target-dir /tmp/ericlin-test/sqoop/test --fields-terminated-by '\t' --verbose 
--append

{code}
18/05/28 22:24:44 INFO util.AppendUtils: Appending to directory test
18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: _SUCCESS ignored
18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-0 ignored
18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-1 ignored
18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-2 ignored
18/05/28 22:24:44 DEBUG util.AppendUtils: Deleting temporary folder 
14935e396acc4ea7b9a6236c66064c9b_test
{code}

This is because AppendUtils only recognizes file names that start with "part.*-([0-9]"

https://github.com/apache/sqoop/blob/branch-1.4.7/src/java/org/apache/sqoop/util/AppendUtils.java#L46


> Sqoop --append does not work with -Dmapreduce.output.basename
> -
>
> Key: SQOOP-3330
> URL: https://issues.apache.org/jira/browse/SQOOP-3330
> Project: Sqoop
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.4.7
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Major
>
> When adding --append to a Sqoop directory import with 
> -Dmapreduce.output.basename, all files are ignored, so nothing ends up being 
> imported. See the DEBUG output below:
> {code}
> sqoop import -Dmapreduce.output.basename="eric-test" --connect 
> jdbc:mysql://mysql-host.com/test --username root --password 'root' --table 
> test --target-dir /tmp/ericlin-test/sqoop/test --fields-terminated-by '\t' 
> --verbose --append
> 18/05/28 22:24:44 INFO util.AppendUtils: Appending to directory test
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: _SUCCESS ignored
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-0 ignored
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-1 ignored
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-2 ignored
> 18/05/28 22:24:44 DEBUG util.AppendUtils: Deleting temporary folder 
> 14935e396acc4ea7b9a6236c66064c9b_test
> {code}
> This is because AppendUtils only recognizes file names that start with 
> "part.*-([0-9]"
> https://github.com/apache/sqoop/blob/branch-1.4.7/src/java/org/apache/sqoop/util/AppendUtils.java#L46



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (SQOOP-3330) Sqoop --append does not work with -Dmapreduce.output.basename

2018-05-29 Thread Eric Lin (JIRA)
Eric Lin created SQOOP-3330:
---

 Summary: Sqoop --append does not work with 
-Dmapreduce.output.basename
 Key: SQOOP-3330
 URL: https://issues.apache.org/jira/browse/SQOOP-3330
 Project: Sqoop
  Issue Type: Bug
  Components: tools
Affects Versions: 1.4.7
Reporter: Eric Lin


When adding --append to a Sqoop directory import with 
-Dmapreduce.output.basename, all files are ignored, so nothing ends up being 
imported. See the DEBUG output below:

sqoop import -Dmapreduce.output.basename="eric-test" --connect 
jdbc:mysql://mysql-host.com/test --username root --password 'root' --table test 
--target-dir /tmp/ericlin-test/sqoop/test --fields-terminated-by '\t' --verbose 
--append

{code}
18/05/28 22:24:44 INFO util.AppendUtils: Appending to directory test
18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: _SUCCESS ignored
18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-0 ignored
18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-1 ignored
18/05/28 22:24:44 DEBUG util.AppendUtils: Filename: eric-test-m-2 ignored
18/05/28 22:24:44 DEBUG util.AppendUtils: Deleting temporary folder 
14935e396acc4ea7b9a6236c66064c9b_test
{code}

This is because AppendUtils only recognizes file names that start with "part.*-([0-9]"

https://github.com/apache/sqoop/blob/branch-1.4.7/src/java/org/apache/sqoop/util/AppendUtils.java#L46



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3042) Sqoop does not clear compile directory under /tmp/sqoop-/compile automatically

2018-03-23 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411372#comment-16411372
 ] 

Eric Lin commented on SQOOP-3042:
-

Hi [~dkozlowski],

The patch has not been reviewed yet. I will try to update it against the latest 
trunk code and push it for review again.

> Sqoop does not clear compile directory under /tmp/sqoop-/compile 
> automatically
> 
>
> Key: SQOOP-3042
> URL: https://issues.apache.org/jira/browse/SQOOP-3042
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Critical
>  Labels: patch
> Attachments: SQOOP-3042.1.patch, SQOOP-3042.2.patch, 
> SQOOP-3042.4.patch
>
>
> After running sqoop, all the temp files generated by ClassWriter are left 
> behind on disk, so anyone can check those JAVA files to see the schema of 
> those tables that Sqoop has been interacting with. By default, the directory 
> is under /tmp/sqoop-/compile.
> In class org.apache.sqoop.SqoopOptions, function getNonceJarDir(), I can see 
> that we did add "deleteOnExit" on the temp dir:
> {code}
> for (int attempts = 0; attempts < MAX_DIR_CREATE_ATTEMPTS; attempts++) {
>   hashDir = new File(baseDir, RandomHash.generateMD5String());
>   while (hashDir.exists()) {
> hashDir = new File(baseDir, RandomHash.generateMD5String());
>   }
>   if (hashDir.mkdirs()) {
> // We created the directory. Use it.
> // If this directory is not actually filled with files, delete it
> // when the JVM quits.
> hashDir.deleteOnExit();
> break;
>   }
> }
> {code}
> However, I believe the deletion fails because the directory is not empty.
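
As a sketch of the point made in the quoted description (my illustration, not 
the attached patch): File#delete and File#deleteOnExit only remove empty 
directories, so the compile directory has to be emptied first, for example with 
a recursive delete registered in a shutdown hook:

{code}
import java.io.File;

public class CompileDirCleanup {
    // Deletes children before the directory itself, since File#delete
    // (and therefore deleteOnExit) fails on non-empty directories.
    static void deleteRecursively(File f) {
        File[] children = f.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        f.delete();
    }

    public static void main(String[] args) {
        File compileDir = new File(args.length > 0 ? args[0]
            : "/tmp/sqoop-demo/compile");
        // A shutdown hook can run arbitrary cleanup code, unlike
        // deleteOnExit, which only handles the single path it was given.
        Runtime.getRuntime().addShutdownHook(
            new Thread(() -> deleteRecursively(compileDir)));
    }
}
{code}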



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3186) Add Sqoop1 (import + --incremental + --check-column) support for functions/expressions

2017-08-14 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125498#comment-16125498
 ] 

Eric Lin commented on SQOOP-3186:
-

[~BoglarkaEgyed],

Sorry, I forgot to post it for review. Now done: https://reviews.apache.org/r/61615/

I am still working on adding a new test case; however, HSQLDB lacks support 
for functions inside aggregate function calls, such as SUM(COALESCE(col1, 1)), 
so I can't get my test going yet.

Could you please at least review whether my change so far makes sense?

Thanks

> Add Sqoop1 (import + --incremental + --check-column) support for 
> functions/expressions
> --
>
> Key: SQOOP-3186
> URL: https://issues.apache.org/jira/browse/SQOOP-3186
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Markus Kemper
>Assignee: Eric Lin
> Attachments: SQOOP-3186.patch
>
>
> Add Sqoop1 (import + --incremental + --check-column) support for 
> functions/expressions, for example:
> *Example*
> {noformat}
> sqoop import \
> --connect $MYCONN --username $MYUSER --password $MYPSWD \
> --table T1 --target-dir /path/directory --merge-key C1 \
> --incremental lastmodified  --last-value '2017-01-01 00:00:00.0' \
> --check-column nvl(C4,to_date('2017-01-01 00:00:00'))
> {noformat}
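
For illustration only, a sketch of how the --check-column-expr option proposed 
above (from the attached patch, not committed Sqoop syntax) might be invoked:

{noformat}
sqoop import \
--connect $MYCONN --username $MYUSER --password $MYPSWD \
--table T1 --target-dir /path/directory --merge-key C1 \
--incremental lastmodified --last-value '2017-01-01 00:00:00.0' \
--check-column-expr "nvl(C4, to_date('2017-01-01 00:00:00'))"
{noformat}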



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (SQOOP-3186) Add Sqoop1 (import + --incremental + --check-column) support for functions/expressions

2017-07-27 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102926#comment-16102926
 ] 

Eric Lin edited comment on SQOOP-3186 at 7/27/17 8:32 AM:
--

First iteration: added a {code}--check-column-expr{code} parameter to support 
this feature, as {code}--check-column{code} is still needed to determine the 
column data type, and it is not easy to extract the column name from the 
function/expression.

Suggestions welcome.


was (Author: ericlin):
First iteration, added "--check-column-expr" parameter to support this feature, 
as "--check-column" is still need to determined column data type and it is not 
easy to extract column name from the function/expression.

Suggestions welcome.

> Add Sqoop1 (import + --incremental + --check-column) support for 
> functions/expressions
> --
>
> Key: SQOOP-3186
> URL: https://issues.apache.org/jira/browse/SQOOP-3186
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Markus Kemper
>Assignee: Eric Lin
> Attachments: SQOOP-3186.patch
>
>
> Add Sqoop1 (import + --incremental + --check-column) support for 
> functions/expressions, for example:
> *Example*
> {noformat}
> sqoop import \
> --connect $MYCONN --username $MYUSER --password $MYPSWD \
> --table T1 --target-dir /path/directory --merge-key C1 \
> --incremental lastmodified  --last-value '2017-01-01 00:00:00.0' \
> --check-column nvl(C4,to_date('2017-01-01 00:00:00'))
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (SQOOP-3186) Add Sqoop1 (import + --incremental + --check-column) support for functions/expressions

2017-07-27 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102926#comment-16102926
 ] 

Eric Lin edited comment on SQOOP-3186 at 7/27/17 8:32 AM:
--

First iteration: added a {code}--check-column-expr{code} parameter to support 
this feature, as {code}--check-column{code} is still needed to determine the 
column data type, and it is not easy to extract the column name from the 
function/expression.

Suggestions welcome.


was (Author: ericlin):
First iteration, added {code}--check-column-expr{code} parameter to support 
this feature, as {code}--check-column{code} is still need to determined column 
data type and it is not easy to extract column name from the 
function/expression.

Suggestions welcome.

> Add Sqoop1 (import + --incremental + --check-column) support for 
> functions/expressions
> --
>
> Key: SQOOP-3186
> URL: https://issues.apache.org/jira/browse/SQOOP-3186
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Markus Kemper
>Assignee: Eric Lin
> Attachments: SQOOP-3186.patch
>
>
> Add Sqoop1 (import + --incremental + --check-column) support for 
> functions/expressions, for example:
> *Example*
> {noformat}
> sqoop import \
> --connect $MYCONN --username $MYUSER --password $MYPSWD \
> --table T1 --target-dir /path/directory --merge-key C1 \
> --incremental lastmodified  --last-value '2017-01-01 00:00:00.0' \
> --check-column nvl(C4,to_date('2017-01-01 00:00:00'))
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (SQOOP-3186) Add Sqoop1 (import + --incremental + --check-column) support for functions/expressions

2017-07-27 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3186:

Attachment: SQOOP-3186.patch

First iteration, added "--check-column-expr" parameter to support this feature, 
as "--check-column" is still need to determined column data type and it is not 
easy to extract column name from the function/expression.

Suggestions welcome.

> Add Sqoop1 (import + --incremental + --check-column) support for 
> functions/expressions
> --
>
> Key: SQOOP-3186
> URL: https://issues.apache.org/jira/browse/SQOOP-3186
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Markus Kemper
>Assignee: Eric Lin
> Attachments: SQOOP-3186.patch
>
>
> Add Sqoop1 (import + --incremental + --check-column) support for 
> functions/expressions, for example:
> *Example*
> {noformat}
> sqoop import \
> --connect $MYCONN --username $MYUSER --password $MYPSWD \
> --table T1 --target-dir /path/directory --merge-key C1 \
> --incremental lastmodified  --last-value '2017-01-01 00:00:00.0' \
> --check-column nvl(C4,to_date('2017-01-01 00:00:00'))
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (SQOOP-3215) Sqoop create hive table to support other formats(avro,parquet)

2017-07-26 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3215:
---

Assignee: Eric Lin

> Sqoop create hive table to support other formats(avro,parquet)
> --
>
> Key: SQOOP-3215
> URL: https://issues.apache.org/jira/browse/SQOOP-3215
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: Nitish Khanna
>Assignee: Eric Lin
>
> Hi Team,
> Sqoop doesn't support any format other than text when we make use of 
> "create-hive-table".
> It would be great if Sqoop could create Avro, Parquet, etc. format tables 
> (schema only).
> I tried the below command to create an Avro-format table in Hive.
> [root@host-10-17-81-13 ~]# sqoop create-hive-table --connect $MYCONN 
> --username $MYUSER --password $MYPSWD --table test_table --hive-table 
> test_table_avro --as-avrodatafile
> Warning: 
> /opt/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/bin/../lib/sqoop/../accumulo 
> does not exist! Accumulo imports will fail.
> Please set $ACCUMULO_HOME to the root of your Accumulo installation.
> 17/07/26 21:23:38 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.8.3
> 17/07/26 21:23:38 WARN tool.BaseSqoopTool: Setting your password on the 
> command-line is insecure. Consider using -P instead.
> 17/07/26 21:23:38 ERROR tool.BaseSqoopTool: Error parsing arguments for 
> create-hive-table:
> 17/07/26 21:23:38 ERROR tool.BaseSqoopTool: Unrecognized argument: 
> --as-avrodatafile
> Please correct me if I missed anything.
> Regards
> Nitish Khanna
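
A possible workaround until create-hive-table learns other formats (my sketch, 
not from the original report): a zero-row import with --hive-import can create 
the table definition instead, since the import tool does accept the format 
flags; whether a given format combination is supported depends on the Sqoop 
version:

{noformat}
sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD \
--table test_table --where "1=0" -m 1 \
--target-dir /tmp/test_table_parquet \
--hive-import --create-hive-table --hive-table test_table_parquet \
--as-parquetfile
{noformat}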



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (SQOOP-3186) Add Sqoop1 (import + --incremental + --check-column) support for functions/expressions

2017-07-06 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3186:
---

Assignee: Eric Lin

> Add Sqoop1 (import + --incremental + --check-column) support for 
> functions/expressions
> --
>
> Key: SQOOP-3186
> URL: https://issues.apache.org/jira/browse/SQOOP-3186
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Markus Kemper
>Assignee: Eric Lin
>
> Add Sqoop1 (import + --incremental + --check-column) support for 
> functions/expressions, for example:
> *Example*
> {noformat}
> sqoop import \
> --connect $MYCONN --username $MYUSER --password $MYPSWD \
> --table T1 --target-dir /path/directory --merge-key C1 \
> --incremental lastmodified  --last-value '2017-01-01 00:00:00.0' \
> --check-column nvl(C4,to_date('2017-01-01 00:00:00'))
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (SQOOP-3188) Sqoop1 (import + --target-dir) with empty directory (/usr/lib/hive) fails with error (java.lang.NoClassDefFoundError: org/json/JSONObject)

2017-07-06 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3188:
---

Assignee: Eric Lin

> Sqoop1 (import + --target-dir) with empty directory (/usr/lib/hive) fails 
> with error (java.lang.NoClassDefFoundError: org/json/JSONObject)
> --
>
> Key: SQOOP-3188
> URL: https://issues.apache.org/jira/browse/SQOOP-3188
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Markus Kemper
>Assignee: Eric Lin
>
> Sqoop1 (import + --target-dir) with empty directory (/usr/lib/hive) fails 
> with error (java.lang.NoClassDefFoundError: org/json/JSONObject), see test 
> case below.
> *Test Case*
> {noformat}
> #
> # STEP 01 - Create Table and Data
> #
> export MYCONN=jdbc:mysql://mysql.sqoop.com:3306/sqoop
> export MYUSER=sqoop
> export MYPSWD=sqoop
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "drop table t1"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "create table t1 (c1 int, c2 date, c3 varchar(10))"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "insert into t1 values (1, current_date, 'some data')"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select * from t1"
> Output:
> ---------------------------------
> | c1  | c2         | c3         | 
> ---------------------------------
> | 1   | 2017-05-10 | some data  | 
> ---------------------------------
> #
> # STEP 02 - Import Data into HDFS 
> #
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> t1 --target-dir /user/root/t1 --delete-target-dir --num-mappers 1
> hdfs dfs -cat /user/root/t1/part*
> Output:
> 17/05/10 13:46:24 INFO mapreduce.ImportJobBase: Transferred 23 bytes in 22.65 
> seconds (1.0155 bytes/sec)
> 17/05/10 13:46:24 INFO mapreduce.ImportJobBase: Retrieved 1 records.
> ~
> 1,2017-05-10,some data
> #
> # STEP 03 - Create Bogus Hive Directory and Attempt to Import into HDFS
> #
> mkdir /usr/lib/hive
> chmod 777 /usr/lib/hive
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> t1 --target-dir /user/root/t1 --delete-target-dir --num-mappers 1
> Output:
> 17/05/10 13:47:44 INFO mapreduce.ImportJobBase: Beginning import of t1
> Exception in thread "main" java.lang.NoClassDefFoundError: org/json/JSONObject
>   at 
> org.apache.sqoop.util.SqoopJsonUtil.getJsonStringforMap(SqoopJsonUtil.java:43)
>   at org.apache.sqoop.SqoopOptions.writeProperties(SqoopOptions.java:776)
>   at 
> org.apache.sqoop.mapreduce.JobBase.putSqoopOptionsToConfiguration(JobBase.java:388)
>   at org.apache.sqoop.mapreduce.JobBase.createJob(JobBase.java:374)
>   at 
> org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:256)
>   at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
>   at 
> org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:127)
>   at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:513)
>   at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:621)
>   at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
>   at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
> Caused by: java.lang.ClassNotFoundException: org.json.JSONObject
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   ... 15 more
> #
> # STEP 04 - Remove Bogus Hive Directory and Attempt to Import into HDFS 
> #
> rm -rf /usr/lib/hive
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> t1 --target-dir /user/root/t1 --delete-target-dir --num-mappers 1
> hdfs dfs -cat /user/root/t1/part*
> Output:
> 17/05/10 13:52:30 INFO mapreduce.ImportJobBase: Transferred 23 bytes in 
> 22.6361 seconds (1.0161 bytes/sec)
> 17/05/10 13:52:30 INFO mapreduce.ImportJobBase: Retrieved 1 records.
> ~
> 1,2017-05-10,some data
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

[jira] [Commented] (SQOOP-3189) Issue while reading double(data type) data from Sql server DB and loading in to Hive String (data type) with Sqoop

2017-07-06 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076266#comment-16076266
 ] 

Eric Lin commented on SQOOP-3189:
-

Hi [~sravan777],

Can you please clarify whether this is a Sqoop1 or Sqoop2 issue? Can you please 
also provide the full command you used to run Sqoop?

This will help us understand the issue better.

Thanks

> Issue while reading double(data type) data from Sql server DB and loading in 
> to Hive String (data type) with Sqoop
> --
>
> Key: SQOOP-3189
> URL: https://issues.apache.org/jira/browse/SQOOP-3189
> Project: Sqoop
>  Issue Type: Bug
>  Components: sqoop2-client
>Reporter: sravan kumar
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (SQOOP-3193) Sqoop comamnd to overwrite the table

2017-07-06 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3193:

Component/s: (was: sqoop2-shell)

> Sqoop comamnd to overwrite the table
> 
>
> Key: SQOOP-3193
> URL: https://issues.apache.org/jira/browse/SQOOP-3193
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.7
> Environment: linux
>Reporter: Jayanthi R
>  Labels: features
> Fix For: no-release
>
>
> I am using the sqoop import command and I want to overwrite the data when I 
> run it a second or third time (that is, every time except the first).
> sqoop import --connect jdbc:postgresql://reuxeuls677.bp.com/auditdb 
> --username audit_user --password theaudituserloginpassword --table 
> test_automation --hive-overwrite --hive-table sqoophive -m 1  --hive-import 
> --hive-database 'test_automation_db' --create-hive-table --hive-table 
> test_automation;
> But I am getting the following error:
> 17/06/06 09:34:13 ERROR tool.ImportTool: Encountered IOException running 
> import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output 
> directory hdfs://HDPEMDCPROD/user/dluser/test_automation already exists
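
A hedged note, not part of the original report: the exception complains about 
the HDFS staging directory rather than the Hive table, so the usual way to 
make the job re-runnable is to add --delete-target-dir and drop 
--create-hive-table (which also fails once the table exists), roughly:

{noformat}
sqoop import --connect jdbc:postgresql://reuxeuls677.bp.com/auditdb \
--username audit_user -P --table test_automation -m 1 \
--delete-target-dir --hive-import --hive-overwrite \
--hive-database test_automation_db --hive-table test_automation
{noformat}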



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (SQOOP-3193) Sqoop comamnd to overwrite the table

2017-07-06 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3193:

Affects Version/s: (was: 1.99.7)
   1.4.7

> Sqoop comamnd to overwrite the table
> 
>
> Key: SQOOP-3193
> URL: https://issues.apache.org/jira/browse/SQOOP-3193
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.7
> Environment: linux
>Reporter: Jayanthi R
>  Labels: features
> Fix For: no-release
>
>
> I am using the sqoop import command and I want to overwrite the data when I 
> run it a second or third time (that is, every time except the first).
> sqoop import --connect jdbc:postgresql://reuxeuls677.bp.com/auditdb 
> --username audit_user --password theaudituserloginpassword --table 
> test_automation --hive-overwrite --hive-table sqoophive -m 1  --hive-import 
> --hive-database 'test_automation_db' --create-hive-table --hive-table 
> test_automation;
> But I am getting the following error:
> 17/06/06 09:34:13 ERROR tool.ImportTool: Encountered IOException running 
> import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output 
> directory hdfs://HDPEMDCPROD/user/dluser/test_automation already exists



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (SQOOP-3039) Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS

2017-07-03 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3039:

Attachment: (was: SQOOP-3039.3.patch)

> Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS
> -
>
> Key: SQOOP-3039
> URL: https://issues.apache.org/jira/browse/SQOOP-3039
> Project: Sqoop
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: SQOOP-3039.2.patch, SQOOP-3039.3.patch, SQOOP-3039.patch
>
>
> To reproduce:
> Set up a MySQL database with the following schema:
> {code}
> CREATE TABLE `test` (
>   `a` time(2) DEFAULT NULL
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
> {code}
> Store the following data in HDFS:
> {code}
> 16:56:53.0999
> 16:56:54.1
> 16:56:53.
> 16:56:54.1230
> {code}
> Run the Sqoop export command to copy the data from HDFS into MySQL:
> {code}
> sqoop export --connect jdbc:mysql:///test --username root 
> --password password --table test  -m 1 --driver com.mysql.jdbc.Driver  
> --export-dir /tmp/test
> {code}
> Command will fail with the following error:
> {code}
> java.lang.RuntimeException: Can't parse input data: '16:56:53.0999'
> at t5.__loadFromFields(t5.java:223)
> at t5.parse(t5.java:166)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:89)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> at 
> org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NumberFormatException: For input string: "53.0999"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:580)
> at java.lang.Integer.parseInt(Integer.java:615)
> at java.sql.Time.valueOf(Time.java:108)
> at t5.__loadFromFields(t5.java:215)
> ... 12 more
> {code}
> It looks like Sqoop uses the java.sql.Time.valueOf function to convert 
> "16:56:53.0999" to a Time object; however, this function only accepts times 
> in "hh:mm:ss" format:
> https://docs.oracle.com/javase/7/docs/api/java/sql/Time.html#valueOf(java.lang.String)
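
To make the failure concrete, a small self-contained demo (my sketch; the 
Timestamp detour is just one way a fix could normalize fractional seconds, not 
necessarily what the attached patches do):

{code}
import java.sql.Time;
import java.sql.Timestamp;

public class TimeParseDemo {
    public static void main(String[] args) {
        try {
            // Only "hh:mm:ss" is accepted, so the fractional part makes
            // Integer.parseInt("53.0999") throw, as in the stack trace above.
            System.out.println(Time.valueOf("16:56:53.0999"));
        } catch (NumberFormatException e) {
            System.out.println("Time.valueOf failed: " + e.getMessage());
        }
        // Timestamp.valueOf accepts fractional seconds once a date part is
        // prepended, so the time can be recovered via its millisecond value.
        Timestamp ts = Timestamp.valueOf("1970-01-01 16:56:53.0999");
        System.out.println(new Time(ts.getTime()));
    }
}
{code}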



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (SQOOP-3039) Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS

2017-07-03 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3039:

Attachment: SQOOP-3039.3.patch

> Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS
> -
>
> Key: SQOOP-3039
> URL: https://issues.apache.org/jira/browse/SQOOP-3039
> Project: Sqoop
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: SQOOP-3039.2.patch, SQOOP-3039.3.patch, SQOOP-3039.patch
>
>
> To reproduce:
> Set up a MySQL database with the following schema:
> {code}
> CREATE TABLE `test` (
>   `a` time(2) DEFAULT NULL
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
> {code}
> Store the following data in HDFS:
> {code}
> 16:56:53.0999
> 16:56:54.1
> 16:56:53.
> 16:56:54.1230
> {code}
> Run the Sqoop export command to copy the data from HDFS into MySQL:
> {code}
> sqoop export --connect jdbc:mysql:///test --username root 
> --password password --table test  -m 1 --driver com.mysql.jdbc.Driver  
> --export-dir /tmp/test
> {code}
> Command will fail with the following error:
> {code}
> java.lang.RuntimeException: Can't parse input data: '16:56:53.0999'
> at t5.__loadFromFields(t5.java:223)
> at t5.parse(t5.java:166)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:89)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> at 
> org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NumberFormatException: For input string: "53.0999"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:580)
> at java.lang.Integer.parseInt(Integer.java:615)
> at java.sql.Time.valueOf(Time.java:108)
> at t5.__loadFromFields(t5.java:215)
> ... 12 more
> {code}
> It looks like Sqoop uses the java.sql.Time.valueOf function to convert 
> "16:56:53.0999" to a Time object; however, this function only accepts times 
> in "hh:mm:ss" format:
> https://docs.oracle.com/javase/7/docs/api/java/sql/Time.html#valueOf(java.lang.String)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (SQOOP-3039) Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS

2017-07-03 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16071976#comment-16071976
 ] 

Eric Lin commented on SQOOP-3039:
-

Review request created: https://reviews.apache.org/r/60587/

> Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS
> -
>
> Key: SQOOP-3039
> URL: https://issues.apache.org/jira/browse/SQOOP-3039
> Project: Sqoop
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: SQOOP-3039.2.patch, SQOOP-3039.3.patch, SQOOP-3039.patch
>
>
> To reproduce:
> Set up a MySQL database with the following schema:
> {code}
> CREATE TABLE `test` (
>   `a` time(2) DEFAULT NULL
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
> {code}
> Store the following data in HDFS:
> {code}
> 16:56:53.0999
> 16:56:54.1
> 16:56:53.
> 16:56:54.1230
> {code}
> Run the Sqoop export command to copy the data from HDFS into MySQL:
> {code}
> sqoop export --connect jdbc:mysql:///test --username root 
> --password password --table test  -m 1 --driver com.mysql.jdbc.Driver  
> --export-dir /tmp/test
> {code}
> Command will fail with the following error:
> {code}
> java.lang.RuntimeException: Can't parse input data: '16:56:53.0999'
> at t5.__loadFromFields(t5.java:223)
> at t5.parse(t5.java:166)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:89)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> at 
> org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NumberFormatException: For input string: "53.0999"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:580)
> at java.lang.Integer.parseInt(Integer.java:615)
> at java.sql.Time.valueOf(Time.java:108)
> at t5.__loadFromFields(t5.java:215)
> ... 12 more
> {code}
> It looks like Sqoop uses the java.sql.Time.valueOf function to convert 
> "16:56:53.0999" to a Time object; however, this function only accepts times 
> in "hh:mm:ss" format:
> https://docs.oracle.com/javase/7/docs/api/java/sql/Time.html#valueOf(java.lang.String)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (SQOOP-3039) Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS

2017-07-03 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3039:

Attachment: SQOOP-3039.3.patch

Attached a new patch based on the latest trunk code.

> Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS
> -
>
> Key: SQOOP-3039
> URL: https://issues.apache.org/jira/browse/SQOOP-3039
> Project: Sqoop
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: SQOOP-3039.2.patch, SQOOP-3039.3.patch, SQOOP-3039.patch
>
>
> To reproduce:
> Set up a MySQL database with the following schema:
> {code}
> CREATE TABLE `test` (
>   `a` time(2) DEFAULT NULL
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
> {code}
> Store the following data in HDFS:
> {code}
> 16:56:53.0999
> 16:56:54.1
> 16:56:53.
> 16:56:54.1230
> {code}
> Run the Sqoop export command to copy the data from HDFS into MySQL:
> {code}
> sqoop export --connect jdbc:mysql:///test --username root 
> --password password --table test  -m 1 --driver com.mysql.jdbc.Driver  
> --export-dir /tmp/test
> {code}
> Command will fail with the following error:
> {code}
> java.lang.RuntimeException: Can't parse input data: '16:56:53.0999'
> at t5.__loadFromFields(t5.java:223)
> at t5.parse(t5.java:166)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:89)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> at 
> org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NumberFormatException: For input string: "53.0999"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:580)
> at java.lang.Integer.parseInt(Integer.java:615)
> at java.sql.Time.valueOf(Time.java:108)
> at t5.__loadFromFields(t5.java:215)
> ... 12 more
> {code}
> It looks like Sqoop uses the java.sql.Time.valueOf function to convert 
> "16:56:53.0999" to a Time object; however, this function only accepts times 
> in "hh:mm:ss" format:
> https://docs.oracle.com/javase/7/docs/api/java/sql/Time.html#valueOf(java.lang.String)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (SQOOP-3150) issue with sqoop hive import with partitions

2017-07-02 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin resolved SQOOP-3150.
-
   Resolution: Not A Bug
Fix Version/s: no-release

This is not a bug; resolving it.

> issue with sqoop hive import with partitions
> 
>
> Key: SQOOP-3150
> URL: https://issues.apache.org/jira/browse/SQOOP-3150
> Project: Sqoop
>  Issue Type: Bug
>  Components: hive-integration
>Affects Versions: 1.4.6
> Environment: Cent-Os
>Reporter: Ankit Kumar
>Assignee: Eric Lin
>  Labels: features
> Fix For: no-release
>
>
> Sqoop Command:
>   sqoop import \
>   ...
>   --hive-import  \
>   --hive-overwrite  \
>   --hive-table employees_p  \
>   --hive-partition-key date  \
>   --hive-partition-value 10-03-2017  \
>   --target-dir ..\
>   -m 1  
>   
>   hive-table script:
>   employees_p is a partitioned table on date(string) column
>   
>   Issue:
>   Case 1: When --target-dir is
> /user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES,
>   running the above sqoop command gets a "directory already exists" error.
>   
>   Case 2: When --target-dir is
> /user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/anyname,
>   the above sqoop command creates a Hive partition (date=10-03-2017) and a
> directory
>   '/user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/date=10-03-2017'
>   
> Expected behaviour: as --hive-partition-key and --hive-partition-value are
> present in the sqoop command, it should auto-create the partitioned directory
> inside EMPLOYEES,
> i.e. '/user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/date=10-03-2017'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (SQOOP-2272) Import decimal columns from mysql to hive 0.14

2017-07-02 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-2272:

Attachment: SQOOP-2272.5.patch

Attaching the latest patch, rebased on the latest trunk code changes.

> Import decimal columns from mysql to hive 0.14
> --
>
> Key: SQOOP-2272
> URL: https://issues.apache.org/jira/browse/SQOOP-2272
> Project: Sqoop
>  Issue Type: Bug
>  Components: sqoop2-shell
>Affects Versions: 1.4.5
>Reporter: Pawan Pawar
>Assignee: Eric Lin
> Attachments: SQOOP-2272.2.patch, SQOOP-2272.3.patch, 
> SQOOP-2272.4.patch, SQOOP-2272.5.patch, SQOOP-2272.patch
>
>
> I am importing data from MySQL to Hive. Several columns in the source table 
> are of type decimal, but Sqoop converts these types into double. How can I 
> import that table with the same precision and scale in Hive?
> My query is:
> sqoop import --connect 
> jdbc:mysql://localhost:3306/SourceDataBase?zeroDateTimeBehavior=convertToNull 
> --username root --password root --hive-table MyHiveDatabaseName.MyTableName 
> --hive-import  --hive-table MyHiveDatabaseName.MyTableName --query 'select * 
> from MyTableName where $CONDITIONS' -m 1 --target-dir 
> /user/hive/warehouse/MyHiveDatabaseName/MyTableName 
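
A hedged workaround while a proper fix is pending: the standard 
--map-column-hive option can pin the Hive type for a given column. The column 
name and precision below are placeholders, and depending on the Sqoop version 
the comma inside DECIMAL(10,2) may need extra escaping:

{noformat}
sqoop import --connect jdbc:mysql://localhost:3306/SourceDataBase \
--username root -P --table MyTableName \
--hive-import --hive-table MyHiveDatabaseName.MyTableName -m 1 \
--map-column-hive 'price=DECIMAL(10,2)'
{noformat}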



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (SQOOP-3042) Sqoop does not clear compile directory under /tmp/sqoop-/compile automatically

2017-06-11 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045927#comment-16045927
 ] 

Eric Lin commented on SQOOP-3042:
-

Review link: https://reviews.apache.org/r/54528/

> Sqoop does not clear compile directory under /tmp/sqoop-/compile 
> automatically
> 
>
> Key: SQOOP-3042
> URL: https://issues.apache.org/jira/browse/SQOOP-3042
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Critical
>  Labels: patch
> Attachments: SQOOP-3042.1.patch, SQOOP-3042.2.patch, 
> SQOOP-3042.4.patch
>
>
> After running sqoop, all the temp files generated by ClassWriter are left 
> behind on disk, so anyone can check those Java files to see the schema of 
> those tables that Sqoop has been interacting with. By default, the directory 
> is under /tmp/sqoop-/compile.
> In class org.apache.sqoop.SqoopOptions, function getNonceJarDir(), I can see 
> that we did add "deleteOnExit" on the temp dir:
> {code}
> for (int attempts = 0; attempts < MAX_DIR_CREATE_ATTEMPTS; attempts++) {
>   hashDir = new File(baseDir, RandomHash.generateMD5String());
>   while (hashDir.exists()) {
> hashDir = new File(baseDir, RandomHash.generateMD5String());
>   }
>   if (hashDir.mkdirs()) {
> // We created the directory. Use it.
> // If this directory is not actually filled with files, delete it
> // when the JVM quits.
> hashDir.deleteOnExit();
> break;
>   }
> }
> {code}
> However, I believe the deletion fails because the directory is not empty.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-3042) Sqoop does not clear compile directory under /tmp/sqoop-/compile automatically

2017-06-11 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3042:

Attachment: SQOOP-3042.4.patch

Attaching the latest patch, which adds a "--delete-compile-dir" option so that 
users can control whether or not to delete the directory.
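
With the patch applied, the intended usage would look roughly like this (a 
sketch based on the option name above; I have not verified the final syntax 
against trunk):

{noformat}
sqoop import --connect jdbc:mysql://mysql-host.com/test --username sqoop -P \
--table test --target-dir /tmp/test --delete-compile-dir
{noformat}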

> Sqoop does not clear compile directory under /tmp/sqoop-/compile 
> automatically
> 
>
> Key: SQOOP-3042
> URL: https://issues.apache.org/jira/browse/SQOOP-3042
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Critical
>  Labels: patch
> Attachments: SQOOP-3042.1.patch, SQOOP-3042.2.patch, 
> SQOOP-3042.4.patch
>
>
> After running sqoop, all the temp files generated by ClassWriter are left 
> behind on disk, so anyone can check those Java files to see the schema of 
> those tables that Sqoop has been interacting with. By default, the directory 
> is under /tmp/sqoop-/compile.
> In class org.apache.sqoop.SqoopOptions, function getNonceJarDir(), I can see 
> that we did add "deleteOnExit" on the temp dir:
> {code}
> for (int attempts = 0; attempts < MAX_DIR_CREATE_ATTEMPTS; attempts++) {
>   hashDir = new File(baseDir, RandomHash.generateMD5String());
>   while (hashDir.exists()) {
> hashDir = new File(baseDir, RandomHash.generateMD5String());
>   }
>   if (hashDir.mkdirs()) {
> // We created the directory. Use it.
> // If this directory is not actually filled with files, delete it
> // when the JVM quits.
> hashDir.deleteOnExit();
> break;
>   }
> }
> {code}
> However, I believe the deletion fails because the directory is not empty.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3188) Sqoop1 (import + --target-dir) with empty directory (/usr/lib/hive) fails with error (java.lang.NoClassDefFoundError: org/json/JSONObject)

2017-05-30 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029229#comment-16029229
 ] 

Eric Lin commented on SQOOP-3188:
-

Hi [~markuskem...@me.com],

I tried to test this against trunk code, and it does NOT seem to be an issue 
for me. I created the /usr/lib/hive directory with 777 permissions and nothing 
under it, and my sqoop import worked as normal. Maybe it is 
environment-specific?

> Sqoop1 (import + --target-dir) with empty directory (/usr/lib/hive) fails 
> with error (java.lang.NoClassDefFoundError: org/json/JSONObject)
> --
>
> Key: SQOOP-3188
> URL: https://issues.apache.org/jira/browse/SQOOP-3188
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Markus Kemper
>
> Sqoop1 (import + --target-dir) with empty directory (/usr/lib/hive) fails 
> with error (java.lang.NoClassDefFoundError: org/json/JSONObject), see test 
> case below.
> *Test Case*
> {noformat}
> #
> # STEP 01 - Create Table and Data
> #
> export MYCONN=jdbc:mysql://mysql.sqoop.com:3306/sqoop
> export MYUSER=sqoop
> export MYPSWD=sqoop
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "drop table t1"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "create table t1 (c1 int, c2 date, c3 varchar(10))"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "insert into t1 values (1, current_date, 'some data')"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select * from t1"
> Output:
> ---------------------------------
> | c1  | c2         | c3         | 
> ---------------------------------
> | 1   | 2017-05-10 | some data  | 
> ---------------------------------
> #
> # STEP 02 - Import Data into HDFS 
> #
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> t1 --target-dir /user/root/t1 --delete-target-dir --num-mappers 1
> hdfs dfs -cat /user/root/t1/part*
> Output:
> 17/05/10 13:46:24 INFO mapreduce.ImportJobBase: Transferred 23 bytes in 22.65 
> seconds (1.0155 bytes/sec)
> 17/05/10 13:46:24 INFO mapreduce.ImportJobBase: Retrieved 1 records.
> ~
> 1,2017-05-10,some data
> #
> # STEP 03 - Create Bogus Hive Directory and Attempt to Import into HDFS
> #
> mkdir /usr/lib/hive
> chmod 777 /usr/lib/hive
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> t1 --target-dir /user/root/t1 --delete-target-dir --num-mappers 1
> Output:
> 17/05/10 13:47:44 INFO mapreduce.ImportJobBase: Beginning import of t1
> Exception in thread "main" java.lang.NoClassDefFoundError: org/json/JSONObject
>   at 
> org.apache.sqoop.util.SqoopJsonUtil.getJsonStringforMap(SqoopJsonUtil.java:43)
>   at org.apache.sqoop.SqoopOptions.writeProperties(SqoopOptions.java:776)
>   at 
> org.apache.sqoop.mapreduce.JobBase.putSqoopOptionsToConfiguration(JobBase.java:388)
>   at org.apache.sqoop.mapreduce.JobBase.createJob(JobBase.java:374)
>   at 
> org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:256)
>   at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
>   at 
> org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:127)
>   at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:513)
>   at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:621)
>   at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
>   at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
> Caused by: java.lang.ClassNotFoundException: org.json.JSONObject
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   ... 15 more
> #
> # STEP 04 - Remove Bogus Hive Directory and Attempt to Import into HDFS 
> #
> rm -rf /usr/lib/hive
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> t1 --target-dir /user/root/t1 --delete-target-dir --num-mappers 1
> hdfs dfs -cat /user/root/t1/part*
> Output:
> 

[jira] [Comment Edited] (SQOOP-3158) Columns added to Mysql after initial sqoop import, export back to table with same schema fails

2017-05-30 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029216#comment-16029216
 ] 

Eric Lin edited comment on SQOOP-3158 at 5/30/17 9:39 AM:
--

Latest patch based on Szabolcs' feedback from review: SQOOP-3158.4.patch


was (Author: ericlin):
Latest patch based on Szabolcs' feedback from review.


> Columns added to Mysql after initial sqoop import, export back to table with 
> same schema fails 
> ---
>
> Key: SQOOP-3158
> URL: https://issues.apache.org/jira/browse/SQOOP-3158
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: viru reddy
>Assignee: Eric Lin
>  Labels: newbie
> Attachments: SQOOP-3158.2.patch, SQOOP-3158.3.patch, 
> SQOOP-3158.4.patch, SQOOP-3158.patch
>
>
> I had a table in MySQL with 2 columns until yesterday. The columns are id 
> and name.
> 1,Raj
> 2,Jack
> I imported this data into HDFS yesterday as a file. Today we added a new 
> column called salary to the table in MySQL. The table now looks like below.
> 1,Raj
> 2,Jack
> 3,Jill,2000
> 4,Nick,3000
> Now I have done an incremental import on this table as a file.
> The Part-m-0 file contains
> 1,Raj
> 2,Jack
> The Part-m-1 file contains
> 3,Jill,2000
> 4,Nick,3000
> Now I have created a new table in MySQL with the same schema as the original 
> MySQL table, with columns id, name and salary.
> When I do a sqoop export, only the last 2 rows get inserted into the new 
> table in MySQL, and the sqoop export fails.
> How can I get all the rows inserted into the table?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-3158) Columns added to Mysql after initial sqoop import, export back to table with same schema fails

2017-05-30 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3158:

Attachment: SQOOP-3158.4.patch

Latest patch based on Szabolcs' feedback from review.


> Columns added to Mysql after initial sqoop import, export back to table with 
> same schema fails 
> ---
>
> Key: SQOOP-3158
> URL: https://issues.apache.org/jira/browse/SQOOP-3158
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: viru reddy
>Assignee: Eric Lin
>  Labels: newbie
> Attachments: SQOOP-3158.2.patch, SQOOP-3158.3.patch, 
> SQOOP-3158.4.patch, SQOOP-3158.patch
>
>
> I had a table in MySQL with 2 columns until yesterday. The columns are id 
> and name.
> 1,Raj
> 2,Jack
> I imported this data into HDFS yesterday as a file. Today we added a new 
> column called salary to the table in MySQL. The table now looks like below.
> 1,Raj
> 2,Jack
> 3,Jill,2000
> 4,Nick,3000
> Now I have done an incremental import on this table as a file.
> The Part-m-0 file contains
> 1,Raj
> 2,Jack
> The Part-m-1 file contains
> 3,Jill,2000
> 4,Nick,3000
> Now I have created a new table in MySQL with the same schema as the original 
> MySQL table, with columns id, name and salary.
> When I do a sqoop export, only the last 2 rows get inserted into the new 
> table in MySQL, and the sqoop export fails.
> How can I get all the rows inserted into the table?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3187) Sqoop import as PARQUET to S3 failed

2017-05-28 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027790#comment-16027790
 ] 

Eric Lin commented on SQOOP-3187:
-

Hi [~sk...@yahoo.com],

After checking the source code, I believe Sqoop currently uses kite-sdk 
version 1.0.0, which does not have S3 support. S3 support was only added to 
Kite from version 1.1.0 onwards via 
https://issues.cloudera.org/browse/KITE-938.

We will have to wait for Kite to be upgraded in Sqoop. I will check further.
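
As a hedged interim workaround (my suggestion, not confirmed in this thread), 
the import can target HDFS first, and the result can then be copied to S3 with 
standard tooling:

{noformat}
sqoop import --connect "jdbc:oracle:thin:@:1521/ORCL" --table mytable \
--username myuser -P --columns col1,col2 -m 1 --as-parquetfile \
--target-dir /tmp/mytable
hadoop distcp /tmp/mytable s3a://bucket/foo/bar/
{noformat}

(On EMR, s3:// would be used instead of s3a://.)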

> Sqoop import as PARQUET to S3 failed
> 
>
> Key: SQOOP-3187
> URL: https://issues.apache.org/jira/browse/SQOOP-3187
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Surendra Nichenametla
>Assignee: Eric Lin
>
> Sqoop import as a Parquet file to S3 fails. The command and error are given 
> below. However, import to an HDFS location works.
> sqoop import --connect "jdbc:oracle:thin:@:1521/ORCL" --table 
> mytable --username myuser --password mypass --target-dir s3://bucket/foo/bar/ 
> --columns col1,col2 -m1 --as-parquetfile
> 17/05/09 21:00:18 ERROR tool.ImportTool: Imported Failed: Wrong FS: 
> s3://bucket/foo/bar, expected: hdfs://master-ip:8020
> P.S. I tried this from Amazon EMR cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SQOOP-3187) Sqoop import as PARQUET to S3 failed

2017-05-26 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3187:
---

Assignee: Eric Lin

> Sqoop import as PARQUET to S3 failed
> 
>
> Key: SQOOP-3187
> URL: https://issues.apache.org/jira/browse/SQOOP-3187
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Surendra Nichenametla
>Assignee: Eric Lin
>
> Sqoop import as a Parquet file to S3 fails. The command and error are given 
> below. However, import to an HDFS location works.
> sqoop import --connect "jdbc:oracle:thin:@:1521/ORCL" --table 
> mytable --username myuser --password mypass --target-dir s3://bucket/foo/bar/ 
> --columns col1,col2 -m1 --as-parquetfile
> 17/05/09 21:00:18 ERROR tool.ImportTool: Imported Failed: Wrong FS: 
> s3://bucket/foo/bar, expected: hdfs://master-ip:8020
> P.S. I tried this from Amazon EMR cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (SQOOP-3158) Columns added to Mysql after initial sqoop import, export back to table with same schema fails

2017-05-21 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16018783#comment-16018783
 ] 

Eric Lin edited comment on SQOOP-3158 at 5/21/17 11:10 AM:
---

Latest patch based on review: SQOOP-3158.3.patch


was (Author: ericlin):
Latest patch based on review.

> Columns added to Mysql after initial sqoop import, export back to table with 
> same schema fails 
> ---
>
> Key: SQOOP-3158
> URL: https://issues.apache.org/jira/browse/SQOOP-3158
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: viru reddy
>Assignee: Eric Lin
>  Labels: newbie
> Attachments: SQOOP-3158.2.patch, SQOOP-3158.3.patch, SQOOP-3158.patch
>
>
> I had a table in MySQL with 2 columns until yesterday. The columns are id 
> and name.
> 1,Raj
> 2,Jack
> I imported this data into HDFS yesterday as a file. Today we added a new 
> column called salary to the table in MySQL. The table now looks like below.
> 1,Raj
> 2,Jack
> 3,Jill,2000
> 4,Nick,3000
> Now I have done an incremental import on this table as a file.
> The Part-m-0 file contains
> 1,Raj
> 2,Jack
> The Part-m-1 file contains
> 3,Jill,2000
> 4,Nick,3000
> Now I have created a new table in MySQL with the same schema as the original 
> MySQL table, with columns id, name and salary.
> When I do a sqoop export, only the last 2 rows get inserted into the new 
> table in MySQL, and the sqoop export fails.
> How can I get all the rows inserted into the table?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-3158) Columns added to Mysql after initial sqoop import, export back to table with same schema fails

2017-05-21 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3158:

Attachment: SQOOP-3158.3.patch

Latest patch based on review.

> Columns added to Mysql after initial sqoop import, export back to table with 
> same schema fails 
> ---
>
> Key: SQOOP-3158
> URL: https://issues.apache.org/jira/browse/SQOOP-3158
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: viru reddy
>Assignee: Eric Lin
>  Labels: newbie
> Attachments: SQOOP-3158.2.patch, SQOOP-3158.3.patch, SQOOP-3158.patch
>
>
> I had a table in MySQL with 2 columns until yesterday. The columns are id 
> and name.
> 1,Raj
> 2,Jack
> I imported this data into HDFS yesterday as a file. Today we added a new 
> column called salary to the table in MySQL. The table now looks like below.
> 1,Raj
> 2,Jack
> 3,Jill,2000
> 4,Nick,3000
> Now I have done an incremental import on this table as a file.
> The Part-m-0 file contains
> 1,Raj
> 2,Jack
> The Part-m-1 file contains
> 3,Jill,2000
> 4,Nick,3000
> Now I have created a new table in MySQL with the same schema as the original 
> MySQL table, with columns id, name and salary.
> When I do a sqoop export, only the last 2 rows get inserted into the new 
> table in MySQL, and the sqoop export fails.
> How can I get all the rows inserted into the table?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SQOOP-3134) Add option to configure Avro schema output file name with (import + --as-avrodatafile)

2017-05-07 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3134:
---

Assignee: Eric Lin

> Add option to configure Avro schema output file name with (import + 
> --as-avrodatafile) 
> ---
>
> Key: SQOOP-3134
> URL: https://issues.apache.org/jira/browse/SQOOP-3134
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Markus Kemper
>Assignee: Eric Lin
>
> Please consider adding an option to configure the Avro schema output file 
> name that is created with Sqoop (import + --as-avrodatafile), example cases 
> below.
> {noformat}
> #
> # STEP 01 - Create Data
> #
> export MYCONN=jdbc:mysql://mysql.cloudera.com:3306/db_coe
> export MYUSER=sqoop
> export MYPSWD=cloudera
> sqoop list-tables --connect $MYCONN --username $MYUSER --password $MYPSWD
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "drop table t1"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "create table t1 (c1 int, c2 date, c3 varchar(10))"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "insert into t1 values (1, current_date, 'some data')"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select * from t1"
> ---------------------------------
> | c1  | c2         | c3         | 
> ---------------------------------
> | 1   | 2017-02-13 | some data  | 
> ---------------------------------
> #
> # STEP 02 - Import + --table + --as-avrodatafile
> #
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> t1 --target-dir /user/root/t1 --delete-target-dir --num-mappers 1 
> --as-avrodatafile 
> ls -l ./*
> Output:
> 17/02/13 12:14:52 INFO mapreduce.ImportJobBase: Transferred 413 bytes in 
> 20.6988 seconds (19.9529 bytes/sec)
> 17/02/13 12:14:52 INFO mapreduce.ImportJobBase: Retrieved 1 records.
> 
> -rw-r--r-- 1 root root   492 Feb 13 12:14 ./t1.avsc < want option to 
> configure this file name
> -rw-r--r-- 1 root root 12462 Feb 13 12:14 ./t1.java
> #
> # STEP 03 - Import + --query + --as-avrodatafile
> #
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select * from t1 where \$CONDITIONS" --split-by c1 --target-dir 
> /user/root/t1 --delete-target-dir --num-mappers 1 --as-avrodatafile 
> ls -l ./*
> Output:
> 17/02/13 12:16:58 INFO mapreduce.ImportJobBase: Transferred 448 bytes in 
> 25.2757 seconds (17.7245 bytes/sec)
> 17/02/13 12:16:58 INFO mapreduce.ImportJobBase: Retrieved 1 records.
> ~
> -rw-r--r-- 1 root root   527 Feb 13 12:16 ./AutoGeneratedSchema.avsc < 
> want option to configure this file name
> -rw-r--r-- 1 root root 12590 Feb 13 12:16 ./QueryResult.java
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3134) Add option to configure Avro schema output file name with (import + --as-avrodatafile)

2017-05-07 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15999808#comment-15999808
 ] 

Eric Lin commented on SQOOP-3134:
-

[~markuskem...@me.com],

Is there any specific reason you want to do this? I can check whether I can 
get this implemented, but I would like to understand the intention.

Thanks

> Add option to configure Avro schema output file name with (import + 
> --as-avrodatafile) 
> ---
>
> Key: SQOOP-3134
> URL: https://issues.apache.org/jira/browse/SQOOP-3134
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: Markus Kemper
>
> Please consider adding an option to configure the Avro schema output file 
> name that is created with Sqoop (import + --as-avrodatafile), example cases 
> below.
> {noformat}
> #
> # STEP 01 - Create Data
> #
> export MYCONN=jdbc:mysql://mysql.cloudera.com:3306/db_coe
> export MYUSER=sqoop
> export MYPSWD=cloudera
> sqoop list-tables --connect $MYCONN --username $MYUSER --password $MYPSWD
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "drop table t1"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "create table t1 (c1 int, c2 date, c3 varchar(10))"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "insert into t1 values (1, current_date, 'some data')"
> sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select * from t1"
> ---------------------------------
> | c1  | c2         | c3         | 
> ---------------------------------
> | 1   | 2017-02-13 | some data  | 
> ---------------------------------
> #
> # STEP 02 - Import + --table + --as-avrodatafile
> #
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --table 
> t1 --target-dir /user/root/t1 --delete-target-dir --num-mappers 1 
> --as-avrodatafile 
> ls -l ./*
> Output:
> 17/02/13 12:14:52 INFO mapreduce.ImportJobBase: Transferred 413 bytes in 
> 20.6988 seconds (19.9529 bytes/sec)
> 17/02/13 12:14:52 INFO mapreduce.ImportJobBase: Retrieved 1 records.
> 
> -rw-r--r-- 1 root root   492 Feb 13 12:14 ./t1.avsc < want option to 
> configure this file name
> -rw-r--r-- 1 root root 12462 Feb 13 12:14 ./t1.java
> #
> # STEP 03 - Import + --query + --as-avrodatafile
> #
> sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --query 
> "select * from t1 where \$CONDITIONS" --split-by c1 --target-dir 
> /user/root/t1 --delete-target-dir --num-mappers 1 --as-avrodatafile 
> ls -l ./*
> Output:
> 17/02/13 12:16:58 INFO mapreduce.ImportJobBase: Transferred 448 bytes in 
> 25.2757 seconds (17.7245 bytes/sec)
> 17/02/13 12:16:58 INFO mapreduce.ImportJobBase: Retrieved 1 records.
> ~
> -rw-r--r-- 1 root root   527 Feb 13 12:16 ./AutoGeneratedSchema.avsc < 
> want option to configure this file name
> -rw-r--r-- 1 root root 12590 Feb 13 12:16 ./QueryResult.java
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3156) PostgreSQL direct connector is ignoring --columns

2017-05-07 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15999805#comment-15999805
 ] 

Eric Lin commented on SQOOP-3156:
-

I was not able to reproduce the issue earlier, but I will try again to confirm.

> PostgreSQL direct connector is ignoring --columns
> -
>
> Key: SQOOP-3156
> URL: https://issues.apache.org/jira/browse/SQOOP-3156
> Project: Sqoop
>  Issue Type: Bug
>  Components: connectors/postgresql
>Affects Versions: 1.4.6
>Reporter: Shane Mathews
>Assignee: Eric Lin
>Priority: Minor
>
> DirectPostgresqlManager seems to query the table's columns directly and use 
> those instead of the passed-in --columns.
> This can be reproduced using something like this:
> sqoop import --connect jdbc:postgresql://foo.com/chdbfoo --username foouser 
> --password foopassword --hive-import --direct --table footable --columns 
> "col1,col2,col3" --where "col1 is not null " --direct-split-size 268435456 
> --hive-table foohive_table
> if that table has more columns than col1, col2, and col3, those will also be 
> queried



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-3158) Columns added to Mysql after initial sqoop import, export back to table with same schema fails

2017-05-06 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3158:

Attachment: SQOOP-3158.2.patch

> Columns added to Mysql after initial sqoop import, export back to table with 
> same schema fails 
> ---
>
> Key: SQOOP-3158
> URL: https://issues.apache.org/jira/browse/SQOOP-3158
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: viru reddy
>Assignee: Eric Lin
>  Labels: newbie
> Attachments: SQOOP-3158.2.patch, SQOOP-3158.patch
>
>
> I have a table in MySQL that had 2 columns until yesterday. The columns are 
> id and name.
> 1,Raj
> 2,Jack
> I imported this data into HDFS yesterday as a file. Today we added a new 
> column called salary to the table in MySQL. The table now looks like below.
> 1,Raj
> 2,Jack
> 3,Jill,2000
> 4,Nick,3000
> Then I ran an incremental import on this table as a file.
> Part-m-0 file contains
> 1,Raj
> 2,Jack
> Part-m-1 file contains
> 3,Jill,2000
> 4,Nick,3000
> Now I created a new table in MySQL with the same schema as the original 
> MySQL table, with columns id, name and salary.
> When I do a sqoop export, only the last 2 rows get inserted into the new 
> table in MySQL, and the sqoop export fails.
> How can I get all the rows inserted into the table?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-3158) Columns added to Mysql after initial sqoop import, export back to table with same schema fails

2017-05-06 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3158:

Attachment: (was: SQOOP-3158.2.patch)

> Columns added to Mysql after initial sqoop import, export back to table with 
> same schema fails 
> ---
>
> Key: SQOOP-3158
> URL: https://issues.apache.org/jira/browse/SQOOP-3158
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: viru reddy
>Assignee: Eric Lin
>  Labels: newbie
> Attachments: SQOOP-3158.patch
>
>
> I have a table in MySQL that had 2 columns until yesterday. The columns are 
> id and name.
> 1,Raj
> 2,Jack
> I imported this data into HDFS yesterday as a file. Today we added a new 
> column called salary to the table in MySQL. The table now looks like below.
> 1,Raj
> 2,Jack
> 3,Jill,2000
> 4,Nick,3000
> Then I ran an incremental import on this table as a file.
> Part-m-0 file contains
> 1,Raj
> 2,Jack
> Part-m-1 file contains
> 3,Jill,2000
> 4,Nick,3000
> Now I created a new table in MySQL with the same schema as the original 
> MySQL table, with columns id, name and salary.
> When I do a sqoop export, only the last 2 rows get inserted into the new 
> table in MySQL, and the sqoop export fails.
> How can I get all the rows inserted into the table?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-3158) Columns added to Mysql after initial sqoop import, export back to table with same schema fails

2017-04-18 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3158:

Affects Version/s: 1.4.6

> Columns added to Mysql after initial sqoop import, export back to table with 
> same schema fails 
> ---
>
> Key: SQOOP-3158
> URL: https://issues.apache.org/jira/browse/SQOOP-3158
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: viru reddy
>Assignee: Eric Lin
>  Labels: newbie
> Attachments: SQOOP-3158.patch
>
>
> I have a table in MySQL that had 2 columns until yesterday. The columns are 
> id and name.
> 1,Raj
> 2,Jack
> I imported this data into HDFS yesterday as a file. Today we added a new 
> column called salary to the table in MySQL. The table now looks like below.
> 1,Raj
> 2,Jack
> 3,Jill,2000
> 4,Nick,3000
> Then I ran an incremental import on this table as a file.
> Part-m-0 file contains
> 1,Raj
> 2,Jack
> Part-m-1 file contains
> 3,Jill,2000
> 4,Nick,3000
> Now I created a new table in MySQL with the same schema as the original 
> MySQL table, with columns id, name and salary.
> When I do a sqoop export, only the last 2 rows get inserted into the new 
> table in MySQL, and the sqoop export fails.
> How can I get all the rows inserted into the table?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3150) issue with sqoop hive import with partitions

2017-04-16 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15970334#comment-15970334
 ] 

Eric Lin commented on SQOOP-3150:
-

Hi Ankit,

I just reviewed the issue you raised, and I noticed that --target-dir does not 
control where the hive table will be created, nor where the target partition 
data will end up. Rather, --target-dir controls ONLY where the data is staged 
before it is loaded into the Hive table.

For example, you specified --target-dir as 
"/user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES", so the data will be 
stored in this directory, and the final Hive query that imports the data into 
Hive will be something like below:

LOAD DATA INPATH 
'hdfs://localhost:9000/user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES' 
OVERWRITE INTO TABLE `employees_p` PARTITION (date='10-03-2017');

You have no control over the final directory that the partition goes into in 
Hive.

Hope that makes sense. So this is not a bug; it works as expected.
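To illustrate (a minimal sketch; the warehouse path below assumes the default 
Hive warehouse location and is hypothetical):

{noformat}
# Staging directory written by the sqoop mappers -- this is what --target-dir
# controls:
hdfs dfs -ls /user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES

# After the LOAD DATA ... OVERWRITE INTO TABLE statement, Hive moves the files
# into the partition directory under the table's own location, e.g.:
hdfs dfs -ls /user/hive/warehouse/employees_p/date=10-03-2017
{noformat}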

> issue with sqoop hive import with partitions
> 
>
> Key: SQOOP-3150
> URL: https://issues.apache.org/jira/browse/SQOOP-3150
> Project: Sqoop
>  Issue Type: Bug
>  Components: hive-integration
>Affects Versions: 1.4.6
> Environment: Cent-Os
>Reporter: Ankit Kumar
>Assignee: Eric Lin
>  Labels: features
>
> Sqoop Command:
>   sqoop import \
>   ...
>   --hive-import  \
>   --hive-overwrite  \
>   --hive-table employees_p  \
>   --hive-partition-key date  \
>   --hive-partition-value 10-03-2017  \
>   --target-dir ..\
>   -m 1  
>   
>   hive-table script:
>   employees_p is a partitioned table on date(string) column
>   
>   Issue:
>   Case 1: When --target-dir is 
> /user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES, 
>   running the above sqoop command fails with the error "directory already 
> exists".
>   
>   Case 2: When --target-dir is 
> /user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/anyname, 
>   the above sqoop command creates a hive partition (date=10-03-2017) and a 
> directory
>   '/user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/date=10-03-2017'
>   
> Expected behaviour: Since --hive-partition-key and --hive-partition-value 
> are present in the sqoop command, it should auto-create the partitioned 
> directory inside EMPLOYEES,
> i.e. '/user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/date=10-03-2017'



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3150) issue with sqoop hive import with partitions

2017-04-15 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969934#comment-15969934
 ] 

Eric Lin commented on SQOOP-3150:
-

Will try to reproduce the issue first using the latest sqoop code.

> issue with sqoop hive import with partitions
> 
>
> Key: SQOOP-3150
> URL: https://issues.apache.org/jira/browse/SQOOP-3150
> Project: Sqoop
>  Issue Type: Bug
>  Components: hive-integration
>Affects Versions: 1.4.6
> Environment: Cent-Os
>Reporter: Ankit Kumar
>Assignee: Eric Lin
>  Labels: features
>
> Sqoop Command:
>   sqoop import \
>   ...
>   --hive-import  \
>   --hive-overwrite  \
>   --hive-table employees_p  \
>   --hive-partition-key date  \
>   --hive-partition-value 10-03-2017  \
>   --target-dir ..\
>   -m 1  
>   
>   hive-table script:
>   employees_p is a partitioned table on date(string) column
>   
>   Issue:
>   Case 1: When --target-dir is 
> /user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES, 
>   running the above sqoop command fails with the error "directory already 
> exists".
>   
>   Case 2: When --target-dir is 
> /user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/anyname, 
>   the above sqoop command creates a hive partition (date=10-03-2017) and a 
> directory
>   '/user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/date=10-03-2017'
>   
> Expected behaviour: Since --hive-partition-key and --hive-partition-value 
> are present in the sqoop command, it should auto-create the partitioned 
> directory inside EMPLOYEES,
> i.e. '/user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/date=10-03-2017'



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SQOOP-3150) issue with sqoop hive import with partitions

2017-04-15 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3150:
---

Assignee: Eric Lin

> issue with sqoop hive import with partitions
> 
>
> Key: SQOOP-3150
> URL: https://issues.apache.org/jira/browse/SQOOP-3150
> Project: Sqoop
>  Issue Type: Bug
>  Components: hive-integration
>Affects Versions: 1.4.6
> Environment: Cent-Os
>Reporter: Ankit Kumar
>Assignee: Eric Lin
>  Labels: features
>
> Sqoop Command:
>   sqoop import \
>   ...
>   --hive-import  \
>   --hive-overwrite  \
>   --hive-table employees_p  \
>   --hive-partition-key date  \
>   --hive-partition-value 10-03-2017  \
>   --target-dir ..\
>   -m 1  
>   
>   hive-table script:
>   employees_p is a partitioned table on date(string) column
>   
>   Issue:
>   Case 1: When --target-dir is 
> /user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES, 
>   running the above sqoop command fails with the error "directory already 
> exists".
>   
>   Case 2: When --target-dir is 
> /user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/anyname, 
>   the above sqoop command creates a hive partition (date=10-03-2017) and a 
> directory
>   '/user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/date=10-03-2017'
>   
> Expected behaviour: Since --hive-partition-key and --hive-partition-value 
> are present in the sqoop command, it should auto-create the partitioned 
> directory inside EMPLOYEES,
> i.e. '/user/hdfs/landing/staging/Hive/partitioned/EMPLOYEES/date=10-03-2017'



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3158) Columns added to Mysql after initial sqoop import, export back to table with same schema fails

2017-04-15 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969927#comment-15969927
 ] 

Eric Lin commented on SQOOP-3158:
-

Review: https://reviews.apache.org/r/58466/

> Columns added to Mysql after initial sqoop import, export back to table with 
> same schema fails 
> ---
>
> Key: SQOOP-3158
> URL: https://issues.apache.org/jira/browse/SQOOP-3158
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: viru reddy
>Assignee: Eric Lin
>  Labels: newbie
> Attachments: SQOOP-3158.patch
>
>
> I have a table in MySQL that had 2 columns until yesterday. The columns are 
> id and name.
> 1,Raj
> 2,Jack
> I imported this data into HDFS yesterday as a file. Today we added a new 
> column called salary to the table in MySQL. The table now looks like below.
> 1,Raj
> 2,Jack
> 3,Jill,2000
> 4,Nick,3000
> Then I ran an incremental import on this table as a file.
> Part-m-0 file contains
> 1,Raj
> 2,Jack
> Part-m-1 file contains
> 3,Jill,2000
> 4,Nick,3000
> Now I created a new table in MySQL with the same schema as the original 
> MySQL table, with columns id, name and salary.
> When I do a sqoop export, only the last 2 rows get inserted into the new 
> table in MySQL, and the sqoop export fails.
> How can I get all the rows inserted into the table?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-3158) Columns added to Mysql after initial sqoop import, export back to table with same schema fails

2017-04-14 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3158:

Attachment: SQOOP-3158.patch

Attaching a patch for this issue. Basically, we need to check whether there is 
more data for the current row; if not, assign a "null" value to the missing 
column so that NULL is entered into the table.

However, if the column itself is defined as NOT NULL in MySQL, the export will 
still fail. This is expected.
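A minimal sketch of the idea (hypothetical helper, not the actual patch): pad 
records that have fewer fields than the export table expects with nulls, so 
the generated INSERT binds SQL NULL for the missing trailing columns.

{code}
import java.util.ArrayList;
import java.util.List;

public final class PadColumnsSketch {
  // Pad a parsed record with nulls up to the expected column count, so a
  // record like [1, Raj] exported into (id, name, salary) becomes
  // [1, Raj, null] instead of failing the export.
  static List<String> padMissingColumns(List<String> fields, int expectedColumnCount) {
    List<String> padded = new ArrayList<>(fields);
    while (padded.size() < expectedColumnCount) {
      padded.add(null); // bound as SQL NULL in the INSERT statement
    }
    return padded;
  }
}
{code}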

> Columns added to Mysql after initial sqoop import, export back to table with 
> same schema fails 
> ---
>
> Key: SQOOP-3158
> URL: https://issues.apache.org/jira/browse/SQOOP-3158
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: viru reddy
>Assignee: Eric Lin
>  Labels: newbie
> Attachments: SQOOP-3158.patch
>
>
> I have a table in MySQL that had 2 columns until yesterday. The columns are 
> id and name.
> 1,Raj
> 2,Jack
> I imported this data into HDFS yesterday as a file. Today we added a new 
> column called salary to the table in MySQL. The table now looks like below.
> 1,Raj
> 2,Jack
> 3,Jill,2000
> 4,Nick,3000
> Then I ran an incremental import on this table as a file.
> Part-m-0 file contains
> 1,Raj
> 2,Jack
> Part-m-1 file contains
> 3,Jill,2000
> 4,Nick,3000
> Now I created a new table in MySQL with the same schema as the original 
> MySQL table, with columns id, name and salary.
> When I do a sqoop export, only the last 2 rows get inserted into the new 
> table in MySQL, and the sqoop export fails.
> How can I get all the rows inserted into the table?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SQOOP-3158) Columns added to Mysql after initial sqoop import, export back to table with same schema fails

2017-04-14 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3158:
---

Assignee: Eric Lin

> Columns added to Mysql after initial sqoop import, export back to table with 
> same schema fails 
> ---
>
> Key: SQOOP-3158
> URL: https://issues.apache.org/jira/browse/SQOOP-3158
> Project: Sqoop
>  Issue Type: Improvement
>Reporter: viru reddy
>Assignee: Eric Lin
>  Labels: newbie
>
> I have a table in MySQL that had 2 columns until yesterday. The columns are 
> id and name.
> 1,Raj
> 2,Jack
> I imported this data into HDFS yesterday as a file. Today we added a new 
> column called salary to the table in MySQL. The table now looks like below.
> 1,Raj
> 2,Jack
> 3,Jill,2000
> 4,Nick,3000
> Then I ran an incremental import on this table as a file.
> Part-m-0 file contains
> 1,Raj
> 2,Jack
> Part-m-1 file contains
> 3,Jill,2000
> 4,Nick,3000
> Now I created a new table in MySQL with the same schema as the original 
> MySQL table, with columns id, name and salary.
> When I do a sqoop export, only the last 2 rows get inserted into the new 
> table in MySQL, and the sqoop export fails.
> How can I get all the rows inserted into the table?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SQOOP-3156) PostgreSQL direct connector is ignoring --columns

2017-04-14 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3156:
---

Assignee: Eric Lin

> PostgreSQL direct connector is ignoring --columns
> -
>
> Key: SQOOP-3156
> URL: https://issues.apache.org/jira/browse/SQOOP-3156
> Project: Sqoop
>  Issue Type: Bug
>  Components: connectors/postgresql
>Affects Versions: 1.4.6
>Reporter: Shane Mathews
>Assignee: Eric Lin
>Priority: Minor
>
> DirectPostgresqlManager seems to query the columns of the table directly and 
> use those instead of the passed-in --columns.
> Can be reproduced using something like this:
> sqoop import --connect jdbc:postgresql://foo.com/chdbfoo --username foouser 
> --password foopassword --hive-import --direct --table footable --columns 
> "col1,col2,col3" --where "col1 is not null " --direct-split-size 268435456 
> --hive-table foohive_table
> If that table has more columns than col1, col2, and col3, those will also be 
> queried.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-2272) Import decimal columns from mysql to hive 0.14

2017-03-28 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-2272:

Attachment: SQOOP-2272.4.patch

Updated the latest patch based on the review.

> Import decimal columns from mysql to hive 0.14
> --
>
> Key: SQOOP-2272
> URL: https://issues.apache.org/jira/browse/SQOOP-2272
> Project: Sqoop
>  Issue Type: Bug
>  Components: sqoop2-shell
>Affects Versions: 1.4.5
>Reporter: Pawan Pawar
>Assignee: Eric Lin
> Attachments: SQOOP-2272.2.patch, SQOOP-2272.3.patch, 
> SQOOP-2272.4.patch, SQOOP-2272.patch
>
>
> I am importing data from mysql to hive. Several columns in the source table 
> are of type decimal, but sqoop converts these types into double. How can I 
> import that table with the same precision and scale in hive?
> My query is:
> sqoop import --connect 
> jdbc:mysql://localhost:3306/SourceDataBase?zeroDateTimeBehavior=convertToNull 
> --username root --password root --hive-table MyHiveDatabaseName.MyTableName 
> --hive-import  --hive-table MyHiveDatabaseName.MyTableName --query 'select * 
> from MyTableName where $CONDITIONS' -m 1 --target-dir 
> /user/hive/warehouse/MyHiveDatabaseName/MyTableName 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-3152) --map-column-hive to support DECIMAL(xx,xx)

2017-03-20 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3152:

Attachment: SQOOP-3152.2.patch

Latest patch based on review feedback.

> --map-column-hive to support DECIMAL(xx,xx)
> ---
>
> Key: SQOOP-3152
> URL: https://issues.apache.org/jira/browse/SQOOP-3152
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: SQOOP-3152.2.patch, SQOOP-3152.patch
>
>
> The following command:
> sqoop import --connect jdbc:mysql://localhost/test --username root --password 
> 'cloudera' --table decimal_table  -m 1 --driver com.mysql.jdbc.Driver 
> --verbose --hive-import --hive-database default --hive-table decimal_table 
> --hive-overwrite --map-column-hive a='DECIMAL(10,4)'
> will fail with below error:
> {code}
> 17/03/13 18:42:09 DEBUG sqoop.Sqoop: Malformed mapping.  Column mapping 
> should be the form key=value[,key=value]*
> java.lang.IllegalArgumentException: Malformed mapping.  Column mapping should 
> be the form key=value[,key=value]*
>   at 
> org.apache.sqoop.SqoopOptions.parseColumnMapping(SqoopOptions.java:1333)
>   at 
> org.apache.sqoop.SqoopOptions.setMapColumnHive(SqoopOptions.java:1349)
>   at 
> org.apache.sqoop.tool.BaseSqoopTool.applyHiveOptions(BaseSqoopTool.java:1198)
>   at org.apache.sqoop.tool.ImportTool.applyOptions(ImportTool.java:1011)
>   at org.apache.sqoop.tool.SqoopTool.parseArguments(SqoopTool.java:435)
>   at org.apache.sqoop.Sqoop.run(Sqoop.java:135)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
>   at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
> Malformed mapping.  Column mapping should be the form key=value[,key=value]*
> {code}
> --map-column-hive should support DECIMAL(10,5) format.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-2272) Import decimal columns from mysql to hive 0.14

2017-03-20 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-2272:

Attachment: SQOOP-2272.3.patch

> Import decimal columns from mysql to hive 0.14
> --
>
> Key: SQOOP-2272
> URL: https://issues.apache.org/jira/browse/SQOOP-2272
> Project: Sqoop
>  Issue Type: Bug
>  Components: sqoop2-shell
>Affects Versions: 1.4.5
>Reporter: Pawan Pawar
>Assignee: Eric Lin
> Attachments: SQOOP-2272.2.patch, SQOOP-2272.3.patch, SQOOP-2272.patch
>
>
> I am importing data from mysql to hive. Several columns in the source table 
> are of type decimal, but sqoop converts these types into double. How can I 
> import that table with the same precision and scale in hive?
> My query is:
> sqoop import --connect 
> jdbc:mysql://localhost:3306/SourceDataBase?zeroDateTimeBehavior=convertToNull 
> --username root --password root --hive-table MyHiveDatabaseName.MyTableName 
> --hive-import  --hive-table MyHiveDatabaseName.MyTableName --query 'select * 
> from MyTableName where $CONDITIONS' -m 1 --target-dir 
> /user/hive/warehouse/MyHiveDatabaseName/MyTableName 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-2272) Import decimal columns from mysql to hive 0.14

2017-03-19 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-2272:

Attachment: SQOOP-2272.2.patch

Latest patch based on review feedback.

> Import decimal columns from mysql to hive 0.14
> --
>
> Key: SQOOP-2272
> URL: https://issues.apache.org/jira/browse/SQOOP-2272
> Project: Sqoop
>  Issue Type: Bug
>  Components: sqoop2-shell
>Affects Versions: 1.4.5
>Reporter: Pawan Pawar
>Assignee: Eric Lin
> Attachments: SQOOP-2272.2.patch, SQOOP-2272.patch
>
>
> I am importing data from mysql to hive. Several columns in the source table 
> are of type decimal, but sqoop converts these types into double. How can I 
> import that table with the same precision and scale in hive?
> My query is:
> sqoop import --connect 
> jdbc:mysql://localhost:3306/SourceDataBase?zeroDateTimeBehavior=convertToNull 
> --username root --password root --hive-table MyHiveDatabaseName.MyTableName 
> --hive-import  --hive-table MyHiveDatabaseName.MyTableName --query 'select * 
> from MyTableName where $CONDITIONS' -m 1 --target-dir 
> /user/hive/warehouse/MyHiveDatabaseName/MyTableName 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-2272) Import decimal columns from mysql to hive 0.14

2017-03-19 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-2272:

Attachment: (was: SQOOP-2272.2.patch)

> Import decimal columns from mysql to hive 0.14
> --
>
> Key: SQOOP-2272
> URL: https://issues.apache.org/jira/browse/SQOOP-2272
> Project: Sqoop
>  Issue Type: Bug
>  Components: sqoop2-shell
>Affects Versions: 1.4.5
>Reporter: Pawan Pawar
>Assignee: Eric Lin
> Attachments: SQOOP-2272.patch
>
>
> I am importing data from mysql to hive. Several columns in the source table 
> are of type decimal, but sqoop converts these types into double. How can I 
> import that table with the same precision and scale in hive?
> My query is:
> sqoop import --connect 
> jdbc:mysql://localhost:3306/SourceDataBase?zeroDateTimeBehavior=convertToNull 
> --username root --password root --hive-table MyHiveDatabaseName.MyTableName 
> --hive-import  --hive-table MyHiveDatabaseName.MyTableName --query 'select * 
> from MyTableName where $CONDITIONS' -m 1 --target-dir 
> /user/hive/warehouse/MyHiveDatabaseName/MyTableName 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Issue Comment Deleted] (SQOOP-2272) Import decimal columns from mysql to hive 0.14

2017-03-19 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-2272:

Comment: was deleted

(was: Latest patch based on review feedback.)

> Import decimal columns from mysql to hive 0.14
> --
>
> Key: SQOOP-2272
> URL: https://issues.apache.org/jira/browse/SQOOP-2272
> Project: Sqoop
>  Issue Type: Bug
>  Components: sqoop2-shell
>Affects Versions: 1.4.5
>Reporter: Pawan Pawar
>Assignee: Eric Lin
> Attachments: SQOOP-2272.patch
>
>
> I am importing data from mysql to hive. Several columns in the source table 
> are of type decimal, but sqoop converts these types into double. How can I 
> import that table with the same precision and scale in hive?
> My query is:
> sqoop import --connect 
> jdbc:mysql://localhost:3306/SourceDataBase?zeroDateTimeBehavior=convertToNull 
> --username root --password root --hive-table MyHiveDatabaseName.MyTableName 
> --hive-import  --hive-table MyHiveDatabaseName.MyTableName --query 'select * 
> from MyTableName where $CONDITIONS' -m 1 --target-dir 
> /user/hive/warehouse/MyHiveDatabaseName/MyTableName 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-2272) Import decimal columns from mysql to hive 0.14

2017-03-19 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-2272:

Attachment: SQOOP-2272.2.patch

Latest patch based on review feedback.

> Import decimal columns from mysql to hive 0.14
> --
>
> Key: SQOOP-2272
> URL: https://issues.apache.org/jira/browse/SQOOP-2272
> Project: Sqoop
>  Issue Type: Bug
>  Components: sqoop2-shell
>Affects Versions: 1.4.5
>Reporter: Pawan Pawar
>Assignee: Eric Lin
> Attachments: SQOOP-2272.2.patch, SQOOP-2272.patch
>
>
> I am importing data from mysql to hive. Several columns in the source table 
> are of type decimal, but sqoop converts these types into double. How can I 
> import that table with the same precision and scale in hive?
> My query is:
> sqoop import --connect 
> jdbc:mysql://localhost:3306/SourceDataBase?zeroDateTimeBehavior=convertToNull 
> --username root --password root --hive-table MyHiveDatabaseName.MyTableName 
> --hive-import  --hive-table MyHiveDatabaseName.MyTableName --query 'select * 
> from MyTableName where $CONDITIONS' -m 1 --target-dir 
> /user/hive/warehouse/MyHiveDatabaseName/MyTableName 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3059) sqoop import into hive converting decimals to decimals

2017-03-14 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923954#comment-15923954
 ] 

Eric Lin commented on SQOOP-3059:
-

I believe this is a duplicate of SQOOP-2272, which already has a patch on the way.

> sqoop import into hive converting decimals to decimals
> --
>
> Key: SQOOP-3059
> URL: https://issues.apache.org/jira/browse/SQOOP-3059
> Project: Sqoop
>  Issue Type: Improvement
>  Components: hive-integration
> Environment: Redhat 6.6, Sqoop 1.4.6
>Reporter: Ying Cao
>Priority: Minor
>
> By default sqoop import into hive converting decimals to double, Decimal type 
> provide more accurate values wider range than Double, so by default way some 
> data will be truncated.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3152) --map-column-hive to support DECIMAL(xx,xx)

2017-03-13 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923260#comment-15923260
 ] 

Eric Lin commented on SQOOP-3152:
-

After the patch, you will still see the following WARNING message:

{code}
17/03/14 10:53:53 WARN hive.TableDefWriter: Column a had to be cast to a less 
precise type in Hive
{code}

This will be addressed in SQOOP-2272.
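For context (a sketch, not the committed patch): the original parse failure 
happens because the key=value list is split naively on commas, so 
a=DECIMAL(10,4) breaks into "a=DECIMAL(10" and "4)". One way to make the split 
parenthesis-aware:

{code}
public final class MapColumnSplitSketch {
  // Hypothetical helper: split a --map-column-hive value on commas that are
  // NOT inside parentheses, so "a=DECIMAL(10,4),b=STRING" yields exactly
  // ["a=DECIMAL(10,4)", "b=STRING"] instead of breaking DECIMAL(10,4) apart.
  static String[] splitMappings(String mapping) {
    return mapping.split(",(?![^(]*\\))");
  }
}
{code}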

> --map-column-hive to support DECIMAL(xx,xx)
> ---
>
> Key: SQOOP-3152
> URL: https://issues.apache.org/jira/browse/SQOOP-3152
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: SQOOP-3152.patch
>
>
> The following command:
> sqoop import --connect jdbc:mysql://localhost/test --username root --password 
> 'cloudera' --table decimal_table  -m 1 --driver com.mysql.jdbc.Driver 
> --verbose --hive-import --hive-database default --hive-table decimal_table 
> --hive-overwrite --map-column-hive a='DECIMAL(10,4)'
> will fail with below error:
> {code}
> 17/03/13 18:42:09 DEBUG sqoop.Sqoop: Malformed mapping.  Column mapping 
> should be the form key=value[,key=value]*
> java.lang.IllegalArgumentException: Malformed mapping.  Column mapping should 
> be the form key=value[,key=value]*
>   at 
> org.apache.sqoop.SqoopOptions.parseColumnMapping(SqoopOptions.java:1333)
>   at 
> org.apache.sqoop.SqoopOptions.setMapColumnHive(SqoopOptions.java:1349)
>   at 
> org.apache.sqoop.tool.BaseSqoopTool.applyHiveOptions(BaseSqoopTool.java:1198)
>   at org.apache.sqoop.tool.ImportTool.applyOptions(ImportTool.java:1011)
>   at org.apache.sqoop.tool.SqoopTool.parseArguments(SqoopTool.java:435)
>   at org.apache.sqoop.Sqoop.run(Sqoop.java:135)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
>   at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
> Malformed mapping.  Column mapping should be the form key=value[,key=value]*
> {code}
> --map-column-hive should support DECIMAL(10,5) format.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3152) --map-column-hive to support DECIMAL(xx,xx)

2017-03-13 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923261#comment-15923261
 ] 

Eric Lin commented on SQOOP-3152:
-

Review link: https://reviews.apache.org/r/57576/

> --map-column-hive to support DECIMAL(xx,xx)
> ---
>
> Key: SQOOP-3152
> URL: https://issues.apache.org/jira/browse/SQOOP-3152
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: SQOOP-3152.patch
>
>
> The following command:
> sqoop import --connect jdbc:mysql://localhost/test --username root --password 
> 'cloudera' --table decimal_table  -m 1 --driver com.mysql.jdbc.Driver 
> --verbose --hive-import --hive-database default --hive-table decimal_table 
> --hive-overwrite --map-column-hive a='DECIMAL(10,4)'
> will fail with below error:
> {code}
> 17/03/13 18:42:09 DEBUG sqoop.Sqoop: Malformed mapping.  Column mapping 
> should be the form key=value[,key=value]*
> java.lang.IllegalArgumentException: Malformed mapping.  Column mapping should 
> be the form key=value[,key=value]*
>   at 
> org.apache.sqoop.SqoopOptions.parseColumnMapping(SqoopOptions.java:1333)
>   at 
> org.apache.sqoop.SqoopOptions.setMapColumnHive(SqoopOptions.java:1349)
>   at 
> org.apache.sqoop.tool.BaseSqoopTool.applyHiveOptions(BaseSqoopTool.java:1198)
>   at org.apache.sqoop.tool.ImportTool.applyOptions(ImportTool.java:1011)
>   at org.apache.sqoop.tool.SqoopTool.parseArguments(SqoopTool.java:435)
>   at org.apache.sqoop.Sqoop.run(Sqoop.java:135)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
>   at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
> Malformed mapping.  Column mapping should be the form key=value[,key=value]*
> {code}
> --map-column-hive should support DECIMAL(10,5) format.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-3152) --map-column-hive to support DECIMAL(xx,xx)

2017-03-13 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3152:

Attachment: SQOOP-3152.patch

Patch added.

> --map-column-hive to support DECIMAL(xx,xx)
> ---
>
> Key: SQOOP-3152
> URL: https://issues.apache.org/jira/browse/SQOOP-3152
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: SQOOP-3152.patch
>
>
> The following command:
> sqoop import --connect jdbc:mysql://localhost/test --username root --password 
> 'cloudera' --table decimal_table  -m 1 --driver com.mysql.jdbc.Driver 
> --verbose --hive-import --hive-database default --hive-table decimal_table 
> --hive-overwrite --map-column-hive a='DECIMAL(10,4)'
> will fail with below error:
> {code}
> 17/03/13 18:42:09 DEBUG sqoop.Sqoop: Malformed mapping.  Column mapping 
> should be the form key=value[,key=value]*
> java.lang.IllegalArgumentException: Malformed mapping.  Column mapping should 
> be the form key=value[,key=value]*
>   at 
> org.apache.sqoop.SqoopOptions.parseColumnMapping(SqoopOptions.java:1333)
>   at 
> org.apache.sqoop.SqoopOptions.setMapColumnHive(SqoopOptions.java:1349)
>   at 
> org.apache.sqoop.tool.BaseSqoopTool.applyHiveOptions(BaseSqoopTool.java:1198)
>   at org.apache.sqoop.tool.ImportTool.applyOptions(ImportTool.java:1011)
>   at org.apache.sqoop.tool.SqoopTool.parseArguments(SqoopTool.java:435)
>   at org.apache.sqoop.Sqoop.run(Sqoop.java:135)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
>   at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
> Malformed mapping.  Column mapping should be the form key=value[,key=value]*
> {code}
> --map-column-hive should support DECIMAL(10,5) format.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SQOOP-3152) --map-column-hive to support DECIMAL(xx,xx)

2017-03-13 Thread Eric Lin (JIRA)
Eric Lin created SQOOP-3152:
---

 Summary: --map-column-hive to support DECIMAL(xx,xx)
 Key: SQOOP-3152
 URL: https://issues.apache.org/jira/browse/SQOOP-3152
 Project: Sqoop
  Issue Type: Bug
Reporter: Eric Lin
Assignee: Eric Lin
Priority: Minor


The following command:

sqoop import --connect jdbc:mysql://localhost/test --username root --password 
'cloudera' --table decimal_table  -m 1 --driver com.mysql.jdbc.Driver --verbose 
--hive-import --hive-database default --hive-table decimal_table 
--hive-overwrite --map-column-hive a='DECIMAL(10,4)'

will fail with below error:

{code}
17/03/13 18:42:09 DEBUG sqoop.Sqoop: Malformed mapping.  Column mapping should 
be the form key=value[,key=value]*
java.lang.IllegalArgumentException: Malformed mapping.  Column mapping should 
be the form key=value[,key=value]*
at 
org.apache.sqoop.SqoopOptions.parseColumnMapping(SqoopOptions.java:1333)
at 
org.apache.sqoop.SqoopOptions.setMapColumnHive(SqoopOptions.java:1349)
at 
org.apache.sqoop.tool.BaseSqoopTool.applyHiveOptions(BaseSqoopTool.java:1198)
at org.apache.sqoop.tool.ImportTool.applyOptions(ImportTool.java:1011)
at org.apache.sqoop.tool.SqoopTool.parseArguments(SqoopTool.java:435)
at org.apache.sqoop.Sqoop.run(Sqoop.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Malformed mapping.  Column mapping should be the form key=value[,key=value]*
{code}

--map-column-hive should support DECIMAL(10,5) format.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-2272) Import decimal columns from mysql to hive 0.14

2017-03-13 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907106#comment-15907106
 ] 

Eric Lin commented on SQOOP-2272:
-

Review: https://reviews.apache.org/r/57551/

> Import decimal columns from mysql to hive 0.14
> --
>
> Key: SQOOP-2272
> URL: https://issues.apache.org/jira/browse/SQOOP-2272
> Project: Sqoop
>  Issue Type: Bug
>  Components: sqoop2-shell
>Affects Versions: 1.4.5
>Reporter: Pawan Pawar
>Assignee: Eric Lin
> Attachments: SQOOP-2272.patch
>
>
> I am importing data from mysql to hive. several columns in source table are 
> of type decimal. but sqoop convert this types into the double. How can I 
> import that table with same precision and scale in hive.
> My query is:
> sqoop import --connect 
> jdbc:mysql://localhost:3306/SourceDataBase?zeroDateTimeBehavior=convertToNull 
> --username root --password root --hive-table MyHiveDatabaseName.MyTableName 
> --hive-import  --hive-table MyHiveDatabaseName.MyTableName --query 'select * 
> from MyTableName where $CONDITIONS' -m 1 --target-dir 
> /user/hive/warehouse/MyHiveDatabaseName/MyTableName 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-2272) Import decimal columns from mysql to hive 0.14

2017-03-13 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-2272:

Attachment: SQOOP-2272.patch

Allowing sqoop to convert DECIMAL to DECIMAL in Hive
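The gist (a minimal sketch with a hypothetical helper name, not the actual 
patch): map JDBC DECIMAL/NUMERIC columns to Hive DECIMAL(p,s) using the source 
precision and scale, instead of falling back to DOUBLE.

{code}
import java.sql.Types;

public final class HiveDecimalSketch {
  // Map a JDBC type to a Hive column type, preserving DECIMAL precision/scale.
  static String toHiveType(int sqlType, int precision, int scale) {
    if (sqlType == Types.DECIMAL || sqlType == Types.NUMERIC) {
      return "DECIMAL(" + precision + "," + scale + ")";
    }
    return "DOUBLE"; // fallback for other numeric types in this sketch
  }
}
{code}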

> Import decimal columns from mysql to hive 0.14
> --
>
> Key: SQOOP-2272
> URL: https://issues.apache.org/jira/browse/SQOOP-2272
> Project: Sqoop
>  Issue Type: Bug
>  Components: sqoop2-shell
>Affects Versions: 1.4.5
>Reporter: Pawan Pawar
>Assignee: Eric Lin
> Attachments: SQOOP-2272.patch
>
>
> I am importing data from mysql to hive. Several columns in the source table 
> are of type decimal, but sqoop converts these types into double. How can I 
> import that table with the same precision and scale in hive?
> My query is:
> sqoop import --connect 
> jdbc:mysql://localhost:3306/SourceDataBase?zeroDateTimeBehavior=convertToNull 
> --username root --password root --hive-table MyHiveDatabaseName.MyTableName 
> --hive-import  --hive-table MyHiveDatabaseName.MyTableName --query 'select * 
> from MyTableName where $CONDITIONS' -m 1 --target-dir 
> /user/hive/warehouse/MyHiveDatabaseName/MyTableName 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (SQOOP-2272) Import decimal columns from mysql to hive 0.14

2017-03-12 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-2272:
---

Assignee: Eric Lin

> Import decimal columns from mysql to hive 0.14
> --
>
> Key: SQOOP-2272
> URL: https://issues.apache.org/jira/browse/SQOOP-2272
> Project: Sqoop
>  Issue Type: Bug
>  Components: sqoop2-shell
>Affects Versions: 1.4.5
>Reporter: Pawan Pawar
>Assignee: Eric Lin
>
> I am importing data from mysql to hive. Several columns in the source table 
> are of type decimal, but sqoop converts these types into double. How can I 
> import that table with the same precision and scale in hive?
> My query is:
> sqoop import --connect 
> jdbc:mysql://localhost:3306/SourceDataBase?zeroDateTimeBehavior=convertToNull 
> --username root --password root --hive-table MyHiveDatabaseName.MyTableName 
> --hive-import  --hive-table MyHiveDatabaseName.MyTableName --query 'select * 
> from MyTableName where $CONDITIONS' -m 1 --target-dir 
> /user/hive/warehouse/MyHiveDatabaseName/MyTableName 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-2272) Import decimal columns from mysql to hive 0.14

2017-03-12 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15906470#comment-15906470
 ] 

Eric Lin commented on SQOOP-2272:
-

I will try to see if I can add a patch to this issue.

> Import decimal columns from mysql to hive 0.14
> --
>
> Key: SQOOP-2272
> URL: https://issues.apache.org/jira/browse/SQOOP-2272
> Project: Sqoop
>  Issue Type: Bug
>  Components: sqoop2-shell
>Affects Versions: 1.4.5
>Reporter: Pawan Pawar
>Assignee: Eric Lin
>
> I am importing data from mysql to hive. Several columns in the source table 
> are of type decimal, but sqoop converts these types into double. How can I 
> import that table with the same precision and scale in hive?
> My query is:
> sqoop import --connect 
> jdbc:mysql://localhost:3306/SourceDataBase?zeroDateTimeBehavior=convertToNull 
> --username root --password root --hive-table MyHiveDatabaseName.MyTableName 
> --hive-import  --hive-table MyHiveDatabaseName.MyTableName --query 'select * 
> from MyTableName where $CONDITIONS' -m 1 --target-dir 
> /user/hive/warehouse/MyHiveDatabaseName/MyTableName 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (SQOOP-3135) Not enough error message for debugging when parameters missing

2017-02-22 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879641#comment-15879641
 ] 

Eric Lin edited comment on SQOOP-3135 at 2/23/17 1:13 AM:
--

Also log Exception messages.

SQOOP-3135.2.patch


was (Author: ericlin):
Also log Exception messages.

> Not enough error message for debugging when parameters missing
> --
>
> Key: SQOOP-3135
> URL: https://issues.apache.org/jira/browse/SQOOP-3135
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
> Attachments: SQOOP-3135.2.patch, SQOOP-3135.patch
>
>
> Run the following sqoop command:
> {code}
> sqoop job --create test -- import --connect jdbc:mysql://localhost/test 
> --username root --password $pass --target-dir /tmp/test10 -m 1 --driver 
> com.mysql.jdbc.Driver  --table test
> {code}
> Because $pass is not set, the command fails with the following error:
> {code}
> 16/12/21 05:48:32 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7-SNAPSHOT
> 16/12/21 05:48:33 ERROR tool.JobTool: Error parsing arguments to the 
> job-specific tool.
> 16/12/21 05:48:33 ERROR tool.JobTool: See 'sqoop help ' for usage.
> {code}
> This is not informative. Checking the code in the JobTool class:
> {code}
> // Now feed the arguments into the tool itself.
> try {
>   childOptions = childTool.parseArguments(parseableChildArgv,
>   null, childOptions, false);
>   childTool.appendArgs(extraChildArgv);
>   childTool.validateOptions(childOptions);
> } catch (ParseException pe) {
>   LOG.error("Error parsing arguments to the job-specific tool.");
>   LOG.error("See 'sqoop help ' for usage.");
>   return 1;
> } catch (SqoopOptions.InvalidOptionsException e) {
>   System.err.println(e.getMessage());
>   return 1;
> }
> {code}
> The ParseException's message has been dropped; we should print the message 
> in the exception so that a more meaningful error is shown.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-3135) Not enough error message for debugging when parameters missing

2017-02-22 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3135:

Attachment: SQOOP-3135.2.patch

Also log Exception messages.

> Not enough error message for debugging when parameters missing
> --
>
> Key: SQOOP-3135
> URL: https://issues.apache.org/jira/browse/SQOOP-3135
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
> Attachments: SQOOP-3135.2.patch, SQOOP-3135.patch
>
>
> Run the following sqoop command:
> {code}
> sqoop job --create test -- import --connect jdbc:mysql://localhost/test 
> --username root --password $pass --target-dir /tmp/test10 -m 1 --driver 
> com.mysql.jdbc.Driver  --table test
> {code}
> Because $pass is not set, the command fails with the following error:
> {code}
> 16/12/21 05:48:32 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7-SNAPSHOT
> 16/12/21 05:48:33 ERROR tool.JobTool: Error parsing arguments to the 
> job-specific tool.
> 16/12/21 05:48:33 ERROR tool.JobTool: See 'sqoop help ' for usage.
> {code}
> This is not informative. Checking the code in the JobTool class:
> {code}
> // Now feed the arguments into the tool itself.
> try {
>   childOptions = childTool.parseArguments(parseableChildArgv,
>   null, childOptions, false);
>   childTool.appendArgs(extraChildArgv);
>   childTool.validateOptions(childOptions);
> } catch (ParseException pe) {
>   LOG.error("Error parsing arguments to the job-specific tool.");
>   LOG.error("See 'sqoop help ' for usage.");
>   return 1;
> } catch (SqoopOptions.InvalidOptionsException e) {
>   System.err.println(e.getMessage());
>   return 1;
> }
> {code}
> The ParseException's message has been dropped; we should print the message 
> in the exception so that a more meaningful error is shown.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3135) Not enough error message for debugging when parameters missing

2017-02-15 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868912#comment-15868912
 ] 

Eric Lin commented on SQOOP-3135:
-

Review requested: https://reviews.apache.org/r/56737/

> Not enough error message for debugging when parameters missing
> --
>
> Key: SQOOP-3135
> URL: https://issues.apache.org/jira/browse/SQOOP-3135
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
> Attachments: SQOOP-3135.patch
>
>
> Run the following sqoop command:
> {code}
> sqoop job --create test -- import --connect jdbc:mysql://localhost/test 
> --username root --password $pass --target-dir /tmp/test10 -m 1 --driver 
> com.mysql.jdbc.Driver  --table test
> {code}
> Because $pass is not set, the command fails with the following error:
> {code}
> 16/12/21 05:48:32 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7-SNAPSHOT
> 16/12/21 05:48:33 ERROR tool.JobTool: Error parsing arguments to the 
> job-specific tool.
> 16/12/21 05:48:33 ERROR tool.JobTool: See 'sqoop help ' for usage.
> {code}
> This is not informative. Checking the code in the JobTool class:
> {code}
> // Now feed the arguments into the tool itself.
> try {
>   childOptions = childTool.parseArguments(parseableChildArgv,
>   null, childOptions, false);
>   childTool.appendArgs(extraChildArgv);
>   childTool.validateOptions(childOptions);
> } catch (ParseException pe) {
>   LOG.error("Error parsing arguments to the job-specific tool.");
>   LOG.error("See 'sqoop help ' for usage.");
>   return 1;
> } catch (SqoopOptions.InvalidOptionsException e) {
>   System.err.println(e.getMessage());
>   return 1;
> }
> {code}
> The ParseException's message has been dropped; we should print the message 
> in the exception so that a more meaningful error is shown.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3135) Not enough error message for debugging when parameters missing

2017-02-15 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868904#comment-15868904
 ] 

Eric Lin commented on SQOOP-3135:
-

Provided a patch so that the same command prints out the following:

{code}
16/12/21 05:57:29 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7-SNAPSHOT
16/12/21 05:57:29 ERROR tool.JobTool: Error parsing arguments to the 
job-specific tool: org.apache.commons.cli.MissingArgumentException: Missing 
argument for option: password
16/12/21 05:57:29 ERROR tool.JobTool: See 'sqoop help ' for usage.
{code}

which tells us exactly that the password option was missing.
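A sketch of the patched catch clause (inferred from the output above; the 
surrounding try block from JobTool, quoted below, is unchanged):

{code}
} catch (ParseException pe) {
  // Append the exception so the root cause (e.g. "Missing argument for
  // option: password") shows up in the log instead of being swallowed.
  LOG.error("Error parsing arguments to the job-specific tool: " + pe);
  LOG.error("See 'sqoop help ' for usage.");
  return 1;
}
{code}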

> Not enough error message for debugging when parameters missing
> --
>
> Key: SQOOP-3135
> URL: https://issues.apache.org/jira/browse/SQOOP-3135
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
> Attachments: SQOOP-3135.patch
>
>
> Run the following sqoop command:
> {code}
> sqoop job --create test -- import --connect jdbc:mysql://localhost/test 
> --username root --password $pass --target-dir /tmp/test10 -m 1 --driver 
> com.mysql.jdbc.Driver  --table test
> {code}
> Because $pass is not set, the command fails with the following error:
> {code}
> 16/12/21 05:48:32 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7-SNAPSHOT
> 16/12/21 05:48:33 ERROR tool.JobTool: Error parsing arguments to the 
> job-specific tool.
> 16/12/21 05:48:33 ERROR tool.JobTool: See 'sqoop help ' for usage.
> {code}
> This is not informative. Checking the code in the JobTool class:
> {code}
> // Now feed the arguments into the tool itself.
> try {
>   childOptions = childTool.parseArguments(parseableChildArgv,
>   null, childOptions, false);
>   childTool.appendArgs(extraChildArgv);
>   childTool.validateOptions(childOptions);
> } catch (ParseException pe) {
>   LOG.error("Error parsing arguments to the job-specific tool.");
>   LOG.error("See 'sqoop help ' for usage.");
>   return 1;
> } catch (SqoopOptions.InvalidOptionsException e) {
>   System.err.println(e.getMessage());
>   return 1;
> }
> {code}
> The ParseException's message has been dropped; we should print the message 
> in the exception so that a more meaningful error is shown.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-3135) Not enough error message for debugging when parameters missing

2017-02-15 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3135:

Attachment: SQOOP-3135.patch

> Not enough error message for debugging when parameters missing
> --
>
> Key: SQOOP-3135
> URL: https://issues.apache.org/jira/browse/SQOOP-3135
> Project: Sqoop
>  Issue Type: Improvement
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
> Attachments: SQOOP-3135.patch
>
>
> Run the following sqoop command:
> {code}
> sqoop job --create test -- import --connect jdbc:mysql://localhost/test 
> --username root --password $pass --target-dir /tmp/test10 -m 1 --driver 
> com.mysql.jdbc.Driver  --table test
> {code}
> Due to $pass is not set, command will fail with the following error:
> {code}
> 16/12/21 05:48:32 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7-SNAPSHOT
> 16/12/21 05:48:33 ERROR tool.JobTool: Error parsing arguments to the 
> job-specific tool.
> 16/12/21 05:48:33 ERROR tool.JobTool: See 'sqoop help ' for usage.
> {code}
> This is not informative, by checking the code in JobTool class:
> {code}
> // Now feed the arguments into the tool itself.
> try {
>   childOptions = childTool.parseArguments(parseableChildArgv,
>   null, childOptions, false);
>   childTool.appendArgs(extraChildArgv);
>   childTool.validateOptions(childOptions);
> } catch (ParseException pe) {
>   LOG.error("Error parsing arguments to the job-specific tool.");
>   LOG.error("See 'sqoop help ' for usage.");
>   return 1;
> } catch (SqoopOptions.InvalidOptionsException e) {
>   System.err.println(e.getMessage());
>   return 1;
> }
> {code}
> The ParseException pe's message has been dropped, we should print out the 
> message in the exception so that more meaningful message will be printed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (SQOOP-3135) Not enough error message for debugging when parameters missing

2017-02-15 Thread Eric Lin (JIRA)
Eric Lin created SQOOP-3135:
---

 Summary: Not enough error message for debugging when parameters 
missing
 Key: SQOOP-3135
 URL: https://issues.apache.org/jira/browse/SQOOP-3135
 Project: Sqoop
  Issue Type: Improvement
Affects Versions: 1.4.6
Reporter: Eric Lin
Assignee: Eric Lin


Run the following sqoop command:

{code}
sqoop job --create test -- import --connect jdbc:mysql://localhost/test 
--username root --password $pass --target-dir /tmp/test10 -m 1 --driver 
com.mysql.jdbc.Driver  --table test
{code}

Because $pass is not set, the command fails with the following error:

{code}
16/12/21 05:48:32 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7-SNAPSHOT
16/12/21 05:48:33 ERROR tool.JobTool: Error parsing arguments to the 
job-specific tool.
16/12/21 05:48:33 ERROR tool.JobTool: See 'sqoop help ' for usage.
{code}

This is not informative. Checking the code in the JobTool class:

{code}
// Now feed the arguments into the tool itself.
try {
  childOptions = childTool.parseArguments(parseableChildArgv,
  null, childOptions, false);
  childTool.appendArgs(extraChildArgv);
  childTool.validateOptions(childOptions);
} catch (ParseException pe) {
  LOG.error("Error parsing arguments to the job-specific tool.");
  LOG.error("See 'sqoop help ' for usage.");
  return 1;
} catch (SqoopOptions.InvalidOptionsException e) {
  System.err.println(e.getMessage());
  return 1;
}
{code}

The ParseException's message has been dropped; we should print the message in 
the exception so that a more meaningful error is shown.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SQOOP-3061) Sqoop --options-file failed with error "Malformed option in options file" even though the query is correct

2017-02-15 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868800#comment-15868800
 ] 

Eric Lin commented on SQOOP-3061:
-

Hi [~maugli],

Thanks a lot for your help and guidance in getting my very first commit in. I 
will try to contribute more in the near future.

> Sqoop --options-file failed with error "Malformed option in options file" 
> even though the query is correct
> --
>
> Key: SQOOP-3061
> URL: https://issues.apache.org/jira/browse/SQOOP-3061
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
>  Labels: patch
> Attachments: SQOOP-3061.2.patch, SQOOP-3061.3.patch, 
> SQOOP-3061.4.patch, SQOOP-3061.5.patch, SQOOP-3061.patch
>
>
> if you have the following in the options file:
> --query
> SELECT * FROM test WHERE a = 'b'
> and then run 
> {code}
> sqoop --options-file 
> {code}
> it will fail with the following error:
> {code}
> 16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
> java.lang.Exception: Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
> at 
> org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
> at 
> com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> {code}
> This is caused by the function 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary, which 
> only checks for starting and ending quotes and fails if the query does not 
> start with a quote but ends with one, like the example query above.
> {code}
>   private static String removeQuoteCharactersIfNecessary(String fileName,
>   String option, char quote) throws Exception {
> boolean startingQuote = (option.charAt(0) == quote);
> boolean endingQuote = (option.charAt(option.length() - 1) == quote);
> if (startingQuote && endingQuote) {
>   if (option.length() == 1) {
> throw new Exception("Malformed option in options file("
> + fileName + "): " + option);
>   }
>   return option.substring(1, option.length() - 1);
> }
> if (startingQuote || endingQuote) {
>throw new Exception("Malformed option in options file("
>+ fileName + "): " + option);
> }
> return option;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (SQOOP-3061) Sqoop --options-file failed with error "Malformed option in options file" even though the query is correct

2017-01-18 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3061:

Attachment: SQOOP-3061.5.patch

Latest patch based on review - SQOOP-3061.5.patch

> Sqoop --options-file failed with error "Malformed option in options file" 
> even though the query is correct
> --
>
> Key: SQOOP-3061
> URL: https://issues.apache.org/jira/browse/SQOOP-3061
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
>  Labels: patch
> Attachments: SQOOP-3061.2.patch, SQOOP-3061.3.patch, 
> SQOOP-3061.4.patch, SQOOP-3061.5.patch, SQOOP-3061.patch
>
>
> if you have the following in the options file:
> --query
> SELECT * FROM test WHERE a = 'b'
> and then run 
> {code}
> sqoop --options-file 
> {code}
> it will fail with the following error:
> {code}
> 16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
> java.lang.Exception: Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
> at 
> org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
> at 
> com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> {code}
> This is caused by function 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
> checks for starting and ending quotes and will fail if the query does not 
> start with a quote but ends with a quote, like the example query above.
> {code}
>   private static String removeQuoteCharactersIfNecessary(String fileName,
>   String option, char quote) throws Exception {
> boolean startingQuote = (option.charAt(0) == quote);
> boolean endingQuote = (option.charAt(option.length() - 1) == quote);
> if (startingQuote && endingQuote) {
>   if (option.length() == 1) {
> throw new Exception("Malformed option in options file("
> + fileName + "): " + option);
>   }
>   return option.substring(1, option.length() - 1);
> }
> if (startingQuote || endingQuote) {
>throw new Exception("Malformed option in options file("
>+ fileName + "): " + option);
> }
> return option;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SQOOP-3040) Millisecond precision lost for Time data type when importing

2017-01-15 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3040:

Assignee: Eric Lin

> Millisecond precision lost for Time data type when importing
> -
>
> Key: SQOOP-3040
> URL: https://issues.apache.org/jira/browse/SQOOP-3040
> Project: Sqoop
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>
> To reproduce, create a MySQL database with a time(6) data type:
> {code}
> CREATE TABLE `test` (
>   `a` time(6) DEFAULT NULL
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
> INSERT INTO test VALUES ('16:56:53.09');
> {code}
> Import the data with Sqoop and the value becomes "16:56:53"; the millisecond 
> precision is lost.
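
For context, a small illustration (plain JDK behaviour, not Sqoop code) of why 
the fraction disappears: java.sql.Time can carry milliseconds internally, but 
its toString() always renders "hh:mm:ss".

{code}
java.sql.Time t = java.sql.Time.valueOf("16:56:53");
t.setTime(t.getTime() + 90);  // add 90 milliseconds
System.out.println(t);        // still prints "16:56:53" -- the fraction is dropped
{code}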



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SQOOP-3061) Sqoop --options-file failed with error "Malformed option in options file" even though the query is correct

2017-01-08 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3061:

Attachment: SQOOP-3061.4.patch

Latest patch based on review

> Sqoop --options-file failed with error "Malformed option in options file" 
> even though the query is correct
> --
>
> Key: SQOOP-3061
> URL: https://issues.apache.org/jira/browse/SQOOP-3061
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
>  Labels: patch
> Attachments: SQOOP-3061.2.patch, SQOOP-3061.3.patch, 
> SQOOP-3061.4.patch, SQOOP-3061.patch
>
>
> if you have the following in the options file:
> --query
> SELECT * FROM test WHERE a = 'b'
> and then run 
> {code}
> sqoop --options-file 
> {code}
> it will fail with the following error:
> {code}
> 16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
> java.lang.Exception: Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
> at 
> org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
> at 
> com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> {code}
> This is caused by function 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
> checks for starting and ending quotes and will fail if the query does not 
> start with a quote but ends with a quote, like the example query above.
> {code}
>   private static String removeQuoteCharactersIfNecessary(String fileName,
>   String option, char quote) throws Exception {
> boolean startingQuote = (option.charAt(0) == quote);
> boolean endingQuote = (option.charAt(option.length() - 1) == quote);
> if (startingQuote && endingQuote) {
>   if (option.length() == 1) {
> throw new Exception("Malformed option in options file("
> + fileName + "): " + option);
>   }
>   return option.substring(1, option.length() - 1);
> }
> if (startingQuote || endingQuote) {
>throw new Exception("Malformed option in options file("
>+ fileName + "): " + option);
> }
> return option;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (SQOOP-3039) Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS

2016-12-14 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3039:
---

Assignee: Eric Lin

> Sqoop unable to export Time data "13:14:12.1234" into Time column in RDBMS
> -
>
> Key: SQOOP-3039
> URL: https://issues.apache.org/jira/browse/SQOOP-3039
> Project: Sqoop
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: SQOOP-3039.2.patch, SQOOP-3039.patch
>
>
> To reproduce:
> Set up MySQL database with following schema:
> {code}
> CREATE TABLE `test` (
>   `a` time(2) DEFAULT NULL
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
> {code}
> Store the following data in HDFS:
> {code}
> 16:56:53.0999
> 16:56:54.1
> 16:56:53.
> 16:56:54.1230
> {code}
> Run the Sqoop export command to copy data from HDFS into MySQL:
> {code}
> sqoop export --connect jdbc:mysql:///test --username root 
> --password password --table test  -m 1 --driver com.mysql.jdbc.Driver  
> --export-dir /tmp/test
> {code}
> Command will fail with the following error:
> {code}
> java.lang.RuntimeException: Can't parse input data: '16:56:53.0999'
> at t5.__loadFromFields(t5.java:223)
> at t5.parse(t5.java:166)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:89)
> at 
> org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> at 
> org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NumberFormatException: For input string: "53.0999"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:580)
> at java.lang.Integer.parseInt(Integer.java:615)
> at java.sql.Time.valueOf(Time.java:108)
> at t5.__loadFromFields(t5.java:215)
> ... 12 more
> {code}
> Looks like Sqoop uses the java.sql.Time.valueOf function to convert 
> "16:56:53.0999" to a Time object; however, this function only accepts times 
> in "hh:mm:ss" format:
> https://docs.oracle.com/javase/7/docs/api/java/sql/Time.html#valueOf(java.lang.String)
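
The limitation is easy to confirm in isolation (illustration only, outside of 
Sqoop):

{code}
java.sql.Time.valueOf("16:56:53");       // parses fine
java.sql.Time.valueOf("16:56:53.0999");  // throws NumberFormatException: "53.0999"
{code}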



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (SQOOP-3061) Sqoop --options-file failed with error "Malformed option in options file" even though the query is correct

2016-12-14 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3061:
---

Assignee: Eric Lin

> Sqoop --options-file failed with error "Malformed option in options file" 
> even though the query is correct
> --
>
> Key: SQOOP-3061
> URL: https://issues.apache.org/jira/browse/SQOOP-3061
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
>  Labels: patch
> Attachments: SQOOP-3061.2.patch, SQOOP-3061.3.patch, SQOOP-3061.patch
>
>
> if you have the following in the options file:
> --query
> SELECT * FROM test WHERE a = 'b'
> and then run 
> {code}
> sqoop --options-file 
> {code}
> it will fail with the following error:
> {code}
> 16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
> java.lang.Exception: Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
> at 
> org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
> at 
> com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> {code}
> This is caused by function 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
> checks for starting and ending quotes and will fail if the query does not 
> start with a quote but ends with a quote, like the example query above.
> {code}
>   private static String removeQuoteCharactersIfNecessary(String fileName,
>   String option, char quote) throws Exception {
> boolean startingQuote = (option.charAt(0) == quote);
> boolean endingQuote = (option.charAt(option.length() - 1) == quote);
> if (startingQuote && endingQuote) {
>   if (option.length() == 1) {
> throw new Exception("Malformed option in options file("
> + fileName + "): " + option);
>   }
>   return option.substring(1, option.length() - 1);
> }
> if (startingQuote || endingQuote) {
>throw new Exception("Malformed option in options file("
>+ fileName + "): " + option);
> }
> return option;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (SQOOP-3042) Sqoop does not clear compile directory under /tmp/sqoop-/compile automatically

2016-12-14 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin reassigned SQOOP-3042:
---

Assignee: Eric Lin

> Sqoop does not clear compile directory under /tmp/sqoop-/compile 
> automatically
> 
>
> Key: SQOOP-3042
> URL: https://issues.apache.org/jira/browse/SQOOP-3042
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Critical
>  Labels: patch
> Attachments: SQOOP-3042.1.patch, SQOOP-3042.2.patch
>
>
> After running Sqoop, all the temp files generated by ClassWriter are left 
> behind on disk, so anyone can check those Java files to see the schema of 
> the tables that Sqoop has been interacting with. By default, the directory 
> is under /tmp/sqoop-/compile.
> In class org.apache.sqoop.SqoopOptions, method getNonceJarDir(), I can see 
> that we do call "deleteOnExit" on the temp dir:
> {code}
> for (int attempts = 0; attempts < MAX_DIR_CREATE_ATTEMPTS; attempts++) {
>   hashDir = new File(baseDir, RandomHash.generateMD5String());
>   while (hashDir.exists()) {
> hashDir = new File(baseDir, RandomHash.generateMD5String());
>   }
>   if (hashDir.mkdirs()) {
> // We created the directory. Use it.
> // If this directory is not actually filled with files, delete it
> // when the JVM quits.
> hashDir.deleteOnExit();
> break;
>   }
> }
> {code}
> However, I believe the delete fails because the directory is not empty.
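
One possible direction (a sketch only, assuming recursive cleanup of the compile 
directory is acceptable; this is not the attached patch) is to delete the 
directory tree in a JVM shutdown hook, since deleteOnExit() only removes empty 
directories:

{code}
// Assume hashDir is the (effectively final) compile directory created in
// getNonceJarDir(), as in the snippet quoted above.
Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
  @Override
  public void run() {
    try {
      // Unlike deleteOnExit(), deleteDirectory removes the contents as well.
      org.apache.commons.io.FileUtils.deleteDirectory(hashDir);
    } catch (java.io.IOException e) {
      // best-effort cleanup at JVM exit; nothing more we can do here
    }
  }
}));
{code}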



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (SQOOP-3061) Sqoop --options-file failed with error "Malformed option in options file" even though the query is correct

2016-12-08 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728401#comment-15728401
 ] 

Eric Lin edited comment on SQOOP-3061 at 12/8/16 9:50 AM:
--

Provided new patch as suggested in review: https://reviews.apache.org/r/54251/, 
SQOOP-3061.2.patch.

Thanks [~vasas]!


was (Author: ericlin):
Provided new patch as suggested in review: https://reviews.apache.org/r/54251/.

Thanks [~vasas]!

> Sqoop --options-file failed with error "Malformed option in options file" 
> even though the query is correct
> --
>
> Key: SQOOP-3061
> URL: https://issues.apache.org/jira/browse/SQOOP-3061
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Priority: Minor
>  Labels: patch
> Attachments: SQOOP-3061.2.patch, SQOOP-3061.3.patch, SQOOP-3061.patch
>
>
> if you have the following in the options file:
> --query
> SELECT * FROM test WHERE a = 'b'
> and then run 
> {code}
> sqoop --options-file 
> {code}
> it will fail with the following error:
> {code}
> 16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
> java.lang.Exception: Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
> at 
> org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
> at 
> com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> {code}
> This is caused by function 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
> checks for starting and ending quotes and will fail if the query does not 
> start with a quote but ends with a quote, like the example query above.
> {code}
>   private static String removeQuoteCharactersIfNecessary(String fileName,
>   String option, char quote) throws Exception {
> boolean startingQuote = (option.charAt(0) == quote);
> boolean endingQuote = (option.charAt(option.length() - 1) == quote);
> if (startingQuote && endingQuote) {
>   if (option.length() == 1) {
> throw new Exception("Malformed option in options file("
> + fileName + "): " + option);
>   }
>   return option.substring(1, option.length() - 1);
> }
> if (startingQuote || endingQuote) {
>throw new Exception("Malformed option in options file("
>+ fileName + "): " + option);
> }
> return option;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (SQOOP-3061) Sqoop --options-file failed with error "Malformed option in options file" even though the query is correct

2016-12-08 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15731695#comment-15731695
 ] 

Eric Lin edited comment on SQOOP-3061 at 12/8/16 9:49 AM:
--

Based on the latest review feedback from [~maugli], I have made some changes in 
SQOOP-3061.3.patch


was (Author: ericlin):
Based on latest review feedback from [~maugli], I have made some changes.

> Sqoop --options-file failed with error "Malformed option in options file" 
> even though the query is correct
> --
>
> Key: SQOOP-3061
> URL: https://issues.apache.org/jira/browse/SQOOP-3061
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Priority: Minor
>  Labels: patch
> Attachments: SQOOP-3061.2.patch, SQOOP-3061.3.patch, SQOOP-3061.patch
>
>
> if you have the following in the options file:
> --query
> SELECT * FROM test WHERE a = 'b'
> and then run 
> {code}
> sqoop --options-file 
> {code}
> it will fail with the following error:
> {code}
> 16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
> java.lang.Exception: Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
> at 
> org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
> at 
> com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> {code}
> This is caused by function 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
> checks for starting and ending quotes and will fail if the query does not 
> start with a quote but ends with a quote, like the example query above.
> {code}
>   private static String removeQuoteCharactersIfNecessary(String fileName,
>   String option, char quote) throws Exception {
> boolean startingQuote = (option.charAt(0) == quote);
> boolean endingQuote = (option.charAt(option.length() - 1) == quote);
> if (startingQuote && endingQuote) {
>   if (option.length() == 1) {
> throw new Exception("Malformed option in options file("
> + fileName + "): " + option);
>   }
>   return option.substring(1, option.length() - 1);
> }
> if (startingQuote || endingQuote) {
>throw new Exception("Malformed option in options file("
>+ fileName + "): " + option);
> }
> return option;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SQOOP-3061) Sqoop --options-file failed with error "Malformed option in options file" even though the query is correct

2016-12-08 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3061:

Attachment: SQOOP-3061.3.patch

Based on the latest review feedback from [~maugli], I have made some changes.

> Sqoop --options-file failed with error "Malformed option in options file" 
> even though the query is correct
> --
>
> Key: SQOOP-3061
> URL: https://issues.apache.org/jira/browse/SQOOP-3061
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Priority: Minor
>  Labels: patch
> Attachments: SQOOP-3061.2.patch, SQOOP-3061.3.patch, SQOOP-3061.patch
>
>
> if you have the following in the options file:
> --query
> SELECT * FROM test WHERE a = 'b'
> and then run 
> {code}
> sqoop --options-file 
> {code}
> it will fail with the following error:
> {code}
> 16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
> java.lang.Exception: Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
> at 
> org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
> at 
> com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> {code}
> This is caused by function 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
> checks for starting and ending quotes and will fail if the query does not 
> start with a quote but ends with a quote, like the example query above.
> {code}
>   private static String removeQuoteCharactersIfNecessary(String fileName,
>   String option, char quote) throws Exception {
> boolean startingQuote = (option.charAt(0) == quote);
> boolean endingQuote = (option.charAt(option.length() - 1) == quote);
> if (startingQuote && endingQuote) {
>   if (option.length() == 1) {
> throw new Exception("Malformed option in options file("
> + fileName + "): " + option);
>   }
>   return option.substring(1, option.length() - 1);
> }
> if (startingQuote || endingQuote) {
>throw new Exception("Malformed option in options file("
>+ fileName + "): " + option);
> }
> return option;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SQOOP-2973) Is sqoop version 1.4.6 compatible with hbase version 1.2.1?

2016-12-07 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-2973:

Attachment: hbase1.2.4-error.out

Hi [~vasas],

Sorry about the delay. I have attached the hbase1.2.4-error.out file which 
contains the verbose output of the following command:

{code}
sqoop import --connect jdbc:mysql://localhost/test --username root --password 
'password' --table hbase_import  -m 1 --hbase-table hbase_import 
--column-family cf --hbase-create-table --verbose  --hbase-row-key 'rk' 
--verbose
{code}

Thanks

> Is sqoop version 1.4.6 compatible with hbase version 1.2.1?
> ---
>
> Key: SQOOP-2973
> URL: https://issues.apache.org/jira/browse/SQOOP-2973
> Project: Sqoop
>  Issue Type: Bug
> Environment: Hadoop version 2.7.2
>Reporter: Rajib Mandal
> Attachments: hbase1.2.4-error.out
>
>
> We are getting the below error while importing data from Oracle to HBase
> error Exception in thread "main" java.lang.NoSuchMethodError: 
> org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
>  
> The command executed
>  sqoop import --connect jdbc:oracle:thin:@//orcl. --username sysdba 
> --password  --table GENDER2 --columns "EMPLOYEE_ID,FIRST_NAME,GENDER" 
> --hbase-table employee --column-family GENDER2 --hbase-row-key EMPLOYEE_ID 
> --hbase-create-table
> Is there any way to resolve this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (SQOOP-3061) Sqoop --options-file failed with error "Malformed option in options file" even though the query is correct

2016-12-01 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15711666#comment-15711666
 ] 

Eric Lin commented on SQOOP-3061:
-

Hi [~vasas],

Thanks for the offer to review my patch. I have created the review in 
https://reviews.apache.org/r/54251/.

Cheers

> Sqoop --options-file failed with error "Malformed option in options file" 
> even though the query is correct
> --
>
> Key: SQOOP-3061
> URL: https://issues.apache.org/jira/browse/SQOOP-3061
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Priority: Minor
>  Labels: patch
> Attachments: SQOOP-3061.patch
>
>
> if you have the following in the options file:
> --query
> SELECT * FROM test WHERE a = 'b'
> and then run 
> {code}
> sqoop --options-file 
> {code}
> it will fail with the following error:
> {code}
> 16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
> java.lang.Exception: Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
> at 
> org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
> at 
> com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> {code}
> This is caused by function 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
> checks for starting and ending quotes and will fail if the query does not 
> start with a quote but ends with a quote, like the example query above.
> {code}
>   private static String removeQuoteCharactersIfNecessary(String fileName,
>   String option, char quote) throws Exception {
> boolean startingQuote = (option.charAt(0) == quote);
> boolean endingQuote = (option.charAt(option.length() - 1) == quote);
> if (startingQuote && endingQuote) {
>   if (option.length() == 1) {
> throw new Exception("Malformed option in options file("
> + fileName + "): " + option);
>   }
>   return option.substring(1, option.length() - 1);
> }
> if (startingQuote || endingQuote) {
>throw new Exception("Malformed option in options file("
>+ fileName + "): " + option);
> }
> return option;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (SQOOP-2970) Can we use oozie to schedule sqoop2 jobs

2016-11-27 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15699415#comment-15699415
 ] 

Eric Lin commented on SQOOP-2970:
-

I believe that's more of a question for Oozie rather than Sqoop?

> Can we use oozie to schedule sqoop2 jobs
> 
>
> Key: SQOOP-2970
> URL: https://issues.apache.org/jira/browse/SQOOP-2970
> Project: Sqoop
>  Issue Type: Wish
>  Components: sqoop2-hdfs-connector, sqoop2-jdbc-connector, 
> sqoop2-kite-connector, sqoop2-server
>Affects Versions: 1.99.6
>Reporter: Prakash K
>  Labels: easyfix
>
> I am using Oozie-4.2.0 and Sqoop-1.99.6 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SQOOP-3061) Sqoop --options-file failed with error "Malformed option in options file" even though the query is correct

2016-11-24 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3061:

Description: 
if you have the following in the options file:

--query
SELECT * FROM test WHERE a = 'b'

and then run 

{code}
sqoop --options-file 
{code}

it will fail with the following error:

{code}
16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
java.lang.Exception: Malformed option in options 
file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
FROM test WHERE a = 'b'
at 
org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
at 
org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
at 
org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
at 
com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Malformed option in options 
file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
FROM test WHERE a = 'b'
{code}

This is caused by function 
org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
checks for starting and ending quotes and will fail if the query does not start 
with a quote but ends with a quote, like the example query above.

{code}
  private static String removeQuoteCharactersIfNecessary(String fileName,
      String option, char quote) throws Exception {
    boolean startingQuote = (option.charAt(0) == quote);
    boolean endingQuote = (option.charAt(option.length() - 1) == quote);

    if (startingQuote && endingQuote) {
      if (option.length() == 1) {
        throw new Exception("Malformed option in options file("
            + fileName + "): " + option);
      }
      return option.substring(1, option.length() - 1);
    }

    if (startingQuote || endingQuote) {
      throw new Exception("Malformed option in options file("
          + fileName + "): " + option);
    }

    return option;
  }
{code}

  was:
if you have the following in the options file:

--query
SELECT * FROM test WHERE a = 'b'

and then run 

{code}
sqoop --options-file 
{code}

it will fail with the following error:

{code}
16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
java.lang.Exception: Malformed option in options 
file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
FROM test WHERE a = 'b'
at 
org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
at 
org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
at 
org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
at 
com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Malformed option in options 
file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
FROM test WHERE a = 'b'
{code}

This is caused by function 
org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
checks for starting and ending quotes and will fail if the query does not start 
with a quote but ends with a quote, like the example query above.


> Sqoop --options-file failed with error "Malformed option in options file" 
> even though the query is correct
> --
>
> Key: SQOOP-3061
> URL: https://issues.apache.org/jira/browse/SQOOP-3061
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Priority: Minor
>  Labels: patch
> Attachments: SQOOP-3061.patch
>
>
> if you have the following in the options file:
> --query
> SELECT * FROM test WHERE a = 'b'
> and then run 
> {code}
> sqoop --options-file 
> {code}
> it will fail with the following error:
> {code}
> 16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
> java.lang.Exception: Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
> at 
> org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
> at 
> com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)

[jira] [Updated] (SQOOP-3061) Sqoop --options-file failed with error "Malformed option in options file" even though the query is correct

2016-11-24 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3061:

Description: 
if you have the following in the options file:

--query
SELECT * FROM test WHERE a = 'b'

and then run 

{code}
sqoop --options-file 
{code}

it will fail with the following error:

{code}
16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
java.lang.Exception: Malformed option in options 
file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
FROM test WHERE a = 'b'
at 
org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
at 
org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
at 
org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
at 
com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Malformed option in options 
file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
FROM test WHERE a = 'b'
{code}

This is caused by function 
org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
checks for starting and ending quotes and will fail if the query does not start 
with a quote but ends with a quote, like the example query above.

  was:
if you have the following in the options file:

--query
SELECT * FROM test WHERE a = 'b'

it will fail with the following error:

{code}
16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
java.lang.Exception: Malformed option in options 
file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
FROM test WHERE a = 'b'
at 
org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
at 
org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
at 
org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
at 
com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Malformed option in options 
file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
FROM test WHERE a = 'b'
{code}

This is caused by function 
org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
checks for starting and ending quotes and will fail if the query does not start 
with a quote but ends with a quote, like the example query above.


> Sqoop --options-file failed with error "Malformed option in options file" 
> even though the query is correct
> --
>
> Key: SQOOP-3061
> URL: https://issues.apache.org/jira/browse/SQOOP-3061
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Priority: Minor
>  Labels: patch
> Attachments: SQOOP-3061.patch
>
>
> if you have the following in the options file:
> --query
> SELECT * FROM test WHERE a = 'b'
> and then run 
> {code}
> sqoop --options-file 
> {code}
> it will fail with the following error:
> {code}
> 16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
> java.lang.Exception: Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
> at 
> org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
> at 
> com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> {code}
> This is caused by function 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
> checks for starting and ending quotes and will fail if the query does not 
> start with a quote but ends with a quote, like the example query above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SQOOP-3061) Sqoop --options-file failed with error "Malformed option in options file" even though the query is correct

2016-11-24 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated SQOOP-3061:

Attachment: SQOOP-3061.patch

Added patch to take care of the use case and added test cases to validate the 
change.

As part of the change, Sqoop will also reject any queries that do not have 
matching quotes, which were not detected before, like the one below:

SELECT * FROM test WHERE a = 'b"

Existing test cases all passed without issues.
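
For readers following along, the rule described above can be sketched like this 
(an illustration of the intended behaviour only; the method name and exact 
checks are assumptions, not the code in the attached patch):

{code}
private static String removeQuotesSketch(String fileName, String option,
    char quote) throws Exception {
  boolean enclosed = option.length() > 1
      && option.charAt(0) == quote
      && option.charAt(option.length() - 1) == quote;
  if (enclosed) {
    // Fully quoted option: strip the enclosing quotes.
    return option.substring(1, option.length() - 1);
  }
  // Not enclosed: accept the value as long as its quote characters pair up.
  int count = 0;
  for (int i = 0; i < option.length(); i++) {
    if (option.charAt(i) == quote) {
      count++;
    }
  }
  if (count % 2 != 0) {
    // e.g. SELECT * FROM test WHERE a = 'b"  (unmatched single quote)
    throw new Exception("Malformed option in options file("
        + fileName + "): " + option);
  }
  return option;  // e.g. SELECT * FROM test WHERE a = 'b'
}
{code}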

> Sqoop --options-file failed with error "Malformed option in options file" 
> even though the query is correct
> --
>
> Key: SQOOP-3061
> URL: https://issues.apache.org/jira/browse/SQOOP-3061
> Project: Sqoop
>  Issue Type: Bug
>Affects Versions: 1.4.6
>Reporter: Eric Lin
>Priority: Minor
> Attachments: SQOOP-3061.patch
>
>
> if you have the following in the options file:
> --query
> SELECT * FROM test WHERE a = 'b'
> it will fail with the following error:
> {code}
> 16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
> java.lang.Exception: Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
> at 
> org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
> at 
> org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
> at 
> com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
> at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
> Malformed option in options 
> file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
> FROM test WHERE a = 'b'
> {code}
> This is caused by function 
> org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
> checks for starting and ending quotes and will fail if the query does not 
> start with a quote but ends with a quote, like the example query above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (SQOOP-3061) Sqoop --options-file failed with error "Malformed option in options file" even though the query is correct

2016-11-24 Thread Eric Lin (JIRA)
Eric Lin created SQOOP-3061:
---

 Summary: Sqoop --options-file failed with error "Malformed option 
in options file" even though the query is correct
 Key: SQOOP-3061
 URL: https://issues.apache.org/jira/browse/SQOOP-3061
 Project: Sqoop
  Issue Type: Bug
Affects Versions: 1.4.6
Reporter: Eric Lin
Priority: Minor


if you have the following in the options file:

--query
SELECT * FROM test WHERE a = 'b'

it will fail with the following error:

{code}
16/11/22 16:08:59 ERROR sqoop.Sqoop: Error while expanding arguments
java.lang.Exception: Malformed option in options 
file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
FROM test WHERE a = 'b'
at 
org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary(OptionsFileUtil.java:170)
at 
org.apache.sqoop.util.OptionsFileUtil.removeQuotesEncolosingOption(OptionsFileUtil.java:136)
at 
org.apache.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:90)
at 
com.cloudera.sqoop.util.OptionsFileUtil.expandArguments(OptionsFileUtil.java:33)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:199)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Malformed option in options 
file(/tmp/sqoop_runner_from_stdin_1112_12354__sqoop_options_file): SELECT * 
FROM test WHERE a = 'b'
{code}

This is caused by function 
org.apache.sqoop.util.OptionsFileUtil.removeQuoteCharactersIfNecessary only 
checks for starting and ending quotes and will fail if the query does not start 
with a quote but ends with a quote, like the example query above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (SQOOP-2973) Is sqoop version 1.4.6 compatible with hbase version 1.2.1?

2016-11-22 Thread Eric Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15686459#comment-15686459
 ] 

Eric Lin commented on SQOOP-2973:
-

Hi [~vasas],

Thanks for the update. I have compiled the trunk version of Sqoop with 

{code}
ant -Dhadoopversion=260 -Dhbaseprofile=95
{code}

According to the build.xml file, hbaseprofile=95 with hadoopversion=260 will 
build against HBase 1.2.0:

{code}
<!-- build.xml snippet stripped by the mail archiver; per the surrounding
     text it selects the HBase 1.2.0 dependency when hbaseprofile=95 is
     combined with hadoopversion=260 -->
{code}

I checked the built output jar files and they are all for HBase 1.2.0, so in 
theory that should work, right? What did I do wrong, or do I need to change the 
build file somehow?

Thanks

> Is sqoop version 1.4.6 compatible with hbase version 1.2.1?
> ---
>
> Key: SQOOP-2973
> URL: https://issues.apache.org/jira/browse/SQOOP-2973
> Project: Sqoop
>  Issue Type: Bug
> Environment: Hadoop version 2.7.2
>Reporter: Rajib Mandal
>
> We are getting the below error while importing data from Oracle to HBase
> error Exception in thread "main" java.lang.NoSuchMethodError: 
> org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
>  
> The command executed
>  sqoop import --connect jdbc:oracle:thin:@//orcl. --username sysdba 
> --password  --table GENDER2 --columns "EMPLOYEE_ID,FIRST_NAME,GENDER" 
> --hbase-table employee --column-family GENDER2 --hbase-row-key EMPLOYEE_ID 
> --hbase-create-table
> Is there any way to resolve this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

