[jira] [Commented] (SQOOP-3271) DirectNetezzaManager Fails for checkTable method for row validation
    [ https://issues.apache.org/jira/browse/SQOOP-3271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344063#comment-16344063 ]

Shyam Rai commented on SQOOP-3271:
----------------------------------

[~bbonnet] you are right about the ownership of the table. Shouldn't it still work if a non-owner has the appropriate privileges on the table?

> DirectNetezzaManager Fails for checkTable method for row validation
> --------------------------------------------------------------------
>
>                 Key: SQOOP-3271
>                 URL: https://issues.apache.org/jira/browse/SQOOP-3271
>             Project: Sqoop
>          Issue Type: Bug
>          Components: connectors
>    Affects Versions: 1.4.6
>            Reporter: Shyam Rai
>            Priority: Major
>
> When the --direct option is used, which invokes DirectNetezzaManager, the checkTable method tries to validate that one row exists using this query:
> {code:java}
> private static final String QUERY_CHECK_DICTIONARY_FOR_TABLE =
>     "SELECT 1 FROM _V_TABLE WHERE OWNER= ? "
>     + " AND TABLENAME = ? ";
> {code}
> The validity check introduced for the query,
> {code:java}
> if (!rs.next())
> {code}
> is evaluated when the ResultSet is already at the first row, so asking for the next row fails and the code falls into the exception clause.
> Here is an example of the error:
> {code:java}
> [sqoop@hdp261 sqoopjar]$ sqoop export --connect jdbc:netezza://10.10.20.14:5480/Test --table MYTEST --username admin --password password --hcatalog-database default --hcatalog-table mysource --input-fields-terminated-by "," --input-null-string "N" --input-null-non-string "N" --direct --batch
> Warning: /usr/hdp/2.6.1.0-129/hbase does not exist! HBase imports will fail.
> Please set $HBASE_HOME to the root of your HBase installation.
> Warning: /usr/hdp/2.6.1.0-129/accumulo does not exist! Accumulo imports will fail.
> Please set $ACCUMULO_HOME to the root of your Accumulo installation.
> Listening for transport dt_socket at address: 1
> 17/12/22 20:36:35 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.1.0-129
> 17/12/22 20:36:35 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
> 17/12/22 20:36:35 WARN tool.BaseSqoopTool: Input field/record delimiter options are not used in HCatalog jobs unless the format is text. It is better to use --hive-import in those cases. For text formats
> 17/12/22 20:36:35 INFO manager.SqlManager: Using default fetchSize of 1000
> 17/12/22 20:36:35 INFO tool.CodeGenTool: Beginning code generation
> 17/12/22 20:36:45 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM "MYTEST" AS t WHERE 1=0
> 17/12/22 20:36:45 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM "MYTEST" AS t WHERE 1=0
> 17/12/22 20:36:45 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.6.1.0-129/hadoop-mapreduce
> Note: /tmp/sqoop-sqoop/compile/a82bdc2ed69a4cc79c5ca06fa06c18d8/MYTEST.java uses or overrides a deprecated API.
> Note: Recompile with -Xlint:deprecation for details.
> 17/12/22 20:36:48 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-sqoop/compile/a82bdc2ed69a4cc79c5ca06fa06c18d8/MYTEST.jar
> 17/12/22 20:36:48 ERROR manager.DirectNetezzaManager: MYTEST is not a valid Netezza table. Please make sure that you have connected to the Netezza DB and the table name is right. The current values are
> connection string : jdbc:netezza://10.10.20.14:5480/Test
> table owner : admin
> table name : MYTEST
> 17/12/22 20:36:48 ERROR tool.ExportTool: Encountered IOException running export job: java.io.IOException: MYTEST is not a valid Netezza table. Please make sure that you have connected to the Netezza DB and the table name is right. The current values are
> connection string : jdbc:netezza://10.10.20.14:5480/Test
> table owner : admin
> table name : MYTEST
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
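For reference, the validation path described above can be reproduced with a small standalone JDBC program. This is only a sketch built around the dictionary query quoted from DirectNetezzaManager; the class name, connection details, and bound parameter values are placeholders taken from the log, not the actual Sqoop code, and it assumes the Netezza JDBC driver is on the classpath.

{code:java}
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class NetezzaCheckTableSketch {

  // Dictionary query quoted from DirectNetezzaManager in the issue description above.
  private static final String QUERY_CHECK_DICTIONARY_FOR_TABLE =
      "SELECT 1 FROM _V_TABLE WHERE OWNER= ? " + " AND TABLENAME = ? ";

  public static void main(String[] args) throws Exception {
    // Placeholder connection details copied from the log above.
    try (Connection conn = DriverManager.getConnection(
             "jdbc:netezza://10.10.20.14:5480/Test", "admin", "password");
         PreparedStatement ps = conn.prepareStatement(QUERY_CHECK_DICTIONARY_FOR_TABLE)) {
      // Sqoop binds the connecting user as the owner and the export table as the name.
      ps.setString(1, "admin");
      ps.setString(2, "MYTEST");
      try (ResultSet rs = ps.executeQuery()) {
        // This is the check the report refers to: if the dictionary query returns no row
        // (for example when the connecting user does not own the table), checkTable
        // reports "MYTEST is not a valid Netezza table".
        if (!rs.next()) {
          throw new IOException("MYTEST is not a valid Netezza table.");
        }
        System.out.println("Table found in _V_TABLE for the given owner.");
      }
    }
  }
}
{code}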
[jira] [Commented] (SQOOP-3271) DirectNetezzaManager Fails for checkTable method for row validation
    [ https://issues.apache.org/jira/browse/SQOOP-3271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344084#comment-16344084 ]

Benjamin BONNET commented on SQOOP-3271:
----------------------------------------

Hi [~shyamsunder...@gmail.com], yes, it _should_ work if there are appropriate privileges, but the fact is it does not. Last year I submitted a Jira about that bug (see https://issues.apache.org/jira/browse/SQOOP-2821) and proposed a patch that fixes it. Unfortunately the patch has not been reviewed or merged by committers yet. But feel free to give it a try and send me some feedback, please. Regards.

> DirectNetezzaManager Fails for checkTable method for row validation
> --------------------------------------------------------------------
>
>                 Key: SQOOP-3271
>                 URL: https://issues.apache.org/jira/browse/SQOOP-3271
>             Project: Sqoop
>          Issue Type: Bug
>          Components: connectors
>    Affects Versions: 1.4.6
>            Reporter: Shyam Rai
>            Priority: Major


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
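For illustration only, one hypothetical way to let the dictionary check pass for a non-owner with privileges would be to match on the table name alone rather than on owner plus name. This is just a guess at the shape of such a change; it is not the SQOOP-2821 patch, which should be consulted for the actual fix.

{code:java}
/** Hypothetical sketch only; not the change proposed in SQOOP-2821. */
public class RelaxedCheckTableQuery {
  // Variant of the dictionary query that drops the OWNER filter, so a table the
  // connecting user can access but does not own would still be found in _V_TABLE.
  static final String QUERY_CHECK_DICTIONARY_FOR_TABLE =
      "SELECT 1 FROM _V_TABLE WHERE TABLENAME = ? ";
}
{code}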
[jira] [Created] (SQOOP-3281) Support for Hive UDFs on import
C Scyphers created SQOOP-3281:
----------------------------------

             Summary: Support for Hive UDFs on import
                 Key: SQOOP-3281
                 URL: https://issues.apache.org/jira/browse/SQOOP-3281
             Project: Sqoop
          Issue Type: Improvement
          Components: hive-integration
    Affects Versions: 1.4.6
            Reporter: C Scyphers


As many companies use UDFs to implement column-level encryption at write time, Sqoop should support applying such a UDF during the write process. This would be an extension of the --map-column-hive functionality, where the column mapping handled by parseColumnMapping would accept a UDF:

{{sqoop import --verbose --connect "jdbcconnectionstring" --username user --password password --hive-import --hive-database hiveschematest --map-column-hive "emptest.id=int,emptest.name=varchar(100),emptest.ssn=UDF_ENCRYPT()" -m 1}}

With this approach, the data does not have to be written to HDFS in the clear. This functionality can naturally be extended to other UDFs as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
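For context, a Hive UDF of the kind assumed in the example (UDF_ENCRYPT) might look roughly like the sketch below. The package, class name, and the Base64 placeholder standing in for a real cipher are illustrative assumptions only, not part of the request or of any existing Sqoop or Hive code.

{code:java}
package com.example.hive.udf; // hypothetical package

import java.nio.charset.StandardCharsets;
import java.util.Base64;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

/**
 * Minimal sketch of a column-encryption UDF. A real implementation would call a
 * KMS-backed cipher instead of the Base64 placeholder used here.
 */
public class UdfEncrypt extends UDF {

  public Text evaluate(Text value) {
    if (value == null) {
      return null;
    }
    // Placeholder transformation so the sketch stays self-contained; NOT real encryption.
    String encoded = Base64.getEncoder()
        .encodeToString(value.toString().getBytes(StandardCharsets.UTF_8));
    return new Text(encoded);
  }
}
{code}

In Hive such a function is registered with ADD JAR and CREATE TEMPORARY FUNCTION (or CREATE FUNCTION for a permanent function); the request is essentially that a mapping such as emptest.ssn=UDF_ENCRYPT() would make Sqoop apply the registered function to that column while writing, instead of requiring a separate post-load INSERT ... SELECT.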