[jira] [Commented] (SQOOP-3149) Sqoop incremental import - NULL column updates are not pulled into HBase table

2018-12-04 Thread anjaiahspr (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16709020#comment-16709020
 ] 

anjaiahspr commented on SQOOP-3149:
---

I am still facing the same problem: when Sqoop incrementally imports data from 
any database to HBase and a source table's column in a row is updated to NULL, 
the target HBase table still shows the previous value for that column.

My environment is an HDP cluster, and I am using 
{{org.apache.hadoop.hive.hbase.HBaseStorageHandler}}.

 

Thanks,

Anjaiah M

 

> Sqoop incremental import -  NULL column updates are not pulled into HBase 
> table
> ---
>
> Key: SQOOP-3149
> URL: https://issues.apache.org/jira/browse/SQOOP-3149
> Project: Sqoop
>  Issue Type: Bug
>  Components: connectors/generic, hbase-integration
>Affects Versions: 1.4.6
>Reporter: Jilani Shaik
>Priority: Major
> Fix For: 1.4.7
>
> Attachments: hbase_delete_support_in_incremental_import
>
>
> When Sqoop incrementally imports data from any database to HBase and a 
> source table's column in a row is updated to NULL, the target HBase table 
> still shows the previous value for that column.
> So if you scan the table for that row, HBase shows the previous value of 
> the column.
> Expected result: during a Sqoop incremental import, if a column is NULL in 
> the source, HBase should not store it, and if the column already exists it 
> should be deleted for that row.
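The expected behaviour can be sketched in plain Java. This is a hypothetical illustration, not Sqoop's actual code: the HBase row is simulated with a Map, and the point is that a NULL source column should translate into a delete of that column rather than being skipped (skipping is what leaves the stale value behind).

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical sketch of the requested fix: during an incremental import,
 * a column that is NULL in the source row is deleted from the target HBase
 * row instead of being silently skipped.
 */
public class NullColumnMerge {

    /** Applies one imported source row to a simulated HBase row (column -> value). */
    public static void applyRow(Map<String, String> hbaseRow, Map<String, String> sourceRow) {
        for (Map.Entry<String, String> col : sourceRow.entrySet()) {
            if (col.getValue() == null) {
                // NULL in the source: emit a Delete for this column.
                hbaseRow.remove(col.getKey());
            } else {
                // Non-null columns are written as a normal Put.
                hbaseRow.put(col.getKey(), col.getValue());
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> hbaseRow = new HashMap<>();
        hbaseRow.put("cf:name", "old");
        hbaseRow.put("cf:city", "Budapest");

        Map<String, String> update = new HashMap<>();
        update.put("cf:name", "new");
        update.put("cf:city", null); // column was set to NULL in the source DB

        applyRow(hbaseRow, update);
        System.out.println(hbaseRow); // cf:city is no longer present
    }
}
```

In the real fix this would mean emitting HBase Delete mutations alongside Puts, which is what the attached patch (hbase_delete_support_in_incremental_import) targets.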



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3396) Add parquet numeric support for Parquet in Hive import

2018-12-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708940#comment-16708940
 ] 

ASF GitHub Bot commented on SQOOP-3396:
---

Github user szvasas commented on a diff in the pull request:

https://github.com/apache/sqoop/pull/60#discussion_r238731504
  
--- Diff: src/java/org/apache/sqoop/hive/HiveTypes.java ---
@@ -83,27 +89,58 @@ public static String toHiveType(int sqlType) {
     }
   }
 
-  public static String toHiveType(Schema.Type avroType) {
-    switch (avroType) {
-      case BOOLEAN:
-        return HIVE_TYPE_BOOLEAN;
-      case INT:
-        return HIVE_TYPE_INT;
-      case LONG:
-        return HIVE_TYPE_BIGINT;
-      case FLOAT:
-        return HIVE_TYPE_FLOAT;
-      case DOUBLE:
-        return HIVE_TYPE_DOUBLE;
-      case STRING:
-      case ENUM:
-        return HIVE_TYPE_STRING;
-      case BYTES:
-      case FIXED:
-        return HIVE_TYPE_BINARY;
-      default:
-        return null;
+  public static String toHiveType(Schema schema, SqoopOptions options) {
+    if (schema.getType() == Schema.Type.UNION) {
+      for (Schema subSchema : schema.getTypes()) {
+        if (subSchema.getType() != Schema.Type.NULL) {
+          return toHiveType(subSchema, options);
+        }
+      }
+    }
+
+    Schema.Type avroType = schema.getType();
+    switch (avroType) {
+      case BOOLEAN:
+        return HIVE_TYPE_BOOLEAN;
+      case INT:
+        return HIVE_TYPE_INT;
+      case LONG:
+        return HIVE_TYPE_BIGINT;
+      case FLOAT:
+        return HIVE_TYPE_FLOAT;
+      case DOUBLE:
+        return HIVE_TYPE_DOUBLE;
+      case STRING:
+      case ENUM:
+        return HIVE_TYPE_STRING;
+      case BYTES:
+        return mapToDecimalOrBinary(schema, options);
+      case FIXED:
+        return HIVE_TYPE_BINARY;
+      default:
+        throw new RuntimeException(String.format("There is no Hive type mapping defined for the Avro type of: %s ", avroType.getName()));
+    }
+  }
+
+  private static String mapToDecimalOrBinary(Schema schema, SqoopOptions options) {
+    boolean logicalTypesEnabled = options.getConf().getBoolean(ConfigurationConstants.PROP_ENABLE_PARQUET_LOGICAL_TYPE_DECIMAL, false);
+    if (logicalTypesEnabled && schema.getLogicalType() != null && schema.getLogicalType() instanceof Decimal) {
+      Decimal decimal = (Decimal) schema.getLogicalType();
+
+      // trimming precision and scale to Hive's maximum values.
+      int precision = Math.min(HiveDecimal.MAX_PRECISION, decimal.getPrecision());
+      if (precision < decimal.getPrecision()) {
+        LOG.warn("Warning! Precision in the Hive table definition will be smaller than the actual precision of the column on storage! Hive may not be able to read data from this column.");
--- End diff --

Sorry, I meant that apart from the warning messages here we should mention 
it in the documentation too.
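The mapping logic discussed in the diff above can be illustrated with a standalone sketch. It uses plain strings in place of the real Avro Schema and SqoopOptions types (those types, and the class itself, are assumptions for illustration only), but mirrors the patch's behaviour: a nullable union unwraps to its first non-null branch, and BYTES maps to DECIMAL only when the logical-type flag is enabled, trimming precision to Hive's maximum of 38.

```java
/**
 * Hypothetical sketch of the Avro-to-Hive type mapping in the patch above,
 * with strings standing in for Schema.Type and SqoopOptions.
 */
public class AvroToHiveSketch {

    /** Unwraps a nullable union, e.g. {"null", "int"} -> "int". */
    public static String unwrapUnion(String[] branches) {
        for (String branch : branches) {
            if (!"null".equals(branch)) {
                return branch;
            }
        }
        return "null";
    }

    public static String toHiveType(String avroType, boolean logicalDecimal, int precision, int scale) {
        switch (avroType) {
            case "boolean": return "BOOLEAN";
            case "int":     return "INT";
            case "long":    return "BIGINT";
            case "float":   return "FLOAT";
            case "double":  return "DOUBLE";
            case "string":
            case "enum":    return "STRING";
            case "fixed":   return "BINARY";
            case "bytes":
                // BYTES becomes DECIMAL only when the logical-type flag is on,
                // trimming precision to Hive's maximum (HiveDecimal.MAX_PRECISION = 38).
                if (logicalDecimal) {
                    int trimmed = Math.min(38, precision);
                    return "DECIMAL(" + trimmed + "," + scale + ")";
                }
                return "BINARY";
            default:
                throw new RuntimeException("No Hive type mapping for Avro type: " + avroType);
        }
    }

    public static void main(String[] args) {
        System.out.println(toHiveType("bytes", true, 40, 3)); // DECIMAL(38,3) after trimming
        System.out.println(toHiveType(unwrapUnion(new String[]{"null", "long"}), false, 0, 0)); // BIGINT
    }
}
```

The precision trimming is exactly the case the warning (and, per the review comment, the documentation) should cover: a source column wider than 38 digits is silently narrowed in the Hive table definition.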


> Add parquet numeric support for Parquet in Hive import
> --
>
> Key: SQOOP-3396
> URL: https://issues.apache.org/jira/browse/SQOOP-3396
> Project: Sqoop
>  Issue Type: Sub-task
>Reporter: Fero Szabo
>Assignee: Fero Szabo
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (SQOOP-3396) Add parquet numeric support for Parquet in Hive import

2018-12-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708865#comment-16708865
 ] 

ASF GitHub Bot commented on SQOOP-3396:
---

Github user fszabo2 commented on a diff in the pull request:

https://github.com/apache/sqoop/pull/60#discussion_r238707355
  
--- Diff: src/java/org/apache/sqoop/hive/HiveTypes.java ---
@@ -83,27 +89,58 @@ public static String toHiveType(int sqlType) {
     }
   }
 
-  public static String toHiveType(Schema.Type avroType) {
-    switch (avroType) {
-      case BOOLEAN:
-        return HIVE_TYPE_BOOLEAN;
-      case INT:
-        return HIVE_TYPE_INT;
-      case LONG:
-        return HIVE_TYPE_BIGINT;
-      case FLOAT:
-        return HIVE_TYPE_FLOAT;
-      case DOUBLE:
-        return HIVE_TYPE_DOUBLE;
-      case STRING:
-      case ENUM:
-        return HIVE_TYPE_STRING;
-      case BYTES:
-      case FIXED:
-        return HIVE_TYPE_BINARY;
-      default:
-        return null;
+  public static String toHiveType(Schema schema, SqoopOptions options) {
+    if (schema.getType() == Schema.Type.UNION) {
+      for (Schema subSchema : schema.getTypes()) {
+        if (subSchema.getType() != Schema.Type.NULL) {
+          return toHiveType(subSchema, options);
+        }
+      }
+    }
+
+    Schema.Type avroType = schema.getType();
+    switch (avroType) {
+      case BOOLEAN:
+        return HIVE_TYPE_BOOLEAN;
+      case INT:
+        return HIVE_TYPE_INT;
+      case LONG:
+        return HIVE_TYPE_BIGINT;
+      case FLOAT:
+        return HIVE_TYPE_FLOAT;
+      case DOUBLE:
+        return HIVE_TYPE_DOUBLE;
+      case STRING:
+      case ENUM:
+        return HIVE_TYPE_STRING;
+      case BYTES:
+        return mapToDecimalOrBinary(schema, options);
+      case FIXED:
+        return HIVE_TYPE_BINARY;
+      default:
+        throw new RuntimeException(String.format("There is no Hive type mapping defined for the Avro type of: %s ", avroType.getName()));
+    }
+  }
+
+  private static String mapToDecimalOrBinary(Schema schema, SqoopOptions options) {
+    boolean logicalTypesEnabled = options.getConf().getBoolean(ConfigurationConstants.PROP_ENABLE_PARQUET_LOGICAL_TYPE_DECIMAL, false);
+    if (logicalTypesEnabled && schema.getLogicalType() != null && schema.getLogicalType() instanceof Decimal) {
--- End diff --

I'm learning something new every day! :) removed 


> Add parquet numeric support for Parquet in Hive import
> --
>
> Key: SQOOP-3396
> URL: https://issues.apache.org/jira/browse/SQOOP-3396
> Project: Sqoop
>  Issue Type: Sub-task
>Reporter: Fero Szabo
>Assignee: Fero Szabo
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3417) Execute Oracle XE tests on Travis CI

2018-12-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708859#comment-16708859
 ] 

Hudson commented on SQOOP-3417:
---

SUCCESS: Integrated in Jenkins build Sqoop-hadoop200 #1245 (See 
[https://builds.apache.org/job/Sqoop-hadoop200/1245/])
SQOOP-3417: Execute Oracle XE tests on Travis CI (fero: 
[https://git-wip-us.apache.org/repos/asf?p=sqoop.git=commit=302674d96b18bae3c5283d16603afb985b892795])
* (edit) COMPILING.txt
* (edit) .travis.yml
* (edit) build.gradle
* (edit) gradle.properties


> Execute Oracle XE tests on Travis CI
> 
>
> Key: SQOOP-3417
> URL: https://issues.apache.org/jira/browse/SQOOP-3417
> Project: Sqoop
>  Issue Type: Test
>Affects Versions: 1.4.7
>Reporter: Szabolcs Vasas
>Assignee: Szabolcs Vasas
>Priority: Major
>
> The task is to enable the Travis CI to execute Oracle XE tests too 
> automatically.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3396) Add parquet numeric support for Parquet in Hive import

2018-12-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708864#comment-16708864
 ] 

ASF GitHub Bot commented on SQOOP-3396:
---

Github user fszabo2 commented on a diff in the pull request:

https://github.com/apache/sqoop/pull/60#discussion_r238707092
  
--- Diff: src/test/org/apache/sqoop/importjob/numerictypes/NumericTypesImportTestBase.java ---
@@ -65,240 +46,79 @@
  * 2. Decimal padding during avro or parquet import
  * In case of Oracle and Postgres, Sqoop has to pad the values with 0s to avoid errors.
  */
-public abstract class NumericTypesImportTestBase extends ImportJobTestCase implements DatabaseAdapterFactory {
+public abstract class NumericTypesImportTestBase extends ThirdPartyTestBase {
 
   public static final Log LOG = LogFactory.getLog(NumericTypesImportTestBase.class.getName());
 
-  private Configuration conf = new Configuration();
-
-  private final T configuration;
-  private final DatabaseAdapter adapter;
   private final boolean failWithoutExtraArgs;
   private final boolean failWithPadding;
 
-  // Constants for the basic test case, that doesn't use extra arguments
-  // that are required to avoid errors, i.e. padding and default precision and scale.
-  protected final static boolean SUCCEED_WITHOUT_EXTRA_ARGS = false;
-  protected final static boolean FAIL_WITHOUT_EXTRA_ARGS = true;
-
-  // Constants for the test case that has padding specified but not default precision and scale.
-  protected final static boolean SUCCEED_WITH_PADDING_ONLY = false;
-  protected final static boolean FAIL_WITH_PADDING_ONLY = true;
-
-  private Path tableDirPath;
-
   public NumericTypesImportTestBase(T configuration, boolean failWithoutExtraArgs, boolean failWithPaddingOnly) {
-    this.adapter = createAdapter();
-    this.configuration = configuration;
+    super(configuration);
     this.failWithoutExtraArgs = failWithoutExtraArgs;
     this.failWithPadding = failWithPaddingOnly;
   }
 
-  @Rule
-  public ExpectedException thrown = ExpectedException.none();
-
-  @Override
-  protected Configuration getConf() {
-    return conf;
-  }
-
-  @Override
-  protected boolean useHsqldbTestServer() {
-    return false;
-  }
-
-  @Override
-  protected String getConnectString() {
-    return adapter.getConnectionString();
-  }
-
-  @Override
-  protected SqoopOptions getSqoopOptions(Configuration conf) {
-    SqoopOptions opts = new SqoopOptions(conf);
-    adapter.injectConnectionParameters(opts);
-    return opts;
-  }
-
-  @Override
-  protected void dropTableIfExists(String table) throws SQLException {
-    adapter.dropTableIfExists(table, getManager());
-  }
-
   @Before
   public void setUp() {
     super.setUp();
-    String[] names = configuration.getNames();
-    String[] types = configuration.getTypes();
-    createTableWithColTypesAndNames(names, types, new String[0]);
-    List inputData = configuration.getSampleData();
-    for (String[] input : inputData) {
-      insertIntoTable(names, types, input);
-    }
     tableDirPath = new Path(getWarehouseDir() + "/" + getTableName());
   }
 
-  @After
-  public void tearDown() {
-    try {
-      dropTableIfExists(getTableName());
-    } catch (SQLException e) {
-      LOG.warn("Error trying to drop table on tearDown: " + e);
-    }
-    super.tearDown();
-  }
+  public Path tableDirPath;
 
-  private ArgumentArrayBuilder getArgsBuilder(SqoopOptions.FileLayout fileLayout) {
-    ArgumentArrayBuilder builder = new ArgumentArrayBuilder();
-    if (AvroDataFile.equals(fileLayout)) {
-      builder.withOption("as-avrodatafile");
-    }
-    else if (ParquetFile.equals(fileLayout)) {
-      builder.withOption("as-parquetfile");
-    }
+  @Rule
+  public ExpectedException thrown = ExpectedException.none();
+
+  abstract public ArgumentArrayBuilder getArgsBuilder();
+  abstract public void verify();
 
+  public ArgumentArrayBuilder includeCommonOptions(ArgumentArrayBuilder builder) {
     return builder.withCommonHadoopFlags(true)
         .withOption("warehouse-dir", getWarehouseDir())
         .withOption("num-mappers", "1")
         .withOption("table", getTableName())
         .withOption("connect", getConnectString());
   }
 
-  /**
-   * Adds properties to the given arg builder for decimal precision and scale.
-   * @param builder
-   */
-  private void addPrecisionAndScale(ArgumentArrayBuilder builder) {
-
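The includeCommonOptions refactor in the diff above pulls the shared Sqoop CLI options (warehouse-dir, num-mappers, table, connect) into one place, so subclasses only append their format-specific flags. A minimal, hypothetical sketch of that fluent builder pattern (the real ArgumentArrayBuilder has more features, e.g. withProperty and withCommonHadoopFlags):

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical, simplified version of an argument-array builder like the one
 * used in the test refactor above. Options accumulate in order and are
 * rendered as a String[] suitable for passing to a main() method.
 */
public class ArgsBuilderSketch {
    private final List<String> args = new ArrayList<>();

    /** Adds an option with a value, e.g. --num-mappers 1. */
    public ArgsBuilderSketch withOption(String name, String value) {
        args.add("--" + name);
        args.add(value);
        return this;
    }

    /** Adds a flag-style option with no value, e.g. --as-parquetfile. */
    public ArgsBuilderSketch withOption(String name) {
        args.add("--" + name);
        return this;
    }

    public String[] build() {
        return args.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] built = new ArgsBuilderSketch()
                .withOption("as-parquetfile")
                .withOption("warehouse-dir", "/tmp/warehouse")
                .withOption("num-mappers", "1")
                .build();
        System.out.println(String.join(" ", built));
    }
}
```

The design choice in the refactor is that the base class owns the common options while getArgsBuilder() and verify() stay abstract, letting the Avro and Parquet subclasses differ only where their file formats require it.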

[GitHub] sqoop pull request #60: SQOOP-3396: Add parquet numeric support for Parquet ...

2018-12-04 Thread fszabo2
Github user fszabo2 commented on a diff in the pull request:

https://github.com/apache/sqoop/pull/60#discussion_r238706982
  
--- Diff: src/test/org/apache/sqoop/importjob/numerictypes/NumericTypesImportTestBase.java ---
@@ -65,240 +46,79 @@
  * 2. Decimal padding during avro or parquet import
  * In case of Oracle and Postgres, Sqoop has to pad the values with 0s to avoid errors.
  */
-public abstract class NumericTypesImportTestBase extends ImportJobTestCase implements DatabaseAdapterFactory {
+public abstract class NumericTypesImportTestBase extends ThirdPartyTestBase {
 
   public static final Log LOG = LogFactory.getLog(NumericTypesImportTestBase.class.getName());
 
-  private Configuration conf = new Configuration();
-
-  private final T configuration;
-  private final DatabaseAdapter adapter;
   private final boolean failWithoutExtraArgs;
   private final boolean failWithPadding;
 
-  // Constants for the basic test case, that doesn't use extra arguments
-  // that are required to avoid errors, i.e. padding and default precision and scale.
-  protected final static boolean SUCCEED_WITHOUT_EXTRA_ARGS = false;
-  protected final static boolean FAIL_WITHOUT_EXTRA_ARGS = true;
-
-  // Constants for the test case that has padding specified but not default precision and scale.
-  protected final static boolean SUCCEED_WITH_PADDING_ONLY = false;
-  protected final static boolean FAIL_WITH_PADDING_ONLY = true;
-
-  private Path tableDirPath;
-
   public NumericTypesImportTestBase(T configuration, boolean failWithoutExtraArgs, boolean failWithPaddingOnly) {
-    this.adapter = createAdapter();
-    this.configuration = configuration;
+    super(configuration);
     this.failWithoutExtraArgs = failWithoutExtraArgs;
     this.failWithPadding = failWithPaddingOnly;
   }
 
-  @Rule
-  public ExpectedException thrown = ExpectedException.none();
-
-  @Override
-  protected Configuration getConf() {
-    return conf;
-  }
-
-  @Override
-  protected boolean useHsqldbTestServer() {
-    return false;
-  }
-
-  @Override
-  protected String getConnectString() {
-    return adapter.getConnectionString();
-  }
-
-  @Override
-  protected SqoopOptions getSqoopOptions(Configuration conf) {
-    SqoopOptions opts = new SqoopOptions(conf);
-    adapter.injectConnectionParameters(opts);
-    return opts;
-  }
-
-  @Override
-  protected void dropTableIfExists(String table) throws SQLException {
-    adapter.dropTableIfExists(table, getManager());
-  }
-
   @Before
   public void setUp() {
     super.setUp();
-    String[] names = configuration.getNames();
-    String[] types = configuration.getTypes();
-    createTableWithColTypesAndNames(names, types, new String[0]);
-    List inputData = configuration.getSampleData();
-    for (String[] input : inputData) {
-      insertIntoTable(names, types, input);
-    }
     tableDirPath = new Path(getWarehouseDir() + "/" + getTableName());
   }
 
-  @After
-  public void tearDown() {
-    try {
-      dropTableIfExists(getTableName());
-    } catch (SQLException e) {
-      LOG.warn("Error trying to drop table on tearDown: " + e);
-    }
-    super.tearDown();
-  }
+  public Path tableDirPath;
 
-  private ArgumentArrayBuilder getArgsBuilder(SqoopOptions.FileLayout fileLayout) {
-    ArgumentArrayBuilder builder = new ArgumentArrayBuilder();
-    if (AvroDataFile.equals(fileLayout)) {
-      builder.withOption("as-avrodatafile");
-    }
-    else if (ParquetFile.equals(fileLayout)) {
-      builder.withOption("as-parquetfile");
-    }
+  @Rule
+  public ExpectedException thrown = ExpectedException.none();
+
+  abstract public ArgumentArrayBuilder getArgsBuilder();
+  abstract public void verify();
 
+  public ArgumentArrayBuilder includeCommonOptions(ArgumentArrayBuilder builder) {
     return builder.withCommonHadoopFlags(true)
         .withOption("warehouse-dir", getWarehouseDir())
         .withOption("num-mappers", "1")
         .withOption("table", getTableName())
         .withOption("connect", getConnectString());
   }
 
-  /**
-   * Adds properties to the given arg builder for decimal precision and scale.
-   * @param builder
-   */
-  private void addPrecisionAndScale(ArgumentArrayBuilder builder) {
-    builder.withProperty("sqoop.avro.logical_types.decimal.default.precision", "38");
-    builder.withProperty("sqoop.avro.logical_types.decimal.default.scale", "3");
-  }
-
-  /**
-   * Enables padding for decimals in avro and parquet import.
-   * @param builder


[jira] [Commented] (SQOOP-3396) Add parquet numeric support for Parquet in Hive import

2018-12-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708813#comment-16708813
 ] 

ASF GitHub Bot commented on SQOOP-3396:
---

Github user fszabo2 commented on a diff in the pull request:

https://github.com/apache/sqoop/pull/60#discussion_r238694723
  
--- Diff: 
src/test/org/apache/sqoop/importjob/numerictypes/NumericTypesParquetImportTestBase.java
 ---
@@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.sqoop.importjob.numerictypes;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.Path;
+import org.apache.parquet.schema.MessageType;
+import org.apache.parquet.schema.OriginalType;
+import org.apache.sqoop.importjob.configuration.ParquetTestConfiguration;
+import org.apache.sqoop.testutil.ArgumentArrayBuilder;
+import org.apache.sqoop.testutil.NumericTypesTestUtils;
+import org.apache.sqoop.util.ParquetReader;
+import org.junit.Before;
+
+import java.util.Arrays;
+
+import static org.junit.Assert.assertEquals;
+
+public abstract class NumericTypesParquetImportTestBase extends NumericTypesImportTestBase  {
+
+  public static final Log LOG = LogFactory.getLog(NumericTypesParquetImportTestBase.class.getName());
+
+  public NumericTypesParquetImportTestBase(T configuration, boolean failWithoutExtraArgs, boolean failWithPaddingOnly) {
+    super(configuration, failWithoutExtraArgs, failWithPaddingOnly);
+  }
+
+  @Before
+  public void setUp() {
+super.setUp();
+tableDirPath = new Path(getWarehouseDir() + "/" + getTableName());
--- End diff --

nope.


> Add parquet numeric support for Parquet in Hive import
> --
>
> Key: SQOOP-3396
> URL: https://issues.apache.org/jira/browse/SQOOP-3396
> Project: Sqoop
>  Issue Type: Sub-task
>Reporter: Fero Szabo
>Assignee: Fero Szabo
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3396) Add parquet numeric support for Parquet in Hive import

2018-12-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708811#comment-16708811
 ] 

ASF GitHub Bot commented on SQOOP-3396:
---

Github user fszabo2 commented on a diff in the pull request:

https://github.com/apache/sqoop/pull/60#discussion_r238694488
  
--- Diff: 
src/test/org/apache/sqoop/importjob/numerictypes/NumericTypesAvroImportTestBase.java
 ---
@@ -0,0 +1,59 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.sqoop.importjob.numerictypes;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.Path;
+import org.apache.sqoop.importjob.configuration.AvroTestConfiguration;
+import org.apache.sqoop.testutil.ArgumentArrayBuilder;
+import org.apache.sqoop.testutil.AvroTestUtils;
+import org.apache.sqoop.testutil.NumericTypesTestUtils;
+import org.junit.Before;
+
+public abstract class NumericTypesAvroImportTestBase extends NumericTypesImportTestBase  {
+
+  public static final Log LOG = LogFactory.getLog(NumericTypesAvroImportTestBase.class.getName());
+
+  public NumericTypesAvroImportTestBase(T configuration, boolean failWithoutExtraArgs, boolean failWithPaddingOnly) {
+    super(configuration, failWithoutExtraArgs, failWithPaddingOnly);
+  }
+
+  @Before
+  public void setUp() {
+super.setUp();
+tableDirPath = new Path(getWarehouseDir() + "/" + getTableName());
--- End diff --

Nope.




[jira] [Commented] (SQOOP-3396) Add parquet numeric support for Parquet in Hive import

2018-12-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708812#comment-16708812
 ] 

ASF GitHub Bot commented on SQOOP-3396:
---

Github user fszabo2 commented on a diff in the pull request:

https://github.com/apache/sqoop/pull/60#discussion_r238694607
  
--- Diff: 
src/test/org/apache/sqoop/importjob/numerictypes/NumericTypesImportTestBase.java
 ---
@@ -65,240 +46,79 @@
  * 2. Decimal padding during avro or parquet import
  * In case of Oracle and Postgres, Sqoop has to pad the values with 0s to 
avoid errors.
  */
-public abstract class NumericTypesImportTestBase extends ImportJobTestCase 
implements DatabaseAdapterFactory {
+public abstract class NumericTypesImportTestBase extends ThirdPartyTestBase  {
 
   public static final Log LOG = 
LogFactory.getLog(NumericTypesImportTestBase.class.getName());
 
-  private Configuration conf = new Configuration();
-
-  private final T configuration;
-  private final DatabaseAdapter adapter;
   private final boolean failWithoutExtraArgs;
   private final boolean failWithPadding;
 
-  // Constants for the basic test case, that doesn't use extra arguments
-  // that are required to avoid errors, i.e. padding and default precision 
and scale.
-  protected final static boolean SUCCEED_WITHOUT_EXTRA_ARGS = false;
-  protected final static boolean FAIL_WITHOUT_EXTRA_ARGS = true;
-
-  // Constants for the test case that has padding specified but not 
default precision and scale.
-  protected final static boolean SUCCEED_WITH_PADDING_ONLY = false;
-  protected final static boolean FAIL_WITH_PADDING_ONLY = true;
-
-  private Path tableDirPath;
-
   public NumericTypesImportTestBase(T configuration, boolean 
failWithoutExtraArgs, boolean failWithPaddingOnly) {
-this.adapter = createAdapter();
-this.configuration = configuration;
+super(configuration);
 this.failWithoutExtraArgs = failWithoutExtraArgs;
 this.failWithPadding = failWithPaddingOnly;
   }
 
-  @Rule
-  public ExpectedException thrown = ExpectedException.none();
-
-  @Override
-  protected Configuration getConf() {
-return conf;
-  }
-
-  @Override
-  protected boolean useHsqldbTestServer() {
-return false;
-  }
-
-  @Override
-  protected String getConnectString() {
-return adapter.getConnectionString();
-  }
-
-  @Override
-  protected SqoopOptions getSqoopOptions(Configuration conf) {
-SqoopOptions opts = new SqoopOptions(conf);
-adapter.injectConnectionParameters(opts);
-return opts;
-  }
-
-  @Override
-  protected void dropTableIfExists(String table) throws SQLException {
-adapter.dropTableIfExists(table, getManager());
-  }
-
   @Before
   public void setUp() {
 super.setUp();
-String[] names = configuration.getNames();
-String[] types = configuration.getTypes();
-createTableWithColTypesAndNames(names, types, new String[0]);
-List inputData = configuration.getSampleData();
-for (String[] input  : inputData) {
-  insertIntoTable(names, types, input);
-}
 tableDirPath = new Path(getWarehouseDir() + "/" + getTableName());
   }
 
-  @After
-  public void tearDown() {
-try {
-  dropTableIfExists(getTableName());
-} catch (SQLException e) {
-  LOG.warn("Error trying to drop table on tearDown: " + e);
-}
-super.tearDown();
-  }
+  public Path tableDirPath;
--- End diff --

Makes sense




[jira] [Commented] (SQOOP-3396) Add parquet numeric support for Parquet in Hive import

2018-12-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708806#comment-16708806
 ] 

ASF GitHub Bot commented on SQOOP-3396:
---

Github user fszabo2 commented on a diff in the pull request:

https://github.com/apache/sqoop/pull/60#discussion_r238693735
  
--- Diff: src/java/org/apache/sqoop/hive/HiveTypes.java ---
@@ -83,27 +89,58 @@ public static String toHiveType(int sqlType) {
   }
   }
 
-  public static String toHiveType(Schema.Type avroType) {
-  switch (avroType) {
-case BOOLEAN:
-  return HIVE_TYPE_BOOLEAN;
-case INT:
-  return HIVE_TYPE_INT;
-case LONG:
-  return HIVE_TYPE_BIGINT;
-case FLOAT:
-  return HIVE_TYPE_FLOAT;
-case DOUBLE:
-  return HIVE_TYPE_DOUBLE;
-case STRING:
-case ENUM:
-  return HIVE_TYPE_STRING;
-case BYTES:
-case FIXED:
-  return HIVE_TYPE_BINARY;
-default:
-  return null;
+  public static String toHiveType(Schema schema, SqoopOptions options) {
+if (schema.getType() == Schema.Type.UNION) {
+  for (Schema subSchema : schema.getTypes()) {
+if (subSchema.getType() != Schema.Type.NULL) {
+  return toHiveType(subSchema, options);
+}
+  }
+}
+
+Schema.Type avroType = schema.getType();
+switch (avroType) {
+  case BOOLEAN:
+return HIVE_TYPE_BOOLEAN;
+  case INT:
+return HIVE_TYPE_INT;
+  case LONG:
+return HIVE_TYPE_BIGINT;
+  case FLOAT:
+return HIVE_TYPE_FLOAT;
+  case DOUBLE:
+return HIVE_TYPE_DOUBLE;
+  case STRING:
+  case ENUM:
+return HIVE_TYPE_STRING;
+  case BYTES:
+return mapToDecimalOrBinary(schema, options);
+  case FIXED:
+return HIVE_TYPE_BINARY;
+  default:
+        throw new RuntimeException(String.format("There is no Hive type mapping defined for the Avro type of: %s ", avroType.getName()));
+    }
+  }
+
+  private static String mapToDecimalOrBinary(Schema schema, SqoopOptions options) {
+    boolean logicalTypesEnabled = options.getConf().getBoolean(ConfigurationConstants.PROP_ENABLE_PARQUET_LOGICAL_TYPE_DECIMAL, false);
+    if (logicalTypesEnabled && schema.getLogicalType() != null && schema.getLogicalType() instanceof Decimal) {
+      Decimal decimal = (Decimal) schema.getLogicalType();
+
+      // trimming precision and scale to Hive's maximum values.
+      int precision = Math.min(HiveDecimal.MAX_PRECISION, decimal.getPrecision());
+      if (precision < decimal.getPrecision()) {
+        LOG.warn("Warning! Precision in the Hive table definition will be smaller than the actual precision of the column on storage! Hive may not be able to read data from this column.");
--- End diff --

Do you think we should remove this warning? (I think, even if it's 
redundant, it's useful to write this out.)
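The trimming being discussed can be illustrated standalone. A minimal sketch, under the assumption that Hive's DECIMAL precision cap is 38 (the value behind HiveDecimal.MAX_PRECISION); the class and method names here are hypothetical, not Sqoop's:

```java
// Minimal standalone sketch of the precision trimming in the diff above.
// Assumption: HIVE_MAX_PRECISION hard-codes 38, Hive's DECIMAL precision cap
// (normally read from HiveDecimal.MAX_PRECISION). Names are illustrative only.
class PrecisionTrimSketch {

    static final int HIVE_MAX_PRECISION = 38;

    /** Returns the precision to use in the Hive table definition. */
    static int trimPrecision(int columnPrecision) {
        int trimmed = Math.min(HIVE_MAX_PRECISION, columnPrecision);
        if (trimmed < columnPrecision) {
            // This is where the warning under discussion would be logged:
            // the Hive definition is narrower than the column on storage.
            System.err.println("Precision trimmed from " + columnPrecision
                    + " to " + trimmed + "; Hive may not read this column.");
        }
        return trimmed;
    }

    public static void main(String[] args) {
        System.out.println(trimPrecision(20)); // within the cap: unchanged
        System.out.println(trimPrecision(45)); // over the cap: trimmed to 38
    }
}
```

The question above then reduces to whether the trimming branch is worth surfacing to the user as a log line.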




[jira] [Commented] (SQOOP-3396) Add parquet numeric support for Parquet in Hive import

2018-12-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708801#comment-16708801
 ] 

ASF GitHub Bot commented on SQOOP-3396:
---

Github user fszabo2 commented on a diff in the pull request:

https://github.com/apache/sqoop/pull/60#discussion_r238693021
  
--- Diff: src/java/org/apache/sqoop/hive/HiveTypes.java ---
@@ -79,8 +85,42 @@ public static String toHiveType(int sqlType) {
   default:
 // TODO(aaron): Support BINARY, VARBINARY, LONGVARBINARY, DISTINCT,
 // BLOB, ARRAY, STRUCT, REF, JAVA_OBJECT.
-return null;
+return null;
+  }
+  }
+
+  private static String mapDecimalsToHiveType(int sqlType, SqoopOptions options) {
+    if (options.getConf().getBoolean(ConfigurationConstants.PROP_ENABLE_PARQUET_LOGICAL_TYPE_DECIMAL, false)
+        && (sqlType == Types.NUMERIC || sqlType == Types.DECIMAL)){
+      return HIVE_TYPE_DECIMAL;
+    }
+    return HIVE_TYPE_DOUBLE;
+  }
+
+
+  public static String toHiveType(Schema schema) {
+    if (schema.getType() == Schema.Type.UNION) {
+      for (Schema subSchema : schema.getTypes()) {
+        if (subSchema.getLogicalType() != null && subSchema.getLogicalType() instanceof Decimal) {
--- End diff --

I learn something new every day :), removed.
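The removal rests on a Java language guarantee: `instanceof` evaluates to false when its left operand is null, so the explicit `getLogicalType() != null` guard before `instanceof Decimal` is redundant. A minimal demonstration (Integer is just a stand-in type for Avro's Decimal in this sketch):

```java
// Demonstrates that `instanceof` is null-safe, which is why the explicit
// null guard before `instanceof Decimal` could be dropped in the review above.
// Integer stands in for the Decimal logical type; the class name is hypothetical.
class InstanceofNullSafety {

    static boolean isDecimalLike(Object logicalType) {
        // No null guard needed: `null instanceof Integer` is simply false.
        return logicalType instanceof Integer;
    }

    public static void main(String[] args) {
        System.out.println(isDecimalLike(null));  // false, no NullPointerException
        System.out.println(isDecimalLike(42));    // true (autoboxed Integer)
        System.out.println(isDecimalLike("x"));   // false
    }
}
```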




[jira] [Commented] (SQOOP-3396) Add parquet numeric support for Parquet in Hive import

2018-12-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708798#comment-16708798
 ] 

ASF GitHub Bot commented on SQOOP-3396:
---

Github user fszabo2 commented on a diff in the pull request:

https://github.com/apache/sqoop/pull/60#discussion_r238691269
  
--- Diff: src/java/org/apache/sqoop/hive/HiveTypes.java ---
@@ -79,8 +85,42 @@ public static String toHiveType(int sqlType) {
   default:
 // TODO(aaron): Support BINARY, VARBINARY, LONGVARBINARY, DISTINCT,
 // BLOB, ARRAY, STRUCT, REF, JAVA_OBJECT.
-return null;
+return null;
+  }
+  }
+
+  private static String mapDecimalsToHiveType(int sqlType, SqoopOptions options) {
+    if (options.getConf().getBoolean(ConfigurationConstants.PROP_ENABLE_PARQUET_LOGICAL_TYPE_DECIMAL, false)
+        && (sqlType == Types.NUMERIC || sqlType == Types.DECIMAL)){
--- End diff --

This piece of code was reverted. 
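For context, the flag-gated mapping quoted in the diff (later reverted in this form) boils down to the following sketch, using java.sql.Types from the standard library; the class name and the inlined Hive type-name strings are hypothetical stand-ins:

```java
import java.sql.Types;

// Sketch of the reviewed (and reverted) mapping: NUMERIC/DECIMAL columns map
// to Hive DECIMAL only when logical-type support is enabled; otherwise they
// fall back to DOUBLE. Class name and type-name strings are illustrative only.
class DecimalMappingSketch {

    static String mapDecimalsToHiveType(int sqlType, boolean logicalTypesEnabled) {
        if (logicalTypesEnabled
                && (sqlType == Types.NUMERIC || sqlType == Types.DECIMAL)) {
            return "DECIMAL";
        }
        return "DOUBLE";
    }

    public static void main(String[] args) {
        System.out.println(mapDecimalsToHiveType(Types.DECIMAL, true));  // DECIMAL
        System.out.println(mapDecimalsToHiveType(Types.NUMERIC, false)); // DOUBLE
    }
}
```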




[jira] [Resolved] (SQOOP-3417) Execute Oracle XE tests on Travis CI

2018-12-04 Thread Fero Szabo (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fero Szabo resolved SQOOP-3417.
---
Resolution: Fixed

> Execute Oracle XE tests on Travis CI
> 
>
> Key: SQOOP-3417
> URL: https://issues.apache.org/jira/browse/SQOOP-3417
> Project: Sqoop
>  Issue Type: Test
>Affects Versions: 1.4.7
>Reporter: Szabolcs Vasas
>Assignee: Szabolcs Vasas
>Priority: Major
>
> The task is to enable the Travis CI to execute Oracle XE tests too 
> automatically.





[jira] [Commented] (SQOOP-3417) Execute Oracle XE tests on Travis CI

2018-12-04 Thread Fero Szabo (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708772#comment-16708772
 ] 

Fero Szabo commented on SQOOP-3417:
---

Hi [~vasas],

Your change is now committed. Thank you for your contribution, good catch!



[jira] [Commented] (SQOOP-3417) Execute Oracle XE tests on Travis CI

2018-12-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708770#comment-16708770
 ] 

ASF subversion and git services commented on SQOOP-3417:


Commit 302674d96b18bae3c5283d16603afb985b892795 in sqoop's branch 
refs/heads/trunk from [~fero]
[ https://git-wip-us.apache.org/repos/asf?p=sqoop.git;h=302674d ]

SQOOP-3417: Execute Oracle XE tests on Travis CI

(Szabolcs Vasas via Fero Szabo)

This closes #65




[jira] [Commented] (SQOOP-3417) Execute Oracle XE tests on Travis CI

2018-12-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708771#comment-16708771
 ] 

ASF GitHub Bot commented on SQOOP-3417:
---

Github user asfgit closed the pull request at:

https://github.com/apache/sqoop/pull/65




[jira] [Commented] (SQOOP-3417) Execute Oracle XE tests on Travis CI

2018-12-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SQOOP-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708504#comment-16708504
 ] 

ASF GitHub Bot commented on SQOOP-3417:
---

GitHub user szvasas opened a pull request:

https://github.com/apache/sqoop/pull/65

SQOOP-3417: Execute Oracle XE tests on Travis CI



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/szvasas/sqoop SQOOP-3417

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/sqoop/pull/65.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #65


commit 1ea1e35324b9cde28703b03c94ba5166897eeecb
Author: Szabolcs Vasas 
Date:   2018-12-03T15:17:44Z

Oracle JDBC driver is now downloaded from a Maven repository.






[jira] [Assigned] (SQOOP-3417) Execute Oracle XE tests on Travis CI

2018-12-04 Thread Szabolcs Vasas (JIRA)


 [ 
https://issues.apache.org/jira/browse/SQOOP-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szabolcs Vasas reassigned SQOOP-3417:
-

Assignee: Szabolcs Vasas



[jira] [Created] (SQOOP-3417) Execute Oracle XE tests on Travis CI

2018-12-04 Thread Szabolcs Vasas (JIRA)
Szabolcs Vasas created SQOOP-3417:
-

 Summary: Execute Oracle XE tests on Travis CI
 Key: SQOOP-3417
 URL: https://issues.apache.org/jira/browse/SQOOP-3417
 Project: Sqoop
  Issue Type: Test
Affects Versions: 1.4.7
Reporter: Szabolcs Vasas


The task is to enable the Travis CI to execute Oracle XE tests too 
automatically.


