[
https://issues.apache.org/jira/browse/SQOOP-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249356#comment-14249356
]
Sqoop QA bot commented on SQOOP-1869:
-------------------------------------
Testing file
[SQOOP-1869.1.patch|https://issues.apache.org/jira/secure/attachment/12687649/SQOOP-1869.1.patch]
against branch sqoop2 took 0:24:14.669903.
{color:red}Overall:{color} -1 due to 2 errors
{color:red}ERROR:{color} Some unit tests failed
{color:red}ERROR:{color} Failed unit test:
{{org.apache.sqoop.connector.idf.TestCSVIntermediateDataFormat}}
{color:green}SUCCESS:{color} Clean was successful
{color:green}SUCCESS:{color} Patch applied correctly
{color:green}SUCCESS:{color} Patch add/modify test case
{color:green}SUCCESS:{color} License check passed
{color:green}SUCCESS:{color} Patch compiled
{color:green}SUCCESS:{color} All integration tests passed
Console output is available
[here|https://builds.apache.org/job/PreCommit-SQOOP-Build/798/console].
This message is automatically generated.
> Sqoop2: Expand schema matching to support two schemaless connectors
> -------------------------------------------------------------------
>
> Key: SQOOP-1869
> URL: https://issues.apache.org/jira/browse/SQOOP-1869
> Project: Sqoop
> Issue Type: Improvement
> Reporter: Gwen Shapira
> Assignee: Gwen Shapira
> Fix For: 1.99.5
>
> Attachments: SQOOP-1869.0.patch, SQOOP-1869.1.patch,
> SQOOP-1869.2.patch
>
>
> Currently the schema matcher errors out if both the FROM and TO connectors
> are schemaless. This prevents us from supporting HDFS->Kafka.
> I suggest changing the code to support the following:
> 1. An empty schema will contain a single byte[] field holding whatever the
> connector writes into it.
> 2. As happens now, if one connector's schema is null and the other has a
> schema, the existing schema will be used to parse the data.
> 3. If we have two empty schemas, the TO connector will get a byte[] and
> presumably know what to do with it.
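The three rules above can be sketched as a small decision function. This is an illustrative sketch only, assuming schemas are modeled as nullable values where null means "schemaless"; the class and method names (`SchemaMatchSketch`, `pickSchema`) are hypothetical and not Sqoop's actual API.

```java
// Hypothetical sketch of the proposed matching rules; not Sqoop's real code.
public class SchemaMatchSketch {

    // A null schema stands in for a schemaless connector.
    // Returns the schema used to interpret the transferred records.
    static String pickSchema(String fromSchema, String toSchema) {
        if (fromSchema == null && toSchema == null) {
            // Rule 3: both sides schemaless -> pass an opaque byte[] through
            // and let the TO connector decide what to do with it (rule 1).
            return "byte[]";
        }
        // Rule 2: exactly one side has a schema -> use the one that exists.
        // (If both exist, the existing matcher logic applies; not shown here.)
        return fromSchema != null ? fromSchema : toSchema;
    }

    public static void main(String[] args) {
        System.out.println(pickSchema(null, null));
        System.out.println(pickSchema("hdfs-schema", null));
        System.out.println(pickSchema(null, "kafka-schema"));
    }
}
```

Running `main` prints the schema chosen for each of the three cases: the opaque `byte[]` marker when both sides are schemaless, otherwise the one schema that exists.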
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)