[
https://issues.apache.org/jira/browse/SQOOP-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249509#comment-14249509
]
Sqoop QA bot commented on SQOOP-1869:
-------------------------------------
Testing file
[SQOOP-1869.4.patch|https://issues.apache.org/jira/secure/attachment/12687669/SQOOP-1869.4.patch]
against branch sqoop2 took 0:25:59.798591.
{color:green}Overall:{color} +1 all checks pass
{color:green}SUCCESS:{color} Clean was successful
{color:green}SUCCESS:{color} Patch applied correctly
{color:green}SUCCESS:{color} Patch adds/modifies test cases
{color:green}SUCCESS:{color} License check passed
{color:green}SUCCESS:{color} Patch compiled
{color:green}SUCCESS:{color} All unit tests passed
{color:green}SUCCESS:{color} All integration tests passed
Console output is available
[here|https://builds.apache.org/job/PreCommit-SQOOP-Build/800/console].
This message is automatically generated.
> Sqoop2: Expand schema matching to support two schemaless connectors
> -------------------------------------------------------------------
>
> Key: SQOOP-1869
> URL: https://issues.apache.org/jira/browse/SQOOP-1869
> Project: Sqoop
> Issue Type: Improvement
> Reporter: Gwen Shapira
> Assignee: Gwen Shapira
> Fix For: 1.99.5
>
> Attachments: SQOOP-1869.0.patch, SQOOP-1869.1.patch,
> SQOOP-1869.2.patch, SQOOP-1869.4.patch
>
>
> Currently, schema matching errors out if both the FROM and TO connectors have
> empty schemas. This prevents us from supporting HDFS->Kafka.
> I suggest changing the code to support the following (sketched in code after the list):
> 1. An empty schema will contain a single byte[] field holding whatever the
> connector writes into it.
> 2. As happens now, if one connector's schema is null and the other has a schema,
> the schema that exists will be used to parse the data.
> 3. If we have two empty schemas, the TO connector will get a byte[] and
> presumably know what to do with it.
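> A minimal, self-contained sketch of the three rules above. The names Schema,
> chooseSchema and byteArrayPassThrough are hypothetical stand-ins for
> illustration only, not the actual Sqoop2 matcher classes touched by the patch:
> {code:java}
> // Hypothetical sketch of the proposed matching rules; not the patch itself.
> public class SchemaMatchSketch {
>
>   // Stand-in for a connector schema; isEmpty() means "schemaless".
>   static final class Schema {
>     final String name;
>     final boolean empty;
>     Schema(String name, boolean empty) { this.name = name; this.empty = empty; }
>     boolean isEmpty() { return empty; }
>   }
>
>   // Single byte[] field used when neither side provides a schema (rules 1 and 3).
>   static Schema byteArrayPassThrough() { return new Schema("byte[] pass-through", true); }
>
>   static Schema chooseSchema(Schema from, Schema to) {
>     boolean fromEmpty = (from == null) || from.isEmpty();
>     boolean toEmpty = (to == null) || to.isEmpty();
>     if (fromEmpty && toEmpty) {
>       // Rule 3: two schemaless connectors (e.g. HDFS -> Kafka); previously an
>       // error, now the TO connector simply receives the raw byte[] records.
>       return byteArrayPassThrough();
>     }
>     if (fromEmpty) { return to; }   // Rule 2: only TO has a schema, parse with it
>     if (toEmpty) { return from; }   // Rule 2: only FROM has a schema, parse with it
>     return from;                    // both present: normal column matching would run here
>   }
>
>   public static void main(String[] args) {
>     Schema hdfs = new Schema("HDFS (schemaless)", true);
>     Schema kafka = new Schema("Kafka (schemaless)", true);
>     System.out.println(chooseSchema(hdfs, kafka).name); // prints: byte[] pass-through
>   }
> }
> {code}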
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)