[
https://issues.apache.org/jira/browse/SQOOP-1556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149802#comment-14149802
]
Jarek Jarcec Cecho commented on SQOOP-1556:
-------------------------------------------
The current patch doesn't seem to be valid Sphinx syntax; I'm getting errors in
the resulting document. It seems that the paragraph immediately after
{{.. note::}} needs to be indented. I was able to get it working by doing
something like:
{code}
Interaction with Hadoop is taken care of by common modules of the Sqoop 2 framework.

.. note::
   Sqoop 2 also has an engine interface. At the moment the only engine is
   MapReduce, but we may support additional engines in the future. Since many
   parallel execution engines are capable of doing their own reads/writes there
   may be a question of whether support for specific data stores should be done
   through a new connector or new engine. Our guideline is: Connectors should
   manage all data extract/load. Engines manage job life cycles. If you need to
   support a new data store and don't care how jobs run - you are looking to
   add a connector.

Connector Implementation
++++++++++++++++++++++++
{code}
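For what it's worth, the same indentation rule applies to any reStructuredText
directive: the body has to be indented relative to the {{..}} marker (three
spaces is the usual convention) so that Sphinx attaches it to the directive
instead of starting a new top-level block. A minimal sketch, not taken from the
patch:
{code}
.. note::

   Everything indented under the directive marker is treated as the body of
   the note. The first line that drops back to the original indentation ends
   the note.
{code}
Rendering the docs with plain {{sphinx-build -b html <sourcedir> <outdir>}}
(the directories here are placeholders, not the actual Sqoop 2 docs layout)
flags an unindented directive body as an error, so it's an easy check to run
before posting a patch.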
I'm not sure what the expected formatting is, so I'll let [~gwenshap] finish
that :)
> Sqoop2: Add documentation clarifying connectors vs. engines
> -----------------------------------------------------------
>
> Key: SQOOP-1556
> URL: https://issues.apache.org/jira/browse/SQOOP-1556
> Project: Sqoop
> Issue Type: Improvement
> Components: docs
> Reporter: Gwen Shapira
> Assignee: Gwen Shapira
> Attachments: SQOOP-1556.0.patch
>
>
> Sqoop2 allows pluggable connectors and engines. There is some overlap in
> functionality, so we want to document when to implement each.