[
https://issues.apache.org/jira/browse/SQOOP-1900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246177#comment-14246177
]
Veena Basavaraj commented on SQOOP-1900:
----------------------------------------
Let me try to rephrase this again.
These APIs in SqoopWritable make sense to me:
{code}
@Override
public void write(DataOutput out) throws IOException {
out.writeUTF(toIDF.getCSVTextData());
}
@Override
public void readFields(DataInput in) throws IOException {
toIDF.setCSVTextData(in.readUTF());
}
{code}
What I fail to understand is why the IDF API needs its own read/write methods, since any
Writable, whether in Hadoop or Spark, will use its own serialization implementation.
All the IDF needs to provide is:
getCSVTextData()
getObjectArrayData()
getData()
and their corresponding setters.
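To illustrate the point, here is a minimal, self-contained sketch (the class names `IntermediateDataFormatSketch`, `CSVIntermediateDataFormatSketch`, and `SqoopWritableSketch` are hypothetical stand-ins, not the actual Sqoop classes): the IDF exposes only data accessors, and the Writable wrapper owns the serialization by delegating to them, so the IDF never needs write()/read() itself.

```java
import java.io.*;

// Hypothetical stand-in for an IDF: data accessors only, no serialization methods.
abstract class IntermediateDataFormatSketch {
    public abstract String getCSVTextData();
    public abstract void setCSVTextData(String csv);
}

// A trivial CSV-backed implementation of the sketch above.
class CSVIntermediateDataFormatSketch extends IntermediateDataFormatSketch {
    private String data;
    @Override public String getCSVTextData() { return data; }
    @Override public void setCSVTextData(String csv) { this.data = csv; }
}

// The Writable wrapper delegates serialization to the IDF accessors,
// mirroring the write()/readFields() pair quoted in the comment.
class SqoopWritableSketch {
    private final IntermediateDataFormatSketch toIDF =
            new CSVIntermediateDataFormatSketch();

    public void setString(String s) { toIDF.setCSVTextData(s); }
    public String getString() { return toIDF.getCSVTextData(); }

    public void write(DataOutput out) throws IOException {
        out.writeUTF(toIDF.getCSVTextData());
    }

    public void readFields(DataInput in) throws IOException {
        toIDF.setCSVTextData(in.readUTF());
    }
}

public class Main {
    public static void main(String[] args) throws IOException {
        // Round-trip one CSV record through the Writable wrapper.
        SqoopWritableSketch w1 = new SqoopWritableSketch();
        w1.setString("1,'hello'");
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        w1.write(new DataOutputStream(bos));

        SqoopWritableSketch w2 = new SqoopWritableSketch();
        w2.readFields(new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray())));
        System.out.println(w2.getString()); // same CSV text comes back out
    }
}
```

The same delegation would work for a Spark-specific wrapper: only the wrapper changes, the IDF accessors stay the same.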
> IDF API read/ write method
> ---------------------------
>
> Key: SQOOP-1900
> URL: https://issues.apache.org/jira/browse/SQOOP-1900
> Project: Sqoop
> Issue Type: Sub-task
> Components: sqoop2-framework
> Reporter: Veena Basavaraj
> Fix For: 1.99.5
>
>
> At this point I am not clear what the real use of the following two methods is
> in the IDF API. Can anyone explain? I have not seen them used anywhere in the
> code; I might be missing something.
> {code}
> /**
> * Serialize the fields of this object to <code>out</code>.
> *
> * @param out <code>DataOutput</code> to serialize this object into.
> * @throws IOException
> */
> public abstract void write(DataOutput out) throws IOException;
> /**
> * Deserialize the fields of this object from <code>in</code>.
> *
> * <p>For efficiency, implementations should attempt to re-use storage in the
> * existing object where possible.</p>
> *
> * @param in <code>DataInput</code> to deserialize this object from.
> * @throws IOException
> */
> public abstract void read(DataInput in) throws IOException;
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)