[ https://issues.apache.org/jira/browse/SQOOP-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14305002#comment-14305002 ]

Veena Basavaraj commented on SQOOP-1804:
----------------------------------------

{code}
[INFO] --- maven-surefire-plugin:2.14:test (default-test) @ test ---
[INFO] 
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ test ---
[INFO] Building jar: /Users/vbasavaraj/Projects/SqoopRefactoring/sqoop2/test/target/test-2.0.0-SNAPSHOT-hadoop200.jar
[INFO] 
[INFO] --- maven-surefire-plugin:2.14:test (integration-test) @ test ---
[INFO] Surefire report directory: /Users/vbasavaraj/Projects/SqoopRefactoring/sqoop2/test/target/surefire-reports

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.sqoop.integration.connector.jdbc.generic.FromHDFSToRDBMSTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.586 sec
Running org.apache.sqoop.integration.connector.jdbc.generic.FromRDBMSToHDFSTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 155.637 sec
Running org.apache.sqoop.integration.connector.jdbc.generic.PartitionerTest
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.584 sec
Running org.apache.sqoop.integration.connector.jdbc.generic.TableStagedRDBMSTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.76 sec
Running org.apache.sqoop.integration.connector.kafka.FromHDFSToKafkaTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.001 sec
Running org.apache.sqoop.integration.connector.kafka.FromRDBMSToKafkaTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.324 sec
Running org.apache.sqoop.integration.repository.derby.upgrade.Derby1_99_4UpgradeTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.205 sec
Running org.apache.sqoop.integration.repository.derby.upgrade.DerbyRepositoryUpgradeTest$DerbySqoopMiniCluster
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.177 sec
Running org.apache.sqoop.integration.repository.derby.upgrade.DerbyRepositoryUpgradeTest
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec
Running org.apache.sqoop.integration.server.SubmissionWithDisabledModelObjectsTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 82.273 sec
Running org.apache.sqoop.integration.server.VersionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.801 sec

Results :

Tests run: 13, Failures: 0, Errors: 0, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Sqoop .............................................. SUCCESS [  0.671 s]
[INFO] Sqoop Common Test Libraries ........................ SUCCESS [  1.677 s]
[INFO] Sqoop Common ....................................... SUCCESS [ 30.536 s]
[INFO] Sqoop Connectors ................................... SUCCESS [  0.015 s]
[INFO] Sqoop Connector SDK ................................ SUCCESS [  5.755 s]
[INFO] Sqoop Core ......................................... SUCCESS [  6.063 s]
[INFO] Sqoop Repository ................................... SUCCESS [  0.010 s]
[INFO] Sqoop Common Repository ............................ SUCCESS [  1.706 s]
[INFO] Sqoop Derby Repository ............................. SUCCESS [01:07 min]
[INFO] Sqoop PostgreSQL Repository ........................ SUCCESS [  1.952 s]
[INFO] Sqoop Tools ........................................ SUCCESS [  0.650 s]
[INFO] Sqoop Security ..................................... SUCCESS [  0.751 s]
[INFO] Sqoop Execution Engines ............................ SUCCESS [  0.010 s]
[INFO] Sqoop Mapreduce Execution Engine ................... SUCCESS [ 29.499 s]
[INFO] Sqoop Submission Engines ........................... SUCCESS [  0.009 s]
[INFO] Sqoop Mapreduce Submission Engine .................. SUCCESS [  0.695 s]
[INFO] Sqoop Generic JDBC Connector ....................... SUCCESS [ 17.586 s]
[INFO] Sqoop HDFS Connector ............................... SUCCESS [ 11.108 s]
[INFO] Sqoop Kite Connector ............................... SUCCESS [  8.996 s]
[INFO] Sqoop Kafka Connector .............................. SUCCESS [  1.738 s]
[INFO] Sqoop Server ....................................... SUCCESS [  3.061 s]
[INFO] Sqoop Client ....................................... SUCCESS [  2.377 s]
[INFO] Sqoop Shell ........................................ SUCCESS [  3.598 s]
[INFO] Sqoop Documentation ................................ SUCCESS [ 24.670 s]
[INFO] Sqoop Tomcat additions ............................. SUCCESS [  0.529 s]
[INFO] Sqoop Distribution ................................. SUCCESS [  0.647 s]
[INFO] Sqoop Integration Tests ............................ SUCCESS [08:50 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 12:33 min
[INFO] Finished at: 2015-02-04T01:47:05-08:00
[INFO] Final Memory: 356M/1132M
[INFO] ------------------------------------------------------------------------
vbasavaraj:sqoop2 vbasavaraj$ mvn clean integration-test 

{code}
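Point 3 of the quoted description below only names the repository API at a high level. A minimal sketch of what that API could look like, together with a trivial in-memory stand-in; all class and method names here are illustrative assumptions, not taken from any of the attached patches:

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

enum MOutputType { STRING, MAP, BLOB }

class JobOutput {
  String key;
  Object value;
  MOutputType type;

  JobOutput(String key, Object value, MOutputType type) {
    this.key = key;
    this.value = value;
    this.type = type;
  }
}

// Hypothetical repository contract: read all outputs recorded for a
// submission, and allow a single value to be edited/overridden afterwards.
interface JobOutputRepository {
  List<JobOutput> findJobOutput(long submissionId);

  void updateJobOutput(long submissionId, String key, Object newValue);
}

// In-memory stand-in, only to illustrate the contract; a real
// implementation would go against the SQ_JOB_OUTPUT table from point 1.
class InMemoryJobOutputRepository implements JobOutputRepository {
  private final Map<Long, List<JobOutput>> store = new HashMap<>();

  void addJobOutput(long submissionId, JobOutput output) {
    store.computeIfAbsent(submissionId, k -> new ArrayList<>()).add(output);
  }

  @Override
  public List<JobOutput> findJobOutput(long submissionId) {
    return store.getOrDefault(submissionId, new ArrayList<JobOutput>());
  }

  @Override
  public void updateJobOutput(long submissionId, String key, Object newValue) {
    for (JobOutput output : findJobOutput(submissionId)) {
      if (output.key.equals(key)) {
        output.value = newValue;
      }
    }
  }
}
{code}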

> Repository Structure + API: Storing/Retrieving the From/To config inputs (editable/overrides)
> -----------------------------------------------------------------------------------------------
>
>                 Key: SQOOP-1804
>                 URL: https://issues.apache.org/jira/browse/SQOOP-1804
>             Project: Sqoop
>          Issue Type: Sub-task
>            Reporter: Veena Basavaraj
>            Assignee: Veena Basavaraj
>             Fix For: 1.99.5
>
>         Attachments: SQOOP-1804-v1.patch, SQOOP-1804-v4.patch, SQOOP-1804-v5-1.patch, SQOOP-1804-v5.patch, SQOOP-1804.patch
>
>
> Details of this proposal are in the wiki:
> https://cwiki.apache.org/confluence/display/SQOOP/Delta+Fetch+And+Merge+Design#DeltaFetchAndMergeDesign-Wheretostoretheoutputinsqoop?
> Update: The above highlights the pros and cons of each approach. #4 was chosen, since it is less intrusive, cleaner, and allows each value in the output to be updated/edited easily.
> Will use this ticket for a more detailed discussion of the storage options for the output from connectors.
> 1. 
> {code}
> // will have an FK to the submission table
> public static final String QUERY_CREATE_TABLE_SQ_JOB_OUTPUT_SUBMISSION =
>     "CREATE TABLE " + TABLE_SQ_JOB_OUTPUT + " ("
>     + COLUMN_SQ_JOB_OUT_ID + " BIGINT GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1), "
>     + COLUMN_SQ_JOB_OUT_KEY + " VARCHAR(32), "
>     + COLUMN_SQ_JOB_OUT_VALUE + " LONG VARCHAR, "
>     + COLUMN_SQ_JOB_OUT_TYPE + " VARCHAR(32), "
>     // FK to the direction table; distinguishes output from the FROM/TO part of the job
>     + COLUMN_SQD_ID + " VARCHAR(32), "
>     + COLUMN_SQRS_SUBMISSION + " BIGINT, "
>     + "CONSTRAINT " + CONSTRAINT_SQRS_SQS + " "
>     + "FOREIGN KEY (" + COLUMN_SQRS_SUBMISSION + ") "
>     + "REFERENCES " + TABLE_SQ_SUBMISSION + "(" + COLUMN_SQS_ID + ") ON DELETE CASCADE"
>     + ")";
> {code}
> 2.
> At the code level, we will define an MOutputType; one of the types can be BLOB as well, if a connector decides to store the value as a BLOB.
> {code}
> class JobOutput {
>   String key;
>   Object value;
>   MOutputType type;
> }
> {code}
> 3.
> At the repository API level, add a new method to get the job output for a particular submission id and to allow updates on the values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
