[ 
https://issues.apache.org/jira/browse/GOBBLIN-865?focusedWorklogId=312446&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312446
 ]

ASF GitHub Bot logged work on GOBBLIN-865:
------------------------------------------

                Author: ASF GitHub Bot
            Created on: 13/Sep/19 23:45
            Start Date: 13/Sep/19 23:45
    Worklog Time Spent: 10m 
      Work Description: arekusuri commented on pull request #2722: GOBBLIN-865: Add feature that enables PK-chunking in partition
URL: https://github.com/apache/incubator-gobblin/pull/2722#discussion_r324393059
 
 

 ##########
 File path: gobblin-salesforce/src/main/java/org/apache/gobblin/salesforce/SalesforceSource.java
 ##########
 @@ -146,12 +156,101 @@ protected void addLineageSourceInfo(SourceState sourceState, SourceEntity entity
 
   @Override
   protected List<WorkUnit> generateWorkUnits(SourceEntity sourceEntity, SourceState state, long previousWatermark) {
+    String partitionType = state.getProp(PARTITION_TYPE, "");
+    if (partitionType.equals("PK_CHUNKING")) {
+      // PK chunking only supports a start time (source.querybased.start.value) and does not support an end time.
+      // It always ingests data later than or equal to source.querybased.start.value.
+      // PK-chunking based work units should only be generated for snapshot/full ingestion.
+      return generateWorkUnitsPkChunking(sourceEntity, state, previousWatermark);
+    } else {
+      return generateWorkUnitsStrategy(sourceEntity, state, previousWatermark);
+    }
+  }
+
+  /**
+   * Generate work units with noQuery=true.
+   */
+  private List<WorkUnit> generateWorkUnitsPkChunking(SourceEntity sourceEntity, SourceState state, long previousWatermark) {
+    SalesforceBulkJobId salesforceBulkJobId = executeQueryWithPkChunking(state, previousWatermark);
+    List<WorkUnit> ret = createWorkUnits(sourceEntity, state, salesforceBulkJobId);
+    return ret;
+  }
+
+  private SalesforceBulkJobId executeQueryWithPkChunking(
+      SourceState sourceState,
+      long previousWatermark
+  ) throws RuntimeException {
+    State state = new State(sourceState);
+    WorkUnit workUnit = WorkUnit.createEmpty();
+    try {
+      WorkUnitState workUnitState = new WorkUnitState(workUnit, state);
+      workUnitState.setId("Execute pk-chunking");
 
 Review comment:
   Hi @zxcware 
   Is this OK? I am trying to set the id for the workUnit.
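
 For reviewers skimming the hunk above: a minimal, self-contained sketch (not part of the patch) of how a job's source state could select the PK-chunking path. The property key "salesforce.partition.type" used here is a placeholder, since the actual string behind the PARTITION_TYPE constant is not visible in this diff.

 import org.apache.gobblin.configuration.SourceState;

 public class PkChunkingDispatchSketch {
   // Placeholder key; the real value of SalesforceSource.PARTITION_TYPE is not shown in the hunk above.
   private static final String PARTITION_TYPE = "salesforce.partition.type";

   public static void main(String[] args) {
     SourceState state = new SourceState();
     // Any value other than "PK_CHUNKING" falls back to the existing work-unit generation strategy.
     state.setProp(PARTITION_TYPE, "PK_CHUNKING");

     String partitionType = state.getProp(PARTITION_TYPE, "");
     System.out.println("PK-chunking enabled: " + partitionType.equals("PK_CHUNKING"));
   }
 }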
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 312446)
    Time Spent: 4h 50m  (was: 4h 40m)

> Add feature that enables PK-chunking in partition 
> --------------------------------------------------
>
>                 Key: GOBBLIN-865
>                 URL: https://issues.apache.org/jira/browse/GOBBLIN-865
>             Project: Apache Gobblin
>          Issue Type: Task
>            Reporter: Alex Li
>            Priority: Major
>              Labels: salesforce
>          Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> In the SFDC (Salesforce) connector, we have partitioning mechanisms to split a giant query into multiple sub-queries. There are 3 mechanisms:
>  * simple partition (equally split by time)
>  * dynamic pre-partition (generate histogram and split by row numbers)
>  * user specified partition (set up time range in job file)
> However, tables like Task and Contract fail from time to time to fetch the full data.
> We may want to utilize PK-chunking to partition the query.
>  
> The pk-chunking doc from SFDC - 
> [https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/async_api_headers_enable_pk_chunking.htm]
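
Not part of the JIRA description, but for readers unfamiliar with the linked doc: a hedged sketch of what enabling PK chunking looks like against the Salesforce Bulk API (1.0), assuming a placeholder instance URL, session id, and API version. The Sforce-Enable-PKChunking header asks Salesforce to split the query into batches of contiguous primary-key ranges, which the new partition type can then turn into work units.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PkChunkingJobSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder instance URL, session id, and API version.
    String instanceUrl = "https://yourInstance.salesforce.com";
    String sessionId = "<SESSION_ID>";

    // Create a Bulk API (1.0) job for a query operation.
    URL url = new URL(instanceUrl + "/services/async/47.0/job");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("X-SFDC-Session", sessionId);
    conn.setRequestProperty("Content-Type", "application/xml");
    // The header from the linked doc: split the query into chunks of
    // primary-key ranges, here 100,000 records per chunk.
    conn.setRequestProperty("Sforce-Enable-PKChunking", "chunkSize=100000");
    conn.setDoOutput(true);

    String jobXml =
        "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
      + "<jobInfo xmlns=\"http://www.force.com/2009/06/asyncapi/dataload\">"
      + "<operation>query</operation>"
      + "<object>Contract</object>"
      + "<contentType>CSV</contentType>"
      + "</jobInfo>";
    try (OutputStream out = conn.getOutputStream()) {
      out.write(jobXml.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("Create-job HTTP status: " + conn.getResponseCode());
  }
}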



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
