[ 
https://issues.apache.org/jira/browse/GOBBLIN-865?focusedWorklogId=313878&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313878
 ]

ASF GitHub Bot logged work on GOBBLIN-865:
------------------------------------------

                Author: ASF GitHub Bot
            Created on: 17/Sep/19 19:50
            Start Date: 17/Sep/19 19:50
    Worklog Time Spent: 10m 
      Work Description: arekusuri commented on pull request #2722: GOBBLIN-865: Add feature that enables PK-chunking in partition
URL: https://github.com/apache/incubator-gobblin/pull/2722#discussion_r325346830
 
 

 ##########
 File path: gobblin-salesforce/src/main/java/org/apache/gobblin/salesforce/SalesforceExtractor.java
 ##########
 @@ -979,39 +1118,33 @@ private void fetchResultBatchWithRetry(RecordSetList<JsonElement> rs)
   @Override
   public void closeConnection() throws Exception {
     if (this.bulkConnection != null
-        && !this.bulkConnection.getJobStatus(this.bulkJob.getId()).getState().toString().equals("Closed")) {
+        && !this.bulkConnection.getJobStatus(this.getBulkJobId()).getState().toString().equals("Closed")) {
       log.info("Closing salesforce bulk job connection");
-      this.bulkConnection.closeJob(this.bulkJob.getId());
+      this.bulkConnection.closeJob(this.getBulkJobId());
     }
   }
 
-  public static List<Command> constructGetCommand(String restQuery) {
+  private static List<Command> constructGetCommand(String restQuery) {
     return Arrays.asList(new RestApiCommand().build(Arrays.asList(restQuery), RestApiCommandType.GET));
   }
 
   /**
    * Waits for the PK batches to complete. The wait will stop after all batches are complete or on the first failed batch
    * @param batchInfoList list of batch info
-   * @param retryInterval the polling interval
+   * @param waitInterval the polling interval
    * @return the last {@link BatchInfo} processed
    * @throws InterruptedException
    * @throws AsyncApiException
    */
-  private BatchInfo waitForPkBatches(BatchInfoList batchInfoList, int retryInterval)
+  private BatchInfo waitForPkBatches(BatchInfoList batchInfoList, int waitInterval)
       throws InterruptedException, AsyncApiException {
     BatchInfo batchInfo = null;
     BatchInfo[] batchInfos = batchInfoList.getBatchInfo();
 
-    // Wait for all batches other than the first one. The first one is not processed in PK chunking mode
-    for (int i = 1; i < batchInfos.length; i++) {
-      BatchInfo bi = batchInfos[i];
-
-      // get refreshed job status
-      bi = this.bulkConnection.getBatchInfo(this.bulkJob.getId(), bi.getId());
-
-      while ((bi.getState() != BatchStateEnum.Completed)
-          && (bi.getState() != BatchStateEnum.Failed)) {
-        Thread.sleep(retryInterval * 1000);
+    for (BatchInfo bi: batchInfos) {
 
 Review comment:
   I had a ticket for this: https://jira01.corp.linkedin.com:8443/browse/DSS-22221
   PK chunking uses this function. Sometimes the parent batch is not the first element in the list (see the screenshot in the ticket).
   
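 The reviewer's point above is that the loop should not assume the parent batch sits at index 0. A minimal, library-free sketch of selecting batches by state instead of by position (the `BatchState` enum and `Batch` class below are hypothetical stand-ins for Salesforce's `BatchStateEnum` and `BatchInfo`, where the PK-chunking parent batch is reported as `NotProcessed`):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only (not the Gobblin implementation): in PK-chunking mode the
// parent batch is marked NOT_PROCESSED and may appear anywhere in the list,
// so batches to poll are chosen by state rather than by skipping index 0.
class PkBatchFilter {
  enum BatchState { QUEUED, IN_PROGRESS, COMPLETED, FAILED, NOT_PROCESSED }

  static final class Batch {
    private final String id;
    private final BatchState state;
    Batch(String id, BatchState state) { this.id = id; this.state = state; }
    String id() { return id; }
    BatchState state() { return state; }
  }

  // Return only the batches that still need to be waited on, skipping the
  // parent (NOT_PROCESSED) batch wherever it sits in the list.
  static List<Batch> batchesToWaitOn(List<Batch> all) {
    List<Batch> toPoll = new ArrayList<>();
    for (Batch b : all) {
      if (b.state() != BatchState.NOT_PROCESSED) {
        toPoll.add(b);
      }
    }
    return toPoll;
  }
}
```

 Filtering by state is robust to the ordering the API returns, which is exactly the failure mode described in the ticket.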
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 313878)
    Time Spent: 5h 40m  (was: 5.5h)

> Add feature that enables PK-chunking in partition 
> --------------------------------------------------
>
>                 Key: GOBBLIN-865
>                 URL: https://issues.apache.org/jira/browse/GOBBLIN-865
>             Project: Apache Gobblin
>          Issue Type: Task
>            Reporter: Alex Li
>            Priority: Major
>              Labels: salesforce
>          Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> In SFDC(salesforce) connector, we have partitioning mechanisms to split a 
> giant query to multiple sub queries. There are 3 mechanisms:
>  * simple partition (equally split by time)
>  * dynamic pre-partition (generate histogram and split by row numbers)
>  * user specified partition (set up time range in job file)
> However, tables such as Task and Contract fail from time to time to fetch the full data.
> We may want to utilize PK-chunking to partition the query.
>  
> The pk-chunking doc from SFDC - 
> [https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/async_api_headers_enable_pk_chunking.htm]
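For reference, PK chunking is enabled per job via a request header when the bulk job is created. A sketch of the raw HTTP request (the API version, instance host, and chunkSize value are illustrative; 100,000 is the documented default chunk size):

```
POST /services/async/47.0/job HTTP/1.1
Host: <instance>.salesforce.com
X-SFDC-Session: <sessionId>
Content-Type: application/xml; charset=UTF-8
Sforce-Enable-PKChunking: chunkSize=100000
```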



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
